SAN FRANCISCO (Legal Newsline) - Plaintiffs who warned of the end of humanity failed to make their case that their personal information was stolen to teach the artificial intelligence program ChatGPT.
That's the argument of ChatGPT's maker, OpenAI, in a Dec. 4 motion to dismiss in California federal court. The company and Microsoft face a proposed class action alleging the illegal use of private information.
"Critically, Plaintiffs fail to identify any of their specific personal information that was misappropriated," the motion says.
"And while Plaintiffs contest OpenAI's purported use of data allegedly acquired through ChatGPT, they acknowledge that OpenAI's Terms of Use disclose its data use practices and allow users options to opt out of training when using ChatGPT (which they do not allege that they did)."
The plaintiffs, who are ChatGPT users, allege in their class action that the defendants unlawfully develop, market and operate their AI products, including ChatGPT 3.5, ChatGPT 4.0, DALL-E and VALL-E, because the products use private information stolen from hundreds of millions of internet users, including children, without their informed consent or knowledge.
The plaintiffs allege the defendants "rushed the products to market" without implementing proper safeguards or controls and that the AI products have "demonstrated their ability to harm humans in real ways." They further allege the defendants collect, store, track, share and disclose account information users enter when signing up, including payment information, login credentials, social media information, Spotify preferences and chat log data.
The plaintiffs claim the defendants have enough information to "create our digital clones," including voice and likeness, and to manipulate how users interact with technology.
OpenAI says much of the 117-page complaint is untethered to any legal theory. In the fourth paragraph, plaintiff lawyers at Morgan & Morgan wrote "Defendants' disregard for privacy laws is matched only by their disregard for the potentially catastrophic risk to humanity."
"The Complaint - which spans 117 pages and includes 230 footnotes, most referencing third-party commentary on AI - fails to meet Rule 8's pleading requirements," OpenAI's first argument begins.
"It lacks even the most basic information about OpenAI's supposed privacy violations."