
AG Platkin Sues Messaging App “Discord” for Unlawful Practices That Expose NJ Kids to Child Predators and Violent, Sexual Content

LEGAL NEWSLINE

Friday, April 25, 2025


Attorney General Matthew Platkin | Official website

Attorney General Matthew J. Platkin and the Division of Consumer Affairs announced a lawsuit today against messaging application provider Discord, Inc. (“Discord”) for deceptive and unconscionable business practices that misled parents about the efficacy of its safety controls and obscured the risks children faced when using the application. The multiyear investigation by the New Jersey Office of the Attorney General and Division of Consumer Affairs revealed Discord’s conduct violated New Jersey’s consumer protection laws and exposed New Jersey children to sexual and violent content, leaving them vulnerable to online predators lurking on the Discord app.

The complaint, filed partially under seal today in the Superior Court of New Jersey, Chancery Division, Essex County, alleges Discord engaged in multiple violations of the New Jersey Consumer Fraud Act (“CFA”). Discord knew its safety features and policies could not and did not protect its youthful user base, but refused to do better, the complaint alleges. In particular, Discord misled parents and kids about its safety settings for direct messages (“DMs”).

“Discord markets itself as a safe space for children, despite being fully aware that the application’s misleading safety settings and lax oversight have made it a prime hunting ground for online predators seeking easy access to children,” said Attorney General Platkin. “These deceptive claims regarding its safety settings have allowed Discord to attract a growing number of children to use its application, where they are at risk. We intend to put a stop to this unlawful conduct and hold Discord accountable for the harm it has caused our children.”

Discord, based in San Francisco, owns and manages an application that allows users to communicate through text, audio, and video. Since its inception in May 2015, the app has become one of the most popular online social platforms in the world, especially among children, who make up a significant portion of Discord’s massive user base.

According to the filed complaint, the investigation revealed that, for years, Discord has represented its app as safe—relying in part on its policies barring underage use of the app and the circulation of explicit material, including child sexual abuse content. Most importantly, Discord has touted its Safe Direct Messaging feature and its successors, which it claimed would automatically scan and delete private direct messages containing explicit media content. But Discord’s promises fell, and continue to fall, flat, the complaint alleges.

News accounts and reports from prosecutors’ offices illustrate that despite the app’s promises of child safety, predators use the app to stalk, contact, and victimize children. These sources identify alarming cases in which adults were charged with, and convicted of, using Discord to contact children, often while posing as children themselves, and transmitting and soliciting explicit images through the app, including through sextortion. In many criminal cases involving sexual exploitation of children on Discord, the children were under the age of 13, despite Discord’s claim to enforce its policy prohibiting children under 13 from using the app.

“Discord claims that safety is at the core of everything it does, but the truth is, the application is not safe for children. Discord’s deliberate misrepresentation of the application’s safety settings has harmed—and continues to harm—New Jersey’s children, and must stop,” said Cari Fais, Director of the Division of Consumer Affairs. “By filing this lawsuit, we’re sending a clear message that New Jersey will not allow businesses to grow their customer base through unlawful and deceptive practices, especially when those practices put children at grave risk.”

Highlights of the complaint’s allegations include, but are not limited to:

Discord’s Platform is Structured to Encourage Unchecked and Unmoderated Engagement Among Its Users 

Discord designed its app to appeal to children’s desire for personalization and play by offering custom emojis, stickers, and soundboard effects, all of which are intended to make chats more engaging and kid-friendly. And it has created or facilitated “student hubs” as well as communities focused on popular kids’ games, like Roblox.

Once engaged, Discord encourages and facilitates free interaction and engagement between its users. Specifically, Discord’s default settings allow users to receive friend requests from anyone on the app—and to receive private direct messages from friends and anyone using the same server or virtual “community”—enabling child users to connect easily and become “friends” with hundreds of other users. Then, because Discord’s default safety settings disable message scanning between “friends,” child users can be, and are, inundated with explicit content. This explicit content can include user-created child sexual abuse material, messages intended to sexually exploit children or coerce them into self-harm, internet links to sexually explicit content, images and videos depicting violence, and sexually explicit videos. In short, the app’s design makes it easy for children to connect with other users, but it also allows predators to lurk and target them, undeterred by the safety features Discord touts as reasons that parents and users should trust its app.

Discord Misled Users About its “Safe Direct Messaging” Feature

From March 28, 2017 until April 22, 2023, Discord included “Safe Direct Messaging” settings in the “Privacy & Safety” menu of Discord’s “User Settings.” The settings purported to control whether direct messages from other users would be scanned, and explicit content deleted, before receipt by the intended user. The Safe Direct Messaging setting contained three options:

  • Keep me safe. Scan direct messages from everyone.
  • My friends are nice. Scan direct messages from everyone unless they are a friend.
  • Do not scan. Direct messages will not be scanned for explicit content.

For most of the feature’s existence, Discord made the “My friends are nice” option the default setting for every new user on the app. This option scanned incoming direct messages only if the sender was not on the user’s friends list. For both the “Keep me safe” and “My friends are nice” settings, Discord represented that it would “[a]utomatically scan and delete direct messages you receive that contain explicit media content.” But this was not true. Despite its claims, Discord knew that not all explicit content was being detected or deleted.
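
In practice, the three options reduce to a simple decision rule. The short Python sketch below is purely illustrative: the names (SafeDMSetting, should_scan) are assumptions, not Discord’s actual code. It shows how, under the default setting, a message from anyone on a child’s friends list bypassed scanning entirely.

    # Hypothetical sketch of the Safe Direct Messaging options described
    # above; names and structure are illustrative, not Discord's code.
    from enum import Enum

    class SafeDMSetting(Enum):
        KEEP_ME_SAFE = "Keep me safe"             # scan DMs from everyone
        FRIENDS_ARE_NICE = "My friends are nice"  # scan DMs from non-friends only
        DO_NOT_SCAN = "Do not scan"               # never scan

    def should_scan(setting: SafeDMSetting, sender_is_friend: bool) -> bool:
        """Return True if an incoming direct message would be scanned."""
        if setting is SafeDMSetting.KEEP_ME_SAFE:
            return True
        if setting is SafeDMSetting.FRIENDS_ARE_NICE:
            return not sender_is_friend  # messages from "friends" skip scanning
        return False  # DO_NOT_SCAN

    # Per the complaint, "My friends are nice" was the default, so a DM
    # from anyone on a child's friends list was never scanned.
    assert should_scan(SafeDMSetting.FRIENDS_ARE_NICE, sender_is_friend=True) is False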

Discord’s Design Decisions Exacerbated the Risk to Children on the App

Discord’s other design choices combined with its deception about the Safe Direct Messaging feature to virtually ensure that children were harmed or placed at risk of harm on its app. For example (a brief sketch of these defaults follows the list):

  • By default, Discord allows users to exchange DMs if they belong to a common server. A malicious user—adult or child—need only join a community server, which could contain over a million users, to exchange DMs with an unsuspecting child user.

  • DMs among “friends” are even more dangerous. Discord’s default settings not only allow any user to send a friend request to a child; they also permit those users, once “friends,” to exchange totally unscanned DMs through the default “My friends are nice” setting. Children can receive and accept friend requests from users whom they do not know and with whom they have no connection, and then engage privately on the platform without any oversight—all by design.

  • Users may also create multiple accounts to hide their activities and circumvent being banned from servers or facing other repercussions. And even if users are banned from a server, or from Discord itself, Discord’s design allows them simply to re-engage using a brand-new, easily created account.
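
Taken together, these defaults amount to highly permissive reachability rules. The following Python sketch is an illustrative assumption based on the complaint’s description, not Discord’s actual code:

    # Hypothetical sketch of the default reachability rules alleged in
    # the complaint; names and logic are illustrative assumptions.

    def can_send_friend_request(sender_id: str, recipient_id: str) -> bool:
        """Alleged default: anyone on the app may send a friend request."""
        return True

    def can_send_dm(shares_a_server: bool, are_friends: bool) -> bool:
        """Alleged default: a DM is allowed whenever the sender shares any
        server with the recipient or is on the recipient's friends list."""
        return shares_a_server or are_friends

    # Combined with the default "My friends are nice" setting above,
    # becoming "friends" makes DMs both allowed and entirely unscanned.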
Discord Misrepresented That Users Under the Age of 13 Are Not Permitted to Create Accounts and Are Banned from Discord Upon Discovery

At all relevant times, Discord’s Terms of Service have stated that users must be “at least 13 years old and meet the minimum age required by the laws in [the users’] country.” To this day, however, Discord only requires individuals to enter their date of birth to establish their age when creating an account—nothing more. Discord does not require users to verify their age or identity in any other way. Simple verification measures could have prevented predators from creating false accounts and kept children under 13 off the app more effectively.
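
As alleged, the age gate amounts to a self-attested check like the hypothetical sketch below (illustrative Python, not Discord’s actual signup code): a child can pass it simply by typing an earlier birth date.

    # Hypothetical sketch of a date-of-birth-only age gate, as alleged;
    # the name meets_minimum_age is an illustrative assumption.
    from datetime import date

    def meets_minimum_age(claimed_birth_date: date, minimum_age: int = 13) -> bool:
        """Accepts whatever birth date the user enters; nothing is verified."""
        today = date.today()
        had_birthday = (today.month, today.day) >= (claimed_birth_date.month,
                                                    claimed_birth_date.day)
        age = today.year - claimed_birth_date.year - (0 if had_birthday else 1)
        return age >= minimum_age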

Nevertheless, Discord actively chose not to bolster its age verification process for years and has allowed children under the age of 13 to operate freely on the app, despite their vulnerability to sexual predators.

Simply put, Discord has promised parents safety while simultaneously making deliberate choices about its app’s design and default settings, including Safe Direct Messaging and age verification systems, that broke those promises. As a result of Discord’s decisions, thousands of users were misled into signing up, believing they or their children would be safe, when they were really anything but.

The lawsuit seeks a number of remedies, including an injunction to stop Discord from violating the CFA, civil penalties, and the disgorgement of any profits generated in New Jersey through this unlawful behavior.

Today’s complaint is the latest action taken by the Office of the Attorney General to keep children safe online. Last fall, the office sued media giant TikTok for unlawful conduct tied to features that keep children and teens online for ever-increasing amounts of time despite the harms that result. A year earlier it sued Meta Platforms, the owner of Instagram and Facebook, for similar unlawful conduct. Both the Meta and TikTok complaints arose from the same national investigation, which was co-led by New Jersey. Additionally, in recent years, the New Jersey Division of Criminal Justice has prosecuted numerous cases in which defendants allegedly used social media platforms and chat apps—including Discord—to prey on children and engage them in sexually explicit conversations as a means of obtaining child sexual abuse material.

Deputy Attorneys General Mandy Wang, Ethan Rubin, and Kathleen Riley, under the supervision of Data Privacy & Cybersecurity Section Chief Kashif T. Chand and Assistant Section Chief Thomas Huynh, and Deputy Director Sara M. Gregory, within the Division of Law’s Affirmative Civil Enforcement Practice Group, are representing the State in the matter. Investigator Aziza Salikhova of the Office of Consumer Protection within the Division of Consumer Affairs conducted the investigation.

Original source can be found here.
