NEW YORK (Legal Newsline) - A federal judge blocked enforcement of a New York law requiring social media platforms to create a mechanism for receiving complaints about “hateful conduct” online, saying it could have a “profound chilling effect” on speech that wasn’t justified by a compelling government interest.
New York’s Hateful Conduct Law took effect on Dec. 3, 2022, roughly six months after a racist mass shooting in Buffalo that the killer livestreamed on Twitch, a social media platform. The law came after New York Gov. Kathy Hochul directed Attorney General Letitia James to investigate the role of social media platforms in broadcasting and amplifying the crimes; James’ office then issued a report stating that “online platforms should be held accountable for allowing hateful and dangerous content to spread.”
The Hateful Conduct Law required social media platforms to create a mechanism for users to file complaints about “hateful conduct” and to disclose their policy for dealing with such complaints. Two days before the law took effect, UCLA Law School Professor Eugene Volokh and Rumble Canada, a YouTube-like site that says it is devoted to free speech, sued to block its enforcement, arguing it compelled them to speak on an issue about which they would otherwise remain silent.
In a Feb. 14 ruling, U.S. District Judge Andrew Carter largely agreed. He issued a preliminary injunction blocking enforcement of the law, saying the plaintiffs had shown a substantial likelihood of success in striking it down.
“Although preventing and reducing the instances of hate-fueled mass shootings is certainly a compelling governmental interest, the law is not narrowly tailored toward that end,” the judge wrote. Speech that incites violence is not protected by the First Amendment and may be banned, he noted, but “this law goes far beyond that.”
New York argued the law regulated conduct, not speech, because it doesn’t say how platforms must respond to complaints. The state likened it to a law requiring fast-food restaurants to set up a mechanism for employees to donate part of their paychecks to a nonprofit of their choice, which survived court review.
Judge Carter rejected the comparison, saying the Hateful Conduct Law requires platforms to create a policy explaining how they will deal with complaints about “hateful conduct,” which is itself compelled speech. And that policy must address conduct as the state defines it: conduct that tends to “vilify, humiliate, or incite violence” against a group “on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.”
“A social media network that devises its own definition of ‘hateful conduct’ would risk being in violation of the law and thus subject to its enforcement provision,” the judge wrote.
Rumble argued it already removes content that is illegal, pornographic, “grossly offensive” or incites terrorism. To comply with the New York law, the judge wrote, Rumble would have to issue a policy that includes New York’s definition of hateful conduct. The judge said the law’s definitions were broad and hard for platforms to interpret: “Could a post using the hashtag ‘BlackLivesMatter’ or ‘BlueLivesMatter’ be considered ‘hateful conduct’ under the law?” he asked.
“The Hateful Conduct Law places Plaintiffs in the incongruous position of stating that they promote an explicit ‘pro-free speech’ ethos, but also requires them to enact a policy allowing users to complain about ‘hateful conduct’ as defined by the state,” he concluded.
The judge also rejected the state’s argument that it was merely regulating commercial speech, which receives a lower level of First Amendment protection. The law goes beyond the “purely factual and uncontroversial” speech that the government can compel, the judge wrote.
“The actual title of the law — ‘Social media networks; hateful conduct prohibited’ … strongly suggests that the law is really aimed at reducing, or perhaps even penalizing people who engage in, hate speech online,” the judge wrote.
The judge rejected the plaintiffs’ argument that the law is preempted by Section 230, the federal law protecting social media platforms against lawsuits over content posted by third parties, because the New York law doesn’t impose liability for anything other than failing to have a complaint mechanism.