The U.S. Supreme Court is set to deliver a decision in a pair of cases that could help define the government's role in regulating social media companies and the scope of those companies' free speech rights. The cases of NetChoice v. Paxton and Moody v. NetChoice could shape how speech is governed on the internet and on social media for years to come.
Nancy Costello is a clinical professor of law at MSU's College of Law, where she serves as the director of the First Amendment Clinic and the McLellan Online Free Speech Library. She provides an overview of the cases and what they could mean for speech and social media.
What are the facts of the cases?
NetChoice v. Paxton and Moody v. NetChoice involve challenges to laws in Florida and Texas that would regulate how and when social media companies can moderate their content and require companies to be transparent about these practices. NetChoice is a coalition of social media companies and internet platforms that challenged the laws, claiming they violate tech companies' free speech rights. The laws were largely a reaction to X (formerly Twitter), Facebook and YouTube suspending Donald Trump and removing content posted by conservative voices following the Jan. 6 attack on the U.S. Capitol.
How will the Supreme Court evaluate the cases?
The task of the court is difficult. It must sort out which practices by social media companies are expressive conduct protected by the First Amendment and which are nonexpressive conduct that can be regulated by the states. For instance, is there a difference between removing or downranking a person's post on social media based on its viewpoint versus delivering content using habit-forming design practices to maximize a user's time spent on the platform? The social media companies argue that both of these practices (and all others) are exercises of editorial discretion that are protected by the First Amendment, similar to the free speech rights of newspapers and parade organizers. But the states contend that given the sweeping breadth, size and variety of social media platforms, they are more like utilities and telegraph companies — common carriers — and subject to greater government regulation.
How can we view the relationship between social media and speech?
I believe there is a big difference between these two practices, and any decision to impose regulation requires a nuanced approach. A decision to remove or flag a comment on a user's social media feed based on its viewpoint is the social media platform's (a private company's) decision about what speech will be posted on the internet, and thus protected by the First Amendment. In contrast, using algorithms to relentlessly push customized content to a user without their consent, in order to extend that user's time on a platform and grow advertising revenues, is a product design business practice and not speech.
Harmful product designs can be regulated by the government to protect the public. Cases in point: cigarettes and asbestos materials. Social media companies should not be immune from regulation or lawsuits just because they are in the business of distributing speech. Free expression is different from an algorithmic product design specifically used to entice users and drive up profits — especially when such a practice can be harmful to the public, most notably children.
How might the court rule and what could be the implications?
Complicating things further in the NetChoice cases is the strategy of the social media companies to sue before the laws even took effect, making it almost impossible to assess exactly what kind of harm the laws would cause. It means the Supreme Court must decide based on speculation about harm rather than specific factual findings. Given these limitations, it is likely the court will send the cases back to the lower courts with instructions on how to proceed. But the Supreme Court should give precise guidance to the lower courts about the scope of social media activities that are protected under the First Amendment and what can be subject to government regulation.