Audio: Tech Platforms and the 1st Amendment: Impacts of Landmark Supreme Court Rulings

Congressional Internet Caucus Academy

Congress Hears Tech Policy Debates


You're listening to podcasts from the Congressional Internet Caucus Academy.

Welcome to today's event.

My name is Rhett Stiles. I'm a legislative assistant in the office of Congressman McCaul, co-chair of the Congressional Internet Caucus.

I want to welcome you to today's luncheon event, called Tech Platforms and the First Amendment: Impacts of Landmark Supreme Court Rulings.

I want to note that this event is hosted by the Congressional Internet Caucus Academy in conjunction with the Congressional Internet Caucus and its co-chairs.

On the House side, the co-chairs of the caucus are Congressman McCaul and Congresswoman Anna Eshoo.

On the Senate side, it is Senator John Thune.

We have been doing these luncheon briefings regularly since 1996, when the Internet Caucus was founded, with a brief pause during the pandemic.

Our next briefing after this one will be on Friday, September 20th, in this room.

So keep your eye out for an announcement.

Today, we have a panel of experts who are on the front lines of freedom of expression issues with deep knowledge of these cases and what they mean for Congress.

Our moderator today is Nadine Farid Johnson, who is the policy director of the Knight First Amendment Institute at Columbia University.

Nadine, let me hand it over to you.

Thanks.

Thank you so much.

Thank you all for being here and making it through the rain to join us.

I really appreciate it and happy to see everyone on live stream as well.

So I'm going to do a quick introduction of our panelists.

There is so much to discuss here.

So I really want us to dive in as quickly as possible.

But I will start all the way down at the end with Steve DelBianco, who is the president and CEO of NetChoice.

Steve is an expert on Internet governance, online consumer protection and Internet taxation.

He has testified several times before Congress and state legislatures.

As president and CEO, Steve works with NetChoice members to set and execute the NetChoice agenda.

Next to Steve is Yael Eisenstat, who is a senior fellow at Cybersecurity for Democracy.

Yael is a tech and democracy activist and the former vice president of the Anti-Defamation League's Center for Technology and Society. At Cybersecurity for Democracy, she is working on policy solutions for how social media, AI-powered algorithms, and generative AI affect political discourse, polarization, and democracy.

Olivier Sylvain is a professor of law at Fordham University School of Law and a senior policy research fellow at the Knight Institute.

He was a senior adviser to the chair of the Federal Trade Commission from 2021 to 2023.

A wide ranging expert on these issues, his research is in information and communications law and policy.

And next to me we have Vera Eidelman, who is a staff attorney with the ACLU's Speech, Privacy, and Technology Project, where she works on the rights to free speech and privacy in the digital age.

A First Amendment expert, she focuses on the free speech rights of protesters and young people, online speech and genetic privacy.

So, as I mentioned, we have just under an hour here, so I'd like to just really dive in because I think you all know why we are here, and Vera, I'll ask you to kick us off.

If you can lay the foundation for us here, the First Amendment foundation, and tell us what the cases say and don't say with respect to First Amendment protections, please.

Thank you, Nadine.

Nope.

Is it on?

Okay, thank you very much.

I'm an expert on tech, right?

Thank you.

So, thank you all for joining, and thank you, Nadine, for the introduction.

I thought I would just sort of table set for us by starting with what is the First Amendment, what does it protect, and who does it protect against, and then I promise I'll discuss the net choice cases.

So, if any of you have heard my colleague, Emerson Sykes, speak on this, this will sound very familiar because I'm basically stealing from him, but with attribution, because I think he does a really good job on this.

So, just so you know.

So, just to start with the First Amendment, what does the text say?

Roughly, it starts with Congress, the government, shall make no law respecting, first, freedom of religion, so essentially freedom of thought, what's happening inside your head.

Second, freedom of speech, your ability to say that out loud, express it.

Third, freedom of the press, to publish it, to publicize it, to put it in writing and make it available to others.

Fourth, the right to assemble, so to join with others.

In public, who share your views, gather with other people.

And fifth, to petition the government for a redress of grievances.

So, we've moved from the ability to think to the ability to act and really make change.

All of those things are protected under the First Amendment, and they're protected against government intervention, whether by Congress or by other government actors.

So, starting with that, let's look over at the net choice cases.

I think it's actually a helpful rubric, because what the case basically does when talking about content moderation specifically is ask, what is protected by the First Amendment here?

And it gives us a pretty clear answer.

It says that curation and editorial discretion, including when exercised by social media platforms, is protected by the First Amendment.

Some of the verbs that the majority opinion uses include deciding what to include and exclude online.

Deciding how to organize it, how to prioritize it, how to filter it, how to label it, how to display it.

Selecting content, ordering content, presenting content.

So, publication, choice of what you're publishing, in what order, how you're prioritizing, what you say about it, etc.

The majority makes clear that the First Amendment applies to all of these activities, whether they're happening offline,

as it's held for everything from newspaper publishers to utility bill providers to parade organizers,

or online, as it holds very clearly in this case with respect to social media.

Whether it's using old or new technology, whether the editorial discretion is being exercised to curate a lot or just a little,

and whether or not that editorial discretion is giving a clear, articulable message,

like, we're a place where you can say X but not Y.

So, the protection is quite expansive, and the court goes on to say,

whatever we think of how social media platforms are exercising those protected rights,

however they're exercising their discretion, and we're not saying they're doing it in a good way,

the government standing in for them and imposing its views of what the editorial preferences ought to be would be even worse.

The court says the First Amendment exists to ensure that the public has exposure to a wide variety of views,

and the way it does that is by limiting the government, not private actors.

It also makes clear that any regulation that is, and here I'm quoting from the opinion, quote,

related to the suppression of free expression is very likely to fail,

because that goal is never valid under the First Amendment.

Those, for me, are the main takeaways.

That's a great table setting. Thank you so much.

And now I'd actually like to turn to Steve, because as the plaintiff in this case,

I know we are particularly interested in your take about what ended up happening.

Thanks, Nadine and Vera.

Again, I'm Steve DelBianco, president of NetChoice,

which is one of the two plaintiffs;

the other was CCIA.

We're so grateful that they were a co-plaintiff.

It has been a wild, strange journey from Tallahassee to Austin

to the Supreme Court over the last three and a half years,

and the journey isn't over, right?

We're back in the courts again in about 30 days.

But that journey starts from the concept of why do NetChoice members even bother to moderate content?

You probably understand they try to cultivate and curate a community.

They want to reduce spam.

They want to limit fraud that scares off users.

They want to reduce the sharing of awful but lawful content,

nudity, pornography, hate speech, and violence.

That's all legal in the United States.

But it turns off the user community.

And let's not forget who pays for everything.

The advertisers.

The advertisers are particularly sensitive about having their ads appear next to objectionable content.

And they're paying the piper.

They get to call the tune.

So the trick with content moderation is, how do you get the balance right?

Half of Americans think that social media moderates too much content from users.

The other half of Americans think they don't moderate enough.

It is an impossible squeeze play.

And it was already working itself into a lather.

It came to a boil on January the 6th.

After President Trump was kicked off of Twitter, Facebook, and YouTube,

the governors of both Texas and Florida raced to pass quick laws

prohibiting Facebook, Twitter, and YouTube

from moderating content based on the viewpoints expressed in it.

So I testified in Texas and Florida against the legislation

and asked two questions of the committees that I think will help you all understand this.

I said, Mr. Chairman, could you force a newspaper to carry your op-eds and letters to the editor

if they didn't want to?

The answer was, well, obviously not.

The First Amendment prohibits that.

I said, could you force every newspaper to explain why they didn't take your letters to the editor?

Or give you an appeal process?

The answer was, well, of course not, the First Amendment.

So what's the follow-up?

Well, then what makes you think you can do it to social media?

Well, there's no difference.

And as Vera has explained, that's what the court upheld.

So Florida and Texas had to go big on different theories

by saying that it's not speech, really.

Or saying that you're a common carrier.

Or that it's a public square, even though it's privately owned.

But the federal courts in Florida and Texas didn't buy any of that

and very quickly gave us injunctions back in 2021 blocking the laws.

Florida appealed to the 11th Circuit, which upheld the lower court and agreed with NetChoice.

Texas appealed to the 5th Circuit, which canceled the injunction,

saying that content moderation wasn't really speech at all.

And even if it were, Texas said, we can regulate anyway

because we want to see a diversity of ideas.

Well, that brings us up to last week, last Monday,

when the Supreme Court rejected the Florida and Texas efforts to fend off the First Amendment.

As for quotations, Vera gave you a couple.

Another one was, on the spectrum of dangers to free expression,

there are few greater than allowing the government to change the speech of private actors

in order to achieve its own conception of speech nirvana.

And this principle does not change

because the curated compilation has gone from the physical to the virtual world.

Fortunately, all of that has been clarified.

The Court sent our cases back to the lower courts for factual development on the scope of the laws.

How broadly do they apply?

Do they apply to things like Venmo and Uber, Airbnb?

And they want more analysis that follows what the opinion laid out.

What the opinion laid out, as Vera explained, is that these are the rules of our First Amendment: it's protected speech.

So the discussion here today is about what are the other avenues of things that can happen,

particularly for a Hill audience.

But we at NetChoice have focused heavily on the idea that the avenue of telling social media

how to moderate, that avenue is a dead end right now.

And I believe that a major avenue, though, that is open

is that social media sites can actually compete about how they moderate.

They can compete for users and compete for advertisers.

And that innovation is allowed to go on.

That experimentation goes on without interference from government now.

We've watched the experiment over Twitter.

Twitter went to X, changed its content moderation policy.

Those changes have consequences.

They've played out in front of all of us and the advertisers are not so happy about it.

We have new social media sites like Truth Social, which runs its own heavily moderated site.

Reddit does very little content moderation.

So each of these are different models.

There are very few barriers to entry for new social media sites.

And I think that competition will be there.

So I believe that NetChoice cases are a real win for those who value free expression,

offline and online.

And who oppose government control,

whichever party is controlling the levers of power.

Great. Steve, thank you so much.

Olivier, I'd like to turn to you, because one of the things the court seemed to do

was actually leave open other avenues.

And those include the avenue of consumer protection.

Can you speak a little bit to that?

Thanks, Nadine.

And thanks, Tim and everybody for having me and joining this fantastic panel.

I want to talk about consumer protection.

And the First Amendment.

But I want to weigh in on a couple of things that have come up if that's okay.

One clear thing is I agree that there's language in this opinion that is very good for people who believe in

the political independence of private actors.

The focus of this opinion, however, to the extent there's language on this,

is on two prominent features of our information economy.

And that is the News Feed on Facebook and YouTube's homepage.

Everything else is not really clear.

And that's what you've heard Steve say.

There are a lot of things that need to be fleshed out.

And so the court is actually pretty direct about the failure of all the parties, including the courts,

the Fifth Circuit in particular, for failing to think about all the nuance and detail.

That is a rebuke of all the parties, right?

The states, Net Choice, CCIA, and the courts below.

And that's to say the courts have to go back and look more carefully at all the different applications that might be at stake.

All the different functions, like Yelp, like email, right?

All these other functions.

Because that's how we determine whether or not the statute is invalid as a facial matter, right?

The application in the context of YouTube and the News Feed is probably unlawful.

So that's one thing.

I just wanted to make sure I said that.

Also, another quick point, and this gets to the consumer protection point.

I don't think anyone on this panel would say that the First Amendment protects all expressive activity.

You've heard that, essentially.

And the court recognizes this because that's what it remands back.

There are certain things that are just unlawful, even if you express them.

So advertising is an example.

You can't just advertise in any way you want.

You have to abide by, say, anti-discrimination laws.

You also have to abide by consumer protection laws.

And this gets to, Nadine, finally to your question.

One of the issues that's important to this case is whether or not governments can require companies to disclose information.

This is the sort of thing we take for granted in the context of pharmaceuticals and health products, right?

And food.

Certain things have to be disclosed.

And what the states try to do here, probably a little clumsily,

is to require companies to explain the decisions they make.

What's really interesting here, to me, is that we don't have any resolution from this court on what the level of scrutiny is for this.

Most of us have thought that this is the sort of thing subject to a lower level of scrutiny.

This kind of expressive activity is subject to the commercial speech doctrine, or intermediate scrutiny,

which is not the strictest level of scrutiny.

And the reason this is important,

the majority in Kagan's opinion says it doesn't matter whether we do the most strict analysis or intermediate analysis.

But this is such an important problem.

And the reason you know that is because Clarence Thomas, who likes to invite all kinds of issues to the court, says

he believes that commercial speech, even requirements to disclose information, is entitled to the same kind of protection from government as expressive activity.

That can't be right.

Because that would really go after basically all these consumer protection laws.

So one of the things I want to underscore is that there is an opportunity here for Congress

and for regulators to think creatively about not intruding on expressive activity,

but attending to commercial conduct that is already understood under the doctrine to be unlawful.

Great. Thank you so much for bringing up those points and clarifying that.

So, Yael, I'm going to turn to you.

You've described yourself as a tech and democracy activist, right?

And I think we really, I'm interested in your perspective on these cases,

given your background and the work you've done over the past couple of decades in this space.

Sure. So, you know, I think, can you all hear me? Is this on?

Move it closer.

I'm reclaiming ten seconds.

It's on already.

That was the problem.

Oh.

All right. Good for technology.

So the one thing I think we all agree on, on this stage,

although correct me if I'm misspeaking on behalf of anyone,

is that these actual laws were beyond the pale.

The idea, I mean, I think that is probably one of the few areas that we all agree with.

That we understand why Texas and Florida laws went too far,

and we understand why Net Choice challenged them,

and I think we all agree that protecting companies' rights to moderate content without government interference

based on ideological viewpoint is right.

But that does not in any way, in my opinion,

override the fact that we do actually still need to figure out what the guardrails are,

where the lines are, how these systems work,

and what Congress can do about it.

So my overarching analysis and reaction to this case is that,

you know, including this case, over the past year and a half,

we've had a number of Supreme Court cases that touch on social media issues,

and they all prove one clear thing to me,

and that's that Congress

has yet to actually modernize and update legislation

to meet the realities of how the internet works today.

And in that vacuum, SCOTUS has been asked to weigh in

on really complicated issues about social media, about online platforms,

and we keep trying to squeeze these complicated current online realities

into outdated laws and limited judicial precedent,

thanks to what I'm sure we'll talk about later, Section 230,

it's a whole other thing, but it's part of the reason we don't have enough

precedent to really base some of these decisions on.

So I think this actually sounds like a great opportunity for Congress

to step up and figure out what are the rules of the road for the future.

But on this particular case, I am going to echo a little bit what Olivier said.

In a way, everybody actually lost.

Because what I find really interesting is,

you know, our colleague here will hand out these flyers

that make it sound like this was an overwhelming victory.

And again, on the point of content moderation and government interference, I think it was.

That said, what I find...

Just note, the Attorney General of Florida also said

this was an overwhelming victory, which I find very interesting.

And as I view it, everybody lost, in part because what I heard

was a Supreme Court that said they are getting frustrated

with these just sweeping First Amendment arguments,

the facial First Amendment arguments,

and has said this is not enough for us to actually base any real judgment on.

And so, I think there's still a lot of work to be done.

I think I'm going to point out a few things that a few Supreme Court justices said

that I think are actually hopeful.

Because I believe there is a tipping point coming out of this case.

And I believe the tipping point is that the Supreme Court

is fed up with what I would call somewhat lazy legal arguments at this point,

of just saying everything that ever happens online on the Internet

is 100% protected by the First Amendment.

And I think the Supreme Court has made it clear in a number of cases

that that is not going to be the argument that wins forever

on everything to do with social media.

You had Justice Barrett hinting at the idea that algorithms and AI

might not all be considered in the same way.

I think that's an invitation to help figure out how they should be considered.

You had Justice Jackson saying not every potential action taken by a social media company

will qualify as expression protected by the First Amendment.

That's a really critical point.

You even had Justice Alito make clear that we need more information

on how these algorithms actually work,

as opposed to taking the industry's explanations at face value.

And so I actually think, again, I'm going to repeat this again,

the Supreme Court justices, going back to Gonzalez even,

are getting frustrated with an industry that has operated with impunity for so long

that they're offering these lazy legal arguments

and the Supreme Court has said enough.

The idea that facial First Amendment arguments work for everything that ever happens online

has got to change.

And in that, I'm just going to finish this first part with saying

that even in oral arguments, going back to Gonzalez,

Justice Kagan pretty much made it clear Congress needs to update the laws

around where the lines should be drawn.

What is just content moderation versus something else? I would hate for us to think content moderation

is the only thing we should be talking about

when we talk about the guardrails of how social media companies operate.

And I think this case and other cases have made it clear

that we have to update these laws so that we can give the Supreme Court

and other courts more guidance on how to view these different cases coming forward.

Nadine, can I weigh in just a little bit, just on this point,

this excellent point from Yael,

to give some specific detail about the information economy

and the effort to regulate commercial activity in the information economy

and the stakes with regards to First Amendment doctrine.

There are two cases I want to draw attention to.

One is NetChoice versus Bonta, out of the Northern District of California,

where an Obama-appointed judge, crediting the First Amendment arguments that NetChoice makes

about an age-appropriate design code law in California,

held that the law is unlawful on First Amendment grounds.

Now, there are elements of that state law that look unlawful in some ways.

I agree.

But this is an example of a First Amendment argument

that is going after a whole set of practices

that we might have historically thought, as a political matter at least,

are fair game, protecting children from targeted advertisements and content

on websites that know that they're directing it at children.

The other one is even more extraordinary to me,

and that is a case out of Maryland

in which Netchoice has invoked the First Amendment to challenge a tax law.

There's a tax imposed on you for advertisements online,

and the tax law includes a pass-through provision about how you have to increase your price.

And Netchoice says this is a violation of the First Amendment

because we cannot tell customers that we're increasing the price as a result of this law.

Now, this is a tax law.

This is the First Amendment gone crazy, in my opinion.

And this is the worry that I think we should be thinking about.

So I think that's an excellent segue.

I knew it was coming.

I'll get you right there.

I want to frame this for our audience,

because I think it's important,

and you just said it very, very well now.

We know from this case, from these cases,

that some kind of regulation in this sphere is not completely barred.

We know there is room, but the question is,

what kind of room and how do we get there?

And so I'd like us to talk about this

in terms of what these cases mean for you all,

for policymaking going forward.

What kind of decisions in terms of regulatory approaches

to content moderation might these cases influence?

Where is Congress now left after these cases?

So, Steve, I will start with you, so you can both respond to Olivier

and give your thoughts here.

And then I'm going to turn back to Yael to bridge that for us.

Thanks, Nadine.

And Olivier is right.

The Maryland law started as a tax law.

They want to tax any online advertising.

And then, when the companies, many of them NetChoice members,

said that we're going to show that tax to the consumer,

so the person buying the ads will see that the extra money

was a tax from the state of Maryland,

Well, then they rushed back into special session

and passed an amendment prohibiting us from telling anybody

that the tax increase came from the state of Maryland.

So they turned the tax law

into a speech suppression censorship law.

So, Olivier, there's no wonder that the First Amendment came in.

Olivier also mentioned our NetChoice case against Bonta,

who is the AG of California.

And in that case, it's not just California.

We have injunctions in California, in Utah, Arkansas, Ohio,

and on Monday we got one in Mississippi,

against very similar laws that require age verification.

And the whole point of the law is that age verification requires

that a company verify your age no matter how old you are.

So an old guy like me and a lot of youngsters like you

who are all over 18 are going to have to produce forms of government ID

to use YouTube, to use Google search, to use social media sites.

So I know it's aimed at the children, but it applies to everyone.

So forcing people to provide ID to be able to use YouTube

was deemed by federal courts in all six states,

where we sued, to be an undue burden on the right

both to express and to receive free expression.

So the First Amendment got implicated not because of what's happening to the kids.

The First Amendment is implicated because it applies to everyone.

And Nadine, did you want me to dive into the implications for Congress?

Sure.

That's great.

So I think that, I mean, Yael's talked about the idea for Congress to update laws.

That's always a good idea.

It's even a better idea to pay attention to what the Supreme Court has said

before you update the laws.

Because if Congress were to jump in and try to do some content moderation laws,

not unlike what happened in Texas and Florida,

you now have some pretty clear guidelines.

Because the Supreme Court didn't write the Net Choice rulings just for the states.

It applies to any form of government.

And age verification.

I just discussed earlier what would happen there.

But those haven't made their way to the Supreme Court.

We have injunctions only at the federal district court level,

and the first circuit court argument is next week,

at the Ninth Circuit for the California case.

When it comes to Section 230,

there's always talk about trying to update Section 230.

But the First Amendment was written 230 years ago.

Section 230 was written in the mid-90s,

and it had to do with tort reform.

How many of you saw The Wolf of Wall Street?

Well, Leonardo DiCaprio played Jordan Belfort,

who sued Prodigy.

Because a bunch of investors who'd been ripped off by Leonardo DiCaprio

were telling each other on the bulletin board,

stay away from Stratton Oakmont.

So Leonardo DiCaprio sues Prodigy,

and a judge in New York said,

you know, Prodigy, because you moderate some content,

I'm going to hold you liable for everything

that all these investors have said,

these nasty things they said about Leonardo DiCaprio.

Well, that was bizarre.

That was insane.

And Chris Cox, who's on the board of NetChoice,

drafted an outline of Section 230

on the airplane back from California the next morning.

Section 230 is tort reform.

It says that if content moderation occurs,

that content remains the property of the person who wrote it,

and the person who wrote it is liable.

If they break a law or they're going to be sued,

you sue them.

You don't sue the platform

if the platform had nothing to do with the content itself

other than displaying it.

So Section 230 itself, I believe,

might well go through some update,

but remember that Section 230 doesn't apply at all

to any federal criminal law whatsoever.

Read Section 230.

It doesn't apply to intellectual property law.

It doesn't apply to child protection,

like child sexual abuse material,

and it doesn't apply to anything to do with federal criminal law.

This is why the Backpage executives went to prison.

You might look at transparency and appeal process.

That's an interesting angle.

Olivier said that we were chastised a bit

for failing to think about

the broad scope of the law.

And again, Olivier is right.

I've agreed with him on everything.

But think about it.

When we went into the legislature and the courts

in Texas and Florida,

we were up against the fact that the governors

and the sponsors of the bill

all claimed every day

that the bill was designed to punish Facebook, Twitter, and YouTube.

The fact that it had a broader scope

was because they wrote the law

in a way that was incredibly vague.

So that wasn't their intended target.

But Justice Barrett is right.

We want to go back down to the lower court,

and focus on whether the law's unconstitutional applications are substantial.

And then finally, the Twitter files and jawboning.

Maybe a few of you were around in late 22

when the Twitter files came out.

And the first batch of Twitter files,

Net Choice was in 10% of them.

Because we were warning our contacts on the Hill,

we were warning our members

that contacts on the Hill were really upset

about the suppression of the Hunter Biden laptop story

in the week and a half before the election.

They were saying, we don't understand how content moderation rules

are being invoked to suppress the sharing of that story.

Within a day or two, our members got it

and made the switch.

Elon Musk released these things

to try to show that his company

was being pressured by the government.

And the First Amendment prohibits government officials

using their official capacity

to try to indirectly pressure

anyone, not just social media,

about what content to carry.

The House just last March

passed, on a partisan vote,

a bill from Cathy McMorris Rodgers,

Representative Comer, and Representative Jordan,

that prohibits any federal employee

from using their official capacity

to jawbone social media.

It could cost them their job and their pension

if they do so.

Well, it's sitting in the Senate

and probably isn't going to go anywhere.

But then the state of Florida enacted

an identical law applying to Florida employees.

So there are things, in fact,

that we can do legislatively,

but it's a comfort to do so

knowing what the guardrails already are

coming from the Supreme Court.

I think that's a great entree for you, Yael,

so I will turn right to you.

Sure.

So I'm not going to go into the jawboning case.

I think most people can agree

that there should be lines around jawboning.

Unfortunately, the rest of that case, though,

a lot of it was predicated

on some pretty wild conspiracy theories.

So I'm not going to go all the way down that case.

One of my key concerns is that in this situation,

industry wants it both ways.

On the one hand, companies are getting privileges

as speakers under the First Amendment.

That is what we just saw in the Net Choice case, right?

But they don't have any of the responsibilities

or liabilities that speakers usually have,

and that's because of the preemptive immunity

that Section 230 affords them

for so much of what is happening.

Now, I want to be very clear.

I agree that a company should not be held responsible

for third-party speech.

If you say something terrible on Facebook,

I agree that you, as the speaker,

should be held accountable.

What I don't agree with is this overly broad interpretation

that, therefore, anything a tech company does

with that speech, with their own tools,

how they recommend content,

how they micro-target content,

how they monetize,

that that also all counts as speech and therefore gets Section 230 immunity.

And remember,

Section 230 is not about whether a company

is guilty or innocent of a crime.

It is about not even letting us explore

whether they have any responsibility in that crime.

So I do want to just emphasize

that even though, to your point,

Section 230 has a criminal liability exception,

and it is not about the same situations as in this NetChoice case,

it is still an industry that wants it both ways.

They want First Amendment protection as speakers,

but then they don't want any of the responsibility

that comes with being a speaker.

So, in my opinion,

if we keep going down this path

of categorizing every single facet

of tech platform behavior

as protected expression,

where do we draw that line?

And even Supreme Court justices

have been asking Congress

to help determine where to draw that line.

And I know that we don't have so much time,

but I do want to go to a few examples.

And one point in my background,

that I think is important to know,

even though it was short-lived,

I did actually work at Facebook.

I joined in 2018 to head their elections integrity work

right after Cambridge Analytica.

And part of my role there was to help figure out

how to make sure that political advertising,

from which they not only make money

but for which they also sell targeting tools to advertisers,

could not be manipulated by bad actors.

And so I do have some inside knowledge as well

of how these systems work

and what the companies do and don't know.

And for me, I guess the question is,

where does this line stop?

Where does this idea of free expression stop?

Does it stop at algorithmic amplification

of hate and harassment?

Does it stop at recommendations to users

to join extremist groups?

Does it stop at revenue sharing with users

or groups who advocate for extremism

or incite violence?

What about companies' own

auto-generation of content?

Because they always talk about it

being third-party speech.

I can point to ample evidence

of where companies themselves

auto-generate content,

but then say it's third-party speech.

So when a company auto-generates pages for ISIS,

which we have proof has happened on Meta,

or YouTube videos for white supremacist bands,

which we have evidence has happened,

is that the point?

Where do they no longer get to hide behind

the idea of just being intermediaries,

of just having this free expression protection?

And I am going to go to extreme examples here,

because for those of you in the room

who are involved with thinking through

how Congress might update these laws,

I have never been one to advocate

for getting rid of Section 230.

I think some of the protections there

are absolutely critical.

But here's an example.

What if a registered sex offender

doesn't just use a social media platform

to go find children,

but what if the platform itself

recommends that that known sex offender

connect with an underage child

and recommends the connections themselves?

And then that person does harm to that child.

Should that child's family

at least be able to have their day in court

to figure out if the company played any role

in facilitating that?

And even though you say that federal crimes

are different,

that case would still possibly be thrown out under an over-broad interpretation

of Section 230, so the case can't even go forward.

And so for me, the question is,

at what point do we stop saying

this industry gets a 100% free pass?

And while we agree on some of the results

and arguments,

Net Choice has made some very sweeping arguments

about how anything that we wanna do

to try to rein in some of this company behavior

falls into some form of free expression issues.

Now, the transparency one is another interesting one, right?

Because this case now, the Net Choice case,

the Supreme Court justices made it pretty clear,

well, they didn't say specifically about mandating transparency,

but they left the door open on

what transparency legislation might look like.

And what's critical to understand is,

you don't have to have transparency legislation

that is content-based, actually saying,

this is the content you should or should not allow.

But how about disclosures around

how your systems work,

around how you are recommending content,

around even possibly disclosures around takedowns?

All of these things should still be on the table.

So my biggest recommendation would be to figure out transparency,

because you cannot make smart policy or legislation

in a vacuum.

And transparency, what that provides,

is the data and evidence needed

in order to craft smart policy.

Just imagine if any other industry

got to operate in as much of a black box

as the tech industry does.

Just bear with me for one minute.

Imagine if the pharmaceutical industry

not only got to study itself,

but imagine if it turned out

that they didn't have to do any safety testing

before putting medicines out on the market,

and when people got sick,

they were the ones who got to study what happened.

They were the ones who got to self-select

what data to make public,

and whether or not they caused any harm.

And then they were the ones who got to figure out

how to fix it with no outside independent

audit expertise.

That is where we are

with the tech world right now.

And so I'm not saying

that we should be holding them accountable

for particular acts.

I'm saying we have the right

to at least start understanding

the systems, how they work,

the recommendation engines,

their own tools,

and their own business decisions,

and it will not be easy.

There is no easy, clear-cut answer.

I am sure that you will be told

that it will invite a wave of lawsuits.

Guess what?

Lawsuits in and of themselves

are one way

that we as an American system

have decided we will hold

corporate power to account.

So I don't see that as a valid argument.

And they will also say

it won't pass First Amendment scrutiny.

Again, I'm not talking about content.

I'm talking about platform behavior

and their own tools,

and we have to start considering that.

All right, great.

So I know both Vera and Olivier

have a lot to comment on,

so I will start over here with Vera,

and we will move on.

And let's think about this

in terms of, again,

not only responding to this,

but also thinking about,

for this audience,

where you think some key opportunities

are for them.

Thank you.

So I, just listening to my

thoughtful and esteemed co-panelists,

I'm learning a lot.

It's really helpful.

And it sounds to me like we agree

that social media platforms shouldn't be,

aren't First Amendment-free zones,

meaning some First Amendment protection applies.

We also think that they shouldn't be

regulation-free zones.

That seems true across the board.

And I think maybe we slightly disagree

on whether they are regulation-free zones.

Because what I would say is that

I think some of the questions

Yael has raised are real Section 230 questions

and are interesting:

what is protected and what isn't.

But I certainly don't think that

every single thing that a platform does

is immunized by 230.

For example, the ACLU has successfully

challenged the discriminatory targeting

of housing, employment, and credit ads

on Facebook.

We've also argued that the use

and misuse of users' private data

when given to platforms,

not for publication,

but for actual use of the platforms,

is not immunized by 230.

So I think there's plenty already

that is on the table and perhaps illegal,

notwithstanding 230.

I also think 230 is incredibly important

for users, for all of us,

for the ability of information

to be available online,

to be available in different forms of curation,

to be available for different communities.

But I will not focus solely on 230 here.

I also think that it is worth saying

that as Olivier actually pointed out,

a lot of the problem here also rests

with the regulations.

A lot of what we're looking at

when it comes to the internet

are super messy, unclear regulations

from who is governed

to what is actually being required.

And I completely agree that that was

in some ways the central holding

of the Net Choice case

and part of what the court

was chastising everyone for.

And I would say that to you all,

as you're thinking about regulation

that might be useful,

it's really important to be precise.

Both because I think if you have one thing

in your mind that is the thing

that you're trying to accomplish,

the court has made clear correctly

that that's not all that matters.

What matters is the text

of what you actually write

and the effect of what you actually require

of your regulated entities.

That's been true across the board.

So even when legislation is passed

with good intentions,

which I think it typically is,

that's not all that the court looks to.

The court looks at what the legislation

actually does and requires

of the regulated entities.

So when it comes to regulating

online speakers, online actors,

online businesses,

those are all arguably

slightly different things.

It's important to be clear

what you're regulating.

And I think that that matters

both for the ability of anyone

being regulated to understand

what they have to do

and also to ensure that you don't

have to face lawsuits

where you don't need to.

I think to some of the cases

that people have identified,

I imagine all four of us

would actually slightly differently

define and describe

the laws at issue.

And that's partially because

they're really broad and messy

and they do a lot of different things.

Sometimes they do good things

mixed in with the bad things.

There are a lot of these laws

where I would say,

if they had the consumer privacy regulations

and the geolocation disclosure requirements

separate from the content-based burdens,

I'll just speak for myself,

I would probably personally support them.

I can't say what the ACLU would do.

But I think that it's just important

to focus in on the problem

and regulate that thing

rather than regulating so very broadly.

And I do think that both

the First Amendment and 230

give space for that.

I'll also just identify

the two things that I saw in the opinion

that the majority specifically identified

as spaces for potential regulation.

They numerous times referred

to competition policy.

Perhaps that's because

there's a Supreme Court case

that deals with the application

of competition policy

in kind of a similar situation.

But they did identify that

as a viable option.

And the majority also writes

about decreasing inequities

in the speech marketplace in general.

So I read that to say basically

give more and more people

the opportunity to speak.

Perhaps enable certain speakers

who don't feel comfortable speaking

to speak more.

Don't restrict the ability

of anyone to speak.

So we have an hour left, right?

There's so much on the table.

This is something

that's such a rich area.

And I feel grateful to be able

to talk on this stuff

as I think everyone on this panel does.

I'm sorry we don't have enough time.

I'm going to try to hit

as many issues as I can

in the time that we have.

And I think we want Q&A also.

All right.

So I'll talk about

the First Amendment stuff

and then turn to 230

and then potential other reforms,

which I think is what you want us

to talk about, Nadine.

And so with regards

to the Maryland case,

the Fourth Circuit heard this argument

about whether the pass-through provision

is speech,

whether it's a violation of the First Amendment,

given the facts that Steve recounted correctly.

The question that it remanded back

to the district court is,

is this conduct

that you can regulate

or is this speech?

That's the legal question.

And that's the question

that I want to make sure

we're just laser-clear on.

Are these activities

commercial conduct

that we can regulate

in the way that I described,

or are they protected speech?

That's the question.

The other thing I want to touch on

is, you know,

Vera made great recommendations

about being attentive

to the details

when you all are talking

to your bosses about language

and drafting language.

And the two things

that I would completely agree on,

given where the court is

after the Supreme Court's

Sorrell opinion in 2011,

are to worry about laws

that target certain kinds

of speakers and certain kinds

of speech.

And that's largely

what I think might be difficulties

in the California

and Maryland laws.

But just to be clear,

those are problems

that I think drafters

should be worried about.

And that's where

I would raise caution.

Now, with regards to 230,

how many of you are using

Prodigy right now?

That's the case,

that's the Stratton-Oakmont case.

Nobody uses Prodigy.

I think 230 was, in its day, mindfully written,

given where things were in 1996.

It's interesting that

then-Representative Cox

wrote, drafted this language

on a plane ride back.

There wasn't a lot of information

about what would come next.

Most of us know

that automated systems

and the ways in which

services are delivered today

look nothing like

they did in 1996.

They enable the sorts of things

that you heard Yael talk about.

So, with regards to Section 230,

the fact pattern,

the hypothetical

that I think Yael offered,

the horrible one,

is actually close to cases

where the courts have relied on 230

to kick the case out.

The Match.com

and the Experience Project cases,

I recommend that you take a look:

courts citing 230

as a justification

for kicking claims out,

even when the companies know

that they're making recommendations

that will be harmful.

Now, with regards

to the Facebook settlement,

a case I actually participated in

as a consultant with plaintiffs,

that was actually a settlement.

There was no decision on the case.

I think the companies were worried

about what a court might do,

but it was a case

that had to be settled,

given where the courts had gone

with Section 230.

So, discriminatory ads on Facebook,

there's not yet an opinion on it,

largely because 230

could have foreclosed discovery

of how implicated Facebook is.

So, the big picture lesson,

I think, from the Supreme Court's

Net Choice cases,

and from the things

I think you're hearing

all of us talk about,

is that we should now be far more alert

to the nuances of applications and functions.

And that means being open

to the possibility

that sometimes these companies,

I don't use the term platforms,

because they're commercial enterprises,

that these companies sometimes

might be doing something

that resembles unlawful conduct.

That's something that's important,

and we should be alert to.

That's the key question.

So, other possibilities.

So, sure, reforms to Section 230,

I've written a bunch about it.

Yael has spoken about it.

And I think there's even, you know,

Steve recognized that there's been change

to Section 230.

But I know we don't have a lot of time.

I want to pivot a little bit

and say that there are other things

that we might want to think about.

And that is what you've heard about, right?

One of the driving considerations

for companies is advertising.

And the reason advertisers are interested

is because these companies,

many of them, have access

to a lot of consumer information.

They know how to target information

in ways that have never been possible before.

These are not newspapers.

They know a lot about people.

And so targeting

and processing of information

is an area that is arguably commercial conduct

that might instill a degree of responsibility

to the extent companies now have to be alert

to the ways in which they collect information

and use it to target.

I think this is part

of the content moderation conversation.

There's less incentive to attend

to certain kinds of advertising

if you are careful

about all the information you collect.

So I would put data protection regulation,

the APRA,

the thing that Congress is considering,

as part of content moderation.

Why?

This is about commercial incentives,

attending to commercial incentives.

Great.

Yes.

So, what I would love: I want to open up to questions now,

because I'm sure a number of you have questions,

as you've been thinking about these issues and where to go from here.

So please raise your hand.

Yes, please.

Go ahead.

You know, one of the things that I've heard today, I hear a lot, and I think it's a lot like what I've heard for the past decade.

Who are you, by the way?

I work for TechFreedom right now. I've spent six years with the Foundation for Individual Rights and Expression, and the ACLU before that.

So the comparison of industries, you know, strikes me as a little bit of an apples-to-oranges situation, a false equivalence.

Pharmaceuticals are not speech.

There are different rules, and saying that the two industries operate with different plans and different rules is true, but that is because the First Amendment applies to one and not the other.

And I find that kind of a rhetorical sleight of hand that gets the intended effect, but really kind of misleads people about what the actual rules of the world are.

And I don't, I'm not going to sit here and say there's no possible amendment to Section 230 that would be, you know, appropriate.

But I don't think anybody actually seriously argues that Section 230 should apply to literally every single thing a platform does. That would be silly, and if anybody does argue that, they should feel bad about it.

That being said, you know, we do have precedent. We do have some court holdings.

When it comes to recommended content, there is no doubt about it: recommending content is expressive in itself.

That is kind of interesting. The phrase "I think you will like this" is expressive.

So, even without Section 230, you don't necessarily get the liability that you want, because the courts for decades now have held there is no duty to prevent speech from causing harm to people.

We've seen this in the cases about broadcasters, where they broadcast a true crime documentary and kids do copycat crimes.

We've seen this with Dungeons and Dragons, where a kid committed suicide and the court said the makers of this board game, this RPG, can't be held liable at all on the theory that they put this game in the hands of a mentally ill individual.

So, there is no duty of care.

So, how do you get from no 230 to getting around the First Amendment restrictions on liability to make this, you know, a plausible course of action, instead of actually focusing perhaps on advertising, which may be the one thing that actually can, you know, help the kids, when those other theories are so foreclosed by so much good precedent?

Can I say one thing?

Yeah, and then I would like you to answer this.

Just one thing. The reason is that we need a mechanism to instill a sense of civic responsibility.

100%.

I'll just say

two quick things.

On apples to oranges, you're right. Of course, the pharmaceutical industry and the tech industry are not exactly equal.

However, this industry does remain the only industry in the world that holds the most research findings and gets to determine whether or not anybody else gets to see behind the veil, including not having safety requirements up front.

Now, here's the problem.

We're using the words "tech industry." Let's be very clear: the tech industry is a broad term, and I don't mean every single element of the tech industry.

But at the end of the day, even if it's apples to oranges, they get to study themselves, they get to bar access to data when they want and how they want, and the fact that we have to continue to take industry at their word is a problem.

I will just really quickly talk about recommendations. This is why I said I will never say there's an easy solution, and there will always be tradeoffs.

A recommendation, to your example, of "hey, you might like this" is one thing. The recommendation telling this man to connect with an underage child is also a recommendation engine at work.

So it's about, to your point, being precise and starting to figure out what the guardrails are and what actually can lead to potential company responsibility.

I don't like using "recommendation engines" to talk about everything. Some of it is expressive and some of it is not.

Steve,

15 seconds.

I have two more questions.

15 seconds is easy.

Section 230 protects against lawsuit abuse.

America is unique. The rest of the world doesn't have 230 because the rest of the world has loser-pays lawsuits, meaning the loser pays the costs of the other side.

That doesn't happen

in the United States

and we'll never get there

because the plaintiff's bar

loves the current situation

So the lawsuits would make it so that, without 230, who in their right mind would allow you to say whatever you want that was lawful on a platform? Because the platform could get sued for defamation because you insulted my restaurant on Yelp, or you made fun of the way I drove the car on Uber.

If those platforms can be sued for things that you said, they're not going to let you say it on platforms anymore.

Without 230, we don't have user-generated content, period.

I'm sorry, I just wanna say, that's why I'm not advocating for getting rid of 230. To your point, that is not the reform that many of us are advocating.

All right, I see a number of hands, so I'll ask that you state who you are and then give your question, please.

We can get to a few more.

Go ahead.

Thank you. The Justice touched a tiny bit on artificial intelligence, and she's wondering about it. So where do you all see the court going on that topic, and on the interaction between that and speech? There was some discussion of how far away from the policy an algorithm can get, so I just wanna see what your thoughts are on that.

Justice Barrett, in her concurrence, was the one who mentioned it, in the notion that when humans design the algorithm, or feed it the data that allows it to perfect itself, that's clearly expressive content.

But she, and she alone, raised the question that if it's completely computer controlled, without any human design or interaction, then maybe that might not enjoy as much First Amendment protection.

So that door got opened a little bit, but only by one Justice and not the rest.

Alito as well actually questioned how far algorithms and secrecy would be okay, so it wasn't just Barrett.

It's a great question, I agree, and I will say one thing, because I actually really wanna hear what Vera says about this. And that is that even when an automated system is in operation, it is not autonomous. It's not the Terminator, right? It is designed by people.

So, for me, I like her parade of possibilities, but I think that there are still openings for thinking about First Amendment issues that are limited.

I largely agree with that. I think the hypo she raised maybe doesn't exist and maybe won't exist. And I do think there's a little bit of a difference between AI and algorithms, which your question got to.

So the court, I think, made clear that the fact that computers and algorithms are used to enforce content moderation policies that humans write and intend is clearly protected. And then, yes, I think there's this question of, like, the human out of the loop entirely, but I'm just not sure that's real.

It's also a question that's in one concurrence, and I am very loath ever to read Supreme Court tea leaves, especially when it's in a single concurrence that's basically adding to the list of a hundred questions here, which I imagine at some point we'll have to decide. But I also hope it will be briefed and explained; technologists will say what AI is, et cetera, et cetera.

Trying to go in the wayback machine to when I used to come to these as a staffer and then sort of learn afterwards, I want to go back to the question of what the paths forward are.

As opposed to the big statements about industry and the way the laws are, and wishing there was less protection, I heard from both people, Vera and Steve, crystallized ideas about how to go forward. One of those fell right into the wheelhouse of Congress, and that's the courts.

Steve, I think you talked about maybe changing up some of the transparency and appeals. I'd love to hear more. I'm a former judiciary staffer, and so it doesn't quite rank in my wheelhouse.

Just a quick answer, then. Transparency could be many things. In Texas and Florida, they talk about making sure that you show your content moderation policies transparently. They all do.

So it's really not transparency as much as it is: explain why you didn't take my post, explain why you didn't rank it first on my friends' feeds. And then, if the user's not happy with that answer, they get to appeal.

So you have to stand up an ability to explain why content wasn't featured or listed, and then go through an appeals process. The courts that looked at Florida and Texas, the 11th and 5th Circuits, tried to figure out what the standard is, and it becomes a tremendous burden. That burden then chills First Amendment activity.

One other question for all of you to think about: if you can force Facebook to tell you why they didn't carry your post and give an appeal, well, then I guess you could force the New York Times to explain to you why they didn't carry your letter to the editor, or why they didn't print your op-ed.

In the oral arguments, our attorney asked, could the New York Times be forced to explain why it didn't take your wedding announcement? There's a tremendous competition to get into the Sunday paper. But if wedding announcements are now subject to explanations, "we didn't like your dress," or an appeals process, at some point they stop taking wedding announcements in the New York Times.

That's what we mean by chilling speech.

So transparency sounds like a good word, and by God, we should be as transparent as possible about what our rules are. But if you ask us to stand up an appeals process, you have to think about whether it's discriminatory against the online space versus the offline.

I know we're out of time, but there are some content-neutral transparency proposals out there, so it does not always have to be based on this part of what this case was looking at.

I also want to observe that I think no serious person who thinks about this will draw an immediate equivalence, or identity, between newspapers and the kinds of things we're talking about. This is the nuance we need, and I'm glad for the question.

You focused on Vera and Steve, but I don't think you heard what I was saying, or what I was trying to say.

So I think we agree that there are some transparency mechanisms, and disclosure is probably really important. I don't know, maybe you didn't hear me talk about data protection, but data protection law is actually a pretty important thing. I think we have our friend from TechFreedom who agrees that attending to commercial practices addressed to advertising is important.

And finally, maybe you didn't hear me say this, but I've made a recommendation. Oh, you only heard the griping about industry? Okay. So the other thing is that I made a recommendation: when you write laws, make sure they're not addressed to speakers and specific content.

That's right.

I was saying more than the thing that I think you heard me say. You weren't listening, maybe.

Yeah, I'm glad you were able to reinforce those points.

And I saw one more question. I know you all have been very patient, and hopefully that's because it's so scintillating in here. I agree. So let's have one more question.

I was wondering how you could update Section 230 without kind of defanging the legislation as a whole and disempowering its ability to protect, like, the...

One quick answer is Congress can enact criminal laws, and so can states. If you made it a criminal law to share CSAM, well, I guess it already is, and that's why 230 has nothing to do with CSAM. 230 will never affect a CSAM case. It'll never affect treason. It'll never affect violations of harassment laws. For any crime, any federal crime, 230 is not even on the table.

So Congress can put together federal crimes, and 230 won't apply at all.

And then, unfortunately...

No, no, there's no way to get into all of this right now, so we're over time.

I would be happy to; we've written a lot about this. But just to be very clear: while what you say is true, unfortunately Section 230 has been overly broadly interpreted by the courts, and so some of those cases have been thrown out. And that's where I think Congress can actually clarify the rules of the road, because 230 has just been so overly broadly interpreted to throw out cases that actually could have possibly touched on some of the things that he was mentioning.

But I do think it is time to actually differentiate between what we mean by expression and third-party speech and what we mean by company behavior, and that is something that really warrants updating.

And I think the point here is that we know we have that opening from the court, and it's important to seize that opportunity.

Thank you all so much for being here and for your time, and to our incredible panelists. Really appreciate everyone being here. Thanks again.
