California Judge Says Because Snapchat Has Disappearing Messages, Section 230 Doesn’t Apply To Lawsuits Over Snapchat Content

from the this-is-not-how-this-is-supposed-to-work dept

Well, this is dumb. As detailed by NBC News, Los Angeles Superior Court Judge Lawrence Riff has rejected a perfectly reasonable attempt by Snapchat to have a lawsuit thrown out on Section 230 grounds. The case involves family members of kids who overdosed on illegal drugs like fentanyl suing Snap for allegedly providing the connection between the drug dealers and the kids.

And, if you think this is kinda like suing AT&T because a drug buyer used a phone to call a drug dealer, you’re not wrong.

But, that’s not how Judge Riff seems to see things. In the ruling (which NBC did not post), Judge Riff… seems pretty confused. He buys into a recent line of somewhat troubling cases that argue you can get around Section 230 by arguing “defective” or “negligent” design. Here, the “defective design” is… the fact that Snap has disappearing messages and doesn’t do age verification:

According to plaintiffs: Snapchat is specifically chosen by children, teens and young adults for drug distribution because of how Snap designs, markets, distributes, programs and operates Snapchat (SAC, ¶ 2); Snapchat’s many data-deletion features and functions made it foreseeable, if not intended, that Snapchat would become a haven for drug trafficking (Id. at ¶ 3); the combination of practices and multiple features Snap chose to build into its Snapchat product – such as ineffective age and identity verification, facilitating easy creation of multiple, fake accounts, connecting kids with strangers and drug dealers “in-app” through the “quick add” features, and a live mapping feature – makes Snap an inherently dangerous product for young users (Id. at ¶ 13); Snap was on notice that Snapchat was facilitating an enormous number of drug deals (Id. at ¶ 14)…

Of course, this should be an easy Section 230 dismissal. The claims are entirely based on Snap not magically stopping drug deals, which is about user content. The fact that messages disappear is meaningless. The same is true for a phone call. The idea that Snap “intended” for its product to be used for drug deals is similarly bonkers.

But, Judge Riff appears to have opinions on Section 230 and he’s going to use this case to air them out. He notes that the Supreme Court has not yet ruled specifically on the bounds of 230, pointing to the Gonzalez ruling last year where the Court deliberately chose not to rule on Section 230.

Riff seems to mock the commentary around Section 230:

These are, it has been famously (and at this point monotonously) said, “the twenty-six words that created the internet.” At least dozens if not hundreds of courts, academics, and other commentators have by now explained that the provision was designed, in 1996, to protect then-fledgling internet companies from incurring liability when millions of users posted content and when the companies made moves to police that content.

He follows that up with a footnote mentioning an article by Jeff Kosseff (whose name he misspells as “Kossoff” and whom he seems to mock), even while calling him “section 230’s preeminent historian.”

He then goes through an abbreviated (and slightly misleading) history of 230, and does so in a breezy, somewhat mocking tone.

If Congress had intended to immunize all interactive computer services from liabilities “based on” third-party content, there are straightforward elocutions to express that intention. But that is neither what Congress did nor what Congress could have done consistent with the policy statements in subdivision (b) of section 230. Instead, Congress chose to invoke words of art drawn from common law defamation-liability distinctions between “publishers” and “speakers,” on the one hand, and, apparently, “distributors” on the other.

Again, why those words and why in 1996?

At common law, including in New York state in 1996, publishers were held to a higher standard than distributors over defamatory or other illegal content on the theory they did, or at least reasonably could, exercise editorial control. Distributors, on the other hand, were liable only when they knew or should have known that the publication contained illegal content. It is universally accepted by knowledgeable persons, including the members of the California Supreme Court, that Congress’s decision to use the publisher/distributor distinction for section 230 was in response to a New York decision, Stratton Oakmont, Inc. v. Prodigy Services Co. (N.Y. Sup.Ct. 1995) 1995 WL 323710 (Stratton Oakmont), applying New York law. (Barrett v. Rosenthal (2006) 40 Cal.4th 33, 44 (Barrett).) An early Internet case, Stratton Oakmont held that because the defendant had exercised some editorial control – removing offensive content and automatically screening for offensive language – over the third-party content, it was properly treated as a publisher and not a mere distributor. Section 230(c)(1) overruled, as it were, the Stratton Oakmont decision by eliminating common law strict liability for acting like a publisher by posting, or removing some of, a third-party’s false statement.

An early federal appellate decision, Zeran v. America Online, Inc. (4th Cir. 1997) 129 F.3d 327, had an outsized influence on the interpretation of section 230. According to the California Supreme Court (among other courts), Zeran rejected the notion of any distinction between publisher and distributor liability, instead finding that Congress intended to broadly shield all providers from liability for publishing information received from third parties. (Barrett, supra, 40 Cal.4th at p. 53.) The Barrett court explained, “We agree with the Zeran court, and others considering the question, that subjecting Internet service providers and users to defamation liability would tend to chill online speech.” (Id. at p. 56; see also Hassell v. Bird (2018) 5 Cal.5th 522, 556-558 (conc. opn. of Kruger, J.) [Zeran’s broad reach did not, however, prevent the Ninth Circuit’s conclusion in Barnes, namely, that section 230 did not immunize Yahoo for alleged promissory estoppel because the claim did not seek to hold Yahoo liable as a publisher or speaker of third-party content].)

He then goes on to cite a few different judges who have recently called into question the way the courts view 230, including Justice Clarence Thomas who famously went off on a rant about 230 and content moderation in a case that had nothing to do with that issue, and in which there had been no briefing or oral arguments about the issue. Oddly, Riff does not mention Thomas’s writing in the Taamneh ruling (which came out with the punting on actually ruling about 230 in Gonzalez), in which (after being briefed) Thomas seems to have a better, more complete understanding of why companies need to be free to make moderation decisions without fear of liability for the end results.

Either way, Judge Riff takes us on an extended tour of every case where a judge has ruled that 230 doesn’t apply, and seems to take that to mean that 230 shouldn’t apply in this case (or, rather, he says that 230 can apply to some of the claims, regarding moderation choices, but cannot be used against the claims around things like disappearing messages, which he argues could be seen as negligent design).

The allegations assert conduct beyond “incidental editorial functions” for which a publisher may still enjoy section 230 immunity. (See Batzel v. Smith (9th Cir. 2003) 333 F.3d 1018, 1031.) Additionally, the court finds that the alleged attributes and features of Snapchat cross the line into “content” – as the Liapes and Lee courts found, too. The court rejects, as did the Ninth Circuit in Barnes, Snap’s assertion of “general immunity” under its “based on”/”flows from”/”but for” reading of the scope of section 230.

Basically, this ruling reads Section 230 as a near dead letter. It says that so long as you allege that any problematic content you find on social media is the result of “negligent design,” you can take away the 230 defense. And that basically kills Section 230. As we’ve explained repeatedly, the entire benefit of 230 is that it gets rid of these ridiculous cases quickly, rather than having them drag on in costly ways only to lose eventually anyway.

Here, this case has almost no chance of succeeding in the long run. But, the case must now move forward through a much more expensive process, because the judge is willing to let the plaintiffs plead around 230 by claiming negligent design.

There’s a separate discussion, outside the 230 issue, over whether or not Snap can be held liable for product liability since it offers a service, rather than a “tangible product,” but the judge doesn’t buy that distinction at all:

The tangible product versus (intangible) service test is a false dichotomy as applied to Snapchat, at least as Snapchat is described in the SAC. As noted, even the parties struggle to find language with which to categorize Snapchat, but neither “product” nor “service” are up to the job.

And thus, the case must continue to move forward:

The court’s answer is: not enough information yet to tell, and the question cannot be resolved on demurrer. Accordingly, the court overrules Snap’s demurrer to counts 1, 2, and 3 on the ground that Snapchat is not a tangible product. The court will permit the parties to create a factual record as to the characteristics, functionalities, distribution, and uses of Snapchat. The court has no doubt that it will revisit later whether California strict products liability law does or should apply in this case, but it will do so on a developed factual record.

Separately, Snap also had pointed out that even outside of 230, there’s nothing in the complaint that would constitute negligence, but again, Judge Riff punts, repeatedly saying there’s enough in the complaint to move the case forward, including on the absolutely insane “failure to warn” claim (arguing that Snap’s failure to warn people about the dangers of buying drugs is part of the negligence it engaged in):

Snap demurs to count (negligent failure to warn) on the basis that “the danger of buying illegal drugs online is obvious, so no warning is required.” (Demurrer, 2.) The SAC, however, alleges that the harm arose from a non-obvious danger, namely, the presence of fentanyl in the drugs purchased by the minors. The SAC does not allege an obvious danger for which no warning is required.

Again, that makes no sense at all, and is exactly why 230 should apply here. First of all, this is a complaint about the content, which should put it squarely back into 230’s purview, even according to Judge Riff’s own framing. The issue is whether or not Snap needs to warn about some of the content posted by users. That should easily be stopped by 230, but here it’s not.

On top of that, how is Snap (or any website) to know every potential “non-obvious danger” that might arise on their platform and effectively warn users that it might occur? That’s why we have laws like 230. To avoid these kinds of nonsense lawsuits.

Anyway, this case is far from over, but the implications of this ruling are shocking. It would enable lawsuits against any communications platform that doesn’t record all content indefinitely, whenever that tool is used for anything that might lead to harm.

And, yes, things like Signal disappearing messages, or the telephone, or meeting in a park seem like they could apply. Again, none of this means the plaintiffs will win. There’s still a decent chance that, as the case moves on (if it moves on), they will lose because the facts here are so silly. But just the fact that the judge is saying 230 doesn’t apply here is tremendously problematic and troubling, and gives yet another way for plaintiffs and ambulance-chasing lawyers to tie up websites in ridiculous litigation.

Companies: snap, snapchat


Comments on “California Judge Says Because Snapchat Has Disappearing Messages, Section 230 Doesn’t Apply To Lawsuits Over Snapchat Content”

37 Comments
Anonymous Coward says:

Snapchat’s many data-deletion features and functions made it foreseeable, if not intended, that Snapchat would become a haven for drug trafficking

Does that mean that mobile, landline, and VOIP phone systems are also defective in design because they are not designed to keep a record of message contents? If Snapchat loses, the surveillance state gets a huge gain, as not recording and keeping messages becomes defective design.

Anonymous Coward says:

Re:

It is worrying that a lot of people seem to think that if companies run by elites do not act as police intelligence units, they are to blame for criminal actions. Do you really want a full fascist state where everybody works for the good of the state, and that good is measured by the success of the elites, so everybody has to work to increase their wealth?


blakestacey (profile) says:

Re:

I was thinking along the same lines. “Drugs are cut with all kinds of [expletive deleted]-ing [expletive deleted]” has been a staple of cop shows, medical dramas, etc., for decades. It’s an idea floating out there in the culture.

I’m tempted to say that the point is to catch Snap in a cleft stick: If the danger was obvious, they should have nerded harder and designed their product to prevent it, but if it wasn’t obvious, they should have done more to warn their users about it. Heads the ambulance-chasers win, tails Snap loses.

ECA (profile) says:

A few things happening here.

The idea that getting rid of Communication will Curtail drug distribution is a bit of a fallacy. It means they will find another way to Communicate, and distribute the drugs and GET the drugs. And it will be harder to track.

Snap has a Problem MAYBE. As the Police agencies Could ask them to track and save any mention of CERTAIN THINGS. And report them. But then to keep the concept SAFE from being noticed, DO a Search and Find and everything ELSE, as to HIDE how they got the info in the first place.
As they could with MANY of the internet sites that abound with Any type of forbidden knowledge.

BUT, as has been noticed over the years. Some groups seem to think scaring Illicit actions/interactions OFF of the internet is a Good thing. Those persons MUST have alternative ideals on HOW those services and such should be run, RATHER than PUBLICLY noticed.
As Many of us older Internet users may still understand, there is the OLD internet Still in the background. There are Many services, Including Older email programs, that WILL ERASE mail and chats with no notice.

Questions I would have for the parents.
Where were you when the kids Started using drugs?
Was there a service you had access to that May have solved this problem?
Did your medical cover this, or was a FREE service accessible?
There are too many more to put here. But its the idea of WHAT is anyone doing to help those that want to STOP?
NOTHING in the USA.

Samuel Chapman (user link) says:

Snap Inc. Lawsuits

The author/Podcaster misses the point of the suit. The platform is intentionally addicting children to their platforms, creating negligence. The addictive features are not text protected under the First Amendment, they are activities designed by the company to keep you on the platform, such as “streaks.” The theory is that the platform is then responsible for any harm that comes as a result of the negligence.

Also, under California law any service with a certain high number of users is considered a product under the law. We have a product liability suit with a failure to warn as another allegation.

All in all, there is plenty to consider factually in these cases and the judge was right to let them go to trial. The harm being caused to children by Snap Inc. via Snapchat has got to be handled by our courts. There is nothing frivolous about the parents’ claim after the loss of our children.

Anonymous Coward says:

Re:

The theory is that the platform is then responsible for any harm that comes as a result of the negligence.

It’s not a great theory.

The government doesn’t suddenly become liable because people keep having a good experience driving on a road and keep driving on that road because it feels good.

That One Guy (profile) says:

What is it with judges making rulings on things they don't understand...

Well I can’t possibly see how opening up a massive hole in 230 by allowing ‘they designed their platform/product in a way I don’t like, therefore it’s defective’ to fly could possibly lead to an avalanche of lawsuits, so great job Judge Riff, that’s one way to ensure job security.
