Imagine you want a recipe for rice pilaf from Uzbekistan, so you turn to your favourite search engine or AI service. However, instead of finding the desired recipe, the machine returns a recipe from Louisiana. In such a case, who would be held responsible for the error? Or is this in fact an error? Would you sue the company providing the service and algorithm, or the third party that uploaded the recipe? What about the various intermediary machines and bots involved in delivering and analysing the data?
The above is an actual scenario posed by U.S. Supreme Court Justice Clarence Thomas (who was recently in the news for accepting undisclosed luxury gifts) during the Section 230 hearing. Section 230 refers to a provision of the Communications Decency Act of 1996 in the United States; see my previous post on this topic. This example captures part of the complexity of online content regulation, and it raises questions about the roles and responsibilities of the different actors involved in delivering these services. As the Supreme Court considers potential reforms to Section 230, I wonder how much we are recognising the nuanced interplay between technology, law, and human behaviour.
I want to tie this topic of online content moderation and recommendation to an understanding of the technology, of human behaviour, and in particular of the historical context of technology. By exploring how historical context, or the absence of it, shapes our current perspective on emerging technological advancements such as generative AI, we can better understand their potential implications. It does not matter how accurate or inaccurate an AI engine is: you simply cannot sue it and hold it accountable. The people who create and use it, on the other hand, can be sued.
But first, let’s hear from the justice:
“If you’re interested in cooking, you don’t want [to see YouTube recommendations for] light jazz,” Thomas started. “Say you get interested in rice pilaf from Uzbekistan. You don’t want pilaf from some other place, say, Louisiana… Are we talking about the neutral application of an algorithm that works generically for pilaf and also works in a similar way for ISIS videos?”
Although it may seem like a leap from rice pilaf to ISIS videos, the recent Section 230 hearing held by the U.S. Supreme Court centred on the influence of such content on platforms, and the potentially dire consequences that can result from its dissemination. Specifically, the hearing examined who bears responsibility when platforms like YouTube recommend harmful content uploaded by a third party, content that may contribute to attacks or other harmful outcomes.
During the hearing and in subsequent commentary, it was widely noted that the provision was originally enacted in 1996, a time when the internet and the world of online content were vastly different from what they are today. As the online world has expanded and become more complex, new questions about regulation and responsibility have emerged. However, it remains unclear whether the Supreme Court and society at large have fully evolved their understanding of these issues in response to the changing technological landscape.
Throughout the hearing, there were indications of a lack of nuanced understanding of how algorithms and online platforms work, and of how they impact society. The commentaries and analyses (from both media and academia) fall into two categories: some commentators express optimism about the transformative potential of technology, while others adopt a more critical, even dystopian, perspective that has been dubbed "criti-hype" (see historian Lee Vinsel’s work). Both modes have their purposes, but neither offers much insight into the true nature, value, and impact of technology. Rather, both types of hype add fuel to the fire.
A side note: different approaches to interpreting the law can lead to varying legal outcomes. Two prominent ones are legal realism and legal formalism. Legal realism prioritises practical outcomes, while legal formalism emphasises strict adherence to legal principles. In cases involving internet content moderation and big tech, legal realism may be better suited to ensuring that the law is interpreted in a way that takes into account the impact on users and society as a whole. But the current U.S. legal system (at both the state and federal levels) is multi-tiered and byzantine enough that it fails to appreciate the fluid nature of modern technological development. That, however, is a story for another day.
One of the key arguments put forth by the plaintiffs (the family of Nohemi Gonzalez in Gonzalez v. Google LLC) centred on the idea that the text of the provision does not explicitly mention the concept of "recommendation". The relevant text of Section 230 reads as follows:
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
In arguing their case, the plaintiffs noted that while the provision refers to "interactive computer services", it makes no mention of recommendations and provides no clear legal standard for regulating them; they therefore asked the Court to narrow the protections given to the big tech companies. Google, on the other side, argued that companies are not responsible for what their algorithms promote: even though programmers employed by the company wrote the code, it was designed to recommend content based on a user’s activities, not to promote harmful or illegal content.
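To make that "neutral algorithm" framing concrete, here is a minimal, hypothetical sketch of such a recommender. Everything in it (the `recommend` function, the tags, the toy catalog) is invented for illustration, and it is emphatically not how YouTube’s actual system works. The point is simply that the scoring logic ranks items purely by similarity to a user’s watch history, with no awareness of what the content actually is:

```python
from collections import Counter

def recommend(user_history, catalog, top_n=3):
    """Rank unseen catalog items by tag overlap with the user's watch history."""
    # Build a profile from the tags of everything the user has watched.
    profile = Counter(tag for item in user_history for tag in item["tags"])
    seen = {item["id"] for item in user_history}
    # Score each unseen item by how strongly its tags match the profile.
    scored = [
        (sum(profile[tag] for tag in item["tags"]), item)
        for item in catalog
        if item["id"] not in seen
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for score, item in scored[:top_n] if score > 0]

catalog = [
    {"id": 1, "title": "Uzbek plov in a kazan",   "tags": ["cooking", "rice", "uzbek"]},
    {"id": 2, "title": "Louisiana rice pilaf",    "tags": ["cooking", "rice", "cajun"]},
    {"id": 3, "title": "Light jazz for studying", "tags": ["music", "jazz"]},
]
history = [catalog[0]]  # the user has watched the Uzbek plov video
print([item["title"] for item in recommend(history, catalog)])
# -> ['Louisiana rice pilaf']  (topically similar, regardless of what it is)
```

The same scoring that surfaces a Louisiana pilaf video for a plov watcher would surface any other topically similar content. That indifference to subject matter is exactly what Justice Thomas’s question and Google’s defence both turn on.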
The above arguments from the hearing highlight the extent to which the law and its interpretation have become outdated in the face of a rapidly evolving technological landscape. Additionally, comprehending the socio-historical elements of writing code, deploying it within data infrastructures, and actual usage (i.e., human behaviour) can be perplexing, because usage itself generates additional data and content that feed the algorithm. Despite our tendency to view code and programming as a monolithic, black-box enterprise that functions magically, all of these are parts of our socio-technical enterprise. In other words, the human factor will remain a potent force in new internet innovations and in any other technological change. Yes, internet platforms are different from cars and bridges, but all of these technologies are used by humans. Later in this discussion, I give examples of how other technological phenomena and critiques have similar socio-technical contexts; the current issue of online content and AI is not unique in this respect.
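That feedback loop is easy to see in the same hypothetical sketch (this snippet reuses the invented `recommend` function and `catalog` from above): each recommended video the user watches is appended to their history, which in turn reshapes the next round of recommendations.

```python
# Hypothetical continuation of the sketch above (reuses recommend and catalog).
# Each watched recommendation feeds back into the user's history, so the
# system's own output becomes part of its next input: a simple feedback loop.
history = [catalog[0]]  # the user starts with the Uzbek plov video
for step in range(3):
    suggestions = recommend(history, catalog, top_n=1)
    if not suggestions:  # our three-item demo catalog runs out quickly
        break
    print(f"step {step}: recommended {suggestions[0]['title']!r}")
    history.append(suggestions[0])  # watching it generates new data for the algorithm
```

On a real platform the catalog is effectively endless, so the loop never runs out, which is part of why code and usage are so hard to disentangle when assigning responsibility.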
Back to Section 230. Here is a summary of the two court cases:
On Tuesday [Feb 21, 2023], the court’s nine justices heard arguments in the first case, Gonzalez v Google. The family of Nohemi Gonzalez, a US citizen who was killed in an Isis attack in Paris in 2015, claim that YouTube violated the federal Anti-Terrorism Act by recommending videos featuring terrorist groups, and thereby helped cause Gonzalez’s death. On Wednesday, the court heard arguments in the second case, which also involves a terrorism-related death: in that case, the family of Nawras Alassaf, who was killed in a terrorist attack in 2017, claim that Twitter, Facebook, and YouTube recommend content related to terrorism, and thus contributed to his death. After a lower court ruled that the companies could be liable, Twitter asked the Supreme Court to say whether Section 230 applies to it.
As of April 2023, the Supreme Court has yet to reach a decision in these cases, and the broader legal process is expected to take several years. One argument presented in the Gonzalez case is that although platforms may not be responsible for the content itself, they are responsible for creating the algorithms that recommend it. The relationship between the content and the code is therefore significant, and this connection needs to be addressed in the interpretation of Section 230.
It is uncertain whether Congress will take action to update Section 230. Depending on how companies influence lawmakers, the law could end up either shielding platforms from all liability or restricting content. Although media attention has focused on the big companies, it is essential to remember that the law applies to all content and users, from major platforms to the local online groups in our Nextdoor app. Platforms need to prioritise user safety instead of focusing on selling ads and grabbing attention. Strong regulation and law will help, but the legal process is slow; getting the law right is crucial and should not take seven years to reach a conclusion. Unfortunately, past performance suggests that the Supreme Court will avoid answering the question, and Congress will do nothing to fill the void. By the time the issue is addressed, the world may have moved on to another topic, and the Section 230 debate will have become obsolete.
Another good take on the Section 230 cases is this interview with Ethan Zuckerman, in which he suggests that the Court did in fact grasp the nuances of the situation:
Rath: If the court, say, rules against the tech companies here and say, "You know what, you're not protected by this," does that mean that the big companies have to staff up with a lot more content moderators and have a lot more legal bills, and the smaller sites maybe go out of business?
Zuckerman: Well, Google's counsel made basically that argument. Google basically said, "Look, without Section 230, the internet is going to go in two different directions. You're going to have some platforms that are essentially a toxic swamp. They're going to be wholly unmoderated because the danger of moderation will end up being so high. If being involved with moderating and recommending means that you're a publisher, some people will run platforms that have absolutely no moderation. There will also be a set of platforms that are likely to be very heavily edited. Things will be as carefully chosen as stories in a magazine or a newspaper because there will be the possibility of lawsuits associated with it."
Let me now briefly bring up a few historical contexts. Our present understanding and use of technology are shaped by our past experiences with it. New technologies do not emerge out of thin air but rather build upon and interact with existing technologies and social contexts. Unfortunately, as a society, we often suffer from collective amnesia, forgetting the nuanced history, sociology, and economics of technology.
While it may be unrealistic to expect everyone to have a thorough understanding of the complexities of technology, it is concerning how these nuances are reflected in legal proceedings, particularly in cases that reach the U.S. Supreme Court. Often, large companies manipulate legal proceedings to serve their interests, shaping outcomes that will influence future cases.
Many of the claims made about the revolutionary potential of current technologies can be understood differently if we look at technological and societal changes since the mid-19th century. For example, elevators are now considered among the safest components of a building. However, when elevators were first introduced, people were terrified. Around 1900, New York had elevators without an operator (think of them as the autonomous vehicles of that era), and it took a while for people to accept them. Here’s an excerpt from an interview with Lee Gray, who has written extensively on the history of the elevator:
GRAY: People walked in and looked and walked right back out. They would quickly step back out and try to find someone to say where's the operator?
HENN: But then, in 1945, elevator operators in New York went on strike. New York City ground to a halt. The strike cost New York a hundred million dollars in lost taxes. It prevented one and a half million office workers from getting to work. Building owners demanded a change. And the elevator industry decided they had to convince people to rethink what an elevator was.
The incorporation of voice and music into elevators was intended to offer comfort to riders, illustrating how technological advancements interact with and shape human behaviour. Concerns surrounding the use of AI for content generation, moderation, and recommendation bear a resemblance to the initial apprehension toward operatorless elevators. Though Section 230 does not explicitly address this issue, reactions often mirror those of the past, with panic and hype dominating the conversation. Nevertheless, history teaches us that lasting progress and responsible use of technology occur gradually, through regulation and steady adoption, rather than through sudden trends or sensationalism.

The bicycle, similarly, caused a great deal of controversy in its early years, with many questioning its utility and safety. In fact, late-19th-century doctors warned that bicycle riding could lead to a terrifying medical condition (for women) called bicycle face: “usually flushed, but sometimes pale, often with lips more or less drawn, and the beginning of dark shadows under the eyes, and always with an expression of weariness.”
The histories of the elevator and the bicycle are emblematic of the social and technological changes of the modern era. Both technologies transformed human mobility and expanded social and economic opportunities, but their introduction also caused uproar and concern among the public, with many questioning the safety and practicality of these new modes of transportation. Despite this, codes, laws, and best practices were established to regulate their usage. While some companies may exploit the legal system to maximise profits, examining past events could aid Section 230 reform efforts. Such a measured approach would prove more effective in creating positive change around AI usage, and around online content in general.
And here are a few rice pilaf recipes that I tried (from Uzbekistan!):
https://www.alyonascooking.com/how-to-make-uzbek-plov-in-kazan/