By Chris Ricchetti | 13 February 2021
Overwhelmed by Technology
Defining the Problem with Big Tech
Addressing the Three Complaints
• Political Discrimination
• Criminality
• Misinformation
➝ Multilateral Working Groups
➝ Fact Checking
More Accountability
• Don’t Wreck the Internet
• Give Humans a Fighting Chance
• Toward an Algorithmic Fairness Doctrine
• Carpe Diem
End Notes
➢ To monitor proposed and pending legislation related to Section 230, check the Disruptive Competition Project.
➢ For research into the sources and tactics of organized misinformation campaigns worldwide, check the Project on Computational Propaganda at Oxford.
➢ For the latest from the front lines of technology activism, follow the Center for Humane Technology.
This discussion builds upon a previous article that examined the enactment of Section 230 of Title 47 of the United States Code, its consequences in the years since, and pending proposals for its amendment or repeal. That 1996 statute provides internet companies with broad immunity from civil liability arising from content that users post to their platforms.
Overwhelmed by Technology↑
We live in a world where sophisticated algorithms process gargantuan amounts of data to drive an unending stream of custom-curated content into the mind of each individual technology user. This hyper-automated bias confirmation complex plays upon the neural plasticity of the human brain, continuously reinforcing our tribal beliefs. The colossal rift between the “Two Americas” is being hardwired into our brains at warp speed.
As Tristan Harris, co-founder of the Center for Humane Technology, has observed, futurists have long anticipated with trepidation some distant point in time when technology would become so advanced that it would overwhelm human strength (i.e., perform all tasks—from labor to critical thinking—better than humans), effectively rendering human beings economically “obsolete.” What no one saw coming was the earlier point—according to Harris, we are already there—when technology would have the capacity to overwhelm human weakness.
User “engagement” drives revenue for social media platforms. The more time users spend scrolling their feeds and the more interactions (e.g., likes, shares, comments, etc.) they transact, the more data platforms can collect and the more paid advertising they can display. That is how the business model works. Consequently, content delivery on social media is optimized for user engagement—for “hooking” users and holding their attention, without regard for any of the detrimental effects on them or on the societies in which they live. Playing upon our fear, outrage, and other primordial instincts powerfully and reliably sustains user engagement (that and cat videos).
Evan Williams, a co-founder of both Twitter and Blogger and currently the CEO of Medium, explains that content-curation algorithms generally do not distinguish between looking at content and preferring it. He observes that “everyone” feels compelled to look at car crashes as they pass. He goes on to explain, “The internet interprets behavior like this to mean everyone is asking for car crashes, so it tries to supply them.” This leads to an all-car-crashes-all-the-time environment and incentivizes many users and content creators to post content that is ever more extreme. Tristan Harris calls this phenomenon the “race to the bottom of the brain stem.”
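To make the dynamic concrete, consider a minimal sketch of engagement-driven ranking in the spirit of Williams’ description. This is an illustration only, not any platform’s actual system; the signal names and weights are hypothetical. Notice that nothing in the scoring objective distinguishes attention from preference: the “car crash” wins on attention alone.

```python
# Hypothetical sketch of engagement-optimized feed ranking.
# It illustrates Williams' point: the objective scores attention
# captured (clicks, dwell time, shares, comments), so content users
# merely gawk at is treated as content they want more of.
# All signal names and weights are illustrative, not any real system.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int            # times users opened the post
    dwell_seconds: float   # average seconds spent looking at it
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    """Score attention captured; preference never enters the objective."""
    return (1.0 * post.clicks
            + 0.5 * post.dwell_seconds
            + 3.0 * post.shares
            + 2.0 * post.comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Put the most attention-grabbing content first, whatever its effects."""
    return sorted(posts, key=engagement_score, reverse=True)
```

A ranker like this never asks whether users endorse what holds their gaze; it only measures that it holds it.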
Through perpetual arousal of our most destructive instincts (i.e., fear and anger) the algorithmic technology deployed on social media platforms is overwhelming human weakness on a massive scale. Our country is being ripped apart by the polarizing effects of tech-enabled echo chambers. This is an imminent and ongoing threat to national security, indeed to our continued existence as a democratic republic. The United States of America as we know it cannot and will not survive if we allow this to continue.
What actions should Americans and their government take to save ourselves from this dire situation?
Defining the Problem with Big Tech↑
Not surprisingly, there is no consensus around what specific problem(s) need solving. The current broad-based outcry against “Big Tech” is a confusing cacophony of at least three categories of frustrations: (1) complaints about lies, misinformation, and conspiracy theories, (2) complaints about incitement to political violence and other illegal activity, (3) complaints about political discrimination, censorship, and the Freedom of Speech.
Conservatives complain that Big Tech censors their voices (i.e., over-moderates conservative content). To punish this perceived injustice, they say they want social media companies to be legally responsible[1] for anything and everything that billions of users post on their platforms—which, obviously, would incentivize a much greater degree of content moderation (censorship), not less.
One must assume that conservative lawmakers well know that removing Section 230 immunity from social media platforms would not fix their censorship complaint and would, in fact, make the problem worse. They muddy the waters further by ranting about “violations” of their First Amendment “rights”—an argument that rests upon a (presumably intentional) distortion of the right to Freedom of Speech. They know that, to the extent the First Amendment can properly be applied to content moderation at all, it protects platforms, not users. They know that the whole of Corporate America relies on immunity from liability for third-party content.
Confronting the very real existential crisis that we face is not made easier when politicians and pundits knowingly cast their complaints within an improper legal framework and pretend to advocate for solutions that are both legally and politically untenable.
Liberals, for their part, are concerned mostly with misinformation, violence, and other criminal activity. They see in the proliferation of untruth a dangerous, cultural revolt against science, expertise, and empirical facts, and they argue that without a shared set of basic facts, there can be no functioning democracy. While there has always been a certain amount of misinformation in circulation, technology has fundamentally changed the media landscape. Propaganda may be as old as time, but the reach and repetitiveness of today’s tech-enabled propaganda machines are unprecedented in human history.
The use of technology to bring communities of terrorists together, to link predators with potential victims, and to facilitate every kind of crime and corruption is also novel. Public officials across the political spectrum accuse social media and other internet services of willful blindness and negligence in under-moderating unconscionable activity on their platforms.
One must assume that left-leaning lawmakers also realize that the full repeal of Section 230 would not be a sufficient remedy for their complaints and would precipitate a multitude of unintended consequences.
Neither party has yet proposed a targeted, workable strategy to correct Big Tech’s complicity in the misinformation crisis. Both parties seem to agree that social media platforms should not function as arbiters of truth. Platforms have never wanted this responsibility anyway, though some have recently indicated their willingness to take it on, provided that Congress gives them “cover” in the form of clear and detailed guidance.
Addressing the Three Complaints↑
Let us address the three categories of complaints against Big Tech (see above) in reverse order.
Political Discrimination↑
With respect to Category 3—political discrimination, censorship, and the Freedom of Speech—this set of complaints is the source of much of the confusion. Under current law, social media platforms are private spaces, and there is no “right” to the Freedom of Speech in any private space. A compelling case could be made that the largest social media platforms have become de facto public spaces where discrimination should be prohibited by law, as it is in other public accommodations. However, not all forms of discrimination are illegal, even in public spaces. Fixing the perceived problem of political discrimination on the part of social media platforms would also require that political views be added to the list of Federally protected traits.[2] The types of discrimination that are currently illegal under Federal law are limited to discrimination on the basis of race, color, religion, sex, or national origin.[3]
Adding political views to the list of Federally protected traits and expanding the statutory definition of public accommodations to include the largest social media platforms would be constructive improvements from a fairness perspective and would focus moderation efforts on content that is demonstrably false or misleading and content that facilitates (potential or actual) criminal activity—Categories 1 and 2. ✔ Done.
Criminality↑
With respect to Category 2—incitement to political violence and other illegal activity—this set of complaints could be addressed, in part, by enacting exceptions to Section 230 immunity for any of the following: (i) willful blindness to criminality, (ii) negligence in moderating criminality, and (iii) intentional criminality.[4] Of course, existing criminal statutes and case law already prohibit criminal activity online. However, overly broad interpretations of Section 230 immunity frequently impede the robust prosecution of criminal offenses.[5] This clear injustice could be fixed simply by adding clarifying language to Section 230(c)(1) to the effect that the broad immunity granted thereunder shall not impede enforcement of any criminal law or prohibit Federal and state governments or injured parties from bringing civil actions in connection with underlying criminal conduct.
Past efforts to correct the Category 2 problem have been piecemeal, focusing on specific types of heinous criminality, such as sex trafficking or the sexual exploitation of children. Making it clear that Section 230 provides no protection whatsoever for any criminal conduct should eliminate the need to specify every conceivable type of criminality in the amended language of the statute.
In addition to clarifying Section 230 as it relates to criminal conduct (including negligence in moderating and willful blindness to criminal conduct), new legislation is needed to establish mandatory minimum standards and practices for moderating criminality, reporting possible criminal activity, cooperating with law enforcement, responding to court orders in a timely manner, etc. Content distributors that demonstrate faithful compliance should be entitled to “safe harbor” protection from civil liability in connection with what should then be low incidences of crime on their platforms, except when mens rea can be established. ✔ Done.
Misinformation↑
Category 1—lies, misinformation, and conspiracy theories—is the most urgent of the three, as it constitutes a clear and present existential threat to democracy. Addressing the problem of misinformation will be extraordinarily difficult, given that the Two Americas live in vastly different and largely incompatible “realities.” But if we are to believe Tristan Harris’ prophetic warning that technology has already pushed us dangerously close to the point he calls “checkmate humanity,” then we have no choice but to develop systems to curtail the spread of misinformation that will be both constitutional and widely accepted across the Two Americas—even at the cost of some of our liberty,[6] and even at the cost of an internet that may become somewhat less “fun” and less convenient.
Multilateral Working Groups↑
As a starting point, multiple interdisciplinary working groups should be convened to study the crisis of misinformation[7] and to develop systems, procedures, technologies, educational programs, legislation, and other solutions to combat it. It is essential that the solutions ultimately implemented should not be perceived as emanating from government. However, a presidential and/or congressional commission could lead by establishing a timeline for brainstorming, experimentation, and evaluation of potential solutions, as well as target dates for introducing legislation based on the most promising evidence-based ideas.[8]
Working groups could be convened by Federal and state governments, nonprofit organizations, science and academia, Big Tech, smaller tech, grassroots activists, and other stakeholders. Presumably, many of the solutions arising from this process would not require legislative action and could be implemented by the private and nonprofit sectors as they are developed.[9]
Fact Checking↑
Although “truth arbitration” by itself will not be sufficient to bridge the canyon separating the Two Americas, any honest examination of the misinformation crisis should lead to the conclusion that some degree of empirical fact checking is urgently needed.
In our pluralistic, democratic society, the range of content subjected to mandatory fact checking would have to be narrow and limited to that which can be transparently and empirically verified, then confirmed from a multitude of diverse perspectives. As philosophers have observed for thousands of years, objectivity is elusive. On the other hand, facts are not relative.
As a society, we must reach a consensus about where to draw the line that separates alternative points of view from “alternative facts.” And we must agree that empirical verification of basic facts is essential for democracy, even when, inevitably, not everyone will agree with every determination made by fact checking institutions.[10]
More Accountability↑
As discussed at length in a previous article, it is fashionable now for lawmakers to threaten Big Tech with the full repeal of Section 230 (or major modifications to it), as if using this blunt instrument would somehow solve all three categories of complaints against social media platforms once and for all. For all the reasons considered there, this is a terrible idea that would have far reaching, undesirable consequences.
Don’t Wreck the Internet↑
General protection from civil liability arising from third-party content is a fundamental necessity for doing business on the internet, not a special privilege. Section 230 immunity must be the default for internet platforms that host user-generated content[11]—unless they knowingly facilitate or are negligent in moderating clear criminality. Changes to the law and new regulations should target specific outcomes and be tied to a comprehensive strategy for dealing with the misinformation crisis and the Two-Americas divide.
Give Humans a Fighting Chance↑
Foremost in every effort to fight the misinformation crisis should be the consideration of what to do about Big Tech’s reckless use of algorithms to exploit fear and outrage, for the ultimate purpose of monetizing users. This is an urgent public health[12] and national security threat. The use of algorithmic technology to amplify or to limit the reach of user-generated and third-party content should be categorically prohibited as a strict-liability tort and should disqualify offenders from Section 230 immunity across all social media platforms they own or operate. Severe mandatory civil penalties and regulatory sanctions should also be imposed. Clearly, this would be a new constraint on First Amendment rights on which the Supreme Court would ultimately have to rule. In the face of imploding democracy, this constraint on liberty is not only justified but critical.
The use of algorithms for most purposes should be acceptable. Deploying user data and algorithms for the purpose of serving micro-targeted advertising content is a defensible practice (that should be monitored for potential regulation as the technology continues to evolve).[13] The utilization of algorithms as moderation tools (e.g., auto-removal of content containing the “N” word) is another constructive use. And there are, of course, many others.
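For contrast with the prohibited amplification case, here is a minimal sketch of the moderation use case described above, assuming a simple word-level blocklist. The placeholder terms and the matching rule are illustrative assumptions, not an actual platform filter.

```python
# Minimal sketch of algorithmic moderation via a blocklist, the kind
# of constructive use described above (e.g., auto-removing slurs).
# The terms and the matching rule are placeholders for illustration.

import re

BLOCKLIST = {"slur1", "slur2"}  # stand-ins; a real list would be curated

def violates_blocklist(text: str) -> bool:
    """Return True if any blocklisted term appears as a whole word."""
    words = re.findall(r"[\w']+", text.lower())
    return any(word in BLOCKLIST for word in words)

def auto_moderate(posts: list[str]) -> list[str]:
    """Remove posts that trip the blocklist; pass everything else through."""
    return [post for post in posts if not violates_blocklist(post)]
```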
As noted above, the First Amendment does not confer upon users any “right” to the Freedom of Speech on private social media platforms. Nonetheless, the freedom to expose oneself to a broad range of ideas is one of the key benefits—for individuals and for democratic society—of our First Amendment rights. (Of course, with or without technology, democracy depends on citizens having the will and making the effort to exercise this freedom.) The abuse of content-curation algorithms is antithetical to this core American value in that the technology relentlessly drives a customized set of preferred ideas into each user’s brain, sabotaging critical thought and making us all less free. All individual rights are subject to limits that serve the public good. The greater good that will result from prohibiting destructive uses of algorithms far outweighs the cost to tech companies of limiting their right to deploy them.
Toward an Algorithmic “Fairness Doctrine”↑
Instead of using algorithms to turn up the volume in the echo chamber, another partial solution to the misinformation crisis might be to require platforms to deploy algorithms that would present users with content that represents a different point of view on the content they are consuming.
For example, immediately following the 2020 presidential election, several analyses[14] that purported to demonstrate the “statistical impossibility” of a Biden win were widely shared and swallowed whole by millions. Rebuttal analyses quickly arose that were circulated in different echo chambers. It would have been good for the citizens of both Americas to have been presented with convenient links to opposing viewpoints as they were consuming content that reflected their default perspectives.
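As a sketch of how such an algorithm might surface counterpoints, consider the following. The topic labels, the URLs, and the very idea of a curated counterpoint index are assumptions for illustration, not a description of any existing service.

```python
# Hypothetical sketch of an "alternative viewpoint algorithm": when a
# user opens content on a contested topic, bundle it with links to
# vetted opposing analyses. Topics, URLs, and the curated index are
# illustrative assumptions.

COUNTERPOINTS: dict[str, list[str]] = {
    "election-statistics": ["https://example.org/benford-rebuttal"],
    "mask-efficacy": ["https://example.org/mask-evidence-review"],
}

def with_alternative_views(topic: str, content_url: str) -> dict:
    """Pair requested content with opposing-viewpoint links, so the
    reader sees that a rebuttal exists before consuming or sharing."""
    return {
        "content": content_url,
        "alternative_views": COUNTERPOINTS.get(topic, []),
    }

# A user opening a "statistical impossibility" analysis would also be
# shown a convenient link to a rebuttal.
bundle = with_alternative_views(
    "election-statistics", "https://example.org/impossibility-claim")
```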
Such “alternative viewpoint algorithms” might also have helped to lessen the intensity of the debate over mask wearing. There is a legitimate, ongoing inquiry in academic medicine into the efficacy of surgical masks in the operating theatre. The science is not as conclusive as their nearly universal deployment in surgical facilities might suggest. Nonetheless, most guardians of public health, in full awareness of the lingering uncertainty about mask efficacy in the surgical context, have rendered their professional judgment in favor of public mask wearing as a means of reducing the rate of transmission of the virus that causes COVID-19. And research during the pandemic seems to indicate that masks do provide imperfect but substantial protection. Given that most of us are not scientists, our consumption of scientific information calls for a lot more humility, on all sides.
It is dead wrong (pun intended) to assert that masks “don’t work.”[15] It is also not correct, though far less dangerous, to assert that the scientific understanding of mask efficacy is complete and conclusive. Both Americas would have benefited from exposure to a fuller picture. This might have helped some accept the judgment of experts and ratify their participation in the social contract of mask wearing, despite their personal reservations.
Carpe Diem↑
The urgent crisis of misinformation demands a comprehensive, all-of-society response. The ideas examined here could lead to significant improvements. But many more perspectives and resources, along with as much national soul-searching and determination as we can muster, are needed now. Our deep divisions are being hardwired into our brains. And we know that the road we are on now ends at “checkmate humanity.”
End Notes↑
1. i.e., by removing the immunity from civil liability provided under Section 230. ↑
2. Some countries already prohibit discrimination on the basis of politics. ↑
3. Legislation proposed by former Attorney General William Barr in September 2020 attempts to “fix” the perceived problem of political discrimination by establishing good faith standards applicable to content moderation. Barr would require platforms to disclose their content moderation policies in detail, apply them consistently, and avoid any appearance of political discrimination. Otherwise, they could be sued by users, whose claims of harm would be legally tenuous, likely rendering Barr’s good faith standards an exercise in futility. ↑
4. Several bills introduced in the 116th and 117th Congresses, as well as Barr’s proposed legislation (see footnote 3), would eliminate Section 230 immunity for platforms that knowingly facilitate criminal content or activity. ↑
5. While Section 230(e)(1) clearly states that the statute was intended to have no effect on criminal law, some courts misapply Section 230(c)(1) to shield guilty parties from criminal prosecution. More frequently, Section 230(c)(1) bars law enforcement and injured parties from civil actions in connection with criminal activity. ↑
6. Most Americans were willing to accept significant limits on their civil liberties following the tragic events of September 11, 2001, ostensibly in service of the public good. ↑
7. A great deal of new research into the psychology and social psychology of misinformation has been conducted since 2016, augmenting the already considerable body of knowledge. The sources and tactics of organized misinformation campaigns are another active area of research. ↑
8. The constitutionality of regulating platforms’ moderation practices is unclear—and effectively unknowable until such time as relevant litigation provides the Supreme Court with an opportunity to decide this. One should not assume that the publisher-distributor distinction is a strict dualism. It is conceivable the Supreme Court would decide that, for purposes of content moderation authorized under Section 230(c)(2), platforms are publishers (of a special kind, given their immunity) and, therefore, beyond the reach of government regulation; but that, for all other purposes, platforms are mere distributors, pursuant to Section 230(c)(1). Or the Court could decide that platforms are distributors for all purposes and, therefore, regulating their content moderation practices would not be a violation of their First Amendment rights. Instead, by establishing content moderation standards to be operationalized by platforms, the government would likely be infringing the First Amendment rights of platform users (i.e., the actual publishers of user-generated content). ↑
9. An independent oversight board established by Facebook in May 2020 is already offering to advise other social media platforms regarding their content moderation practices. ↑
10. The Federal government might consider establishing the framework for a new type of nonprofit organization—call them “independent fact checking institutions”—with a governance structure that includes robust checks and balances designed to minimize conflicts of interest and the risk of corruption. Such entities could be required by law to be fully transparent, with every financial and operational detail made public in real time. The Federal government could incentivize the nonprofit sector to create these organizations and require social media platforms to “subscribe” to one or more of them. The fact checking entities would have the power to moderate all user-generated and third-party content published on a subscribing platform. Perhaps fact checking bodies could be crowd funded, similar to public radio and television, except with contribution limits. ↑
11. Section 230 protects commerce and innovation across the entire internet, not just Big Tech. According to the Internet Association, the parties that have relied on Section 230 as a legal defense include “internet service providers and website hosts, newspapers, universities, libraries, search engines, employers, bloggers, website moderators and listserv owners, marketplaces, app stores, spam protection and anti-fraud tools, domain name registrars, and social media companies.” ↑
12. The misuse of algorithms to maximize user “engagement” (i.e., addiction) is not unlike Big Tobacco’s use of additives to enhance addiction and mask the negative effects of cigarettes. ↑
13. Tristan Harris makes a compelling case that the use of personal data and algorithms to serve advertising content is similarly abusive in its power to undermine independent thought. He argues that the power dynamic between humans and algorithms is so far out of balance that the interaction cannot be fair. Many others agree. ↑
14. Examples include an analysis by Charles J. Cicchetti, Ph.D. comparing Clinton’s (2016) and Biden’s (2020) results in key states, a claim that the election results violate Benford’s Law, an argument based on Biden’s loss of all but one of the traditional “bellwether counties,” etc. ↑
15. Anti-mask arguments based on the extremely small size of the virus relative to the distance between mask fibers are incomplete and inadequate. And there are straightforward explanations as to why the guidance from public health officials regarding mask wearing has changed over time. ↑