Who’s Moderating Who?

Former President Donald Trump has been banned from, or granted limited access to, numerous social media platforms. (TMZ)

Caleb Robbins

Earlier this month, Facebook, Twitter, Google-owned YouTube, and Amazon-owned Twitch banned former U.S. President Donald Trump either permanently or indefinitely following his comments on the U.S. Capitol riots.

Shortly after, the conservative social media app Parler was blamed for inadequate moderation that helped create the mess culminating in the riots. As a result, Apple and Google removed Parler from their app stores, and Amazon Web Services cut off its hosting, causing the app to temporarily shut down.

These moves brought controversy and renewed discussions of free speech. While many on the left lauded the actions taken, some are questioning Big Tech's influence both leading up to the riots and afterward.

Facebook, Google, Amazon, Twitter, and Apple have all faced attacks from the public and government officials alike. These often took the form of antitrust hearings or angry comments in the Twitterverse condemning their monopolistic practices and outsized influence on online platforms. 

These gripes are not unfounded. When you uncover the facts, it’s at once amazing and frightening what these companies have achieved.

All five companies have a combined market value of over $5 trillion. Facebook, Google, and Amazon account for nearly 70% of U.S. digital ad revenue. Facebook has nearly 6 billion combined users across its platforms. Apple has 1.5 billion active devices and is the first U.S. company to reach a $2 trillion market cap. Twitter is one of the preferred online platforms for politics and politicians, as former President Trump demonstrated.

Most of these companies have also indirectly contributed to the rise of hate groups. About six years ago, when the terrorist group ISIS was at its height, jihadists regularly used Facebook, YouTube, and Twitter to recruit and spread propaganda while evading detection. Amazon sold the group's propaganda magazine, Dabiq, before eventually taking it down.

During the 2016 presidential election, Facebook refused to delete or label the many fake news stories that saturated users’ news feeds, and they allowed thousands of Russian-backed ads on their platform that many believed influenced the election and gave Trump the presidency.

Meanwhile, Twitter has continually refused to take a harder stance against the white nationalists who occupy its platform. It was only a few years ago that white nationalists, some of whom used Twitter, travelled to Charlottesville and onto the University of Virginia campus to participate in a white supremacist rally that turned deadly, and Twitter has done little to combat this threat.

However, despite Big Tech’s influence, holding them liable isn’t so simple. According to UMKC Professor of Law Paul D. Callister, these companies are protected by law.

“As a matter of law, tech companies are protected from liability under Section 230 of the Communications Decency Act,” Callister said.

Callister said that the protections brought by this law allowed tech companies to ban former President Trump and cut Parler from their platforms.

“On the Internet, when law fails,” Callister said, “other forces such as markets, including operation of private contracts, technology, and even norms can be brought about to exert pressure on offending conduct.”

While Callister didn’t want to speculate on tech companies’ involvement, he does believe that online media can facilitate and coordinate violent action, citing the events at the Capitol and the summer protests that led to riots as examples.

Callister said that while Section 230 of the CDA has reduced the accountability of tech companies, amending it could threaten free speech.

“If new regulatory law is introduced, we need to be careful. In law, we have a saying that bad or egregious facts make for bad law,” Callister said. “In other words, don’t legislate based on the most extreme situations, such as the storming of the Capitol, which really requires an analysis of why security failed before legislating.”

Callister also said that while he agreed with Big Tech companies dropping Trump from their platforms, he was concerned with how they treated Parler.

“There was extreme content on Parler, but such content has been found in the past on the standard Big Tech platforms,” Callister said. “There are ways for Parler to deal with it. The markets and Parler’s technological architecture should have allowed Parler to reexamine its policies and to rapidly set up somewhere else, but they couldn’t because of how their technological architecture is set up.”

UMKC Professor Ye Wang, who has studied interactivity and user engagement on websites and social media, discussed why it’s difficult to blame tech companies for the actions of terrorists and protesters.

“If we look at cultivation theories, and other media effect theories, there is just so little we know about media’s influence on people’s beliefs, attitudes and behaviors,” Wang said. “There is even little research and information on social media’s influence. If we hold tech companies responsible, there has to be evidence that they influenced people’s beliefs, attitudes and behaviors.”

Due to the reasons and laws specified above, the road to regulating Big Tech is uncertain. The federal government and multiple states have filed lawsuits against Facebook and Google, accusing them of monopolistic practices and antitrust violations, but that will be a long and arduous battle.

What isn’t uncertain is tech companies’ ability to regulate what is displayed on their platforms and the guidelines they can create to justify their decisions. Their banning of former President Trump didn’t require a vote. Their banning of Parler didn’t require a courtroom. They also have yet to fully answer for their mistakes.

Despite the power to effect immediate, necessary change, Big Tech is not being held accountable for its actions. These companies are able to silence voices in the government, but the government has little power to restrain them.

All of which raises the question: “Who’s moderating who?”

[email protected]