America's left and right can find common ground in at least one area: their hatred of big tech. Ironically, both sides often use the very companies they're railing against as a platform to complain about them.

The spread of disinformation during the 2016 and 2020 elections and around COVID, along with the removal of Donald Trump from Facebook and Twitter, has prompted politicians on both sides of the aisle to call for tighter regulation of social media, albeit in different ways and for different reasons. This has led many to ask, "Is it the government's job to regulate what's said online, and, if so, what's the best way to go about it?"

To help answer that question, I spoke with Matthew Feeney, Director of the Cato Institute's Project on Emerging Technologies, on this episode of You Don't Have to Yell. You can listen to the full episode via the player below, on Apple Podcasts, Spotify, or wherever else your bad self gets your podcasts.

Show Notes

Section 230, the law that shields tech companies from liability for what users post on their platforms, has found its way into the crosshairs of conservative activists, who claim prominent social media platforms such as Facebook, Twitter, and YouTube disproportionately remove right-leaning content for violating site rules.

Those on the left have criticized social media companies for not doing enough to remove objectionable content from their platforms. While they don't call out Section 230 by name, the types of regulation they might propose would certainly change the nature of the law.

Feeney outlines a few flaws with the rush to regulation in our conversation:

  • The companies being targeted compete with each other - While Twitter, Facebook, and YouTube are often discussed in monolithic terms, each competes with the others for users' time. If one network were to deliberately censor political content that a broad swath of the public wants to consume, its competitors would gladly serve that audience.
  • Removing or amending Section 230 could make large tech companies stronger - Without 230, tech companies would be legally liable for anything posted on their platforms, creating onerous legal and administrative costs. This would make it far harder for new competitors to enter the market and would only strengthen the position of the largest tech companies.
  • It doesn't address the root causes of disinformation and extremism - Everyone with an internet connection has the same access to extremist content and disinformation, yet only a portion of the population seems swayed by it. In Feeney's view, the better question is: what creates the conditions that make people susceptible to disinformation and extremism in the first place?

One last note - aside from the issue of tech regulation, Feeney expressed concern that we haven't found a way for people to move beyond poor past decisions that have been preserved on the internet. He cited the "Unite the Right" rally as an example: many young men who participated have since found themselves unemployable.

While we don't need to tolerate racist ideology, society needs a mechanism for helping people evolve ideologically so that extremists can reenter mainstream society. The alternative is to deepen the alienation that drives people to extremism in the first place.

Additional Resources

You can find more of Feeney's work here:

He also recommends Jon Ronson's book So You've Been Publicly Shamed, which critiques the trend of ordinary people becoming targets of public outrage when their mistakes are aired on social media. You can find it on Amazon here: