Lawmakers are set to grill the chief executives of Facebook, Google and Twitter on Thursday over their companies’ role in the proliferation of disinformation and misinformation online.
The deadly riot on Jan. 6 at the U.S. Capitol — and the apparent role social media played in the lead-up to the event — looms large over the virtual hearing convened by two House subcommittees of the Committee on Energy and Commerce.
The decision by Facebook and Twitter to ban former President Donald Trump in the wake of the mob attack is also likely to come up as lawmakers put the Big Tech CEOs in the hot seat.
The hearing, titled “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation,” kicks off at noon ET.
In prepared testimony, Facebook CEO Mark Zuckerberg opens by expressing his “deepest condolences to the families of the Capitol police officers who lost their lives in the wake of January 6 and my appreciation to the many officers who put themselves at risk to protect you.”
Zuckerberg goes on to defend Facebook's policies on disinformation and misinformation related to both the COVID-19 pandemic and the election.
Finally, Zuckerberg closes his remarks by calling for "updated Internet regulation to set the rules of the road," and specifically for reform of Section 230 of the Communications Decency Act.
“Over the past quarter-century, Section 230 has created the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online, and for the United States to become a global leader in innovation,” Zuckerberg stated. “The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically.”
Section 230 has made headlines in recent months as many people — including Trump — have called for it to be overhauled.
The law essentially shields internet companies from legal liability for the content users post on their sites. For example, if someone posts something libelous about another person on Facebook, Section 230 means Facebook can't be sued for it the way a news organization could be.
Zuckerberg argued that reform to Section 230 would mean “when platforms remove harmful content, they are doing so fairly and transparently.”
“Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it,” Zuckerberg said.
Critics of such changes, including digital rights advocacy groups, have argued that Section 230 protects free expression in the digital age.
Twitter CEO Jack Dorsey did not touch on Section 230 reform in his prepared testimony, instead delving into the importance of social media companies building trust with their users.
“Quite simply, a trust deficit has been building over the last several years, and it has created uncertainty — here in the United States and globally,” Dorsey stated. “That deficit does not just impact the companies sitting at the table today but exists across the information ecosystem and, indeed, across many of our institutions.”
Dorsey emphasized the importance of transparency — including around Twitter's algorithms — in building trust, and discussed the company's efforts to make its decisions as transparent as possible.
The Twitter CEO also spoke of efforts Twitter is investing in to address misinformation on its platform, including Birdwatch and Bluesky. Birdwatch, launched in beta in January, uses a "community-based approach to misinformation," Dorsey said. Essentially, it allows users to identify misinformation in Tweets and write notes that provide context.
Bluesky, another anti-misinformation project funded by Twitter, has the goal of developing open and decentralized standards for social media.
“Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individuals greater choice,” Dorsey said.
Dorsey has previously tweeted about the Bluesky initiative in a wide-ranging thread defending the company’s decision to ban Trump.
Finally, in his own testimony, Google CEO Sundar Pichai outlined Google's responses to the events of Jan. 6, including efforts to "raise up authoritative news sources across our products," remove content that could incite violence on YouTube, and remove apps that violated policies from the Play Store. He also touched on efforts to maintain election integrity and combat pandemic-related misinformation.
The Google CEO went on to defend Section 230, saying it ensures that "consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy."
He argued that Google’s “ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.”
“Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability,” Pichai wrote. “We are, however, concerned that many recent proposals to change Section 230 — including calls to repeal it altogether — would not serve that objective well.”
Pichai continued: “In fact, they would have unintended consequences — harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.”