The European Union created the Digital Services Act to force digital platforms like Google, Facebook, Twitter, and Telegram to fight disinformation, online extremism, and outright fraud. But when France arrested Telegram founder Pavel Durov, it was unable to prosecute him under this landmark law.
The far-reaching DSA represents a significant shift from unbridled internet freedom to tight regulatory oversight of social media, but several months after it came into force, the difficulty of balancing freedom of expression with government oversight has become clear. Big question marks remain over how the DSA will work, how it will be enforced, and whether it will spark a transatlantic rift.
Let's start with the Telegram paradox. Despite having almost a billion users worldwide, Durov's Telegram has fewer than 45 million users in the EU, not enough to qualify for the DSA's designation as a very large online platform. France instead arrested the Russian-born entrepreneur under its domestic laws, accusing him of running an encrypted platform for pedophiles, cybercriminals, and terrorists. Yet Telegram has also enabled communication in authoritarian regimes and proven invaluable to government opponents.
The lines between acceptable and unacceptable speech, censorship and platform liability are blurred. When I worked at Google, Italy prosecuted three of Google's top executives for violating the privacy of an Italian boy with a disability. The so-called Vividown case began when a group of schoolboys uploaded a nasty video in which they bullied an autistic classmate. Google was notified and removed the video, but the executives still faced criminal charges. Censorship collided with privacy.
Google lost the first trial. My own efforts to turn the case into a fight for freedom of expression failed. Google eventually hired the same lawyer who represented the boyfriend of wrongly convicted American student Amanda Knox and won the appeal. No Google executives went to jail in Italy.
But the case served as a serious warning: a backlash against the internet's unbridled freedoms was brewing. In the years that followed, there was growing pressure to crack down on controversial content and for companies to show “responsibility.”
When the internet was developed, the US and Europe put clear limits on the liability of digital platforms. Platforms were not liable for illegal content uploaded to their sites, only responsible for removing illegal content if notified. The European regulation was called the E-Commerce Directive. In the US, it was Section 230 of the Communications Decency Act.
Without these legal safe harbors, many of the internet's success stories would never have come to fruition. Imagine if YouTube were responsible for every upload, Blogger for every blog post, TripAdvisor for every restaurant and hotel review. User-generated content would be too risky to publish.
Today, governments, courts, and public opinion are demanding that internet companies monitor and prevent illegal content from being posted on their platforms. Copyright owners believe the internet facilitates piracy. Police and intelligence agencies believe the internet facilitates extremist terrorism and want access to suspects' data. Politicians fear that fake news could tarnish their elections or even drive them from power.
While the European Union has passed the DSA, the United States has hesitated. The First Amendment of the U.S. Constitution allows far more freedom of speech than Europe, which criminalizes certain subjects such as Holocaust denial. The Supreme Court has so far avoided imposing explicit new rules, wary of the sweeping implications for online speech. Despite bipartisan momentum behind new child safety legislation, Congress still appears unable to approve major federal technology bills.
It will be difficult to reach agreement on how to reform the E-Commerce Directive and Section 230. Too much enforcement will undermine freedom of expression. The European DSA seeks to strike a balance by preserving the E-Commerce Directive's prohibition on imposing general monitoring obligations on platforms. But it requires platforms to take a series of measures to control what people can post, what they sell, and the ads they see, all to protect others online. Companies that don't comply can be fined up to 6% of their global revenue.
Even equipped with this powerful weapon, regulators face significant implementation challenges. The Brussels-based DSA implementation team is made up of just a dozen people, and much of its time is spent ensuring that new EU laws in areas like product safety do not overlap with the DSA.
EU governments, which were supposed to support the EU team, have delayed appointing DSA officers, and many of those appointed are waiting for instructions from Brussels. Regulators are holding new consultations on how to use the law to strengthen child safety and how to appoint “trusted flaggers” to identify illegal content.
Enforcement has been focused on a few high-profile cases: Regulators in Brussels forced TikTok to shut down a program that rewarded users for time spent on the platform, and launched an investigation into Meta's decision to shut down transparency tool CrowdTangle.
The most significant conflict is coming from Elon Musk's X, which, unlike Telegram, is a designated very large online platform with more than 45 million users in the EU. European regulators have issued preliminary findings that X's content moderation policies violate the DSA. Ironically, before Musk's recent interview with Republican candidate Donald Trump, European Commissioner Thierry Breton warned X in a post not to use freedom of speech as grounds to violate the DSA.
Musk responded with expletives and denounced what he called censorship. He had previously mocked the DSA as “misinformation.” X could be fined billions of dollars, a penalty that would highlight the growing divide between the US and the EU over how to regulate internet freedom.
Instead of fighting over regulation, encouraging a bottom-up rather than top-down approach to policing content may be the best way forward. Wikipedia and Reddit are good models. Both sites have made the hard journey from tolerating hate speech to being trusted news sources. They police themselves.
The European authorities who administer the DSA don't expect platforms to block all hate speech; they just want strong programs to limit the danger. The U.S. may not legislate such requirements, but such programs give users peace of mind and are good for business. Who wants to advertise in a toxic cesspool?
Bill Echikson is a nonresident senior fellow in CEPA's Digital Innovation Initiative and editor of Bandwidth.
Bandwidth is CEPA's online journal dedicated to promoting transatlantic cooperation on technology policy. All opinions are those of the authors and do not necessarily represent the position or views of the institutions they represent or of the Centre for European Policy Analysis.