
UK planning to hit Facebook, Google with massive fines over harmful content

AFP WORLD
Published March 01, 2019

The U.K. government is laying out plans to impose massive fines on social media companies if they do not remove harmful content from their platforms, Britain's digital minister was quoted as saying Thursday.

Margot James told Business Insider that a new independent tech regulator would be able to punish firms like Facebook and Google for failing to protect users.

The regulator will have the power to determine what constitutes harmful content and to impose penalties on companies that fail to take prompt action to remove content deemed inappropriate.

"There will be a powerful sanction regime and it's inconceivable that it won't include financial penalties. And they will have to be of a size to act as a deterrent," James told Business Insider.

She said the sanctions regime would not be "too dissimilar from the powers that the Information Commissioner's Office (ICO) already has." The ICO, under Europe's new GDPR privacy rules, currently has the power to impose fines of up to 4 percent of global revenue for serious data breaches.

Four percent would mean a maximum $2.2 billion penalty for Facebook, whose 2018 revenue was $55.8 billion. The fine would be even steeper for Google's parent company Alphabet: a penalty of up to $5.4 billion out of its $136.8 billion in 2018 revenue.

A policy paper on internet safety, to be released next month, will detail the plans in full, but James' statements are the most specific yet from the British government regarding fines.

In addition to financial penalties, the government has also hinted that tech executives could face criminal sanctions if they don't bring their platforms to heel.

"We will consider all possible options for penalties," U.K. Culture Secretary Jeremy Wright told the BBC a few weeks ago.

When it comes to what constitutes harmful content, James said the government is taking a "holistic" view, including everything from terrorist recruitment to child grooming and self-harm. The regulator will also consider misinformation within its assessment.

That means the penalty system will be wider-ranging than Germany's, which imposes hefty fines for hate speech. James said the U.K. hopes the regulator will serve as a template for other countries.

While the judgments are not all clear at this point, James said one of the guiding principles will be that "what is illegal and unacceptable offline should be illegal and unacceptable online."

She also said that social media firms are not necessarily at fault for harmful content that shows up on their platforms, but they are responsible for removing it promptly.

"You've got to take it down before it proliferates. That's the point. It's too late once three weeks have gone by," she said.

She added, however, that the government is sensitive to the need to not stifle innovation with heavy-handed regulations.

"We clearly don't want a kind of regulatory environment whose default is to deny and suppress because we want to encourage innovation," James said.

U.K. ministers are still deciding whether to set up a new regulator or give the powers to the country's media watchdog Ofcom, which currently rules on inappropriate content aired on television.

Meanwhile on Thursday, EU officials said Facebook and Twitter are doing too little to scrutinize advertising placements on their sites in the run-up to European Union elections in May, despite their pledges to fight disinformation.

In a second monthly report, officials at the European Commission, the EU's executive arm, said the U.S. internet giants did not show what they had done in January to scrutinize such ad placements.

They said Google offered data on actions taken in January to better scrutinize ad placements but did not clarify the extent to which those actions tackled disinformation or other problems, such as misleading advertising.

"We urge Facebook, Google and Twitter to do more across all member states to help ensure the integrity of the European Parliament elections in May 2019," said a statement by Vice President for the Digital Single Market Andrus Ansip and other officials.

"We also encourage platforms to strengthen their cooperation with fact-checkers and academic researchers to detect disinformation campaigns and make fact-checked content more visible and widespread," they added.

In January, the commission started producing monthly reports on what the internet players have done to meet pledges made late last year in a "code of practice" to fight disinformation.

As in the first report, Facebook on Thursday topped the list for criticism.

Not only did Facebook fail to report the results of its January activities to inspect ad placements, it also did not report the number of fake accounts removed for malicious actions targeting the EU.

The social network has in the past been accused of being used as a platform to spread divisive or misleading information, most notably during the 2016 election that put U.S. President Donald Trump in the White House.

Facebook ads have also been at the center of the FBI investigation over Russia's alleged meddling in that election and suspicions are rife that the Kremlin has interfered in votes across Europe.

Moscow has repeatedly denied allegations of hacking and meddling in foreign elections through disinformation over recent years.

Analysts warn that populist parties opposed to founding EU democratic values could do well in the May 23-26 elections by playing on anti-immigration sentiment.

Such parties have already done well in national elections, including coming to power in Italy.