
Social media companies accused of 'radicalising and grooming' users with algorithms

Home Affairs Committee calls for Facebook, Twitter and YouTube to stop people being drawn into 'bubble of hate'

Lizzie Dearden
Home Affairs Correspondent
Tuesday 19 December 2017 21:49 GMT
Facebook said it is increasing efforts against extremist and malicious material on its platform

Internet giants have been accused of aiding terrorist groups by letting their own algorithms radicalise people and draw them into a “bubble of hate”.

Yvette Cooper, chair of the Home Affairs Committee, warned representatives from Facebook, Twitter and YouTube that police were “extremely worried” about the role of technology in extremism and online grooming.

“Your algorithms are doing that grooming and that radicalisation because once people go on one slightly dodgy thing, you are linking them to an awful lot of other similar things,” she told a hearing in the Commons.

“Whether that is racist extremism or Islamist extremism, your technology is doing that job and you’re not stopping it from doing so.”

Ms Cooper said she had been recommended videos from a white supremacist channel on YouTube and been shown tweets from Britain First leader Paul Golding and deputy leader Jayda Fransen before they were banned from Twitter.

The Labour MP said the same issue was prevalent on Facebook and called on the companies to ensure their algorithms do not automatically pull people deeper into extremism.

Her warning came days after The Independent revealed how Google searches were leading people to terrorist manuals to make the type of bomb used in last week’s New York attack.


One of the media companies leading readers to an explosives guide has since removed the link, while Google appears to have disabled a “related search” that was directing users to a specific terrorist publication.

But other manuals published by al-Qaeda, Isis and other banned groups remain available, showing jihadis how to manufacture explosives, poisons and other weapons to be used in attacks.

Major-General Chip Chapman, the former head of counter-terrorism in the Ministry of Defence, said it was almost impossible to completely remove all manuals in an ongoing game of “digital whack-a-mole”.

“It’s more difficult than people think to build a credible bomb but we’d rather not have the guides on the internet at all,” he told The Independent.

“If you can’t take it down, the metadata you get is a powerful enabler of digital interdiction.

“Google doesn’t forget your search history and neither do the security services.”

Mr Chapman said online activity is among a “threat library of markers” that can trigger formal investigation by intelligence agencies and help prosecute terrorists.

Nikita Malik, director of the Henry Jackson Society’s radicalisation centre, said the Government, which is currently relying on voluntary cooperation from online firms, also had a role to play.

“Much remains to be done amongst the social media community in sharing trends and information regarding extremist content,” she added.

“In turn, the Government has an important role to play in making clear which regulation applies to service providers and holding them to account when harmful content is left online for too long.”

All three representatives appearing before the Home Affairs Committee opposed proposals backed by Labour to fine companies failing to react to dangerous material quickly enough but said they were intensifying efforts.

Nicklas Berild Lundblad, Google’s vice president of public policy for Europe, told MPs YouTube was limiting recommendations and other features on concerning videos to stop people “ending up in a bubble of hate”.

“There’s a funnel of vulnerability where individuals are exposed to content that then is recommended,” he admitted. “We are increasing the response to hate speech and racism on our platform – we’re six months in and we’re not quite there yet.”

The meeting on Tuesday came a day after Google launched its “Redirect” strategy in the UK, which will be rolled out across Europe to show people searching for extremist content counter-narratives that debunk their ideas instead.

The search giant, which owns YouTube, said it will have 10,000 people working to identify and remove content by the end of next year and is developing machine learning and artificial intelligence to make the “final human judgement” quicker and more effective.

Facebook also said it was increasing efforts against extremist and malicious material on its platform, addressing Isis and al-Qaeda “first and foremost” and launching emergency responses to terror attacks.

Simon Milner, Facebook’s director of public policy, said the social media company aimed to take down every reported piece of content that violates its rules within 48 hours.

He disagreed with allegations that its algorithms were themselves radicalising users but conceded there was a “shared problem on how we address a person going down a channel that may lead them to be radicalised”.

Twitter has started suspending accounts linked to groups that promote violence, but claimed the timing of the purge, launched the day before the Home Affairs Committee hearing, was coincidental.

Sinead McSweeney, the company’s vice president for public policy and communications in Europe, said there had been “a sea change” in how it takes responsibility for online abuse.

The deputy leader of far-right group Britain First, Jayda Fransen, is among the figures whose Twitter accounts have been suspended (PA)

But Ms Cooper hit out at Twitter for failing to remove anti-Semitic, Islamophobic and racist posts flagged by MPs, including some directed at Theresa May, Diane Abbott and Luciana Berger.

“We sat in this committee in a public hearing and raised a clearly vile anti-Semitic tweet with your organisation … But it is still there,” she added. “I'm kind of wondering what we have to do.”

Tim Loughton, a Conservative MP, accused the internet giants of “profiting from people who use your platforms to further the ills in society”.

Ms McSweeney argued that politicians were not a “protected category” under Twitter’s service and that proactively searching for any abuse among 500 million tweets a day from 330 million users would be impossible.

Ms Cooper welcomed the increasing staff dedicated to monitoring and removing content at Google, Facebook and Twitter but said there was “immense frustration” about the scale of the problem.

She called for companies to search for extremist content proactively and to stop their algorithms promoting it, adding: “Your response to us is effectively ‘we’re working on it’ but the reason we’re pressing you so hard is because it’s so important.

“In the end this is about the kinds of extremism, whether that be Islamism or the far-right, that can lead to very violent incidents.

“It is about the kind of hate crime that can destroy lives, the kind of harassment and abuse that can undermine political debate and democracy.

“You are some of the richest companies in the world and that is why we need you to do more.”
