A devastating security lapse at Facebook has exposed the personal details of 1,000 content moderators and put them at risk of being targeted by terrorists.
Facebook uses moderators to evaluate various areas of content, including sexual material, hate speech and terrorist propaganda.
Terrorists are known to take an interest in moderators, as working in counter-terrorism is regarded by groups such as Isis as a sin punishable by death.
One of the moderators, who is an Iraqi-born Irish citizen, has gone into hiding and says he fears for his life.
The moderator, speaking anonymously to The Guardian, said: ‘[When] you have people like that knowing your family name you know that people get butchered for that.
‘The punishment from Isis for working in counter-terrorism is beheading.’
Facebook has confirmed the security breach in a statement, and says it has made changes to ‘better detect and prevent these types of issues from occurring.’
The bug in the software, which was discovered by The Guardian, led to the profiles of these moderators appearing as notifications in the activity log of Facebook groups whose administrators had been removed for breaching terms of service.
This means that their personal details were viewable by the remaining admins in these groups.
A Facebook spokesperson said: ‘We care deeply about keeping everyone who works for Facebook safe.
‘As soon as we learned about the issue, we fixed it and began a thorough investigation to learn as much as possible about what happened.’
Six of the moderators who work in Facebook’s Dublin offices have been flagged as ‘high priority’, after the firm found that their personal profiles were probably seen by potential terrorists.
One of the six, the Iraqi-born moderator who has gone into hiding, told The Guardian: ‘It was getting too dangerous to stay in Dublin.
‘The only reason we’re in Ireland was to escape terrorism and threats.’
The moderator went into hiding after discovering that seven individuals associated with a suspected terrorist group he had banned from Facebook had viewed his personal profile.
The other high-risk moderators are also likely to have had their personal profiles viewed by people with links to Isis, Hezbollah and the Kurdistan Workers’ Party.
He added: ‘All they’d need to do is tell someone who is radical here.’
The problem was initially flagged after Facebook moderators started receiving friend requests from people affiliated with terrorist organisations.
Facebook then launched an urgent investigation, convening a ‘task force of data scientists, community operations and security investigators’, according to internal emails seen by The Guardian.
Craig D’Souza, Facebook’s head of global investigations, tried to reassure the moderators that there was ‘a good chance’ any suspected terrorists wouldn’t know who they were.
In the internal emails, he wrote: ‘Keep in mind that when the person sees your name on the list, it was in their activity log, which contains a lot of information.
‘There is a good chance that they associate you with another admin of the group or a hacker…’
But many of the moderators were not happy with the response, and one replied: ‘I understand Craig, but this is taking chances.
‘I’m not waiting for a pipe bomb to be mailed to my address until Facebook does something about it.’
Facebook took two weeks to fix the bug, by which point it had been active for a month.
In the hope of protecting the vulnerable moderators, Facebook offered to install a home alarm monitoring system and provide transport to and from work, as well as counselling.
And as a result of the leak, Facebook told The Guardian that it is testing the use of administrative accounts that are not linked to personal profiles.