Facebook closes 583 million fake accounts
The report, released Tuesday, revealed how much content has been removed for violating standards.

Facebook released data showing how many fake accounts, spam posts and other types of objectionable content it removed in the first quarter of 2018.

The company estimates that around 3 to 4 percent of active Facebook accounts during the first three months of 2018 were fake, Guy Rosen, Facebook's vice president of product management, said. A Bloomberg report last week showed that while Facebook says it has become effective at taking down terrorist content from al-Qaida and the Islamic State, recruitment posts for other US-designated terrorist groups are easily found on the site.

- Facebook took enforcement action against 21 million posts containing nudity.

By far the most prevalent of the offending categories were spam and fake accounts: in the first quarter of this year alone, Facebook says it removed 837 million pieces of spam and 583 million fake accounts.

Improved technology using artificial intelligence helped the company act on 3.4 million posts containing graphic violence, almost three times more than in the last quarter of 2017.

Rosen announced the figures on Tuesday, May 15, in a post on the company's newsroom blog.


Instead of trying to determine how much offending material it didn't catch, Facebook provided an estimate on how frequently it believes users saw posts that violated its standards, including content that its screening system didn't detect. The company has come under fire for failing to remove content that has incited ethnic violence in Myanmar, leading Facebook to hire more Burmese speakers.

Facebook defines graphic violence as content that glorifies violence or celebrates the suffering or humiliation of others; such content, it says, may be covered with a warning and hidden from underage viewers.

If a Facebook user writes a post about being called a slur in public, using the word itself for greater impact, does the post constitute hate speech? Such judgment calls remain difficult to automate; by contrast, 86% of the graphic-violence posts acted on were flagged by Facebook's technology.

While artificial intelligence can sort through nearly all spam and content glorifying al-Qaeda and ISIS, and most violent and sexually explicit content, it cannot yet do the same for attacks on people based on personal attributes such as race, ethnicity, religion, or sexual and gender identity, the company said in its first-ever Community Standards Enforcement Report.

But Facebook's progress in policing what users see is unlikely to temper fresh criticism from regulators in Europe over privacy protections for its billions of users worldwide. In a separate two-part investigation, the company found roughly 200 apps that may have leaked confidential user data.

CEO Mark Zuckerberg had previously discussed the flagging of such content at his Senate hearing in April.