Facebook has launched a new Transparency Center which, along with its biannual report on government data requests, will also provide updates on the company’s policies and how it safeguards information on the site with human and tech filters. The social media company also announced its transparency report for the first quarter of 2021. “We’ll continue to add more information and build out the Transparency Center as our integrity efforts continue to evolve,” said Facebook’s VP of integrity Guy Rosen.
He also shared a total of 22 policies, 12 for Facebook and 10 for Instagram, to let the public know how the company enforces them across its apps.

The Community Standards Enforcement Report for the first quarter of 2021 lists the prevalence of harmful content, including nudity, graphic content and violence, that users were exposed to. Nudity accounted for 0.03 to 0.04 percent of content viewed on Facebook and Instagram, while hate speech, violence and other objectionable content stood at 0.05 to 0.06 percent on Facebook. Rosen said AI (artificial intelligence) technology played a big part in this progress, with the proactive detection rate for hate speech climbing from 23.6 percent in 2017; he added that nearly 97 percent of hate speech was removed from the platform before it reached audiences.
During the first quarter, Facebook took down 9.8 million pieces of organized hate content, up from 6.4 million in the last quarter of last year.
“We evaluate the effectiveness of our enforcement by trying to keep the prevalence of hate speech on our platform as low as possible, while minimizing mistakes in the content that we remove,” said Rosen.
According to Mark Fiore from the Transparency Report team, 99.7 percent of all fraud-related removals were proactive, meaning they were taken down before anyone reported them. In all, approximately 335 million pieces of suspicious fake content were removed from Facebook, and nearly 2.5 million from Instagram.
Facebook also covered copyright takedowns across its platforms: nearly 10 million pieces of infringing content were removed, and 78 percent were spotted with the help of AI. Both Facebook and Instagram use a “Rights Manager” tool that automatically detects infringing material, and Facebook also uses the third-party service Audible Magic to spot and remove pirated music tracks.
And since the start of the pandemic, the company says it’s removed more than 18 million pieces of Covid-related misinformation from Facebook and Instagram.
It also revealed that government requests for user data during the last six months of 2020 rose by 10 percent to 191,013.
The US once again submitted the largest number of requests, followed by India, Germany, France, Brazil and the UK. Facebook received 61,262 requests in the US – much the same as in the first half of 2020.
“As we have said in prior reports, we always scrutinize every government request we receive to make sure it is legally valid, no matter which government makes the request,” said Chris Sonderby, VP and deputy general counsel.
“If we determine that a request appears to be deficient or overly broad, we push back and will fight in court, if necessary. We do not provide governments with ‘back doors’ to people’s information.”
