Putting out crucial compliance numbers in line with the new IT rules, Facebook has announced that it acted on 33.3 million pieces of content on its main platform, and on more than 2.8 million pieces of content at its photo and video sharing site Instagram, in the 45 days between June 16 and July 31, the global tech giant said in its monthly transparency report.
The social media network said it proactively acted on 3.5 million posts related to violent and graphic content and 2.6 million posts concerning adult nudity and sexual activity, according to the report, which was published in line with the new IT Intermediary Guidelines.
Meanwhile, WhatsApp said in its transparency report that it banned over 3 million Indian accounts in the 46 days between June 16 and July 31, 2021, in the "interest of preventing online abuse and keeping users safe".
The banned accounts include Indian accounts actioned through WhatsApp's prevention and detection methods for violating the laws of India or WhatsApp's Terms of Service, as well as accounts actioned on the basis of user reports or grievances received.
The tech company also said that more than 95% of bans in India are due to the unauthorized use of automated or bulk messaging (spam), and that it bans around 8 million accounts per month globally to prevent abuse of the platform. WhatsApp also said that it received 594 complaints from users during this period; a majority of them were appeals to revoke account bans, followed by requests for account support and product support.
In addition, nearly 25.6 million spam posts were acted upon by Facebook. The other top categories on which action was taken included suicide and self-injury, and hate speech, Facebook said. The platform received 1,504 reports through the Indian grievance mechanism and responded to all of them.
Nearly one-third of these reports (474) were about the hacking of accounts, 143 were about fake profiles, and 146 were about bullying or harassment. Other reports concerned content showing a user in nudity/partial nudity or in a sexual act, inappropriate or abusive content, and issues with how Facebook processes user data.
“Over the years, we have consistently invested in technology, people, and processes to further our agenda of keeping our users safe and secure online and enabling them to express themselves freely on our platform. We use a combination of Artificial Intelligence, reports from our community, and review by our teams to identify and review content against our policies,” Facebook said.
The compliance report also contains details of the content Facebook has removed proactively using automated tools, along with details of user complaints received and the action taken. Facebook expects to publish subsequent editions of the report with a lag of 30-45 days after the reporting period, to allow sufficient time for data collection and validation.
It will continue to bring more transparency to its work and include more information about its efforts in future reports, the company said. Facebook-owned WhatsApp said the majority of users who reach out to the company are either seeking to have their account restored after a ban or looking for product or account support.
WhatsApp's top focus is preventing accounts from sending automated bulk spam at scale. The company maintains advanced capabilities to identify accounts sending messages at a high or abnormal rate, and it bans millions of such accounts attempting this kind of abuse in India and across the world. It also said that the number of accounts actioned has increased significantly since 2019 because its systems have grown in sophistication, and that it bans the vast majority of these accounts even before it receives complaints in the form of user reports.