Facing global scrutiny over allegations of spreading hate, Facebook's global head of safety told British lawmakers that its algorithms demote rather than promote polarising content. Senior Facebook officials even said they would welcome effective government regulation.
This comes as governments in Europe and the United States grapple with how to regulate social media platforms to reduce the spread of harmful content, particularly among young users.
Britain is leading the charge by bringing forward laws that could fine social media companies up to 10% of their annual turnover if they fail to remove or limit the spread of illegal content.
Secondary legislation that would make company directors personally liable could follow if the measures do not work. Earlier, former Facebook employee turned whistleblower Frances Haugen told the same committee of lawmakers that Facebook's algorithms pushed extreme and divisive content to users.
Facebook’s Antigone Davis denied the allegations. “I don’t agree that we are amplifying hate,” Davis told the committee on Thursday, adding: “I think we try to take in signals to ensure that we demote content that is divisive, for example, or polarising.”
She said she could not guarantee that a user would never be recommended hateful content, but said Facebook was using artificial intelligence to reduce its prevalence to 0.05%.
“We have zero interest in amplifying hate on our platform and creating a bad experience for people; they won’t come back,” she said. “Our advertisers won’t let it happen either.”
Davis said Facebook, which announced on Thursday that it would rebrand as Meta, wanted regulators to contribute to making social media platforms safer, for example through research into eating disorders and body image.