Four takeaways from the whistleblower complaints against Facebook

On Sunday night’s episode of “60 Minutes,” Frances Haugen, a 37-year-old former Facebook (FB) product manager who worked on civic integrity at the company, revealed her identity. She has reportedly filed at least eight whistleblower complaints with the Securities and Exchange Commission, alleging that the company withholds information about its shortcomings from investors and the general public. She also shared the material with regulators and the Wall Street Journal, which published a multi-part investigation showing that Facebook was aware of the problems with its apps.

On Monday, “60 Minutes” made eight of Haugen’s complaints public. The following are four key takeaways from the complaints. Internal documents cited in them show that Facebook is aware of the societal harm caused by hate speech and misinformation on its platforms, and that its “core product mechanics, such as virality, recommendations, and optimising for engagement, are a significant part of why these types of speech flourish.”
In one investigation into the hazards of misinformation and polarisation, it took only a few days for Facebook’s algorithm to recommend conspiracy pages to an account that followed legitimate, verified pages for conservative figures such as Fox News and Donald Trump; the same account went on to receive a QAnon recommendation. And according to documents titled “They used to post selfies, now they’re trying to reverse the election” and “Does Facebook reward outrage?”, which are cited in the complaints, not only do Facebook’s algorithms reward posts on subjects like election fraud conspiracies with likes and shares, but “the more negative comments a piece of content instigates, the higher the likelihood for the link to get more traffic.”

Facebook has taken only limited action to address existing misinformation.

According to an internal document on problematic non-violent narratives cited in at least two of the complaints, Facebook removes as little as 3% to 5% of hate speech and less than 1% of content deemed violent or inciting violence. The volume is too great for human reviewers, and the company’s algorithms struggle to classify content accurately once context is taken into account.

Internal Facebook documents on the 2020 election and the January 6 insurrection also show that people spreading misinformation are rarely deterred by the company’s intervention measures. According to one document, “enforcing on pages managed by page admins who have posted two or more pieces of misinformation in the last 67 days would affect 277,000 pages,” of which 11,000 are current repeat offenders.

According to Haugen, the “XCheck” or “Cross-Check” system in practice effectively “whitelists” high-profile and privileged users, despite Facebook’s assertion that it deletes “information from Facebook no matter who publishes it, when it violates our standards.” An internal document on error prevention cited in one complaint notes that “over the years, many XChecked pages, profiles, and entities have been spared from enforcement.”
Internal documents on “quantifying the concentration of reshares and associated VPVs among users” and on a “killswitch plan for all group recommendation surfaces” also show that Facebook rolled back several changes that had been shown to reduce misinformation.
Haugen also says the company misled advertisers by claiming it had done everything it could to prevent the insurrection. According to a document cited in the filing, titled “Capitol Riots Break the Glass,” the safer parameters Facebook put in place for the 2020 election, such as demoting content likely to violate its Community Standards, including hate speech, were rolled back afterwards and reinstated “only after the insurrection flared up.”
According to one document, “we were willing to act only *after* things had spiralled into a terrible position.”