FROM a “constant barrage of polarising nationalistic content” to “fake or inauthentic” messaging, from “misinformation” to content “denigrating” minority communities, several red flags concerning Facebook’s operations in India were raised internally at the company between 2018 and 2020.
However, despite these explicit alerts from staff tasked with oversight, an internal review meeting in 2019 with Chris Cox, then Vice President, Facebook, found “comparatively low prevalence of problem content (hate speech, etc)” on the platform.
Two reports flagging hate speech and “problem content” were presented in January-February 2019, months before the Lok Sabha elections.
A third report, from as late as August 2020, admitted that the platform’s artificial intelligence (AI) tools were unable to “identify vernacular languages” and had therefore failed to flag hate speech or problematic content.
Yet the minutes of the 2019 meeting with Cox concluded: “Survey tells us that people generally feel safe. Experts tell us that the country is relatively stable.”
These glaring gaps in response are revealed in documents that form part of the disclosures made to the United States Securities and Exchange Commission (SEC) and provided to the US Congress in redacted form by the legal counsel of former Facebook employee and whistleblower Frances Haugen.