New information from internal Facebook documents reveals that misinformation on the social media platform reaches far beyond the United States.

The documents, dubbed The Facebook Papers and gathered by whistleblower Frances Haugen, demonstrate Facebook’s lack of resources and cultural awareness in countries like India, Myanmar, and Sri Lanka, a gap that has allowed hate speech and radical political sentiment to spread — content that could be linked to acts of violence and to influence over national elections.

Facebook isn’t as plugged into the social pulse of its own platform outside the United States as it is at home, but the company is generally aware that its platform affects politics in these countries, according to The New York Times. Internal researchers have run tests and field studies on Facebook’s algorithm in India, where a test account’s News Feed quickly filled with hate speech, misinformation, and celebrations of violence. This content came from both legitimate users and uncensored bots.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” wrote a researcher in an internal Facebook report.

The documents also show that Facebook takes a lopsided approach to battling misinformation: 87 percent of its global budget goes to identifying misinformation in the U.S. alone, leaving the remaining 13 percent for the rest of the world.

This comparatively tiny pool of resources results in ineffective measures in places like India, which has its own distinct landscape of radical politics. It looks like a wild misallocation of focus when you consider that India is Facebook’s largest market, with 340 million users across its platforms.

Facebook users there are easily fed graphic posts spouting anti-Muslim and anti-Pakistan rhetoric in any of the country’s 22 official languages. According to the Times, Facebook’s AI is trained in only five of those languages, with human reviewers covering “some others.”

But as the company’s own documents show, much of the dangerous content in Hindi and Bengali, two of India’s most widely spoken languages, never gets flagged due to inadequate data. And while misinformation efforts ramp up around national elections, Facebook’s persistent lack of resources in the country leaves it unable to mount a lasting fight against dangerous bots and violent groups.

The issues are largest in India, but similar resource problems plague countries like Myanmar, where Facebook’s efforts to stem harmful rhetoric weren’t enough and may have helped inflame a coup. The company did enforce measures during Myanmar’s elections to limit the visibility of misinformation posts shared by the military, but it didn’t keep them in place: Facebook rolled back the measures, and three months later the military carried out a violent coup.

While Facebook clearly recognizes its role in political violence abroad and does try to rectify it, these documents show that its response is often too little, too late. If the company is to operate ethically on a global stage, it owes its largest market, and every other market, the cultural awareness and resources needed to serve users safely. A U.S.-based company should absolutely address misinformation in its own country, but Facebook needs to reexamine how it allocates its efforts against misinformation across the globe.

Katie Harbath, former director of public policy at Facebook, told the Times that her ex-employer needs to find a solution that can be applied around the world. “There is definitely a question about resourcing,” said Harbath. “[But the answer is not] just throwing more money at the problem.”