NEW DELHI: For a long time now, the notion has spread that the main reason behind “fake news” is gullible individuals’ lack of digital literacy. But a recent study from the Department of Media and Communications at the London School of Economics and Political Science, funded by WhatsApp, challenges this notion.

The study attempts to answer why people forward lies. It focused on messages forwarded on WhatsApp that led to “an increase in vigilante violence in India since 2015.”

“In the majority of instances, misinformation and disinformation, which contribute to the formation of mobs that engage in lynching and other discriminatory violence, appears to be spread largely for reasons of prejudice and ideology rather than out of ignorance or digital illiteracy,” the report states.

Lead researchers Shakuntala Banaji and Ram Bhat say that while the media literacy explanation is not entirely false, it is “a naive belief. In some cases, certain types of functional media literacy can in fact strengthen the power of some groups to spread ideological disinformation. In different countries the groups targeted will be different. The outcomes are usually violence,” they told The Citizen.

Banaji and Bhat conducted over 200 qualitative interviews this year with experts and focus groups among multiple sets of users across four states: Karnataka, Maharashtra, Madhya Pradesh and Uttar Pradesh. They also analysed words, pictures and videos in a large number of WhatsApp forwards.

They found that various user motivations play a role in forwarding such messages. The main motivations identified were the naivety of older users, a sense of social responsibility to take matters into their own hands against something suspicious, the need to be seen as a local expert, and trust in the person who sent them the message.

Personal stereotypes against certain communities cultivate these tendencies in users. “Users appear to derive confidence in (mis)information and/or hate speech from the correspondence of message content with their own set of prejudiced ideological positions and discriminatory beliefs.”

“The immediate source of a forward – the person who forwarded the message to a group or individual – is one of the most important factors in a user’s decision” to forward a message, the report says.

Even if the user has doubts about a message, they will hesitate to report it out of respect for the person who sent it to them.

This carries consequences. On the effect of subtle bigotry during elections, or during incidents of cross-border military action, the study says that “the chance of long-term discrimination turning into physical violence against particular demographic groups increases.”

A study participant in a Hindu and Jain housewives focus group in Mumbai told researchers that: “There’s one person in one of the groups who is always trying to stir troubles. He posted that Why was Modiji not stopping Pulwama? Why was govt allowing that attack? We threatened him. Stop that talk or get out. Even if that is his opinion, still shut up, don’t make a fight when you have no fact to show. Just keep quiet or just go to Pakistan and live there. Government is working hard. The country was attacked. We have to attack them (Pakistan, terrorist) in return.”

What vacuum do such forwards fill?

Elaborating on the current state of the media in India, Banaji and Bhat told The Citizen that “the mainstream media in India needs an urgent overhaul, and strong ethical regulatory frameworks, to prevent the spread of hate speech and encourage critical questioning of all political parties and government decisions, statements and actions.”

“The media should not go on praising a single regime, it should work for citizens, and not for a particular religious or caste group.”

WhatsApp says it has been trying to curb fake news by banning accounts that send too many messages. A spokesperson who didn’t want to be named told The Citizen, “We ban accounts engaging in bulk or automated messaging, we ban 2 million accounts per month through this methodology – this is particularly important and it helps prevent any entity from delivering messages at scale on our system.”

But the report, funded through a WhatsApp Misinformation and Social Science Research Award, suggests that disinformation is not usually rooted in a single entity. Rather it is supported by the transmedial bombardment of similar messages. “The sensationalism of mainstream media formats and genres works very well when edited and used out of context in WhatsApp based propaganda or misinformation,” it observes.

Like other governments, the Indian government has been trying to get WhatsApp to let it bypass the platform’s encryption, which would enable government officials to read users’ messages. It is also reportedly trying to link all social media accounts with people’s Aadhaar numbers.

The researchers, however, are of the opinion that these moves are no solution to the problem.

“Even on unencrypted platforms, we have seen no evidence that law enforcement in India has prosecuted those who are originators and/or reported as originators of hateful and harmful materials linked to anti-Muslim, anti-Dalit, or anti-Dissident content. Rather we see evidence of the state and allied authorities selectively taking action against groups or individuals whose ideology does not fit with their own,” they told The Citizen.

On the subject of Aadhaar linking, the researchers believe it would be “dangerous” for the health of Indian democracy. They say it will only worsen the situation, as numerous investigations have shown that Aadhaar data is breachable.

“Aadhaar linking would simply give vigilantes working on behalf of political parties more access to the private and personal data of people, and would also enable state authorities to surveil and frame human rights activists or Dalit activists to prevent criticism of the government.”

Banaji and Bhat suggest that, besides technical changes that would make such online activities difficult, systematic steps should be taken to provide “critical media literacy”, mirroring the efforts taken to curb child pornography, in order to deal with the rise and spread of hate speech.

You can read the study here.