AI ‘Deepfakes’ On The Rise, Wrestlers Also Targeted
Morphed photos of women wrestlers made viral by trolls
Artificial Intelligence has stirred a conversation about the future of technology and how it will shape the years ahead. However, it has also opened up many avenues for perpetrators to harass women online by morphing their pictures and videos.
Recently, morphed photos were circulated of Indian women wrestlers who were arrested by the Delhi Police after they tried to march towards Parliament on May 28.
Social media was filled with fake photos in which wrestler Vinesh Phogat appeared to be smiling as she was detained and sitting inside a police van. Later, two photos, one morphed and the other real, were shared on Twitter as a fact check.
In the original photograph, the Phogat sisters and other detained wrestlers have serious expressions, while in the altered photograph they are smiling.
As the political scenario in India has changed, many women from different communities and backgrounds have taken to social media to raise their voices against different forms of oppression. However, with it there has also been a significant rise in the online harassment of women.
According to data from the National Crime Records Bureau, in 2022 there was a 36 percent increase in cyberstalking and cyberbullying cases in India. Women and people from the LGBTQI community faced major abuse, ranging from trolling to threatening or harassing phone calls.
For 20-year-old Sonya Gupta (name changed on request), it was shocking when she started receiving calls from unknown men every day. “There were some men who made me uncomfortable, while others told me they found my number online with a video of me surfacing,” Gupta told The Citizen.
Gupta said her number and her mother’s phone details were leaked online by her estranged husband. The men who called also told her that a porn video in which her face could be seen was circulating online.
“My estranged husband works with a big tech firm and is a software engineer and has used some applications to morph my videos and photos in compromising positions,” she said.
It was when Gupta went to the cyber police that she was told an AI app had been used to morph the videos and photos, while a private email account had been used to send links and photos to her colleagues and friends. “It became a nightmare for me as I had to go around asking these private companies to remove the content,” she added.
Gupta is not the only one who has faced such a horrendous crime.
Shabnam, a social activist, told The Citizen that her photos were used by right-wing trolling accounts after she spoke on the 2019 Pulwama attack. Her photos were morphed and shared widely.
Sofia Syed, a data privacy lawyer, also said that her photos were morphed in 2018. “I was a law student then and there was this guy named Rakesh who morphed my pictures on a naked body and started harassing me demanding I share my contact number with him,” she told The Citizen.
She also said that he used communal slurs and threatened to leak her pictures everywhere if she did not speak to him.
“What is sad is that girls, even professionals, don't want to report it because it brings shame to their families and affects their lives. Irrespective of who the woman is, she thinks twice before filing an FIR or cyber complaint. Even as a lawyer, I will think twice before going to the Police station,” she added.
According to a 2021 survey by the Economist Intelligence Unit, 85% of women have witnessed harassment and online violence. The survey covered women aged 18 and above and found that younger women are more likely to have personally experienced online violence.
“Women in countries with long-standing or institutionalised gender inequality tend to experience online violence at higher rates,” the survey reported.
The use of AI to morph photos and videos produces what are called “deepfakes”: synthetic media that have been digitally manipulated to convincingly replace one person’s likeness with that of another.
A deepfake is a photo, audio clip, or video that has been manipulated using Machine Learning (ML) and AI to make it appear to be something it is not. Deepfakes go beyond videos that have simply been reworked with video editing software.
Speaking to The Citizen, Mishi Chaudhary, founder of SFLC.in, a non-profit legal services organisation that has united lawyers, policy analysts, technologists, business professionals, students and citizens to protect freedom in the digital world since 2010, said that the use of large language models (LLMs), a type of AI algorithm that uses deep learning techniques and massively large data sets to understand, summarise, generate and predict new content, will only give rise to disinformation.
“Add harassment and cyberbullying; hate speech; deepfakes; catfishing; sextortion, doxing and privacy violations; and identity theft/fraud to that list and you begin to see a larger list of harms,” she added.
The prime examples are “Sulli Deals” and “Bulli Bai”, open-source apps that contained photographs and personal information of hundreds of Muslim women, including activists, journalists and politicians. More recently, such apps have been used to harass Muslim women who are “seen hanging out with Hindu men”.
Chaudhary further said that harassment via automated trolls is already known to be a major issue. “The automatic creation of harassing or threatening messages, coordinated comments on numerous platforms and interfaces, and its rapid dissemination is only going to add fuel to fire,” she added.
India has its own cyber law, the Information Technology Act, 2000, which came into force on 17 October 2000. The Act applies to the whole of India, and its provisions also apply to any offence or contravention committed outside the territorial jurisdiction of the Republic of India, by any person irrespective of nationality.
To attract the provisions of the Act, such an offence or contravention must involve a computer, computer system, or computer network located in India. The Act provides extraterritorial applicability by virtue of Section 1(2) read with Section 75, and it has 90 Sections.
Under Section 66E of the Act, a person who captures, transmits, or publishes private images of another person’s body without their knowledge or consent faces imprisonment of up to three years, a fine of up to Rs. 2 lakh, or both.
However, the law’s implementation has been weak.
“India has been notorious for having laws that are used not to check actual harassment but to harass the victims instead. Weaponisation of social media by political parties goes unchecked with impunity,” Chaudhary said.
Experts believe there are ways to counter such harassment by using AI itself. Speaking to The Citizen, Khan Ukkasha Farqaleet, research fellow at the Indian Institute of Technology, Delhi, said, “We can counter it with the help of AI itself, because there is no other way to counter these deepfakes as they have advanced so much in the last few years that there is no way to counter it without using it.”
Farqaleet explained that deepfake AI can be a source of misinformation and fake news, and can also be a huge privacy violation. While deepfake AI has progressed massively, the counter algorithms to tackle it have not kept pace, Farqaleet said.
Deepfake applications are also easily accessible and can be used by anyone with a phone. This makes the situation more dangerous.
These applications use deep learning, which means they rely on neural networks to perform their functions. Neural networks are software structures roughly modelled on the human brain.
When you give a neural network many samples of a specific type of data, say pictures of a person, it learns to perform functions such as detecting that person’s face in photos or, in the case of deepfakes, replacing someone else’s face with it.
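The learning idea described above can be sketched in a few lines of code. The following is a toy illustration only, with entirely invented data: it uses a simple averaged template as a stand-in for a real neural network, which would learn far richer features from thousands of photos.

```python
# Toy sketch of "learning from samples": tiny grayscale "face patches"
# (flat lists of pixel values) of one person are averaged into a
# template, and new patches are matched against it. This is a
# stand-in for a neural network, not a real face detector; the
# pixel data and threshold below are invented for illustration.

def learn_template(samples):
    """Average many pixel vectors into one 'learned' template."""
    n = len(samples)
    return [sum(px[i] for px in samples) / n for i in range(len(samples[0]))]

def looks_like(template, patch, threshold=30.0):
    """Mean absolute pixel difference below the threshold counts as a match."""
    diff = sum(abs(a - b) for a, b in zip(template, patch)) / len(template)
    return diff < threshold

# Three synthetic "photos" of person A: bright pixels with small variation.
person_a = [[200, 210, 190, 205], [198, 212, 188, 202], [202, 208, 192, 207]]
template = learn_template(person_a)

print(looks_like(template, [199, 211, 190, 204]))  # similar patch -> True
print(looks_like(template, [10, 20, 15, 25]))      # very different -> False
```

A deepfake generator works in the opposite direction: having learned what a face looks like, it synthesises that face onto new frames instead of merely recognising it.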
Farqaleet further said that while a few countermeasures exist, major change will only come from government intervention and strict laws.
“Deepfakes definitely have irregularity of data, so after identifying that with the help of a counter algorithm we can identify the fake pictures. However, there is also a need for a robust algorithm from the government and the agencies, and it has to be done on a massive level,” he added.
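The “irregularity of data” idea can be illustrated with a toy example. A spliced or generated region often has statistics that do not match the rest of the image; real detectors use trained models on far subtler cues, but the invented example below flags rows of a tiny grayscale image whose average brightness deviates sharply from the image-wide average.

```python
# Toy sketch of irregularity-based detection: flag rows whose mean
# brightness is far from the overall image mean. The image data and
# the threshold are made up purely for illustration; production
# detectors analyse noise patterns, compression traces and
# model-specific artefacts instead of raw brightness.

def flag_irregular_rows(image, threshold=50.0):
    overall = sum(sum(row) for row in image) / sum(len(row) for row in image)
    flagged = []
    for i, row in enumerate(image):
        row_mean = sum(row) / len(row)
        if abs(row_mean - overall) > threshold:
            flagged.append(i)
    return flagged

# Rows 0-2 are consistent; row 3 was "pasted in" from another source.
image = [
    [100, 105, 98, 102],
    [101, 99, 103, 100],
    [97, 104, 100, 99],
    [230, 235, 228, 232],  # statistically out of place
]
print(flag_irregular_rows(image))  # -> [3]
```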
While such applications have been used to harass women by ex-partners or husbands, they are now vehemently being used to attack women who are politically active.
“It is also imperative to understand that AI by companies in the end is not going to win the race, it is the open-source AI (ChatGPT) that is going to win the race,” Farqaleet added.
Open-source AI is free software that is easily accessible to the public. Many experts believe AI to be a threat that will cause more havoc, as it is being used as a tool to undermine women and minorities in India and other parts of the world.
Cover Photograph - Twitter.