By Anmol Irfan
In 2018, prominent Indian journalist Rana Ayyub became the victim of a deepfake pornography campaign – one that escalated so severely that Ayyub has shared how she was hospitalised for heart palpitations and anxiety in its aftermath. In South Korea, more than 375,000 people signed a petition against deepfake pornography in 2021, following a surge in deepfake images of Korean actresses being created online. The misuse of the technology in Korea was so widespread that it led many to re-evaluate the benefits of such exponential tech growth. Ninety percent of deepfake victims are women, targeted for reasons ranging from ‘revenge’ to blackmail – which raises the question of why, despite these campaigns ruining lives for years, social media companies and governments have not addressed the issue more effectively. It also raises concerns about how these campaigns have become so effective, often exploiting both existing media algorithms and human psychology.
Many of these deepfakes are obviously flawed. Ayyub points out that in her case, the woman whose body appeared in the video had noticeably straight hair rather than Ayyub’s distinct curls – but the video did damage regardless. Last year, deepfakes of Ukraine’s First Lady Olena Zelenska were released showing her lying topless on a beach, aimed at undermining her at a time when her country was in crisis. Sexuality, frivolity, and the undermining of intelligence are all tools used against women in particular. Lucina Di Meco, a disinformation expert and co-founder of #ShePersisted, an organisation that fights gendered disinformation online, points out:
“We see content really obsessing around their sexuality or women supposedly breaking social norms around what is ‘decency’ for women, images portraying them as stupid, drunk, an element of being out of control.”
“It reinforces this idea that women belong to a different sphere than the public sphere and because it builds on existing beliefs and bias, even when the story is disproven, the damage stays because your bias is already reinforced,” she adds.
She also points out that social media platforms reward such behaviour. Platforms boost content based on likes and shares, which means anything that gets popular grows exponentially with few checks in place. These algorithms have previously been accused of narrowing what users see: because people are mostly shown content related to what they already agree or engage with, they end up less exposed to diverse content and views.
Because media spaces are expanding horizontally and social media is now the main way many everyday citizens consume information, it is also important to look at the role everyday users play in forwarding disinformation and making these campaigns more successful. Speaking about what makes them work, Tom Buchanan, an academic psychologist focusing on online behaviour and misinformation, says that while there is no simple answer, two reasons stand out.
“People tend to engage with things that are consistent with what we believe, whether that’s political or some other belief – and that’s not a great surprise. Another is the likelihood that we’ve seen things before, and the extent to which it is popular,” he says.
Despite deepfakes being created as early as 2017 or 2018, they have only recently entered mainstream conversation as their frequency increases. What most people thought was a dystopian fantasy or an unfortunate side effect of fame is now trickling down to everyday life – much the way traditional media has bled into social media. Buchanan says that many psychologists took an interest in the psychology of disinformation following Donald Trump’s election and Brexit, when there was evidence of elections being influenced by misinformation.
“The human brain is designed, or built, to trust what the eyes see. Up till recently, all of us have been used to seeing videos and we tend to believe they are real. Now we’re entering a new era where it seems very easy for non-experts to generate realistic images and even videos – which damages our trust in what we see, and it’s hard to predict how we’ll react to the news landscape. We’ll start becoming more distrustful,” he tells Media Diversity Institute.
While Papadopoulos believes that in an ideal world synthetic images could make visuals in online articles and media more diverse, he is also aware that current AI models are trained on existing data, which has made inherent biases more obvious in the content they generate. That points to a larger problem of media and content diversity.
Ultimately, these are much deeper issues that go beyond just sharing a false image or accidentally believing the wrong thing.
“Disinformation relies on wedge issues – they look for things where people can be split into opposing factions and these are a key target for disinformation actors to try and sow divisions within society, so people will try to seek out these divisions and try to exploit them,” Buchanan says.
It’s why Di Meco believes it’s unfair to put the onus of fixing the issue on users themselves. Instead, she calls for an overhaul of the profit systems that reward the sharing of controversial disinformation.
“The regulation side is also very important when it comes to social media platforms, and traditional media has a role to play too. Traditional media has the opportunity and the responsibility to report on disinformation, including deepfakes, without replicating it – for example, without details and without links,” she says.
Ultimately, much of the success of disinformation campaigns comes from how closely they track the way users consume information. Media outlets will have to change the way information is presented if we are to have any hope of preventing disinformation loops from taking over our feeds and news cycles.
Photo Credit: Trismegist san / Shutterstock