By Alessandra Vescio
Molly Russell was 14 years old when she was found dead in her bedroom. An inquest, concluded at the end of September 2022, established that “Molly Rose Russell died from an act of self-harm whilst suffering from depression and the negative effects of on-line content”. Andrew Walker, the coroner, said that “Molly subscribed to a number of online sites” which worked in a way that provided her with content she had not asked for. Some of the content Molly saw “romanticised acts of self-harm”, while other material “sought to isolate and discourage discussion with those who may have been able to help”. It was reported that 2,100 of the 16,000 pieces of content Molly viewed on Instagram before her death related to suicide, self-harm and depression, and that she created a board on Pinterest with 469 images on similar topics.
Dr Navin Venugopal, the child psychiatrist who gave evidence at the inquest, described the material as “very disturbing” and said he was unable to sleep well for “a few weeks” after viewing the content Molly saw before her death.
The coroner also said that the websites Molly used were not safe at the time and showed content that was inappropriate for a 14-year-old girl.
A spokesperson for Meta, the parent company of Instagram and Facebook, said they are “committed to ensuring that Instagram is a positive experience for everyone, particularly teenagers” and they will continue their work “with the world’s leading independent experts to help ensure that the changes” they make “offer the best possible protection and support for teens”.
Pinterest responded to the coroner’s report with a list of actions the company intends to take by the end of 2023, such as developing “ways to further limit the distribution of depressive content to teens” and updating its “self-harm policy to ensure stricter enforcement”.
After Molly’s death, her father Ian Russell started a campaign for better protection for children and young people online and greater accountability for social media platforms. He spoke of social media firms “monetising misery”, referring to algorithms that use ‘engagement-based ranking’: they show content ‘related’ to what a user has engaged with most in the past and prioritise posts that are likely to provoke the strongest reactions and interactions. This means that if a user watches videos, likes posts and engages with content focused on a certain topic, they are likely to be shown more content on the same topic, regardless of the harm that content may cause.
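To make that mechanism concrete, the sketch below shows, in very simplified form, how an engagement-based ranker of the kind described above might order a feed: posts are scored by predicted reactions and boosted by the user’s past interest in a topic, with no check on whether that topic is harmful. The names, weights and numbers are illustrative assumptions, not any platform’s actual system.

```python
# A minimal, purely illustrative sketch of "engagement-based ranking" as described above.
# All names, weights and numbers are assumptions made up for illustration; they do not
# describe any platform's actual ranking code.
from dataclasses import dataclass


@dataclass
class Post:
    topic: str
    predicted_likes: float      # assumed engagement predictions
    predicted_comments: float


def engagement_score(post: Post, topic_affinity: dict) -> float:
    """Score a post by predicted engagement, boosted by the user's past interest in its topic."""
    base = post.predicted_likes + 2.0 * post.predicted_comments  # stronger reactions weigh more
    affinity = topic_affinity.get(post.topic, 0.1)               # topics the user engaged with most rank higher
    return base * affinity                                       # nothing here asks whether the topic is harmful


if __name__ == "__main__":
    # A user whose history is dominated by one topic keeps being shown more of it.
    user_affinity = {"topic_a": 0.9, "topic_b": 0.2}             # hypothetical past-engagement profile
    candidates = [
        Post("topic_a", predicted_likes=40, predicted_comments=15),
        Post("topic_b", predicted_likes=60, predicted_comments=10),
    ]
    feed = sorted(candidates, key=lambda p: engagement_score(p, user_affinity), reverse=True)
    for post in feed:
        print(post.topic, round(engagement_score(post, user_affinity), 1))
```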
Since 2017, social media platforms, including Instagram and Pinterest, have made some important changes to their community guidelines and policies, such as banning graphic and non-graphic self-harm-related content.
Rhea Freeman, a social media expert, told MDI: “I think tech companies are becoming increasingly aware of younger users and what can be done to help make the environment better for them, but the system will always be challenged by the users who spread content that targets those in negative ways.”
As some studies have shown, recent years have seen a significant increase in Instagram posts relating to self-harm and self-injury, along with new hashtags coined by users to avoid censorship by the platform. On TikTok, one of the most popular social media apps among teenagers, some content manages to evade moderation through coded hashtags.
Another issue, revealed by research commissioned by Ofcom, is that a high proportion of children with social media accounts sign up with a false age. For instance, 47% of children aged 8 to 15 and 39% of those aged 8 to 12 have a declared user age of at least 16, while 32% of children aged 8 to 17 have a user age of at least 18. This gives them access to features and content that are not meant to be available to children and young people and are restricted to adults.
“There are various tools across platforms that have been created to help keep young users safer,” Freeman told MDI, “and I would say that while tech companies are developing these features, it’s also really important that we keep up to date with these and employ them as needed and teach them to younger users too.”
As leaked internal research revealed, Facebook, which bought Instagram in 2012, has known for years about the negative impact the photo-sharing app has on young people’s mental health. In an internal presentation, Facebook researchers showed that teenagers blamed Instagram for making them feel worse and that, among those who reported suicidal thoughts, 13% of British users and 6% of American users traced those thoughts to the app. The ‘Explore’ page, an Instagram feature that surfaces content related to what a user has viewed most in the past, was reportedly described as toxic for young people because it “can send users deep into content that can be harmful”. Instagram said the research shows its “commitment to understanding complex and difficult issues young people may struggle with”, but the company has been criticised for a lack of transparency and for failing to act on those findings.
Facebook whistleblower Frances Haugen described the choices made inside the company as “disastrous – for our children, for our public safety, for our privacy and for our democracy”. She explained that “Facebook makes more money when you consume more content” and the company “has realised that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads” and Facebook will make “less money”. Talking about the impact Instagram has on young girls, Haugen stated that “Facebook’s own research says it is not just that Instagram is dangerous for teenagers, that it harms teenagers, it’s that it is distinctly worse than other forms of social media”.
According to a new study, the TikTok algorithm can also have a negative impact on young people’s mental health. Conducted by the Center for Countering Digital Hate, the research found that TikTok hosts eating disorder content that has amassed over 13.2 billion views and pushes harmful videos to vulnerable teenagers. After creating new accounts posing as 13-year-old users, the researchers found that they were shown body image and mental health content every 39 seconds. “This report underscores the urgent need for reform of online spaces,” said Imran Ahmed, CEO of the Center for Countering Digital Hate.
An attempt to control and regulate social media platforms and their content in the UK has been under way for nearly three years, with the so-called Online Safety Bill delayed and amended several times. Amid accusations that the bill threatens free speech and free expression, and counter-claims that it has been watered down, the government has been urged to take concrete action to safeguard children and young people, hold social media platforms accountable and make the internet an open and safe space.