By Yosra Akasha
Recent advances in AI technology are reshaping many sectors, including journalism and media. From improving audience reach and engagement to enhancing data analysis and assisting with content creation, AI technologies offer numerous opportunities for the growth and sustainability of independent media outlets and content creators. However, the impact of AI on journalists and newsrooms in war-torn Sudan stands in stark contrast to this optimistic outlook.

In Sudan, the introduction of AI technologies into the mainstream information ecosystem is largely associated with disinformation. Reports indicate that AI-generated content is being used to spread false information and promote narratives that fuel the ongoing conflict. Given widespread media illiteracy, deep societal divides, and entrenched prejudice, many—including politicians, journalists, and human rights activists—tend to accept such content without questioning its authenticity.
One evening in August 2023, I was sitting in my living room, following news about the conflict back home. Suddenly, my phone began buzzing with messages and calls from family members around the world, urgently asking about my whereabouts and whether I was safe. They had come across an AI-generated video on Facebook—with over thirty thousand views—falsely claiming that I was working for the Rapid Support Forces (RSF) out of Addis Ababa, Ethiopia, and that I was responsible for organising meetings between the RSF and the civilian Forces of Freedom and Change (FFC). Concerned for my safety and reputation, my family feared the video would put me at significant risk from the Sudanese Armed Forces (SAF), their affiliates, and ordinary citizens who have suffered under RSF violations. While some recognised the video as AI-generated, many did not. When I showed it to my 85-year-old father, he almost believed it and sought my reassurance that it wasn’t true.
In addition to the trauma and displacement caused by the conflict, this AI-generated disinformation campaign caused me further distress. It led me to practise self-censorship, take a lower profile, and limit my publishing and public appearances. My experience is not unique—it is part of a broader smear campaign targeting civil society activists and journalists with similar accusations. Fortunately, Meta’s policy team removed the videos, although some were later reposted by accounts with smaller followings. With recent changes to Meta’s fact-checking policies—shifting toward user-generated community labelling similar to that of X (formerly Twitter)—activists and journalists in Sudan and globally are increasingly concerned about the effectiveness of these measures in curbing disinformation, hate speech, and incitement to violence.
AI Technologies: A Privilege Sudanese Media Cannot Afford
Independent media outlets in Sudan have shown interest in exploring AI technologies, but access to resources and training remains limited. Their harsh daily realities make the implementation of AI in Sudanese newsrooms a distant dream. The ongoing conflict has disconnected millions from the internet—either due to the destruction of communication infrastructure or the conflict parties’ control of access.
“During the first few months of the conflict, our correspondents used to send news and reports across the border to Chad in actual written letters. Someone there would access the internet and forward the content to us—the editors based outside Sudan—for publication,” says Mohammed Elfatih Humma, Deputy Editor-in-Chief of Darfur24, an independent Sudanese news website focused mainly on covering local and national news.
In Darfur, now largely controlled by the RSF, internet access is heavily restricted and monitored. According to Humma, the RSF has set up Starlink devices and established paid, communal internet access points in most cities and villages. These devices are often installed in the shade of trees or in markets, and people must pay hourly fees to use them—under the watchful eyes of RSF soldiers.
“Our correspondents were interrogated for using these RSF-monitored access points. Initially, they had to avoid using the same location twice. Eventually, many had to flee the country after receiving threats,” Humma explains. “Access is also affected by practical limitations. During the rainy season, limited solar power reduces internet availability. On days of aerial bombardment or mass displacement—such as in Tawila—armed groups (The Joint Forces in Al Fasher, RSF in Nyala, and even SLA-Abdul Wahid in Tawila) often ban the use of Starlink altogether.”
While AI technologies represent the future for many industries, it is crucial to remember that one-third of the global population remains offline, primarily in low-income countries like Sudan, where internet penetration was 28.7% at the beginning of 2024. Most AI tools are developed in the United States and China, catering to populations with reliable internet access and reflecting particular cultural norms and biases. This reality has led academics and activists to question whether AI will deepen global inequalities or serve as a tool for progress in journalism and other fields—especially with corporate entities dominating the development and regulation of AI policies.
This year’s celebration of World Press Freedom Day comes shortly after the second anniversary of the war in Sudan. Over the past two years, journalists and local media outlets have demonstrated remarkable innovation, resilience, and courage in defending press freedom and citizens’ right to information. Like hummingbirds trying to put out a wildfire, they fight to counter disinformation and hate speech.
Though they may lag in adopting the latest AI innovations, their unwavering dedication to building a healthier information ecosystem in Sudan offers hope that they will, in time, harness these tools for the greater good.