Artificial Intelligence Traps and Media Diversity: Mind the Loopholes

By Bojana Kostić

Artificial intelligence (AI) and algorithms, also referred to as automation tools or computational systems, govern our digital media ecosystem. It is in fact the symbiotic interplay of AI and other socio-technological factors, paired with the exploitative economic models of social media platforms, that shapes the media ecosystem of today. And while digital and physical boundaries are blurring, we don’t know nearly enough about how this interplay transforms the concepts of diversity and informational and media pluralism.

So, how do AI affordances foster or hinder media diversity? This is the central question this essay explores, offering a set of resources and a mind-map for navigating these complex issues. To that end, it sketches out the critical points, referred to here as AI traps, that media outlets, journalists, human rights defenders and citizens should be aware of when engaging with this topic. A number of open-access resources, literature and knowledge zones are included along the way, mostly via hyperlinks.

Trap 1: AI is a neutral agent

It is not. As a matter of fact, the building of algorithms and machine learning systems is interwoven with the values and goals of their creators, many of which are profit-oriented and exploitative, and often lack human rights oversight mechanisms and “commitments”. A purely technological and computational presentation of an algorithm, as a formula or mathematical assignment for solving complex tasks, is therefore only one way, and often a narrow, human-blind way, of looking at AI.

“One way to think of AI is as salt rather than its own food group. It’s less interesting to consider it on its own. But once you add salt to your food it can transform the meal.” (A People’s Guide to AI)

We need to put AI into a specific context, in this case the digital ecosystem and social media platforms; only then will we be able to discern the affordances and intentions of their creators. The most common illustration is algorithms that recommend content. In practice, this means that through invisible processing of a set of personal data points and their correlations, economic interests, and the history of previous interactions with similar content, algorithms and AI make decisions and select the content that, for example, a Facebook user will access, read and interact with. Content means anything capable of being shared online, including media content, political information, and socially sensitive topics and events. Seen through this example, Facebook’s news-ranking AI is a powerful instrument in the company’s (unsupervised) hands, one that shapes our media diet, diversity and exposure at the global level.

And this is merely one form of content moderation; there are other, less discussed, agile and invisible processes that affect our media “kaleidoscope”. Finally, keep in mind that our feeds are essentially limited spaces (just as traditional newspapers and broadcast schedules are), and there is a myriad of information that might be relevant to citizens but that, due to this limitation and, more importantly, to profit-driven processes and technological affordances, never reaches “the networked publics”.
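To make this logic concrete, below is a deliberately simplified sketch, in Python, of engagement-driven feed ranking. Every name, weight and number in it is hypothetical, since the real ranking systems are proprietary and vastly more complex, but it illustrates how a limited feed, combined with a user’s interaction history, can crowd out civically relevant content.

```python
# Minimal, hypothetical sketch of engagement-driven feed ranking.
# All names and weights are invented for illustration; real platform
# systems are far more complex, personalised and proprietary.

from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    predicted_engagement: float  # estimated chance the user clicks/reacts

def rank_feed(posts, user_topic_history, feed_size=5):
    """Order posts by predicted engagement, boosted by past interactions."""
    def score(post):
        # Users are shown more of what they already interacted with ...
        affinity = user_topic_history.get(post.topic, 0)
        return post.predicted_engagement * (1 + affinity)
    ranked = sorted(posts, key=score, reverse=True)
    # ... and the feed is a limited space: everything below the cut-off
    # simply never reaches the user, however relevant it might be.
    return ranked[:feed_size]

if __name__ == "__main__":
    history = {"celebrity gossip": 9, "sports": 4, "local politics": 0}
    posts = [
        Post("celebrity gossip", 0.6),
        Post("sports", 0.5),
        Post("local politics", 0.7),  # civically relevant, but no affinity
        Post("celebrity gossip", 0.4),
        Post("sports", 0.3),
        Post("local politics", 0.8),
    ]
    for p in rank_feed(posts, history, feed_size=3):
        print(p.topic, p.predicted_engagement)
    # Result: three gossip/sports posts fill the feed; both local-politics
    # posts are cut, despite their higher standalone relevance scores.
```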

Trap 2: AI is a complex topic to cover

This is so true. As invisible and powerful tools, algorithms and AI have often been connected to a utopian, cyborg-like future and to science fiction, and as such are covered with a veil of tech-mysticism that makes them unattractive for media and citizens to engage with. But even if we wanted to look into this “black box”, as it is often called, we would be faced with a set of ethical, legal, technical and societal challenges. The key challenge lies in the fact that all knowledge about users’ interactions, including the logic that governs media diversity, is locked in and protected by trade secrets and patents owned by social media companies.

Here are some useful, evidence-based academic and community knowledge zones worth consulting when addressing these complex problems: the Personalised Communication project from the University of Amsterdam; the series of sessions on Reimagine the Internet at the Knight First Amendment Institute; misinformation studies from The Harvard Kennedy School Misinformation Review; and the work of Share Lab (visual research), the Digital Freedom Fund, European Digital Rights (EDRi), AI Now, AlgorithmWatch and Access Now.

Trap 3: AI increases inequality and injustice 

And yet, we often tend to focus on the dominant economic position and power of social media platforms. These considerations are key, as they are a front door into a world of problems, among them the perpetuation and amplification of injustice and inequality. There are just a handful of reports on how AI-driven processes harm individuals, groups and especially communities at risk of discrimination. This does not come as a surprise, as we lack a systematic understanding of this emerging phenomenon. To know more, media professionals need to engage with intersectional harms and lift up the stories of marginalized communities and the ways AI shapes their realities. Their diverse perspectives and local contexts can help us pinpoint the effects of the digital ecosystem on media diversity and societal cohesion.

To learn more, reach out to people who have suffered harm and check these relevant resources: gaming AI for good (AI for People), AI narratives beyond the Anglo-American world, AI harms in the Explainable AI project, Amnesty International’s Ban the Scan campaign, MIT’s work on gender harms and digital blind spots, and the essential work of the Algorithmic Justice League and A New Digital Deal by IT for Change.

Trap 4: Beyond the Western hemisphere 

Digital infrastructure, AI and social media platforms have spread across the world. But given that these platforms and other tech giants are predominantly US-based, and thus governed by specific Western values, logic and law, we tend to perceive them primarily from this particular geopolitical perspective. As a result, we have learned a lot about the opaque influence of social media on our democracies, election processes, freedoms and rights. But little is written about the impact of these tech giants on media ecosystems outside the “Western world”. What do we know about Facebook’s business and AI-driven processes in Pakistan or Nepal, Serbia or Greece? It is essential to uncover these stories and put them in a wider perspective, to be able to paint a more nuanced picture of the relationship between social media platforms and media diversity.

Trap 5: AI power imbalance

It is more than an imbalance. Fully operational AI systems produce tons of personal data (pictures, videos, identifiers, etc.) that are then sold to other private entities and reused to produce other technologies, systems and platforms. These economic models are thus purely exploitative, as noted in the introduction. But, as such, they generate a power for social media giants that positions them as “non-state sovereignties, with political influence and pseudo-diplomatic relations with states”.

Meanwhile, governments across the world are relying on this power, data and logic to obtain greater control over public spaces and private lives. Cameras with facial recognition technologies are opaque products of these super-powerful public-private partnerships, in which the state pays private companies with citizens’ money to build these oppressive systems. Taken together, these processes deprive citizens, journalists and media of meaningful oversight and countervailing power. The fact that AI surveillance is outside the public eye, and therefore often underreported, increases its negative impact on media diversity, societies and democracy.

Avoiding traps 

This article has opened the doors to AI-driven processes, their impact on (the erosion of) media diversity, and the ways journalists, citizens and communities could engage with this world of topics. The key takeaway is that we need to diversify the information and knowledge we produce about AI by focusing on real harms and the human aspects of the problem. In other words, to explore the digital ecosystem and, more importantly, to sensitize and educate people globally about the risks and harms that the use of AI poses to our media and diversity, we need to “humanize” AI: write more about topics like information deserts and discriminatory content, uncover the spread of hate, other harmful content and mis/disinformation, and expose how all of these transformative processes are undermining media diversity.


Related Media Diversity Institute Projects

If you are interested in how Artificial Intelligence can be used in journalism and in countering hate, have a look at DTCT: Detect Then Act, an MDI project led by TextGain which deploys artificial intelligence to monitor online hate speech and generate insights that can fuel compelling, data-driven campaigns.