Confronting AI bias in Southeast Asia: safeguarding democracy in the age of automation

By Dr. Nuurrianti Jalli, Assistant Professor of Professional Practice, Oklahoma State University

Artificial Intelligence (AI) has become increasingly prevalent in various aspects of our lives, from virtual assistants to facial recognition systems. While AI has the potential to revolutionise industries and improve efficiency, concerns have arisen regarding its use for information warfare and its impact on political discourse globally. As AI systems are primarily developed by Western countries, the issue of bias and fairness poses significant risks, particularly for non-Western countries that are dependent on these systems.  

AI bias refers to the systematic errors or prejudices that can occur in AI systems due to flawed data, algorithms, or human biases embedded in the development process. In Southeast Asia, AI bias manifests in various forms, reflecting the region’s diverse socio-cultural, economic, and political contexts. 

Consequences and implications for diversity and inclusion 

The consequences of AI bias in Southeast Asia are significant and far-reaching. At an individual level, biased AI systems can lead to unfair treatment, discrimination, and denial of opportunities for marginalised communities. At a societal level, AI bias can exacerbate social tensions, erode trust in institutions, and hinder economic development.  

In the realm of election integrity, AI bias presents a significant risk to democratic processes. Predictive AI models that project election results from skewed data can inadvertently favour certain candidates or political parties, undermining public trust in electoral fairness. For instance, Google's Gemini was criticised as a “woke AI” for presenting data that appeared biased towards liberal perspectives, raising broader concerns about the neutrality of AI systems in political contexts. Additionally, generative AI technologies are increasingly used to fabricate misinformation and propaganda, effectively shaping voter behaviour to the advantage of specific groups.  

This manipulation extends to organised social media campaigns, as highlighted in an Oxford Internet Institute report, which identified evidence of systematic manipulation in several Southeast Asian countries. These findings emphasise the critical need for vigilance and regulatory measures to safeguard the integrity of elections from AI-driven interference. 

One area where AI bias is evident is in facial recognition systems. A study by Buolamwini and Gebru (2018) found that commercial facial recognition algorithms have significantly higher error rates for darker-skinned individuals, particularly women. This bias can result in wrongful arrests and discrimination, as illustrated by the case of Porcha Woodruff, a Detroit woman who was eight months pregnant when she was mistakenly accused of carjacking due to flawed facial recognition technology. This risk is also prevalent in Southeast Asia, where the diverse skin tones of many local populations are often poorly represented in training datasets, which are primarily composed of Western demographics. This lack of representation increases the risk of misidentification, compromising not only individual liberties but also undermining public trust in safety initiatives that utilise such technologies. 
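One practical lesson from findings like Buolamwini and Gebru's is that a single aggregate error rate can mask large gaps between demographic groups; audits instead report metrics disaggregated by group. The sketch below illustrates the idea with invented data (the group labels, identifiers, and figures are purely illustrative, not drawn from any real system):

```python
from collections import defaultdict

def disaggregated_error_rates(records):
    """Compute the misidentification rate separately for each
    demographic group, instead of one aggregate figure that can
    hide large disparities between groups."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted_id, true_id in records:
        totals[group] += 1
        if predicted_id != true_id:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented example: the overall error rate (3/8) would obscure the
# fact that group_b is misidentified twice as often as group_a.
records = [
    ("group_a", "id1", "id1"), ("group_a", "id2", "id2"),
    ("group_a", "id3", "id3"), ("group_a", "id4", "id9"),
    ("group_b", "id5", "id7"), ("group_b", "id6", "id8"),
    ("group_b", "id7", "id7"), ("group_b", "id8", "id8"),
]
rates = disaggregated_error_rates(records)
# group_a: 1/4 = 0.25, group_b: 2/4 = 0.5
```

Auditing deployed systems this way requires representative test data for each local population, which is precisely what is often missing in Southeast Asian contexts.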

Credit scoring and lending decisions represent another critical area where AI bias can exacerbate existing disparities. In countries such as Indonesia and Vietnam, a substantial segment of the population remains underserved by formal financial services, with access rates notably lower than global averages. AI algorithms, when trained predominantly on data from this limited demographic, risk perpetuating inequities by systematically denying credit to individuals engaged in the informal economy. These individuals, who may rely on informal employment or small-scale entrepreneurship, are often invisible to traditional data collection processes, thus further entrenching financial exclusion. 

Large language models (LLMs) also pose unique challenges that can contribute to the marginalisation of linguistic communities in Southeast Asia, a region noted for its rich linguistic diversity with over 1,000 languages. These models often exhibit significant performance disparities across languages. AI-powered translation tools, which are typically developed with a focus on high-resource languages that boast substantial digital resources, frequently underperform for the many low-resource languages prevalent throughout Southeast Asia. This performance gap not only degrades the quality of translations but also restricts access to essential information and services for speakers of these languages. Despite recent advancements with locally developed LLMs, such as Kata.ai in Indonesia and Southeast Asian Languages in One Network (SEA-LION) in Singapore, the most widely used models are still those developed in Western contexts, which often do not adequately cater to the linguistic needs of Southeast Asian populations. 

The subpar performance of these tools can lead to broad social repercussions, including restricted access to governmental services, healthcare, education, and business opportunities. Effectively addressing these biases necessitates a dedicated effort to incorporate a diverse array of linguistic data into AI training sets. Additionally, there must be a strong commitment to developing technologies that truly reflect and enhance the region’s vast cultural and linguistic diversity. 

Factors contributing to AI bias 

The factors contributing to AI bias in Southeast Asia are multifaceted and interconnected. One major factor is the lack of diversity and inclusivity in the AI industry, which is dominated by researchers and practitioners from Western countries. This lack of representation can lead to blind spots and biases in the development of AI systems.  

The historical impact of colonialism continues to shape the dynamics of power and social hierarchy in technology. The legacy of colonialism is evident in the ongoing Western dominance in tech development, which often dictates global technology trends and standards without sufficient regard for local variations. This dominance not only shapes the technological landscape but also perpetuates historical inequalities by embedding these power imbalances in the AI systems deployed across different regions, including Southeast Asia.  

Other factors contributing to AI bias include the digital divide, the brain drain of local AI talent to Western countries, and the lack of comprehensive legal and ethical frameworks for AI development and deployment. No country in Southeast Asia has legally binding regulations specifically for AI, relying instead on national guidelines and roadmaps that provide general direction but no enforceable rules. In Indonesia, concerns about the potential misuse of AI in the 2024 elections, despite an initially hands-off approach by the General Elections Commission, eventually led to the establishment of national AI ethics guidelines to promote responsible technology use.  

In a recent discussion with Gerry Eusebio of De La Salle University in the Philippines, it was highlighted that policies and guidelines in the Philippines often amount to mere lip service, with little to no actual implementation by the government. This issue is not unique to the Philippines; similar patterns of inadequate enforcement and commitment are observed throughout the region. 

Mitigating AI bias 

Tackling AI bias in Southeast Asia requires a multi-pronged approach that prioritises diversity, inclusivity, ethics, and accountability. Firstly, fostering greater diversity and inclusivity in the AI industry is crucial. Targeted recruitment and training programmes, organised by the Association of Southeast Asian Nations (ASEAN), can help achieve this goal. By sharing resources and engaging in open dialogue about mutual concerns regarding this rapidly growing technology, ASEAN member states can collaborate to improve the AI landscape and ensure its safe development in the region. Encouraging the participation of underrepresented groups, such as women and ethnic minorities, in AI education and workforce initiatives can help bring diverse perspectives and experiences to the table, ultimately leading to more equitable and inclusive AI systems. 

Secondly, the development and implementation of ethical frameworks and guidelines must engage inclusive, multi-stakeholder processes, and citizens must be encouraged to take these guidelines seriously. The recently published ASEAN Model AI Governance Framework provides a solid foundation for the region. However, while it presently emphasises the economic dimensions of AI, it is essential for ASEAN to expand its focus to encompass the broader societal impacts of AI across Southeast Asia.  

Existing guidelines could be improved in several ways: by conducting comprehensive impact assessments to identify and mitigate the potential negative consequences of AI on various sectors; by engaging with civil society organisations, academic institutions, and marginalised communities so that AI policies and guidelines are informed by diverse perspectives and experiences; and by establishing clear mechanisms for public participation and feedback in the AI governance process, promoting transparency and fostering trust between governments, technology companies, and citizens.  

By proactively addressing the potential violations and risks associated with AI, ASEAN can demonstrate its commitment to responsible AI development and set an example for other regions grappling with similar challenges. It is essential for Southeast Asian governments, non-governmental organisations, and civil societies to allocate resources towards public awareness and educational initiatives aimed at enhancing digital literacy and critical thinking skills among their citizens. Through empowering individuals to comprehend and interact with AI technologies, Southeast Asian societies can cultivate a culture of informed dialogue and proactive involvement in shaping the future of AI within the region.  

Tackling AI bias in Southeast Asia requires a concerted effort from all stakeholders, including governments, industry, academia, and civil society. By prioritising diversity, inclusivity, ethics, and accountability, and by proactively addressing the potential risks and societal impacts of AI, ASEAN can harness the transformative potential of this technology while safeguarding the rights and well-being of its citizens. Through collaboration, shared learning, and a commitment to responsible AI development, Southeast Asia can emerge as a leader in shaping an AI future that benefits all. 

Pictures from shutterstock.com

Disclaimer: 
The views and opinions expressed in this article are solely those of the author and do not reflect the official policy or position of the Media Diversity Institute. Any question or comment should be addressed to editor@media-diversity.org