

News & Events
#RacismInThePress: An Open Letter

28 February 2019

Country: UK

Over the past year, the Independent Press Standards Organisation (IPSO) has received 8,000 complaints but upheld only one. At a time of rising antisemitism, Islamophobia and other racist and discriminatory acts across the United Kingdom, Europe and around the world, we find this lack of oversight of such reporting both troubling and telling.

Today, Media Diversity Institute and Hacked Off are calling on the Independent Press Standards Organisation to end this #RacismInThePress, and start taking complaints of racism and discrimination more seriously. Please circulate our letter below using the hashtag #RacismInThePress.

 

[Image: the open letter to IPSO]

 
Despite Push for Diversity, the Oscars Have A Long Way To Go

27 February 2019

Countries: US, Global

by: Angelo Boccato

At first, the 91st Academy Awards looked promising for diversity, particularly given the ceremony's history of being called out for its overwhelming whiteness. Marvel Cinematic Universe's Black Panther was nominated for several awards, as was Spike Lee's BlacKkKlansman, representing great leaps in recognition of African-American artists, actors and directors.

Two international films, Alfonso Cuaron's Roma and Pawel Pawlikowski's Cold War, were also nominated for both Best Foreign Language Film and Best Director.

The evening started off well. Regina King won Best Supporting Actress for Barry Jenkins' If Beale Street Could Talk, followed by Ruth E. Carter and Hannah Beachler's victories for Black Panther's costumes and production design. However, it took a surprising turn when Spike Lee lost the Best Picture Oscar to Peter Farrelly's Green Book.

 
The Challenges and Potential of #MIL in the Middle East

22 February 2019

Country: MENA region

Hate speech is a tricky topic anywhere. In the Middle East, it can be particularly polarizing.

“When we talk about hate speech during our trainings, we start with something that everyone can agree on—typically, Islamophobia in Europe,” says Media Information Literacy (MIL) trainer Dr. Naglaa Elemary. While many of the people she trains are familiar with hate speech as a concept, Elemary says that most of them only see hate speech as coming from those who don’t think like them, and see themselves as incapable of spreading it—even though their own views can be equally damaging.

“Next, we replace words like ‘Islam’ or ‘Muslim,’ with other words,” she continues. She says that after a while, the conversation turns to debating hate speech in a local context—in the case of Jordan, where she worked most recently, the most divisive topics were Syrian refugees, representations of women in the media and rising inflation.

 
The Launch of the Silence Hate Media Projects

21 February 2019

Region: Europe

Following Media Diversity Institute (MDI) and Silence Hate’s Media Camp in London in April of last year, fourteen teams and individual journalists from across Europe have been selected to produce stories and campaigns countering hate speech against migrants. After several months of hard work, many of these journalists are in the final stages of their projects.

The first media project within Silence Hate to go public is The Undocumented, a series of podcasts produced by Greek journalists that gives a voice to migrants facing hate speech and discrimination.

“We are really glad to introduce a new format to Greece’s media landscape and society, since podcasts are not that popular in the country,” says Greek journalist Sotiris Sideris, who worked alongside Aristea Protonotariou, the project’s creator and producer.

 
Are Search Engines Racist? It's All About Who Creates the Algorithm

21 February 2019

Country: Global

by: Natalie Holmes

A few years ago, University of Southern California professor Safiya Umoja Noble was shocked to find that Googling “black girls” returned countless pages of pornography. The discovery prompted her to start researching how algorithms reflect and entrench bias, resulting in her book Algorithms of Oppression: How Search Engines Reinforce Racism.

“The platform is made by human beings, and the people who make it are biased,” she said in a recent interview with Vox, explaining how a lack of diversity in STEM fields means that the majority of algorithm creators are white and male and, in the case of Google’s programmers, often knowingly or unknowingly build their own biases into the code.

“They [the creators] code their biases into the search,” she continued. “They have to take responsibility for their creations.”

Algorithms are meant to make life easier by processing large amounts of data to draw conclusions and make predictions, from the shows Netflix recommends to the directions provided by your map app. While it is easy to think of them as objective, unthinking tools that operate outside of society, the opposite is true: algorithms cannot exist without the humans who create them. Any programmer can consciously or unconsciously code their own biases into a system, and such systems now decide everything from who gets a job or a loan to who is granted parole.

It is not just the coders who are to blame. Internet users’ online interactions form the basis of the data that trains algorithms, meaning that our own prejudices and assumptions are woven into the fabric of our digital lives. In 2014, a Google user searched for “english major who taught herself calculus.” The results suggested the correct wording was “english major who taught himself calculus.” Following a backlash on Twitter, Google explained the reasoning behind its seemingly sexist suggestion: the phrase “taught himself calculus” is searched for about 70 times more often than “taught herself calculus,” so the algorithms assumed the former was correct.
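Google's explanation amounts to a frequency vote among near-identical queries: the more common phrasing wins, whatever its social assumptions. A minimal sketch of that idea follows; the query counts and the one-word-difference rule are invented for illustration, and Google's actual system is vastly more sophisticated.

```python
# Hypothetical query log: counts are invented for illustration only.
QUERY_COUNTS = {
    "english major who taught himself calculus": 700,
    "english major who taught herself calculus": 10,
}

def word_diff(a: str, b: str) -> int:
    """Count the positions at which two equal-length phrases differ."""
    wa, wb = a.split(), b.split()
    if len(wa) != len(wb):
        return max(len(wa), len(wb))
    return sum(x != y for x, y in zip(wa, wb))

def suggest(query: str) -> str:
    """Return the near-identical logged query with the highest count."""
    best, best_count = query, QUERY_COUNTS.get(query, 0)
    for candidate, count in QUERY_COUNTS.items():
        # "Near-identical" here means: differs in at most one word.
        if word_diff(query, candidate) <= 1 and count > best_count:
            best, best_count = candidate, count
    return best

print(suggest("english major who taught herself calculus"))
# The majority phrasing wins, reproducing the biased suggestion.
```

The sketch shows why no individual line of this code is "sexist": the bias enters entirely through the data the algorithm votes over.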

In this way, algorithms solidify biases, compounding their negative effects and rendering inequality more difficult to deconstruct.

Google has since tweaked its search algorithms, and continues to do so as issues are brought to light. However, such amendments patch the problem rather than solve it, because of inherent bias in the data itself. In 2015, the search engine came under fire when its Photos app algorithms mistakenly auto-tagged Black people as gorillas. Three years later, a Wired investigation revealed that the word gorilla, along with some other primate terms, is now simply censored, returning no results. Because the dataset itself was biased, adjusting the algorithm did not fully solve the problem.

Facial recognition is a growing concern for both privacy and social justice advocates. When we upload personal photos to the Internet, it’s not always possible to predict how they will be used. In April 2018, Forbes revealed that a private surveillance company was compiling a vast database of facial recognition data from popular social media sites, and using that information to make judgements and predictions about individuals. Due to a lack of diversity in datasets that train facial recognition algorithms, the technology has been shown to perform particularly poorly with people of color, especially women. As a consequence, “there is a high chance this technology would regularly misidentify people as terrorists or criminals,” Jennifer Lynch of the Electronic Frontier Foundation told Forbes.

When it comes to tackling biases and making digital media more inclusive, increased diversity in coding is a key piece of the puzzle — even Google’s top executives have acknowledged that the field of AI “doesn't have the diversity it needs.” Yet it is only by creating an inclusive culture that companies can move towards building fairer systems. Or, as Joy Buolamwini of the Algorithmic Justice League puts it, “whoever codes the system embeds her views. Limited views create limited systems.”

Meanwhile, there remains a huge knowledge gap. Only now, in 2019, are journalists and other professionals waking up to the necessity of understanding the workings and influence of algorithms. Many educators argue that algorithm literacy is essential among the wider population, too. According to Professor David Lankes of the University of South Carolina, “unless there is an increased effort to make true information literacy a part of basic education, there will be a class of people who can use algorithms and a class used by algorithms.”

That’s why it’s crucial for both the media and its consumers to gain a critical understanding of how algorithms are built, and how they interact with personal data. For children and adults alike, digital literacy creates a positive feedback loop, making the programmers, journalists and Internet users of today and tomorrow more aware of algorithmic bias — and of the real-world power structures from which it emerges.
