Hate Speech in Lebanon: The Shortcomings and Responsibilities of Social Media Platforms 

As part of ‘Inclusive Media, Cohesive Society’, a two-year project aiming to foster a media sector that contributes to a pluralistic, inclusive and cohesive society in Lebanon, the Media Diversity Institute (MDI) and the Samir Kassir Foundation (SKF) evaluated the hate speech policies of four social media platforms – Facebook, Twitter, YouTube, and TikTok.

The research focused on hate speech policies applied to both user-generated and advertising content, assessing the policies’ potential effectiveness, transparency, user-friendliness, fairness, and respect for freedom of expression and the right to non-discrimination. It also sought to document any significant disparities between the Arabic and English versions of these policies.

MDI and SKF partnered with Ranking Digital Rights, whose methodology benchmarks companies in the ICT sector using indicators that set high yet attainable standards for corporate transparency and for policies aligned with internationally recognized human rights standards.

The key findings are:

  • There was no evidence to suggest that any of the companies owning the platforms included in the study conducts due diligence in Lebanon. None of the platforms conduct robust human rights impact assessments to understand how their policy enforcement processes affect the fundamental rights of their users in Lebanon, particularly communities at higher risk of experiencing hate speech, such as migrant workers, refugees, the LGBTQ+ community, and women. Consequently, they fail to address and mitigate the negative impacts arising from these risks.
  • In general, all four platforms have the majority of their available policies translated into Arabic. When an Arabic version of the policy is available, it is typically a direct translation into classical Arabic with little to no difference from the English version.  
  • Twitter’s Terms of Service, which govern users’ access to and use of the service, are not provided in Arabic. This creates a barrier for users in Lebanon (and elsewhere) who are only fluent in Arabic, as they cannot give informed consent when signing up for the service.
  • Among the platforms studied, TikTok was the only one that did not explicitly and clearly commit to upholding human rights. Its policy did not encompass freedom of expression and information, and although it expressed a commitment to protecting the right to privacy, this commitment was not grounded in international human rights standards.
  • Under the leadership of Elon Musk, Twitter has experienced setbacks in terms of freedom of expression and protection from hate speech.
  • The platforms lacked transparency regarding their use of algorithms to curate, recommend, and rank content. While TikTok disclosed more details than its counterparts, including the variables that influence its ranking systems and the options users have to control those variables, this information was not available in Arabic. Facebook only provided information about how its Feed curates and recommends content using algorithmic systems, without specifying how it uses these systems in other areas such as friend recommendations and search results.
  • Among all the platforms in the study, Twitter and YouTube demonstrated the highest level of transparency regarding their ad content and ad targeting policies. 
  • All platforms publish data on government demands they receive to restrict content and accounts, with YouTube providing the most comprehensive data, including information on the types of subject matter associated with these demands.  

You can read the full report here: Hate Speech in Lebanon: The Shortcomings and Responsibilities of Social Media Platforms