Two truths and a lie: An experiment to evaluate behaviours in identifying online misinformation

UNDP KENYA
Dec 16, 2021


Identifying Online Misinformation

By Njoro, Lillian; Canagarajah, Ruth; Ogutu, Brenda; Mũgĩ, Nelson; Too, Gideon

‘Information pollution’, where facts and figures become a source of division in a society, has a huge impact on behaviour, social cohesion, and public trust. When information is false or misleading but spread without the intent to harm, it is misinformation. When it is spread deliberately to cause harm or to benefit particular interest groups, it is disinformation. Both types of information are often served to audiences alongside, and with the same weight as, the truth.

To successfully identify and flag mis- and disinformation, audiences must critically evaluate any data or knowledge they come across. However, given the sheer abundance of information available, especially online, it is much easier for people to fall for false information than to sift through and objectively analyse it themselves. The ongoing COVID-19 pandemic is a case in point of how these two types of information overlap and work concurrently, whether in how the disease is experienced or in the socially divisive way it has been tackled in some locations.

Approach

UNDP Accelerator Lab Kenya has had an active interest in the issue of information pollution, first tackling it through the lens of the pandemic response in 2020 and thereafter zooming out to explore governance, peace and social cohesion as a whole. In 2021, the Lab partnered with the Busara Center for Behavioral Economics on a behavioural science experiment to crowdsource harmful online content, in collaboration with the Healthy Internet Project (HIP) incubated at TED. The HIP plug-in is an open-source web browser extension that allows users to flag content online anonymously, curbing the spread of lies, abuse and fear-mongering while uplifting useful ideas on the internet.
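To make the flagging flow concrete, below is a minimal sketch of how a browser extension of this kind might submit an anonymous flag for the current page. This is not the actual HIP implementation: the endpoint URL, category names, and payload shape are assumptions for illustration only.

```typescript
// Minimal illustrative sketch, not the actual HIP extension code.
// The endpoint, categories, and payload shape are hypothetical.

type FlagCategory = "misinformation" | "abuse" | "fear-mongering" | "worthwhile";

interface FlagReport {
  url: string;            // page being flagged
  category: FlagCategory; // what the user believes the content to be
  flaggedAt: string;      // ISO timestamp; no user identifiers are included
}

// Hypothetical collection endpoint for crowdsourced flags.
const FLAG_ENDPOINT = "https://example.org/api/flags";

async function submitFlag(url: string, category: FlagCategory): Promise<void> {
  const report: FlagReport = {
    url,
    category,
    flaggedAt: new Date().toISOString(),
  };

  // The report carries no account or device identifiers, so flags stay anonymous.
  const response = await fetch(FLAG_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });

  if (!response.ok) {
    throw new Error(`Flag submission failed with status ${response.status}`);
  }
}

// A toolbar button handler in the extension might then call, for example:
// submitFlag(window.location.href, "misinformation");
```

The key design point illustrated here is that the report carries no account or device identifiers, which is what allows flags to remain anonymous.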

We had three key learning questions for this experiment:

  1. What were the end-user experiences and behaviours of using the HIP tool to identify and report harmful content online?
  2. What was the value proposition of the HIP tool as an innovative approach for tackling online mis- and disinformation?
  3. What was the value proposition of the volunteer-driven crowdsourcing approach to tackle mis- and disinformation and how did that fit into existing ecosystems?

We conducted a live experimental demonstration of the HIP plug-in to understand potential users’ motivations, experiences, and practices in using the platform to flag misinformation.

Starting with a quantitative live experiment, we observed the natural behaviours (such as user experience, motivations, accuracy, and demographic trends) of 128 users on the platform, followed by a qualitative exercise with 44 of these users. Respondents were drawn from five counties (Kajiado, Kiambu, Machakos, Murang’a, and Nairobi) and represented diverse age groups, ethnic groups, and levels of education. The qualitative exercise, through in-depth interviews and a focus group discussion, sought context-specific insights into user motivations. Participants were then classified as active, moderate, or low users based on how frequently they used the platform. Most of our study participants (109) were low users, while only three were active users.
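For illustration, classifying participants into usage levels by flagging frequency could look like the sketch below. The post does not state the study’s exact cut-offs, so the thresholds here are assumptions.

```typescript
// Illustrative sketch of classifying participants by flagging frequency.
// The post does not state the study's exact cut-offs, so the thresholds below are assumed.

type UsageLevel = "active" | "moderate" | "low";

function classifyUser(flagCount: number): UsageLevel {
  if (flagCount >= 10) return "active";  // assumed threshold
  if (flagCount >= 3) return "moderate"; // assumed threshold
  return "low";
}

// Summarise a cohort of participants by usage level.
function summarise(flagCounts: number[]): Record<UsageLevel, number> {
  const summary: Record<UsageLevel, number> = { active: 0, moderate: 0, low: 0 };
  for (const count of flagCounts) {
    summary[classifyUser(count)] += 1;
  }
  return summary;
}
```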

Key findings

Overall, most participants deemed HIP an appropriate tool for stopping the spread of misinformation. However, connectivity challenges and infrequent encounters with harmful content were cited as reasons for the platform’s low usage. Participants also mentioned the lack of feedback on their flagged content, not having a computer on which to access the HIP tool, and infrequent internet use.

Types of websites flagged by respondents on the HIP tool

All respondents agreed that information should be verified before sharing, but lacked the education and awareness to do so effectively themselves. Most relied on judgment or intuition to decide whether the information they came across was harmful, or checked whether it could harm them or others in society. Interestingly, despite the tool’s intent being to stop the spread of misinformation, a whopping 75% of participants used the tool to flag worthwhile content. This was due to concerns that flagging negative content: 1) was more subjective; 2) might lead to harmful repercussions for those flagged; and 3) was personally risky, especially with regard to political content.

Anonymity naturally became a concern: users feared they would be identified through their use of the platform, which increased scepticism and aversion to using HIP despite assurances that all flagged content would remain anonymous.

As for user accuracy, we engaged the support of PesaCheck, Africa’s largest indigenous fact-checking organisation, to validate a sample of the claims flagged during the study. Users tended to flag content as misinformation because of negative sentiment, such as dislike of a topic, rather than because it was actually false. It was also difficult to establish what constituted misinformation among the flagged content, because users rarely specified exactly what was false or misleading on the websites they flagged.

Finally, only 40% of the 128 study participants flagged more than one item using the HIP plug-in, which limited the diversity of the data. Given these limitations, the results cannot be generalised. Even so, we can conclude that volunteer-driven identification of misinformation will remain limited as long as users do not feel safe flagging content and the accuracy of their flags stays low.

Image by Markus Spiske from Pexels

Recommendations

To improve the use and functionality of platforms like HIP, the following recommendations may be considered:

  1. Ensure anonymity: Provide clearer assurances of anonymity to address the risks users perceive in reporting misinformation.
  2. Clearly define misinformation: Provide a detailed, concrete definition of misinformation within the tool to increase the accuracy of user reports.
  3. Remove the “worthwhile” flag: This would sharpen the plug-in’s purpose. At the same time, offer more flagging options with simplified definitions, such as “cruelty, violence or intimidation” instead of “abuse or harassment”.
  4. Add a required “misinformation identification” field: This makes fact-checking easier, since users specify the exact content they regard as misinformation, for example particular phrases or sentences rather than a link to a full article (a sketch of what such a flag record might look like follows this list).
  5. Develop a phone version: This is necessary to improve the tool’s responsiveness, incentivise active users, enable social media flagging, and support other languages. It is particularly pertinent in a country like Kenya, where the majority of the population accesses the internet via mobile devices.
  6. Provide a simple system to demonstrate how feedback is being actioned: This not only increases usage of the tool but also shows users that their behaviour makes a difference. It may be done by connecting fact-checkers to the database of flagged content and feeding their findings back to users.
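Taken together, recommendations 2 to 4 and 6 amount to a richer flag record. The sketch below shows one hypothetical shape such a record could take; the field names, categories, and statuses are assumptions for illustration, not the actual HIP schema.

```typescript
// Hypothetical shape of an improved flag record reflecting the recommendations above.
// Field names, categories, and statuses are illustrative, not the actual HIP schema.

type SimplifiedCategory =
  | "misinformation"
  | "cruelty, violence or intimidation"
  | "fear-mongering";

type ReviewStatus = "pending" | "confirmed" | "rejected";

interface ImprovedFlagReport {
  url: string;                   // page being flagged
  category: SimplifiedCategory;  // simplified options; no "worthwhile" flag (rec. 3)
  misinformationExcerpt: string; // required phrase or sentence the user believes is false (rec. 4)
  reviewStatus: ReviewStatus;    // updated by fact-checkers and fed back to users (rec. 6)
  flaggedAt: string;             // ISO timestamp; still no user identifiers (rec. 1)
}
```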
Image from Pixabay

Read the final experiment report here for more details on the experiment’s background and findings. Feel free to reach out to the Busara Center [contact@busaracenter.org] or UNDP Accelerator Lab Kenya [acceleratorlab.ke@undp.org]. We look forward to connecting further with key players and interested parties on this topic.
