Platform Problems and Regulatory Solutions: Findings from a Comprehensive Review of Existing Studies and Investigations

Summary

"While the digital ecosystem was and is central to expanding the freedom of expression and access to information of users, these same rights are threatened by online lies and hatred."



This policy brief explores the factors that account for disinformation and hate speech online and sets out several regulatory solutions for governing online content that are aligned with human rights. It urges a middle way between solo-platform regulation and solo-governmental regulation, analysing hybrid possibilities that include effective multi-stakeholder roles in the making, monitoring, and review of rules for platform operations. The brief is part of the United Nations Educational, Scientific and Cultural Organization (UNESCO) series World Trends in Freedom of Expression and Media Development.



The brief draws on published research commissioned by UNESCO from Research ICT Africa (see first item under Related Summaries, below), which consulted more than 800 information sources. The research served as an evidence-based contribution to UNESCO's global dialogue around guidelines for regulating platforms under the "Internet for Trust" initiative (see Related Summaries, below, for more information about these guidelines).



The brief begins by explaining why addressing online disinformation and hate speech is important and how these problems carry serious implications for human rights, trust, and safety under international human rights law and standards. The consequences include: feeding fear, hatred, and violence, thereby jeopardising peace and safety; intimidating others into self-censorship; and discrediting and displacing what is truthful, thereby weakening societal trust. In addition, as documented in various studies, online hatred, including online misogyny, racism, Holocaust denial and antisemitism, and xenophobia, is growing. There is also widespread concern about online falsehoods, for example about climate change or public health issues.



Key messages emerging from the research, as highlighted in the report, are:

  • The mutually-reinforcing determinants of the problems of online hate speech and disinformation are:
    1. "Attention economics";
    2. Automated advertising systems;
    3. External manipulators;
    4. Company spending priorities;
    5. Stakeholder knowledge deficits; and
    6. Flaws in platforms' policies and in their implementation.
  • How platforms understand and identify harms is insufficiently mapped to human rights standards, and there is a gap in how generic policy elements should handle local cases, competing rights, and business models when tensions arise.
  • To date, platforms' enforcement of their own terms of service has shown grave shortfalls, while attempts to improve outcomes by automating moderation have their limitations.
  • Inequalities in policy and practice abound in relation to different categories of people, countries, and languages, while technology advances are raising even more challenges.
  • Problems of "solo-regulation" by individual platforms in content curation and moderation are paralleled by harms associated with unilateral state regulation.
  • Many countries have laws governing content online, but their vagueness fuels arbitrary measures by both authorities and platforms.
  • Hybrid regulatory arrangements can help by elaborating transparency requirements and setting standards for mandatory human rights impact assessments.

Based on the analysis, the report cites the following recommendations:

  • Guidance is needed for all players tackling online content issues to work in full alignment with international human rights law. This should include ensuring the independence of state-linked regulatory authorities, structuring meaningful multi-stakeholder involvement in them, and empowering civil society to demand increasing levels of accountability across the board.
  • Mainstreaming human rights, as the appropriate vantage point for assessing trust and safety across the interplay of platforms' policy rules, practices, business models, and technology, can help to entrench international standards in all regulatory arrangements and outcomes.
  • Statutory authorities should not seek to take over the formulation of platforms' content policies or the moderation work being done. Instead, they should require companies to meet their own consumer terms of service in full, as well as to follow broader agreed codes for policy standards and process benchmarks.
  • Institutionalised multi-stakeholder roles can be prescribed, in principle, at all levels of rules (creation, enforcement, monitoring, oversight, and review) and can apply to solo-, self-, and co-regulatory mechanisms.
  • Complementing any rules about content issues is the need for rules about privacy and personal data protection. Likewise, other rules can profitably provide for a pluralism of platforms, including a range of non-profit business models.
  • Guidance can encourage modularity approaches, such as for elections, as well as call for more cooperation between stakeholders about regulatory modalities and experiences within and across jurisdictions.
  • Platform companies can be legally compelled to undertake scenario planning and conduct due diligence on the full range of human rights risks anticipated from upcoming events, trends, and new products, and to detail how they will prevent or mitigate those risks. In this way, platforms will be required to better anticipate technology evolutions and to provide for safety-by-design and user autonomy-by-design.
  • The roles of media, non-governmental organisations (NGOs), tech employee bodies, whistle-blowers, and researchers should be supported as positive elements in the wider governance ecosystem.
  • Recognising the importance of independent monitoring of harms and attempted solutions, there should be mandatory (and tiered) access to platform data, with due respect for privacy protection. This can enhance the knowledge needed by stakeholders for developing information as a public good within platform space.
Source
UNESCO Internet for Trust Update, July 10, 2023.