Perceptions and Realities in Assessing Media Landscapes: The African Media Barometer (AMB) in Practice
fesmedia Africa
In 2004, the Media Project of the Friedrich-Ebert-Stiftung in Africa and the Media Institute of Southern Africa (MISA) began developing the African Media Barometer (AMB). Over the past five years, the AMB has sought to provide a biennial, in-depth, and comprehensive description of the media situation in 25 African countries. This short paper reflects on the methodological and practical problems in developing and implementing the African Media Barometer. According to the report, though the tool has some shortcomings, the need for analysing media landscapes as a prerequisite for effective media development and successful democracy promotion remains beyond doubt.
The report states that by 2005 media development had become an accepted instrument in the wider context of democracy promotion. International organisations such as UNESCO and the World Bank see a diverse and independent media as a precondition for the effectiveness of their good governance programmes. Free media are also increasingly recognised as a powerful agent of change. Yet what was and is hampering the development of effective approaches to media development is a general lack of data, a gap the AMB attempts to fill.
The methodology of AMB was based on the following key principles:
- The AMB could only be a qualitative tool, because the organisers wanted media practitioners and representatives of civil society to debate and assess the media landscape in their own country.
- The AMB had to be a home-grown instrument, to counter the argument that once again Western observers, with their own concepts and preconceived notions, would be judging African practices on the basis of their own interests. The AMB had to be based on African standards, allowing civil society groups and media practitioners to measure the results of their AMB report against the declarations and protocols signed or accepted by their own governments.
- The AMB had to reflect the FES/MISA focus on media policy, regulation, and public broadcasting since the organisations wanted information and data for their particular areas of work.
- The AMB results had to be practical and define points of entry for FES/MISA and other media or civil society organisations.
The final methodology for the first generation of AMBs (2005-2008) can be summarised as follows: Every two years a panel of experts, consisting of at least five media practitioners and five representatives of civil society, meets to assess the media situation in their own country. For two days, they discuss their national media environment along 42 predetermined indicators, each of which they score in an anonymous vote on a scale from 1 to 5. The indicators are formulated as goals derived from African political protocols and declarations. The scoring takes place after the discussion and should reflect the personal conclusion each panellist draws from the preceding exchange. The discussion and scoring are moderated by an independent consultant, who edits the draft report written by the rapporteur. After the panellists have had the chance to comment on the draft and submit suggestions and corrections, the moderator finalises the report. The whole panel thus agrees that the report is a fair reflection of the discussion, without subscribing to every aspect or argument in it. The final, qualitative report summarises the general content of the discussion and provides the individual scores, the average score for each indicator, the average score for each sector, and the overall country score.
According to the report, what distinguished the results of the AMB in a positive way from other academic studies of the media situation was the systematic inclusion of the "implementing factor." Panellists were told to score not the legal but the real situation – to judge the practice, not the promises. The report would state the legal situation, but then describe the degree or lack of implementation of a particular law, which would also be reflected in the scoring. For example, whereas many academic studies would simply list the number of community radio stations from government lists or UNESCO reports, the African Media Barometer would also state these numbers, but check them against the collective and practical experience of the panellists. Are these community radio stations still broadcasting? Have they been taken over by local government as propaganda institutions? What kind of content are they actually broadcasting, and how many of them still deserve the term "community radio"?
A review in 2008 found some shortcomings in the methodology. For example, sometimes participants could not agree on numbers or came unprepared. Sometimes they quoted from studies which they did not bring along, or from sources that could not be traced. There were occasional divergences in scoring that could not be explained by differing opinions or a controversial debate. Sometimes panellists did not master the sophisticated phrasing of the indicators. Sometimes they did not understand, or did not agree with, the basic assumptions of the methodology. In most cases, this was due to a lack of capacity, particularly among the representatives of civil society. In some countries, the rapporteur lacked the necessary skills or proved unreliable, so that the moderator had to step in to write the report. The ranking of countries originally envisaged and tried proved untenable.
Following this review, the organisers re-designed the tool to improve the input of facts and figures into the discussion and to standardise the procedure, reducing the "subjectivity factor" in debating, scoring, report writing, and editing. This included extending the indicators to cover recent developments in communication technology; feeding more factual information into the discussion to reduce anecdotal evidence; intensifying training to ensure a better and more reliable performance by the teams of moderator and rapporteur; mandating an FES supervisor at each AMB to guarantee quality control; and adding an executive summary to each AMB report, written by the moderator but agreed to by the panellists. Most of the newly assigned tasks are written down in a 20-page "Moderator's Guide" to ensure a more standardised practice from country to country and year to year.
The report concludes that the AMB reports add perceptions to the measurement of the media situation. If one wants to know whether there is freedom of expression without fear, or to what extent self-censorship is practised, purely quantitative measurement tools fail to provide the whole picture. And if one also wants to capture the "implementing factor" in assessing the framework of media regulation, only a qualitative analysis will do. Mining the "quarry of information" in the growing library of AMB reports will remain a challenge for the coming years. In the end, the African Media Barometer should be read as a continuous study of the African media landscape with all its dark shades and bright colours – and with its recommendations to be acted upon.
Source: fesmedia Africa website, 23 August 2010.