TNRC Guide | Researching Social Norms and Behaviors Related to Corruption Affecting Conservation Outcomes - Part 2
Researching Social Norms and Behaviors Related to Corruption Affecting Conservation Outcomes
Research Guide Part II: Project Monitoring & Impact Assessment
This TNRC Guide shares practical knowledge for program designers and implementers to reduce corruption’s impact on conservation. It is Part II of a two-part series.
Abbreviations
Acronym | Meaning |
EIA | Environmental Investigation Agency |
IDI | In-Depth Interview |
IWT | Illegal Wildlife Trade |
NRM | Natural Resource Managers |
OECD | Organization for Economic Cooperation and Development |
SNBC | Social Norms and Behavior Change |
TNRC | Targeting Natural Resource Corruption |
U4 | U4 Anti-Corruption Resource Centre at the Chr. Michelsen Institute |
UNODC | United Nations Office on Drugs and Crime |
USAID | United States Agency for International Development |
Glossary
Term | Meaning |
Adaptive management | An approach to project management that ensures adjustments to plans or tactics are made based on early results and monitoring data |
Attribution | The process of assigning any outcomes/changes recorded (either positive or negative) to project activities and interventions |
Baseline research | Key values determined before the start of an initiative, to provide an anchor point against which progress can then be measured |
Big data analysis | Answering social science research questions using amounts of digital data (like social media engagement or digital transactions) so large as to require specific techniques (like machine learning algorithms) (Foster et al. 2021) |
Causality | In impact evaluation: The use of certain techniques to identify whether the project influenced the outcomes or differences being measured |
Computer Assisted Telephone Interview (CATI) | A telephone survey technique in which an interviewer administers a questionnaire following a script on a computer; the system may also select and dial respondents’ numbers |
Corruption | The misuse of entrusted power for private gain |
Critical discourse analysis | A qualitative analytical approach for describing, interpreting, and explaining the meaning of language in the context in which it is used, rather than just considering the words and grammar involved |
Desirability, Feasibility, Viability assessments | An assessment framework in Design Thinking / Human Centered Design processes that can be used beyond product development |
Dipstick surveys | A one-time poll that asks open-ended questions to solicit opinions, usually focused on a single issue of research interest |
Direct observation | A method of collecting information in which the evaluator watches the subject in their usual environment without altering it |
Doorstepping | An opportunistic approach to gathering information from people in their homes, without notifying them in advance |
Endline research | Research conducted at the end of the project/initiative |
Enumerator | A person conducting research, often a census-style survey |
Ethics | Moral principles that govern behavior or the conduct of an activity |
Ethnography | The study of the culture and social organization of a particular group |
Focus Group | A group interview involving a small number of demographically similar people or participants who have other common traits/experiences |
Formative Insight | Information that informs the design/focus of an initiative |
Indirect methods | Gathering information through means other than direct observation |
Mixed methods | Combining several research methods to improve the accuracy and reliability of results |
Objectively verifiable | Data that can be independently verified in some way; i.e., facts |
Observation based methods | Research techniques that seek to observe change, rather than interact with people to seek their opinion or stated claims about it |
Online survey | Internet based questionnaires and/or polls |
Opinion-based methods | Research techniques that gather insight by interacting with people and asking them about their knowledge, attitudes, and practices |
Primary research | A process of research that involves gathering data that has not been gathered before |
Qualitative research | The process of collecting and analyzing non-numerical data |
Quantitative research | The process of collecting and analyzing numerical data |
Sampling processes | The process of selecting the group from which data will be collected in research |
Secondary research | Research that involves drawing together a range of existing data |
Semi-structured In-Depth Interview (IDI) | A 1:1 fluid discussion centered around several open-ended questions |
Social desirability | The tendency to answer questions in a manner that will be viewed favorably by others and/or to hide the truth if it is socially “unacceptable” |
Social listening | The process of using keywords to assess what is being said about a company, individual, product, or brand, on the internet |
Summative evaluation | Research conducted at the end of the project to summarize impact, achievements, and lessons learned |
Orientation and overview
Corruption behaviors are complex, so research to identify ways to address them can be challenging (Schwickerath, Varraich, and Smith 2017). A hypothetical example illustrates the point: Anti-corruption practitioners interested in reducing bribery at border checkpoints known for high volumes of illegal timber trade need to understand where best to invest resources to achieve meaningful impact. Options could include interventions that reduce social expectations and community tolerance of giving bribes, initiatives that promote changed behavior by appealing to professionalism and codes of conduct among potential bribe takers, or interventions that encourage or embolden potential bribe givers not to give money when requested. Each of these responses to a corruption problem focuses on different actors and seeks to influence different social norms (SN) that might motivate a behavior change (BC).
Not every corruption problem may be right for such SNBC approaches, of course, and alternatives or accompaniments could include more transparency, increased scrutiny and oversight, or the introduction of technology (Mgaza 2022). Identifying whether to use SNBC or these more “structural” amendments, and if SNBC is chosen, then where, how, and with whom to engage, will depend on multiple factors. These might include prevailing practices of bribing enforcement officials, along with contextual factors that might influence the demand for bribes (such as low salaries, few rewards or other incentives for better professional standards, or a lack of recognition or pride for protecting community resources and stopping illegal wildlife trade), and the perception of personal risk among community members interested in combatting corruption.
This Resource Guide introduces some foundational principles and common considerations for research into conservation-focused anti-corruption actions, complementing a companion Guide on baseline and formative research. The Guide is not a manual; authoritative materials like the “Manual on Corruption Surveys” (UNODC 2018) are already available to support anti-corruption research.
Instead, this Guide presents two “packages” of research that introduce non-specialists to some of the core approaches and methods for assessing and adaptively managing SNBC interventions, and for understanding whether they have achieved their overall aims, goals, and ambitions.
Overarching principles
As corrupt practices are by their nature sensitive and usually illegal, some overarching principles should be considered when conducting related SNBC research. The principles currently available in the official CITES Guidance on Strategies to Reduce Demand for Illegal Wildlife Products might be considered and applied to research on corruption facilitating IWT as well. The aim of such principles would be to mitigate any risks and deliver reliable, robust, quality insights that can inform the decisions of relevant authorities.
Building on Economic and Social Research Council guidance, principles include the following:
- Research should aim to maximize conservation benefit and minimize personal risks. Those conducting research into sensitive or illegal behaviors should have comprehensive safeguards and risk mitigation strategies in place. Inexperienced researchers could expose themselves and their subjects to risk (Nature 2022).
- The rights and dignity of individuals and groups should be respected (Nature 2022).
- Participation should be voluntary and informed. Respondents should be aware how the information they are providing will be used and should participate freely and without coercion.
- Research should be conducted with integrity and transparency. For example, questions should be framed in a fully neutral way, and should not lead the respondent to answer in a certain way (like unintentionally leading respondents to agree with the researcher’s expectations).
- Independence of research should be maintained. A clear separation must be maintained between people conducting the research and those who are the subjects of it. Otherwise, conflicts of interest could compromise results. Where conflicts of interest cannot be avoided, they should be disclosed and managed.
For those interested in understanding more about these topics or seeking more detailed guidance on how to approach research relevant to corruption, useful materials in addition to the UNODC Manual include:
Research packages
The rest of this document introduces the two research “packages” that can be employed to monitor project progress against outputs, outcomes, and the theory of change, and to evaluate impact. The research in these two packages would chronologically follow the three packages introduced in the companion Research Guide on baselines and formative assessments, so these packages are numbered consecutively.
“Package 4,” overleaf, focuses on tracking project progress as implementation proceeds (“monitoring”). “Package 5” focuses on assessing the achievements, success factors, and lessons learned as a result (“evaluation”).
To ensure adequate veracity of insight and appropriate attribution of any impact, both research packages should be undertaken as fully as possible. Where time and resources are constrained, however, not all of the methods shown are compulsory. This is explored further in subsequent sections; meanwhile, Table 1 provides a summary:
Table 1. Summary of Research Packages (numbering continuing from the first Guide)
Package | Purpose | Points of insight | Relevant methods |
4 | Monitor how the project is progressing: Performance measurement to understand if the project is reaching the people intended/engaging the audience or stakeholders required. Are the promised outputs being delivered? Outcome tracking to assess progress towards the ultimate outcomes, and what is working well, what less well, and what can be done to adaptively manage or “course correct” where required. | Interim milestones set against the project workplan or logframe, so that cumulative progress can be tracked, or a snapshot taken, at key points in time. For performance measurement, absolute values and quantifiable aspects such as the number of partner MOUs signed, events delivered, target audience reached, and/or publications and other outputs produced. For outcome tracking, typically social research conducted with the target audience (or those that influence them) to track fluctuations in perception data and changes in knowledge, attitudes, or practice. | Indirect measures like consistent tracking of the number of incidents reported to corruption hotlines per unit of time, the number of court cases resulting in successful prosecutions, media hits, etc. Direct measures like cumulative outreach data around the number of target audience engaged; opinion-based data on the perspectives, knowledge, and attitudes the audience report having as a result; and observation-based data gathered through techniques such as social listening. |
5 | Evaluate project achievements and learning: Overall performance and impact assessment, with a view to ensuring changes observed are attributable to the project activities, and success factors and lessons learned are captured. | The indicators and performance and impact measures declared through the project goal, aims, objectives, and outcomes. For attribution of project impact, researching and comparing results between a treatment and control group. | The same quantitative (e.g., online surveys) and qualitative methods (e.g., semi-structured interviews) used for the baseline (Package 2), while pursuing data that is objectively verifiable (as opposed to opinion-based) where possible. Indirect measures (see above), provided any changes can be attributed to project efforts. |
Package 4
As performance measurement is relatively straightforward and likely to be included in project monitoring already, this section focuses on primary social research methods to test the fidelity of the intervention and track progress against outcomes. These methods include a mix of direct and indirect observation- and opinion-based measures. Combining these approaches (i.e., using “mixed methods”) produces insight into the extent to which delivery is achieving the desired impact and where adaptive management is required.
For both performance measurement and outcome tracking, monitoring should occur throughout the project. Some outcome tracking methods are, however, more time-consuming and costly to deploy and may only be feasible during key project milestones like the mid-point or end of a financial year. Milestones would usually be set when implementation commences, and they should be captured in a Monitoring, Evaluation and Learning (MEL) plan. All team members should have access to this plan and use it as a core reference point and project management tool, to ensure efficient, focused, and effective project delivery.
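To make this concrete, the following is a minimal sketch of how an indirect monitoring indicator, such as the number of hotline reports received per month, might be tracked between MEL milestones. The file name and column names are hypothetical placeholders, and any real implementation would follow the project’s own data management and confidentiality arrangements.

```python
import pandas as pd

# Hypothetical log of corruption hotline reports, one row per report,
# with a "report_date" column in YYYY-MM-DD format.
reports = pd.read_csv("hotline_reports.csv", parse_dates=["report_date"])

# Count reports per calendar month to produce a simple trend line.
monthly_counts = (
    reports.set_index("report_date")
           .resample("MS")      # one bin per calendar month
           .size()
           .rename("reports")
)

# A three-month rolling average smooths short-term noise so the trend can be
# reviewed against the milestones recorded in the MEL plan.
trend = monthly_counts.rolling(window=3, min_periods=1).mean()

print(pd.DataFrame({"reports": monthly_counts, "rolling_mean": trend}))
```

A trend like this is only an indirect signal; as noted above, it should be triangulated with direct opinion- and observation-based data before conclusions are drawn.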
Figure 1. Conservation Measures Partnership Adaptive Management Cycle
Triangulating data emerging from social research through a “mixed methods” approach is established good practice (Anguera et al. 2020). It is especially relevant for research on sensitive or complex topics such as corruption, where it can help improve the quality and integrity of insight.
Choosing which method(s) to deploy and when, however, can be tricky for non-specialists. Decisions depend on factors such as the time, funds, and skills available, and the “confidence levels” required. Research Guide 1 introduced the “pros” and “cons” of different social research methods; Tables 2 and 3 below draw some further distinctions between common quantitative and qualitative methods. These distinctions should help inform decisions about which approach to deploy, and when, to meet monitoring needs.
Table 2. Matrix of Methods
Key: Quantitative, Qualitative
Category | Direct | Indirect |
Opinion | CATI interviews; dipstick surveys; doorstepping / street surveys; focus groups; online surveys; semi-structured In-Depth Interviews | Vignettes |
Observation | Big data analysis; Critical Discourse Analysis; social listening | Ethnographic studies |
Table 3. Merits of Methods
Key: Challenging, Manageable, Desirable (each method is rated against Cost, Time, and Depth of insight)
Direct: Opinion based | CATI interviews, dipstick surveys, doorstepping, focus groups, online surveys |
Direct: Observation based | Big data analysis, Critical Discourse Analysis, social listening |
Indirect: Opinion based | Vignettes |
Indirect: Observation based | Ethnographic studies |
While each project circumstance and team approach will be different, a rule of thumb is to ensure monitoring processes use at least two direct opinion-based methods and one direct observation-based method, for performance measurement and outcome tracking, across the life of the project. Data acquired through monitoring processes should also augment the data gathered at the project start (i.e., baseline data and formative insight) and end (e.g., impact evaluation). Additional indirect opinion- or observation-based methods will add value to monitoring processes, but should first be appraised for the desirability, feasibility, and viability of data acquisition (Choudhary 2019). A wealth of further material on this topic is publicly available and suitable for practitioners, so those interested in learning more are directed towards the following resources:
Package 5
As stated in Table 1, the purpose of Package 5 is to evaluate all project performance and impact measures, usually defined at the planning stage, based on the project goal, aims, objectives, and outcomes. As in Package 4, performance assessment is assumed to be straightforward (e.g., totaling the people reached and/or outputs produced), so this section focuses on impact evaluation.
Approaches to impact evaluation vary greatly according to the project intervention types and SNBC activities, but just as for monitoring, they would generally lean heavily on data acquired through primary direct social research. Some indirect methods can be used if change can be attributed to the project.
The need to attribute change directly to the project, and to what degree of detail, typically governs the choice of research approaches for impact evaluation. For simplicity, two “options” are to compare changes:
- Over time – comparing key indicator values for differences between baseline (Package 2) and “endline” and attempting to attribute any change to the intervention.
- Between a treatment and a control group – comparing key indicator values amongst those exposed to the project (the “treatment” group) and those not (the “control” group) to identify differences in targeted outcome. See Figure 2 for a visual illustration.
With either option, everything except the specific treatment should be kept as consistent and comparable as possible. This includes variables like the research methods, question type, timing, framing, and demographics of participants. By keeping all of the other variables the same, any changes over time (in option 1) or between groups (in option 2) in the variable(s) the project sought to influence can be better attributed to the project.
However, the feasibility of achieving this consistency and comparability can be a challenge, and it can vary considerably between the options. This is best illustrated by changes catalyzed by the COVID-19 pandemic: between 2019 and 2022, many studies around the world saw shifts in variables like income level and residential status. This reveals some of the risks associated with impact evaluation, especially evaluation focused on measuring change over time (option 1). The elements that should remain consistent may sometimes be beyond the control of the research team. With option 2 it is easier to accommodate such potential inconsistencies, but the approach must be “baked into” the project design from the start for the integrity of the “control” group to remain intact. More information on these topics is available in Olofsgård (2014).
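As an illustration of option 2, the sketch below shows one common way to estimate a project effect from pooled baseline and endline survey data: a difference-in-differences regression. It is a generic example rather than a prescribed approach; the file name, column names, and outcome variable are hypothetical, and the estimate is only meaningful if the treatment and control groups would otherwise have followed similar trends.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled survey data with illustrative columns:
#   treated - 1 if the respondent lives in an area exposed to the project
#   endline - 1 for the endline wave, 0 for the baseline wave
#   outcome - the indicator of interest, e.g. 1 if the respondent says they
#             would report a bribe they witnessed
df = pd.read_csv("survey.csv")

# Difference-in-differences: the coefficient on treated:endline estimates the
# change attributable to the project, under the usual parallel-trends assumption.
model = smf.ols("outcome ~ treated * endline", data=df).fit(cov_type="HC1")

print(model.summary())
print("Estimated project effect:", model.params["treated:endline"])
```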
Another consideration for Package 5 is the need to ensure any claimed changes in attitudes or actions self-reported by the target audience are objectively verified where possible. This is especially challenging for corruption behaviors, which by nature tend to be hidden, but research techniques that could be used include observation and investigation. Options for observation techniques are summarized in Table 2. Investigation techniques, meanwhile, might be split into those involving desk-based analysis, including techniques to measure “proxies” when the corruption is hidden, and those involving fieldwork. Investigative fieldwork, however, should only be conducted by appropriately qualified professionals, as this is a high-risk area. Desk-based analyses could use social listening, critical discourse analysis, and big data analysis of social and other media; or reviews of publicly accessible information like police and court-case records, hotline reporting, or criminal convictions. Especially when the intervention is limited in size or time, however, identifying causation or contribution to impact from implemented activities is not always possible.
A final point is the need to understand not only what has changed and whether it was a result of the project, but also what specifically caused the changes. Capturing success factors and lessons learned is critical to designing better interventions in the future or scaling projects. A mixed methods approach can be instrumental in qualitatively understanding the drivers of quantitative results.
Impact evaluation is a complex topic, and there has only been space here to cover some basic concepts. Practitioners should gather more information before confirming the type of approaches to be adopted, and useful resources for this include:
Annex 1. Example Dipstick Survey Structure for Package 4
Annex 1 provides an example “dipstick” survey process and set of questions relevant to Package 4. Typically, the process would be broken down into three sequential steps:
- Identify the main survey objective and define the key questions
- Clarify the survey population, sampling method, and sample size
- Conduct an ethics review, recruit the sample, and deliver the survey
A hypothetical anti-corruption example is provided below:
Step 1: Identify the main objective and define the study questions
Objective: To understand if public knowledge and attitudes are changing around corruption.
Key questions:
- What is your attitude towards bribery?
- How do you feel about reporting bribery if you witness it?
- What do you think stops people in society from reporting bribery?
- What do you think should be done to stop bribery?
- Have you seen any good examples of bribery being stopped elsewhere?
Step 2: Clarify the survey population, sampling method, and sample size
Survey population: Those living in city X, and those living in and around national park Y.
Sampling method: Representative of relevant socio-economic demographics for these areas. Consider including a booster sample of those who report being asked to pay a bribe in the past 12 months (P12M). Snowball sampling may be required.
Sample size: 100 people between 18 and 65 years old, from each area.
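The sampling logic in Step 2 could be prototyped along the following lines, assuming a hypothetical sampling frame file and illustrative column names; the booster size of 20 per area is an arbitrary example value, and a real design would apply demographic weighting and ethics-approved recruitment procedures.

```python
import pandas as pd

# Hypothetical sampling frame with illustrative columns:
#   area              - "city_x" or "park_y"
#   age               - respondent age in years
#   asked_bribe_p12m  - 1 if asked to pay a bribe in the past 12 months
frame = pd.read_csv("frame.csv")

# Survey population: adults aged 18-65 living in the two study areas.
eligible = frame[frame["age"].between(18, 65) &
                 frame["area"].isin(["city_x", "park_y"])]

# Main sample: 100 respondents per area (simple random draw; a real design
# would stratify or weight by the relevant socio-economic demographics).
main_sample = (eligible.groupby("area", group_keys=False)
                       .apply(lambda g: g.sample(n=min(100, len(g)), random_state=1)))

# Booster sample: an illustrative 20 extra respondents per area who report
# being asked to pay a bribe in the past 12 months (snowball recruitment
# could top this up in the field).
booster_pool = eligible[eligible["asked_bribe_p12m"] == 1].drop(main_sample.index, errors="ignore")
booster = (booster_pool.groupby("area", group_keys=False)
                       .apply(lambda g: g.sample(n=min(20, len(g)), random_state=1)))

sample = pd.concat([main_sample, booster])
print(sample["area"].value_counts())
```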
Step 3: Conduct an ethics review, recruit the sample, and deliver the survey
The survey will be conducted by experienced and fully qualified enumerators, and as the research involves human subjects, the survey topic guide and overall approach will be reviewed and approved by an Ethical Review Committee (ERC) / Institutional Review Board (IRB).
The enumerators will ensure consent is appropriately informed and that no pressure is applied, nor promises made, to those completing the survey. Respondents will be offered a confidential space in which to complete their responses should they prefer this, using handheld devices such as a tablet or mobile phone. Subsequent storage of such equipment, data recording devices, and the data itself will be tightly managed and involve the use of a fully secure facility. Analysis of survey results will involve cleaning and anonymizing the data before it is accessed/used by others.
The survey will be pre-tested to ensure it takes between 10 and 15 minutes to complete and that the questions are clear and well understood. Once pre-testing is complete, any tweaks can be made before enumerators canvass the target geographies to recruit survey participants.
A mid-point check-in and coordination call will be arranged with project managers to check all is on track and to sense-check initial insights emerging.
For more information on how to design and run a ‘dipstick survey’, an excellent example is available in relation to the use of insecticide-treated nets by children. Although this is not a corruption-related example, the detailed approach is illuminating and could be adapted easily.
Annex 2. Example Approach to Social Listening for Package 5
Annex 2 introduces a six-step process recommended for NGOs by commercial social listening firm Awario. Awario defines social listening as: “The process of gathering and analyzing online posts both on social media and news websites, blogs, and forums.” A tool like Awario “allows you to collect all the posts that include keywords and keyword combinations you choose and analyze them based on demographic and psychographic categories, such as authors’ gender, language, location, sentiment expressed in a post and so on.”
Awario’s six-step process, adapted to anti-corruption, is:
Step 1: Define the goal
The goal for anti-corruption practitioners would likely relate to project impact evaluation, like monitoring a campaign to build customs officers’ pride in properly confiscating animal products. Other goals could include brand visibility, message amplification, reputation management, and social mobilization.
Step 2: Identify keywords of interest
Keywords are those relevant words and phrases that would appear in the media being targeted. Such words will vary according to the project type and focus. There is no limit to how many words or phrases can be identified, although a shorter, more specific list that avoids generic descriptors will likely lead to more useful and easier-to-manage analysis. Awario also recommends including misspellings, like “bribes, bribe, brib, birbe…”
Step 3: Choose alert settings
Alerts can be set for various aspects of a keyword search. “Negative keywords” can be excluded; for example, if you are specifically interested in corruption that enables fisheries crime, you might exclude terms such as forestry and logging. The language, location, date range, and source can also be calibrated.
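For teams without access to a commercial platform, the filtering logic described in Steps 2 and 3 can be illustrated with a short, generic script (this is not Awario’s API); the posts, keywords, and negative keywords below are hypothetical.

```python
# Keywords of interest, including misspellings, and "negative" keywords used
# to screen out off-topic sectors (all hypothetical).
keywords = ["bribe", "bribes", "brib", "birbe", "influence peddling"]
negative_keywords = ["forestry", "logging"]

# A few hypothetical posts of the kind a listening tool would return.
posts = [
    {"text": "Customs officer refused a bribe at the port today", "lang": "en"},
    {"text": "New logging concession announced in the north", "lang": "en"},
]

def matches(post: dict) -> bool:
    """Return True if the post contains a keyword and no negative keyword."""
    text = post["text"].lower()
    has_keyword = any(k in text for k in keywords)
    has_negative = any(nk in text for nk in negative_keywords)
    return has_keyword and not has_negative

relevant = [p for p in posts if matches(p)]
print(f"{len(relevant)} of {len(posts)} posts match the alert settings")
```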
Step 4: Review raw data results
Awario’s software and platform will start providing results in real-time, thus generating instant insight into whether the keywords, filters, and alerts chosen are delivering the information required.
Step 5: Review analytical reports
Frames for analysis could include straightforward statistics (e.g., the number of posts using certain phrases or words, and the location of the posters); direct comparison of datasets (e.g., how many news stories about a certain company that mention “bribes” versus mentions of “influence peddling”); or a “deep-dive” into specifics (e.g., certain politically exposed persons or areas where corruption is understood to be rife).
Step 6: Apply insights
Based on the evidence generated and trends identified from the data, identify any quantitative assessments of impact (e.g., fewer mentions of corruption compared to the baseline, more mentions of confiscating rhinoceros horn). These assessments can inform broader learning about success factors to replicate or adaptations to the existing project.
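A simple quantitative check of the kind described in Step 6 might compare mention counts between the baseline and current monitoring periods, as in the sketch below; the counts shown are hypothetical placeholders.

```python
# Hypothetical mention counts from the baseline and current monitoring periods.
baseline_mentions = {"corruption": 420, "bribery": 180, "rhino horn confiscated": 12}
current_mentions = {"corruption": 350, "bribery": 150, "rhino horn confiscated": 31}

# Percentage change per term: negative values suggest fewer mentions than at
# baseline, positive values suggest more.
for term, before in baseline_mentions.items():
    after = current_mentions.get(term, 0)
    change = (after - before) / before * 100
    print(f"{term}: {before} -> {after} ({change:+.1f}%)")
```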
Those interested in learning more about this topic can read “Defining Social Listening: Recognising an Emerging Dimension of Listening.”
Acknowledgments
Research and writing of this document was led by Gayle Burgess, Behavior Change Programme Leader of TRAFFIC: email: [email protected]. The author is grateful to Elizabeth Hart, Gabriel Sipos, Preston Whitt, Sabri Zain, Claudia Baez-Camargo, and Maija Sirola for their reviews of the document.
References
Anguera, M. Teresa, Angel Blanco-Villaseñor, Gudberg K. Jonsson, José Luis Losada, and Mariona Portell. 2020. “Editorial: Best Practice Approaches for Mixed Methods Research in Psychological Science.” Frontiers in Psychology 11. https://doi.org/10.3389/fpsyg.2020.590131
Choudhary, Shubhangi. 2019. “The best outcome at Intersection — Human desirability, Technical Feasibility and Business Viability.” Medium. https://medium.com/@i.shubhangich/the-best-outcome-at-intersection-human-desirability-technical-feasibility-and-business-viability-e3d13489d482
Haynes, Laura, Owain Service, Ben Goldacre, and David Torgerson. 2012. “Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials.” UK Cabinet Office. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/62529/TLA-1906126.pdf
Olofsgård, Anders. 2014. “Randomized Controlled Trials: Strengths, Weaknesses and Policy Relevance.” Expertgruppen för biståndsanalys (EBA). https://www.oecd.org/derec/sweden/Randomized-Controlled-Trials_EBA.pdf