
From field research to policy influencing – what have we learned?

Written by Martin Whiteside

The Institute of Development Studies (IDS) has led consortia of UK and African organisations in two large programmes[1] of policy research on the future of agriculture in Africa and inclusive commercialisation since 2005. This blog, building on ALRE Working Paper 1 and ALRE Research Note 4, explores what we have learned about policy influencing through these programmes.


Thirteen key lessons are evident. Although many of these lessons are not new, they are worth repeating as experience of these programmes shows their value in practice, and they are often forgotten in the heat of implementation.

Consider the political economy – understanding the political economy of the policy and practice environment which the new evidence is supposed to influence is essential. A process like Participatory Impact Pathway Analysis[2] (PIPA) and/or constructing a policy-influencing Theory of Change (TOC) can be a means of doing this, as long as the political economy power dynamics are considered. Whose interests are likely to be served by better evidence? Who is likely to play an enabling or an inhibiting role? Who are the allies? What are their interests? What are the time-critical moments?

Design for demand – experience shows that demand-led policy influencing tends to be more effective (with evidence pushing at an open door). Plan the policy-influencing process from the start, alongside the research design, to ensure there is user demand for the researcher-delivered evidence supply. Early involvement of evidence users, with elements of co-design and ownership building, is ideal, but may be difficult to realise fully in practice because of the timescales involved. Managing supply expectations may be important. The early use and periodic refresh of influencing planning tools, such as PIPA and/or an influencing-orientated TOC, can be helpful.

Be demand responsive – understanding evidence demand is important; different audiences will need different formats and different emphases. Timing may be critical: evidence made available at the start of a policy review process may be enthusiastically received, while the same evidence delivered after a policy has been decided may be ignored. Flexible and demand-led planning processes are needed to keep research programmes relevant in terms of both emphasis and timeliness.

Work in alliances – influencing policy involves specific skills, experience and interests. It is often more effective for evidence suppliers to work in advocacy alliances. Alliances may involve researchers focusing on their comparative advantage in evidence production, while advocacy organisations use the evidence in their influencing. In other contexts, different evidence producers may network together to produce mutually enhancing evidence, across sectors or across regions. Alliances may be formal or informal, temporary or longer lasting. Networks may already exist or need building. Organisations like donors, the private sector and national governments can often be advocacy partners in one context and targets of advocacy in others. 

Be nimble – despite the best-laid plans, policy change often does not follow a linear process of evidence → communication → policy change. Adaptive planning (being ‘nimble’) is important to take advantage of changing opportunities, with shifting power and interests of policy change ‘enablers’ and ‘inhibitors’, and ‘moments’ of influencing opportunity. Having flexible funding, and being able to respond to issues as they arise and to call on additional organisational and political resources, is important and needs to be part of the programme design. Providing ‘space’ for reflection, discussion, qualitative processes and amplifying voices of change may be critical (policy influencing may be more akin to ‘gardening’ than ‘mechanics’ – creating the conditions for change to grow).

Institutions matter – emphasis is needed on the ‘how’ of implementing policy change and not just the ‘what’. Institutions and institutionalisation (of knowledge/thinking/approaches) are important in policy. Understanding the institutional environment for competing policies, competing interests and the institutional mechanisms for delivery is important. There is a difference between changing ‘policy’ and getting that new thinking owned, embedded, implemented and further developed by service sectors, local government and/or private business.

Personalities matter – individuals, positions and perceptions are part of the solution. New ideas may need to come from the ‘right’ level of person in the ‘right’ organisation with the ‘right’ national ownership. Personal relationships, trust and the perceived integrity of evidence, messenger and message can be critical. This may involve working with people who straddle the knowledge-policy space, providing them with evidence in the right format and creating the space to reflect, discuss and, when conditions are right, to implement. Identifying, listening to and understanding the needs of these key ‘change-makers’, dedicating time to building relationships, and exercising patience and persistence are all important.

Ownership matters – feeling ownership of evidence and conclusions is important. Policymakers are more likely to use evidence they have had an involvement in creating and are more likely to quote and be swayed by opinions of farmers that they have heard themselves. Early identification of end-users, combined with creative research design and appropriate budgeting may help incorporate evidence users into research planning, evidence generation, on-site experience and peer review, and thus generate wider ownership and deeper understanding.

Add value – complementing and enhancing existing research, evaluation or policy reflections and processes can be effective. Examples include complementing existing programme evaluations and/or quantitative data collection exercises with the qualitative evidence needed to explain, or provide policy pointers from, the other data. Sometimes this may involve creating the ‘space’ for informal discussion, or to reflect on a wider range of data sources, or to bring in perspectives from elsewhere.

Use multiple channels – in many cases policy influencing will require multiple channels of communication, using multiple formats through different networks. Presentation and interpretation of evidence is important. Distilling large quantities of research data into ‘Policy Briefs’ and policy-relevant ‘nuggets’ may be more appropriate than academic papers. Blogs can be a good way of raising and refining issues and stimulating interest. Media days can be effective in enabling popular communication of new ideas and evidence, and can also build longer-term relationships in which new evidence can be fed to journalists, and journalists can fact-check new stories with researchers.

Build capacity – resources and time may be needed for capacity strengthening in evidence-led policy change. Influencing skills may include political economy analyses of the policy change context, advocacy planning, networking and diverse communication approaches. Developing contacts and respect beyond the academic/research community may be equally important. A scheme like the ‘Early Career Fellows’ used in FAC provided both coaching and influencing experience that has proved effective in the longer term. 

Measure what matters – monitor, measure and evaluate what really matters. This may mean less emphasis on outputs (e.g. number of publications or number of workshops) and more emphasis on outcomes like changes in understanding and attitudes among key stakeholders, changes in policy and changes in practice. There may be a need for post-programme monitoring to learn from the actual outcomes and associated lessons from the longer-term use of evidence.

Be smart in mainstreaming difference – think carefully what gender and social difference mainstreaming means in practice. Plan explicitly how to incorporate gender and social difference into both the evidence generation and, crucially, into the policy influencing. This goes beyond reporting disaggregated data (e.g. by gender or other groups), to understanding (and testing) how policy implications are likely to impact on different groups, and how policy recommendations may be used (and misused) by others to pursue different vested interests.


[1] FAC – Future Agricultures Consortium (2005-2014) and APRA – Agricultural Policy Research in Africa (2016-22)

[2] PIPA – Participatory Impact Pathway Analysis