Who is this guidance for?
This guidance is for front-line project staff, project managers and Monitoring, Evaluation and Learning (MEL) officers and managers.
The use case for monitoring systemic change is deeply interconnected with facilitating interventions. It is nearly impossible for a single M&E officer to track and report on systemic change without close partnership with the project staff who are interacting with market actors and undertaking facilitative activities. For this reason, it is strongly recommended that at least two people on a project always work together to monitor systemic change. As shown in the boxes below, each member of staff brings different strengths to the table, and fundamentally they depend on each other to effectively track and document systemic change.
Skills and strengths
Project staff (facilitator)
- Strategic thinking
- Relationships with market actors
- Vision for future change
- Understanding of behavioural change drivers in market actors

M&E staff
- Critical thinking
- Research skills – particularly qualitative research
- Organisational skills – documentation and follow-up
- Ability to visualise complex change processes (e.g. through results chains)
Roles in monitoring systemic change:
Project staff (facilitator)
- In-depth knowledge of particular market actors
- Clarity of vision for systemic change – how do we want market system to change
- Knowledge of key market dynamics and sources of data (e.g. disputes, market prices, key informants)

M&E staff
- Providing flexible structure to help facilitators communicate vision
- Asking tough questions to challenge assumptions
- Proposing alternative ways to gather data and triangulate
- Reminding facilitators to link market actor behaviours to wider systemic change goal
- Organising frequent debrief sessions and quarterly review meetings to analyse strategies
Dependent on the other for:
Project staff (facilitator)
- Structured ways of thinking
- New sources of information that inform strategies
- Evidence of wider response beyond direct partners
- Frequent, well-facilitated sessions for reflection and adaptation

M&E staff
- Understanding the strategy of PMSD – why decisions are being made
- Access to key market actors as data sources
- Real-time updates as strategies change
- Active participation in reflection and review sessions
- Being treated as equal partner
Monitoring systemic change may initially be unfamiliar or uncomfortable to M&E staff who are used to more traditional project logframes.
Rapidly changing facilitative activities
A PMSD project or intervention is not a clear plan with prescribed activities for each week, month or quarter. The focus on a market system, and working with actors to drive change in their own interest makes it hard to plan and predict what activities will take place when. This requires M&E staff to shift their role and even their identity: they are no longer the scorekeepers or taskmasters who ensure that precise output targets have been met every quarter. Instead they need to learn to follow the lead of facilitators, and have the M&E framework follow the project strategies and activities, not the other way around.
Most indicators cannot be set during project design
In traditional projects, the main indicators and targets are set during design and tracked throughout implementation to show progress against the plan. While this may still be relevant for high-level impact-type indicators (e.g. incomes or access to resources), these will not show short-term results. The majority of the indicators useful for assessing progress during the project’s lifetime will come from M&E staff rolling up their sleeves, engaging with facilitators, and developing contextual ways of measuring whether market actors are changing their behaviours. This requires a high degree of responsiveness.
Data on systemic change comes from actors who are not direct partners
Finally, the wider goal of PMSD is to stimulate systemic change whereby large numbers of market actors shift their behaviour in ways that empower and improve conditions for marginalised actors. This involves an explicit assumption that many people will change their practices without ever directly interacting with Practical Action or whichever organisation is facilitating. To meaningfully track and measure such change requires a serious effort and clever strategies by M&E staff.
With these similarities and differences to traditional M&E in mind, this use case is structured around three distinct dimensions of monitoring systemic change:
- Making the strategy explicit;
- Gathering data on actor behaviours;
- Interpreting data and adjusting strategies.
Again, it is important to reiterate that all three dimensions require active partnership between PMSD facilitators and their M&E colleagues.
Every ‘arrow’ in a theory of change or results chain is an assumption to be tested:
It is very easy for everyone to forget that a theory of change (or a results chain) is just our mental picture of how we would like to see change unfold (hence the ‘theory’ in the title!). It is not a true picture of reality. So every arrow on those diagrams is an assumption that should be tested. M&E staff can help facilitators think critically, and take pride in testing assumptions – especially if they prove incorrect! This is not a failure, but a crucial learning that will help the project adapt more quickly, and reallocate resources to activities that have a better evidence base.
Focus on defining and measuring outcomes in terms of behaviour change:
The majority of M&E frameworks rely on some variation of a logical framework that documents intended activities, outputs, outcomes and impact. The key difference in PMSD is that the activities and outputs will change drastically and frequently. However, M&E staff can play a crucial role in ‘steering the ship’ by keeping facilitators focused on the outcomes – which are best defined as behaviour changes of market actors. Asking questions about different indications of whether a behaviour is changing, and why (or why not), can help shine a light on important issues. The consequence, though, is less focus on large-scale data collection in terms of output numbers – which can be uncomfortable for those who excel at that activity.
Part 1: Making the strategy for systemic change explicit
In order to measure systemic change, it is necessary to first define what the project is aiming for. While there are multiple definitions for systemic change, a helpful one, developed by ILO’s The Lab is:
“We interpret systemic change to be when new and improved behaviour of permanent market players is sustained beyond the life of the project, and change is manifested beyond the market players associated with the project.”
It is useful to distinguish between the broad, high-level systemic change at a project level, and more focused systemic changes at the level of individual interventions.
Project-level systemic change goals: Theory of Change
For a broad, project-level view of systemic change, a Theory of Change (ToC) is a helpful tool. The ToC gives a bird’s eye view of how the project envisions its broad intervention areas leading to behaviour changes by market actors that ultimately benefit marginalised groups.
It is important for the ToC to line up with the vision for a transformed market system. As shown in Part 1 of Facilitating Interventions, the vision can be presented in different formats: a future market map; a sustainability matrix (who does, who pays in the future); or a single statement.
Overall, the ToC helps to keep everyone in the project on the same page. It answers questions such as: who do we want to ultimately benefit? Which actors do we want to see change their behaviours, and in what ways? And for external stakeholders (donors or other players in the local context) it helps to explain why the project is partnering with different groups (e.g. marginalised groups and private sector actors). However, the breadth of the ToC also means it usually isn’t detailed enough to specify what data to collect. For this, we need to home in on individual interventions.
Intervention-level systemic change goals: Results Chains
Results chains are the current industry standard tool for monitoring systemic change. They offer a visual and logical rationale for how project facilitative activities will lead to behaviour change that ultimately benefits marginalised groups, and ultimately leads to other actors crowding in or copying new behaviours.
Figure: Contract farming results chain example
For M&E staff, developing results chains is an opportunity to help facilitators make their thinking explicit, and to learn exactly what it is the project is trying to accomplish. Much of the core guidance for Facilitating Interventions focuses on Part 2: supporting behaviour changes in market systems. Following this guidance, a powerful sequence for developing results chains is:
- Start in the middle, with behaviour changes (at market system or enterprise levels). M&E staff can greatly help facilitators by continually asking: what is the behaviour change we want to support? What are the actor’s current business practices, and what do we/they want to see different in the future? (e.g. a chicken feed supplier expanding its supply base to include youth smallholder farmers through a simple contract farming model).
- Link the behaviour change in partners to changes in other actors. The next question is: so what? If a market actor (e.g. a chicken feed supplier) changes its behaviour (e.g. by expanding its supply base to include smallholder farmers), then what is the behaviour change for other actors? In this example, there is another box for “farmers in pilot apply new knowledge”. In other examples involving multiple actors, there may be other related behaviour changes, too.
- Follow the chain of logic to the marginalised group. Collectively, the behaviour changes of market actors should ultimately lead to clear benefits for marginalised groups – through increased access to services or products; new channels for selling their goods or labour. In the example, farmers increase their yields and ultimately increase incomes.
- Draw out the pathway for replication/copying/crowding in. Assuming the steps in the above chain hold true, ask project staff how other similar market actors (e.g. other feed suppliers, in other geographies, or for other types of livestock) will respond to the changes. This is shown in the example as horizontal linkages via dotted lines, where other actors ‘crowd in’ around the model.
- Finally, fill in the facilitative activities that might lead to the core behaviour changes. By waiting until the end to ask about ‘what might we do’ as the project, the emphasis has stayed on how the system changes itself. In the example, there are several pilot activities listed (technical support, business management). M&E staff can keep facilitators thinking about the uncertainty of market actor responses by suggesting 2-3 different facilitative activities for each behaviour change in the results chain. This reminds everyone that we don’t know what will lead to change, and that sometimes multiple tactics will be needed in parallel to succeed.
It will take time for both M&E staff and facilitators to get comfortable with results chains. Importantly, the results chains should change and evolve as the project unfolds, as the team comes to better understand what is driving behaviour, and where change is most likely to occur.
Part 2: Gathering data on market actor behaviours
In projects with sufficient resourcing for M&E, it is possible to transform a fully developed results chain into an ongoing monitoring plan – by defining indicators for each key change in the results chain, and identifying sources of data to collect to track those changes. Where resources are more limited, the same approach can be taken, with strict prioritisation of the most important ‘boxes’ in the results chain – usually the initial behaviour changes (because without those occurring, there isn’t much point looking for downstream changes).
A very simple way of thinking about monitoring systemic change is to take each element of a results chain and translate it into a question:
- For boxes that represent behaviour changes: Are we seeing evidence of this behaviour? (e.g. how do we know partners are selling produce on to retailers, or expanding model into new geographies?)
- For arrows that connect two distinct behaviour changes: Has behaviour A by actor A led to behaviour B by actor B? (e.g. has technical support from the poultry feed producer led to any changes in farming behaviour by the youth smallholder farmers?)
- For arrows connecting interventions to market actor behaviour changes: How has the market actor responded to our facilitative intervention? Have they shown signs of changing behaviour (i.e. verbal commitments, investing time and resources)? (e.g. have efforts to promote the new model via radio or newspaper led to any new market actors taking up the change?)
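For teams that keep their results chains in a spreadsheet or a simple script, the translation above can also be sketched in code. This is a purely hypothetical illustration: the element kinds, field names and question templates below are assumptions for the sketch, not part of any standard PMSD toolkit.

```python
# Hypothetical sketch: turning results chain elements into monitoring questions.
# Element kinds and question templates are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ChainElement:
    kind: str          # "behaviour", "link", or "intervention_link"
    description: str   # e.g. "feed supplier contracts youth smallholders"


def monitoring_question(element: ChainElement) -> str:
    """Translate one results chain element into a monitoring question."""
    if element.kind == "behaviour":
        return f"Are we seeing evidence of this behaviour: {element.description}?"
    if element.kind == "link":
        return f"Has the upstream behaviour led to: {element.description}?"
    if element.kind == "intervention_link":
        return f"How has the market actor responded to: {element.description}?"
    raise ValueError(f"Unknown element kind: {element.kind}")


chain = [
    ChainElement("intervention_link", "technical support to the feed supplier"),
    ChainElement("behaviour", "feed supplier contracts youth smallholders"),
    ChainElement("link", "farmers in pilot apply new knowledge"),
]
for element in chain:
    print(monitoring_question(element))
```

Even as a back-of-the-envelope exercise, walking each box and arrow through a template like this helps ensure no element of the chain is left without a question attached.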
Because project facilitators have more frequent interaction and closer relationships with direct market actors, they are often the ones with the most up-to-date information about market actor behaviours. For this reason, a key internal practice for team leaders and M&E staff is to encourage and reward facilitators for capturing and documenting their learning over time. The Firm-Level Improvement Plan is one simple tool for doing this. There are different strategies for keeping these up to date. To avoid a sense of reporting burden, short ‘After-Action Review’ debriefs could be held, in which the M&E staff member prompts reflection and takes notes.
In addition to the informal observations by project staff, M&E can bring a more systematic approach to defining qualitative indicators of behaviour change for different market actors. The AAER Framework offers a structured way of looking for evidence of change at different stages in a wider systemic change process.
For example, early-stage work with a partner (Adoption, in AAER terms) would look for evidence of concrete partner contribution to a pilot; partner satisfaction and intent to continue; and evidence of long-term viability (e.g. does the new behaviour lead to increased customers or revenues, or to desired social outcomes such as increased power, status or income for women or marginalised groups?).
In contrast, in a PMSD project that is much further along (e.g. 2-3 years into implementation), where initial partners have shown ownership and adaptations, qualitative indicators could look for evidence of non-partner actors (Expansion in AAER terms) responding to competitive pressures by copying new business practices to stay competitive, or responding to collaborative incentives by forming new partnerships.
Part 3: Interpreting data and adjusting strategies
The primary purpose for monitoring systemic changes is to learn about the market system, and whether strategies are having their desired effects. Given the time delay between facilitative actions and ultimate impacts, the majority of what a monitoring system can deliver during the project implementation is learning and insight. This reinforces industry best practice for managing relationships with donors as well: the more a project can bring a donor into the loop to understand what is being tested, and what is being learned, the more likely they are to support subsequent adaptations or adjustments.
Again the division of labour between M&E staff and facilitators is front and centre. A first priority is to establish regular team habits and routines of meeting to discuss the latest data (qualitative or quantitative, and generated by facilitators or M&E data collectors). Project leaders play a crucial role in signalling the importance of these meetings – by attending them where possible, and asking for updates on the latest findings.
Regular weekly meetings
For regular weekly or bi-weekly check-in meetings on individual interventions, it is likely that most of the data will focus on micro behavioural responses of individual partners. Here is where slight adjustments in tactics to match the individual personality and style of a market actor can be quite impactful. M&E staff can add value by becoming familiar with the Facilitation Tactics and Activities tool and the associated behaviour change framework (4 drivers of behaviour change). M&E staff and peer facilitators from other intervention areas can help each other step back and reflect on why market actors might be stalling – is it really a capacity issue, or are they not convinced by a compelling story? Where might we be able to draw on an influential role model from an adjacent sector to show them the benefits?
Periodic strategy review meetings
Quarterly review meetings, when emphasised and planned, can form the key rhythm of adaptive management for a project. They should include all staff – technical, M&E and operational – and are a chance for team leaders to set the tone and culture of the project. In the right circumstances, donor representatives can also participate – assuming enough trust has been built that they won’t shoot down ideas, and that project staff are comfortable being open and vulnerable – about what’s working and what’s not; and also about what they do and don’t know. It may be more effective to have the donor attend every second or third review meeting.
For periodic review meetings, it is useful for M&E staff to prepare summary presentations of the latest findings. To gain credibility with facilitators, M&E staff should lead with quotes and qualitative stories of change (or lack of change) before sharing quantitative summaries. Space for open Q&A and discussion in real-time can keep the facilitators engaged, and ensure a healthy culture of debate and reflection. This is also a time where M&E staff can get feedback and learn to improve their sources of data and even data collection techniques.
Results chains can be used to structure information showing evidence of behaviour changes using simple colour coded schemes (e.g. green-yellow-red traffic light systems) to give a picture of the broader progress towards systemic change. It is vital that results chains are treated as ‘best guesses’ – and where repeated sources of data call into question a key link (arrow), the project team should feel empowered to make a change to the results chain, with appropriate documentation.
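Where evidence against each box is logged digitally, the traffic-light colouring can be made explicit with a simple rule. The sketch below is illustrative only: the thresholds, colour rules and the idea of counting supporting versus contradicting observations are assumptions, and each team should agree its own criteria.

```python
# Hypothetical sketch of a traffic-light rule for one results chain box.
# Counting "supporting" vs "contradicting" observations, and the colour
# thresholds, are illustrative assumptions, not a prescribed method.
def traffic_light(supporting: int, contradicting: int) -> str:
    """Colour-code one results chain box from simple evidence counts."""
    if supporting == 0 and contradicting == 0:
        return "grey"    # no data yet on this box or arrow
    if contradicting > supporting:
        return "red"     # evidence calls this link into question
    if supporting > contradicting and contradicting == 0:
        return "green"   # consistent supporting evidence
    return "yellow"      # mixed evidence; keep probing
```

An explicit rule like this also makes the ‘best guess’ nature of the chain visible: a box that turns red is a prompt to revisit the results chain itself, not just the data.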
This big picture view can also help team leaders think about when it’s time to shift from focusing on pilot interventions (more direct support to direct partners) to stimulating response or copying from a wider range of actors. Such strategy shifts are useful once every few months, but not necessary at every single weekly meeting.
Besides these more structured formal reflection spaces, much of the work of learning and adaptation lies in regular, informal conversations and meetings between M&E staff and facilitators. To this end, efforts to build trust, camaraderie and relationships across these groups can go a long way to building a culture of learning and experimentation. Where possible, budgets can be allocated for M&E staff to every so often join field activities – not just for data collection, but for shadowing facilitators and building a sense of teamwork.
Revision of plans with donors
It is one thing to be adaptive, responding to changes as you see them, but your donor, and possibly even your manager, may still want you to report against a logframe or a set of pre-identified plans. So the last part of putting systemic M&E into practice is taking those who oversee the project on a journey, so that they become supporters of what you are doing. The best way to do this will depend on the donor and the context. Consider carefully when you want to bring them into a discussion and, wherever possible, ensure that the potential for changes to plans is mutually understood from the outset.