After reading this section you should be able to:
● create an evaluation plan for your mentoring program
● use it to inform how you support mentoring relationships as they develop and how you bring the mentoring program to a close.
Key questions to ask yourself
What resources are available for program monitoring and evaluation?
What are the program principles? How can the evaluation support these principles?
Will you do the evaluation yourself or will you contract someone independent to do it?
Often you will see monitoring and evaluation written as “M&E”.
Monitoring refers to setting targets and milestones to measure progress and achievement during a program.
Evaluation is a structured process of assessing a program's success in meeting its goals and reflecting on the lessons learned, typically as the program is finishing or after it has finished.
Monitoring and evaluation is important to mentoring because it:
● enables mentoring pairs to learn from each other's experiences
● enables the mentoring coordinator to understand how the program is progressing and to improve its design in real time
● makes the program transparent and accountable
● provides a basis for questioning and testing assumptions you made when developing the Theory of Change
● helps you avoid repeating the same mistakes
● provides data that can help make a case for support to donors and partners.
It can be hard to know where to start in designing an evaluation. A good place to start is to review your program principles and use these to create principles that will guide decisions you make about M&E approaches and methods.
Some evaluation principles we have developed for other mentoring programs include:
● The evaluation process and experience should be empowering for all participants.
● The evaluation should not feel like an "extractive" process, i.e. one that takes time and information from participants without giving them any benefits.
● The evaluation should be delivered in a way that builds the capacities of program participants – particularly in telling their story.
● The evaluation methods and tools should be sufficiently "light" to minimise the evaluation burden on participants.
● The evaluation should build on existing data and evaluation processes and tools.
● The evaluation outputs should include materials that can be accessed and used by evaluation participants.
● The evaluation methodology should be guided by our values – respecting the privacy of participants, avoiding coercion, and designing for mutual benefit.
● When an external professional evaluator is contracted, we will encourage them to mentor and train our program team so that we improve our own evaluation capacity.
BetterEvaluation is an excellent and comprehensive website to help you plan a program evaluation, including how to write Key Evaluation Questions; it is a good place to start.
Monitoring is best done by the mentoring coordinator, since they will already be checking in with mentoring pairs regularly.
Evaluation is best conducted by an independent group if you have the budget. This is because:
● it can be quite a big task
● they may be able to determine areas of improvement that mentees or mentors may be too shy to tell the program team
● they will analyse the data objectively
● they can share their expert skills with the program team.
If you do not have the budget to hire an external evaluator, have someone as independent of the program as possible (i.e. not the mentoring coordinator) conduct the evaluation. Also think about in-kind approaches – you may find a student studying evaluation who would be grateful for the experience!
Think carefully about what you want to know and what you want to achieve at each stage of the mentoring program, as this will inform what monitoring approaches are most appropriate for you.
For example, if you want feedback on how participants experienced the mentoring orientation workshop it may be most appropriate to ask them to complete a short paper survey on the last day of the workshop.
However, when checking in with mentees and mentors during the program, you will want to know how they are and to continue building trust with them, so it may be most appropriate to interview them via phone.
Table 5: Approaches to M&E
It’s important to regularly check in not only with mentees and mentors, but also with the program team. You will be learning valuable lessons about how to run a mentoring program that should be discussed, captured, and analysed.
We would also love to have these stories contributed to this mentoring toolkit!
We find that M&E approaches emphasising listening and creatively empowering participants to share their stories are generally the best fit for mentoring programs.
You may also want to experiment with approaches like empowerment or participatory evaluation, which provide participants with the tools and knowledge to monitor and evaluate their own performance, often in a creative way using blogs, photos, and videos. This may require you to run some training with the participants at the beginning of the program.
Read about other evaluation approaches here.
The full impact of mentoring is often only seen a number of years after a program finishes. Why not plan to do a follow-up evaluation with mentees and mentors every few years?
Decide on your M&E principles and document them.
Check that these align with your evaluation approach.
Create an M&E plan.
[Example] Event-based program 6-month follow-up survey
Jim, YPARD Philippines Country Representative, talking about how they chose an M&E approach for the YPARD Philippines mentoring program