06: Monitoring and evaluating the mentoring program

After reading this section you should be able to:

● create an evaluation plan for your mentoring program

● use it to inform the way you support the maintenance of mentoring relationships and how you end the mentoring program.

Key questions to ask yourself

  1. What resources are available for monitoring and evaluating the program?

  2. What are the program principles? How can the evaluation support these principles?

  3. Will you do the evaluation yourself or will you contract someone independent to do it?

6.1 What is M&E and why is it important to mentoring?

Often you will see monitoring and evaluation written as “M&E”.

Monitoring refers to setting targets and milestones to measure progress and achievement during a program.

Evaluation is a structured process of assessing the success of a program in meeting its goals and reflecting on the lessons learned at the point that the program is finishing or has already finished.

Monitoring and evaluation is important to mentoring because it:

  • enables mentoring pairs to learn from each other’s experiences

  • enables the mentoring coordinator to understand how the program is progressing and to make improvements to its design in real time

  • makes the program transparent and accountable

  • provides a basis for questioning and testing assumptions you made when developing the Theory of Change

  • helps you avoid repeating the same mistakes

  • provides data which can be helpful for making a case for support to donors and partners.

6.2 Principles guiding M&E

It can be hard to know where to start in designing an evaluation. A good place to start is to review your program principles and use these to create principles that will guide decisions you make about M&E approaches and methods.

Some evaluation principles we have developed for other mentoring programs include:

  • The evaluation process and experience should be empowering for all participants.

  • The evaluation should not feel like an “extractive” process, i.e. one that takes time and information from participants without giving them any benefits.

  • The evaluation should be delivered in a way that builds the capacities of program participants – particularly in telling their story.

  • The evaluation methods and tools should be sufficiently “light” to minimise the evaluation burden on participants.

  • The evaluation should build on existing data and evaluation processes and tools.

  • The evaluation outputs should include materials that can be accessed and used by evaluation participants.

  • The evaluation methodology should be guided by our values – respecting privacy of participants, not coercive, designing for mutual benefit.

  • When an external professional evaluator is contracted, we will encourage them to mentor and train our program team so we improve our own evaluation capacity.

6.3 Creating an M&E plan

BetterEvaluation is an excellent and comprehensive website to help you plan a program evaluation, including how to write Key Evaluation Questions, and is a good place to start.

6.4 Who does evaluation?

Monitoring is best done by the mentoring coordinator, since they will already be checking in with mentoring pairs regularly.

Evaluation is best conducted by an independent group if you have the budget. This is because:

● it can be quite a big task

● they may be able to identify areas for improvement that mentees or mentors are too shy to raise with the program team

● they will analyse the data objectively

● they can share their expert skills with the program team.

If you do not have the budget to hire an external evaluator, it is best to have someone as independent of the program as possible conduct the evaluation (i.e. not the mentoring coordinator). Also think about in-kind approaches – you may find a student studying evaluation who would be grateful for the experience!

6.5 Monitoring: deciding on approaches

Think carefully about what you want to know and what you want to achieve at each stage of the mentoring program, as this will inform what monitoring approaches are most appropriate for you.

For example, if you want feedback on how participants experienced the mentoring orientation workshop it may be most appropriate to ask them to complete a short paper survey on the last day of the workshop.

However, when checking in with mentees and mentors during the program, you will want to know how they are and to continue building trust with them, so it may be most appropriate to interview them via phone.

Table 5: Approaches to M&E

Survey

  Pros:
  • You can get feedback from many participants
  • Good for quantitative data

  Cons:
  • Most participants will not give you in-depth data
  • Not relationship building

  Resources: Event-based program 6-month follow-up survey; Designing surveys: A guide to decisions and procedures

Check-in interview (e.g. by phone)

  Pros:
  • Helps you build relationships with mentees and mentors
  • Better helps you identify potential problems

  Cons:
  • Takes a lot of time from the mentoring coordinator (each check-in interview would probably last at least 30 minutes)

  Resources: Check-in questions

Narrative journal/diary

  Pros:
  • If mentees and mentors are diligent about recording their own reflections, this can provide very useful insight into what is happening in real time
  • Can be easily turned into a blog post

  Cons:
  • Can seem like a burdensome, time-consuming task, particularly if the mentee or mentor does not already have their own reflective practice
  • Mentees/mentors may not want to share information that makes them look vulnerable in written form, so it could provide a skewed perspective

  Resources: Diary template

6.6 Monitoring the program team

It’s important to regularly check in not only with mentees and mentors, but also with the program team. You will be learning valuable lessons about how to run a mentoring program that should be discussed, captured, and analysed.

We would also love to have these stories contributed to this mentoring toolkit!

6.7 Evaluation: deciding on approaches

We find M&E approaches that emphasise listening and creatively empowering participants to share their stories are generally the best fit for mentoring programs.

You may also want to experiment with approaches like empowerment or participatory evaluation, which provide participants with the tools and knowledge to monitor and evaluate their own performance, often in a creative way using blogs, photos, and videos. This may require you to run some training with the participants at the beginning of the program.

Read about other evaluation approaches here.

6.8 Following up years later

The full impact of mentoring is often only seen a number of years after a program finishes. Why not plan to do a follow-up evaluation with mentees and mentors every few years?

Take action!

  • Decide on your M&E principles and document them.

  • Check that these principles align with your evaluation approach.

  • Create an M&E plan.



Jim, YPARD Philippines Country Representative, talking about how they chose an M&E approach for the YPARD Philippines mentoring program