by Dr. Linda Phillips-Jones

To make your mentoring efforts as successful as possible, we recommend giving additional attention now to how your program is being evaluated. Remember: even if you haven’t put formal evaluation measures in place, your program is being evaluated informally by those who participate in, observe, or hear about your efforts.

These comments apply mainly to mentoring programs. Those of you who are reading this in the context of an individual mentoring relationship can also evaluate your situation. Is your mentoring partnership meeting your expectations?

Our bias is that your evaluation should focus most on what happens to your mentees. At the very least, ask what our medical colleagues ask first: “Did you do no harm?” Beyond that, did the mentees change for the better as a result of being in your program? If you’re not sure how to conduct a program evaluation, here are some ideas to discuss with the rest of your team.


1. What do we need to know in order to make decisions about our mentoring program’s future?

Your evaluation should always be geared toward the decisions your decision makers must make. First, determine who your decision makers are. Second, list their most important decision needs, or ask them what they are. These decisions usually include some of the following: Should we spend our money on another round of this? What, if anything, should we add or drop? What were the anticipated and unanticipated outcomes of the effort? What, if any, harm was done? Should the type or number of participants be the same or different? Could we get the same effects with a different approach?

2. What data will help answer these questions?

You have a wide choice of data. The programs we evaluate and others we’ve observed focus on some or all of these measures:

- program satisfaction
- knowledge and skills acquired
- mentees’ career progress (e.g., promotions, raises, career decisions)
- mentees’ self-confidence
- employee retention
- contacts made/people met
- risks taken
- mistakes avoided
- money saved
- products or processes created
- best features of the program
- program weaknesses
- recommendations for improvements

Your most difficult task will be determining what exactly the mentoring component (as opposed to other factors) contributed to these changes.

You also have to decide among data sources. Will you contact mentees, mentors, mentees’ managers, program planners, others in a position to know something about the program? Will you analyze written documents such as training materials and the mentees’ development plans?

3. Who should do the evaluations: an external expert, the planning team, or some combination?

The planning/implementation group should collect at least some of the data internally. Examples include: the numbers of mentors and mentees, participants’ satisfaction with the training they received, their satisfaction with the mentoring experience as a whole, and whether or not planned activities actually occurred. Participants can turn in reports on what they did together, what they learned, and suggestions for improvements. You can also gather short-term retention numbers (Do participants stay with your organization after they complete the program?).

In addition, we strongly advise you to get outside evaluation help. An outside source that specializes in mentoring evaluation and guarantees confidentiality will encourage your participants to share more detailed and candid information. You and the team can strategize with the evaluators on the data needed, the items to be asked, the procedures, and what you want the report(s) to cover.

4. What mistakes could we make?

The biggest mistake is not collecting any evaluation data. Probably the second is generalizing too much from a small number of data points. (In our opinion, you should have at least 20 respondents, and preferably more.)

You can also make mistakes in selecting respondents, wording questions, interpreting answers, and drawing conclusions from the data. We think it’s also a mistake to present too technical, dry, or sterile a report, one without useful illustrations and quotes.

The sooner you think about evaluating your program, the better. Ideally, designing the evaluation is one of your earliest tasks. If you haven’t done much up to now, why not start this week?

Those are a few of our thoughts. Let us hear from you about how you’re currently evaluating your mentoring efforts. The Mentoring Group is available to work with you in your evaluation. Contact us at