Ratings Reconciliation: When Everyone Has an Opinion on Prospect Capacity, Who Is Right?
By Dan Lowman, Senior Vice President, Grenzebach Glier and Associates
Some variation on the following scenario plays out every day in development offices around the world: A third-party wealth screening indicates that a person is a good prospect for a gift of $100,000. The prospect researcher does some additional research and rates the prospect at $250,000. An enthusiastic senior staff member (academic dean, hospital administrator, research center director) meets this person at an event and emails the development office: "definitely millions here." Yet when the gift officer enters the prospect's data into the system, the rating is $25,000.
A series of ratings reviews conducted by GG+A over the last two years finds that these differences overwhelmingly arise not from fundamental differences in how prospects are evaluated, but from one actor in the equation having different information than the others. In the example above, the screening may have had limited company sales information, while the researcher was able to find something more current and comprehensive. The dean may have been reacting to the designer clothes and the stories he was told about a spectacular art acquisition, while the gift officer may have learned about the illiquidity of the prospect's business wealth, discovered he has three children in college at the same time, and learned he is paying off a large pledge to the local hospital foundation.
There is, as yet, no formula that can accurately account for all of the nuances that lead to a successful solicitation. GG+A encourages organizations to develop a coherent rating structure that treats all of these evaluations as inputs into a "consensus" rating, defined simply enough as "Realistically, how much will we ask this person for within the next X years?" This then facilitates a discussion among all of the actors. The end result is a goal, accepted by each stakeholder, and all future prospect activity can be designed around a single question: how does the institution ultimately solicit and secure a gift of that size, in that time frame?
The rating can be revised as more information becomes available, and a consensus rating becomes a powerful tool for reporting, forecasting, and planning. Many development offices have expanded this framework to include a financial rating (how much could the prospect give to any charity?), a next-ask rating (for how much will we ask in the next solicitation?), and a "remainder rating" (how much will we solicit before X date, such as the end of the campaign?).
Such an approach needn't be rocket science. At its core, facilitated discussion among everyone with relevant information will give the whole office more to work with and lead to better outcomes.