We recently met with the fundraising team at a well-regarded NFP and heard an interesting donor story. The organisation had a donor who for many years made a small regular gift. At a certain point, the donor churned and the organisation lost touch. But despite ceasing their regular donations years prior, the donor nonetheless left a sizeable bequest.
The question posed to Dataro was: is it possible to predict bequests or major gifts? Surely some things are inherently random and thus unable to be predicted.
Of course, there is truth to this. Major gifts and bequests are rarer events than, say, regular giving (RG) upgrades or churn events, which means NFPs usually have less data on them and they are more difficult to predict. But that doesn’t mean it is impossible. Here is how we have approached the problem in the past, using machine learning.
1. Know your donor
In the immortal words of Sherlock Holmes, “Data! Data! Data! I can’t make bricks without clay!” To successfully predict bequests, the first step is to take a look at your data. That means not only your donors’ history of financial giving, but also details like where they live, their level of email or mail engagement, petition signing, history with your products or services, and other factors. Generating predictions starts with ‘ingesting’ this rich tapestry of information about every donor in your database, as you may well find other signals that indicate an upcoming major gift or potential bequest. The volume and history of information is important, so we would only deploy our approach in the right conditions.
In historic examples, we’ve found that key indicators include a combination of: the number of total appeals the donor contributed to, the number of recent appeals contributed to, the donor’s regular giving history (both longevity and recency), the size of their largest ever gift, the total amount they have given (LTV), their level of engagement in non-financial giving such as signing petitions or volunteering, their method of payment, their age and their number of late payments or card declines.
Top 10 important features in predicting bequests

| Rank | Feature |
| --- | --- |
| 1 | Number of total appeals contributed to |
| 2 | Number of recent appeals contributed to |
| 3 | Regular giving history (longevity and recency) |
| 4 | Size of largest ever gift |
| 5 | Total amount of giving (LTV) |
| 6 | Engagement in non-financial giving (such as petition signing) |
| 7 | Method of payment |
| 8 | Age |
| 9 | Number of late payments |
| 10 | Number of card declines |
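These indicators can be derived directly from raw gift records. Below is a minimal sketch in plain Python; the record layout, field names and the 730-day “recent” window are illustrative assumptions for the example, not Dataro’s actual schema.

```python
from datetime import date

# Hypothetical gift records for one donor: (appeal_id, gift_date, amount).
# Appeal names, dates and amounts are invented for illustration.
gifts = [
    ("appeal_2019_tax", date(2019, 6, 1), 50.0),
    ("appeal_2021_xmas", date(2021, 12, 1), 80.0),
    ("appeal_2023_tax", date(2023, 6, 1), 120.0),
]

def donor_features(gifts, today=date(2024, 1, 1), recent_days=730):
    """Derive a few of the indicators listed above from raw gift rows."""
    amounts = [amount for _, _, amount in gifts]
    recent = [g for g in gifts if (today - g[1]).days <= recent_days]
    return {
        "total_appeals": len({appeal for appeal, _, _ in gifts}),
        "recent_appeals": len({appeal for appeal, _, _ in recent}),
        "largest_gift": max(amounts, default=0.0),
        "ltv": sum(amounts),
    }

features = donor_features(gifts)
```

The same pattern extends to the non-financial indicators (petition signatures, volunteering, engagement events) by joining in those activity tables.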
2. Enrich your data
Most NFPs know quite a lot about their donors, but many other sources of information can help to build a detailed overall picture, including enrichments from publicly available data sources such as ABS Census data. We overlay this information to learn more about your key donor segments and behaviours (inferred from aggregated third-party materials).
3. Train a machine learning model
One of the advantages of machine learning over classic ‘segmentation’ or cohort-selection tools such as RFM analysis is that, given enough data, machine learning programs can pick up subtle patterns and novel drivers of cohort selection that aren’t apparent to the naked eye. Traditional analytical techniques like RFM only consider a few basic features – such as recency, frequency and value of gifts – which means you miss out on the benefit of lots of potentially important information, including many factors we know are relevant.
To predict bequest or major giving behaviour, we would use ensemble methods: a machine learning technique that combines multiple learning algorithms, each attuned to different things, for better predictive performance. For example, some of the models used will be better at picking out bequests. Tree-based methods are particularly robust, and we often employ Random Forests and XGBoost in our process. Having said this, it is also very useful to include linear methods such as logistic regression, which we include both to check model accuracy against known methods and to assist in model interpretation and insights.
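As a rough illustration of the ensemble idea, here is a sketch using scikit-learn on synthetic data. The dataset, the 5% positive rate, and the use of `GradientBoostingClassifier` as a stand-in for XGBoost are all assumptions made for the example, not a description of our production setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic donor data: 10 features, with "bequest" as a rare positive class.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05],  # rare-event class balance
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Soft voting averages each model's predicted probabilities, combining
# tree-based learners with an interpretable linear baseline.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
probabilities = ensemble.predict_proba(X_test)[:, 1]  # bequest likelihood
```

Ranking donors by these probabilities is what turns the model into an actionable prospect list.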
4. Check model accuracy
Building complex models and generating mathematical predictions is one thing, but they need to be accurate. The next step involves verifying model accuracy by checking our predictions against historical data. We look at the entire history of an organisation and train models to predict outcomes in withheld subsets of the data, to check and improve model accuracy. Of course, accuracy will depend on many factors, including the volume of data available, which is why including data from the organisation’s entire fundraising history is important for success.
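The withheld-subset check can be sketched as cross-validation: each fold trains on part of the history and scores on the part held out. The synthetic data and the metric choice (ROC AUC, which copes well with rare positives) are illustrative assumptions, not the exact procedure described above.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for an organisation's real giving history.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)

# Each of the 5 folds withholds 20% of donors for scoring; stratification
# keeps the rare positive class represented in every fold.
scores = cross_val_score(
    LogisticRegression(max_iter=1000), X, y,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="roc_auc",
)
mean_auc = scores.mean()
```

A stable mean AUC across folds is the kind of evidence that a model will generalise beyond the data it was trained on.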
5. Take a holistic view
Finally, we would encourage organisations to take a broader view and seek to predict the lifetime value of donors, not just the single outcome of whether or not a gift will be made. Taking a holistic view of a donor’s entire potential future LTV means that we can always account for rare (but large) one-off giving events like bequests or major gifts in all of our models.
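How a rare, large event enters an expected-LTV figure can be shown with a toy calculation; every probability and dollar amount below is invented for illustration.

```python
def expected_ltv(annual_gifts, years_expected, p_bequest, avg_bequest):
    """Expected future value = routine giving + probability-weighted bequest."""
    return annual_gifts * years_expected + p_bequest * avg_bequest

# A $200/year donor expected to keep giving for 5 more years, with a
# 2% chance of leaving a $50,000 bequest:
value = expected_ltv(annual_gifts=200.0, years_expected=5,
                     p_bequest=0.02, avg_bequest=50_000.0)
# 200*5 + 0.02*50000 = 1000 + 1000
```

Even a small bequest probability can double a donor’s expected value, which is exactly why single-outcome models understate the worth of long-lapsed regular givers.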
Predicting major giving and bequests is a difficult business and model accuracy can vary with the volume of data available and many other factors. However, selection methods based on machine learning will tend to deliver far more accurate cohorts and identify potential participants more effectively than simple classification tools or worse, leaving it to chance. You can find out more about our approach here.