SVA Quarterly

The ‘whys’ of measuring outcomes for grant makers

Five common reasons for measuring outcomes and what they mean for good funding practice.

Summary
  • Being intentional about why and how we measure matters, because measurement can impose a significant burden on grant recipients.
  • When it comes to measuring outcomes, funders need to ask themselves what they will use the results for.
  • The article identifies five common reasons: to track the performance of a grant, learn and understand what works, demonstrate total impact as a funder, determine what should be funded in future, and compare different organisations or programs.
  • For each, the article describes what they are and the implications for how outcomes are measured.

Measuring outcomes is becoming a standard part of any funder’s toolkit for understanding the impact they are creating. Government departments across Australia have started integrating reporting on outcomes into their funding agreements, as have many philanthropic funders. Yet being intentional about why and how we measure is important: measurement can impose a significant burden on organisations, consuming precious resources and time.

What are some different approaches to outcomes measurement?
There is a spectrum from ‘entirely bespoke outcomes for each grant’ to ‘every organisation measures against the same outcomes’. The more bespoke the outcomes measurement, the more accurately it will reflect what the grant is doing. The more that outcomes are shared, the easier they are to compare and combine, but potentially the less aligned they are to the impact being created.

One of the first and most fundamental questions that any funder should ask as they think about their approach to measuring outcomes is ‘what will we use the results for?’ – the ‘why’. Outcomes can be used for many purposes, but each purpose has implications for the way outcomes are measured. Several funders have come to us with highly ambitious aims for what they want their impact measurement to achieve, yet a limited understanding of what those aims would require.

Here, we break down some of the common reasons funders have for impact measurement and how they influence the approach. The common reasons are:

  1. Track performance of a grant
  2. Learn and understand what works
  3. Demonstrate total impact as funder
  4. Determine what should be funded in future
  5. Compare different organisations or programs.

Importantly, these aims are not mutually exclusive – funders can (and should) have multiple uses for their reporting. However, as we will see, different aims can also pull in contradictory directions.

1. Track performance of a grant

Description: Outcomes can help understand whether an organisation is doing what it had planned to do, and whether it is succeeding. For example, a program aiming to support students with school refusal might measure changes in school attendance.

Implications: If you are measuring outcomes simply as a ‘check and balance’, consider a focused, low-rigour approach: asking organisations to measure outcomes can be a resource-intensive way of merely confirming progress. You might wish to measure only two or three key outcomes, or consider whether more narrative measures of impact could serve your purposes. As an example, the CDC Foundation requested grant reports in the form of ‘one page vignettes’ with stories and photos rather than extensive quantitative reporting.1

If checking grant progress is the primary aim, it also suggests that outcomes measurement can be at the more bespoke end of the scale. This means that outcomes will be less comparable between organisations, but more likely to accurately reflect the impact being created.

Using outcomes to track progress can pair well with a more flexible approach to funding. By focusing on outcomes rather than tracking activities or outputs, you can allow organisations to pivot as circumstances change while still working to maximise the agreed impact.

An extension of the idea of tracking grant performance is actively tying payments to outcomes. This comes with different implications for outcomes measurement. In outcomes-based contracting, performance is determined against pre-agreed outcomes, which in turn determine payments. For this to work well, the measurement of outcomes should sit at the more rigorous end of the spectrum. This rigour and discipline can create a positive feedback loop for service providers, and a level of transparency that allows service providers and commissioners to have meaningful conversations about performance. However, it also means that more time is required upfront to agree outcome measures that reflect the impact of the program, are easily implemented and understood, and do not create perverse incentives.

2. Learn and understand what works

Description: Outcomes can be used by funders to collaborate with organisations to learn and improve, supporting innovative ideas. They can be crucial to understanding what works and what doesn’t.

Implications: Successfully creating an environment for learning requires a growth mindset from the funder and a strong trusting relationship between the funder and organisation. Without this, organisations will be less willing to be frank and open about the challenges they encounter and how these might be overcome. Either be guided by organisations on what outcomes are most relevant, or (if you are highly engaged in the space) consider co-developing a set of outcomes. Emphasising the shared goals between funder and grantee can be useful for fostering the necessary trust.

Quantitative measures of outcomes can tell you what has happened, but rarely tell you why it has happened. To enable learning and growth, quantitative outcomes will generally need to be accompanied by interviews or focus groups (i.e. qualitative data) to understand the nuances of what has supported or prevented impact.

Case study: Social Impact Bonds

Social Impact Bonds (SIBs) provide an innovative funding mechanism to enable service providers to enter into outcomes-based contracts with governments.

An objective of many SIBs in Australia has been to learn and generate evidence about what works. This has required the use of a counterfactual or baseline – an estimation of what outcomes would have been achieved in the absence of the intervention – to understand the impact of the program.

A range of counterfactual approaches, varying in complexity, cost and robustness, have been employed for SIBs, such as a historical baseline, a pre-post approach, a propensity score matched control group and a randomised controlled trial.

An example is the Newpin SA SIB which measures a single outcome – the proportion of children reunified with their families over and above the counterfactual proportion (i.e. what would have happened without the intervention). Each child is assessed 18 months after entry to the Newpin program.
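
To make the arithmetic concrete, here is a minimal sketch of an ‘over and above the counterfactual’ calculation of the kind the Newpin measure describes. The cohort size, reunification count and counterfactual rate below are invented for illustration; they are not actual Newpin SA figures.

```python
# Hypothetical sketch: outcome achieved over and above a counterfactual.
# All figures are invented for illustration.

def impact_vs_counterfactual(achieved: int, cohort: int, counterfactual_rate: float) -> float:
    """Percentage-point difference between the observed outcome rate
    and the estimated counterfactual rate."""
    observed_rate = achieved / cohort
    return observed_rate - counterfactual_rate

# e.g. 60 of 100 children reunified, against an estimated counterfactual of 45%
impact = impact_vs_counterfactual(achieved=60, cohort=100, counterfactual_rate=0.45)
print(f"Impact: {impact * 100:.0f} percentage points above the counterfactual")
```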

3. Demonstrate total impact as a funder

Description: Funders often wish to measure and communicate their overall impact, either to a Minister for a government department, or to a board or set of trustees for a philanthropic organisation. Measuring outcomes can be one way of drawing together the overall impact of the work being funded.

Implications: Measuring the true impact of any funder is highly complex. It requires reckoning with deep questions: What would have happened if the funder didn’t exist? How long will the change last for? Who else contributed to it? Yet there is no way to construct a world without that funder and see what happened. This means that any approach to articulating ‘total impact’ will always come with caveats.

There are, broadly, two potential approaches – measuring change against the problem being solved, and ‘rolling up’ reporting from multiple sources.

Measuring overall change

If the funder is focused on a particular sector or issue, it may be worth measuring metrics related to that issue broadly. For example, a government department with a specific remit around homelessness may want to track how many people are sleeping rough. This kind of measurement is often best done separately from any individual grant in order to get a better and more holistic picture. Of course, this approach makes it even harder to attribute any changes directly to a particular funder or program; however, it broadly illustrates whether the problem is being addressed.

Rolling up reporting

Rolling up reporting is the idea that you can ask multiple organisations to measure the same outcomes, then combine those results, as the sketch below illustrates. This can work well if all the funding is supporting a similar group in a similar way. For example, all disability housing is focused on providing a good home in which people with a disability can live safely and independently. Because of this, you can create a shared set of outcomes, as SVA did when supporting the development of the Disability Housing Outcomes Framework. This is being used by funders to get a holistic understanding of the impact they are creating.
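
As a simple illustration of why shared outcomes make rolled-up reporting possible, the sketch below combines one shared measure across several funded organisations. The organisation names and figures are hypothetical.

```python
# Hypothetical sketch: 'rolling up' one shared outcome measure across
# multiple funded organisations. Names and numbers are invented.

reports = [
    {"org": "Org A", "participants": 40, "achieved_outcome": 28},
    {"org": "Org B", "participants": 120, "achieved_outcome": 66},
    {"org": "Org C", "participants": 25, "achieved_outcome": 20},
]

# Because every organisation reports the same measure, the raw counts can
# simply be summed into a portfolio-level result.
total = sum(r["participants"] for r in reports)
achieved = sum(r["achieved_outcome"] for r in reports)
print(f"Portfolio: {achieved} of {total} participants achieved the outcome "
      f"({achieved / total:.0%})")
```

The aggregation only works because each organisation measures the same thing; with fully bespoke outcomes there is nothing meaningful to sum.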

The counterbalance, of course, is that by asking organisations to measure the same outcomes you risk having them measure outcomes that don’t capture the change they are creating. The less similar the grants are (in terms of what they do, the population they work with, or the location they operate in), the more difficult it becomes to align on shared measures of impact that make sense. An example of this is the Department of Social Services (DSS) Standard Client/Community Outcomes Reporting, or SCORE: a standard set of outcomes applied across many DSS grants, from disability support to employment services. A common complaint from the sector is that these outcomes are too generic to accurately reflect the work organisations do.

Case study: ILC funding program

The Department of Social Services commissioned SVA Consulting to create a shared outcomes framework for its Information, Linkages and Capacity Building (ILC) grants in disability. One of its key objectives was to enable rolled-up reporting.

This was highly challenging, however, due to the nature of the grants. The grants under the scheme worked with different cohorts, doing different kinds of activities, in different locations and over different time scales. In addition, the organisations receiving the grants varied hugely in scale and capability. Aligning on a single shared set of measures that accurately represented what each organisation was doing while still allowing reporting to be combined became one of the key balancing acts of the project.

We landed on providing a ‘menu’ of potential measurement options, which gave organisations flexibility at the cost of administrative complexity for DSS. Creating this approach required deep engagement with grant recipients and people with disability, and substantial effort in developing and refining the menu of measurement approaches.

4. Determine what should be funded in future

Description: Funders wish to fund successful, impactful work. To do so, funders may wish to look at programs that have a proven track record of creating outcomes.

Implications: Using outcomes to determine what to fund can be excellent for funders whose goal is to scale successful programs. Outcomes and evaluations can give an understanding of ‘what works’ in order to determine where additional funding should be invested.

However, it is important to make the distinction between not funding an unsuccessful program and not funding an organisation because it ran an unsuccessful program. If a funder is perceived to be determining who should receive funding based on historic success at creating good outcomes, it acts as a brake on innovation as organisations ‘play it safe’ to reduce the risk of their future funding being threatened. It also erodes trust and makes it far less likely that organisations will be open about the challenges they face in implementation.

This means that it can actively conflict with the ‘learn and understand what works’ aim unless care is taken. If funding is perceived to be tied to outcomes (either present or historic), it changes the incentives on organisations and suggests a need for careful consideration of which outcomes to measure and how.

Also note that the reasons that any program succeeds or fails are, to some extent, unique to that program. The potential external and internal challenges faced by a program (e.g. policy change, obstructive stakeholders, key staff retention) mean that any one grant is often just a single data point on effectiveness of an intervention. Rather than relying on any single program’s outcomes, look for trends across similar work and dig into the factors that influenced success.

5. Compare different organisations or programs

Description: Funders wish to maximise their impact, ensuring that their funding is distributed in the most efficient way. Outcomes can be used to determine where funding can be most effective.

Implications: In some ways this is an extension of the above – rather than using ‘did the program succeed or not’ to determine which programs should receive funding, it compares the scale of outcomes created by different programs and tries to use that to decide how funding should be allocated.

This can be very hard for funders to achieve. Determining which programs are more effective than others in a systematic and defensible way is often complex and costly for both funders and grantees.

First, we need to create a common measure (or set of measures) to compare different programs against. As discussed above, the more programs have in common, the easier it is to create a common measure. If you are only comparing programs that support literacy in a specific group of disadvantaged children, then using changes to literacy rates as a common measure makes sense. Often, however, there is no obvious common measure so a proxy must be used. Some sectors have well-defined proxies – population health programs often use Disability Adjusted Life Years (DALYs) as a way of comparing different initiatives.

Many sectors, however, do not have similar proxies. And as above, the more differences there are between two programs, the harder it is to create a common baseline. One method of comparing highly disparate interventions is to use money as the proxy: how much money is saved, or how many dollars of social value are created, by each intervention. This comes with strong caveats, however. The process of arriving at a dollar value must use comparable methodologies for each intervention, and this can require significant resources from both funders and grantees.
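
To illustrate, the sketch below compares two programs on a common proxy, cost per DALY averted; the same ratio logic applies when the proxy is dollars of social value created per dollar invested. The program names, costs and DALY figures are entirely hypothetical.

```python
# Hypothetical sketch: comparing programs on a common proxy measure
# (cost per DALY averted). All names and figures are invented.

programs = {
    "Program A": {"cost": 500_000, "dalys_averted": 120},
    "Program B": {"cost": 750_000, "dalys_averted": 150},
}

for name, p in programs.items():
    cost_per_daly = p["cost"] / p["dalys_averted"]
    print(f"{name}: ${cost_per_daly:,.0f} per DALY averted")

# Program A: $4,167 per DALY averted
# Program B: $5,000 per DALY averted
# A lower cost per DALY suggests greater efficiency, but only if both
# figures were produced with comparable methodologies.
```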

Case study: Arts funder

A funder of multiple arts-based organisations wanted to create an outcomes-based method of determining how much money each organisation should receive. It engaged SVA Consulting to help find a solution.

What became apparent, however, was that the different organisations did radically different work, in different parts of the sector, for different audiences. Moreover, there was no existing common measure of outcomes (or indeed much outcomes measurement at all).

We concluded that creating a rigorous shared outcomes approach would be neither a realistic nor cost-effective way to determine how to split funding between organisations. Instead, we proposed an assessment based on benchmarking which looked at funding of similar organisations across Australia as well as historic funding arrangements and articulated funding needs.

Conclusion

Overall, outcomes measurement is a powerful tool for funders. Think carefully about what you are aiming to achieve, and craft your approach around that. Otherwise, you can end up imposing an excessive burden on organisations and participants with little to show for it.

Notes

1. R Powell, D Evans, H Bednar, B Oladipupo, T Sidibe, Using trust-based philanthropy with community-based organisations during COVID-19 pandemic, Journal of Philanthropy and Marketing, Jan 2023, accessed online 12 Sept 2025.