It’s an inconvenient fact of life that people often want us to account for what we do. Especially when they’re giving us money to do something.
In these situations they’d like us to do something – grow their money, help a community, eradicate a disease – in other words, make an impact of some kind.
They ask, quite reasonably perhaps, that we measure and show what impact we’re having.
Mary Kay Gugerty and Dean Karlan argue that we need to be careful when doing this because we could end up spending more trying to measure things than it’s worth doing – the money could be better spent making an impact.
The problem is that there is lots of data that we can collect. How can we tell what’s worth collecting and what isn’t?
They argue that a good system has the right fit – giving the people with the money reassurance that the work is having an impact, and the people responsible for decisions information they can act on.
In particular, they say that we need to think about five kinds of data – two that we probably already do, and three that we need to think about some more.
1. Financial information
Most organisations will have some kind of overall financial reporting, if only for tax reasons. They’ll have a profit and loss statement and a balance sheet.
What they might not have is good quality costing that tells them whether they’re spending money wisely or whether certain programs have a better return than others.
When thinking about spending money, being able to work out where it will make the most difference could be the difference between spending wisely and just spending.
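For illustration only – these figures and program names are invented, not from the book – the kind of costing comparison being described might look as simple as this:

```python
# A minimal sketch of cost-per-outcome comparison across programs.
# All figures are made up for illustration.

programs = {
    "tents": {"cost": 50_000, "outcomes": 2_000},     # families sheltered
    "clinics": {"cost": 120_000, "outcomes": 3_000},  # patients treated
}

def cost_per_outcome(cost, outcomes):
    """Return the spend required for each unit of outcome."""
    return cost / outcomes

for name, p in programs.items():
    print(f"{name}: ${cost_per_outcome(p['cost'], p['outcomes']):.2f} per outcome")
```

The point isn't the arithmetic – it's that without per-program costing, this comparison can't be made at all.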
2. Activity or implementation information
The second thing we can tell fairly easily is how busy we are. How many tents have been sent out, how many doctors are working in the field or how many servers are in the office.
We can count the busy bees and what they’re doing.
The point is whether what they’re doing is worth doing – does it advance the aims of the organisation?
In some cases, if it’s not worth doing well, it’s not worth doing at all.
3. Targeting information
Then we need to think about whether what we’re doing is helping the right people.
Whether it's an aid program or a new computer system – who are the people that will be affected? Is it a large number, or will it help only a small fraction of a population?
The more we know about who we’re doing something for, the more likely it is we’ll do it right.
4. Engagement information
The next thing to look at is whether people are actually spending time with the thing we’ve put in front of them.
Take mobile apps, for example. The thing that makes an app live or die is whether it gets used.
An interesting experiment on the iPhone is to turn on the setting that offloads unused apps – they're removed automatically and re-downloaded when you open them. Of the thirty apps on my screen, there are about five I use all the time.
And arguably, all of them could wait till I get to a computer instead of spending my time distracted by the screen.
5. Feedback information
The final thing we need to do is ask people how we’re doing.
Do they like what we do, could we do anything better?
We’ll work harder to deliver better service when we know that we’re going to ask users how we did.
In summary… just collecting data isn’t enough.
Measuring lots of things or creating complicated calculations isn’t going to help.
We’ve got to get better at getting the right kind of information that tells us if we’re on track or way off.
Then, we need to act on what we’ve learned to make things better.