It is the recoverability criterion that is the crucial one in action research. If we imagine a spectrum of knowledge acquisition from experimental natural science at one end to story telling at the other, then along that spectrum will be very different criteria for judging the truth-value of the claims made. Traditional scientific experiments would be at one end and at the other, the weaker criterion that this (research) story is plausible. However, action researchers have to do better than simply settling for plausibility – Peter Checkland and Sue Holwell
I have a stack of books that fall into the genre of what Shawn Coyne calls “narrative non-fiction” and I’m starting to wonder whether they’re worth reading at all.
The problem is telling the difference between stuff that is true and stuff that just sounds true – the essential issue with anything that falls outside the remit of your basic set of physical sciences.
If you want to learn how people tick and how economies work, what you need is carefully controlled experiments – from which you can figure out How Things Really Work.
The scientific method has been so successful at explaining the world around us that it now feels like the only way to create knowledge.
Except, a lot of the time, there is a horrendous amount of noise in the situation you are trying to understand.
For example, let’s say you’re a consultant trying to come up with a marketing plan for a large organisation.
At one extreme everything you need to know is catalogued and figured out to two decimal places.
The Nielsen Norman Group, for example, have identified 85 factors that go into making an About Us page that helps your users.
It sounds very scientific – looking at over a hundred sites and observing 70 users – collecting data for the research that eventually resulted in this set of insights.
They write, with no trace of irony, that “Organizations that stood out from the crowd in favorable ways used tactics that helped them appear authentic and transparent.”
In other words – do these things and you will be able to fake being real.
Now, the point is not to have a go at the research – there’s nothing duller than trying to rubbish someone else’s work – but to understand how reliable this kind of research might be.
It’s likely that if someone else replicated the research they would end up with a different set of factors – none of the 85 on the list are going to be on the same level as, say, gravity.
The best you can hope for is that if you sat down and wrote a piece of copy that incorporated all the features it would outperform whatever was there already or “average copy”, whatever that is.
In a nutshell – it’s really hard to come up with an objective and neutral way of looking at things that involve people.
Systems that involve people are just plain messy.
So, when you read books that tell you how to solve problems that involve dealing with people you need to approach them with some scepticism.
And that’s because most of them make arguments that are plausible – but not much more.
What does that mean?
It means that if you want to write this kind of narrative non-fiction book what you do is collect a load of research, sift through it to come up with a Big Idea, search for stories that illustrate what is happening and put them all together in an engaging, conversational, easy-to-read package.
When you read these things the tips and tricks and hacks sound good – and you’re keen to try them out.
I try these out just as much as anyone else does – from David Allen’s GTD to Morning Routines from Tim Ferriss.
But these tips, while entertaining, do not help us move towards sustainable improvements in complex situations.
For those we need something a little more rigorous, something like action research – a process where we get involved in the situation as a participant AND researcher aiming to generate knowledge which is, if not repeatable, at least recoverable.
What this means, going back to our marketing example from earlier, is that the thinking and steps we follow to do what we do should be capable of being examined and critiqued by someone else.
Which is a scary thing to allow – and so we don’t. Usually.
But if you wanted to, simply because it was the right thing to do, what should you keep track of in the first place?
In a paper called Validating action research studies: PEArL, Donna Champion and Frank Stowell suggest that we should record five crucial elements to help us think about what we've created.
In the marketing project example – we might start with the participants. Are we satisfying the whims of the business owner or are we responding to a crucial change in the market?
Who is engaging with the project? Is it seen as something that is a one-off or do you have a team within the company that is interested and eager to make a difference?
What is the authority structure within the firm – is there someone who can make decisions or is this a project that is being done with no hope of taking action at the end?
What do the relationships look like? Are they collegial or based on power dynamics and politics?
Finally, is real learning being generated – not wishy washy marketing vaporware but real, reflective thinking that looks at things warts and all?
And a handy mnemonic for remembering these elements – Participants, Engagement, Authority, relationships and Learning – is PEArL, where the small r represents the importance of the soft relationships that exist within the group.
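If you did want to keep track of these things on a real project, the record could be as simple as five short answers and a check for the gaps. Here is a minimal sketch in Python – all the names and example answers are my own invention, not anything from Champion and Stowell's paper:

```python
# A hypothetical record of the five PEArL elements for a project,
# so the reasoning behind an intervention can be revisited and
# critiqued later. Field names and examples are illustrative only.
from dataclasses import dataclass, fields


@dataclass
class PEArLRecord:
    participants: str   # who was involved, and why them
    engagement: str     # one-off exercise, or an ongoing team?
    authority: str      # who can actually act on the findings
    relationships: str  # the small "r": collegial or political?
    learning: str       # what reflective learning was generated


def missing_elements(record: PEArLRecord) -> list[str]:
    """Return the elements left blank – the gaps a critic would probe."""
    return [f.name for f in fields(record)
            if not getattr(record, f.name).strip()]


record = PEArLRecord(
    participants="Marketing lead, two designers, the founder",
    engagement="Weekly workshops over three months",
    authority="Founder signs off; budget already approved",
    relationships="",  # deliberately left blank
    learning="Post-project review written up and shared",
)

print(missing_elements(record))  # -> ['relationships']
```

Nothing clever is going on here – the point is only that writing the five answers down, and noticing which ones you can't fill in, is most of the discipline.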
The chances are that if you look at the vast majority of commercial projects through the PEArL lens you will find all kinds of nasties and unmentionables hidden away.
Because in most organisations day-to-day life is about face and power and hierarchy and order – not about truth.
That’s the thing that “scientific” types need to understand about the real world.
It’s about people.