How AI Can Make You Better

2026-04-06_ai-work.png

I don’t think AI is going to save you time.

But I believe it can help you produce better work.

Like many people, I use AI every day.

Recently, I’ve been experimenting with using AI as a feedback coach.

Give it a post, a paper, a presentation – and ask it to suggest improvements.

It doesn’t just catch typos.

It shows the holes in your logic, points out when the narrative arc breaks down, and suggests where to trim the fat.

I tell it to review my work. Not rewrite. That’s my job.

But the work often takes longer to finish. A piece might take seven revisions rather than three.

There’s a saying – “Art is never finished, only abandoned.”

AI makes you spend more time on the task before you abandon it, because it spots the problems you need to fix.

Using AI can make you better at what you do.

But only if you use it as a coach, rather than a substitute.

Are you using AI to do the work, or to show you where the work isn’t good enough yet?

Fix The System Before Fixing People

2026-04-04_management.png

I’ve been talking to managers recently about what really frustrates them.

It’s how much of their time is drained managing performance.

But the problem often isn’t the people. It’s the system.

For example, setting targets often results in gaming behaviour. The objective becomes hitting the target, rather than doing the work in a way that’s best for the customer.

We see this play out often:

  • the surgeon who avoids cases that hurt their stats
  • the salesperson who offers a ridiculous discount to get their bonus
  • the CEO who uses layoffs to maintain quarterly EBITDA

People will behave in ways that are rational for the system that employs them.

If you want them to act differently, you have to change the way the system works.

That’s the insight Deming had.

“A bad system will beat a good person every time.”

Next time, try fixing the system before fixing the people.

Value Hides In Invisible Markets

2026-04-03_mismatch.png

Why is it so hard to find employees and clients when there is infinite visibility on LinkedIn?

Here’s the problem.

Visible markets are crowded. Opportunities sit in invisible ones.

LinkedIn makes it easy to apply for jobs, so thousands respond to an advert.

Thinking of hiring a consultant? A post will attract hundreds of suggestions.

The problem is baked into the design of the system, creating a mismatch between demand and supply.

Only a small fraction of total demand is visible.

But the supply side floods in, because people search where it’s easy to look.

So the visible market becomes the most competitive one.

Here’s the strategic takeaway.

If it’s easy to reach, it’s already saturated.

If you want an edge, you have to go where others aren’t looking.

The Thing About Luck

2026-04-02_future.png

You know what they say about luck, right?

Hint: It’s about making a decision at the right time.

In sustainability work, we often assume we can reduce emissions in a straight line.

But change happens in steps, not lines.

Our emissions are shaped by the current system – our assets, processes and ways of working.

If we want to reduce them permanently, we need to redesign the system.

Take a simple example.

If your employees travel a lot for work, your emissions will stay within a range as long as you keep operating the same way.

Significant reductions require a step change.

Can we do some work remotely? Is low-carbon transport an option?

But those kinds of shifts aren’t always possible.

They tend to open up in specific windows – when assets reach end of life or when policy and economics make new options viable.

You don’t control when those windows appear, but you can be ready for them.

And that’s the thing about luck.

It’s when preparation meets opportunity.

The Chain Of Understanding

2026-03-24_chain-understanding.png

I came across a term recently that should guide how we use AI.

The “chain of understanding”.

If you watch cop shows, you’ll have heard of the chain of custody – the process that ensures evidence can’t be tampered with.

We need something similar for AI.

AI can generate huge amounts of content – but our ability to absorb and verify it hasn’t changed.

So do we really read and understand what it produces? Or do we trust that it’s right?

Yesterday, I asked two different AIs the same question. They gave two confident but contradictory answers.

That’s the risk.

In many contexts, choosing the wrong answer has an impact radius – affecting millions in investment and rippling through supply chains.

The issue isn’t speed.

It’s whether you understand the logic underpinning a decision or how a program actually works.

That’s the chain of understanding.

AI can generate answers. It can’t take responsibility for them.

If you can’t explain it, you probably shouldn’t act on it.

Improving Problem Situations Rather Than Solving Problems

2026-03-21_problem-situations.png

As an engineer, I want to solve problems. As a consultant, I’ve learned that’s not enough.

Life rarely gives us neat, well-defined problems.

It gives us messy situations, with argumentative stakeholders, unreliable data, and tensions over culture and power.

We don’t operate in a laboratory. We operate in a wicked, messy swamp, and it takes soft skills to address the practical issues.

You can see the world as full of problems to solve, or as problem situations to improve.

Success looks different in the second view.

It’s not about the “right answer” but about getting stakeholders to commit to the next action.

Because without commitment, even the best solution goes nowhere.

That means:

  • learning your way through the situation
  • negotiating between perspectives
  • agreeing a direction
  • committing real resources.

And here’s the twist:

When you focus on what people actually need, you often end up with better solutions anyway.

References

Mingers, J. (2011). Soft OR comes of age – but not everywhere.

Professionals Create Accountability

2026-03-20_cat-mouse.png

You know that question you ask your cat when it drops a mouse at your feet?

“I can see you’re very proud of yourself, but what do you want me to do with this now?”

That’s how I feel when someone presents me with work they’ve made using AI.

Here’s an example. Let’s say you ask a junior consultant to generate market research and a go-to-market plan for one of your clients.

You get given 20 pages of output.

You ask the junior to talk you through the material.

Ideally, they’ve read the 20 pages, validated the information, sense-checked against prior knowledge, and can confidently articulate the situation and explain what needs to be done next.

All too often, you’ll get the same blank look the cat gave you when you asked it the question.

AI does not give you less work. In fact, you’ll probably end up with more.

Here’s the takeaway.

AI creates output.

Professionals create accountability.

Stop Hiring For Tasks – Build Systems Instead

2026-03-17_frameworks.png

When you have the same conversation six times – something is going on.

“We know what we need to do – the challenge is getting it done with the resources we have.”

The instinctive response is to hire.

Build the team. Add capacity.

It’s 2026. The way we work has changed.

You need three things:

  • Leaders: internal champions who move things through the organisation
  • SMEs: internal and external experts to plan and execute jobs
  • Systems: tools, automations, agents and processes that complete tasks

The real shift is deciding what people should do, and what they shouldn’t.

Five years ago, I hired teams to get data work done.

Now, we build processes.

Identify where the data lives. Connect to it. Speed up how tasks are executed.

We still need people – they just do different work.

  • Negotiating data access.
  • Ensuring data flows regularly.
  • Providing quality control.

But this takes 90% less time than the old way.

That frees up time for what actually matters.

What is the data telling us? Can we trust it? How do we communicate insights? What does this mean for strategy? What do we do next?

Stop hiring for tasks.

Start building systems for transformation.

Explanation As Strategy: Learning From Schopenhauer

Arthur Schopenhauer, as an old man, was asked what he thought about his life’s work on philosophy being ignored, and replied that he didn’t care at all. “They will find me”, he said.

This extract is at the end of a Peter Checkland (1992) paper I was reading, and so, of course, I looked up Schopenhauer.

I’ve recently been studying “explanation” as a sense-making device, in the context of strategy making by organisations.

In essence, this is the idea that structure is not something external to people, but something they construct based on how they explain the world as they see it.

In practical terms, this makes the difference between arguing for investing in sustainable technology or waiting, between starting a war or compromising to keep the peace. Explanations that make or break the future.

So what are we trying to explain?

Schopenhauer argued that there are four kinds of objects and four corresponding types of explanation.

  1. Material objects, explained with cause-and-effect reasoning.
  2. Abstract objects, explained with logic.
  3. Mathematical and geometrical objects, explained with numbers and spaces.
  4. Psychologically motivating objects, explained by motivation or moral reasoning.

Problems arise when we try to apply one style of explanation to a different type of object.

I see this problem all the time in my consulting practice.

Here’s an example. You have a leadership team that wants to build a decarbonised company. Should you therefore replace your gas boiler with an air-source heat pump (ASHP)?

The first problem is one of motivation. Does leadership believe in the case for decarbonisation? Are they forced to act by supplier pressure? By regulation? What motivates them?

The second is a cause and effect problem. Will the ASHP meet heating demand in all situations? Are operating costs equivalent?

It’s when we mix modes of explanation that we end up with circular and stalled thinking.

Progress becomes easier when we use the right kind of explanation to match the problem we’re facing.

A good reminder when working on strategy.

Managers: Focus On Removing Obstacles

2026-03-13_process.png

Why do motivated teams struggle to make progress?

I’ve been reflecting on how we design processes that work.

It’s easy when you’re building a tool that you’re going to use, working directly with a client, or designing with a small, tightly-knit team.

It gets harder as groups get bigger.

Imagine crawling through a pipe.

It’s easy if the pipe is large enough.

But what if the pipe is too small, or filled with obstacles?

Suddenly every small movement becomes hard work.

It’s not what’s outside the pipe that slows us down – the weather, people shouting.

It’s the constraints that matter.

We often think that managing progress means working harder, putting more controls in place, or creating incentives.

But maybe the real task of management is simpler: removing obstacles that stop teams from getting work done.