With AI, Your Job Depends On How You Validate What It Produces

2025-12-12_wrong.png

If you’ve ever presented to a decision maker, you know what happens if they spot something’s off.

If one thing is wrong, the whole thing is wrong.

Go back and check it again.

I’ve recently been on both the giving and receiving end of AI-generated material.

For example, I ran some company reports through a couple of tools I’d built.

These tools used different approaches to analyse the content of a document and give me a report.

Both reports had issues, things that I could spot immediately.

I now had two choices – take the report to someone else and point out that there were some errors.

Or review the source information and cross-check what had been produced.

What would you do?

Well, when I was recently given information that was wrong – or, more accurately, clearly hadn’t been reviewed – I didn’t go ahead with the deal.

I think we need to figure out where AI sits in workflows – and I’m starting to believe it’s not a solution.

It’s not something that’s going to replace all your people – although you might stop hiring for certain roles.

It’s a tool.

What you produce is better or worse depending on how you use it, and how much you put into validating what comes out of it.

You Can’t Move A Big Rock With A Small Lever

2025-12-11_levers.png

Why is it hard to get decarbonization projects away?

The main issue, as far as I can see, comes down to big rocks and small levers.

Regulators want us to understand what risks and opportunities affect our businesses, and work out mitigation pathways.

This isn’t new, especially to Operations Researchers. We’ve had the tools to model alternatives, outcomes and preferences to support decision makers for decades.

What stops us from taking action is that most models don’t capture all the variables that matter.

Let’s take one of the biggest issues. Getting capital.

You run a company and if you invest in machinery you get your money back in 2 years – a 2-year payback.

So it seems rational to ask that every project you approve has a 2-year payback or less – otherwise a better use of the money is to invest in your core business.

Almost every energy saving project has a 5-year payback. So it doesn’t clear that hurdle.

One way of solving this is for an energy services company to finance the project and have savings pay for the capital – a no brainer you would think – except there are balance sheet implications and long-term obligations. So that’s complicated.

Plus – define savings. You’ll have to bring in a financial engineer.

Oh, and by the way – that payback equation – it’s bad maths. You really need to look at NPV to work out whether it’s worth investing.
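
Here’s a minimal sketch of the difference, with illustrative numbers of my own – a 5-year-payback project with a long life – rather than figures from any real project:

    # Simple payback vs NPV for a hypothetical energy-saving project.
    # All numbers are made up for illustration.

    def simple_payback_years(capital: float, annual_saving: float) -> float:
        """Years to recover the capital, ignoring the time value of money."""
        return capital / annual_saving

    def npv(rate: float, cashflows: list[float]) -> float:
        """Net present value; cashflows[0] lands at time zero."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    capital = 100_000        # up-front cost
    annual_saving = 20_000   # yearly energy savings -> a 5-year payback
    life_years = 15          # the kit keeps saving long after payback
    discount_rate = 0.08     # assumed cost of capital

    cashflows = [-capital] + [annual_saving] * life_years

    print(simple_payback_years(capital, annual_saving))  # 5.0 - fails a 2-year hurdle
    print(round(npv(discount_rate, cashflows)))          # ~71190 - comfortably positive

The payback screen rejects the project on the first line. The NPV, which counts every year the equipment keeps saving, says it’s comfortably worth doing.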

In the last 10 years, the one technology that’s flown off the shelves is LED lighting. It met the two-year payback requirement, but do you know what really sealed the deal?

It made products look good. I remember a bathroom supplier telling me that it made their stuff sparkle. Get your product to boost sales and the pitch is much easier.

Getting projects away is not about modelling and optimization. It’s about attitudes, beliefs, values, forecasts, preferences, risks and most of all, leverage.

Making a plan is the easy bit.

Getting a big enough lever to move huge rocks requires a bit more preparation and thinking.

Develop The Art Of Talking About The Important

2025-12-10_importance.png

Selection is the art of picking out the important stuff.

I’ve been thinking about this as I read papers on deliberative processes – how we talk in small groups to take action.

Many of us have to have complex conversations – about strategy, sustainability, resilience, value, impact, and a myriad of other topics.

But it’s difficult to hold more than a few ideas in working memory at one time.

As a result, we’re easily biased, swayed by what we’ve heard most recently, the thing we reacted to most viscerally, the image that comes to mind most easily.

The way we talk about complex things matters. The methods and procedures we use to manage discussions will lead to better or worse decisions.

One way to think about this is using the input-throughput-output framework from Gastil et al. (2012).

Input is the raw stuff – the people and ideas you have to bring together to be able to understand a situation.

In our business, sustainability reporting, that means involving diverse stakeholders, finding data sources and understanding strategies.

Throughput is about engaging with the stuff. How do we work with what’s there? How do we do this efficiently and cost-effectively? We want to work out the best way to work with the complexity of a given situation.

That might be interviews, working out process flows, constructing automations.

And then there’s output. Surfacing the important stuff. The ideas that matter. Analysis that helps support discussions. Conclusions that lead to action.

You can use the ITO model in many situations. Yesterday I was listening to a panel about circular economy principles in construction, and the same ideas came up. We need to reduce inputs – lowering the need for materials at the start of the construction process. We want to maximise throughput – keeping elements that have been made in service for as long as possible. And we want to close the output loop, recovering materials and reusing them as much as possible.

But can you just imagine how complex the discussions are going to be to make this kind of process work?

My prediction: regardless of what’s happening with AI, the ability to have good conversations – high-quality deliberative processes – will still matter in 2026.

Strategies Must Be Dynamic to Succeed

2025-12-09_targets.png

All we have to do to succeed is set a goal and head towards it. If only it were that simple.

Anyone with any experience knows that we always start at point A – a place that we’ve arrived at with a history.

Things have happened. They can’t be changed – but the decisions we took and the stories we told ourselves brought us here.

Now the task is to get to B – and there are multiple ways to get there – endless possibilities.

For example, we know that we need to live sustainably, to use resources in a way that means future generations can also live.

But the solutions aren’t simple. Can we innovate our way to zero carbon materials and living? Can we keep living the way we do and suck carbon out of the air? Do we have to voluntarily use less? Will we be forced to use less because prices will go up?

The end point, B, is not static. It’s a moving target, buffeted by changing politics, values and economics.

Our strategies, therefore, cannot be static. We have to tack our way towards B, making improvements and adjustments as we go along, responding and reacting to the environment.

The good news is that we naturally seem to want to make things better. No one sits down to design a worse and more inefficient solution.

The bad news is that we’re going to be uneasy and unhappy about the rate of change and the extent to which we’re progressing. Which is not fast enough.

If we’re too comfortable, that’s a warning sign that we’re getting complacent.

We have to see the possibilities for a better future, otherwise why act at all?

The takeaway: for strategy to be useful it must be dynamic and responsive.

Because things will change along the way.

How Will You React To The Rising AI Tide?

2025-12-06_competence.png

If you aren’t willing to look at the world from a different angle, it will change around you before you realize it.

Jordan Peterson argued that the world had moved from a dominance hierarchy – the big and strong win – to a competence hierarchy.

Competence is built over time, so as long as you practice something you will get better as you get older.

But what does practice look like when you don’t need to do parts of the work?

Take music, for example. You could learn to play the guitar, which is hard and tiresome work. You could use a tool like GarageBand, and press buttons to make music. Or you could go all geeky and synthesize it with code generators.

What matters is that you pick a method and use it to create better music.

In a work context, we’ve used flash reports – a one-page summary of progress – for 20 years.

I saw a post recently by someone who used Google’s Nano Banana to create a very pretty, beautifully laid-out flash report. At first glance, it’s really good.

But is it useful?

The point of a flash report is that it communicates important information quickly and easily.

That artifact in itself is a collection of pixels – the user has to make it useful by embedding it in a repeatable workflow.

One option is to use it as a template for a PowerPoint that’s edited manually. It’s a little more of a stretch to create an automated workflow that takes notes, information or instructions and creates a finished report. You need skills to do that, and a human in the loop to check the output.

Things get more complex as you try and make something that looks good actually work in practice.

When you’re faced with rising waters, the sensible thing is not to waste time protesting about the nature and speed and levels and use and abuse of the water.

It’s more useful to build a boat.

What’s your boat for the way in which AI is washing over your industry?

Use Generative Learning To Boost Generative AI

2025-12-05_gen-learning.png

We’re now familiar with gen AI, but what is generative learning?

Generative learning theory suggests that people learn and remember more when they make relationships between what’s new to them and what they already know.

It’s a constructivist theory and says learning is about the work of actively constructing knowledge.

Gen AI tries to shortcut this.

I tried to make a thing yesterday. A tripod mount for an attachment. I drew a picture on paper, designed it in OpenSCAD, created a printable file in Slic3r and printed it on my 3D printer.

It’s a godawful design. I got it wrong twice. It only works because I didn’t realise that the tripod mount was helping the design stay rigid.

Any engineer who’s got shop experience would know a hundred different ways to make something better. But I don’t. I’ve given up on constructing that knowledge of making physical things. I’m at a competence level not far behind someone in high school.

We usually improve with age. Unless we stop trying.

Now imagine doing that with your mind. Stop trying to actively construct knowledge. Stop learning and remembering information. Stop trying to connect what’s new with what you already know.

In business, don’t bother talking to people. There’s no need to understand how the operation actually works. Want a strategy? Pick from a selection of ready-made ones, all plausible and beautifully formatted.

If we stop and think for a minute, assuming we still can, what do we think that’s going to do to our ability to think?

Can we create the businesses and services and politics of the future if we let our ability to gain knowledge stagnate?

I’m not against using tools. They augment us. What we’ve got to do is remember that gaining knowledge requires active work – which is often hard work. You need knowledge so you can use tools better.

My prediction – the people who succeed will be the ones who successfully couple generative learning with generative AI.

Get Computers To Work For You

2025-11-20_working-hard.png

Working hard in a world where you have computers seems like a failure of imagination to me.

I dropped out of my first PhD to join a startup.

While I was doing the PhD, however, I had plenty of time to get coffee with colleagues and talk about research.

And this was for one simple reason – my computer was busy working for me.

I inherited a codebase of around 4,000 lines of C.

I cut it down to 100 lines of Python.

And then I built a pipeline – the computer started with a model, did an initial pass to reduce compute time, and then worked through complex calculations on a computing cluster my colleague built. When the calculations were done, the results were formatted and pulled together.
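
A minimal sketch of that shape of pipeline, with hypothetical stage names standing in for the real model and cluster code:

    # Coarse pass first, heavy compute second, formatting last.
    # Stage names and data are made-up stand-ins.

    from concurrent.futures import ProcessPoolExecutor

    def coarse_pass(cases: list[dict]) -> list[dict]:
        """Cheap first pass: keep only the cases worth computing in full."""
        return [c for c in cases if c["score"] > 0.5]

    def heavy_calculation(case: dict) -> dict:
        """Stand-in for the expensive work that ran on the cluster."""
        return {**case, "result": case["score"] ** 2}

    def pull_together(results: list[dict]) -> str:
        """Format the finished results into one report."""
        return "\n".join(f"{r['name']}: {r['result']:.2f}" for r in results)

    if __name__ == "__main__":
        cases = [{"name": f"case{i}", "score": i / 10} for i in range(10)]
        with ProcessPoolExecutor() as pool:  # local stand-in for the cluster
            results = list(pool.map(heavy_calculation, coarse_pass(cases)))
        print(pull_together(results))

Each stage hands its output to the next, so the machine runs end to end without you.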

Yes, you could work hard at each of those steps and it would take days or weeks – or you could use a machine and get it done in three hours.

And this isn’t new stuff – we’ve had the tools for around 40 years now.

I’ve used the same approach again and again, and we do the same thing in our latest business.

Raw data is entered in spreadsheets. Computers run a series of tasks, and clean, usable outputs pop out the other end.
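
In miniature, and with made-up file and column names, the pattern looks something like this:

    # Spreadsheet in, clean output out. File and column names are hypothetical.

    import pandas as pd

    raw = pd.read_excel("raw_entries.xlsx")  # data typed into a spreadsheet

    clean = (
        raw.rename(columns=str.lower)  # normalise the headers
           .assign(kwh=lambda d: pd.to_numeric(d["kwh"], errors="coerce"))
           .dropna(subset=["site", "kwh"])  # drop rows we can't use
    )

    clean.groupby("site", as_index=False)["kwh"].sum().to_csv(
        "usable_output.csv", index=False  # the clean, usable end product
    )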

Most systems on the market give you more work to do.

Our systems do the work for you.

Innovation Teams In An Age Of AI

2025-11-17_innovation-teams.png

How do you build innovation teams in a world of AI?

Pretty much the same way you built teams before AI.

There are four roles that are crucial, but most firms only get three right.

You need a developer – someone who can make what you need.

You need an SME – someone who knows what to do.

And you need an architect – someone who knows how something should be made.

One person can deliver all three roles if they have the experience.

But what’s usually missing from the conversation is the voice of the user.

Maybe it’s because users introduce real-world complexity and nuance – they bring context.

It’s messy and untidy and hard to solve.

But building for context is what results in success.

Are You Describing Your Value In The Best Way?

2025-11-14_career-strategy.png

It’s a tough time for older job seekers.

We once interviewed an experienced, gray-haired candidate for a sales director role.

It was a no – not because of age but because their responses didn’t match the level of career maturity the role needed.

It got me thinking about how careers evolve, and what employers expect at different stages.

1. Early career: It’s a job

Your first roles are about learning, working hard and doing what you’re asked.

You build capability.

2. Mid-career: It’s about reliability

You’ve shown you deliver.

You’re a safe pair of hands.

The reward for good work is more work – and more importantly, responsibility.

3. Experienced: It’s about knowing what you offer

Now you’re not just doing the work, you’re shaping how it’s done.

You sell ideas upwards.

You say, “Here’s what needs doing, and why.”

4. Senior: It’s about bringing about change

You recognize patterns – using knowledge and experience gained over decades.

You know what’s coming next, what needs to happen and what’s stopping us from getting better.

Your value is helping stakeholders in the organisation align, improve and move forward.

That salesperson we met?

We wanted Level 4 vision – how they’d transform our go-to-market, upskill the team, build strategy.

What we got were Level 1 answers: “I’ll do anything you need me to do.”

I don’t think every rejection is about age.

Sometimes it’s because the way we describe the value we bring hasn’t matured as we have.

Should You Use AI Less Rather Than More?

2025-11-11_ai.png

Should you use AI less rather than more? Extracts from a philosophical and a legal opinion.

Our goal as thinking beings should be to cultivate the faculty of reason – according to Daly (2026) – working on habits to develop excellence in five intellectual virtues.

These are:

  1. Knowledge of one’s field
  2. Intuition based on knowledge
  3. Wisdom in how one’s field relates to life and society
  4. Decision-making skill in how to achieve a desirable end
  5. Practical ability to make something using reasoning

The use of generative AI threatens the development of all these virtues.

The problem is that we experience sustained cognitive declines by outsourcing these habits to generative AI.

We literally get more stupid.

If that wasn’t enough, the case for using Gen AI – that it makes us faster and more effective – is undermined by Yuvaraj’s (2025) verification-value paradox hypothesis.

In a nutshell, this hypothesis argues that the time saved by using Gen AI is offset by the increased time needed to manually verify the outputs from Gen AI.
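
One way to put the claim in symbols (my notation, not Yuvaraj’s):

    \text{net time saved} = T_{\text{manual}} - \left( T_{\text{generate}} + T_{\text{verify}} \right)

where T_manual is the time a careful manual job would take. The paradox bites when T_verify grows large enough to cancel out whatever the generation step saved.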

This is because truth matters. Knowing that a collection of words belongs together statistically is not sufficient justification to use them uncritically.

Verify. Then use.

Our cognitive skills matter. We should be very sceptical when it comes to replacing or diminishing them.

REFERENCES

Daly, T., 2026. A ‘low-tech’ Academic Virtue Ethics in the Age of Generative AI. J Acad Ethics 24, 13. https://doi.org/10.1007/s10805-025-09683-3

Yuvaraj, J., 2025. The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice. https://doi.org/10.2139/ssrn.5621550