AI Isn’t Magic – It’s Just Another Way Of Working

2025-06-13_magic.png

I recently met up with Graeme Forbes, one of the smartest people around, for a chat that ranged from philosophy to Pratchett.

Terry Pratchett’s books offer penetrating insights into how people think and act. I only wish I had come across them earlier.

One of the ways he sees philosophy, Graeme said, is as a fight against magical thinking. Magic, it turns out, is a technical term in philosophy.

There’s magic in technology as well. Arthur C Clarke wrote that “Any sufficiently advanced technology is indistinguishable from magic.”

The thing with magic is that, if it exists, you don’t need to do anything else.

Pratchett’s Discworld is a world of magic – one where there isn’t much technological progress, because when you can do anything with magic, why would you invent anything new? Why strive for progress and improvement?

A more subtle problem is magical thinking. That something new will change everything, as if by magic. And the most recent focus of that is AI.

Of course, AI isn’t magic. It doesn’t solve everything. It has limitations, some temporary that may be addressed as the technology evolves, some more fundamental, such as trust, reliability and maintainability.

We need to figure out how to use it, where it works, and where it doesn’t. How it fits into the workflows that we run.

What Did You Do Yesterday That You Can Do Better Today?

2025-06-12_ai-jobs.png

There’s a lot of concern about what AI will do to jobs.

I think it’s less about losing jobs and more about jobs not being created at all.

I remember going to a conference around 20 years ago.

A speaker was talking about his team of analysts – people with Master’s degrees and PhDs.

This high-level team, he said, worked late into the night after markets closed, crunching the numbers and figuring out what they meant for clients.

It seemed like a big, expensive operation.

And I thought to myself – I do that now with a spreadsheet.

Each improvement opportunity I’ve seen has started that way – we’ve got a team doing something and it takes time and effort.

The team is too busy doing the work to try and do it better.

But once someone figures out how to automate or eliminate an activity, those roles become unnecessary.

Or we set up smaller teams with the tools they need to get the job done as efficiently as possible.

New approaches and tools mean that jobs that you were hiring for yesterday just aren’t required today.

And the extension of that is eventually the tasks get done without supervision at all.

That does sound a bit depressing – is there always an inevitable slide from jobs existing to jobs becoming irrelevant?

But that’s also the nature of a dynamic, innovative economy.

The good news is that there is always a different job to do – other problem situations that need addressing.

The trick, perhaps, is always thinking about how you’re going to replace something done one way yesterday with a better way today.

Take Action No Matter How Frightened You Are

2025-06-11_blockers.png

I seem to be getting extreme views on technology in my feed and it’s easy to be confused about what’s right and wrong.

On one extreme, AI is viewed as exploitative, unethical and uncontrolled.

On the other it’s viewed as a panacea, the universal solution, the answer to our prayers.

The truth is probably somewhere in the middle.

People and cultures adapt slowly. Some niches may move fast and break things but it takes decades for change to ripple through into widespread usage.

Many ideas we use to think about the world still date to the 50s and 60s – the latest thinking is still limited to small pockets of academic practitioners.

We could do with a model to help those of us trying to feel our way through new technologies – trying to figure out whether things like AI are good or bad, helpful or unhelpful, useful or wasteful.

And I wonder whether Part X, the model from Phil Stutz, the psychiatrist who is the focus of Jonah Hill’s documentary Stutz, could be useful.

Stutz defines Part X as the “voice of impossibility” – the voices that dissuade and criticise.

Perhaps AI is coming for jobs. Perhaps it will make categories irrelevant. Perhaps it will destroy markets.

Or it will create new jobs. Create new categories. Create new markets.

The main thing for us, at all stages of our careers, is to figure out how to engage with this space, work out what it means for us, our firms, and economies.

And then take steps to understand how it all works.

As Stutz puts it – take action no matter how frightened you are.

Mode 1 And Mode 2 Operation In A Service Business

2025-06-10_prof-services-modes.png

I’ve been reflecting on the modes in which those of us who provide services operate on a day-to-day basis.

How do you decide what kind of approach is required in a given situation – say when a client is looking for help?

Think about this in terms of two modes – Mode 1 and Mode 2.

In Mode 1, the client has a clear brief. They know what they want and it’s set out in a specification.

In this mode, what we have to do is read the brief and come up with a budget to do the work.

If the budget is in line with what the client has available and is good value compared to their alternatives, then you’re in business.

In Mode 2, the client can see that there are issues they have to deal with, but is less clear on what the problem is or where to start.

Mode 2 starts with discovery, with trying to understand what is going on before moving to design an intervention and then deliver it.

We get muddled up when a client tries to attack a Mode 2 problem with a Mode 1 approach – asking you to get on and do some work, or to create a budget or proposal, before it’s clear what’s actually required.

And Mode 2 applied to a Mode 1 problem is the old sledgehammer-nut issue – just get the work done instead.

One way to waste less time is to think a little bit more up front about which mode is appropriate for the next situation you’re in.

Operators – Involve Managers Early And Often

2025-06-09_power.png

Technology people often find it a shock that the business world operates by a non-programmatic set of rules.

I once picked up a textbook on decision theory and spent a few hours building a set of decision models.

I learned about the difference between decision making under risk, decision making under uncertainty and how to quantify optimism or cynicism.
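The criteria such a textbook covers can be sketched in a few lines of code. This is a generic illustration of classic decision rules under uncertainty – Wald’s pessimistic maximin, the optimistic maximax, and the Hurwicz blend that quantifies optimism – not the actual models I built; the actions and payoff numbers are made up.

```python
# Hypothetical payoff table: rows are actions, columns are states of
# the world whose probabilities are unknown (decision under uncertainty).
payoffs = {
    "expand": [120, 40, -30],
    "hold":   [60, 50, 20],
    "divest": [30, 30, 30],
}

def wald(p):
    """Pessimist's rule: pick the action with the best worst case."""
    return max(p, key=lambda a: min(p[a]))

def maximax(p):
    """Optimist's rule: pick the action with the best best case."""
    return max(p, key=lambda a: max(p[a]))

def hurwicz(p, alpha):
    """Blend best and worst cases; alpha in [0, 1] is the degree of optimism."""
    return max(p, key=lambda a: alpha * max(p[a]) + (1 - alpha) * min(p[a]))

print(wald(payoffs))          # "divest" – a guaranteed 30
print(maximax(payoffs))       # "expand" – could make 120
print(hurwicz(payoffs, 0.5))  # "expand" – half-optimistic blend favours upside
```

Note that with alpha at 0 the Hurwicz rule collapses to Wald’s maximin, and at 1 to maximax – which is what “quantifying optimism” amounts to.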

I still remember the look in a client’s eye when I presented my analysis – and it wasn’t delight and acceptance.

Instead it was a wary look, one that said: I’ve seen this kind of stuff before, I don’t understand it, I’m not sure I trust that it’s been done right, and if I get the call wrong it’s not you who has to explain what happened.

Since the 50s, technologists have modelled managers as people who accept or reject our recommendations – and assumed that, as rational people, they will choose the optimal path.

Our job as operators is to analyse and recommend, their job is to accept our recommendations and give us resources.

Unsurprisingly, this approach fails because our model of what managers do is flawed.

Managerial decision making is based primarily on power and bargaining – it’s about making a case for resources, trade offs, and personal positioning.

It’s rational, but uses different objective functions – the thing that’s being optimised – than operators do.

What this means is that if you want a decision approved, get the decision makers involved in the process as early as possible – preferably co-creating your analysis – rather than presenting a finished package and hoping they agree with you.

Wisdom Is Telling The Difference Between Simple And Complex

2025-06-07_messages.png

How do we create services that clients really want to buy?

I think some of the advice we see needs a little more thought.

We can see the past very clearly, we believe.

When you look back things often arrange themselves in straight lines, a clear route from A to B.

I’ll let you into a secret.

Sometimes people even invent straight lines because they make for better stories of how they, as the hero, started with few resources, encountered adversity and overcame it.

But when it comes to the future, we have much less to go on.

Just because something worked in the past is no reason why it should in the future.

In some cases, success using one method is almost a guarantee that it will not work again because the market and competition respond and adapt.

This is the case with many new marketing strategies, once everyone uses them they lose their value.

A real path is often more complex, one that’s carved rather than trodden.

But that isn’t to say we can’t learn from the path. We shouldn’t endlessly reinvent approaches that have been tested and work.

We should just get those jobs done because we know the methods work. It’s a straightforward task and the job is execution.

Then you have those situations that are new and untested. Where you are exploring new markets, new opportunities, new sources of value.

That’s where you have to carve a path, where your capability and expertise allow you to work through a situation and figure out what to do next.

All of this goes to say that some things are simple and some things are complex, and wisdom is in being able to tell the difference.

How Do We Produce Better Than Average Work With AI?

2025-06-06_storyboard.png

I used a video generation AI tool for the first time yesterday and what struck me most about the output was how average it was.

That seems to make sense.

These tools are powered by statistics so what they produce is informed by what is in the world.

And that leads to something of a conundrum.

No one wants to pay for average.

Do you want an average story, an average proposal, an average strategy?

Generics, by definition, have very low value.

There are some kinds of outputs, like bricklaying, that aren’t affected by these tools – yet – and that can attract a premium.

But the only way to push value up is to dig deeper, go beyond the surface-level responses and find something new and interesting.

Which comes back to people messing around, exploring complex spaces and coming up with something new and interesting.

And it’s interesting because a person did it, not because it’s interesting in itself.

There are some WWF pictures making the rounds of animals in food products – made with AI, as I understand.

The fact that they were made with AI makes their existence less valuable, because they can be replicated easily.

Perhaps.

From my experience, though, you can’t replicate them that easily.

It requires more prompts, more thinking, more effort before you get something, even with AI, that is on the right side of average.

Some people may get very good at prompt generation, and a few may combine their expertise with AI to create exceptional work.

But that doesn’t sound scalable – it doesn’t sound world changing.

Yet.

The exam question here is how do we produce better than average work using AI tools – assuming they’re here to stay.

AI Is Becoming Willful – Can We Tolerate That?

2025-06-05_implementing-ai.png

The problems implementing AI workflows in corporate data pipelines are becoming more apparent the more we try and use these tools in practice.

Their main advantage is speed – they can write a snippet of code, check an existing block, or rewrite one very fast.

And they mostly work, unless you’re working with a newish library they don’t know about yet.

But when it comes to more complex tasks, I’m struggling with reliability.

Anything these tools produce needs to be checked. And if it has to be checked, that needs someone who knows what they’re doing. So your more expensive resources get tied up.

And if you build anything significant there’s the issue of control – do you rely on one model or build for several?

Then there’s the last point, which is tricky to name, but I’ll call it willfulness.

Many years ago, I learned electronics by taking things apart.

I learned that engineers started with expensive components for the first version.

Then they replaced things progressively, plastic wheels for ceramic in VCRs, for example.

The idea was to reduce cost while maintaining performance.

Software works differently – stuff that works well has to be limited in order to monetise it. Performance is tiered.

ChatGPT used to give you a list of 100 companies of a certain type without complaining.

Now it gives you five, refuses to repeat itself and points you to a reference.

It’s obstinate, willful, petulant, even.

I don’t know what the solution is, because clearly companies have to experiment with pricing models to be sustainable, and limiting what you can do until you pay is one way to get the dollars rolling in.

It’s just tricky to stay a customer of something that seems to get worse over time rather than better.

Doing Less With Data Is More

2025-06-04_data.png

Eliyahu M. Goldratt starts his book The Haystack Syndrome by asking what information actually is.

Many of us start with a taken-for-granted assumption that given enough data we can build a system that will transform that data into insight – into something useful.

In practice, this ends up with analysts doing large amounts of work that produces output that no one reads or uses.

All of us can point to examples where we spend hours every month creating client reports that don’t appear to be needed.

But we keep spending the time – because that’s the job, we think.

Goldratt suggests that we should instead start with questions, questions that are informed by purpose – which is a more complicated subject in itself.

For example, when it comes to corporate reporting, managers may want to know what is the minimum requirement, what’s the most efficient way to get compliant?

Others may want to know how they can give other managers a breakdown of key numbers that are going to be used to set targets.

Rather than starting with data, we need to select questions that would be useful to answer and transform them into a data collection programme that will help us answer the questions.

But it’s important that we keep the list of questions small – focus on questions that managers actually ask rather than ones we think they may ask.
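The question-first approach can be sketched very simply. The questions and field names below are hypothetical, purely to illustrate the idea of deriving the data collection programme from a small list of questions rather than the other way round.

```python
# Map each question managers actually ask to the minimal fields needed
# to answer it (hypothetical questions and field names).
QUESTIONS = {
    "Are we compliant with the minimum reporting requirement?":
        {"scope1_emissions", "scope2_emissions"},
    "Which sites drive the key numbers used for target setting?":
        {"scope1_emissions", "site_id"},
}

def data_to_collect(questions):
    """Union of the fields needed to answer the selected questions only."""
    fields = set()
    for needed in questions.values():
        fields |= needed
    return fields

print(sorted(data_to_collect(QUESTIONS)))
# → ['scope1_emissions', 'scope2_emissions', 'site_id']
```

Anything not in that derived set simply doesn’t get collected or processed – which is the point: the questions bound the work, not the available data.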

In a nutshell – if you don’t do something, you can’t do it wrong and it doesn’t cost you anything.

We think that way about energy, about resources.

We should think the same way about data processing – do it only when it’s actually necessary.

Sustainability Reporting – Focus On Process, Not Software

2025-06-03_decision-flow.png

I came across a collection of comments I’d saved about why sustainability managers struggle with implementing ESG reporting software solutions.

There is a need – managers start out frustrated by the scale and tedium of the task and the amount of manual effort they and their teams have to put in.

So they look for solutions. And there is an overwhelming list of solutions. Along with solutions to help you select from the list of solutions.

If you then pick one of these through a procurement process, it appears that the promise and flash of the pitch or demo fails to deliver in practice.

Many tools give you more work rather than less, because now, in addition to collecting all the data in the first place, you need to organise and manage it in a new package.

Frustrated, they head back to the still frustrating but less expensive approach of managing things in Excel.

Many of the conversations I have with sustainability managers have begun at this point – they’ve tried a solution, it hasn’t worked and they’re looking for something that does. It’s been the same story since we started providing services in this area in 2016-17.

And what works is going back to basics, simple, reliable, maintainable processes – informed by sound operations research principles.

Easier said than done, of course.