How Do We Produce Better Than Average Work With AI?

2025-06-06_storyboard.png

I used a video generation AI tool for the first time yesterday and what struck me most about the output was how average it was.

That seems to make sense.

These tools are powered by statistics so what they produce is informed by what is in the world.

And that leads to something of a conundrum.

No one wants to pay for average.

Do you want an average story, an average proposal, an average strategy?

Generics, by definition, have very low value.

There are some kinds of outputs, like bricklaying, that aren’t affected by these tools – yet – and that can attract a premium.

But the only way to push value up is to dig deeper, go beyond the surface-level responses and find something new and interesting.

Which comes back to people messing around, exploring complex spaces and coming up with something new and interesting.

And it’s interesting because a person did it, not because it’s interesting in itself.

There are some WWF pictures making the rounds of animals in food products – made with AI, as I understand.

The fact that they were made with AI makes their existence less valuable, because they can be replicated easily.

Perhaps.

From my experience, you can’t replicate them that easily.

It requires more prompts, more thinking, more effort before you get something, even with AI, that is on the right side of average.

Some people may get very good at prompt generation, and a few may combine their expertise with AI to create exceptional work.

But that doesn’t sound scalable – it doesn’t sound world changing.

Yet.

The exam question here is how we produce better-than-average work using AI tools – assuming they’re here to stay.

AI Is Becoming Willful – Can We Tolerate That?

2025-06-05_implementing-ai.png

The problems implementing AI workflows in corporate data pipelines are becoming more apparent the more we try and use these tools in practice.

Their main advantage is speed – they can write a snippet of code, check an existing block, or rewrite one very fast.

And they mostly work, unless you’re working with a newish library they don’t know about yet.

But when it comes to more complex tasks, I’m struggling with reliability.

Anything you produce needs to be checked. And if it has to be checked, that needs someone who knows what they’re doing. So your more expensive resources get tied down.

And if you build anything significant there’s the issue of control – do you rely on one model or build for several?

Then there’s the last point, which is tricky to name, but I’ll start with willfulness.

Many years ago, I learned electronics by taking things apart.

I learned that engineers started with expensive components for the first version.

Then they replaced things progressively, plastic wheels for ceramic in VCRs, for example.

The idea was to reduce cost while maintaining performance.

Software works differently – stuff that works well has to be limited in order to monetise it. Performance is tiered.

ChatGPT used to give you a list of 100 companies of a certain type without complaining.

Now it gives you five, refuses to repeat itself and points you to a reference.

It’s obstinate, willful, petulant, even.

I don’t know what the solution is, because clearly companies have to experiment with pricing models to be sustainable, and limiting what you can do until you pay is one way to get the dollars rolling in.

It’s just tricky to stay a customer of something that seems to get worse over time rather than better.

Doing Less With Data Is More

2025-06-04_data.png

Eliyahu M. Goldratt starts his book, The Haystack Syndrome, by asking what information actually is.

Many of us start with a taken-for-granted assumption that given enough data we can build a system that will transform that data into insight – into something useful.

In practice, this ends up with analysts doing large amounts of work that produces output that no one reads or uses.

All of us can point to examples where we spend hours every month creating client reports that don’t appear to be needed.

But we keep spending the time – because that’s the job, we think.

Goldratt suggests that we should instead start with questions, questions that are informed by purpose – which is a more complicated subject in itself.

For example, when it comes to corporate reporting, managers may want to know the minimum requirement – what’s the most efficient way to become compliant?

Others may want to know how they can give other managers a breakdown of key numbers that are going to be used to set targets.

Rather than starting with data, we need to select questions that would be useful to answer and transform them into a data collection programme that will help us answer the questions.

But it’s important that we keep the list of questions small – focus on questions that managers actually ask rather than ones we think they may ask.

In a nutshell – if you don’t do something, you can’t do it wrong and it doesn’t cost you anything.

We think that way about energy, about resources.

We should think the same way about data processing – do it only when it’s actually necessary.

Sustainability Reporting – Focus On Process, Not Software

2025-06-03_decision-flow.png

I came across a collection of comments I’d saved about why sustainability managers struggle to implement ESG reporting software solutions.

There is a need – managers start by being frustrated by the sheer scale and tedium of the task and the amount of manual effort they and their teams have to put in.

So they look for solutions. And there is an overwhelming list of solutions. Along with solutions to help you select from the list of solutions.

If you then pick one of these through a procurement process, it often turns out that the promise and flash of the pitch or demo fails to deliver in practice.

Many tools give you more work rather than less, because in addition to collecting all the data in the first place, you now need to organise and manage it in a new package.

Frustrated, they head back to the still frustrating but less expensive approach of managing things in Excel.

Many of the conversations I have with sustainability managers have begun at this point – they’ve tried a solution, it hasn’t worked and they’re looking for something that does. It’s been the same story since we started providing services in this area in 2016-17.

And what works is going back to basics, simple, reliable, maintainable processes – informed by sound operations research principles.

Easier said than done, of course.

#excel #sustainabilityreporting #operationsresearch

Seeing Operations Research Where It Was Invented

2025-06-01_think-happens.png

Although my PhD is in the field of Operations Research, it took a trip to Normandy to see the impact and history of this area first-hand.

The Allies carried out a number of operations – Operation Overlord, for example, was the invasion of Normandy.

The picture that comes to mind of an operation might be one of individual effort and contribution.

You might do one yourself with enough persistence and zeal – if you set a goal and put in the work then you can push any rock up any hill.

But, if you read the histories of these operations, it’s eye-opening just how easily it could have gone the other way.

If the weather hadn’t eased enough on the 6th of June, if the mission had been delayed by a couple of weeks, if key figures on the other side hadn’t been asleep or travelling, things could have turned out very differently.

Events and personalities have an effect on operations that’s a little like a massive object on light – it bends its path.

The rock you’re trying to push is actually rolling this way and that, based on the distribution of power in the system and effects that you have no control over.

Operations Research has the tools to work out what and who needs to be where and when for maximum impact.

But that’s only part of it.

It also has the tools to work with the aspects of power and culture that make real-world operations so messy.

Modern militaries have learned from these experiences and many have processes that are designed to prevent decisions being made purely because someone important says so.

This is something we should bring into business processes as well.

Embrace Real World Complexity – That’s Where Value Hides

2025-05-23_complexity.png

I read something yesterday that got me thinking – it said that a particular approach was “a simple framework to solve complex problems”.

I’m not sure that’s the right way to look at it.

The real world is messy. Everyone knows this.

If you have a complex problem-situation – one that has people with different, perhaps incompatible views, and options with different costs and benefits – surely it doesn’t make sense that some simple framework can address such complexity.

Wouldn’t you need an approach that’s capable of at least as much complexity as the situation? Just to match up the situation and options for resolution.

It’s like consultants who come in with a four-box framework and think they can apply it to any client.

And, if you’ve been on the receiving end, you know that this usually ends poorly.

Simple approaches are fine for simple problems.

That’s why bait-and-switch approaches promise an easy button – here’s a simple solution, buy this and things will get better.

They probably won’t.

Some people go the other way.

They use complex approaches to address simple problems.

That’s just a waste of resources.

The pragmatic way is to accept that the real world is messy and complex and engage with it – to learn, understand and figure out an appropriate approach.

That’s where value hides.

Understanding The Difference Between Variation And Variety

2025-05-22_variation-variety.png

If I had to pick one thing that’s changed the way I work it would be understanding the difference between variation and variety.

I used to believe that the way to do better work was standardisation.

You improved quality and productivity by using tools like 5S – sort, set in order, shine, standardise, sustain.

I’d bet you’ve listened to someone suggest that the answer to a problem was standardisation – we need a standard for this.

And that’s because standardisation works, in a particular context – that of factory work.

If you’re in the business of making cars then you want to have standards – every piece of glass for a particular model of car has to be the same – as close as possible.

You’re trying to make lots of copies of a particular type of thing – you want to remove any variation in the product.

It takes effort to reduce variation. Try drawing nine squares that are exactly the same and you’ll quickly find out how much.

But most of us don’t work in factories. A lot of us are engaged in information work.

And with information work, no two situations are exactly the same.

Trying to use a standardised approach doesn’t work. One approach may work with one client, but the minute you try and apply the same approach with the next one, new and interesting ways to derail your plan come into existence.

But it’s more complicated than that.

As Robert Pirsig said, no two people are in the same situation and have the same problems.

But, in contradiction, in some ways everyone is in the same situation and has identical problems.

What makes the difference is that situations contain variety.

Learning how to deal with variety is the first step to building solutions that work for more than one client.

Innovation Gets Harder As You Get Bigger

2025-05-22_startup.png

It’s much easier operating in a startup in the early days.

The team is working in a messy, complicated space where there are no right answers and you have the freedom to explore the problem-situation and create solutions that wrap around a customer.

As you get bigger, this gets harder to do.

In large corporate organisations you come under pressure from people in roles that are more about risk reduction than value creation.

You’ve created the value in the earlier stages, now the challenge is to keep that value.

The pressure often ends up squeezing some people into a box – perhaps squeezing others out altogether.

It’s probably inevitable that as a company gets bigger it focuses more on internal power dynamics than customers.

More companies go under, I remember reading, because of internal problems than because of competitors or customer behaviour.

The Best Technology Is Unnoticed In Day To Day Work For Managers

2025-05-21_managers.png

Technologists think they are far more important to a manager than they really are.

A typical manager has to operate within an organisational hierarchy.

The overt hierarchy is in the org chart. The real hierarchy is in the power relationships between the people who work in the organisation, and managers spend a lot of time understanding and navigating these implicit currents of power.

They have to plan courses of action and get approvals – which requires being tuned into the politics and culture and how things work around here.

They have to juggle resources, manage teams and research options.

When it comes to systems, then, they’re usually not interested in learning everything about every feature and having to deal with technology folk.

They just want it to work.

Good technology is like plumbing.

You should never have to worry about it.

Systems and processes should just chug along reliably and regularly in the background – letting managers get on with their real work.

Dealing with people.

Can Managers Trust AI To Do Work Unsupervised?

2025-05-20_proof.png

Managers won’t be able to delegate to generative AI until they can rely on what it produces.

We need proof that it works.

I’m not seeing that yet.

I’m not anti-technology – as an engineer I’m trying these new tools out and running several experiments.

But as an engineer, I also want solutions that work, that are reliable, and that can be left alone to do what they’re supposed to do.

Software that comes with a warning that its outputs may be wrong and need checking is not particularly helpful.

The only time you’ll use that output is when the output doesn’t matter – such as proposal filler or a quick email response.

Or if there is a human in the loop with ultimate responsibility for agreeing with the output – such as checking the results of a medical image diagnosis.

But the messy middle may stay messy.

When something is important and needs to be done right – what are you going to do?

You don’t really want to commit a career limiting or career ending blunder.

Perhaps the approach many managers will take is to outsource tasks to consultancies that use specialists who leverage AI, rather than bringing AI in house as a replacement for recruitment or capability building.

After all, it’s always easier to fire a consultant if things go wrong.