Operators – Involve Managers Early And Often

2025-06-09_power.png

Technology people are often shocked to find that the business world operates by a non-programmatic set of rules.

I once picked up a textbook on decision theory and spent a few hours building a set of decision models.

I learned about the difference between decision making under risk and decision making under uncertainty, and how to quantify optimism or pessimism.
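Those criteria are simple enough to sketch in code. Here is a minimal, illustrative example (the payoff matrix and option names are invented): the maximin rule for pessimists, maximax for optimists, and the Hurwicz criterion, which blends the two with an optimism coefficient.

```python
# Classic decision criteria under uncertainty - a toy sketch.
# Rows are our options, columns are states of the world we can't predict.
payoffs = {
    "expand":   [120, 40, -30],
    "maintain": [60, 50, 20],
    "contract": [30, 30, 25],
}

def maximin(payoffs):
    """Wald criterion: pick the option whose worst case is best (pessimist)."""
    return max(payoffs, key=lambda option: min(payoffs[option]))

def maximax(payoffs):
    """Pick the option whose best case is best (optimist)."""
    return max(payoffs, key=lambda option: max(payoffs[option]))

def hurwicz(payoffs, alpha):
    """Weight best and worst cases: alpha=1 is pure optimism, alpha=0 pure pessimism."""
    def score(option):
        return alpha * max(payoffs[option]) + (1 - alpha) * min(payoffs[option])
    return max(payoffs, key=score)

print(maximin(payoffs))       # contract
print(maximax(payoffs))       # expand
print(hurwicz(payoffs, 0.7))  # expand
```

The interesting part, as the client's wary look suggested, is not the arithmetic but choosing the alpha – that's a judgment about temperament, not a calculation.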

I still remember the look in a client’s eye when I presented my analysis – and it wasn’t delight and acceptance.

Instead it was a wary look, one that said: I’ve seen this kind of stuff before, I don’t understand it, I’m not sure I trust it’s been done right, and if I get the call wrong it’s not you who has to explain what happened.

Since the 50s, technologists have modelled managers as people who accept or reject our recommendations – and assumed that, as rational people, they will choose the optimal path.

Our job as operators is to analyse and recommend, their job is to accept our recommendations and give us resources.

Unsurprisingly, this approach fails because our model of what managers do is flawed.

Managerial decision making is based primarily on power and bargaining – it’s about making a case for resources, trade-offs, and personal positioning.

It’s rational, but uses different objective functions – the thing that’s being optimised – than operators do.

What this means is that if you want to get a decision approved, get the decision makers involved in the process as early as possible – preferably co-creating your analysis, rather than presenting a finished package and hoping they agree with you.

Wisdom Is Telling The Difference Between Simple And Complex

2025-06-07_messages.png

How do we create services that clients really want to buy?

I think some of the advice we see needs a little more thought.

We can see the past very clearly, we believe.

When you look back things often arrange themselves in straight lines, a clear route from A to B.

I’ll let you into a secret.

Sometimes people even invent straight lines because they make for better stories of how they, as the hero, started with few resources, encountered adversity and overcame it.

But when it comes to the future, we have much less to go on.

Just because something worked in the past doesn’t mean it will work in the future.

In some cases, success using one method is almost a guarantee that it will not work again because the market and competition respond and adapt.

This is the case with many new marketing strategies – once everyone uses them, they lose their value.

A real path is often more complex, one that’s carved rather than trodden.

But that isn’t to say we can’t learn from the path. We shouldn’t endlessly reinvent approaches that have been tested and work.

We should just get those jobs done because we know the methods work. It’s a straightforward task and the job is execution.

Then you have those situations that are new and untested. Where you are exploring new markets, new opportunities, new sources of value.

That’s where you have to carve a path, where your capability and expertise allow you to work through a situation and figure out what to do next.

All of this goes to say that some things are simple and some things are complex, and wisdom is being able to tell the difference.

How Do We Produce Better Than Average Work With AI?

2025-06-06_storyboard.png

I used a video generation AI tool for the first time yesterday and what struck me most about the output was how average it was.

That seems to make sense.

These tools are powered by statistics so what they produce is informed by what is in the world.

And that leads to something of a conundrum.

No one wants to pay for average.

Do you want an average story, an average proposal, an average strategy?

Generics, by definition, have very low value.

There are some kinds of work, like bricklaying, that aren’t affected by these tools – yet – and that can attract a premium.

But the only way to push value up is to dig deeper, to go beyond the surface-level responses and find something new and interesting.

Which comes back to people messing around, exploring complex spaces and coming up with something new and interesting.

And it’s interesting because a person did it, not because it’s interesting in itself.

There are some WWF pictures making the rounds of animals in food products – made with AI, as I understand.

The fact that they were made with AI makes their existence less valuable, because they can be replicated easily.

Perhaps.

From my experience, you can’t – not easily, anyway.

It requires more prompts, more thinking, more effort before you get something, even with AI, that is on the right side of average.

Some people may get very good at prompt generation, and a few may combine their expertise with AI to create exceptional work.

But that doesn’t sound scalable – it doesn’t sound world changing.

Yet.

The exam question here is how do we produce better than average work using AI tools – assuming they’re here to stay.

AI Is Becoming Willful – Can We Tolerate That?

2025-06-05_implementing-ai.png

The problems implementing AI workflows in corporate data pipelines are becoming more apparent the more we try and use these tools in practice.

Their main advantage is speed – they can write a snippet of code, check an existing block, or rewrite one very fast.

And they mostly work, unless you’re using a newish library they don’t know about yet.

But when it comes to more complex tasks I’m struggling with reliability.

Anything you produce needs to be checked. And if it has to be checked, that needs someone who knows what they’re doing. So your more expensive resources get tied down.

And if you build anything significant there’s the issue of control – do you rely on one model or build for several?

Then there’s the last point, which is tricky to name, but I’ll start with willfulness.

Many years ago, I learned electronics by taking things apart.

I learned that engineers started with expensive components for the first version.

Then they replaced parts progressively – swapping ceramic wheels for plastic ones in VCRs, for example.

The idea was to reduce cost while maintaining performance.

Software works differently – stuff that works well has to be limited in order to monetise it. Performance is tiered.

ChatGPT used to give you a list of 100 companies of a certain type without complaining.

Now it gives you five, refuses to repeat itself and points you to a reference.

It’s obstinate, willful, petulant, even.

I don’t know what the solution is, because clearly companies have to experiment with pricing models to be sustainable, and limiting what you can do until you pay is one way to get the dollars rolling in.

It’s just tricky to stay a customer of something that seems to get worse over time rather than better.

Doing Less With Data Is More

2025-06-04_data.png

In his book The Haystack Syndrome, Eliyahu M. Goldratt starts by asking what information is.

Many of us start with a taken-for-granted assumption that given enough data we can build a system that will transform that data into insight – into something useful.

In practice, this ends up with analysts doing large amounts of work that produces output that no one reads or uses.

All of us can point to examples where we spend hours every month creating client reports that don’t appear to be needed.

But we keep spending the time – because that’s the job, we think.

Goldratt suggests that we should instead start with questions, questions that are informed by purpose – which is a more complicated subject in itself.

For example, when it comes to corporate reporting, managers may want to know the minimum requirement – what’s the most efficient way to get compliant?

Others may want to know how they can give other managers a breakdown of key numbers that are going to be used to set targets.

Rather than starting with data, we need to select questions that would be useful to answer and transform them into a data collection programme that will help us answer the questions.

But it’s important that we keep the list of questions small – focus on questions that managers actually ask rather than ones we think they may ask.

In a nutshell – if you don’t do something, you can’t do it wrong and it doesn’t cost you anything.

We think that way about energy, about resources.

We should think the same way about data processing – do it only when it’s actually necessary.

Sustainability Reporting – Focus On Process, Not Software

2025-06-03_decision-flow.png

I came across a collection of comments I’d saved about why sustainability managers struggle with implementing ESG reporting software solutions.

There is a need – managers start by being frustrated by the enormity and tedious nature of the task and the amount of manual effort they and their teams have to put in.

So they look for solutions. And there is an overwhelming list of solutions. Along with solutions to help you select from the list of solutions.

If you then pick one through a procurement process, it appears that the promise and flash of the pitch or demo fails to deliver in practice.

Many tools give you more work rather than less, because now, in addition to collecting all the data in the first place, you need to organise and manage it in a new package.

Frustrated, they head back to the still frustrating but less expensive approach of managing things in Excel.

Many of the conversations I have with sustainability managers have begun at this point – they’ve tried a solution, it hasn’t worked and they’re looking for something that does. It’s been the same story since we started providing services in this area in 2016-17.

And what works is going back to basics, simple, reliable, maintainable processes – informed by sound operations research principles.

Easier said than done, of course.

Seeing Operations Research Where It Was Invented

2025-06-01_think-happens.png

Although my PhD study is in the field of Operations Research, it took a trip to Normandy to see the impact and history of this area first hand.

The Allies carried out a number of operations – Operation Overlord, for example, was the invasion of Normandy.

The picture that comes to mind of an operation might be one of individual effort and contribution.

You might do one yourself with enough persistence and zeal – if you set a goal and put in the work then you can push any rock up any hill.

But, if you read the histories of these operations it’s eye opening just how easily it could have gone the other way.

If the weather hadn’t eased enough on the 6th of June, if the mission had been delayed by a couple of weeks, if key figures on the other side hadn’t been asleep or travelling, things could have turned out very differently.

Events and personalities have an effect on operations that’s a little like a massive object on light – it bends its path.

The rock you’re trying to push is actually rolling this way and that, based on the distribution of power in the system and effects that you have no control over.

Operations Research has the tools to work out what and who needs to be where and when for maximum impact.
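As a toy illustration of that kind of tool, consider the classic assignment problem – matching units to locations at minimum total cost. The cost matrix below is invented, and the brute-force search is only for illustration; real OR solvers use the Hungarian algorithm or linear programming and scale far better.

```python
from itertools import permutations

# cost[i][j] is the (invented) cost of sending unit i to location j.
cost = [
    [4, 2, 8],
    [4, 3, 7],
    [3, 1, 6],
]

def best_assignment(cost):
    """Return (min_total_cost, assignment), where assignment[i] is the
    location given to unit i. Brute force: checks every permutation,
    so only sensible for small problems."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

total, plan = best_assignment(cost)
print(total, plan)  # 12 (0, 2, 1)
```

The maths is the easy half; the hard half, as the Normandy histories show, is everything the model leaves out.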

But that’s only part of it.

It also has the tools to work with the aspects of power and culture that make real-world operations so messy.

Modern militaries have learned from these experiences and many have processes that are designed to prevent decisions being made purely because someone important says so.

This is something we should bring into business processes as well.

Embrace Real World Complexity – That’s Where Value Hides

2025-05-23_complexity.png

I read something yesterday that got me thinking – it said that a particular approach was “a simple framework to solve complex problems”.

I’m not sure that’s the right way to look at it.

The real world is messy. Everyone knows this.

If you have a complex problem-situation – one that has people with different, perhaps incompatible views, and options with different costs and benefits – surely it doesn’t make sense that some simple framework can address such complexity.

Wouldn’t you need an approach that’s capable of at least as much complexity as the situation, just to match up the situation and the options for resolution?

It’s like consultants who come in with a four-box framework and think they can apply it to any client.

And, if you’ve been on the receiving end, you know that this usually ends poorly.

Simple approaches are fine for simple problems.

That’s why bait-and-switch approaches promise an easy button – here’s a simple solution, buy this and things will get better.

They probably won’t.

Some people go the other way.

They use complex approaches to address simple problems.

That’s just a waste of resources.

The pragmatic way is to accept that the real world is messy and complex and engage with it – to learn, understand and figure out an appropriate approach.

That’s where value hides.

Understanding The Difference Between Variation And Variety

2025-05-22_variation-variety.png

If I had to pick one thing that’s changed the way I work it would be understanding the difference between variation and variety.

I used to believe that the way to do better work was standardisation.

You improved quality and productivity by using tools like 5S – sort, set in order, shine, standardise, sustain.

I’d bet you’ve listened to someone suggest that the answer to a problem was standardisation – we need a standard for this.

And that’s because standardisation works, in a particular context – that of factory work.

If you’re in the business of making cars then you want to have standards – every piece of glass for a particular model of car has to be the same – as close as possible.

You’re trying to make lots of copies of a particular type of thing – you want to remove any variation in the product.

It takes effort to reduce variation. Try drawing nine squares that are exactly the same and you’ll quickly find out how much.

But most of us don’t work in factories. A lot of us are engaged in information work.

And with information work, no two situations are exactly the same.

Trying to use a standardised approach doesn’t work. One approach may work with one client, but the minute you try and apply the same approach with the next one, new and interesting ways to derail your plan come into existence.

But it’s more complicated than that.

As Robert Pirsig said, no two people are in the same situation and have the same problems.

But, in contradiction, in some ways everyone is in the same situation and has identical problems.

What makes the difference is that situations contain variety.

Learning how to deal with variety is the first step to building solutions that work for more than one client.

Innovation Gets Harder As You Get Bigger

2025-05-22_startup.png

It’s much easier operating in a startup in the early days.

The team is working in a messy, complicated space where there are no right answers and you have the freedom to explore the problem-situation and create solutions that wrap around a customer.

As you get bigger, this gets harder to do.

In large corporate organisations you come under pressure from people in roles that are more about risk reduction than value creation.

You’ve created the value in the earlier stages, now the challenge is to keep that value.

The pressure often ends up squeezing some people into a box – perhaps squeezing others out altogether.

It’s probably inevitable that as a company gets bigger it focuses more on internal power dynamics than customers.

More companies go under, I remember reading, because of internal problems than because of competitors or customer behaviour.