What Kind Of Operation Are You Building?

2025-07-05_post-modern.png

There’s a model that I first came across ten years ago – that helped me make sense of the ten years before that.

And I now think it’s a helpful tool for anyone looking to position themselves for the next ten years by figuring out what kind of operation they’re running.

This is what I learned.

Once upon a time, there were butchers, bakers and candlestick makers.

In this pre-modern world you didn’t need to know about flour to operate a forge, or wax to make a loaf.

Individual professionals did their own thing with simple machines – hands, heat, hammers – and coexisted.

Then we had the industrial revolution and a step change in the way we made things.

Capital was deployed into factories and the modern world was born.

It was a world of strong machines, with workers that served the machines. The workers were ordered, structured, placed. They were interchangeable, replaceable, pieces on a board to be positioned and played by management.

Our modern hierarchy, command and control style operating structure comes from this world.

And then, sometime in the last century, the post-modern world came into existence.

This was a world of smart machines. An information age. Of networks and connections. Where the links between things mattered as much as the things themselves.

And there are obvious differences.

If I have a hammer and I give it to you, now you have a hammer and I have nothing.

If I know how to do something and I teach it to you, we both know this thing. I lose nothing.

So, what kind of business do you operate? Are you a lone genius doing your own thing? Do you have a job in a corporation? Or are you part of a network?

Knowing this gets even more important in this age of AI.

Now, the smart machines are everywhere. Anyone can have them.

Many people still think that they have to use modern methods to build organisations – using techniques to control and motivate people that are at least a hundred years old.

The observant ones will notice that it’s now about teams – small groups of people that want to work with each other and use smart machines to supercharge what they do.

What kind of operation makes this possible? Who’s doing this already? What does great look like?

That’s the change that’s coming. Ready or not.

Create The Conditions To Allow Yourself To Be Surprised

2025-07-03_charity-shop.png

I get concerned sometimes that the information I am exposed to is so highly curated that I learn nothing new at all.

We need to create conditions that allow us to be surprised.

There are three approaches that work for me.

The first is to frequent charity shops.

Books in a charity shop offer a glimpse of what others, who may be very different from you, find interesting.

For example, I came across Austin Kleon’s “Steal Like an Artist” in a charity shop, which then led me to Lynda Barry and Ivan Brunetti’s work on cartooning.

The second approach is to read the paper.

It’s much easier to go with free media, but if you have library access and can get hold of titles like The Economist, you get some really interesting perspectives.

The mix of stories in a newspaper is put together without knowing you – so there’s a good chance there’s something in there that will be different and interesting.

The third approach is to get recommendations.

In a world where AI can help people pump out material designed for virality rather than substance, personal recommendations become ever more important.

If you rely on just social media, then the algorithm seems to feed you what it thinks you would like – and the content seems to converge pretty quickly.

There are good posts but they can disappear as you scroll along, so I make a point of saving the good ones – so I can share them later.

And perhaps surprise someone else.

Sometimes You Don’t Need To Say Much To Get The Point

2025-07-02_efficiency-logic.png

If you hang around academics for a bit you notice that they use a kind of shorthand.

Take the way we normally talk about AI in businesses.

We look at the way in which we’re going to use it to speed up coding. How we can create documents more quickly. How we can summarise information. How we can make games more accessible quickly.

And we go on about the implications. At length.

And then you listen to a good researcher who says five words.

“The epistemology is efficiency logic”

I always have to look up the meaning of epistemology. Wikipedia is helpful for this, but the article is a rabbit hole and one should probably stay away from it.

In a nutshell, however, it’s about the theory of knowledge. How we know what we know.

And in this case – when it comes to AI – we’re trying to think about what we think it can do.

And that’s to make us more efficient. More productive. Able to do more with the same resources.

Hence, efficiency logic.

And that’s really all you have to say about that. You can now move on to the next point, if there is one.

The great thing about a well written paper is that each sentence is worth reading. Each one adds knowledge rather than regurgitating what has already been said.

You know how some books are really one idea spread over 300 pages?

A good paper has a hundred good ideas – expressed clearly and efficiently.

It’s not something that can be summarised. It’s already as compact as it needs to be. Any less, and you lose something.

This post is not a good example of that. It’s exploratory, ruminative and far from distilled.

If you had to summarise what I’m trying to say in a phrase, it might be Strunk and White’s timeless advice.

“Omit needless words”

6 Levels Of Moral Development For Organisations

2025-06-28_6-levels.png

Rafe Esquith writes about Lawrence Kohlberg’s Six Levels of Moral Development in his book “Teach Like Your Hair’s on Fire” as a model for young people.

I think it could also help companies trying to figure out their culture and strategy.

Let’s take becoming more sustainable as an example – how would companies look at these different levels?

1. Stay out of trouble

Most organisations start here. They ask what the minimum compliance levels are and do what is needed to stay out of trouble.

2. Do things for rewards

Some organisations will get involved in programmes and schemes that offer a reward, such as grant funding for early adoption.

This can help unlock some projects that might otherwise not meet investment criteria.

3. Please someone else

We probably see companies start projects when customers ask them to make progress against the customer’s own objectives.

Questionnaires, rankings, and customer promotions may help make the case that an organisation should do more in a certain area.

4. Follow the rules

This might seem similar to 1, but the difference is that in this case it’s more like making a set of rules to follow rather than complying with someone else’s rules.

For example, you might set out rules on how to book travel – avoid meetings if possible, choose low-carbon options, and so on.

A collection of such rules then guides behaviour.

5. Be considerate

This level of operation is one that is empathetic – that considers others.

Perhaps the easiest way to see this in action is with the construction industry.

Is that development next door making noise at all times of day and night, or are they considering the impact on the people around them?

In fact, there is a Considerate Constructors Scheme that addresses exactly this.

6. Have a personal code

This is the most difficult one to reach. It’s about having a code about what is the right thing to do and doing it regardless of what’s happening elsewhere.

You see this in action quite a lot. Companies sign up for a programme because it is good marketing, and then pull out when its targets prove too hard to reach, or when the political environment changes and certain views fall out of favour.

Do you carry on with those views, because you think they are right, or do you bend to power?

Esquith thinks that the 6 levels are an easy-to-understand set of building blocks that can help young people grow as students and people.

Perhaps they could help companies do the same.

A Roundup Of EURO 2025

2025-06-27_or-conference.png

I’ve returned from the EURO 25 conference with 30 pages of notes and a new appreciation for my own bed.

Like many practitioners, I never realised that what I had been doing for over a decade was operational research – OR.

Many businesses get stuck trying to figure out what to do as things change around them.

Take sustainability, for example. What happens now? Should you invest in sustainable choices? Will everything just go back to fossil fuels? How do you make decisions in such complex environments?

Those are the kinds of questions OR helps with – it helps decision makers make better decisions.

And there are four things – at least – that I took away from the conference.

  1. Meetings are where things are decided

Many people hate meetings. They love the idea of sending their virtual note taker instead and just reading the summary.

That would be a mistake.

Meetings are where ideas are exchanged, consensus is formed, decisions are made, and resources are allocated.

You need to be in the room.

The Soft OR and problem structuring stream is about ways to hold better meetings.

Some great talks in this stream from Mike Yearworth, Chris Smith, Leroy White, and the UCL team with Ke (Koko) Zhou, Nici Zimmerman, Irene Pluchinotta and others.

  2. Models capture complexity

Just talking is rarely enough.

Models give people the power to hold more complex ideas in their heads and build more complete pictures of situations and resource flows.

Systems thinking is having a moment, we were told.

And as many equate ST with System Dynamics, David Lane’s talks were a must.

  3. Let’s get philosophical

Systems approaches have their roots, the founders tell us, in different philosophies.

It gets complicated very quickly.

Which is why it was helpful to have a session on the philosophical foundations of Systems Thinking by Graeme Forbes, along with an alternative history by Roger James.

  4. Reflecting on the field

As OR practitioners, we want to make a difference and improve how organisations work.

Mike Jackson talked about the way this has been done in the past, drawing on Russ Ackoff’s vision of OR.

And, to learn more about how to study interventions in-depth, I had my first introduction to Behavioural OR with a workshop run by Martin Kunc and Alberto Franco.

I’ve left out many more great speakers from this list, and there were even more sessions I couldn’t attend – given that there were 2,000-odd talks.

But there’s lots to think about.

I think the most important thing that came out was in David Lane’s session – with Blackett’s advice to OR practitioners trying to get things done:

  1. Use what you have
  2. Get access to senior people

At the EURO 2025 Conference Next Week

2025-06-21_euro-2025.png

Looking forward to listening to a range of speakers at EURO, the 34th European Conference on Operational Research next week.

I’ll be spending time mostly in the Soft OR and Problem Structuring streams.

Two items for your diary if you’re attending.

In the first session on Monday, Chris Smith will be talking about the Development of The Rich Notes Technique Through an Action Research Programme – work we’ve been collaborating on with Giles Hindle.

In the afternoon, I will be talking about the History and Foundations of SSM.

Then it’s a job of looking through the list and selecting from the many options – the Systems Thinking stream looks interesting.

Graeme Forbes is on Tuesday at 14:30, talking about the Philosophical Foundations of Systems Thinking in room Parkinson B11, and Christina Phillips is in room Maurice Keyworth 1.32 on Wednesday at 10:30 on Design Thinking for Impact in OR.

See you all there.

The Leaner Your System, The More Flexible You Can Be

2025-06-20_pain-free.png

Every once in a while you come across a term that captures what you want to convey – and one that caught my eye recently was “pain-free progress”.

It was used in the context of physical pain, but I think we should be able to use it when talking about software as well.

There’s a big disconnect in the way programmers see the world and the way managers see the world.

Programmers build what you want – give them a specification of what’s required and they’ll make something that does that.

The problem is that if you ask people what they want, you might end up with a long list of requirements.

But is it what they need?

This approach often results in programs that do what they’ve been asked to do but struggle if they’re presented with something outside what they’re designed for.

Managers, on the other hand, are usually just tired.

What they need is for the thing to work and make sense.

I’ve found that when you’re building or selecting software tools to help manage an aspect of operations, the first version you create is the one that helps you learn what is really needed.

You’ve got two options from there.

One is to build on what you’ve made – add more functionality and features.

The other is to strip back – what is useful and what isn’t? How can you reduce the number of things that are going on so that what’s needed is done more simply and reliably?

The second approach, I think, is closer to my idea of pain-free progress.

Ironically, the leaner your system, the less it does, the more flexible and reliable it usually turns out to be.
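To make the contrast concrete, here’s a minimal sketch of what the stripped-back option might look like. The task (counting orders by status) and the names are hypothetical, not from any real system – the point is that the lean version answers the one question people actually ask, rather than carrying every feature the first version accumulated.

```python
# A hypothetical, stripped-back operations tool: imagine the first version
# also filtered by date range, exported to three formats and emailed the
# result; this lean version just answers the question people ask most.
from collections import Counter
from typing import Iterable


def count_orders_by_status(orders: Iterable[dict]) -> dict:
    """Return a count of orders per status, e.g. {'open': 2, 'shipped': 1}."""
    return dict(Counter(order["status"] for order in orders))


if __name__ == "__main__":
    sample = [
        {"id": 1, "status": "open"},
        {"id": 2, "status": "shipped"},
        {"id": 3, "status": "open"},
    ]
    print(count_orders_by_status(sample))  # {'open': 2, 'shipped': 1}
```

Because it does one thing, it is easy to test, easy to explain, and easy to bolt onto whatever comes next.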

Deal With These Key Corporate Departments Before They Turn Into Roadblocks

2025-06-19_implementing-ai.png

I see at least three sets of stakeholders that we must involve from the start of an AI project if it’s going to work in a large company.

IT evaluates and decides which tools are ok to use and which ones aren’t.

Microsoft has a decided advantage when it comes to large firms because its products are already on every machine and its cloud services are the standard toolkit for the majority of users.

Copilot does a lot, and if your organisation pays for it, you can get access to advanced features.

So that’s usually the easiest place to start for a solution that meets IT requirements.

But there are lots of other tools that do things better than Copilot – ChatGPT, Gemini and Claude all have their own strengths – and wouldn’t it be nice to be able to use them as well?

That’s where Legal comes in. What you’re allowed to use will be determined by what levels of protection are in place. All these tools have enterprise licences, but it’s still all new.

It’s worth having in-house lawyers involved, or getting in external experts who can provide an opinion on what’s ok and what’s not.

Then there is the question of data.

I think that the code and computational aspects of this whole space are a commodity – you can get what you need from one provider or another, and you can spin up your own infrastructure and private LLM if you want complete control.

The thing that makes a difference is data.

Although ChatGPT and others have made it simple to connect your data sources, such as SharePoint, directly to their engines, I’d be surprised if many large organisations are doing this without top-down commitment from management.

Competitive advantage in this space comes from access and control over curated and specific data that is hard to get in the open market.

It’s probably a good idea to proactively deal with these areas at the same time as you’re exploring and building solutions.

How Productive Will AI Make Us?

2025-06-18_ip.png

The question of who owns the IP produced by generative AI will have a huge impact on productivity.

The rules have been fairly straightforward for a while – don’t copy someone else’s work.

But how does this work when an LLM generates work for you that it has remixed from other people’s work?

It looks like the platforms pass that concern over to you.

You start the process with a prompt – which you have written and therefore own.

The output is yours, depending on the licence terms of the AI tool you’re using. Some seem to want to hold on to the IP, so that’s not entirely straightforward.

But the output can also be a copy of someone else’s copyrighted content.

This is most obvious when you create a character that looks exactly like a commercial one, but it also applies to code, which might be harder to spot.

There is a danger zone where you prompt an AI and get output that you then use in a commercial product without checking if it infringes anything.

This leads to a few scenarios.

First, you need humans to check the work, so your productivity is limited by how fast people can process and check what’s going on. You get a boost, but it’s small.

Second, the checking process gets automated and tools can give you output that has been checked against everything else and guaranteed to be original.

Third, existing protections are swept away and you can do what you like.

Fourth, the whole thing is like smoking. It feels good, you do it for a while, then find out that it rots your thinking and we start to move away from it as a society.

Fifth, the hype fades and we move on to the next thing.

Sixth, none of the negatives happen, the industry sorts out the issues, and we become incredibly productive.

There are probably more scenarios you can think of.

Questions About AI Use In Companies

2025-06-17_soft-or.png

I have many questions as we figure out how to use generative AI.

Peter Checkland has a model that (simplified) says that things happen – life is a flux of ideas and events.

We engage with this flux using standards. We learn from our experiences. And we modify our standards. So what are the standards we need to think about now?

This is a non-exhaustive list for working with AI – again, more questions and observations than answers.

Monolithic or hierarchical?

Is it better to work on one big project – trying to create it end to end in one shot – or to build and assemble components? The larger something gets, the more complex it is. It looks like Deep Research takes the component approach and builds up over time.
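As a rough illustration of the component route, here’s a minimal sketch: split the job into parts, draft each part separately, then assemble the result. The draft_section function is a hypothetical stand-in for a model call – it doesn’t reflect any particular tool’s API.

```python
# A sketch of the component approach: break a big piece of work into
# parts, produce each part independently, then assemble them.
# draft_section() is a hypothetical placeholder for an LLM call.

def draft_section(topic: str) -> str:
    """Pretend to draft one component of a larger document."""
    return f"[draft text about {topic}]"


def build_report(topics: list) -> str:
    """Assemble the report from independently produced components."""
    return "\n\n".join(draft_section(topic) for topic in topics)


if __name__ == "__main__":
    print(build_report(["market size", "competitors", "risks"]))
```

The one-shot alternative would be a single, much larger request – simpler to set up, but harder to check and revise piece by piece.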

Expert Mode or Collaborative?

Should we work on projects together or have an expert go away and build tools to a specification? I’m seeing the return of interest in Extreme Programming (XP), which has pair programming as a collaborative building approach. Will we see that come back?

Exploration vs Deployment.

When you can build anything, what do you focus on? How do you move from having a mockup to something that can be used in production? Is it an extension of what you’ve made, or is it a rebuild?

The legal aspect.

This seems a thorny issue. Who owns the output of LLMs?

Let’s say you use an LLM to write code for your app. The chances are that your competition is doing the same. So, if you have the same code in both your applications, how does that work?

As far as I know you can’t copyright the output from a machine, so is all code now fair play? What if you mix this generated code with your own IP?

Cognitive accessibility

LLMs produce information far quicker than we can process it. There is a limit on what we can take in, so how are we going to reduce the flow of information so that we can actually make sense of it? I saw a post recently joking that a reviewer will pick apart 10 lines of code but wave through 500. What happens if your 200-page strategy deck goes unread and gets accepted because it takes too long to read?

Standards

We are going into this space with standards – IP protections, data management, work processes – that are being upended.

There’s a push to limit protections in order to support rapid development. There is a race on, and it looks like winning matters more than following the rules.

Is that the new standard companies have to sign up to in order to stay relevant? Move fast and worry about the consequences later?

So many questions… You probably have more.