How Productive Will AI Make Us?

2025-06-18_ip.png

The question of who owns the IP produced by generative AI will have a huge impact on productivity.

The rules have been fairly straightforward for a while – don’t copy someone else’s work.

But how does this work when an LLM generates output for you that is remixed from other people’s work?

It looks like the platforms pass that concern over to you.

You start the process with a prompt – which you have written, so you own.

The output is yours, depending on the license terms of the AI tool you’re using. Some seem to want to hold on to the IP, so that’s not entirely straightforward.

But the output can also be a copy of someone else’s copyrighted content.

This is most obvious when you create a character that looks exactly like a commercial one, but it also applies to code, which might be harder to spot.

There is a danger zone where you prompt an AI and get output that you then use in a commercial product without checking if it infringes anything.

This leads to a few scenarios.

First, you need humans to check the work, so your productivity is limited to the ability of people to process and check what’s going on. You get a boost, but it’s small.

Second, the checking process gets automated and tools can give you output that has been checked against everything else and guaranteed to be original.

Third, existing protections are swept away and you can do what you like.

Fourth, the whole thing is like smoking. It feels good, you do it for a while, then find out that it rots your thinking and we start to move away from it as a society.

Fifth, the hype fades and we move on to the next thing.

Sixth, none of the negatives happen, the industry sorts out the issues, and we become incredibly productive.

There are probably more scenarios you can think of.

Questions About AI Use In Companies

2025-06-17_soft-or.png

I have many questions as we figure out how to use generative AI.

Peter Checkland has a model that (simplified) says that things happen – life is a flux of ideas and events.

We engage with this flux with standards. We learn from our experiences. And we modify our standards. So what are the standards that we need to think about now?

This is a non-exhaustive list for working with AI – again, more questions and observations than answers.

Monolithic or hierarchical?

Is it better to work on one big project – trying to create it end to end in one shot – or to work on and assemble components? The larger something gets, the more complex it becomes. Deep Research appears to take the component approach, building up over time.

Expert Mode or Collaborative?

Should we work on projects together or have an expert go away and build tools to a specification? I’m seeing the return of interest in Extreme Programming (XP), which has pair programming as a collaborative building approach. Will we see that come back?

Exploration vs Deployment.

When you can build anything, what do you focus on? How do you move from having a mockup to something that can be used in production? Is it an extension of what you’ve made, or is it a rebuild?

The legal aspect.

This seems a thorny issue. Who owns the output of LLMs?

Let’s say you use an LLM to write code for your app. The chances are that your competition is doing the same. So, if you have the same code in both your applications, how does that work?

As far as I know, you can’t copyright the output of a machine, so is all code now fair game? What if you mix this generated code with your own IP?

Cognitive accessibility

LLMs produce information far quicker than we can process it. There is a limit on what we can take in, so how are we going to reduce the flow of information to something we can actually make sense of? I saw a post recently that joked you can properly check 10 lines of code, but a 500-line change just gets waved through. What happens if your 200-page strategy deck is accepted unread because it takes too long to check?

Standards

We are going into this space with standards – IP protections, data management, work processes – that are being upended.

There’s a race on to limit protections in support of rapid development, and it looks like winning matters more than following rules.

Is that the new standard companies have to sign up to in order to stay relevant? Move fast and worry about the consequences later?

So many questions… You probably have more.

Is A Surface Level Understanding Enough For You?

2025-06-16_western-town.png

I was wondering when I’d get an opportunity to use Deep Research – I didn’t have any particular need for it on a day-to-day basis.

Until now.

The EURO conference is coming up next week and I’m brushing up on my history of SSM (Soft Systems Methodology).

Mingers (2000) has a table listing published case studies that use soft systems or soft OR methods.

This, I thought, might be an opportunity to test Gemini’s Deep Research.

I asked it to review the literature and create an up to date table.

The results are impressive. On the surface.

24 pages. 7,399 words. 53 references. In about half an hour.

The table in Mingers (2000) has around 51 references. Gemini’s has 13 – rather shallow. I assume it would do more if asked.

The writing is fine. The structure is good. The content is relevant. The experience is dead.

There’s nothing that stands out. It feels like one of those film sets they use when making Westerns – something that looks like the real thing, but that’s empty behind.

It has the following readability grades:

  • Kincaid: 14.7
  • ARI: 16.6
  • Coleman-Liau: 18.2
  • Flesch Index: 23.6/100
  • Fog Index: 18.5
  • Lix: 62.0 = higher than school year 11
  • SMOG-Grading: 15.5

Those aren’t particularly good grades. Essentially, it’s lots of big words strung together.
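For reference, a grade like Kincaid is just a formula over word, sentence and syllable counts. A minimal sketch of the Flesch-Kincaid grade in Python, with a deliberately crude vowel-group syllable heuristic (real tools use dictionaries and get closer):

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups, drop one for a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(1, n)

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Short words in short sentences score low; long Latinate words in long sentences push the grade up – which is exactly what a 14.7 Kincaid score is telling you.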

It looks academic. Plausible. Researched, even.

But is that enough?

References


Mingers, J., 2000. An Idea Ahead of Its Time: The History and Development of Soft Systems Methodology. Systemic Practice and Action Research 13, 733–755.

Make It Hard To Copy What You Do

2025-06-14_moat.png

Are the rules of building sustainable competitive advantage changing?

I’ve used the VRIO checklist for a decade now to check if I am on track.

  1. Value – Does the activity produce value for a client?
  2. Rare – Is it a hard-to-find capability?
  3. Inimitability – Is it hard to copy?
  4. Organisation – Do you have the operational system to deliver?
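The four questions work as a cumulative test – each “no” caps the outcome before the next question even matters. A toy sketch (the outcome labels follow the standard VRIO framework, not anything specific to my services):

```python
def vrio(value: bool, rare: bool, inimitable: bool, organised: bool) -> str:
    """Cumulative VRIO test: the first failed question sets the ceiling."""
    if not value:
        return "competitive disadvantage"
    if not rare:
        return "competitive parity"
    if not inimitable:
        return "temporary advantage"
    if not organised:
        return "unused advantage"
    return "sustainable advantage"
```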

I use the checklist to help me identify, design and implement new service lines that will have a sustainable competitive advantage.

And that’s because a sustainable competitive advantage, better known as a moat, is what sustains profitability.

If you have a moat, you can have a margin. Without it, your selling price drops to the price of production.

You’ll see comments like “the cost of intelligence will drop to the cost of electricity” – that’s what happens to a selling price in a world of perfect competition.

Based on that one economics class I took a while ago…

The most important item on the checklist, for me anyway, is inimitability.

If what you do is hard to copy then it’s difficult to replace and your customers stay with you.

But is AI changing that? Surely you can make a copy of any business in an instant with AI?

The rapid introduction of AI tools that seemingly do everything can make it feel like the barbarians are at the gate – that everything is going to change and everyone is going to be out of a job.

Perhaps we should pause for a minute.

There is a story, possibly apocryphal, about the war in the South Pacific.

Soldiers came in, built airfields, planes came with goods, and a thriving economy sprang up.

The war ended, and the soldiers left.

The islanders wanted the economy to go on, so they kept the runways paved, built new buildings out of bamboo and preserved the look of the airfields.

But no planes landed.

The point is that if a service is simple and surface level – then it is under threat from AI, automation and replication.

If it’s layered and intertwined, with a lot of tacit knowledge involved, then it’s harder to replace.

If you want this wrapped up in a single line it is still – make it hard to copy what you do to create value.

AI Isn’t Magic – It’s Just Another Way Of Working

2025-06-13_magic.png

I recently met up with Graeme Forbes, one of the smartest people around, for a chat that ranged from philosophy to Pratchett.

Terry Pratchett’s books had penetrating insights into how people think and act. I only wish I had come across them earlier.

One of the ways he sees philosophy, Graeme said, is as a fight against magical thinking. Magic, it turns out, is a technical term in philosophy.

There’s magic in technology as well. Arthur C Clarke wrote that “Any sufficiently advanced technology is indistinguishable from magic.”

The thing with magic is that, if it exists, you don’t need to do anything else.

Pratchett’s Discworld is a world of magic, one where there isn’t much technological progress. When you can do anything with magic, why would you invent anything new? Why would you strive for progress and improvement?

A more subtle problem is magical thinking. That something new will change everything, as if by magic. And the most recent focus of that is AI.

Of course, AI isn’t magic. It doesn’t solve everything. It has limitations, some temporary that may be addressed as the technology evolves, some more fundamental, such as trust, reliability and maintainability.

We need to figure out how to use it, where it works, and where it doesn’t. How it fits into the workflows that we run.

What Did You Do Yesterday That You Can Do Better Today?

2025-06-12_ai-jobs.png

There’s a lot of concern about what AI will do to jobs.

I think it’s less about losing jobs and more about jobs not being created at all.

I remember going to a conference around 20 years ago.

A speaker was talking about his team of analysts – people with Master’s degrees and PhDs.

This high-level team, he said, worked late into the night after markets closed, crunching the numbers and figuring out what they meant for clients.

It seemed like a big, expensive operation.

And I thought to myself – I do that now with a spreadsheet.

Each improvement opportunity I’ve seen has started that way – we’ve got a team doing something and it takes time and effort.

The team is too busy doing the work to try and do it better.

But once someone figures out how to automate or eliminate an activity those roles become unnecessary.

Or we set up smaller teams with the tools they need to get the job done as efficiently as possible.

New approaches and tools mean that jobs that you were hiring for yesterday just aren’t required today.

And the extension of that is eventually the tasks get done without supervision at all.

That does sound a bit depressing – is there always an inevitable slide from jobs existing to jobs becoming irrelevant?

But that’s also the nature of a dynamic, innovative economy.

The good news is that there is always a different job to do – other problem situations that need addressing.

The trick, perhaps, is always thinking about how you’re going to replace something done one way yesterday with a better way today.

Take Action No Matter How Frightened You Are

2025-06-11_blockers.png

I seem to be getting extreme views on technology in my feed and it’s easy to be confused about what’s right and wrong.

On one extreme, AI is viewed as exploitative, unethical and uncontrolled.

On the other it’s viewed as a panacea, the universal solution, the answer to our prayers.

The truth is probably somewhere in the middle.

People and cultures adapt slowly. Some niches may move fast and break things but it takes decades for change to ripple through into widespread usage.

Many ideas we use to think about the world still date to the 50s and 60s – the latest thinking is still limited to small pockets of academic practitioners.

We could do with a model to help those of us trying to feel our way through new technologies – trying to figure out whether things like AI are good or bad, helpful or unhelpful, useful or wasteful.

And I wonder whether Part X, the model from Phil Stutz, the psychiatrist who is the focus of Jonah Hill’s documentary Stutz, could be useful.

Stutz defines Part X as the “voice of impossibility” – the voices that dissuade and criticise.

Perhaps AI is coming for jobs. Perhaps it will make categories irrelevant. Perhaps it will destroy markets.

Or it will create new jobs. Create new categories. Create new markets.

The main thing for us, at all stages of our careers, is to figure out how to engage with this space, work out what it means for us, our firms, and economies.

And then take steps to understand how it all works.

As Stutz puts it – take action no matter how frightened you are.

Mode 1 And Mode 2 Operation In A Service Business

2025-06-10_prof-services-modes.png

I’ve been reflecting on the modes in which those of us that provide services operate on a day to day basis.

How do you decide what kind of approach is required in a given situation – say when a client is looking for help?

Think about this in terms of two modes – Mode 1 and Mode 2.

In Mode 1, the client has a clear brief. They know what they want and it’s set out in a specification.

In this mode, what we have to do is read the brief and come up with a budget to do the work.

If the budget is in line with what the client has and is good value compared to what else they have, then you’re in business.

In Mode 2, the client can see that there are issues they have to deal with, but is less clear on what the problem is or where to start.

Mode 2 starts with discovery, with trying to understand what is going on before moving to design an intervention and then deliver it.

We get muddled up when a client tries to attack a Mode 2 problem with a Mode 1 approach – asking you to get on and do some work or create a budget or proposal before it’s clear what’s actually required.

And Mode 2 applied to a Mode 1 problem is the old sledgehammer-nut issue – just get the work done instead.

One way to waste less time is to think a little bit more up front about which mode is appropriate for the next situation you’re in.

Operators – Involve Managers Early And Often

2025-06-09_power.png

Technology people often find it a shock that the business world operates with a non-programmatic set of rules.

I once picked up a textbook on decision theory and spent a few hours building a set of decision models.

I learned about the difference between decision making under risk, decision making under uncertainty and how to quantify optimism or cynicism.
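Those criteria are mechanical once you have a payoff table. A minimal sketch with made-up payoffs: maximin for pessimism, maximax for optimism, and the Hurwicz rule, which quantifies the blend between them:

```python
# Toy payoff table (numbers invented for illustration): rows are actions,
# columns are states of nature whose probabilities are unknown.
payoffs = {
    "expand": [120, 40, -30],
    "hold":   [60, 50, 20],
    "divest": [10, 10, 10],
}

def maximin(table):
    """Pessimist's rule: pick the action with the best worst-case payoff."""
    return max(table, key=lambda a: min(table[a]))

def maximax(table):
    """Optimist's rule: pick the action with the best best-case payoff."""
    return max(table, key=lambda a: max(table[a]))

def hurwicz(table, alpha):
    """Hurwicz criterion: alpha is the coefficient of optimism
    (1.0 = pure optimist, 0.0 = pure pessimist)."""
    score = lambda a: alpha * max(table[a]) + (1 - alpha) * min(table[a])
    return max(table, key=score)
```

With these payoffs, maximin picks “hold” and maximax picks “expand”, and the recommendation flips as alpha moves between them – which is the optimism-or-cynicism quantification the textbook describes.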

I still remember the look in a client’s eye when I presented my analysis – and it wasn’t delight and acceptance.

Instead it was a wary look, one that said: I’ve seen this kind of stuff before, I don’t understand it, I’m not sure I trust that it’s been done right, and if I get the call wrong it’s not you who has to explain what happened.

Since the 50s, technologists have modelled managers as people who accept or reject our recommendations – and assumed that, as rational people, they will choose the optimal path.

Our job as operators is to analyse and recommend, their job is to accept our recommendations and give us resources.

Unsurprisingly, this approach fails because our models of what managers do are flawed.

Managerial decision making is based primarily on power and bargaining – it’s about making a case for resources, trade-offs, and personal positioning.

It’s rational, but uses different objective functions – the thing that’s being optimised – than operators do.

What this means is that if you want to get a decision approved, get the decision makers involved in the process as early as possible – preferably co-creating your analysis – rather than presenting a finished package and hoping they agree with you.

Wisdom Is Telling The Difference Between Simple And Complex

2025-06-07_messages.png

How do we create services that clients really want to buy?

I think some of the advice we see needs a little more thought.

We can see the past very clearly, we believe.

When you look back things often arrange themselves in straight lines, a clear route from A to B.

I’ll let you into a secret.

Sometimes people even invent straight lines because they make for better stories of how they, as the hero, started with few resources, encountered adversity and overcame it.

But when it comes to the future, we have much less to go on.

Just because something worked in the past is no reason why it should in the future.

In some cases, success using one method is almost a guarantee that it will not work again because the market and competition respond and adapt.

This is the case with many new marketing strategies: once everyone uses them, they lose their value.

A real path is often more complex, one that’s carved rather than trodden.

But that isn’t to say we can’t learn from the path. We shouldn’t endlessly reinvent approaches that have been tested and work.

We should just get those jobs done because we know the methods work. It’s a straightforward task and the job is execution.

Then you have those situations that are new and untested. Where you are exploring new markets, new opportunities, new sources of value.

That’s where you have to carve a path, where your capability and expertise allow you to work through a situation and figure out what to do next.

All of this goes to say that some things are simple and some things are complex, and wisdom is being able to tell the difference.