When To Crank Up The Formal Scientific Method


Monday, 8.49pm

Sheffield, U.K.

To mistrust science and deny the validity of scientific method is to resign your job as a human. You’d better go look for work as a plant or wild animal. – P. J. O’Rourke

Sometimes I feel like all I do is talk around what’s happening to what we do with the introduction of generative AI.

This probably bores some people.

But it’s important to test these new technologies and understand their limitations and applications.

Hypes are nothing new, after all.

In the last twenty years I remember being excited by biological computing, genetic algorithms, cloud computing and a bunch of other fads, before now playing with Gen AI, as the cool kids call it.

And I’m finding that my evolving relationship with it is hitting a few hurdles.

First, there’s an issue with cognitive capacity – how much our brains can take in.

For example, the other day I wanted to modify some code to make it more flexible.

The output from the AI worked perfectly and that was good – because it took something that I knew how to do but which would have taken me ten minutes or so and did it in less than a minute.

Time saved. Great.

Today I was trying to understand a particular statistical approach.

I tried putting the question into the AI and ran the answer, but I couldn’t work out whether it had understood what I had asked, or whether the answer it gave me was right.

The problem is I didn’t know enough to know if the AI was doing something correctly.

This is probably worth repeating.

In order to have confidence in what the AI tells you, you need to know enough about it already to be able to judge the quality of the information.

What you know matters.

You can’t just put something into the system, take the answer, publish the result and expect it to be correct.

The limitation is your ability to understand what’s going on.

Now, when I don’t understand something I read about it, watch videos, and try to find a quick solution.

If I still don’t get it, it’s time to do what Pirsig writes about in “Zen and the Art of Motorcycle Maintenance” – and that’s to crank up the scientific method.

To do this you get a notebook – I often prefer a clipboard and paper – and you start writing things down.

You read, take notes, and try to understand what you can do.

Which takes me back to another book I read two decades ago – “The Personal Software Process” by Watts Humphrey.

He writes there about the value of printing out code to do a review before running it – you can pick out many bugs when you see it in that different format.

The point is that we are constrained by our ability to comprehend what’s going on.

We need time to appreciate and consider and digest what is presented to us.

If you’re using AI to help you do something for a client, for example, it doesn’t matter if you can do it in five seconds when it previously took you five days.

The bottleneck is your customer’s ability to understand what you’re presenting to them.

The real shortcut is something different – ask yourself when a client would simply accept the output of an AI-generated system when you present it to them.

One word.

Trust.

Not in the AI. Trust in you.

That’s when the client will take your word that the output is good.

Which then makes you the next bottleneck – do you really understand what’s going on?

Or if you have teams working for you that use AI, do you trust them to do what’s needed to understand the output?

I think that in a world where anything can be generated humans will rely more and more on each other – and trust will become vital – even more so than it is now.

Does this make the case for a blockchain? Is that what AI will take us towards? Is AI the problem to which blockchain is finally a solution?

Or is trust something that will become the most human thing to grow?

You only work with people you like, admire and trust.

We’ll see how things work out.

Cheers,

Karthik Suresh
