How to use data to understand and predict the future

big-data-vs-thick-data.png

The world’s largest corporations are increasingly investing in projects that use data to improve decision making across their businesses, from marketing to operations.

Why is it then that nearly three-quarters of big data projects are unprofitable?

One explanation is put forward by Tricia Wang in this TED talk.

We use data to help us understand how the world around us works, and we hope that this understanding will help us predict what is going to happen in the future.

But different ways of approaching the idea of data lead to different tactics at different organisations.

The “hot” approach is the one of big data.

Everything is connected – the Internet of Things (IoT).

Data is collected automatically, recording everything from your browsing history to when your toaster turns on and off.

Tricia Wang coined the term “thick data” for an approach to collecting data by observation – something done by the likes of ethnographers and anthropologists.

This is a modification of the term “thick description” that tries to explain behaviour and the context in which the behaviour takes place.

So, in big data, computers collect information from customer “touchpoints” – the places where you interact with the machines.

In thick data, people collect information by observing and interacting with other people.

Tricia’s example of how this results in different outcomes is the case study of Nokia.

The huge amount of data collected by Nokia from its customers and market research failed to alert the company to the possibilities of the smartphone.

Tricia’s research showing that low-income Chinese people were willing to spend half their monthly income on buying a phone convinced her that the smartphone would take off.

And we all know what happened to Nokia when the iPhone took over the world.

In a big data world more is better – sample sizes are huge.

We collect millions of data points and store these in the cloud.

Tools like IBM’s Watson help you analyse and evaluate this data for not just quantitative insights but also, through natural language processing, for emotional components and behavioural predictions.

With thick data, we have a small number of data points.

Someone has to spend time with people, observing what they do and where they do it, and drawing conclusions about what that means for the future.

Big data helps you quantify the world.

All the measurements you take give you the ability to look at how people interact with your business in a level of detail beyond anything that was possible before, and to express this in numerical terms.

Thick data helps you explain why people do what they do.

Taking time to watch and interact with people gives you insights into the way they think and behave and, crucially, what they might do next.

The point is that it is not an either-or situation.

Using just big data is not enough.

Combining the power of big data to quantify and the power of thick data to explain can give you a better understanding of the situation.

Take a simple example of thick data in action.

If you have watched The Social Network, you’ll remember a scene where Zuckerberg is trying to figure out what to do with his system.

Over a drink, his friend comments that it would be great if people had a badge that showed whether someone was single or attached.

Zuckerberg has a flash of insight, and adding that feature to Facebook causes subscriptions to rocket.

In other words – if you work out how to use both big data and thick data in your business, you are more likely to be able to understand and predict the future.

How to create energy from underwater kites

underwater-kites.png

In 2015, a £25m project was launched to install underwater “kite-turbines” in Holyhead Deep, off the coast of North Wales.

Swedish developer Minesto has built the turbines and plans to commission the project in stages, starting in 2017 with a 0.5 MW demonstration unit of their patented Deep Green ocean energy power plant.

Unlike airborne kites, which turn a generator on the ground, these underwater kite-turbines have a wing with a turbine attached directly to it.

The underwater current lifts the wing and the kite is steered in a figure-of-eight at several times the speed of the current.

The water flows over the turbine blades and turns them, producing electricity, which is then transmitted through a cable to the kite-turbine’s tether on the seabed and from there to the grid onshore.

Most existing tidal technology is large, fixed and can operate only in currents that are faster than 2.5 metres per second.

Because the movement of Deep Green increases the speed at which water flows over the turbine, it can operate at lower current speeds than fixed installations, down to 1.2 metres per second.
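The reason kite speed matters so much is that the power available to a turbine scales with the cube of the flow speed (the standard kinetic-power relation, P = ½ρAv³). A quick sketch in Python – the swept area, speeds and multiples below are illustrative assumptions, not Minesto’s published figures:

```python
RHO_SEAWATER = 1025.0  # kg/m^3, a typical seawater density

def available_power_w(area_m2: float, speed_ms: float) -> float:
    """Kinetic power (W) in the flow through a swept area: 0.5 * rho * A * v^3."""
    return 0.5 * RHO_SEAWATER * area_m2 * speed_ms ** 3

slow = available_power_w(1.0, 1.2)      # turbine held fixed in a 1.2 m/s current
fast = available_power_w(1.0, 1.2 * 3)  # kite flying at three times the current speed
print(f"{fast / slow:.0f}x")            # cubing a 3x speed gives 27x the power
```

The cubic relationship is why a small, moving kite can compete with a much larger fixed turbine in the same current.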

Each turbine is rated at between 150 and 800 kW and can work submerged in depths of 15 metres to 300 metres.

Kite-turbines can also be up to 15 times lighter than fixed alternatives, at around 10 tonnes.

The locations for these generators have to be chosen so that they don’t interfere with shipping or other sea users.

Following the demonstrator project in 2017, the site will be gradually expanded to house 20 power plants producing 10 MW.

Minesto have also announced that they are looking to take the eventual size of the array to 80 MW.

The cost of energy from this technology could be around £1 million per installed MW at this point, although costs could decrease with scale.

The UK is in a unique position to harvest energy from tidal resource – it has around half the European tidal resource and 10-15% of global resource according to Minesto.

Tides are also very predictable, making this kind of technology very attractive if it can be deployed at scale because it uses renewable resource and its output can be predicted with a high degree of accuracy.

What the prices of batteries mean for storage applications

Battery-price-chart2.png

The prices of battery packs fell from close to $1,000 per kWh at the start of the decade to $227 in 2016, a drop of around 80% according to a McKinsey study released at the start of the year.

Current projections put them on course to fall below $200 per kWh by 2020 and below $100 per kWh by 2030.

What impact does the cost of batteries have on the overall business case for producing an electric vehicle?

Some interesting numbers are discussed in this Tesla forum article:

  • There are claims that Tesla’s internal cost of batteries ranges from $150 to $240 per kWh now.
  • GM revealed that their battery cost at cell level was around $145 per kWh.
  • At a pack price of $174 per kWh, a 60 kWh battery pack would make up $10,440 of the Chevrolet Bolt’s $37,495 price.

This means that the cost of the battery in an electric vehicle is already under a third of the price of the car, and could drop to under a fifth by 2030.
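That claim is easy to check with back-of-envelope arithmetic, using the Bolt figures quoted above:

```python
PACK_SIZE_KWH = 60           # Chevrolet Bolt pack size quoted above
VEHICLE_PRICE_USD = 37_495   # Bolt price quoted above

def battery_share(pack_price_per_kwh: float) -> float:
    """Fraction of the vehicle's price accounted for by the battery pack."""
    return PACK_SIZE_KWH * pack_price_per_kwh / VEHICLE_PRICE_USD

print(f"{battery_share(174):.0%}")  # at the quoted 2016 pack price
print(f"{battery_share(100):.0%}")  # at the projected 2030 pack price
```

At $174 per kWh the pack is about 28% of the car’s price; at a projected $100 per kWh it falls to about 16%.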

Lithium-ion technologies dominate the battery storage market, making up 95% of new energy storage projects according to McKinsey research.

The same research found that battery storage applications are already economic in four important areas: demand charge management, grid-scale power, small-scale renewables and storage, and frequency response.

They also note that in applications such as demand charge management and small-scale renewables, lead-acid batteries may work better than lithium-ion.

It is likely that in the coming years packages of energy storage solutions for industrial and domestic use will become simpler and easier to buy and install.

In any industry, falling prices as the technology improves benefit consumers more than producers – buyers will gain most of the benefit from price reductions in battery technology.

Energy storage has the potential to transform the energy system as we know it, and it looks like it could happen faster than anyone expected.

When should you use algorithms for decision making?

low-high-validity-environments.png

Many of us use algorithms every day for decision making.

We don’t always trust them, however, and tend to use them less if they have shown themselves to be imperfect in the past.

We tend to judge algorithms by how well they do at meeting a performance goal of some kind, rather than working out whether they will do better than the method we currently use.

This usually results in worse outcomes.

For example, if you drive, the chances are that you use a satellite navigation system often.

Whether it is a standalone system with built-in maps or a connected system with traffic feedback like Google Maps, how often have you decided to ignore the guidance because you know best?

The chances are that you will do better more often by following the guidance.

Algorithms work better in some situations than others.

Broadly, there are three kinds of situations or environments you could face.

The image above categorises situations along two dimensions: whether they are learnable – you can improve through practice – and whether they are predictable – you know what could happen next.

A zero validity situation is one where you can’t learn through practice and you don’t know what could happen next. A career path for a baby, for example, or the direction of world policy with Trump and Brexit.

A high validity situation is one where you can get better with practice and you can tell what is going to happen next.

Learning to play tennis for example, or learning to drive a car.

You know that a ball is going to arrive in your direction in the near future, or that you will need to drive in a straight line, or around a curve or slow down or speed up.

Between these two extremes is a wide range of low validity situations characterised by uncertainty and unpredictability.

The Nobel Prize-winning economist Daniel Kahneman writes about how algorithms outperform human experts in such low-validity environments.

These cover a wide range of situations including medicine, recruitment, finance, logistics and so on.

In study after study we find that simple rules outperform experts.

For example, a simple six-point model outperformed doctors in judging the probability of cancer in a patient.

A stock market index fund that simply follows the top 500 companies will outperform the vast majority of expert stock pickers.

Using a few measures to score applicants will select candidates who will perform better than those selected by “gut instinct”.

In fact, selecting applicants purely based on the information in CVs can produce a better result than selecting after an interview.

Algorithms don’t have to be complex. They can be simple rules drawn from existing statistics or common sense.
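As a sketch of just how simple such a rule can be, here is a scoring function that is nothing more than the mean of a few ratings – the measure names and values are made up for illustration, not taken from any study:

```python
def score_applicant(ratings: dict) -> float:
    """Mean of a handful of 1-5 ratings; higher is better."""
    return sum(ratings.values()) / len(ratings)

alice = score_applicant({"experience": 4, "skills_test": 3, "references": 5})
bob = score_applicant({"experience": 2, "skills_test": 5, "references": 3})
print(alice, bob)  # the rule ranks alice (4.0) above bob (3.33...)
```

There is no weighting, tuning or machine learning here – just a consistent rule applied the same way to every candidate, which is exactly what “gut instinct” fails to do.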

What algorithms do is help cut through the “noise” and focus on a few factors that can make a difference.

When used well, algorithms can help experts make much better decisions by helping them bypass their own cognitive biases.

We should all be using them much more in our work and lives.

When getting a cup of tea is a crucial decision

expectations-and-reality-over-time.png

In one of Austin Kleon’s talks, the writer and artist describes how the process of creative work unfolds.

It all starts with an idea.

It might be an idea for a new piece of art, a book or a charitable project.

It could be an idea for a new spreadsheet model, an asset purchase or a renewable investment.

Those count too – there is no reason why what we think of as “work” can’t be as creative as “art”.

The idea seems like the best thing in the world, especially if you have just come up with it in the shower.

Cue big, excited, smiley face.

Then you start work on that idea and begin to develop it and create packages of work to complete.

As you get deeper into doing that, you start to realise that this might be harder than it first seemed.

Once you get into the detail, various problems appear that you need to deal with as you move things along.

Cue pensive face.

At some point, you reach rock bottom. This is where nothing seems to work and you can’t see a way to fix all the problems you have.

All you have done so far is in danger of being completely useless. You might just have wasted days/weeks/months/years on this project.

Cue sad face.

This is a crucial point in the process.

This is the time to go and get a cup of tea.

Or coffee. Or whatever else gives you a break and then lets you get back to work.

It’s when you keep going and work through to the next stage in the process that things start to get better.

Just by spending time and working on the problems, you come up with ways to solve them and get things moving again.

It doesn’t seem that bad now.

Cue return of pensive face.

Then you’re starting to speed up again, and you enter the final stretch.

At this point, the work is done – whether it is art, writing, a spreadsheet or a construction project.

It’s perhaps not reached the lofty heights that you first imagined, but it’s a good piece of work, it is now done, and you can be pleased about it.

Cue smiley face.

The message in Austin’s talk is that you should think “process not product”. Creating good work is as much about working on the process as on the product.

And a crucial part of that process is being able to recognise when you need to take a break and get a cup of tea, so that you can return to work and keep going after that.

Why change efforts fail

problem-solution-conflict.png

Organisations are constantly implementing new initiatives to improve the way in which they do things.

Why is it that so many of these efforts fail?

Robert Fritz describes an interesting way of analysing and showing these situations in his book Corporate Tides.

He argues that the existing structure of an organisation undermines and frustrates efforts to change things.

Take, for example, a common problem in many organisations – an excessive workload on staff.

The solution to this problem is to add more people to help with the workload. So quite often managers will start recruiting.

Another problem is the need to maintain earnings and manage budgets.

Adding more people has an impact on budgets, and the solution to that problem is to limit hiring new people.

Limiting the number of people hired then has an impact on existing staff and their workloads.

The image above shows this, adapted from the method used by Fritz.

We have problems and associated solutions.

What is not immediately obvious to the people involved is that the solution to one problem can often make another problem worse.

This is because different managers are involved and don’t necessarily see the way everything interacts.

In large organisations, these dynamics can take years to play out.

A period of hiring by operational managers can lead to a clampdown in the following years by financial managers – leading to a constant oscillation between one bad situation and another.

We see this tension again and again in corporate situations. For example:

  • Between long-term investment and the need to report short-term results.
  • Between decentralised decision making and central control over the organisation.
  • Between employee responsibility and managerial control.

This is why change efforts based entirely on dynamic energy and good intentions can make a difference for a while, but fail in the long run.

For example, you could bring in a manager who, through sheer energy and momentum, creates a new way of doing things.

As soon as that driving force is removed, the organisation resumes its normal pattern of doing things – its state of equilibrium.

What is “normal” is determined by the structure that is in place.

The structure might not be immediately obvious or visible, but it has a huge impact on whether change will succeed or fail.

The implication is that if you want real change, you can’t just fiddle with an inadequate existing structure.

You first need to establish a more suitable one – and that should be the primary focus of managers in strategic leadership positions.

The four principles for investment success

four-principles-for-investment-success.png

It sometimes seems that the process for investing money is made much harder than it should be.

Whether you are putting your own savings aside every month or making decisions on behalf of a large corporate, there are four principles for investment that are worth keeping in mind.

These principles are set out in the investment philosophy followed by Vanguard, one of the world’s largest investment companies.

Vanguard was founded by John Bogle who created low cost funds designed to make investing simple.

Fans of Bogle are called Bogleheads, and supporters include Warren Buffett, who wrote that Bogle is the person who has done the most for investors by urging them to invest in ultra-low-cost index funds.

The four principles, however, apply beyond just personal investing and to a range of decisions we face.

1. Set clear goals

You need different approaches for short-term and long-term needs.

The same investment plan cannot be used to save for a house deposit, school fees or retirement – you need a different approach for each one.

For short-term needs, you may be better off with ways of saving money that are safer.

For long-term needs you may be happier with more fluctuation if you don’t need the money for a while, but much less comfortable with volatility if you are close to retirement.

2. Diversify asset allocation

You don’t know what is going to do better at any given period.

Quite often, something that does poorly one year can be the best performer next year.

Trying to pick winners usually results in you losing your stake.

The option that appears to work best is to keep a wide selection and pick from the entire market. The more you have in your collection, the less impact any one pick has on your results.

3. Minimise cost

Investors can’t control markets.

What they can do is control the costs of investing.

Every pound paid in fees or commissions reduces the returns to the investor.

Most managed funds do worse than an unmanaged index fund that tracks the market.

Worse, some managed funds are simply “closet” indexers, where they take large fees but simply follow the market.

Pick low-cost options wherever possible.

4. Be disciplined and think long-term

Investing is a marathon, not a sprint.

The power of long-term investing lies in the ability of investments to compound over time.

With a long enough time-horizon small, regular investments can add up to a large return.

The mistake some people make is to react emotionally to short-term volatility and make quick, rash decisions.

Being disciplined and following a long-term strategy is the best way to counter emotional responses.

Set your strategy, make your decisions and then get on with other, more important things.

The 1 kWh Energy Reduction Strategy

accumulation-of-marginal-gains.png

The business case for energy efficiency should be simple: the cheapest unit of energy is the one you do not use.

In spite of this, why is it so hard to get energy efficiency and energy reduction projects underway?

According to the International Energy Agency (IEA), energy efficiency is the only energy resource possessed by all countries.

Globally, we are making progress on energy intensity – it’s just that we aren’t making progress as fast as we need to.

According to the IEA:

  • Global energy intensity improved by 1.8% in 2015, up from 1.5% in 2014.
  • Emerging and developing countries reduced intensity by 2.5%, doing better than developed countries, which managed 2%.
  • China is the best performer, with a reduction of 5.6%.

Although this is good, we need to have an annual improvement in energy intensity of 2.6% globally to meet our climate goals.

A 2.6% improvement doesn’t seem challenging. At an individual and organisational level, why is it that we can’t easily meet that target?

The problem is that, globally, more than 70% of energy usage is not covered by any form of energy efficiency performance requirement.

Two-thirds of the buildings being built do not have to comply with codes or standards.

In these situations, market forces determine what gets done, and people will quite often go for the cheapest option, which may not always be the most efficient.

For example, India is the third largest energy user in the world and installs a staggering amount of solar panels.

As it gets richer, however, it is also installing more air-conditioning, and so its energy demand is rising faster than the amount of new clean generation being installed.

Large projects face large challenges

Governments and policy makers want to meet climate change targets in the quickest and easiest way possible.

That is why they focus on large projects, such as the Hinkley C nuclear plant. The idea is that it will deliver both a substantial amount of secure energy and have a lower carbon impact, helping the UK government meet its targets faster.

The public debate and scrutiny, however, can be intense. It takes a long time to get such projects approved and underway.

In organisations, large energy efficiency projects that involve high capital costs, longer payback times than core business options, or the need to enter into long term agreements with third parties can face several hurdles.

You need to put together business cases, have them reviewed by panels, go through approvals processes before they are eventually accepted or denied.

Governments know this, and that is why much policy focuses on creating new infrastructure.

It is easier to get people to do something new from scratch than it is to have them fix an existing situation.

The solution may lie in a concept called ‘the aggregation of marginal gains’

Doing small things better regularly adds up over time.

This method can be traced back to the Austrian chess player Wilhelm Steinitz, who applied an ‘accumulation of small advantages’ to gain a positional advantage in his play and became the first official world chess champion in 1886.

The best-known recent example of this approach is how Sir Dave Brailsford transformed British Cycling and the performance of Team GB in the Olympics.

His basic idea was that if you broke down all the activities involved in winning cycling races into their component parts and then made a 1% improvement in each of those components, then the gains would add up to a significant amount.
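A toy calculation shows why this adds up. If we assume, as a simplification, that the component gains are independent and multiply together, fifty separate 1% improvements compound into roughly a 64% overall gain:

```python
def aggregate_gain(components: int, improvement: float = 0.01) -> float:
    """Overall multiplier from improving each of `components` factors by `improvement`."""
    return (1 + improvement) ** components

print(f"{aggregate_gain(50):.2f}x")  # fifty 1% gains compound to about 1.64x
```

The multiplicative model is an idealisation – real gains overlap and interact – but it captures why many tiny improvements can beat one heroic effort.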

Another example is Mazda’s 1 gram strategy.

What they do is look for ways to save just 1 gram in weight from each component of the car.

The Mazda2 weighs a little over a tonne, and this low weight means that Mazda can use less expensive transmission technology, making the car more affordable and more efficient, and requiring fewer materials to build.

At the same time, the lighter car makes for an agile and nimble ride – keeping Mazda’s ‘zoom-zoom’.

Is a 1 kWh strategy the answer?

Instead of focusing mainly on large projects, perhaps applying a 1 kWh strategy is the way to get significant energy reductions in organisations.

In Europe, with two years to go before mandatory energy audit reports for large organisations have to be done for the second time, energy managers should look at the small changes they can make every day.

Look at every component of how your organisation uses energy, and see if you can shave just 1% off that.

There are around 200 working days in a year. If you asked each person to work from home just 1% of that time – 2 days a year – what impact would that have on the fuel consumption associated with commuting?

If you have 5,000 lights that are on all the time, what would removing 1% of them, or 50 lamps, do to your operations?

What would a 0.5 degree change in your setpoint for heating or cooling do to your building’s need for electricity?

How would removing one printer in a hundred affect the way in which your business worked?

How would replacing one desktop in every hundred with a laptop impact your staff?
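To make the first of these questions concrete, here is a back-of-envelope sketch of the commuting example; every input is an illustrative assumption, not measured data:

```python
STAFF = 1_000                # assumed headcount
WORKING_DAYS = 200           # working days per year, as in the text
COMMUTE_KWH_PER_DAY = 10.0   # assumed fuel energy for one daily round trip

home_days = STAFF * WORKING_DAYS * 0.01   # 1% of days worked from home
saved_kwh = home_days * COMMUTE_KWH_PER_DAY
print(f"{home_days:.0f} commute-free days, {saved_kwh:,.0f} kWh saved per year")
```

Even with these modest assumptions, a 1% change frees up thousands of commute-free days and a five-figure energy saving – the kind of gain that never has to pass an approvals panel.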

We ignore small wins too often because they don’t seem worth the effort.

The point, however, is that the “long tail” of small wins could get you to where you need to be in terms of energy efficiency without stumbling at all the hurdles that are associated with large projects.

The ‘aggregation of marginal gains’ strategy has worked in fields as diverse as sport, automotive manufacturing and healthcare.

There is no reason why it shouldn’t work across industry and business in general.

How to choose your next job

decision-table-job-parameters.png

How do you make a decision about what to do next?

Which job should you choose, which option should you explore, which project should you spend time on?

These are problems we face every day, often under time pressure and with limited information.

Take, for example, one of the most important decisions you make – what job to do.

This is a decision that has a major impact on your life and carries a lot of emotional weight. You will be influenced by experiences in previous jobs and what your goals and expectations are of the future.

It is a high stakes, high emotion decision.

In a crucial decision such as this, you should be taking into account several parameters and thinking clearly and carefully about your options and what you should do.

Instead, the human brain often gets overwhelmed and focuses on one or two factors and excludes other, equally important ones.

It defaults to emotional decision making, with people making choices about how they feel about the factors that seem most important at the time.

One study, for example, found that more than half of the people surveyed left their job because of their relationship with their manager.

That single factor might have been enough to discount all the other positive factors that might have made it a better choice to continue with that job.

So how do we make better decisions when it comes to a crucial problem like choosing your next job?

One tool that can help is called a decision table.

First, identify the parameters that are important to the problem.

What are the things that you should consider when assessing the choices you have open to you?

When you are doing this, it is important to consider more than just the ones that come readily to mind. What do other people think, what does the research indicate?

The list of parameters in the image above comes from research that identified the eight factors most important to the study participants’ job satisfaction.

Second, assess each job option you have against the parameters.

The question to ask yourself is, “Will this job mean I am better off or worse off on this parameter?”

A simple coding system is to use 0 when there is no change, + when you are better off and – when you are worse off.

In addition, you could use ++ and –– to indicate when an option makes you much better off or much worse off.

Just doing this exercise means that you will consider each factor in turn and assess how your life will improve or worsen under each option.

At the end of the process, you will have a table that shows you how each job compares on the parameters or measures that are important.
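The whole exercise is simple enough to sketch in a few lines of Python – the jobs, parameters and codings below are invented for illustration, not drawn from the study mentioned above:

```python
# Numeric values for the +/0/- coding described in the text.
SCORES = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

jobs = {
    "Current job": {"pay": "0", "commute": "0", "promotion": "0"},
    "Job offer A": {"pay": "++", "commute": "--", "promotion": "+"},
    "Job offer B": {"pay": "+", "commute": "+", "promotion": "0"},
}

def total_score(ratings: dict) -> int:
    """Sum the coded ratings across all parameters."""
    return sum(SCORES[code] for code in ratings.values())

for job, ratings in jobs.items():
    print(job, total_score(ratings))
```

A plain sum treats every parameter as equally important; once the table exists, you can weight the parameters that matter most to you before adding them up.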

Now that you have considered all the parameters, you can figure out which ones are more important to you.

Are you, for example, prepared to take on a long commute for the prospect of much more pay?

Or would you rather have less pay and a better commute?

Are you ambitious – is getting a promotion really important? Or are you at a stage when you want a job that will pay for food while you get on with something that is important to you, like a creative pursuit?

A completed decision table will help you have that discussion with yourself or with someone else and help you consider all the factors that are important. It will lead to a more balanced decision.

It also greatly increases the chances that the decision you eventually make will actually result in better job satisfaction.

The same process can be applied to other areas. Perhaps not things like whether you should have coffee or tea, but definitely the important decisions, like where to invest, what to do, who to enter into business with.

When you have a problem that is important and where there is a high emotional component, that is the time to get out a pencil and start working on a decision table.

Why good people do bad things

you-and-your-environment-circles.png

We all know about the horrors that took place during the Second World War, in Cambodia, in Rwanda.

The history of humanity, in virtually every culture, is littered with stories where one group of people abused their power over another.

What do we infer from these stories?

One inference is that it was all down to a small group of individuals who were fundamentally evil and were able to dictate what was done from their position of power.

From Vlad the Impaler to Pol Pot, from the Nazis to Saddam’s Iraq, we can point the finger and find one person to blame, or a group of people that should be tried and punished.

Would you act differently if you were in their position?

The evidence suggests that you would not.

In a famous experiment conducted in 1971 at Stanford, researchers found that it took only six days to turn nice, normal college boys into sadistic monsters.

They did this by creating a prison, making some of the boys guards and others prisoners and setting up a simulation where the guards had absolute power over the prisoners.

They set up conditions that:

  • Dehumanised the prisoners
  • Deprived them of sensory stimulation – no clocks, no views of the outside world
  • Took away their identity – they were referred to as a number
  • Allowed the guards to punish infractions of the rules or improper attitudes

The end result was that the situation these people were put into brought out and magnified some of the worst aspects of their humanity and the experiment had to be abandoned after only six days.

Why is this relevant to us now?

Surely all this is just something that happened a long time ago somewhere else to people not at all like us?

The problem is that we tend to think that bad things happen because the people involved are bad rather than because the situation they are in allows them to do bad things.

This is called the fundamental attribution error and has been described as the “conceptual bedrock” of social psychology.

In everyday life, we explain away our own lapses by finding reasons in our environment for why we behaved as we did.

With other people, however, we tend to conclude that they are lazy, incompetent or thoughtless, explaining their behaviour as due to their internal characteristics.

Understanding that the environment has a huge impact on how people behave is crucial in some situations.

For example, I recently heard someone talk about visiting a care home where the staff referred to the residents by their door numbers.

“Room 32 needs a change, Room 42 is hungry”.

This is the first step to removing that person’s identity – reducing them to a number rather than a person.

Such practices should have no place in an organisation – especially one where people have power over others.

Finally, on an individual basis, we place great emphasis on personal fulfilment.

For example, do work that makes you happy.

It turns out that what makes you happy is less to do with the work you do, and more to do with the conditions of your work – do you have autonomy, feedback and control over what you do?

People in charge of designing organisations need to realise just how important the environment is in influencing how the people in that organisation behave.

If you want your people to perform, first create the right environment for them to be good.