How could microgrid and peer-to-peer energy networks work?

microgrids.png

Why is the energy business so heavily controlled and regulated?

Mostly, its history.

When you have a few large generators and millions of consumers, it's big business – and that leads to operators trying to control markets, which triggers political oversight, which inevitably leads to questions of control.

So what we have across the world is a system of generation, transmission and distribution over a grid system that connects where energy is made and where it is used, and a parallel system of metering and accounting to bill users.

Microgrids and peer-to-peer systems want to change that

Imagine a new housing development where the developers have decided to create a private network of electricity wires that connect the homes instead of using the cables and equipment provided by the grid.

There may be a few connections to the main grid, but the rest of the properties are effectively off-grid.

At the same time, each house has solar panels for electricity and hot water, excellent insulation, low running requirements and perhaps a micro-CHP unit and battery storage.

The independent network forms a microgrid.

The existence of housing units with the ability to generate electricity and heat from a variety of sources and a population that uses energy creates a network of peers – equal participants.

The concept of peer is sometimes forgotten – the households of the future will be both producers and users of energy – so called prosumers.

What they need to work are markets

In a microgrid peer-to-peer system, there will need to be some way of keeping everybody happy – and that is done by a price system and a market.

If people are free to set prices (or the trading is automated and the machines trade among themselves) then the market will result in a price that matches supply and demand.

It avoids the cost of routing energy through the grid, so it should be cheaper.
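As a sketch, the price-matching step could work like a tiny double auction: the highest bids meet the cheapest asks until no trade is profitable. Everything here is illustrative – the split-the-difference pricing rule and the figures are invented, and real peer-to-peer platforms use far more sophisticated mechanisms.

```python
def clear_market(bids, asks):
    """Match prosumer bids and asks; return (price_per_kwh, kwh_traded).

    bids and asks are (price_per_kwh, kwh) tuples. Each trade settles
    at the midpoint of the matched bid and ask prices.
    """
    bids = sorted(([p, q] for p, q in bids), reverse=True)  # pay most first
    asks = sorted([p, q] for p, q in asks)                  # sell cheapest first
    price, traded = None, 0.0
    while bids and asks and bids[0][0] >= asks[0][0]:
        qty = min(bids[0][1], asks[0][1])
        price = (bids[0][0] + asks[0][0]) / 2  # split the difference
        traded += qty
        bids[0][1] -= qty
        asks[0][1] -= qty
        if bids[0][1] == 0:
            bids.pop(0)
        if asks[0][1] == 0:
            asks.pop(0)
    return price, traded
```

With two buyers and two sellers of rooftop solar output, `clear_market([(0.15, 2), (0.10, 1)], [(0.08, 2), (0.12, 1)])` matches the 15p bid with the 8p ask and clears 2 kWh – the low bid and the expensive ask go unmatched, which is the market finding the price where supply meets demand.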

Experiments like the Brooklyn microgrid set up by LO3 Energy are showing how this could be done.

A peer-to-peer network does not have to be part of a microgrid

We could have renewable generators, like a solar farm, connected to the grid that want to directly sell all their output to a user connected somewhere else on the grid.

They can currently enter into a bilateral contract that is settled and billed by a supplier.

A true peer-to-peer system could eliminate the need for a supplier altogether, replacing it with a direct contract – based, for example, on a contract-for-difference model – although these are still complex to create and agree on a one-to-one basis.

A start in this direction is Open Utility’s Piclo platform that matches users with local generators.

We are still in the early stages of a transition

We’re a long way away from having solar PV on every roof and local networks of users have yet to spring up.

Will there be a revolutionary peer-to-peer change, or is it likely that the majority of the system will still be controlled by a few producers?

If history is anything to go by – network effects and scale matter.

We may have lots of committed, small players, but Google-style companies for energy will still probably emerge – a few highly connected hub players that aggregate and influence how everything else works.

We still operate in a winner-takes-all ecosystem, and peer-to-peer is a small part of it.

Will it be different this time?

What is emergence and how can we make it happen?

emergence.png

We’ve all seen a flock of birds wheeling and swooping as if it were a single, giant organism.

The same thing happens with shoals of fish, or even people trying to leave a train station at rush hour.

Why and how does this happen, and what does it mean for us?

The term emergence is used to describe complex phenomena or behaviour that emerges from the interaction of simpler elements – often in a way that can’t be predicted from the features of the simpler elements.

We can simulate flocking behaviour by setting up a system that follows three rules:

  1. Don’t crowd neighbours (separation)
  2. Steer in the average direction your neighbours are moving (alignment)
  3. Steer towards the average position of your neighbours (cohesion)

These three rules result in a swarm – see here for example.
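The three rules can be coded almost directly. This is a minimal sketch of one update step – the weights, radii and starting values are arbitrary choices for illustration, not from any particular boids implementation:

```python
import math

def step(boids, radius=5.0, sep_dist=1.0):
    """One update of the three flocking rules.

    Each boid is a tuple (x, y, vx, vy). Returns the updated flock.
    """
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        sep_x = sep_y = 0.0   # rule 1: don't crowd neighbours
        ali_x = ali_y = 0.0   # rule 2: match neighbours' heading
        coh_x = coh_y = 0.0   # rule 3: move toward neighbours' centre
        n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(boids):
            if i == j:
                continue
            d = math.hypot(ox - x, oy - y)
            if d < radius:
                n += 1
                ali_x += ovx; ali_y += ovy
                coh_x += ox; coh_y += oy
                if 0 < d < sep_dist:           # too close: push away
                    sep_x -= (ox - x) / d
                    sep_y -= (oy - y) / d
        if n:
            ali_x /= n; ali_y /= n
            coh_x = coh_x / n - x; coh_y = coh_y / n - y
        # weighted sum of the three rules (weights are arbitrary)
        nvx = vx + 0.05 * coh_x + 0.1 * ali_x + 0.5 * sep_x
        nvy = vy + 0.05 * coh_y + 0.1 * ali_y + 0.5 * sep_y
        new.append((x + nvx, y + nvy, nvx, nvy))
    return new
```

Run `step` repeatedly on a list of randomly placed boids and coherent group motion appears, even though no rule mentions a flock – which is the point about emergence.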

In organisations, emergence can happen in two ways.

In a hierarchy, the rules are set by those in charge.

People are given jobs, roles and responsibilities. In most organisations now, they have latitude and discretion in how they do their roles but have rules to follow.

Take the flocking rules, for example, and recast them for a job role. This might say:

  1. Avoid doing the same work as someone else – create your own niche.
  2. Try and make sure what you do is aligned with the vision and mission of the organisation.
  3. Do work that feeds into and works with what others in the organisation are doing.

If a company had a number of people who organised their work in line with these rules, it’s very likely that they would do some very interesting things.

It’s that balance between individuals and the collective that creates the conditions for innovation and creativity to emerge.

It’s also why micromanagement doesn’t work.

We need freedom and control – too much of either results in very simple or chaotic behaviour, neither of which is useful.

The second way in which emergence happens is through markets

Take eBay, for example.

By creating a platform where people can exchange things, eBay created a thriving ecosystem of buyers and sellers.

Products from bicycles to floor mats flow through the system, in bursts of transactions that spill out into the real world – triggering a flow of packages in white vans that then creates emergent behaviour in the flow of traffic.

On a macro-level, the most successful economies are those that let markets form – allowing people to freely exchange goods and services.

We are surrounded by emergence – and what it reminds us is that we cannot control everything.

The best stuff happens when we find the space between simplicity and chaos.

Why we find it hard to get creative ideas accepted

creative-bias.png

Most people feel that their organisation needs to nurture and develop creative ideas – because that’s where innovation and growth come from.

So, why is it such a struggle to actually get new ideas and projects accepted and pushed through the organisational decision-making process?

A paper by Jennifer S. Mueller, Shimul Melwani and Jack A. Goncalo gives us an insight into how creative ideas are seen by others.

Most people, if you ask them, will say they support creativity – it’s a good thing.

A creative idea is different from just doing a job well.

It is novel – there is something new or different about it – and it is useful – it should help us in some way.

And, in general, people feel they would support creativity – either because it’s what they think others would support (a social norm), or because they think of themselves as creative.

The odd thing is that creative ideas also introduce uncertainty.

If we already do things in a certain way, and we’re used to a particular set of accepted ideas and beliefs, we may be biased against creativity without being aware of it.

We can see this in the NIH syndrome – the not-invented-here mindset that pooh-poohs anything that comes from a different team or company.

Even supposedly creative people find it hard to recognise other people’s creative ideas.

We may only accept that an idea is creative once it has been endorsed by someone we trust or when it has reached a critical mass of users and we, rather belatedly, decide to join the party.

In the paper, the authors cite the example of Robert Goddard who worked on rocket propulsion and was ridiculed by his peers.

In the early days of the internet, some people felt that it would simply stop working under the sheer number of connections being created, making communication impossible.

A few years ago, people felt that blockchain would never work – now they are starting to become more aware of its potential, but also of the huge issues that still need to be solved.

Their interest has been sparked, however, not by the idea but by the enormous increase in the value of bitcoin and other crypto currencies.

The problem for organisations, the authors argue, may not be about coming up with creative ideas.

It’s that we automatically organise ourselves into a defensive wall anchored in familiar, traditional or approved ideas.

So, what we need to do is learn how to address the biases we have – and improve the way in which we think about and recognise creative ideas.

Do you know which strategic play is right for you?

competitive-plays.png

It’s conference season in the energy sector – and it’s a good chance to look around and see what companies are doing to position themselves as the industry and markets change around them.

For over 20 years now, we have seen battles between incumbents and innovators.

Innovators come along and try to get market share

The innovators – let’s call them the Red Team – see a market and believe they can do better.

They come along with a cheaper substitute and capture new, low-end customers.

For example, domestic users now have a choice of switching portals that make it much easier to compare offerings from suppliers and switch.

The portals are moving upmarket, targeting increasingly larger users and higher end customers – and compete with a number of other software and portal based systems that address the same market.

They also compete with various framework type structures that try to make it easier for users to decide between options.

This happens at every level of the market – in energy it’s being seen from how we buy energy, to how we use it and check the bills are right, and how we use our assets to make money.

The incumbents are slow to recognise how things are changing

Incumbent companies – the Blue Team – with market share and good profits don’t really want things to change.

They can ignore the innovators, but those that do tend to find that by the time they wake up to the danger it’s too late.

A Blue Team that is on the lookout for this kind of competition has to either acquire the innovator or invest time and resource into creating a competing business that tackles the innovator head on.

In today’s digitalised markets, however, this article from the Harvard Business Review by Larry Downes and Paul Nunes says that things are a little different.

The fight just turned unfair

Downes and Nunes point to the emergence of companies that land with a Big Bang and take market share suddenly and completely, with no warning – let’s call them Green Teams.

The military like this approach – as the old adage goes, if you find yourself in a fair fight, you didn’t plan your mission properly.

The big example here is how smartphones with free maps have upset the market for navigation devices.

The Green Teams, however, don’t operate like a military unit.

Instead, they’re often a group of people working on cool stuff that unexpectedly takes over a completely different industry from the one they’re in.

Products that come out of hackathons and experimental product launches have an effect beyond expectations because they turn out to be cheaper, more inventive and better integrated than the stuff that is out there right now.

Products like Twitter, WhatsApp and WordPress changed the way we communicate and build things on the internet all at once – people signed up in massive numbers very quickly, leaving no time for incumbents to react.

Which team are you on?

In the energy business, the Blue Teams include traditional suppliers who are dealing with a changing energy system that is decarbonising.

Some are getting rid of traditional generation and becoming completely green generators.

The Red Teams include a host of new suppliers and players across the supply chain – from platform developers to technology providers.

The energy industry is notoriously slow to change – and this time around no company jumps out as being a clear Green Team leader, although many are trying to position themselves in this space.

The game goes on – the essence of strategy is knowing which team and play we’re going with.

Is information enough to spur action?

what-results-in-action.png

The domestic sector uses nearly 30% of the total energy used in the UK, and 80% of that is used for space and water heating.

Reducing energy use in this sector would clearly help reduce emissions and help the UK move towards its carbon targets.

Several approaches have been used to do this – from carrying out performance measurements using the Standard Assessment Procedure (SAP) and providing Energy Performance Certificates (EPCs) to dispensing written and face to face advice and information on how to save energy.

There are very few studies on whether any of these approaches actually result in savings.

A study published by the Behavioural Insights Team (BIT) in late 2017, commissioned by NEST and Npower, found that the predicted results from models such as the SAP varied widely from actual performance.

For example, the SAP predicted that savings from loft insulation in a medium home would be £120 and pay back in 2.5 years.

Real-world data showed a saving of £21, raising the payback period to 11 years.

On a day to day basis, however, the way in which people use the controls and settings in their homes has a greater impact on the amount of energy they use.

Does providing advice improve how they use their controls?

Another study, in 2014, found that written information or advice in the home had no impact on the amount of energy used.

In some cases, showing people how to use their thermostats may have increased energy usage as they now used them to increase temperatures and get more comfortable.

There could be a number of reasons for this, including common behavioural problems such as forgetting, ingrained habits and simply not wanting to deal with the effort or hassle of doing something.

The purpose of the NEST and Npower commissioned study was to see if there was a statistically significant saving to be had from using a system like the NEST learning thermostat, which uses sensors and machine learning to optimise the heating schedule.

Once installed, NEST uses occupancy and weather data that is collected over time to figure out when it should turn the heating up to ensure comfort levels are maintained and when it can be reduced without impact.

Four studies – the most rigorous of their kind so far – showed that compared to homes having a programmable timer, thermostat and radiator valves, the NEST system could save 4.5 – 5% of total gas consumption.

Adding in an optional feature that does seasonal savings by tweaking winter use adds another 3.3% to the savings figure, taking the total to nearly 8%.

It can also nudge users – awarding them leaves when they turn down the heating and act in an energy-efficient way.

The thermostat is around £280 installed, with a payback of 6.5 to 11 years.
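The payback arithmetic here is simple division of cost by annual saving. Note that the annual savings implied below (roughly £25–£43) are derived from the £280 cost and the quoted 6.5–11 year range, not figures stated in the study:

```python
def simple_payback(install_cost, annual_saving):
    """Years to recoup an upfront cost from annual savings.

    Ignores discounting, energy price changes and maintenance -
    a deliberately naive measure, as used in the figures quoted.
    """
    return install_cost / annual_saving

# £280 installed; the study's payback range implies these annual savings:
best_case = simple_payback(280, 43)   # roughly 6.5 years
worst_case = simple_payback(280, 25)  # roughly 11 years
```

Even the best case sits well beyond the two-to-three-year horizon most buyers use for a discretionary purchase, which is the business-case problem the next paragraphs describe.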

And this is still where the problem lies.

Even at a relatively low capital cost, the payback is going to be on the order of 10 years.

And that makes it hard to create a simple business case for change – especially for operators of large portfolios that may quickly have to spend hundreds of thousands of pounds to retrofit a few thousand homes.

New homes will probably get systems like NEST fitted as standard and, when we do a major refurb, it will be a small part of the overall cost and easy to justify.

But, in summary, the evidence shows that we get better results when we automate how choices are made rather than if we ask people to change.

How to develop a product someone will buy

value-proposition-canvas.png

The acid test of a product is whether people will buy it.

It makes sense to have a systematic approach to testing our assumptions about how customers will respond – and a new canvas from the folks who developed the Business Model Canvas, called the Value Proposition Canvas, could help.

There are a couple of versions of this floating around, but the type in the picture above is easy to draw and work with.

The Value Proposition Canvas has two main sections – what we do and what we think it does for customers, and our assumptions about the jobs our customers need doing and what that means for them.

Starting from the customer end, if they get their jobs done, we assume that they get certain gains.

These might simply be lower costs or greater sales.

Or they might be operational – better quality – or intangible – greater corporate social responsibility scores or brand recognition.

If that is the case, then why aren’t they doing more about it already?

It’s usually because there are things in the way – pains that stop them from moving forward.

This then leads to a simple matching exercise that we need to do.

When we look at the products and services that we offer, which ones are gain creators and which ones are pain relievers?

If we can match gain creators to the gains that customers want and pain relievers to the pains that they have, then we have a better chance of creating value that a customer will be willing to pay for.

Let’s take an example of a company that does data analysis for customers – providing an Analysis as a Service proposition.

Many companies collect large amounts of data – from sales and product information to energy usage and cost data.

We might assume that if they could use this data more effectively, targeting areas where there are hidden costs or by using it to better target their sales efforts, then they could reduce costs or increase sales.

The problem is that there is more data than can be analysed using tools such as Excel, it takes time, and many organisations can’t spare the skilled people needed to do this.

So, the service provider might see an opportunity to provide trained staff on a consulting basis – perhaps DevOps engineers who can do both development and operations, working closely with managers and existing technical people to extend and develop the tools needed.

If the area we are working in is core to the business, then it could be run as a partnership between the service provider and the company.

If it’s non-core, it could be outsourced.

A simple canvas such as this quickly makes the assumptions we have about the product and customer visible.

The next step is to get out of the building, as Steve Blank says, and talk to potential customers and test our assumptions.

We refine our model based on the conversations we have and iterate until we have something that is market ready.

And that has a better chance of passing our acid test.

How much time do you spend thinking about the future?

now-and-future.png

Things change. How ready is your organisation when they do? Gary Hamel and C.K. Prahalad, two business academics, came up with the idea of core competence in the 1990s – the collection of resources and skills that set you apart from the rest.

Writing in the Harvard Business Review, they asked which companies will survive the changes that come along and argued that it would depend on whose view of the future was driving things.

Will that be us or our competitors?

As managers or leaders in organisations we can usually list who our customers are right now, what capabilities we have and who we are competing against.

Can we say the same about the future?

How will our customer mix change, what capabilities will we have and how will the competition morph and transform?

If we don’t have answers to the questions about the future, or we think the future will be the same as it is today, the writers argue, we can’t expect to stay a market leader.

There are two strategies we reach for when trying to improve performance – cut costs and grow revenue.

The first is getting harder to do. The second even harder.

The challenge organisations have in the West, particularly in the UK and US, is that they are very good at taking out costs.

For 30 years, they have cut headcount and improved the accounting performance of organisations, so now there is very little that can be saved from operating costs.

Yet, managers spend 97% of their time thinking about the urgent things they have to deal with, and a tiny amount of time building a collective view about the future.

If we want to grow market share, we have to take business away from the competition or create new markets.

The hardest thing in business is to overcome inertia – the reluctance of customers to move from an existing, tried and tested solution.

At the same time, customers are eager for new thinking and capability that will help them get ahead.

The primary goal for many organisations, therefore, should be not just to improve productivity and quality but to create new products and businesses.

Thinking in 10 year phases may be a good idea – 2 to learn about the new way and 8 to execute and build a customer base.

That way, we will always be able to do something relevant when the future arrives.

How to optimise only the things that matter

bottlenecks.png

Much of what we do can be described in the form of a process flow – and we often assume that we can improve performance by improving individual parts of the process.

To improve traffic flow, for example, we could have all cars drive at the same speed – surely that will help?

That doesn’t turn out to be the case.

We can see this effect when something happens on the motorway that causes a lane to be shut.

It doesn’t matter how well everyone drives individually.

The flow rate of vehicles is set by the capacity of the lanes available and so, when we lose one, everyone slows down as the same number of vehicles now has to pass through fewer lanes.

Eliyahu M. Goldratt, in his books The Goal and Theory of Constraints, sets out how the throughput of a process is determined by a single constraint, or bottleneck.

To improve the throughput – the number of things coming out of the process – we need to figure out where the bottleneck is and what we need to do to improve its performance.

It’s a waste of time spending effort optimising any other part of the process, because the performance of the system overall will still be set by the bottleneck.

Goldratt sets out a five-step process for dealing with constraints. In adapted form, these suggest we should:

  1. Figure out where they are.
  2. Decide what to do about them.
  3. Decide how everything else works based on the decisions in steps 1 and 2.
  4. If, in doing all this, the constraint is no longer the limiting one, then go after the next one.
  5. A warning – we need to keep repeating this, as the limiting constraint will move around.

We can often spot where constraints are because piles of work-in-progress (WIP) build up in front of them.

The same thing applies with knowledge work.

A person can be a bottleneck if the work they do is slower than the rest of the work carried out by others, and so they become the limiting factor in the operation.

Aligning how we work with bottlenecks has a number of benefits:

  1. We know that throughput is set at the capacity of the bottleneck. To increase output, we need to work on the bottleneck.
  2. This means that we can minimise inventory to the level required by the bottleneck. Running any other part of the operation faster simply ties up money in stock waiting to be processed.
  3. We can also reduce operating expenses because we don’t need more people in areas that don’t directly contribute to the bottleneck activity.
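The first benefit above is just the arithmetic of serial flow, which a few lines make concrete – the stage rates here are made-up numbers for illustration:

```python
def throughput(stage_rates):
    """Steady-state output of a serial process: the slowest stage sets the pace."""
    return min(stage_rates)

def constraint(stage_rates):
    """Index of the bottleneck - the only stage worth optimising first."""
    return stage_rates.index(min(stage_rates))

rates = [30, 12, 25]  # units/hour for three stages in series (invented figures)
# Doubling stage 0 or stage 2 changes nothing: output stays at 12/hour.
# Raising stage 1 from 12 to 20 lifts the whole line to 20/hour - and
# moves the constraint to stage 2, which is why Goldratt's step 5 says
# to keep repeating the process.
```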

In summary – when we try and optimise an activity we often try and speed up parts of the system.

What we need to do instead is improve flow through the system.

And that starts by focusing on bottlenecks.

When should you get really interested in new technology?

technology-readiness-levels.png

The pace at which technology is developing appears to be speeding up all around us – so what should we do in such an environment?

Technology developers and evangelists can have any number of brilliant ideas and solutions, but as buyers and users that may have to use the technology for a number of years, we need to be careful.

The Gartner hype cycle is a popular way of looking at different technologies and says that they tend to pass through five phases:

  1. A new technology is created.
  2. We expect too much from it.
  3. We’re disappointed when it doesn’t meet those expectations.
  4. We learn and change and figure out how to use it properly.
  5. It helps us be more productive up to a point, and then the effects level off.

It’s a nice graph, but there is little evidence that it actually works or has any science behind it.

It’s more a picture of how industry insiders collectively think about technologies at a point in time than an accurate reflection of the journey technologies take from creation to mass adoption.

A more useful indicator, used widely in academia, is the idea of Technology Readiness Levels or TRLs.

TRLs originated in the aviation industry, where pre-flight checks on critical systems are standard practice – a process called flight readiness reviews.

Having a checklist to go through and check and double check critical elements is a major contributor to flight safety.

NASA took this one step further, and came up with the idea of checking whether technologies were ready to start being used in programmes – a technology readiness review.

This led to the idea of readiness levels – and a formal version of these is used by many organisations.

For those of us that need to make a decision about a specific technology option, the picture above shows an adapted version of the TRL framework that may be useful.

In essence, at the low end of the scale a technology progresses from basic principles (level 1) to a model that works in the laboratory (level 4).

At the other end, at 9, we have systems that are proven, work in the field and are probably widely deployed – but that are also mature and perhaps in need of changing.

The interesting stuff is happening between 5 and 8, and this is where we should focus our attention.

Take blockchain technologies, for instance.

The idea of the blockchain and demonstrators with code have been around for a while.

Bitcoin, arguably the first proper prototype that began operating on the web in anger, has been around for nearly a decade.

We are now in a situation where there are a number of prototypes being operated – we can create apps on ethereum now and test them out – but there are still challenges that need to be solved around scaling and power usage.

So, we might score blockchain a 7, with strong potential to go on to 8.

That might suggest that a good time to get involved is right now.

How to analyse the future

analysing-the-future.png

Thinking about the future is not easy.

As humans we fall prey to biases, and two in particular are important.

The first is hindsight bias where, looking back, we think that things that have happened were far more inevitable than they actually were.

For example, a Trump victory seems like it was pre-ordained now – Hillary never stood a chance against the Twitter machine.

At the time, however, not many around the world seriously thought Trump would win.

The second is foresight bias – we believe some things are more likely to happen than others and so bet on them more heavily.

We need tools and methods to guard against these biases and reason about the future more effectively – and the military and intelligence establishments are a good source of information on these.

For example, this guide sets out a detailed approach to counterfactual reasoning, one of the tools every analyst should be able to use.

When we think about the future we often do one of two things.

1. We look at trends

We see trends and infer outcomes that result from those trends – a technique called forecasting.

For example, we might see a trend towards decentralised currencies with bitcoin or a trend towards widescale adoption of solar photovoltaic and distributed generation.

We forecast an outcome based on these trends – the end of traditional banking or energy firms.

2. We create possible futures

We do futuring when we look at drivers and come up with possible scenarios that might result.

For example, the widespread use of mobile phones will make desktop or offline services less relevant for things like getting media, checking mail and reading the news.

Counterfactual reasoning

Counterfactual means counter to the facts, and we reason that way by asking questions like “What if” or “If we”.

We can look at a problem in terms of antecedents and consequences – what comes before and after a fact.

Approaching a problem in this way has two benefits – it helps us explore cause and effect, and it lets us be more creative.

For example, take a statement like “the fall in the price of solar panels means that we will have widespread adoption in residential neighbourhoods”.

That seems like a perfectly reasonable statement – but what happens if we break it down?

Should we start a solar panel sales business right now?

The before bit is a fall in the price of solar panels – which we see happening right now.

Cheap solar panels clearly lead to cheaper costs for the equipment.

But, does that alone justify the conclusion about what comes after – widespread adoption in residential neighbourhoods?

It does not – because we haven’t looked at the components in detail.

First, we need to examine why prices are low. Is it because the technology is getting better and cheaper, or is it because massive capacity increases in China are resulting in panels being dumped on the world market?

Then we need to think about the in-between – what may happen if what we predict takes place.

Low prices for panels don’t get around other problems – such as connection constraints in neighbourhoods, other installation costs such as scaffolding, and the possibility that high demand for installations, coupled with low numbers of qualified tradespeople after Brexit, may bump up overall costs.

Then there is the after – new homes are very likely to have panels fitted – they can be designed in.

But will there be a rush by homeowners to retrofit panels or will they be put off by the up front cost and possible impact on sale prices?

If existing homes are slow to change, the overall rate of change will be slow – housing stock stays in place for decades, so replacing everything with new, energy-efficient housing could take a century.

Summary

We can jump very quickly from what we see now to what we think will happen in the future.

The purpose of using analytic methods in a structured way is to help slow us down and examine the situation in more detail, coming to a more considered view on what may happen.

The conclusions we come to as a result may help us make better decisions.