This guest blog has been written by Mark Weston, Partner and Head of Information Technology, Intellectual Property and Commercial at Hill Dickinson

One of the most pressing issues in technology law today is artificial intelligence and machine learning. The applications of this technology are multifarious, with the potential to touch more aspects of our lives than we may have previously thought possible. However, as the technology grows and develops, there is an elephant in the room: ‘electro-ethics’.

Electro-ethics is the intersection of technology, moral philosophy and the law. To enable machines to make sophisticated decisions and complete complex tasks, software developers need to program a set of rules that will underpin the decisions a machine makes in any situation. It is impossible to program for every situation individually, so the rules need to be crystal clear.
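To make that concrete, here is a minimal sketch of what a rule-based approach might look like. Every name in it is an illustrative assumption rather than any manufacturer’s actual code; the point is only that the rules must be written as general tests in advance, not enumerated scenario by scenario:

```python
# A hypothetical sketch of rule-based decision making. The names and
# structure are assumptions for illustration only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Situation:
    """Whatever the vehicle's sensors report about the world."""
    description: str

@dataclass
class Rule:
    name: str
    applies: Callable[[Situation], bool]  # a general test, usable in ANY situation
    action: str

def decide(rules: List[Rule], situation: Situation) -> str:
    # Rules are consulted in priority order; the first that applies wins.
    # There is no case-by-case branch for each conceivable scenario.
    for rule in rules:
        if rule.applies(situation):
            return rule.action
    return "maintain course and speed"
```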

For the most part, the decisions machines make will be benign and straightforward. For example, a self-driving or autonomous vehicle will use its programming to avoid collisions with other cars and obstacles. Occasionally, however, an autonomous car will face what we may term an ‘extreme situation’, where whatever action it chooses risks a loss of life. It is in these scenarios that the ethical underpinning of the programming will be thrust under the spotlight.

To give an example, imagine a self-driving car travelling at 30mph in a 30mph zone. On the left-hand side of the road is a concrete wall; on the right-hand side, an elderly man waits at a bus stop. Two teenage girls step into the road in front of the car, close enough that it cannot come to a stop before hitting them. The car faces a choice between three courses of action:

  A. It could brake as hard as possible before colliding with the two girls, risking their deaths.
  B. It could turn sharply left, avoiding the girls but colliding with the wall and risking the death of its occupant.
  C. It could turn right, colliding with the elderly man and risking his death.

What should it do?

This is, in effect, a moral philosophy problem, and the car will make its choice according to the rules its software developers have programmed into it. The rub, however, is that people in different parts of the world would choose different options in the above scenario.

In Western liberal democracies, the prevailing view is likely to favour option C and risk the death of the elderly man – a classically utilitarian response.

In parts of the Middle East, however, the prevailing moral philosophy forbids any positive action that would take a life. Residents there would therefore expect the car to take option A and risk the deaths of the two girls: steering into the wall or the elderly man would be a positive act of taking a life, whereas continuing on course while braking as hard as possible is not, since the girls stepped into the road of their own volition.
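The divergence can be sketched in a few lines of illustrative code. The figures below are invented assumptions – weighting lives by remaining life-years is itself an ethically loaded choice – and the two functions stand in for the two prevailing views described above:

```python
# A hypothetical sketch of how two ethical rule sets might rank the
# options A-C above. All figures are illustrative assumptions only.

from dataclasses import dataclass
from typing import List

@dataclass
class Option:
    label: str
    life_years_at_risk: float  # crude utilitarian weight (assumed figures)
    positive_action: bool      # requires steering towards a person or obstacle

options: List[Option] = [
    Option("A", life_years_at_risk=130.0, positive_action=False),  # the two teenage girls
    Option("B", life_years_at_risk=45.0,  positive_action=True),   # the occupant
    Option("C", life_years_at_risk=10.0,  positive_action=True),   # the elderly man
]

def utilitarian_choice(opts: List[Option]) -> Option:
    # Minimise expected harm, regardless of how the harm comes about.
    return min(opts, key=lambda o: o.life_years_at_risk)

def no_positive_action_choice(opts: List[Option]) -> Option:
    # Forbid any positive act that risks taking a life; among what
    # remains, still minimise harm (i.e. brake on the current course).
    permitted = [o for o in opts if not o.positive_action]
    return min(permitted, key=lambda o: o.life_years_at_risk)

print(utilitarian_choice(options).label)         # -> C
print(no_positive_action_choice(options).label)  # -> A
```

The same situation, two defensible rule sets, two different people put at risk.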

How are car manufacturers to solve this dilemma?

One approach may be to install different rules in cars shipped to different markets, or to include a mechanism for switching between ethical rule sets. However, the import and export market complicates the picture – what happens if a manufacturer programs a car for the UK market and the car is later exported to the Middle East?
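To sketch why that is harder than it sounds, imagine the ethical profile being fixed at the point of sale, keyed to a market code. Everything below is invented for illustration:

```python
# Hypothetical sketch of per-market rule selection, and why resale
# abroad breaks it. Market codes and profile names are invented.

RULE_SETS = {
    "UK": "utilitarian",         # favour minimising total harm
    "AE": "no_positive_action",  # forbid positive acts that take a life
}

class Vehicle:
    def __init__(self, market_of_sale: str):
        # Fixed at the factory for the market the car ships to...
        self.ethics_profile = RULE_SETS[market_of_sale]

car = Vehicle(market_of_sale="UK")
print(car.ethics_profile)  # -> utilitarian

# ...but nothing above changes when the car is later exported: a
# UK-programmed car on Middle Eastern roads still carries the
# utilitarian profile unless someone (but who?) is obliged to update it.
```

Even a switchable profile only moves the problem: who is permitted, or obliged, to flip the switch when the car changes hands?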

This illustrates the second problem linked to the car’s decision: liability.

In our example, it is very likely that someone will die as a result of the collision. The victim’s family might wish to sue the person responsible, but where would the liability reside?

There are reasonable arguments for liability resting with the car manufacturer, the software developer, the occupant of the car, or the owner, who may not have been present or may not have kept the software up to date – or even with the person who installed the most recent software update. There is no clarity yet.

Even if such events are as rare as one in a million per driver per year, consider that there are over 60 million people in the UK, of whom we may estimate half will drive – some 30 million drivers. There could therefore still be around 30 such events every year.
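The back-of-the-envelope arithmetic, treating the one-in-a-million figure as a per-driver, per-year assumption:

```python
population = 60_000_000                # "over 60 million people in the UK"
drivers = population // 2              # assume half of them drive
rate_per_driver_year = 1 / 1_000_000   # assumed extreme-event rate
print(drivers * rate_per_driver_year)  # -> 30.0 events per year
```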

To date, there has been no guidance from the world leaders in technology on how they plan to meet these challenges. I believe that, just as large global organisations meet to agree universal standards, large technology companies must work together to agree global standards for the rules programmed into this software. They must agree either on the philosophical underpinning for these rules, or on how to switch between them.

This would be an essential first step towards legislators determining liability in extreme events, to the relief of litigators, insurance practitioners and others. Uncertainty benefits no one.