As AI moves into the real world, it is critical to understand the legal implications of this new world order, says Mark Pearce

Conversations surrounding artificial intelligence (AI) have permeated every industry. How can this new technology be implemented? What new regulation can we expect to see? How could it increase productivity and efficiency within business, and how will it impact the future of work?

The blame game
A key issue with AI will be allocating responsibility when something goes wrong. Where will responsibility lie when an autonomously driven tractor leaves the field and causes damage or injury to adjoining property, nearby onlookers or animals? Who will we blame when an AI robot responsible for clearly displaying allergens on a product mislabels an item, causing a consumer to suffer an allergic reaction? Because a chain of different participants is involved, careful risk allocation is needed.

Blame could be assigned to the software developer that created the AI, to the hardware manufacturer for faulty assembly, or to the service provider interfacing with the client. Equally, fault may lie with the end user – the farmer or manufacturing plant that input information incorrectly, leading to the mistake. Finding the root cause of any incident is a difficult business.

This leads to a further consideration: to what extent can responsibility be properly allocated in agreements along the links in the chain? To some extent we expect risk to be apportioned by statute, with legal agreements filling the gaps. Once responsibility for different kinds of incident is allocated, I expect to see a rise in tailored insurance policies designed specifically to cover AI, which will likely produce a vastly different insurance landscape in the future. Moving from the current model of ‘detect and repair’ to ‘predict and prevent’ will cause disruptive change in the insurance market, and providers will need to position themselves to understand, embrace and respond to this rapid pace of change.

Your data or mine?
On the flip side, it remains open to debate who will reap the rewards when AI performs as designed. For example, machine learning used to manage a crop will improve over time as it acquires knowledge through use. New processes, algorithms and data will develop. Who will own these assets? They could belong to the system developer, allowing it to enhance its offering and sell improved products and services to other customers. Alternatively, control could rest with the end user, who could exploit this information exclusively for competitive advantage and gain greater freedom to switch to a new provider.

These specifics can be controlled by negotiating contracts much as we do now, building in these additional considerations. The main difference is that in most contractual discussions the parties know what is considered standard in their industry. No accepted approach has yet developed for this technology, and providers of AI are likely to put forward widely differing terms and conditions. We are still some distance from a settled market position, with some customers securing ownership of their rights and others finding they must simply take what the supplier offers.

Setting the standard
Some are calling for international co-operation to establish consistent norms for AI. A European Parliament analysis of robotics called for specific legislation and oversight for robots and artificial intelligence. The issue has also reached the agenda of the G7, and we can expect to see increasing efforts to agree an international approach.

The European Commission has followed this with a strategy paper showing how some existing legal structures can be adapted to support and frame AI development. Existing laws on product safety and liability, cybersecurity and data protection go some way towards providing a framework within which AI can operate. These areas of law will be assessed and developed to keep pace with developments in AI, as well as in the internet of things (IoT) and robotics.

I expect that an international consensus will develop, but perhaps not as soon as some may think. Ultimately these norms will need to be solidified so that creators and users of the technology can benefit. What is clear is that the issues are international: it is easy to imagine food produced in a UK plant and exported to Europe, having been made with robotic equipment manufactured in China and running American software.

Altogether, AI is an exciting technology with seemingly limitless implications and benefits. It is set to challenge the ways in which we work, reinvent tasks that have remained largely untouched for years and drive our society forward. It will touch every industry, not least the food and agriculture sector. By forcing us to think differently about the future of work, AI will prompt us to think differently about the future of law.

Mark Pearce is a Partner at Mills & Reeve, a national law firm with more than 500 lawyers and 900 staff across six UK offices. It is one of the most successful law firms in the UK as a result of its highly commercial approach, outstanding client service, the quality of its people and its collaborative culture. It is also the only firm to have been named in the Sunday Times 100 Best Companies to Work For list for 15 years running, and 97 per cent of its staff would recommend Mills & Reeve as a good place to work.
www.mills-reeve.com