
Towards robot governance? A short tour of current conversations on artificial intelligence

The development and use of artificial intelligence (AI) continues its inexorable momentum, and investment in AI is skyrocketing. With a few exceptions, however, how AI relates to the day-to-day work of in-house lawyers remains somewhat distant and confusing. This post aims simply to provide some food for thought on future governance in an area that can seem at once extremely exciting and utterly overwhelming.

Demystifying AI

The rapid rise of AI is being driven by ever-expanding big data sets, which provide the fuel for machine processing, machine learning, machine perception and machine control, all of which are converging to create increasingly sophisticated AI capabilities. This is enabled by the rapid move from on-premises computing to the cloud, which supports ever bigger data analytics.

Practical Law has recently published two videos, Demystifying artificial intelligence and Legal aspects of artificial intelligence, by Richard Kemp of Kemp IT Law which provide essential grounding and introduce some of the key legal considerations. Richard has also recently developed the practice note, Demystifying artificial intelligence (AI).

Opportunity and threat

Recent controversies have brought into sharp focus the dystopian dangers of AI and the need for coherent ethical frameworks, at state, industry and corporate levels, to step in where the law leaves gaps. Google has had to face up to its own employees, who protested against the company’s involvement in a Pentagon program (Project Maven) that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Academics have voiced serious concern over the Korea Advanced Institute of Science and Technology’s (KAIST) collaboration with a defence company to develop autonomous weapons.

In a less sensational way, many are naturally uncomfortable with the direction of travel. The perception that robots will take all our jobs is unlikely to go away. The role of AI in the legal sector, for instance, grows by the day.

But the benefits AI is already bringing are not always properly made out. Dull, repetitive tasks such as discovery, and now contract review, can be performed by machines under human instruction and, as a result, legal work as a whole is becoming more interesting and enjoyable. There are also natural controls: humans get satisfaction from working with other humans and from having overall dominion over their technology. HR strategy will need to accommodate this, as getting the best out of people will remain crucial.

Many lawyers in particular are optimistic: a recent collaborative report by Linklaters and Microsoft depicted lawyers as “agents of change”, uniquely positioned to create clarity, build relationships and deliver success in an era fraught with highly complex legal and moral questions brought about by rapid digital transformation.

The legal framework: fit for purpose?

So what of the law we do have, and to what extent is it ready for AI? It’s a big question, impossible to answer in a short blog post, but perhaps in key areas the law doesn’t fall too far short. Data protection has, of course, had its overhaul in the form of the GDPR, now the de facto global benchmark, which at least anticipates the new forms of digital technology and provides a much-needed privacy framework.

Last year, the ICO published Big data, artificial intelligence, machine learning and data protection, a vital paper which draws out, in particular, some of the key compliance mechanisms available to lawyers and their clients as they get to grips with the privacy risks of AI: using anonymised data; using privacy notices; using data protection impact assessments; implementing privacy by design; developing auditability in relation to machine learning algorithms to ensure trust and transparency; and developing ethical principles that reinforce key privacy principles.

For a discussion on this blog of current and forthcoming AI-related privacy challenges, see Where are we now on artificial intelligence?.

AI also raises substantial IP questions. Who owns robot-created content? What rights attach to the input and output data of AI development, and can you use and commercially exploit what you have created? A recent article in PLC Magazine, Artificial intelligence: navigating the IP challenges, considers these questions and provides some guidance on best practice.

The common law torts of negligence and nuisance provide something of a bedrock for developments in product liability, and the way concepts such as negligence carry over into other areas will be interesting to follow. Commentators such as Richard Kemp observe, for example, that the “technical and organisational measures” required by the GDPR in relation to cyber security are effectively becoming the expected standard of care for the security aspects of AI products. The EU has already started a review of the Product Liability Directive (85/374/EEC) to ensure its fitness for purpose for foreseeable AI use cases.

Lacunae are nonetheless inevitable, but wholesale regulation seems impracticable. The fluid and uncertain path of AI development requires the law to respond in piecemeal fashion, regulating discrete areas in a way that doesn’t crowd out welcome innovation. There are good signs that the UK is getting this balance right in relation to driverless cars, for instance (see the video, Driverless cars update: summer 2018). MiFID II provides an example of the EU responding to discrete AI challenges in relation to the use of trading algorithms. Fintech is a particularly fast-moving sector with some good regulation, but it faces challenges such as a lack of harmonisation (see the video, FinTech series: artificial intelligence).

One of the most contentious issues is whether autonomous robots should be granted a form of legal personhood, justified by some on the basis that liability for damage caused by autonomous systems may be impossible to prove against, and so to pin appropriately on, humans. This has been debated fiercely at EU level, with the Commission recently coming out against the idea in the outline strategy it published in April of this year. The plan to assign legal personhood to robots had been strongly opposed by AI experts, with the danger of taking responsibility and accountability away from humans a foremost concern.

International outlook, AI ethics and corporate leadership

The US currently has twin “FUTURE of AI” bills going through the two houses of Congress, which would establish a Federal Advisory Committee to advise the Secretary of Commerce on maximising AI’s capabilities. Of particular note on the regulatory side, this includes ethics training for technologists, the development of accountability and mechanisms for ensuring the protection of individual rights, and dealing with the risk of machine learning biases leading to “harmful outcomes”.

The EU Parliament and EU Commission have been at the forefront too, batting around embryonic Civil Law Rules on Robotics covering such matters as compulsory robot registration, the imperative of systems interoperability and mandatory robot insurance, as well as proposing voluntary codes for robotics engineers and for AI research ethics committees.

In Japan, recognising that much of their development work is essentially unregulated, or at least operating in legal grey areas, technologists within the Japanese Society for Artificial Intelligence formed an ethics committee in 2014 and published Ethical Guidelines in 2017 (covering legal compliance, privacy, fairness, security, integrity, accountability and transparency), intended to provide an initial framework and promote deeper discussion.

Globally recognised names in AI such as MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, Elon Musk and the late cosmologist Stephen Hawking were behind the formation of the Future of Life Institute, which has offered its own Asilomar AI principles with ethics and values at their core.

Unsurprisingly, major tech players are also keen to be seen as leaders in the development of AI ethics. Google, in the wake of its employees’ protests over Project Maven, published its AI principles in June. Microsoft, too, is raising its profile in the discussion of the intersection of AI, law and ethics.

Looking ahead

We are beginning to see the emergence, from a number of sources, of legal and ethical frameworks for AI. The great trick will be to keep these various sources talking to each other, harmonising as far as possible to provide certainty to businesses, workers, consumers and society generally. The structures also need as much in-built flexibility as possible, both to allow for the uncertain path ahead as data and software interact in ever more sophisticated ways and to allow discrete issues that present danger to society to be addressed robustly. Furthermore, good innovation must not be stifled by overly prescriptive regulation.

Some, particularly in tech companies, will already be watching this space closely, but in-house lawyers across the board will need to pay increasing attention and think about how to develop their clients’ corporate governance systems to adapt to the radical changes to come.

Rob Beardmore
