3 March 2022
Brave new world
By Paul Branch
Whilst attempting to look away from the horrors unfolding to the East, let’s instead try to put a brave face on it by contemplating the future, assuming of course we have one. Picture the scenario: you’re riding along in your automobile, your robot beside you at the wheel, in your driverless car (aka the autonomous vehicle) perhaps humming to yourself an old Chuck Berry number on your way to visit your aged mother. You’re flicking through your tablet trying to wade past all the fulsome reports of last night’s triumph by West Ham in winning the Final of the European Champions League, thereby retaining their title for a ninth successive year. Fanciful I know, but bear with me (it could have been Leyton Orient).
Suddenly the car swerves from its designated lane, away from a pigeon and off the highway onto the adjacent footpath, heading straight for a troop of Girl Guides marching towards you with looks of abject horror as you get ever closer. The car stops dead. Ashen-faced you look to check that the girls are still in one piece, and the robot informs you that there’s been an anomaly in the system: please wait 30 seconds, reboot the software and proceed to your intended destination. Still shaken and with a rueful wave to the Guides, you and the robot do just that and soon arrive at your mother’s house. She is getting ready to celebrate receiving a congratulatory centenary telegram from His Majesty King George VII; she is in reasonable health for her advanced years and, thanks to modern medicine, is still able to live in her own house, alone save for her full-time live-in Japanese robot carer. You announce your arrival at her front door via the automated data connection and voice device integral to your wristwatch. No answer. Try again … maybe she’s in the loo? But the robot should be able to handle that, as well as opening the door, surely?
You activate the emergency override facility on your watch, the door opens and there you see your mother collapsed on the floor, her robotic carer bending over her taking critical life sign readings, administering CPR and speaking remotely to the emergency services referencing an unfortunate medication overdose incident. She still lives … phew! Having breathed your second sigh of relief that morning, your next thought is: Who could you have sued had your mother not survived? And who would the parents of the Girl Guides have sued had your autonomous vehicle not stopped just in time?
With Mark Zuckerberg unveiling several ambitious artificial intelligence projects where, in his view, AI is the key to unlocking the “Metaverse”, our world is changing rapidly, as is our role in it. Zuckerberg has already created a basic virtual world using the AI feature “Builder Bot”, and has announced his intention to develop a universal speech translator, a “superpower” that has apparently been dreamt of since the beginning of time. He seems unaware that about 2000 years ago the first “app” to facilitate speaking in tongues was launched to help promote the Feast of Pentecost, but his plans do demonstrate where we’re headed with AI. The global market for artificial intelligence applications is forecast to grow from $27 billion in 2019 to over $250 billion by 2026, and to some in the field even that is a conservative estimate, with growth effectively being held back by uncertainty over how commercial and civil law will cater for what happens when AI goes wrong.
The current methods of allocating liability are being reviewed, especially in the US, EU and UK, so as to achieve reforms which cater for AI. Misallocated liability can actually impede innovative and useful applications of AI: if developers are concerned that certain industry sectors put more liability on the shoulders of the designer, those sectors will be less attractive to work in and thus lag behind. Conversely, early adopters of AI will be less attracted to applications which carry liability risk without compensation for the end user. The path being taken to reform liability provisions seems to be one which rebalances liability among the various parties – between the end users (such as drivers, health care providers and clinicians) and the more upstream players (designers and manufacturers).
According to some clever professors at Harvard, this would involve revising standards of care between, say, a health worker who was previously the sole deliverer of medication to a patient, and the AI robot which becomes the regular medication provider subject to “human” oversight and care. Once the parameters and standards are defined for the integration of AI within a specific health care application, safety aspects become regulated and issues can be more easily resolved with reference to these new standards.
Insurance offers another opportunity for rebalancing liability. In the same way that car insurers today adjust premiums according to the class of vehicle and the driver’s track record, future insurance cover for vehicles with integrated AI could revolve around the tested effectiveness and reliability of the vehicle guidance software and the particular algorithms adopted. In the example above, maybe swerving off-piste is allowed only in order to avoid a child-like object in the road, as opposed to a dumb bird which would probably fly off anyway.
But ultimately who is responsible when we trust AI to take the place of, or even overrule, a human? Traditional liability and default rules assume that accidents are caused by humans, such as through a badly programmed piece of AI software or the use of an inappropriate algorithm, and you can’t sue a robot as it doesn’t have a legal identity. But really clever AI will start to accumulate data and extrapolate subsequent commands or actions way beyond its initial capabilities, such that new decisions can be made for which it was not specifically programmed. In this instance it could be argued that AI should be given its own legal personality, as is the case with companies and corporate entities today, reinforced by the notion that, if an AI system demonstrates a process of rationality by making independent decisions, then it should be held liable when the result of those decisions is beyond reasonable expectations. However, would the AI really have full legal capacity in a human-like, free-thinking sense if it functions strictly within the limits of what its coding parameters allow? Such are the thorny issues being debated in the rarefied atmosphere of superior legal minds.
There’s little doubt that the benefits of artificial intelligence will become part of an accepted way of life in the future, just as access to limitless communications and electronic social interaction is today. There will be drawbacks and hindrances to the rate of AI development and integration, of which making provision for the myriad ways of assessing liability in the event of error and malfunction is a major concern. But we do have a voluminous textbook of lessons learnt to help guide us, drawn from all the unexpected issues that have arisen with the internet, Facebook, Twitter and the like. So no excuses then for Mr Zuckerberg, and no reason for his brave new artificially intelligent virtual world to repeat the same old mistakes.