The effects of artificial intelligence (AI) are contested, and the EU is introducing a regulatory framework that sets legal standards for the technology's application. This will have implications well beyond the EU's borders.
Like most things in life, the benefits do not come without risk. While the advantages of AI in everyday life are tangible (from a Spotify recommendation to advanced disease mapping), there is also the potential for harm.
Stephen Hawking prophesied: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation.” In a similar vein, Elon Musk opined last year, “Mark my words, AI is far more dangerous than nukes”, a warning made all the more poignant by the recent deaths of two men in a Tesla. Preliminary investigations revealed that “no one was driving the car”.
Regulators are now taking action to rein in the potentially damaging effects of AI.
Liability and the EU regulatory framework
The European Union is examining issues surrounding liability and has published its draft regulation on AI. The regulation will have far-reaching geographical application, extending to non-EU organisations that supply AI systems into the EU. It takes a risk-based approach, placing all AI systems into one of four levels:
- Unacceptable risk and therefore prohibited. This bans, for example, the use of AI that deploys subliminal techniques.
- High-risk AI systems (HRAIS). The regulation centres on HRAIS, imposing a raft of new mandatory requirements. Biometric identification is a particular focus, as are systems relating to critical infrastructure (e.g. water supply), safety components (e.g. robotic surgery), and other applications that require significant risk management.
- Limited-risk AI. These systems are subject to enhanced transparency obligations: providers of AI that interacts with humans, such as chatbots, must ensure that individuals are aware they are dealing with an AI system.
- Minimal risk. Providers may adopt voluntary codes of conduct.
The draft European Commission framework is proceeding through the legislative process and is unlikely to become binding for up to two years, potentially followed by a grace period of a further two years.
Of particular note, the draft includes fines of up to 6% of global annual turnover for non-compliance with the prohibited-AI provisions and with the data and governance requirements for HRAIS. Fines of up to 4% are proposed for other breaches.
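To put those percentages in context, consider a worked example using a purely hypothetical turnover figure: a business with global annual turnover of €500 million that deployed a prohibited AI system could face a fine of up to €30 million (6% of €500 million), while a breach attracting the 4% tier would be capped at €20 million.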
The insurance implications
It is critical that all businesses understand the legal risks to their organisation. The proposal requires providers and users of high-risk AI systems to comply with rules on various aspects of data and data governance, documentation and record keeping, transparency and provision of information, robustness, accuracy and security, as well as human oversight.
If implemented, the rules will also require an assessment that HRAIS meet these standards before they can be offered on the market or put into service. Further, the rules mandate a post-market monitoring system to detect and mitigate problems.
The regulation brings with it possible new legal liabilities, including:
- Increased exposure for failing to safeguard data in accordance with the new AI regulatory requirements referred to above (assuming they are implemented).
- The potential for poorly designed machine learning to operate unethically and/or breach anti-discrimination laws, for example by acting in contravention of the EU Charter of Fundamental Rights. Lemonade Inc., an internet-based insurer, recently came under fire for using AI to “pick up non-verbal cues that traditional insurers can't” when analysing videos submitted as part of the business's claims procedure.
- Damage caused by malfunctioning AI (technological damage), but also by flawed AI decisions based on machine-learning principles. Consider the legal liability where a system operates independently of its operator and its designers could not have anticipated a particular outcome; essentially, where there is no human to blame.
- Additional risk of extortion. Nation states in particular will be keen to get their hands on attractive AI technologies and data sets.
AI-based cyber-attacks will also become part of the cyber criminals' arsenal, upping the ante for all businesses.
Simply put, in addition to possible severe financial penalties from regulators, AI significantly expands liability exposure and has the potential to inflict devastating damage on a business's reputation and commercial standing.
It is likely that certain insurances will see a rise in popularity. Market-leading cyber policies, for example, cover both regulatory issues and third-party liability arising from privacy breaches. As outlined above, the AI threat creates regulatory liabilities far beyond the scope of the privacy breach regulations that have been the main focus of businesses to date.
Combined with the recent torrent of ransomware incidents, we may see more businesses accepting that a cyber policy is no longer a discretionary spend. For those businesses using AI that already have cyber insurance, a recalculation of coverage limits may be necessary.
For further information, please contact:
Vanessa Cathie, Vice President Global Cyber & Technology
T: +44 (0)20.7933.2478
E: Vanessa.Cathie@uk.lockton.com