As more companies implement Artificial Intelligence (AI) as a core part of their digital transformation, boards are required to focus on the risks related to AI's use of personal data and the potential bias and unpredictability in its output.

AI has been around for a while; however, over the past few years its development has accelerated remarkably as a result of a combination of advances in computer hardware, distributed processing, programming methods and availability of data. 

Simply put, AI is a collection of technologies that combine data, algorithms and computing power.

“Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).”

The European Commission (EC), 2018

AI has become a strategic priority for governments and private industry. Alongside the excitement, the risks related to AI have been high on the agenda at government level.

Following on from that, in May 2019 the member countries of the Organisation for Economic Co-operation and Development (OECD) approved the OECD Council Recommendation on Artificial Intelligence. The document includes principles which promote AI that is innovative and trustworthy and that respects human rights and democratic values. These are the first principles in the field of AI signed by OECD members as well as by a number of non-member countries.

These recommendations are not legally binding but rather act as an international standard and provide a strong indication of upcoming national legislation in this field.

This shifting regulatory landscape in the field of AI will require boards to stay on top of developments in the relevant jurisdictions of interest. It is important to note that regulation will not only apply to businesses developing AI technologies but also to businesses adopting such technologies. It is therefore to be expected that compliance with the relevant AI regulatory framework will increasingly become an integral part of any business strategy which includes AI.

On 19 February 2020, the EC, as part of the launch of its digital strategy for the next five years, published a “White Paper on Artificial Intelligence: a European approach to excellence and trust”.  

Further, the High-Level Expert Group on AI (HLEG), appointed by the EC to support the implementation of its AI strategy, highlighted seven key requirements for trustworthy AI:

  • Human agency and oversight; 
  • Technical robustness and safety;
  • Privacy and data governance; 
  • Transparency; 
  • Diversity, non-discrimination and fairness; 
  • Environmental and societal well-being; and 
  • Accountability.

It is clear, however, that the seven key requirements are not limited to data privacy but rather address a broader set of concerns arising from AI.

The EU General Data Protection Regulation (GDPR) is one of the most comprehensive data privacy frameworks in the world and is considered to be in alignment with many of the requirements for trustworthy AI as highlighted by the HLEG. 

According to the UK Information Commissioner's Office (ICO), the GDPR will be brought into UK law post-Brexit as the ‘UK GDPR’, although it may undergo a further review in relation to UK-EU transfers.

The GDPR does not refer to AI specifically; it regulates the processing of personal data regardless of the technology used. It follows that, as personal data is a vital component throughout the full life cycle of an AI system – both in the algorithmic training phase and in the use phase – these technologies are fully captured by the GDPR.

It is also important to note that AI systems which rely on anonymised data may still fall under the GDPR. This is because anonymisation techniques are not necessarily able to eliminate the risk of re-identification, since AI systems are designed to make connections and spot patterns that may be unforeseen by the programmers.

As previously noted, the seven key requirements highlighted by the HLEG are an indication that more regulation is to come in the field of AI.

The European Data Protection Board has indicated that, as the GDPR embraces a risk-based data minimisation principle and the requirement of data protection by design and by default, it already addresses many of the potential issues associated with the processing of personal data through algorithms. This should serve as an indication that the EU intends to develop a homogeneous AI regulatory framework to ensure there are no unnecessary duplications, conflicts or ambiguities with the GDPR.

We can therefore conclude that, at EU and UK level, the picture in relation to AI requirements regarding data privacy is already fairly clear, and that additional regulation will likely complement rather than conflict with the GDPR, whilst addressing further areas that are not covered by it.

For businesses that operate in multiple jurisdictions, the regulatory landscape they will be required to navigate may be more complex. First, some countries have only just started developing their AI regulation; second, even where such regulations have reached the same maturity as the EU and UK ones, there may be areas where different countries' requirements align or actually conflict. Businesses operating internationally can therefore expect an increased investment in order to keep abreast of, and when the time is ripe, comply with the regulatory frameworks applicable in the countries in which they undertake business.

It is also important to note that even businesses that operate in only one jurisdiction may, because of the complexity of their technology supply chains, need to think about compliance beyond their jurisdiction of incorporation and business. They may not need to comply directly with a specific AI framework, but they may need to ensure that their technology suppliers do, via strong contractual terms.

It follows that a business’s investment in AI-driven digitalisation may be more costly than expected as it will need to take into account the costs of national and potentially multi-jurisdictional regulatory compliance. 

And yet, prospects are not that bleak after all: it is to be expected that businesses which invest in a responsible and transparent approach to the use of AI will see a return on that investment in terms of people's trust in their brand.

 By Elisabetta Zaccaria, Founder of Secure Chorus, ex-Chairman and Strategic Advisor to the Board

Elisabetta Zaccaria is an award-winning entrepreneur, C-level executive, author and speaker with more than 15 years' international experience in the cybersecurity, ICT, defence and national security industries. Elisabetta is the founder of Secure Chorus and served as Executive Chairman of the board (2016-2019). She was Group CSO & COO (2006-2012) of Global Strategies Group, a defence & national security services and technology company. She played a key leadership role as chief strategist, turning the British start-up into a US$600 million-revenue international business in six years. Elisabetta has held numerous board positions in the UK, EU and USA, including with US classified companies subject to the FOCI regime.