Imagine a future society where artificial intelligence (AI) supersedes human intervention and data takes on a life of its own. Sound like science fiction? In fact, we’re already there.
The battle lines of the Fourth Industrial Revolution have been drawn. All eyes are on the emergent flashpoints of man versus machine and the notion of free digital exchange versus personal data rights.
Whichever way you view it, AI is decidedly a big deal, and futurists of every stripe are both welcoming and wary of its soaring influence. Google has made AI one of its top priorities. Facebook, Microsoft, and IBM are also investing significantly in research and applying machine-learning programmes to their commercial activities.
ResearchAndMarkets recently predicted that the global AI market will reach USD 190.61 billion by 2025, up from USD 21.46 billion in 2018 – a CAGR of 36.62%. Current investments are spurred by factors such as the increasing adoption of cloud-based applications and services, growing big data, and demand for intelligent virtual assistants.
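As a quick sanity check, the quoted growth rate can be recomputed from the two market-size figures using the standard compound annual growth rate formula:

```python
# Verify the quoted CAGR from the market-size figures above.
# CAGR = (end_value / start_value) ** (1 / years) - 1
start = 21.46            # USD billion, 2018
end = 190.61             # USD billion, 2025
years = 2025 - 2018      # 7-year span

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")     # roughly 36.6%, consistent with the quoted 36.62%
```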
Organisations are already harnessing, adapting and protecting AI-centric services, benefiting from data captured from a wide range of sources. Sci-fi daydreams are becoming the mainstream reality and the future is packed with potential.
There are also challenges. One of the most notable and intriguing conundrums is how to wrap minds and plans around the potential impact of the EU General Data Protection Regulation (GDPR). In effect, the regulation means that AI will evolve against a backdrop of unprecedented citizen data rights. While this is a good thing, there are legitimate concerns that the technology may outpace the GDPR’s efficacy – particularly as the ownership boundaries of automatically generated data become harder to define.
Fact or fiction?
Responsibilities will blur as data volumes, types and use-cases explode exponentially. In essence, the GDPR introduces clarity and specificity: new accountability obligations, stronger consumer rights, and restrictions on the use of personal data and on international data flows.
The process becomes trickier as AI becomes more advanced. Do you have legal rights to capture or process the data? Who is responsible for data manipulated by AI? Who is accountable if AI-created systems become vulnerable to hackers? Who must face the music – the manufacturer of the original hardware or the software developer? Does responsibility lie in the EU where the product is used or in the country of manufacture? Ultimately, both technology and geography become irrelevant. If you process EU citizen data, then the GDPR applies to you.
Fortunately, the GDPR isn’t oblivious to the challenge and aims to introduce a set of technologically neutral and future-proof rules, irrespective of how the digital environment may develop. The question is, can it evolve quickly enough to constructively support the sheer complexity of, say, an AI-powered multi-cloud world?
As AI transcends the frontiers of innovation, it may also be difficult to enforce the GDPR’s ‘right to be forgotten’, whereby a citizen requests that their personal data is no longer processed. For example, AI systems may decide to keep the data to protect themselves from what they perceive as a counterintuitive request, or there may be an inherent desire to harness it to further evolve service capabilities. Either way, the law is firmly on the side of the data subject and any AI design will have to adapt accordingly.
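The design implication can be sketched in miniature: erasure must be unconditional, regardless of how valuable the data is to the system. The class and field names below are hypothetical illustrations, not a real GDPR API.

```python
# Minimal sketch of honouring an erasure ("right to be forgotten") request.
# All names here are hypothetical; real systems must also purge backups,
# logs and derived datasets, which this toy example does not model.

class DataStore:
    """Toy in-memory store keyed by data-subject ID."""

    def __init__(self):
        self.records = {}

    def add(self, subject_id, record):
        self.records.setdefault(subject_id, []).append(record)

    def erase(self, subject_id):
        # The law sides with the data subject: erasure is unconditional,
        # with no "keep it because the model finds it useful" escape hatch.
        removed = self.records.pop(subject_id, [])
        return len(removed)

store = DataStore()
store.add("alice", {"email": "alice@example.com"})
store.add("alice", {"purchase": "book"})
print(store.erase("alice"))         # 2 records removed
print("alice" in store.records)     # False
```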
Don’t wait around
The best thing organisations can do is act now. Taking a risk-based approach and implementing robust controls to protect personal and sensitive information will help avoid hefty non-compliance fines. Safeguarding the perimeter alone is old news. Irrespective of AI’s GDPR complexities, the smart approach is to fully understand the legal basis of data capture before robustly protecting applications, which is where most of our personal information resides. To this end, application delivery and security services must be flexible, scalable, programmable and automatable. Intelligent security solutions need to provide the complete visibility that allows firms to deploy and manage application services wherever the app exists.
The GDPR is a clear step towards strengthening citizens’ fundamental rights. However, it will need to move fast and be closely monitored if it is to stay ahead of global cybercrime and malpractice.
Legislation can be notoriously slow to update and implement. In a world where AI and technology are moving into unprecedented levels of innovation and complexity, organisations and governments must ensure that we are all working towards a secure society for the digital savvy cybercitizen.
We need to arm ourselves for the future while staying philosophically and operationally aligned with the GDPR’s key tenets. The regulation is essential to maintaining a globally level playing field. At its core, it stands for sound data management. Transparency and best practice – whether fuelled by legislation or not – bring trust. They enable more robust customer relationships and pave the way for new, innovative services. Now is the time to keep pace with progress and stay compliant.
By David Warburton, Senior Threat Research Evangelist EMEA at F5 Networks
The inaugural Data Protection World Forum (DPWF) was held on November 20th & 21st 2018 at the ExCeL London and welcomed over 3,000 delegates seeking the very latest insight on data protection and privacy.
Pre-registration for DPWF 2019 will be opening in the coming weeks.