Ensuring your business utilises artificial intelligence ethically

Artificial Intelligence (AI) is no different to any other modern data-related initiative in that its starting point must be data ethics. It could even be argued that given AI’s complexity and its inherent mission to surpass human calculation, ethics is of even greater relevance.

Much of the popular discussion surrounding ethics in AI focuses on topics of enormous gravity, such as the philosophical examination of how our moral compass is affected, skewed and even dehumanised by mathematics. These are important and noble debates, and ones that must be concluded before AI infiltrates too far into everyday society. But there are also smaller, more practical ethical topics that need to be considered for everyday business deployment; topics that do not attract the same levels of popular interest, but are equally important.

The danger is that the popular interest in these larger topics overshadows the smaller but equally important practical ones, perhaps resulting in their being marginalised. Therefore, to remind businesses of their more practical ethical duties, we have listed some considerations for any business planning to deploy AI – ones that will guard against insensitive, inappropriate or even illegal data use in AI projects.

Will you be transparent?

AI algorithms are complex, and operate at a scale and speed far beyond human reasoning. This means their decision-making processes are often too opaque for us to fully understand.

This lack of transparency may be acceptable in automated production lines or predictive maintenance, but when algorithms make decisions for, or about, humans based on their personal data – refusing a credit application, for example – it is not acceptable for the reasoning to be unclear.

It is not just a question of ethics – it’s also good business sense. An algorithm may be advanced and highly capable, but it’s unlikely to be perfect. To counter anomalous, inequitable or dangerous results, which are liable to occur as the algorithm ‘learns’, it must be possible to ‘appeal’ its decisions. A total lack of accountability is simply not appropriate.
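
One practical way to keep decisions appealable is to record, alongside every automated outcome, the exact inputs and model version that produced it. Below is a minimal sketch of such a decision record in Python; the field names are illustrative, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

def record_decision(applicant_id, features, model_version, score, outcome,
                    log_path="decisions.jsonl"):
    """Append an auditable record of an automated decision.

    Keeping the exact inputs, model version and score means a human
    reviewer can later reconstruct - and, if necessary, overturn - the
    decision when a customer appeals.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "features": features,        # the inputs the model actually saw
        "model_version": model_version,
        "score": score,
        "outcome": outcome,          # e.g. "approved" / "refused"
        "human_review": None,        # completed if the decision is appealed
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: log a refusal so that it can be appealed later.
record_decision("app-1042", {"income": 38000, "debt_ratio": 0.62},
                "credit-model-v3", 0.41, "refused")
```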

How will you ensure that AI treats your customers, employees and partners appropriately?

In 1976, the computer scientist Joseph Weizenbaum argued that AI should not be put in a position where it cares for people. In his view, it should not replace roles such as judges, police officers, soldiers, therapists, nurses or even customer service agents.

However much attitudes to AI have changed since, how comfortable are you with delegating interactions that demand empathy to an AI application? And, more importantly, how appropriate will your customers think it is? The more your business model depends on personal, face-to-face communication, the more these considerations matter.

How will you identify and mitigate risks to safety, happiness or profit?

When your AI application delivers its outputs, what checks and balances will you have in place before those outputs are acted upon?

An algorithm will only provide an insight. It has no context for what that insight means, and nor will it act upon it until it is instructed and equipped to. Your business will need to agree on the appropriate degree of human intervention between an insight being discovered and an action being taken. In what areas will you require more or less oversight? How far are you willing to let automation go? And, ultimately, what consequences are you willing to risk?
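
What that intervention looks like will differ between businesses, but a common pattern is to act automatically only when the model is confident and the stakes are low, and to route everything else to a person. A minimal sketch of such a gate, with purely illustrative thresholds and domain labels:

```python
def route_insight(score: float, impact: str,
                  confidence_threshold: float = 0.95,
                  high_impact: tuple = ("credit", "employment", "health")) -> str:
    """Decide whether a model's output may be acted on automatically.

    'score' is the model's confidence in its recommendation and
    'impact' labels the domain the decision affects. Both the
    threshold and the high-impact list are illustrative - each
    business must agree its own.
    """
    if impact in high_impact:
        return "human_review"      # high-stakes decisions always get a person
    if score < confidence_threshold:
        return "human_review"      # low-confidence outputs are double-checked
    return "automatic"             # confident and low-stakes: safe to automate

# Even a very confident recommendation in a high-stakes domain is routed
# to a human; only confident, low-stakes outputs run unattended.
assert route_insight(0.99, "credit") == "human_review"
assert route_insight(0.99, "marketing") == "automatic"
```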

Data ethics is not just a matter of philosophy. It also requires you to address the relevant regulatory and legislative issues. Laws, after all, are simply codified versions of society’s agreed ethical standards. For instance…

Do you have the right permissions to use personal data for automated decision making?

This is fundamentally an issue of data privacy legislation. In some ways, AI brings no privacy requirements or burdens different from those of any other manipulation of data – it comes down to a fundamental question: do you have the necessary rights and consent to process the personal data or not? Whether the processing is machine learning or basic analysis, without the subjects’ consent the AI journey stops dead.
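
In practice, that means checking the lawful basis before any record reaches the model. A minimal sketch, assuming a hypothetical 'consents' field stored alongside each record (a stand-in for however your organisation actually records consent):

```python
def records_with_consent(records, purpose="automated_decision_making"):
    """Yield only the records whose subjects consented to this purpose.

    Assumes each record carries a 'consents' set naming the purposes
    the data subject agreed to - an illustrative structure, not a
    prescribed one.
    """
    for record in records:
        if purpose in record.get("consents", set()):
            yield record

customers = [
    {"id": 1, "income": 42000, "consents": {"automated_decision_making"}},
    {"id": 2, "income": 38000, "consents": set()},  # no consent: excluded
]

training_data = list(records_with_consent(customers))
assert [r["id"] for r in training_data] == [1]
```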

Is your use of AI ‘private by design’?

Going beyond the above, any technology application or service is expected to show ‘privacy by design’. This simply means that the protection of a data subject’s privacy is written into the core design of the application, rather than retrofitted.

Practically speaking, this requires the implementation of certain rules and principles, including but not limited to:

  • Only collect and process the minimum necessary amount of data
  • Employ robust access controls
  • Wherever possible, use de-identification techniques such as pseudonymisation and data aggregation (see the sketch after this list)
  • Beware of quasi-identifying information that can be cross-referenced to re-identify anonymous records (e.g. gender, birth dates and postcodes)
  • Abide by any geographical data responsibilities, such as data residency or data sovereignty
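
To make the first three principles concrete, here is a minimal sketch of pseudonymisation and data minimisation in Python, using a keyed hash in place of the direct identifier; the field names and key handling are illustrative only:

```python
import hashlib
import hmac

# In practice this key lives in a managed secret store, never in source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash.

    The same input always maps to the same token, so records can still
    be linked for analysis, but the identity cannot be recovered
    without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def minimise(record: dict, needed_fields: set) -> dict:
    """Keep only the fields the processing actually needs (data minimisation)."""
    return {k: v for k, v in record.items() if k in needed_fields}

raw = {"email": "jane@example.com", "income": 52000,
       "postcode": "GY1 1AA", "hobby": "sailing"}

safe = minimise(raw, {"income", "postcode"})
safe["subject_token"] = pseudonymise(raw["email"])
# Note: 'postcode' is a quasi-identifier - consider truncating or
# aggregating it before release, per the fourth principle above.
```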

Is your use of AI legal?

There are some privacy laws that make certain uses of AI illegal, the most prominent being the GDPR. Under it, individuals in the EU must not be ‘subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’, unless an exception applies – most notably their explicit consent.

The use of the word ‘solely’ is important. Decisions based on artificial intelligence are permissible, provided there is meaningful human oversight. Where your use of AI involves personal data, you will need to consider how you guarantee this supervision, especially where the algorithm’s decision-making is too complex to fully understand.

These ethical and legislative considerations are all too often overlooked by businesses considering introducing AI into their processes. But those that address these questions early in the deployment process will find that their use of AI is safer, more reliable and, ultimately, more likely to be well received and appreciated by their customers and users.

 

By Julian Box, Founder and CEO, Calligo

