Cyber Security

Getting to grips with the ethics of artificial intelligence

Derek Lin, Chief Data Scientist at Exabeam, discusses the growing importance of artificial intelligence (AI) ethics and why prioritising it now could soon pay major dividends 

With digital transformation touching every corner of the business world – and in many cases, now accelerating rapidly – the demand for (and reliance on) data has never been greater. This has made data an incredibly valuable asset, but it has also raised key issues, particularly around the ethics and regulation of data sharing. Nowhere is this truer than in the burgeoning AI industry, where data is the lifeblood on which AI technology relies to learn and develop.

The dangers of data misuse

Because AI depends on data at every stage, there are many points at which that data can be misused:

  • Privacy – when personal data is acquired or collected without proper consent, using it to train AI exposes the data’s owners to a loss of privacy.
  • Transparency – AI systems are designed by humans, and a lack of transparency around data modelling leaves models open to manipulation and risks policy makers basing poor decisions on their output.
  • Bias – data-driven tools may institutionalise unfair biases, such as sexism or racism, if the training data is biased in the first place.
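To make the bias risk concrete, consider a minimal, purely illustrative check a team might run before training: compute the favourable-outcome rate for each group in a labelled dataset and flag any group that falls below the common “four-fifths” rule of thumb. The records, group names and 0.8 threshold below are hypothetical assumptions for the sketch, not a prescribed method.

    from collections import defaultdict

    # Hypothetical labelled training records: (group, outcome),
    # where outcome 1 is the favourable label (e.g. "loan approved").
    records = [
        ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
        ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ]

    def favourable_rates(records):
        """Return the favourable-outcome rate for each group."""
        totals, favourable = defaultdict(int), defaultdict(int)
        for group, outcome in records:
            totals[group] += 1
            favourable[group] += outcome
        return {g: favourable[g] / totals[g] for g in totals}

    rates = favourable_rates(records)
    best = max(rates.values())
    for group, rate in rates.items():
        # Four-fifths rule of thumb: flag any group whose favourable-outcome
        # rate is below 80% of the best-treated group's rate.
        if rate < 0.8 * best:
            print(f"Potential disparate impact: {group} at {rate:.2f} "
                  f"vs best rate {best:.2f}")

A check like this is deliberately crude; its value is that it forces the question of representativeness onto the agenda before a model is ever trained.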

These factors, combined with the growing reliance on data in nearly every business sector, mean that concerns about the potential misuse of these technologies simply can’t be ignored.

An ethical approach can be a highly profitable one, if done correctly

In late 2018, the UK government established the world’s first national AI ethics advisory board, a landmark moment for the industry. Working with key stakeholders, the board’s task is to lay the foundations for a sustainable AI industry by setting out best practice to guide ethical and innovative uses of data, and to advise the government on the need for any specific regulatory action or policy.

While the government’s investment is driven first and foremost by a desire to ensure innovation in AI is both safe and ethical, it also acknowledges the commercial opportunity, one that could see the UK establish itself as a global leader in AI for many years to come. Indeed, it’s estimated that AI could add more than £600 billion to the UK economy over the next 15 years.

It’s not just the UK government that recognises the magnitude of the opportunity. Last year, several major tech vendors – including Google and IBM – published their own AI ethics guidelines, as did the European Commission.

These moves clearly signal that the time has come to prioritise ethics efforts to ensure AI is being applied appropriately, with future commercial prospects likely to depend on the effectiveness of the processes put in place now.

Building an ethical framework for AI development

When it comes to the development and application of AI, an ethical framework should focus on three key areas:

  • Creation – does the AI use training data that poses any sort of risk to privacy? Is the data truly representative or could it contain biases that might distort future decision making? 
  • Function – are the AI’s assumptions reasonable and fair? Is it easy to understand how it works, and can it be audited if necessary (see the sketch after this list)? Also, is it securely protected against malicious third parties and hackers? 
  • Outcomes – is it being used for unethical purposes? Has it been evaluated properly? Who is ultimately responsible for the decisions the AI makes?
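The audit question is the easiest of these to make tangible in code. Below is a minimal, hypothetical sketch, assuming a generic prediction function: every prediction is wrapped in an audit record capturing the inputs, a model version label and the decision, so any outcome can later be traced back and a responsible owner identified. The AuditRecord fields, the toy_model stand-in and the "v0.1-demo" version label are all illustrative assumptions, not a reference design.

    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class AuditRecord:
        timestamp: float
        model_version: str  # illustrative label, not a real registry ID
        inputs: dict
        decision: str

    audit_log: list[AuditRecord] = []

    def audited_predict(model_fn, inputs: dict, model_version: str) -> str:
        """Run a prediction and record it so the decision can be audited later."""
        decision = model_fn(inputs)
        audit_log.append(AuditRecord(time.time(), model_version, inputs, decision))
        return decision

    # Hypothetical stand-in for a real model.
    def toy_model(inputs: dict) -> str:
        return "approve" if inputs.get("score", 0) > 0.5 else "refer_to_human"

    print(audited_predict(toy_model, {"score": 0.7}, "v0.1-demo"))
    # The log can be serialised and handed to an external auditor.
    print(json.dumps([asdict(r) for r in audit_log], indent=2))

Keeping such a trail answers the “who is responsible” question with evidence rather than recollection, and it is exactly the kind of control process regulators increasingly expect to see.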

The EU’s General Data Protection Regulation (GDPR) represents the first step in a raft of regulations that aim to establish clear governance principles for data. On the other side of the Atlantic, the California Consumer Privacy Act (CCPA), which came into effect on 1st January this year, is widely considered the most comprehensive privacy legislation in US history. Organisations wishing to succeed must get ahead of this regulatory curve as quickly as possible by implementing control processes that manage risk and ensure AI is used ethically and appropriately at all times.

Where to from here?

We’re at a pivotal moment. Companies currently working on new forms of AI technology need to be 100 percent confident that they aren’t unintentionally encoding any form of bias that might lead to poor or unfair treatment of any group, whether customers, employees or anyone else. Doing so requires establishing a suitable ethical framework at the start of the development process and putting the right tools and control structures in place to ensure ethical development practices are maintained at all times.

In a rapidly changing regulatory landscape, the stakes couldn’t be higher. But acting swiftly and decisively now could well pave the way for significant rewards in the not-too-distant future.