
Why you need to be aware of, and even participate in, the ICO’s evolving AI auditing framework

The UK Information Commissioner’s Office (ICO) has for some time been trailing its combined interest in, enthusiasm for and (of course) concerns about artificial intelligence (AI) in the context of data protection. For the growing number of lawyers who need to advise on the implications of AI for GDPR compliance, and for anyone else seeking to understand AI and data protection, the best starting point is the ICO’s excellent 2017 discussion paper, Big data, artificial intelligence, machine learning and data protection.

One of the ICO’s three technology priority areas for 2018-2019 is AI, big data and machine learning (see Technology Strategy 2018-2021). (Before you ask, the other priorities are cyber security and web and cross-device tracking.)

Hardly surprising, then, that the ICO has been developing an auditing framework for AI. The overview of the framework and a statement of its future direction were launched in an ICO blog post at the end of March. The purpose of the framework is twofold: (1) to support the ICO’s investigation and assurance teams in assessing the data protection compliance of organisations that use AI, and (2) to guide organisations in managing the data protection risks that AI applications create.

Of itself, this statement of purpose signals the critical importance of the framework. It also puts any organisation deploying AI for the first time, or further developing existing AI solutions, on notice that it disregards this framework at its peril.

The two main components of the framework are: (1) governance and accountability, and (2) AI-specific risk areas.

Governance and accountability

As the ICO team responsible for the AI auditing framework reminds us, for any organisation processing personal data, accountability is both a fundamental principle of and a legal obligation under the GDPR. Of course, deploying AI can create new data protection risks, aggravate existing ones, or mask such risks and so make them more difficult to control.

So the big pointer here – and it comes direct from the regulator – is that, when adopting AI, “data controllers will have to re-assess whether their existing governance and risk management practices remain fit for purpose”.

The ICO audit framework team spells out that boards and senior management need to reconsider, or more likely define, their data protection risk appetite before deploying AI in their organisations, and to examine how AI applications fit within their chosen risk parameters. This clearly implies that boards and senior management are going to have to understand how AI works within their organisations. For obvious reasons, given the technical complexity of even ‘weak AI’, this will present a challenge to the many board members and senior managers who don’t have a detailed grasp of IT systems, let alone AI.

In some sectors, notably financial services, regulators are concerned that senior management does not fully understand the impact of firms’ increasing use of data algorithms and automated services, which could in turn lead to (1) poor consumer outcomes, (2) threats to market integrity, and (3) the financial exclusion of more vulnerable groups in society. These are seen as key risks by the UK Financial Conduct Authority (see the FCA Business Plan 2017-2018). Those of you advising senior management in the financial services sector need no reminding of the implications of those risks and concerns under the Senior Managers Regime.

But, whatever your sector, as a legal or compliance adviser you really do need to be aware of this framework and its implications for AI deployment in your organisation.

AI-specific risk areas

The ICO AI audit team has identified eight specific risk areas that the framework will cover:

  • 1.  Fairness and transparency in profiling: this covers the bigger- and better-known AI issues of bias and discrimination, the ‘interpretability’ of AI applications and the ‘explainability’ of AI-generated decisions to data subjects (a simple bias check is sketched after this list).
  • 2.  Accuracy.
  • 3.  Fully automated decision-making models: covering the classification of AI solutions and, apparently, also decision-making models that are not fully automated and the degree of human intervention in AI processes.
  • 4.  Security and cyber: including testing and verification, re-identification risks and, interestingly, outsourcing risks.
  • 5.  Trade-offs: balancing competing constraints in AI applications, for example accuracy vs privacy (see the second sketch below).
  • 6.  Data minimisation and purpose limitation.
  • 7.  Exercising of rights: for example, the right to be forgotten and the right to data portability.
  • 8.  Impact on broader public interests and rights: for example, freedom of association and freedom of speech, but only in so far as these impacts relate to data protection.
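
To make risk area 1 a little more concrete, here is a minimal sketch of the kind of bias check a profiling review might include. It computes per-group selection rates from a model’s decisions and flags disparate impact using the ‘four-fifths’ rule of thumb. The data, group labels and threshold are all hypothetical illustrations, not anything prescribed by the ICO, and a real fairness assessment would go much further.

    # Minimal sketch of a demographic-parity check on model decisions.
    # All data, group labels and the 80% threshold are hypothetical.
    from collections import defaultdict

    def selection_rates(decisions, groups):
        """Positive-decision rate per protected group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += decision
        return {g: positives[g] / totals[g] for g in totals}

    # Hypothetical model outputs: 1 = approved, 0 = declined.
    decisions = [1, 0, 1, 1, 1, 1, 0, 0, 1, 0]
    groups = ["A"] * 5 + ["B"] * 5

    rates = selection_rates(decisions, groups)
    print(rates)  # {'A': 0.8, 'B': 0.4}

    # Flag any group whose rate falls below 80% of the best-served group's
    # rate (the "four-fifths" rule of thumb, used purely for illustration).
    best = max(rates.values())
    for group, rate in rates.items():
        if rate < 0.8 * best:
            print(f"Potential disparate impact for group {group}: {rate:.0%} vs {best:.0%}")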

For a more detailed background on these risk areas, see Chapter 2 of the ICO’s Big data, artificial intelligence, machine learning and data protection.
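
The accuracy-vs-privacy tension in risk area 5 can also be made concrete. The sketch below uses Laplace noise, the textbook differential-privacy mechanism, as one possible instance (the framework does not mandate any particular technique): the smaller the privacy budget epsilon, the stronger the privacy protection and the noisier, i.e. less accurate, the published statistic. The dataset and bounds are hypothetical.

    # Minimal sketch of the accuracy-vs-privacy trade-off using Laplace
    # noise (the textbook differential-privacy mechanism). All data and
    # bounds are hypothetical.
    import random

    def laplace_noise(scale):
        # The difference of two exponential draws is Laplace-distributed.
        return random.expovariate(1 / scale) - random.expovariate(1 / scale)

    def private_mean(values, epsilon, lower, upper):
        """Mean with noise calibrated to epsilon: lower epsilon = more privacy, more error."""
        true_mean = sum(values) / len(values)
        sensitivity = (upper - lower) / len(values)  # max effect of one record on the mean
        return true_mean + laplace_noise(sensitivity / epsilon)

    ages = [34, 45, 29, 52, 41, 38, 47, 33]  # hypothetical ages, known to lie in [18, 90]
    print("true mean:", sum(ages) / len(ages))
    for eps in (0.1, 1.0, 10.0):
        print(f"epsilon={eps}: {private_mean(ages, eps, 18, 90):.2f}")

The point for advisers is not the maths but the governance decision it encodes: someone in the organisation has to choose (and be able to justify) where that dial is set, which is exactly the kind of documented trade-off this risk area is concerned with.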

The ICO will be issuing guidance on adequate risk management practices to address the above concerns. All organisations considering deploying AI solutions for the first time, or extending existing ones, would do well to track and heed that guidance as it emerges.

What happens next?

The ICO’s auditing framework was launched as a blog, and the team’s aim is to post updates containing ‘deep dives’ into the areas covered by the framework every two to three weeks for roughly the next six months.

The team has included a useful, simplified AI application lifecycle diagram in the initial blog post, ranging from business and use-case development through to deployment and monitoring. The team intends, where practicable, to relate the risks and management controls to stages in that lifecycle. And, to be super-useful, it will highlight the main implications for the various management and operational levels affected within organisations – from boards to DPOs through to data scientists. This reinforces the point made above that boards and senior management are going to have to be told about, and take notice of, this framework.

There is a standing invitation from the ICO team to provide feedback on any aspect of the framework at AIAuditingFramework@ico.org.uk, and you can sign up for updates on the framework from the blog page.

For some organisations, the chance to participate in, and to shape, the development of this framework will clearly be important. Bring it to the attention of your AI stakeholders. They’ll thank you.
