
Artificial Intelligence and data ethics: understanding human values to design values-led systems

“I’ve done some questionable things,” says Roy – a replicant, a robot – to his maker in a famous scene from the cult sci-fi film Blade Runner. The film came out in 1982 and was set in an imaginary future – the year 2019 – in which humanity struggled with what to make of machines that think, and with the ethical and moral implications of artificial intelligence (AI).

That imaginary future now looks far less distant. Advances in technology have allowed us to achieve unprecedented results across the board, yet the true ethical impact of AI often remains elusive. Many have tried to set out a framework of fundamental values and principles for the use of AI in business (at the IBE, we published a briefing on this). All of them attempt to answer questions such as: what is artificial intelligence, and what is its impact on our society? What are the biggest ethical risks that new technologies pose? How do we prepare ourselves?

This is a summary of the main takeaways that emerge from our research.

A new definition of privacy

The rise of AI has been described by some as the death of privacy, while others have compared it to an Orwellian Big Brother ready to snoop on everyone’s private life. Responding to such concerns, the European Commission lists privacy and data governance amongst its seven essentials for achieving trustworthy AI.

The Commission explains that “citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.” The EU’s comprehensive reform of data protection rules through the General Data Protection Regulation (GDPR) is certainly a step in this direction.

Humans – not machines – are ultimately accountable

Who is responsible for the outcome of an artificial agent’s decision-making? Accountability is central to the definition of good corporate governance: there should always be a clear line of responsibility for business actions, establishing who must answer for their consequences. AI systems add a further layer of complexity, particularly when they are developed by third parties rather than in-house.

Who should be held accountable when an AI system violates ethical values? Should it be the designer of the algorithm, the company that adopts it, or the end user? It is difficult to give a single, unequivocal answer. A sound approach is for each of the parties involved to behave as if they were ultimately responsible.

Organisations are encouraged to be part of the solution

Technologies, data and algorithms know no borders. It is therefore important for organisations to engage in multi-stakeholder dialogues to articulate their commitment to ethical values in the application of AI.

These initiatives are developing at different levels. The International Organization for Standardization (ISO) has been developing standards for AI, work that has drawn on the collaboration of 22 countries around the world.

In addition, the European Commission has a series of activities in the pipeline to further the ethical development of AI, including networks of AI research excellence centres, networks of digital innovation hubs, and multi-country discussions on developing and implementing a model for data sharing and for making best use of common data spaces.

Practical tips for businesses

Each organisation has a role to play in making sure that its ethical values are not compromised by the use of AI. To fulfil this role effectively, it must understand the impact and side effects that AI might have on its business and its stakeholders.

It is also important to put in place practical measures that minimise the risk of ethical lapses due to improper use of AI technologies:

  • Empower employees to make good decisions, providing training so that they can use AI systems efficiently and ethically. All staff should be able to understand the assumptions, limitations and potential risks of the AI technologies they are exposed to.
  • ‘Ethics tests’ for AI machines and detailed decision-making tools for algorithm designers can provide a good assessment of whether a specific technology poses an ethical risk, particularly when it faces an ethical dilemma. Dedicated company policies that ensure proper testing and appropriate sign-off from relevant stakeholders – both internal and external – are also helpful; a minimal sketch of what such a test might look like follows this list.
  • Ethical due diligence on third parties that design algorithms is an essential step to ensure that ethical values are upheld. Organisations are encouraged to engage with third parties that provide AI, as well as with the clients and customers to whom AI technologies are sold, to ensure that they commit to similar ethical standards.
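
To make the second point more concrete, here is a minimal, hypothetical sketch (in Python) of one form an automated ‘ethics test’ could take: checking a classifier’s outputs for demographic parity before sign-off. The function, the group labels and the tolerance threshold are illustrative assumptions for this sketch, not an established standard or any specific company’s policy.

    # A hypothetical pre-deployment 'ethics test': flag a model whose
    # positive-prediction rates differ too much across demographic groups.

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rates between any two groups."""
        counts = {}  # group -> (total, positives)
        for pred, group in zip(predictions, groups):
            total, positives = counts.get(group, (0, 0))
            counts[group] = (total + 1, positives + (1 if pred == 1 else 0))
        rates = [positives / total for total, positives in counts.values()]
        return max(rates) - min(rates)

    # Illustrative run: binary predictions for two groups, A and B.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    TOLERANCE = 0.2  # assumption: the acceptable gap is set by policy, not by code
    gap = demographic_parity_gap(preds, groups)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > TOLERANCE:
        print("FAIL: escalate to the relevant sign-off stakeholders before deployment")
    else:
        print("PASS: within the agreed tolerance")

A gate like this would sit alongside human review and formal sign-off rather than replace them, and the tolerance itself is an ethical judgement for the relevant stakeholders, not a technical constant.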
