REUTERS | Global Creative Services (no copyright)

Not just somebody else’s problem: notes from the launch of the IBE’s Board Briefing on “Corporate Ethics in a Digital Age”

Last week, the Institute of Business Ethics launched its Board Briefing on “Corporate Ethics in a Digital Age”. A wide-ranging panel discussion and Q&A session at the launch event tackled some of the implications of new technology for ethical business practice and emphasised that, despite its novelty and seeming complexity, it should be subject to the same standards of review and control as any other risk the board might consider.

For those board members exasperated at hearing that they might be expected to drill down into the detail of AI and machine learning on top of their usual agenda, the discussion gave some practical examples of when ethical problems arise out of the use of technology, and reassurance that in fact they may already have all the tools they need.

Technology itself is difficult to understand without a technical background, but grasping its implications is not

At the RSA conference in San Francisco earlier this year, “security guru” Bruce Schneier (whose credentials are rather too long to copy out here, but include a position as Lecturer in Public Policy at the Harvard Kennedy School as well as special advisor to IBM Security) appealed to data engineers and other members of the technology industry to do some “pro bono” work by helping legislators to understand the technology they are tasked with regulating. (He has also advocated for better training in technology for lawmakers and law enforcement.)

This appeal reflects the common perception of much technology, and certainly AI, as too complex to understand without a technical background. The fact that relatively few board members in the UK have such a background could therefore lead to the conclusion that boards are ill-equipped to analyse the risks posed by AI, and should look to recruit new members quickly. The IBE Briefing cites a review that found that, in 2018, 99% of companies had one or more board members with finance/accounting experience, whereas only 42.8% had one or more members with IT/technology experience (a lower percentage than for marketing/PR experience, at 48.5%).

However, the prevailing view at last week’s event was that what is required of the board is to apply the same critical thought to the issue of technology as it would to other matters, and that a non-technical, humanities background is perfectly suited to examining the issues. As the IBE’s Briefing puts it:

“Business experience, common sense and sound advice are more important for directors than technology expertise.”

While it is perfectly valid to rely on the expertise of others when assessing AI, the onus is on board members to ask questions that will allow them ultimately to make a reasoned judgment as to its use, just as they would in relation to any other issue.

“Boards are accountable for what happens as a result of the application of technology to the business. They cannot hide behind experts whose answers they don’t understand or who they do not trust.”

Identifying ethical hotspots: see data as passing through a value chain

How does the board determine what questions to ask? One method suggested was to envisage the journey of data through an organisation’s own processes and beyond.

It is clear that where an organisation incorporates AI into its own decision-making processes, the board needs to think about how it is using data and whether that use is ethical. However, data gathered by an organisation is often passed on to third parties in the course of business. Boards should ensure that they understand how that data will be used by those third parties: will it be subjected to AI? How will it be processed? How will the result of that processing be used? Will that result affect individuals?

In a B2B context, an organisation should be able to achieve clarity on most of these questions and take steps to obtain contractual assurances on points of ethical concern. Where a business wants to buy technology to help it screen the CVs of job applicants, for example, the board should require assurances that the technology is not biased (last year it emerged that the recruitment software Amazon had been developing for its own use showed bias against women – the project team was disbanded after three years).

The IBE’s Board Briefing sets out questions for boards to ask to allow them to come to a conclusion on whether a particular use or aspect of AI is ethically justifiable, including:

  • How does the use of AI sit with our values?
  • Do we use AI to add value to our customers, or to extract value from them?
  • How do we use data brokers? Is the context in which the data is used fair to customers?
  • Who is in charge of technology? Do they make sense and answer questions intelligently?
  • Are we confident that our algorithms are free of bias?
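On the last question in particular, the Briefing does not prescribe a method, but one simple check a review team might run is illustrated below. This is a hypothetical sketch, not taken from the Briefing: it compares shortlisting rates between two groups of candidates and applies the “four-fifths” rule of thumb sometimes used in employment-discrimination analysis, under which a ratio below 0.8 is treated as a warning sign. The data and function names are invented for illustration.

```python
# Hypothetical sketch: quantify one kind of bias in an automated screening
# tool by comparing selection rates between two groups of candidates.

def selection_rate(outcomes):
    """Fraction of candidates selected (outcome == 1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.

    Under the "four-fifths" rule of thumb, values below 0.8 are
    conventionally treated as a red flag warranting closer review.
    """
    lower, higher = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lower / higher

# Invented screening outcomes: 1 = shortlisted, 0 = rejected
men = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]    # 7 of 10 shortlisted
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 3 of 10 shortlisted

ratio = disparate_impact_ratio(men, women)
print(f"Disparate impact ratio: {ratio:.2f}")  # ≈ 0.43 — below 0.8, a red flag
```

A passing ratio on a check like this does not prove an algorithm is unbiased, of course; it is one of several signals a board might ask management to report on regularly.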

“Don’t be evil”: regulating ethical behaviour

If ethical behaviour is what you do when no one else is watching, is it still ethical behaviour when it is mandated by regulation?

Logical niceties aside, some event attendees thought it likely that some form of ethical regulation in at least some areas was on the horizon. Until then, it is for businesses themselves to set the standards by which they will act in this sphere, and to abide by those standards: tasks which the IBE’s Board Briefing seeks to clarify.

That said, external barometers of ethical behaviour do exist. Investors were named as one interest group who can gauge an organisation’s behaviour by interrogating its board. If a board cannot answer an investor’s questions as to how it approaches the risks posed by new technology, that may suggest that the organisation is not dealing with them adequately.

“Employee consciousness” was also mentioned as an emerging ethical brake on a business’s actions. In June 2018, following months of employee protests based on concerns that its technology was being used to develop weapons, Google (which has famously had “Don’t be evil” as one of its mottos) took the decision not to renew “Project Maven”, a contract with the Pentagon under which it would develop AI for use in analysing videos recorded by drones.

Whether such employee consciousness is a sign of a good or bad ethical climate within businesses was a point on which opinion last week was divided (arguably, it shows that a business’s culture encourages employees to speak up when they see behaviour they disagree with). However, it is clear that a business’s ethical behaviour can affect not only its reputation, but also its bottom line.

The full IBE Board Briefing can be found on the IBE’s website.
