Date posted: 10/12/2021

Why we need to put ethics in AI

Algorithms aren’t born bad, and a new report from ACCA and CA ANZ details how CAs can bring ethics to the AI age.

In Brief

  • Artificial intelligence (AI) has moved from an experimental stage to adoption at scale.
  • Accountants, with their commitment to ethics, can guide organisations to adopt AI responsibly.
  • Understanding AI should be part of the data stewardship expected of accountants.

By Stuart Ridley

Accountants aren’t just record keepers. They are trained to be adept at managing huge amounts of data and making sense of it. Indeed, ‘making sense’ of it is at the heart of the profession.

So it’s hardly remarkable that some accountants were among the earliest adopters of desktop computers, or that now they’re helping organisations of all sizes with their digital transformation in the cloud.

What is remarkable is how quickly data analytics is at risk of being handed over to artificial intelligence (AI) without most people knowing what AI systems are capable of, let alone how to apply them ethically.

A new report from the ACCA and CA ANZ highlights how accountants can step up to ensure decision-making aided by AI is firmly principled and honest.

Published in September 2021, “Ethics for sustainable AI adoption: Connecting AI and ESG” draws on a global survey of 5723 people, with 72% of respondents working in an accounting or finance related role. 

It presents several ways accountants can lead the responsible adoption of AI, including delivering sustainable value, exercising professional judgment and challenging greenwashing.

“Being ethical is a core attribute for the professional accountant,” explained Maggie McGhee, ACCA’s executive director, strategy and governance, at a 29 September webinar launching the report. “And indeed, I’d say they are often called upon to act as the ethical conscience of the organisations.

“The fast pace of technology change creates new and sometimes quite ambiguous scenarios that we’ve not seen before, and this makes the ability to navigate ethical landmines that much more valued.”

Why CAs need to understand AI

While the hype about AI has been massive, most people don’t understand it. Only about half (48%) of the respondents in the AI Ethics survey claimed to have a basic understanding of how an AI algorithm works, notes Charlotte Evett CA, CA ANZ’s New Zealand government lead and part of the organisation’s research and thought leadership team.

“I don’t think that’s too concerning, as most of us use email and smartphones multiple times every day, with no idea how they work. But we do need a basic understanding of AI before we can apply an ethical lens to it, because we [accountants] have an important role to play in ensuring AI deployment across organisations is done in a thoughtful and ethical manner,” she says.

Evett sees an understanding of AI as part of the data stewardship expected of accountants these days: a deeper understanding of the business implications of both financial and non-financial data, used to inform decision-making.

Ethical risks of AI

Ethical concerns about AI tend to focus on the way that every algorithm is first taught to ‘think’ with a series of training exercises designed by its human masters. Humans bring all kinds of biases to these lessons, warns Frith Tweedie, a digital law specialist and general counsel and chief privacy officer at Auror, an intelligence platform that aims to reduce shoplifting and retail crime.

“What can happen is AI can reproduce – and amplify – existing discrimination and patterns of inequality in society,” she says.

“The people who design and develop AI tend to be white males, and their biases come through. For example, facial recognition technology has misidentified black people, and other AI tools are biased against women.”

In a famous case of discrimination caused by distorted training data sets, Apple’s credit card algorithm was accused of discriminating against women after husbands in couples with identical financial information were offered higher credit limits, even when the wife had the higher credit score.

Amazon’s recruitment system, too, was shown to be biased. It was trained on 10 years of data on successful applications but, because most of the successful applicants in those years were male, it prioritised male applicants over female applicants when selecting people for interview.

And then there’s Facebook’s ‘next video’ recommendation algorithm, which incorrectly suggested that people watching a video of black men would want to keep seeing videos about primates.

“These biases are yet another reason to have greater diversity in organisations,” cautions Tweedie, adding that while it’s important humans oversee and check AI outputs, those humans need to understand how AI makes decisions.

All organisations that want to use AI need an ethical framework for the technology, adds Dennis Gentilin, a director at Deloitte Australia and author of The Origins of Ethical Failures, a 2016 book that explored major ethical failures at some of Australia’s financial institutions.

Gentilin knows well what can go wrong when performance incentives motivate people to ignore ethical principles. He was a whistle-blower in the NAB foreign exchange trading scandal of 2004, when traders on the bank’s foreign currency desk entered fictitious trades to conceal losses of A$360 million and protect their bonuses.

“When incentive schemes are skewed towards generating revenue or profit for the organisation, that’s going to significantly change how you program that piece of artificial intelligence to work,” he says.

It’s a positive sign that many organisations are thinking through the social repercussions of actions prompted by data insights before applying them. Two-thirds (66%) of respondents in the AI Ethics survey agree that the leaders in their organisations prioritise ethics as highly as profit.

However, just one in five (21%) organisations surveyed said they had an ethical framework for AI use. Those that do have AI ethics policies share many of the same broad principles: fairness, accountability, sustainability, transparency, human oversight, ethical use of data, safety and robustness, standards and law.

AI and tackling climate change

Using AI to run the numbers on best- and worst-case scenarios could help uncover solutions to address organisations’ environmental, social and governance concerns, adds Rossana Bianchi, AI advisory and responsible AI lead for Accenture in Sydney.

“Artificial intelligence can help accelerate our thinking about how we tackle climate change,” she says. “First, we can use AI to model what has happened in the past, and then to prepare for what is coming next.”

An example of an AI-powered model for testing sustainability efforts is an app being developed in Germany that shows consumers the energy consumption and carbon footprint of products and services they’re considering buying. Bianchi says AI-powered tools such as these can help nudge consumers towards more sustainable habits.

Using AI to tackle greenwashing

Certainly, sorting fact from sustainability fiction is a massive challenge. And AI can help organisations set and measure sustainability targets they can realistically meet, notes Joseph Owolabi FCCA, CEO at green finance consultancy Rubicola Consulting and ACCA global vice president.

But fake data just won’t cut it. Big companies such as Volkswagen, Coca-Cola and BP have all been called out for greenwashing in the past, and the repercussions have gone beyond a dented brand image.

“The penalties can be very severe – especially for listed entities – including plummeting share prices and massive fines,” Owolabi explains.

“The fallibility of some of the data coming from traditional rating agencies is driving demand for AI powered systems to interrogate reports and find out which sustainability claims are genuine.

“… auditing bodies and activists have the funds and resources to challenge the data – and AI will certainly help them uncover untruths.”

How CAs can guide ethical AI adoption

  • Set tone at the top on AI adoption
  • Deliver sustainable value
  • Exercise professional judgment
  • Challenge greenwashing
  • Comply with AI regulation and ethics policies
  • Prioritise data management
  • Take a strategic approach to oversight and delivery
  • Understand vendor landscape
  • Build knowledge and skills

Read more:

Ethics for sustainable AI adoption: Connecting AI and ESG

Download the report

Changes to the Code of Ethics are effective from 31 December 2021

The Code of Ethics defines the standard of behaviour expected of members of CA ANZ. The Code has recently been updated to incorporate changes made to the International Code of Ethics, promoting the role and mindset expected of professional accountants.

Find out more