- An ACCA report found that balancing AI with human power is the most productive way to approach data.
- PwC research suggests AI may be the largest commercial opportunity on the road to 2030.
- At a practical level, humans are adaptable, flexible and resilient in ways AI systems are not.
By Seamus Byrne
According to Analytics in Finance and Accountancy, a new report from the Association of Chartered Certified Accountants (ACCA), at its most fundamental level “AI represents the ability for machines to mimic the cognitive functions of human minds while leveraging better, faster and cheaper processing power, memory, high-speed internet and handling of big data.”
AI works at its best when applied on a large scale. It can help to manage natural disaster risks, such as floods and bushfires, as well as less tumultuous activities such as energy transmission and agriculture. In the next decade, 5G-enabled remote monitoring technology, sending data for analysis to cloud-based AI, is expected to have a transformative effect.
For accountants, AI and machine learning come into play in everyday data analysis and processing tools: spreadsheets such as Google Sheets, with its Explore function, and Microsoft Power BI, whose Q&A feature uses natural language query (NLQ) to let users interrogate a dataset in plain English.
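Under the hood, a plain-English question like “total sales by region” resolves to an ordinary aggregation query. A minimal sketch in Python using pandas, with an entirely hypothetical dataset and column names, illustrates the mapping:

```python
import pandas as pd

# Hypothetical transactions data; the column names are illustrative only.
sales = pd.DataFrame({
    "region": ["North", "South", "North", "East"],
    "amount": [1200.0, 800.0, 450.0, 300.0],
})

# An NLQ such as "total sales by region" resolves to a group-and-sum:
total_by_region = sales.groupby("region")["amount"].sum()
print(total_by_region)
```

The value of an NLQ layer is that the user never writes the `groupby` themselves; the tool translates the question into the equivalent query.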
A Forbes article (Gil Press, 2016) reported that data scientists spend 80% of their time acquiring and cleaning data and only 20% of their time using it. According to the ACCA report, balancing AI with human power is the most productive way to approach data: extract insights from data using an automated process, while a person determines suitable actions in response.
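That 80/20 split can be made concrete. A minimal, illustrative sketch in Python using pandas, with invented invoice data, shows the cleaning step that consumes most of the time and the one-line insight extraction that follows:

```python
import pandas as pd

# Illustrative raw data: a duplicate row and a missing value,
# as data often arrives before cleaning.
raw = pd.DataFrame({
    "invoice": ["A1", "A1", "A2", "A3"],
    "amount": [100.0, 100.0, None, 250.0],
})

# The "80%": acquiring and cleaning — drop duplicates, handle missing values.
clean = raw.drop_duplicates().dropna(subset=["amount"])

# The "20%": extracting an insight a person then decides how to act on.
average_invoice = clean["amount"].mean()
print(f"Average invoice: {average_invoice:.2f}")
```

The insight (the average invoice value) is automated; deciding what to do about it, in the ACCA report's terms, remains human work.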
“The actual work – turning insights into action – begins once the insights have been uncovered,” the report states. “This is a bit like opening the fridge door and finding three eggs. This is the insight; the work of cooking a Spanish omelette is yet to begin, let alone thinking about other recipes.”
Balancing AI with human intelligence
If your mind is feeling scrambled, that’s a good reason to use AI. We could all do with giving our overloaded brains a break. And the central promise of artificial intelligence is that it can ease some of the load – with far-reaching potential and consequences. So how do we get the wins and avoid the pitfalls?
PwC research suggests AI may be the largest commercial opportunity on the road to 2030, with US$15.7 trillion worth of economic activity at stake. And it’s already in the thick of many significant decisions being made today.
Businesses that have adopted technology fastest have fared best this year, particularly those taking advantage of data-driven insights, says Anna Curzon, chief product officer at Xero.
“I believe 2020 marks the ‘point of no return’ for the digitisation of small business,” she says. “Using a collection of technologies – machine learning, analytics and so on – to automate laborious processes and explore possible scenarios creates greater certainty around business decisions, but does not replace the essential human element.”
Marc Palatucci, senior foresight associate at the Future Today Institute, thinks AI will help businesses move forward in the midst of the current upheaval. “See the pandemic for what it is – not a roadblock, but an accelerant of change. Overhaul obsolete systems and orient yourself to a new paradigm.”
But he cautions it’s also important to understand the potential dangers in these opportunities. Any automated and autonomous system “is only as reliable as the data it is trained on”.
“Monitoring and regular intervention from humans is critical to ensure these systems don’t end up causing unintended harm,” he says.
In 2016, journalists at ProPublica found that COMPAS, a risk-assessment algorithm used in criminal sentencing in Broward County, Florida, mirrored racial biases in how it scored defendants’ risk of reoffending or progressing to more violent crime.
“The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labelling them this way at almost twice the rate as white defendants. White defendants were mislabelled as low risk more often than black defendants,” ProPublica reported.
Northpointe, which developed the software, told ProPublica the basics of its future-crime formula included factors such as education levels and whether a defendant had a job.
Where AI can go wrong
Ellen Broad, senior fellow at ANU’s 3A Institute, focuses on research about the ethical application of artificial intelligence. She, too, cautions against thinking AI will solve all problems. “We are captivated by the notion that more data is better data, or that more data is more truth,” she says.
A textbook example of poor application of such technology was the UK government’s attempt to replace end-of-year student exams, due to COVID closures, with an algorithmically determined score.
After teachers submitted what they considered fair grade estimates, the digital models downgraded 40% of student results, affecting state schools more often than private schools. Students who outperformed their school’s past results were hit hardest, because the algorithm fitted large-scale trends and could not account for individual excellence. The downgrades affected university offers, reshaping the course of these students’ lives.
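The failure mode is easy to reproduce in miniature. The sketch below is not the actual grading model; it is a hypothetical illustration of what happens when individual estimates are moderated toward a cohort’s historical distribution:

```python
# Illustrative only — not the real algorithm. A trend-based moderation
# caps individual results at what the school has historically achieved.
historical_max = 75               # hypothetical: the school's best past score
teacher_estimates = [60, 68, 95]  # one outstanding student estimated at 95

# Moderating toward the historical trend pulls the outlier down,
# regardless of that student's actual ability.
moderated = [min(score, historical_max) for score in teacher_estimates]
print(moderated)
```

The typical students are untouched; the exceptional one is penalised precisely because the model has no way to represent individual excellence.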
After the resulting outrage, grades were based solely on teacher estimates and the algorithmic adjustments were abandoned. It’s a mistake Broad believes will ripple through the British economy for many years to come.
“It’s not only the stress and the pain the students feel right now,” she says. “This is a generation that’s coming into the workforce with a particularly formative experience of AI and the way it will try to predict them and sort them. I don’t think we should expect they’ll ever accept it.”
At a practical level, we’ve learnt that humans are adaptable, flexible and resilient in ways AI systems are not.
“We’ve heard stories of Amazon warehouse algorithms needing to be manually rewritten because they couldn’t cope with the demand on the warehouses, or just-in-time distribution systems breaking down because they didn’t know how to deal with panic buying,” Broad says. The 3A Institute is developing microcredentials and short courses to help business leaders learn to apply these concepts in a business context, and she notes there is now much more awareness among the businesses she speaks to of the risks that need to be considered.