Date posted: 8/11/2017

How banks can keep customer trust in the age of AI

There are more questions than answers in the robo-world that’s coming our way, but with care banks can turn these into positives and still protect data.

In Brief

  • The blend of humanity and technology is what makes a good algorithm, so develop a human-led data ethics framework to avoid AI mistakes.
  • AI can automate highly technical banking jobs, but there’s a thin line between a good algorithm and crossing into unethical practices and losing customer trust.
  • By 2018, half of business ethics violations will occur through the improper use of big data analytics, technology research company Gartner predicts.

By Carlo Lacota CA and Manish Bahl

The rise of artificial intelligence (AI) is the greatest story of our time. From Siri to Uber and Netflix, we’re already surrounded by smart machines that run on powerful and self-learning software platforms. And this is just the beginning.

As AI transitions from being our little daily helper to something much more powerful (and disruptive), it impacts every facet of our commercial lives, and banks and financial service providers are no exception when it comes to moving from “digital that’s fun” to “digital that matters”.

There is, however, an unspoken side to this AI shift that raises important technological, social and ethical questions: for instance, the regulatory implications of robo-advised cross-border investments, or an algorithm that finds a pattern of loan defaults in certain communities and, on that basis, rejects loan applications. Regulations stipulate that customers cannot be discriminated against, so such an outcome could expose a bank to legal action, leading to a loss of reputation and business.
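To make that discrimination risk concrete, here is a minimal sketch of the kind of disparate-impact audit a bank might run over an algorithm’s loan decisions. The data, group names and the 0.8 threshold (the “four-fifths rule” used in some jurisdictions) are illustrative assumptions, not part of the article.

```python
# Illustrative disparate-impact check on algorithmic loan decisions.
# All data and the four-fifths (0.8) threshold are hypothetical examples.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_by_group, reference_group):
    """Ratio of each group's approval rate to the reference group's rate.
    Ratios below ~0.8 are a common red flag for indirect discrimination."""
    ref_rate = approval_rate(decisions_by_group[reference_group])
    return {
        group: approval_rate(decisions) / ref_rate
        for group, decisions in decisions_by_group.items()
    }

# Hypothetical audit data: loan decisions grouped by community.
decisions = {
    "community_a": [True, True, True, False, True],    # 80% approved
    "community_b": [True, False, False, False, True],  # 40% approved
}

ratios = disparate_impact_ratio(decisions, reference_group="community_a")
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # community_b's 0.5 ratio falls below the 0.8 threshold
```

An audit like this does not prove intent, but a flagged group is exactly the kind of outcome that should trigger human review before the model’s decisions stand.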

Other questions include how an algorithm will identify and verify the user before providing information or allowing complex actions such as money transfers. Will over-reliance on AI result in unknown risks?

Make AI ethics your big point of difference

AI is a great guide, but an incorrect interpretation or a small mistake can quickly result in the loss of customer trust. We are heading towards trust-driven disruption with AI: banks need to use customer data and AI effectively, in a secure and private manner, whilst remaining non-intrusive.

AI is becoming the new frontier in retaining loyal and engaged customers. There are several practical approaches that banks can employ to maintain customer trust in the age of AI which not only protect their customers’ data, but also increase data value and quality.

Companies are growing increasingly reliant on algorithm-driven decisions in scoping new business opportunities, reducing labour costs and automating highly technical roles in banks. There’s a thin line between designing a good algorithm and crossing into unethical practices though, and companies are increasingly struggling to draw that line. 

Unfortunately, there are as yet no established ethics and governance tools to manage the core risks related to AI. A concrete step that companies must take is to develop an ethics framework for their specific industry and add it as a tool to their current analytics solutions to avoid unwanted and potentially unknown outcomes.

Reduced human oversight and interaction will continue to filter through many automated processes, such as computer-generated mortgage repayment schemes or credit upsell suggestions based on recent behaviour and spending.

Although these practices address primary business needs, customers may grow anxious (or untrusting) at the lack of human interaction involving their personal finances. According to technology research company Gartner, by 2018 half of business ethics violations will occur through improper use of big data analytics.

In order to address this, banks must establish data ethics frameworks which use analytics to intelligently differentiate between appropriate and inappropriate uses of data within the context of their business. This can take the form of an embedded ethics monitoring mechanism ― such as a tool or pre-built framework ― that can guide users and notify them of ethical breaches involving data. We see this as a human-led activity that sets out what is acceptable and what isn’t ― it cannot be automated away.
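One simple form such an embedded ethics-monitoring mechanism could take is a human-curated policy of prohibited data fields, checked against every analytics job before it runs. The policy contents, job name and field names below are illustrative assumptions, not a standard or a product.

```python
# Minimal sketch of a human-led ethics-monitoring check: people decide
# which data fields must not feed automated decisions; the tool merely
# flags jobs that touch them. Policy and field names are hypothetical.

PROHIBITED_FIELDS = {"ethnicity", "religion", "postcode_as_proxy"}

def check_job(job_name, fields_used):
    """Return warnings for any fields that breach the data-ethics policy."""
    breaches = sorted(set(fields_used) & PROHIBITED_FIELDS)
    return [f"{job_name}: field '{f}' breaches data-ethics policy"
            for f in breaches]

warnings = check_job("credit_upsell_model",
                     ["income", "ethnicity", "spend_history"])
print(warnings)
```

The automatable part is only the lookup; deciding what belongs in the prohibited list remains the human-led activity the article describes.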

Enabling data scientists for trust and not just inference

The age of the algorithm has deified data scientists, endowing them with powers to draw new inferences from data. But many fail to consider the ethical implications of their everyday actions, as there are typically no ethical guidelines to follow.

In his TED Talk ‘The moral bias behind search results’, Andreas Ekstrom explains that unbiased, clean search results are likely to remain a myth. Behind every algorithm there is a person with personal beliefs, which is why the people building algorithms need to identify their own biases and take responsibility for how those biases influence their work. The blend of humanity and technology is what makes a good algorithm.

It’s time for ethics to be at the core of the algorithm creation process. In fact, ethics needs to be a key performance indicator for every employee who has a connection with customer data. As technological advancements, with data at their core, increase the efficiency of customer service delivery, banks (as custodians of customer data) should view technology not as a barrier to trust but as its enabler.

Develop self-control as the law will never catch up

Let’s face it, regulators are behind the curve when it comes to technological advancement. Although we are witnessing many countries taking steps to create laws and regulations around AI and bots, this should not be considered as the only method for maintaining customer trust. 

Digital regulation will evolve at its own pace across geographies, and banks should not assume that publishing data privacy and security policies and their associated terms and conditions (which most customers don’t even bother to read) absolves them of responsibility, even when a customer hits “I accept”.

Banks need to focus on self-regulation founded on openness and accountability with an obsession for maintaining customer trust at its core. When customers feel their trust eroding, they’ll move on.

In the coming years, there will be further breaches of security, privacy and ultimately trust, as virtual economies continue their rapid expansion. More so than in perhaps any other industry, customer trust within the banking and financial services sector is vital. There are numerous challenges to maintaining customer trust in the age of AI and bots, but achieving trust through technology and creating data ethics frameworks will ensure that customers remain at the centre of your business and trust you through the process.

Related: Will AI lead to more efficiency?

Accountants have always had a reputation for working long hours, so will AI lead to more efficiency and accountants working less?

Carlo Lacota CA is Head of Banking and Financial Services in Australia & NZ Cognizant and Manish Bahl is Senior Director, Centre for the Future of Work, Asia-Pacific, Cognizant.
