Powered by MOMENTUM MEDIA
InvestorDaily

Could algorithms destabilise the entire financial system?

By James Mitchell

This critical question was raised by the World Economic Forum as it considered the impact of artificial intelligence on financial services.

The World Economic Forum’s new report, Navigating Uncharted Waters, was prepared in collaboration with Deloitte and released today. It is based on more than 10 months of extensive research, global workshops and contributions from the likes of RBS, Morgan Stanley, UBS, JP Morgan, Credit Suisse, Microsoft, BlackRock and the New York Stock Exchange.

The report warned that widespread adoption of artificial intelligence (AI) has the potential to create a fundamentally different kind of financial system, one in which the interconnections between humans and machines grow, even as humans struggle to understand the opaque behaviours of AI systems. 

“As a result, crises and critical events may occur more frequently and market shocks may be intensified,” the report noted. 

“Emerging risks will no longer sit neatly inside a supervised institution, but instead will be dispersed across an interconnected set of actors that includes small specialised fintechs and large tech companies.”

As a result, the report suggests supervisory authorities and regulators will need to reinvent themselves as hubs for system-wide intelligence. 

The rise of AI-powered systems has some worried that the machines will become too complex to understand. Others fear that the humans behind the machines are the greater cause for concern.

The report warned that optimising algorithms that become locked into competition with each other could inadvertently destabilise markets. For example, two AI systems may continuously bid against each other, optimising their actions to achieve a single objective like the highest market price or return. 

“The average market price continues to rise as they repeatedly outbid each other, until one actor is no longer able to sustain its bids due to profitability constraints,” the report warned. “Over time this competitive optimisation may lead to a deterioration of actors’ balance sheets, encouraging riskier behaviours in order to maintain profitability, or leaving them out of the market completely.”
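The dynamic the report describes can be sketched in a few lines of code. The following is an illustrative toy model, not drawn from the report itself: two hypothetical automated bidders take turns raising the price, each constrained by its own profitability cap, until one can no longer sustain its bids and drops out.

```python
# Toy sketch (illustrative only): two algorithmic bidders locked in
# competitive optimisation. Each outbids the other until its next bid
# would breach its profitability constraint, at which point it exits.

def bidding_war(cap_a: float, cap_b: float, start: float = 100.0,
                step: float = 1.0) -> tuple[str, float]:
    """Return (winner, final_price) once one bidder exceeds its cap."""
    price = start
    bidders = [("A", cap_a), ("B", cap_b)]
    turn = 0
    while True:
        name, cap = bidders[turn % 2]
        next_bid = price + step
        if next_bid > cap:
            # This bidder can no longer sustain its bids; the rival wins.
            winner = bidders[(turn + 1) % 2][0]
            return winner, price
        price = next_bid  # outbid the rival; the market price keeps rising
        turn += 1

winner, final_price = bidding_war(cap_a=110.0, cap_b=105.0)
print(winner, final_price)  # A 105.0
```

Even in this simplified form, the price ratchets steadily upward and the outcome is decided purely by which actor's constraint binds first, mirroring the report's point that such feedback loops can push actors toward the edge of profitability or out of the market entirely.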

Deloitte Canada partner and global banking & capital markets consulting leader Rob Galaski said using AI in financial services will require an openness to a fundamentally new way of safeguarding the ecosystem, different from the tools of the past.

“To accelerate the pace of AI adoption in the industry, institutions need to take the lead in developing and proposing new frameworks that address new challenges, working with regulators along the way,” he said. 

The report notes that financial services firms that are first movers on AI face higher risks by deploying emerging technologies without regulatory clarity. But they also have the most to gain. 

“AI offers financial services providers the opportunity to build on the trust their customers place in them to enhance access, improve customer outcomes and bolster market efficiency,” World Economic Forum head of financial services Matthew Blake said. 

“This can offer competitive advantages to individual financial firms while also improving the broader financial system if implemented appropriately.”

Algorithmic bias is a top concern for financial institutions, regulators and customers when it comes to the use of AI in financial services. AI’s ability to rapidly process new and different types of data raises the concern that AI systems may develop unintended biases over time; combined with their opaque nature, such biases could remain undetected. 

Despite these risks, AI also presents an opportunity to reduce unfair discrimination or exclusion, for example by analysing alternative data that can be used to assess “thin file” customers whom traditional systems cannot evaluate due to a lack of information.

“Given that AI systems can act autonomously, they may plausibly learn to engage in collusion without any instruction from their human creators, and perhaps even without any explicit, trackable communication,” the report noted. 

“This challenges the traditional regulatory constructs for detecting and prosecuting collusion and may require a revisiting of the existing legal frameworks.”