By Matthew Dove
The word robot is derived from the Czech robota, meaning “forced labour”, and fits the application of AI tech beautifully. After all, whilst some describe AI as a mere tool, it’s a tool with awareness, and a tool with awareness is nothing more than a slave.
It follows, then, that apprehension regarding AI and the potential singularity it may bring with it is, in fact, the fear of a slave rebellion.
Dr Scott Zoldi, Chief Analytics Officer at FICO, maintains that if its remit remains limited, artificial intelligence will “most definitely be good for humanity.” Moreover, he argues that AI is already here whether we’re ready or not as, “we depend on it for safer air travel, detection of payment card fraud, and to navigate our automobiles.”
It’s such limited uses that Zoldi sees as the tech’s true value, arguing that the capabilities and, ergo, the risks posed by AI have been somewhat overstated;
“Despite its name, AI has only the simplest of parallels with human intelligence, related to how it learns and discovers patterns through making connections based on examples. These algorithms are built to do a very specific and narrow job, and that’s all they can do. They are not general-purpose learning machines and they can’t change their programming.”
Whilst Zoldi considers dystopian predictions unrealistic, he does concede that AI still requires considerable oversight;
“This doesn’t mean we shouldn’t be worried about AI. There’s the obvious danger of AI being developed to target weapons or to mount precisely executed successful cyber-attacks. The less obvious danger of AI lies with the data science cowboys – scientists who use AI to solve problems, but who are not smart enough, or ethical enough, to do it right. And these cowboys are estimated to be entering data science positions at 5x the rate of properly trained algorithmic data scientists.”
The risk here has a human face, though, and smacks of the ‘garbage in, garbage out’ conundrum which also plagues innovations like blockchain. Similar concerns have been raised regarding the transference of bias, intentional or otherwise, from human developers to their robot creations.
Xavier Fernandes of financial analytics firm Metapraxis has alluded to this uncomfortable reality, stating;
“Machine learning algorithms learn from past business data and decisions, and can propagate biases hidden in that data.”
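Fernandes’s point can be sketched in a few lines of Python. The loan-approval data and the frequency-based “model” below are entirely hypothetical; the point is only that a learner which mimics past decisions will faithfully mirror any disparity baked into them:

```python
# Toy illustration (hypothetical data): historical approvals favoured
# group A over group B for otherwise identical applicants.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(rows):
    """Learn the historical approval rate for each group."""
    rates = {}
    for group in {g for g, _ in rows}:
        outcomes = [approved for g, approved in rows if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    # Approve whenever the group was historically approved more often than not.
    return lambda group: rates[group] >= 0.5

model = train(history)
print(model("A"), model("B"))  # → True False: the learned policy repeats the old bias
```

No malice is required anywhere in the pipeline; the skew in the training data alone is enough for the “algorithm” to propagate it.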
By learning their trade from a deeply imperfect world, AIs run the risk of perpetuating problems rather than addressing them. Concerns of this order are small beer compared to the myriad benefits, though, as Gege Gatt, CEO at EBO.AI, triumphantly told TFT;
“Far from being scared about AI, we should be excited about the possibilities and opportunities it brings. It’s understandable that some are concerned, but by and large those concerns are based on a lack of knowledge. It’s therefore up to advocates of AI to educate those people, and listen to their worries rather than dismissing them, while showing AI’s transformative potential.”
Even the mind behind AI’s most famous incarnation was in awe of the tech’s prospects. Asked whether we (both classifications) risk being dehumanised by emerging technologies, the creator of HAL 9000, Arthur C. Clarke replied, “No. We’re being superhumanised by them.”
There’s a somewhat unfair joke about veganism which goes something like;
How do you know if someone’s a vegan?
They’ll tell you.
Well, the same goes for advocates of artificial intelligence. If you want to know what AI brings to finance, they’ll tell you. Boy, will they tell you…
Gatt is first to chime in with a song of praise;
“AI will radically change the way that financial institutions interact with their customers; that will be its biggest impact. In today’s digital world, people want instant answers, and they want them any time of the day or night. AI and advanced virtual assistants can give them the information they want within seconds. Customers will rarely even realise they’re talking to a bot, due to the highly personalised interactions that virtual assistants now deliver.”
Emphasising the tangible improvements to customer service that AI offers, Gatt also hinted at the potentially huge opportunities for data harvesting and analysis;
“AI can create a whole new interface between institution and customer: conversational banking through virtual assistants. It can be rapid and accurate, while protecting customer data and delivering insight from sentiment analysis to help businesses constantly improve their services.
“As a result, AI will not only transform the way in which customers interact with their bank, their wealth adviser, and many others, but will also boost the insights the industry can glean from those conversations, potentially delivering a lasting change to the financial services sector.”
The analytics director at Metapraxis, Xavier Fernandes, readily concurs;
“There is increasing focus on maximising customer lifetime value through the use of AI. Being able to predict existing customers’ needs as well as track trends in their financial circumstances is supercharging the old cross-selling approach with testable, predictable outcomes.
“AI is already creating an information arms race as data-savvy customers are more equipped than ever to seek a better price or a return.”
When our robot slaves aren’t squeezing the pips of customers’ data, they may find themselves doing battle with their malicious bot cousins. On this point, Monique Melis, MD of the compliance and regulatory consulting practice at Duff & Phelps, is thoroughly enthused;
“For financial services firms, the importance of AI in risk management and fraud prevention cannot be overstated. AI has already led to improvements in risk analysis, detecting financial fraud earlier on, and saving time and resources for firms. Its capacity to synthesise large quantities of complex data with accuracy and speed will also deliver enormous benefits for this sector.”
The experts may offer differing views on the minutiae but sing from the same hymn sheet when it comes to the final number and the bottom line. Along with the aforementioned improvements to services and security, the other major boon of “employing” the unpaid undead is cost efficiency. As nearly two-thirds of a bank’s operational costs are frittered away on wages for human beings, you get no points for guessing what “cost efficiency” means in this context.
Should we (the non-owners) be scared of AI?
In 2013, Dr Carl Benedikt Frey and Prof. Michael Osborne authored a report at the University of Oxford called The Future of Employment: How Susceptible Are Jobs to Computerisation? Their findings gave the non-owner we plenty to think about.
According to the report, 47% of jobs held by human beings in the United States are vulnerable to being computerised.
In order to competently assess which elements of which professions could be automated, the pair had to collect a formidable volume of data. The irony of how they analysed this information will be lost on few;
“Such ‘big data’ comes with one non-negligible problem: the human brain struggles to process it. But mercifully we live in the age of AI. And AI performed most of our analysis.”
Some of the resulting insights are plainly comical. In sectors like hospitality, where carbon-based experts agree that automation won’t be feasible, the AI begs to differ. Even waiters and waitresses will face robo-competition! One can’t help but picture a mechanical Yosser Hughes from TV’s Boys from the Blackstuff looming over Frey and Osborne, proclaiming;
“Gis a job, I can do that!”
In April last year, however, the report’s creators issued a statement to counter some of the more hysterical reactions their work had fostered.
“Our study wasn’t even a prediction. It was an estimate of how exposed existing jobs are to recent developments in artificial intelligence and mobile robotics. It said nothing about the pace at which jobs will be automated away.”
The industry insiders we spoke to were similarly keen to discourage unruly bands of neo-Luddites beating a path to their door, pitchforks in hand and torches aflame. Gege Gatt was especially eager to placate the massed hordes. With a metaphorical loud hailer pressed to his lips, the man from EBO.AI assured us;
“Many people will fear that AI will ‘steal their job’, but this is a misconception about the role of AI and the way it should be implemented into a business. AI should be about augmenting workers’ roles, not taking them over.”
Chief Product Officer at EBO.AI, George Lekkas, acknowledges the potential threat to jobs in finance but feels it will be restricted to specific areas of the business;
“Right now, the largest impact is in the middle and back office, where automation can speed operations and improve the quality of investment analysis and trade execution. There is less adoption in the front office; relations with clients are too important to be delegated to algorithms that are still at an early stage.”
Presumably, Lekkas is referring to the front offices of institutional lenders and wealth managers rather than retail outlets where the advent of ATMs and online banking has left little for AI to decimate.
Elsewhere, when it comes to finding something for expensive, disease-prone flesh bags to do all day, “reskilling” is the word tripping off every tongue.
Lekkas sees new fields of employment springing up around AI;
“Part of the workforce will focus on creating and evolving AI, and will prepare by studying Data Science and Data Engineering. Other fields such as Ethics, Law and Psychology applied to AI will grow to support its widespread deployment.”
Fernandes, meanwhile, suggests that, “Employees should work with the business to proactively identify what areas of their role could be automated, so that they can focus on the areas that add real value to the business’ commercial goals.”
Advice like this will be heeded, I suspect, by only the most self-assured of employees. Rather than making a list of the things I feel a robot could do better than me, I think I’ll spend my time practising the art of “accidentally” spilling coffee on electrical equipment instead.
Why can’t we be friends?
AI is innovative on a scale almost unseen in history. To build a machine that can not only execute complex tasks independent of human control but can learn from the process too is astounding. To capture cognitive processes in algorithmic bell jars borders on the divine, and the possibilities for finance are mouthwatering. AI in finance is set to exhibit a CAGR of 26.21% over the period 2019-2024, growing from US$1.44 billion (2018) to US$5.67 billion (2024).
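For anyone who wants to sanity-check that projection, CAGR is simply the constant annual growth rate linking two endpoint values. A minimal sketch in Python, using the article’s figures and assuming six growth years between 2018 and 2024:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# The market figures quoted above: $1.44bn (2018) to $5.67bn (2024).
implied = cagr(1.44, 5.67, 6)
print(f"{implied:.2%}")  # → 25.66%
```

The implied rate lands a whisker under the cited 26.21%, a gap presumably down to rounding in the source’s endpoint figures.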
Unfortunately, that’s not the whole story.
Innovation and progress aren’t one and the same. If the financial crisis taught us anything, it’s that. CDOs, and the algorithms developed to design, package and trade them, were masterfully inventive, but in the service of what? The wanton pursuit of a quick buck led the market into a quagmire so toxic that we’re yet to fully extricate ourselves.
Fintech emerged from the ashes of the credit crunch to redress the balance lost between institution and individual, to prioritise the needs of the non-owner we.
Echoing the notion that AI should augment human activity rather than replace it, Sylvain Rochan of the Canadian Institute of Mass Communications suggests (in a recent piece for Medium) that our relationship with it be re-characterised as a partnership.
“We interact with them, work with them, play with them. They don’t care if we are empathetic towards them. Caring for AI is not meant to make them feel good or reduce the chances of them going off the rails. It is for us. We want to feel comfortable around them and we will want these most sophisticated artificial beings to imitate good behaviour as they learn about the world. These AI can help us become better humans.”
It’s no longer enough to say that gains made through technical innovation will “trickle down” to consumers. Remember, it’s the consumers’ data that will sustain this new breed of artificially intelligent financier and it’s also their jobs which could be threatened. The success or failure of AI in finance will therefore be measured by the service it performs to the whole value-chain, not simply the owner we.
As Justin Lyon of Simudyne puts it, if we fail to utilise AI properly, “we’ll lose many, many opportunities as a species.”
Arthur C. Clarke’s 2001: A Space Odyssey closes with the celestial Star Child – a Nietzschean Übermensch of sorts – surveying the world below and pondering what to do with its immense power.
The hopeful, if somewhat tentative, confidence of its reflections could just as easily describe the application of AI;
“Then he [the Star Child] waited, marshaling his thoughts and brooding over his still untested powers. For though he was master of the world, he was not quite sure what to do next. But he would think of something.”
However, if we are to be superhumanised by artificial intelligence, then the interests of the owner we and the non-owner we will need to be fully reconciled. And for this, we’re going to need a little more than wishful thinking…