
Why AI Should Be Redefined To Mean “Augmented Intelligence”

When someone mentions Artificial Intelligence, what comes to mind? Possibly robots taking over the world? You wouldn’t be alone if you imagine that. Elon Musk and other tech leaders are so concerned about this possibility that they are figuring out how to colonise Mars so we have somewhere to go when computers rule the world. It isn’t hard to imagine the source of this paranoia: anything unknown is inherently frightening.

The scariest scenes in movies come before the monster is revealed, and frankly, computers smarter than humans are as close to that unseen monster as we have in real life. But this doesn’t have to be our relationship with AI, and the truth is, the human-replacing artificial intelligence scenario is still far away from a technology standpoint. Instead, I suggest we reframe the AI acronym to stand for “Augmented Intelligence”: machine learning that helps humans make better-informed decisions and work more efficiently, creating less tedium and stress in our lives.

People have always been afraid of new technologies. At the dawn of the Industrial Revolution, “flying shuttle” weaving looms were outlawed in the UK over concerns that hand weavers would be put out of business. In the early 1900s, with the advent of automobiles and assembly lines, people feared that all the jobs supported by equine transportation would disappear. In the 1980s, with the rise of the computer, these fears surfaced again, this time that computers would replace humanity. These fears boil down to a single cause: as a species, humans do not adapt well to change and prefer things to stay as they have always been. This is ironic, because the value of human labour lies in our flexibility and our intuitive sense of when things are off.

In fact, human flexibility has many times been added back into manufacturing to improve efficiency after total automation projects failed. As early as 1908, Henry Ford struggled with early production tests, as too much automation was slowing down the assembly line; adding humans back into more parts of the process increased efficiency. Elon Musk faced the same thing at Tesla’s plants this year, rehiring for many of the jobs he had previously eliminated through automation.

It’s safe to assume that, with automation and artificial intelligence tools all the rage in the Fintech world, it is only a matter of time before companies realise that the same truth applies to services and analytics as it does to manufacturing. You may wish to argue this point, especially since analytics relies on pure math, but hear me out. Analytics requires a certain amount of filtering of data, and that takes a human touch. There is a saying in the accounting field, “garbage in, garbage out,” and the same principle applies to analytics computations.

Artificial intelligence analytics have been especially aggressive in the products some banks use to meet Anti-Money Laundering (AML) regulations. I’ve heard endless horror stories about people whose accounts have been closed after many years of loyalty to a bank, for no apparent reason, with experts tracing the closures back to the AML analytics. Banks don’t seem to realise how harmful this can be to their reputation. One can assume that their customer acquisition costs alone would be high enough to warrant a human review and investigation before terminating accounts blindly, not to mention the fallout and damage from the negative PR. Yes, the fines for failing to meet all the AML disclosures are atrocious, but there is a better middle ground that retains good customers while weeding out the bad actors. Meeting AML requirements while retaining the maximum number of good customers means bringing that human touch back in with Augmented Intelligence.
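As a minimal sketch of what that middle ground could look like, the routine below routes AML alerts to a human investigator instead of letting the model close accounts on its own. All names and the threshold are hypothetical, not any bank’s actual system:

```python
# A minimal, hypothetical sketch: AML analytics flag accounts, but a
# compliance officer, not the model, makes the final termination call.

REVIEW_THRESHOLD = 0.6  # illustrative cut-off for the model's risk score

def route_aml_alert(account_id: str, risk_score: float) -> str:
    """Decide what happens to an account flagged by AML analytics.

    Low-risk alerts are cleared automatically; anything above the
    threshold is queued for a human investigator rather than closed.
    """
    if risk_score < REVIEW_THRESHOLD:
        return "cleared"                  # customer unaffected
    return "queued_for_human_review"      # a person makes the final call

print(route_aml_alert("acct-123", 0.2))   # cleared
print(route_aml_alert("acct-456", 0.95))  # queued_for_human_review
```

The point of the sketch is the missing branch: there is deliberately no path where the model returns “closed” by itself.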

Fintech companies have also been applying many artificial intelligence tools to fraud prevention, and it is an area where that human touch has been implemented well. Five years ago, if fraud was detected on a credit card, the transaction was simply declined with no notification; without warning, the card would become completely useless, and it often took many phone calls and much difficulty to turn it on again.

Now notifications are sent out via text and email to verify a suspicious transaction. The user affirms or denies the transaction, and the card is good to go again. This element of asking the user to verify the suspicious transaction is a great example of augmented intelligence: machine learning tools continually compare transactions with what is “normal,” but the final decision of whether something is a problem lies not with the computer but with the user.
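The verify-with-the-user flow described above can be sketched in a few lines. The anomaly check and the confirmation callback are illustrative stand-ins (a real system would run a trained model and an SMS round-trip), but the division of labour is the point:

```python
# Hypothetical sketch of the fraud-verification flow: the machine flags,
# the cardholder decides. All thresholds and names are illustrative.

def looks_suspicious(amount: float, typical_max: float) -> bool:
    """Stand-in for a machine-learning model that compares a
    transaction against the cardholder's 'normal' behaviour."""
    return amount > 3 * typical_max

def handle_transaction(amount: float, typical_max: float,
                       user_confirms) -> str:
    """Flag unusual transactions, but leave the final call to the user."""
    if not looks_suspicious(amount, typical_max):
        return "approved"
    # Suspicious: notify the cardholder (e.g. by text) and ask them.
    if user_confirms(amount):
        return "approved"        # user recognised the charge
    return "declined"            # user denied it; block the card

# Simulated user responses in place of a real text-message round-trip:
print(handle_transaction(40.0, 100.0, lambda a: False))   # approved (never flagged)
print(handle_transaction(900.0, 100.0, lambda a: True))   # approved (user confirmed)
print(handle_transaction(900.0, 100.0, lambda a: False))  # declined
```

Note that the model only gates which transactions reach the user; the approve/decline decision itself is always human.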

Many other areas of professional decision-making are at risk of new types of liability lawsuits as artificial intelligence becomes more common. Artificial intelligence continually makes headlines around medical diagnostic technology. In recent cancer trials, artificial intelligence read scans with greater accuracy than human doctors. This breakthrough seemingly means the ability to correctly diagnose more patients, but the question remains: what happens if the computer is wrong and the human is right?

There will come a time in the not-far future when computer diagnoses are considered accurate enough that disagreeing with the artificial intelligence answer means a doctor or lawyer risks a malpractice lawsuit. Even if a doctor is right nine times out of ten when they disagree with the computer, they will still be liable that tenth time. That would be enough to make most physicians follow a computer diagnosis even against their better judgment. But if AI is redefined as augmented intelligence, the decision stays squarely with the doctor, and the diagnosis is seen as one tool among many, leaving room for the advantages of human intuition and flexibility of thought.

At the end of the day, people are important. The idea of replacing humans in Fintech and professional services appeals to businesses because it cuts costs, but for optimal efficiency the human touch should not be eliminated entirely. Finding the right balance is not always obvious when planning a new technology solution, but viewing AI as augmented intelligence instead of artificial intelligence reframes the discussions necessary to optimise this inevitable transition to include the human element.

Written by Crystal Stranger (CEO and Founder of PeaCounts) for the Fintech Times

