By Rhys Powell, Managing Director UK, EBO.ai
At a time when diversity and inclusion are top of the agenda for many companies, some firms worry that the use of AI could actually scupper this ambition. In reality, the culprit tends to be human failings, which often lead to bias in the AI systems we create.
If humans are inherently imperfect and biased, how can the systems we build be any different? Given that AI systems are designed by humans, it is unsurprising that we inject our biases into them, even when it is unintended. But how can this be rectified?
As most AI systems are based on machine learning, we need to start by looking at how we collect and select the training data we use. Training data is not only the ‘textbook’ that teaches a company’s AI to do its task, it is also used to continuously improve its success rate and the accuracy of its predictions.
If the training data is not inclusive and balanced enough, the systems we create could engrain this bias as well, which could lead to certain sections of the population we work with – or look after – being unfairly discriminated against.
It’s easy to see how this could manifest itself in the financial services sector. Imagine if AI was being used in the bank account opening verification process. Depending on the training data that was used to ‘teach’ the system, it’s possible that an engrained bias against certain customers could result in them being continually denied services or in flawed interactions between a customer and the virtual assistant tasked with dealing with their case. This is directly contrary to the goal of greater financial inclusion for everyone in society.
Selecting the right training data is therefore essential to the development of the machine learning model. The best examples of good training data are those which use bespoke information that is relevant to the task at hand. Without access to high-quality, relevant data, AI systems will learn to do their job only haphazardly at best, making detailed insight and analysis from AI/customer interactions absolutely vital.
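As an illustration, a team might run a simple representativeness check on the training set before any model is built. The sketch below counts each demographic group's share of the records and flags groups that fall below a chosen floor; the group labels, field name and 10% threshold are hypothetical, not a prescribed standard:

```python
from collections import Counter

def group_shares(records, group_key):
    """Return each group's share of the training records."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(records, group_key, min_share=0.10):
    """Flag groups whose share of the data falls below the chosen floor."""
    shares = group_shares(records, group_key)
    return [g for g, s in shares.items() if s < min_share]

# Hypothetical account-opening training records, skewed towards younger customers
training = (
    [{"age_band": "18-30"}] * 70
    + [{"age_band": "31-60"}] * 25
    + [{"age_band": "60+"}] * 5
)
print(flag_underrepresented(training, "age_band"))  # → ['60+']
```

A check like this does not remove bias by itself, but it surfaces the imbalance early enough to gather more data before the model engrains it.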
AI can, however, become biased over time, even if it wasn’t built that way initially. The external information a machine interacts with can have a significant impact on how it learns, and as AI doesn’t question things in the same way as humans, that information can be accepted readily, even if it is detrimental to fighting bias. We can test machines to see if they carry bias, if they have inherited it through data training, and whether there is potential to rectify it, helping companies who deploy AI retain an important measure of control and ability to steer their machines in the right direction.
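One common form such a test takes is a demographic parity check: compare the model's approval rate across customer groups and raise an alert if the gap exceeds a tolerance. A minimal sketch, assuming hypothetical outcome data and an illustrative 10% tolerance:

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates)

# Hypothetical verification outcomes from a deployed model
outcomes = [("A", True)] * 80 + [("A", False)] * 20 \
         + [("B", True)] * 55 + [("B", False)] * 45

gap = parity_gap(outcomes)
print(f"parity gap: {gap:.2f}")   # 0.80 vs 0.55 approval -> gap of 0.25
if gap > 0.10:                    # the tolerance is a policy choice
    print("bias alert: review training data and retrain")
```

Running a check like this on a schedule is one way a firm can keep the "measure of control" described above, catching drift that creeps in after deployment.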
Using AI to encourage inclusivity
The good news is that AI can also play a significant role in creating unbiased structures. It can, for example, influence recruitment decisions by making relevant information available at decision points throughout the hiring process. This helps people to rely less on ‘gut’ feeling and more on the job-related information.
AI systems can even recommend what text to include in job postings in order to attract the most diverse applicant pool. By tapping into the latest market data, these systems can suggest the correct salary range for a particular role, to avoid any disparities between employees of different genders or ethnic backgrounds.
When used in this way, AI actually has the potential to remove bias and support processes that enable a more diverse and inclusive workforce in any industry. Financial services is often perceived as a male-dominated, middle-class environment, so this could open up job roles to a broader talent pool, enabling firms to combat the stereotype and instead focus on finding the best person for the job.
One size doesn’t fit all
There is a common misconception that AI is only for the larger players in any market, but this simply isn’t the case. AI can transform customer interactions, among other things, regardless of a company’s size.
Historically, SMEs have been less enthusiastic about new technologies, but that is already changing.
A growing number of smaller enterprises are seeing the value of using AI to process huge amounts of data – and to do so much faster than people. Using the technology this way can dramatically reduce the time that staff spend on tasks that distract them from more valuable or pressing matters, such as customer service.
Customer service is a key priority for the financial services sector, with most customers now used to receiving service 24/7. AI can help here too in the form of intelligent bots and virtual assistants, which can be highly personalised and taught to avoid any bias.
As a result, customers that might otherwise get frustrated waiting to get through to a call centre, or waiting for a response to an email, can instead experience quick, effective, even enjoyable conversations with the business they’re trying to contact. These virtual assistants can answer routine questions quickly or transfer the customer to an interaction with the most appropriate human member of staff if necessary.
Bots also provide the business with a complete, accurate, easily accessed record of each customer interaction, including details of how the customer felt at the end of the conversation, which can be used to identify and remove any bias within the system very quickly. All of this data is captured immediately and automatically, which not only helps to build a deeper understanding of each customer’s experience, but also enables firms to improve their service.
Going a step further
Bias isn’t the only ethical conundrum that must be solved. Data security and privacy are valid concerns held by many when thinking about bot and virtual assistant interactions too. How can people be sure that any information they share with an AI assistant will be safe, and used in the correct way?
More importantly, the right to be forgotten is a vital consideration which must be appreciated when AI systems are being designed, as the capacity for persistent memory has implications for any data that customers share. When used improperly, such data could be the foundation of bias, and could lead to intrusive practices carried out by virtual assistants that don’t know any better.
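In practice, that means building erasure into the data layer from the start rather than bolting it on. A toy sketch of a conversation store that honours deletion requests; the class and method names are illustrative, not any specific product's API:

```python
class ConversationStore:
    """Toy in-memory store supporting right-to-be-forgotten requests."""

    def __init__(self):
        self._records = {}  # customer_id -> list of transcript entries

    def log(self, customer_id, message):
        """Record one transcript entry against a customer."""
        self._records.setdefault(customer_id, []).append(message)

    def forget(self, customer_id):
        """Erase everything held about one customer; return entries removed."""
        return len(self._records.pop(customer_id, []))

store = ConversationStore()
store.log("cust-42", "How do I open an account?")
store.log("cust-42", "My postcode is ...")
print(store.forget("cust-42"))  # → 2 (entries erased)
print(store.forget("cust-42"))  # → 0 (nothing is retained afterwards)
```

The point of the sketch is the guarantee: once `forget` runs, no trace of the customer remains for later training or profiling to pick up.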
If we are to solve the issue of biased AI, we must ensure that the customer is central to any solution, and that businesses who deploy AI do so in an ethical manner with respect for the rights of those interacting with their service.
AI: Advantageous to all?
AI is a technology that is both transformative and disruptive. As such, it has already had an enormously positive impact globally, both commercially and within society.
Its evolution over the last several years has been facilitated by the availability of enormous amounts of data, major technological advances in computational power and storage capacity, as well as significant scientific and engineering innovation related to AI methods and tools.
As these systems develop and improve, they are sure to continue to impact society in new ways. Because of this, it is important that businesses believe in, and commit to, trustworthy processes and ethical principles. Only then will they be able to fully and confidently reap the benefits that AI can offer.