Any behaviour, whether of people, numbers or events, is ultimately unpredictable, says Dr Scott Zoldi of FICO. Will the advent of AI change all that?
What are the greatest barriers to wide-scale implementation of AI?
There are a variety of barriers to wide-scale implementation of AI. Two of the greatest are the lack of data scientist talent and the challenges of operationalising AI. There is a huge shortage of qualified data scientists, and educational institutions, despite working to train more of them, often err on the side of entry-level degrees rather than training scientists to PhD level with deep technical expertise. The problem is then exacerbated by a lack of domain expertise.
On operationalisation, many organisations are dealing with legacy IT systems and architectures that are inadequate for building and deploying artificial intelligence solutions. The combination of data scientists with limited depth of experience (particularly in operational AI) and aged IT systems makes the barriers high indeed.
How can barriers to implementation be overcome?
The industry has already made some strides by using open source to fill gaps in data scientists' experience, although open source still requires that the scientist know when to use different algorithms and under what constraints.
The next move is to utilise notebooks that orchestrate open source, proprietary AI intellectual property, and operational AI software code for specific domains. These notebooks allow scientists to follow recipes and best practices that often can't be learned in school, or that aren't available in organisations where the history of successful deployment of operationalised AI is limited or non-existent. To address the IT hurdles, these notebooks are increasingly developed with a view to directly outputting operational AI and scoring systems, for example using Docker containers and Kubernetes to make productionised, proven components easy to deploy.
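As a rough illustration of the kind of productionised scoring component described above, here is a minimal Python sketch using only the standard library. The feature names, weights, and endpoint are hypothetical stand-ins for a real deployed model, not a FICO system; a process like this is what would typically be packaged into a Docker image and run under Kubernetes.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def score(features):
    """Toy stand-in for a deployed model: a weighted sum of inputs.
    The weights here are hypothetical, for illustration only."""
    weights = {"amount": 0.002, "velocity": 0.3}
    return sum(weights.get(name, 0.0) * value
               for name, value in features.items())

class ScoreHandler(BaseHTTPRequestHandler):
    """Minimal JSON-in, JSON-out scoring endpoint."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = {"score": score(json.loads(body))}
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve (e.g. as the entry point of a container image):
# HTTPServer(("", 8080), ScoreHandler).serve_forever()
```

Keeping the scoring logic behind a small, stateless HTTP interface like this is what makes the component easy to containerise and replicate under an orchestrator such as Kubernetes.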
Should we be scared of AI? What do you think the biggest risks are?
Artificial Intelligence (AI) will most definitely be good for humanity: already we depend on it for safer air travel, for detecting payment card fraud, and for navigating our automobiles. Despite its name, AI has only the simplest of parallels with human intelligence, related to how it learns and discovers patterns through making connections based on examples. These algorithms are built to do a very specific and narrow job, and that's all they can do. They are not general-purpose learning machines and they can't change their programming.
But this doesn't mean we shouldn't be worried about AI. There's the obvious danger of AI being developed to target weapons or to mount precisely executed cyber-attacks. The less obvious danger lies with the data science cowboys: scientists who use AI to solve problems, but who are not smart enough, or ethical enough, to do it right. These cowboys are estimated to be entering data science positions at five times the rate of properly trained algorithmic data scientists.
Machine learning models built naively may incorporate non-causal relationships learned from limited data, spurious correlations, and outright bias. These fundamental flaws are hard to spot, because AI and machine learning models are generally 'black boxes': it's hard to see what patterns and variables they're finding and using.
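To make the spurious-correlation risk concrete, here is a minimal, hypothetical Python sketch (the fraud-detection framing is illustrative only, not FICO's method). It simulates a binary feature that is truly independent of a fraud label: in small training samples the feature can show a sizeable apparent fraud-rate gap, which all but vanishes once enough data is seen.

```python
import random

random.seed(42)

def fraud_rate_gap(n):
    """Simulate n cases where a binary feature is truly independent of a
    fraud label (base fraud rate 10%), and return the apparent gap in
    fraud rate between feature=True and feature=False cases."""
    rows = [(random.random() < 0.5, random.random() < 0.1)
            for _ in range(n)]

    def rate(flag):
        frauds = [fraud for feat, fraud in rows if feat == flag]
        return sum(frauds) / max(len(frauds), 1)

    return abs(rate(True) - rate(False))

# Worst apparent gap across 200 small training samples: a naive model
# could easily latch onto this feature as "predictive".
small_gap = max(fraud_rate_gap(30) for _ in range(200))

# With plenty of data, the gap shrinks towards its true value of zero.
large_gap = fraud_rate_gap(100_000)
```

The point of the sketch is that nothing about the small-sample gap looks obviously wrong from inside a black-box model; only careful validation on enough data, or explicit model explainability, exposes the correlation as spurious.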