Engineers at Google recently published a paper outlining their work on a custom AI chip so efficient that it saved them from having to build new data centres to cope with the introduction of AI-based services. The paper itself provided a great insight into the scale and nature of Google's operations, but it was also the catalyst for musing on where we are going with AI and what it all means.
In the 1980s there was a lot of hype around artificial intelligence, resulting in thousands of university undergraduates learning languages such as LISP and Prolog, and the Japanese Government sinking hundreds of millions of dollars into its Fifth Generation Computing programme in an attempt to push the technology further. More modestly, the UK's DTI published a few pamphlets and books on state-of-the-art Intelligent Knowledge Based Systems (IKBSs) and Neural Networks. A few rather expensive products hit the market, there were a few relatively trivial case studies, and then it all just seemed to fizzle out, gradually being forgotten as the Internet Bubble grabbed people's attention.
However, AI did not go away; its development just went into submarine mode. People worked quietly away in the background on specific applications of AI, bringing them to a level of maturity where they could be adopted wholesale in real-life situations. In the process there have been substantial strides: voice recognition systems, for example, now require little training and accommodate regional and national accents. There has also been a little gentle re-branding to adopt the term "Machine Learning". Consequently, many people use "personal assistant" agents on their phones or built into household automation devices. Self-driving cars are reaching commercial adoption, and the next generation of UAV aircraft will be capable of accepting a programmed mission, taking off and completing it without human intervention or interaction, unless a mission parameter is encountered that requires human decision making or the receipt of additional intelligence.
I was genuinely amazed recently when someone showed me how easy it is to set up a simple machine learning application in Azure and train it to produce a useful output. Yet I have been reading recent alarmist claims that "AI robots" will displace millions of people from their professions with a level of scepticism, as this is not borne out by previous experience.
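To give a sense of how little effort such an experiment now takes, here is a minimal sketch of the same kind of exercise. It is not the Azure Machine Learning workflow I was shown; it uses Python with scikit-learn and a bundled sample dataset purely as a stand-in, but the shape of the task is the same: load some labelled data, train a model, and check that it produces a useful output.

```python
# A minimal sketch of a "train a model, get a useful output" exercise.
# This is not the Azure workflow described above -- scikit-learn and a
# bundled sample dataset stand in purely to illustrate how little code
# a basic supervised-learning experiment now requires.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load a small, well-known sample dataset (150 labelled flower measurements).
X, y = load_iris(return_X_y=True)

# Hold back a quarter of the data to test the model on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Train a general-purpose classifier with default settings.
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Produce a useful output: predictions on the held-back data and an accuracy score.
predictions = model.predict(X_test)
print(f"Accuracy on held-back data: {accuracy_score(y_test, predictions):.2f}")
```

The point is not the particular library, but that a working, trainable model is now a few lines of mainstream code.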
In the 1980s, when I worked for a bank, there was an interesting article in the Financial Times on the adoption of IT by the banking industry. The gist of the story was that, had the banking industry not adopted IT to mechanise much of its work, the explosion in consumer banking products in the UK which happened at that time would have required the whole of the UK workforce to support it.
A recently touted robotic bricklayer is unlikely to eliminate all bricklaying jobs, because there is a current shortage of bricklayers and robots will not be economic on small jobs. Furthermore, the higher productivity the United States has traditionally enjoyed over Europe is not the result of American technical superiority, but of the fact that the US has always been resource rich and people poor. A shortage of labour and skills leads to new methods and to investment in mechanisation and automation.
Meanwhile, the UK Government's Digital Strategy is promoting investment in the development of AI skills, because the Government believes this will grow both the economy and jobs. The real impact will be a change in the nature of those jobs: people will do less of the humdrum stuff and explore their curiosity more, inventing new things, discovering new things and doing their jobs more creatively.