Artificial Intelligence (AI) is very much the thing of the moment. Many new enterprise models appear to be built on a combination of AI, IoT and Big Data: they collect and process massive amounts of data and apply human-like skills of recognition and decision making with a consistency, speed and volume that was previously unattainable with conventional processes or systems. This leads to greater insight, improved performance and better use of scarce skills, and consequently to other benefits such as improved customer experience, better security and higher cost efficiency. It also means that some tasks which would never have been practicable before are now readily delivered via automation.
However, when we look at what a human can do, AI capabilities are still lacking. The human abilities to use our five senses, to think abstractly, and to innovate and test hypotheses in order to develop new learning remain a challenge for AI.
Focusing just on the five senses of sight, hearing, touch, smell and temperature, AI's progress is quite patchy.
In the past 30 years or so, AI's ability to deal with vision and speech has progressed from a miserable score of about 1 to 2 out of 5, to around 3 to 3.5 out of 5. Character recognition works well with typed characters, but still has a significant error rate with handwriting. Face recognition systems work quite well with Caucasian faces but apparently struggle to deal with Black African faces. It is not clear to me whether the latter is down to poor training (i.e. not being given enough sample data for adequate deep learning), fundamental flaws in the assumptions made when designing the systems to recognise features, or some problem with the technical measurement of contrast and camera sensitivity across the visual spectrum. Whatever the cause, error rates are still quite high.
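One way to make the "error rates differ across groups" claim concrete is to measure the false non-match rate per group, i.e. the fraction of genuine same-person pairs the system fails to match. A minimal sketch follows; the records and group names are invented purely for illustration, whereas a real audit would use a large labelled benchmark.

```python
# Each record: (demographic group, is a genuine same-person pair, system said "match").
# These values are made up for illustration only.
results = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, True), ("group_b", True, False), ("group_b", True, False),
]

def false_non_match_rate(records, group):
    """Fraction of genuine pairs from `group` that the system failed to match."""
    genuine = [r for r in records if r[0] == group and r[1]]
    misses = [r for r in genuine if not r[2]]
    return len(misses) / len(genuine)

for g in ("group_a", "group_b"):
    print(g, false_non_match_rate(results, g))
```

With the toy data above, group_b's miss rate is twice group_a's, which is the kind of disparity the audits of commercial systems report.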
Temperature is quite easy with thermocouples, which can be engineered to deal with temperatures that people could not withstand. So high marks of 5 out of 5 are to be expected.
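For a sense of why machine temperature sensing rates so highly, converting a thermocouple reading to a temperature is just arithmetic. The sketch below uses a crude linear approximation for a type-K thermocouple (roughly 41 µV/°C near room temperature); real instruments use the published polynomial tables, and the cold-junction temperature is assumed known here.

```python
# Rough linear model of a type-K thermocouple near room temperature.
# Real instruments use the NIST polynomial tables instead of a constant.
SEEBECK_UV_PER_C = 41.0  # approximate type-K sensitivity, microvolts per degC

def thermocouple_temp_c(measured_uv, cold_junction_c=25.0):
    """Estimate hot-junction temperature (degC) from measured microvolts."""
    return cold_junction_c + measured_uv / SEEBECK_UV_PER_C

print(thermocouple_temp_c(410.0))  # 410 uV above a 25 degC reference -> 35.0
```

The point is not the formula itself but that the whole sensing chain is deterministic and well characterised, which is exactly what vision, touch and smell are not.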
Touch has been a major area of research ever since robotics became a hot topic in the late seventies. Most of the work so far has focused on sensing around grip, so that a robot could pick up an egg or some other delicate object without crushing it. But I have not seen anywhere the type of sensing that could differentiate between the touch of skin, rubber, silk, cotton and wool. So this is definitely languishing in 2-out-of-5 territory.
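The grip problem described above is often tackled with slip-based control: ramp grip force up only until the slip sensing stops, so the delicate object is held with the minimum force needed. A minimal sketch, in which the slip sensor readings, force step and limits are all invented for illustration:

```python
def grip(slip_readings, step=0.1, max_force=5.0):
    """Increase grip force (N) one step per slip reading; return the force
    at which slip first stops, capped at max_force."""
    force = 0.0
    for slipping in slip_readings:
        if not slipping:
            return force  # slip has stopped: hold at the current force
        force = min(force + step, max_force)
    return force

# Simulated sensor: the object keeps slipping until about 0.3 N is applied.
print(grip([True, True, True, False]))  # stops ramping at roughly 0.3
```

Note that this only solves "don't crush the egg"; it says nothing about telling skin from silk, which is the part still missing.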
Smell has barely been touched. However, some work suggests that techniques for smell processing which mimic natural biological processes could be invaluable in extending AI capabilities to situations where learning data is limited, ambiguous or masked by other "noise". Smell has potentially many applications in medicine, agriculture and food processing. The techniques, however, may also be useful for autonomous unmanned vehicles, which often have to deal with unforeseen situations or noisy environments.
So there is still a long way to go for AI to reach science-fiction-like capabilities. Conquering the five senses represents the first step. The question is, how long will this take, and will all humans be machine-augmented by the time we get there?