Thursday, 26 October 2017

The Perils of Bad AI

Everyone by now will have read that they are about to be replaced by robots and AI bots at work. Many people have also heard how employment agencies and some large firms use AI-based software to screen applications, and most of those who have encountered it complain about it.

So, recently, I tried an experiment with CV parsing software. I took a "top 10" CV parser and three copies of a friend's CV, each tweaked to apply for a slightly different role, and submitted them to the free demonstration site.

Within hours I had received back the analysis of the three different CVs. I almost fell off my chair in astonishment at the analysis produced. In every case, it cited the wrong position as his most recent one. It ignored half of his work experience, reducing over 20 years' experience to just over 10. It got his level of seniority wrong, and in one case it entirely missed his major skill. In fact, it bore so little relationship to what the CVs said that it could fairly be described as "made up" or "fabricated". It certainly was not accurate, and it was misleading.

I then decided to change the presentation of the information slightly in the CV. I moved his most recent role to the second page and, where he listed earlier experience, I changed the order in which role, organisation and date were presented. The content itself remained the same. Guess what? His most recent role was now identified as an even earlier one. On the other hand, it now picked up the missing 10 years or so of experience. It still did not correct the level of seniority it assigned.
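To see why a mere change of layout can derail the output, consider a toy sketch of a naive rule-based extractor. This code is invented for illustration (it is not the parser I tested, and real products are more sophisticated), but it shows the failure mode: if the software assumes the first dated role it encounters is the most recent one, reordering the sections changes the answer even though the content is identical.

```python
import re

# Hypothetical, deliberately naive extractor: it assumes the first
# line matching "Role, Organisation, dates" is the most recent position.
ROLE_PATTERN = re.compile(
    r"^(?P<role>[^,]+), (?P<org>[^,]+), (?P<dates>\d{4}-(?:\d{4}|present))$"
)

def most_recent_role(cv_text):
    """Return the role on the first matching line, or None."""
    for line in cv_text.splitlines():
        match = ROLE_PATTERN.match(line.strip())
        if match:
            return match.group("role")  # first match wins -- the flawed assumption
    return None

cv_original = """Head of Engineering, Acme Ltd, 2015-present
Senior Engineer, Widgets plc, 2005-2015"""

# Same content, earlier role simply listed first.
cv_reordered = """Senior Engineer, Widgets plc, 2005-2015
Head of Engineering, Acme Ltd, 2015-present"""

print(most_recent_role(cv_original))   # Head of Engineering
print(most_recent_role(cv_reordered))  # Senior Engineer -- wrong, yet nothing changed
```

A more robust extractor would parse the dates and sort by them rather than trusting document order, but the point stands: tools that lean on layout cues will produce confidently wrong answers when the layout shifts.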

The thing which strikes me is that the agencies are using the software to deal with a problem of volume: each advert typically attracts 300 to 1,000 applications. But they are not advising people how to structure their applications so that the software reads the CVs (or resumes) accurately.

In my opinion, any agency or recruiter using this type of software is not going to pick the right candidates, and is exposing itself to breaches of data protection legislation (at least in Europe), since use of any incorrect, out-of-date or deficient data which causes loss, harm or embarrassment is a criminal offence. Arguably, failing to shortlist someone because you relied on erroneous information would fall into that category.

This shows how fragile and potentially dangerous some existing AI and machine learning applications are. They are powerful when properly trained and tested, but treating them as a magic bullet that will automate all interpretative assessment out of a process is dangerous.
