Artificial Intelligence in recruitment is an increasingly common topic. Unlike in the past, AI is now entering the world of work fully, and in particular the part of it that deals with Talent Acquisition and recruiting.
As we know, relying on intelligence that is artificial rather than purely human allows those who do search and selection to speed up many steps and find the right people quickly with little effort, while delegating tedious but necessary tasks to technology and automating them.
The positive aspects are clearly visible to many recruiters, but a question mark remains, and it is hardly a latent one: does focusing on AI mean abandoning ethics? Does relying on a machine mean setting human beings aside?
This is what we try to understand here, first by shedding light on a doubt that grips many, namely whether we can talk about ethical Artificial Intelligence, and then by analyzing the risks and challenges that companies adopting AI in HR face.
According to research conducted by software provider Tribe Pad and reported by UK Recruiter, there is some distrust of AI, particularly on the part of candidates: 42% of survey respondents fear that the technology will screen people unfairly.
This – and more – raises the question: can we talk about ethical AI? The immediate answer might be “Yes… but it depends.” On what? On how it is used.
The adjective “ethical” in fact requires adherence to certain principles that truly put people first.
An ethical AI system is one that prioritizes people’s safety and needs, takes their personalities into account, respects privacy, the rights of applicants, and tries not to be discriminatory in any way.
When AI abides by these basic rules, it can be said to “work” ethically. Not least because, otherwise, the consequences can be anything but trivial.
So let’s see what risks and challenges companies face with AI in recruitment.
While it is clear what AI ethics means, in practice, what aspects should a company pay special attention to? Let’s analyze some of them in detail.
That data management has become increasingly important is no mystery, especially since GDPR entered our lives. Beyond compliance, the privacy issue is at the heart of an ethical approach: handling sensitive data is not only risky in itself but can also affect a company’s reputation.
That’s why, in using data, even with AI, one must be careful and exercise the same care a human being would.
To make that happen, a formal ethics framework could be adopted to ensure that privacy is respected at every stage of AI-assisted recruitment. Another important step would be to implement ethical tests to understand how the system actually behaves and at which points its “attitude” becomes critical.
Artificial Intelligence can be trained so that it can learn from its mistakes and can, thus, make fairer decisions in the future.
It is not enough just to decide to adopt an AI technology: it must be used and tested.
It is also important to choose the partners and technologies you rely on carefully: with INDA, our AI solution for the HR world, we place great importance on security in data management. And not only that.
Another important aspect, also from an ethical point of view, concerns cognitive biases – those distortions of the mind that, even unconsciously, can influence a recruiting process.
Recruiters, as human beings, may be subject to biases (e.g., on age or background) that can affect their decisions in the search and selection process. Although to varying degrees, the same could be said of AI precisely because it is a technology developed by humans.
As an example, recall the case of Amazon, which in 2018 discovered that its experimental recruiting software had a bias against women: it penalized résumés that mentioned all-women’s colleges and activities.
Preventing this from happening is critical for a company. How can it be done? How can bias be reduced and eliminated? Unlike a bias that is the product of a person’s unconscious or conscious decision, AI allows you to analyze what happened, improve data collection, and refine it to reduce subjective interpretation and be as objective as possible. As we said, such a technology can learn from its “mistakes,” and algorithms can be asked to consider only those variables that ensure the best possible selection.
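One concrete way to analyze what happened is an adverse-impact audit of the selection outcomes. The sketch below is purely illustrative, not INDA’s actual method: it compares selection rates across hypothetical candidate groups and applies the well-known “four-fifths rule,” under which a group selected at less than 80% of the best-performing group’s rate is flagged for human review.

```python
# Minimal adverse-impact audit (illustrative sketch, not INDA's actual method).
# Group names and figures below are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Apply the four-fifths rule: flag groups whose selection rate
    is below `threshold` times the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best, r / best < threshold) for g, r in rates.items()}

# Hypothetical screening results from an AI pre-selection step.
results = {"group_a": (45, 100), "group_b": (30, 100)}
report = adverse_impact(results)
# group_b's ratio is 0.30 / 0.45 ≈ 0.67 < 0.8, so it is flagged for review.
```

A flag does not prove discrimination by itself, but it tells the recruiter exactly where the data collection or the variables fed to the algorithm deserve a closer look.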
Another of the challenges, or perhaps the real challenge, is being able to collaborate with technology. This is possible if we understand the different roles: that of the recruiter, as a human being, and that of AI, as technology.
AI was not created to replace people, but to work alongside and support them in the search and pre-selection phase.
To give an example: imagine a recruiter conducting a particularly complex selection who has to carry out dozens and dozens of interviews in a very tight time frame.
What happens if they receive an application from a person who might be a good fit for the role? A chatbot could step in for the recruiter, giving the candidate feedback and, rather than leaving them hanging, letting them know that their profile will be considered soon.
Relying on a digital assistant is not only a way of welcoming a person who, by applying, has placed trust in the company, but also of making it clear that what they have done is important and deserves proper consideration.
This applies not only to the first contact but to the entire candidate experience. How often do people fail to complete an application because they run into difficulties that stop them from moving forward? With a digital assistant, this could be avoided, which of course also has a bearing on Talent Attraction as well as Employer Branding.
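An immediate, automated acknowledgement is the simplest form this takes. The snippet below is a hypothetical sketch (the helper name and message wording are invented, not INDA’s API) of the kind of reply a chatbot could send the moment an application arrives, so the candidate is never left hanging:

```python
# Illustrative sketch of an automated candidate acknowledgement.
# The function name, parameters, and wording are hypothetical,
# not part of any real product's API.

def acknowledge_application(candidate_name, role, review_days=5):
    """Build an immediate reply confirming receipt of an application."""
    return (
        f"Hi {candidate_name}, thank you for applying for the {role} position. "
        f"Your profile has been received and a recruiter will review it "
        f"within {review_days} working days."
    )

message = acknowledge_application("Jordan", "Data Analyst")
```

Even a message this simple signals that the application was received and will be considered, which is exactly the reassurance a busy recruiter cannot always give in real time.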
What we have just said leads us to a final thought: as important as technological innovation is, and as much as every company should try not only to embrace it but to adapt it to its own context, we must remember that technology is only a tool. What matters, and will always matter, is the human approach.
The HR team is irreplaceable: no machine can interact with, understand, and help people the way another person can. Technology does not replace people in their work; rather, it supports them, and it can do so because it is always a human being who oversees it and vouches for it.
At the base, therefore, there must be strong human values, but not only that: we need to choose vendors who care about human ethics and know how to embed them in Artificial Intelligence. All of this is essential to improve the hiring process and give candidates more and more attention.
This is a challenge that is far from simple, but one that can be met if one is well aware of the potential of all the actors involved in the process.
Do you want to learn more about Inda?
Copyright © 2021 Inda
Inda is a solution by Intervieweb S.r.l. part of the Zucchetti Group P.IVA: 10067590017