We are used to technology evolving rapidly and producing new and surprising updates and solutions. Many recruiters and sourcers love artificial intelligence (AI) because it makes the job easier and improves the candidate experience. But whether the same is true for deepfake technology is another question.
What is a deepfake?
Have you seen Barack Obama call Donald Trump a “complete dipshit,” heard Mark Zuckerberg brag about having “total control over the stolen data of billions of people,” or witnessed Jon Snow’s touching apology for the ending of Game of Thrones? If so, you’ve seen a deepfake. Deepfakes, the 21st century’s answer to Photoshopping, use a form of artificial intelligence called deep learning to create footage of fake events, hence the name. Modern generative AI software makes this possible: these systems produce new digital content such as images, videos, texts, human voices and other audio recordings. If you want to put new or different words in a politician’s mouth, or star in your favorite movie, it’s time to create a deepfake. The most famous deepfakes are those in which one person’s face is swapped with another’s.
As is true for any technology, artificial intelligence (AI)-driven generative techniques can also be abused. Expert Jarno Duursma describes it in his blog as follows: ‘The application can be used in many ways, manipulating opinions, blackmailing people or damaging reputations. We are entering an online era where we can no longer trust our eyes and ears.’ A significant danger, for example, is the circulation of videos showing politicians saying things they did not say.
It is not only in politics that deepfakes can pose dangers. Bart Jacobs, Professor of Computer Security at Radboud University Nijmegen, gave another example on NPO Radio 1: “Suppose Elon Musk puts a video online in which he says he is going to invest in a certain area – or wants to stop. Then the stock markets can go in any direction. I read recently that some $40 billion in stock market value is lost annually because of such fake messages.” It is also important to realize that deepfakes are not limited to images: in late 2019, fraudsters used a cloned director’s voice to defraud a financial company, successfully ordering a transaction of around 220,000 euros.
Deepfakes and recruitment
But what about deepfakes within recruitment? Imagine that a candidate who has been rejected after a job interview wants to get back at the CEO and releases a fake video featuring the CEO’s face. It may sound a bit far-fetched, but such a video could hurt your company brand, or the personal brand of anyone involved in the process. We asked two experts in the field of video recruitment for their opinions.
Walter Hueber, CEO of video recruitment software Cammio: “I don’t think deepfake in recruitment is really an issue at the moment. The chance that a candidate will use deepfake in an automated or live interview is very limited, because eventually you will meet in real life anyway. Then there is little point in pretending to be someone else during the application process. And if you do want to do that as a candidate, you will still have to prepare yourself for the questions. In terms of content, it is still a valid interview.”
“One risk, however, could be that someone enters a selection procedure with your identity with the aim of damaging you in the interview,” Hueber continues. “This person will then first have to make an impression with the application to get to the interview. That is, of course, a very cumbersome way to harm someone. In short, we are not likely to encounter deepfakes in interviews. What we will see more and more are candidates who are given the opportunity to show more of themselves in a structured online interview and to break through the prejudices that come with CVs.”
Nicolas Speeckaert, Founder of skeeled, an innovative all-in-one recruitment software based on artificial intelligence: “Deepfake technology brings both opportunities and threats. Unfortunately, we are all more exposed to the threats than the benefits, with many reports of cyber attacks against individuals and organizations such as exploitation, harassment and personal sabotage. This certainly needs to be addressed. We need to have tools and train employees for deepfake detection, and we need laws that protect us from these types of attacks.”
“However, we cannot ignore the immense potential that deepfake technology can offer organizations in terms of content creation. For example, there are innovative companies creating AI-assisted corporate training videos, which allows global companies to very easily create videos for internal training in different languages, contributing to a better employee experience. But even in these situations where deepfake technology is used for good reasons, I agree with researchers and experts who defend that labeling is the simplest and most important way to counter deepfakes. Viewers should always be aware that what they are viewing is not real,” said Speeckaert.
Although the negative impact of deepfakes on recruitment seems negligible for now, it is always wise to stay alert. Look critically at videos, especially when an important issue is at stake. If you are a recruiter, do not make hasty decisions about your candidates based on a video: it is easy enough to contact the candidate in question personally to find out whether the video is genuine.
Sources: Jarno Duursma, The Guardian, SourceCon, NPO Radio 1