

Image generated by DALL-E from the prompt: “A person on a computer, interacting with a chatbot.”

If you’re secure in your job, you may not have encountered just yet how AI is “elevating” and “enhancing” the job search experience, for employers and job seekers alike. Its use is most clearly felt in the way high-volume staffing agencies have begun to employ AI chatbots to screen applicants well before they interact with a human hiring manager.

From the employer’s perspective, this makes perfect sense. Why wade through stacks of resumes to weed out the ones that don’t look to be a good fit even at first glance, if an AI can do that for you?

From the job seeker’s perspective, the experience is likely to be decidedly more mixed.

This is because many employers are using AI not just to search a body of documents, screening them for certain keywords, syntax, and so on. In addition, staffing firms are now using AI chatbots to subsequently “interview” applicants, screening them even more thoroughly and thus further winnowing the pool of resumes a human will ultimately have to go through.
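To make the first stage concrete, here is a minimal sketch of the kind of keyword tally such a screen might perform. This is a hypothetical illustration, not how any real applicant-tracking system works (those are proprietary and far more sophisticated); the function name, terms, and threshold are all invented for the example.

```python
import re

def keyword_screen(resume_text, required_terms, threshold=0.5):
    """Pass a resume if it mentions at least `threshold` of the
    required terms pulled from the job description.
    Hypothetical sketch; real screening systems are proprietary."""
    # Tokenize crudely: lowercase words, keeping chars common in
    # tech terms like "c++" or "c#".
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    hits = [t for t in required_terms if t.lower() in words]
    return len(hits) / len(required_terms) >= threshold, hits

passed, hits = keyword_screen(
    "Seasoned Python developer with SQL and Docker experience.",
    ["python", "sql", "kubernetes", "docker"],
)
```

Note the incentive this creates: a candidate who simply echoes every term in the job description maximizes the tally, whatever their actual fit, which is exactly the dynamic discussed below.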

Often, this looks the same as conversing with ChatGPT. Other times, it involves answering specific questions in a standard video/phone screen where the chatbot will record your answers, thereby making them analyzable. If you’re a job seeker and you find yourself in the latter scenario, don’t worry, they will give the chatbot a name like “Corrie” and that will put you completely at ease and in touch with a sense of your worth as a fully-rounded person. 

On the job seeker’s side, this is where the issues begin to arise.

If you know your words are being scanned by a gatekeeper strictly for certain sets of keywords, what’s the incentive to tell the whole truth about your profile? It’s not possible to intuit what exact tally or combo of terms you need to hit, so it’s better to just give the bot all of the terms listed in the job description and then present your profile more fully at the next stage in an actual interview with a human. After all, how would a job seeker present nontraditional experience to the bot with any assurance it will receive real consideration?

Indeed, when the standard advice is to apply for jobs of interest even when you bring only 40 to 60% of the itemized skills and background, why risk the chatbot setting the bar higher?

For a job seeker, lying to the bot — or at least massaging the facts strategically for the sake of impressing a nonhuman gatekeeper — is the best, most effective means of moving on to the next stage in the hiring process, where they can then present themselves in a fuller light.

But what are the ethics of such dishonesty? Someone who lies to the chatbot would have no problem lying to the interviewer, some might say. We’re on a slippery slope, they would argue. 

To puzzle out a way of thinking about this question, I propose we look at the situation from the perspective of the 18th-century German philosopher Immanuel Kant, whom I referenced in my previous essay. Kant, you see, is famously stringent when it comes to lying, with a justly earned reputation as an absolutely unyielding scold.

Suppose you need money you just don’t have, to pay for something you think is truly, unequivocally good in itself: your mother’s last blood transfusion, say. Is it acceptable to borrow the money from a friend and lie in promising to pay it back, knowing you simply can’t? Hard no, says Kant. Having an apparently altruistic reason for telling a lie still doesn’t make it OK in his view.

In fact, the lengths he will go to uphold this principle are perhaps most evident in his infamous reply to a question posed by the French-Swiss philosopher Benjamin Constant (truly, no one remembers who he is apart from his brush with Kant).


Suppose your best friend arrives at your door breathless, Constant proposes, chased there by a violent pursuer, an actual axe murderer in fact, and your friend asks that you hide them in your house for safety. And then suppose, having dutifully done so, you find yourself face-to-face with the axe murderer now at your doorstep. When the murderous cretin demands to know where your friend is, isn’t a lie to throw him off acceptable here, Herr Professor?

Absolutely not, Kant answers, to the shock and horror of first-year philosophy students everywhere. Telling a lie is never morally permissible and there just are no exceptions. (There is some more reasonable hedging in Kant’s essay on this matter, but you get the general idea.)  

The reason for turning to Kant specifically here is, I hope, now becoming somewhat clear. We can use his ideas to perform a kind of test. If we can come up with a reason why lying to the gatekeeping chatbot would be OK even for Kant, then it seems we will have arrived at a solid justification for a certain amount of strategic dishonesty in this instance. 

So what would Kant’s thinking suggest about lying to the chatbot? Well, we begin to glimpse something of an answer when we examine why exactly lying is such a problem in Kant’s view. It’s a problem, he argues, because it invariably involves treating another person in a way that ultimately tramples on their personhood. When I lie to my friend about repaying borrowed money, no matter how well-intentioned the ends to which I propose to put this money are, I wind up treating my interlocutor not as a person who has individual autonomy in their decision-making, but rather simply as a means to an end.

In this way, I don’t treat them as a person at all; I treat them as a tool for achieving ends I alone determine. The lie makes it impossible for them to truly grant or withhold, in any meaningful sense, their consent when it comes to participating in that particular way in my particular scheme. We oughtn’t treat others instrumentally, solely as a means to some end, for Kant, because when we do, we reduce them to a mere tool at our disposal, and thus fail to respect their real status as a being endowed with the capacity to freely set ends for themselves.

So what does this mean for our job interview with the chatbot?

It suggests that when job seekers give the chatbot what they think it wants to hear, no person’s autonomy is being trampled. This is because the chatbot is itself, precisely, a means to an end: a tool without any agency to set its own supreme, overarching ends, one that a hiring unit is using to make the task of finding suitable employees easier and less time-consuming.

We might perfectly well decide those are admirable goals with respect to the overall functioning of the organization. But we shouldn’t lose sight, on either side of the interviewing table, of what they are and what purpose they serve, and thus in turn, how sizable the difference between “chatting” with them and an actual interlocutor really is.

Until “Corrie” becomes a real interlocutor, I think we all more or less know how its interactions with job seekers are going to go, and perhaps that’s just fine for now.
