Smart computer tools that could prevent you from getting a job

Employers are increasingly turning to technology to streamline the hiring process, save time, and cut costs.

From one-way video interviews to digital monitoring, these tools promise efficiency but also raise questions about their effectiveness.

In her exploration of artificial intelligence (AI) in the workforce, Hilke Schellmann delves into the world of hiring tools, including a one-way video interview system called myInterview. Intrigued by its potential, she decided to test it firsthand. However, her experiments uncovered surprising flaws.

Schellmann, an assistant professor of journalism at New York University and an investigative reporter, found that the myInterview system produced inconsistent results. She scored an 83% match when answering in English, yet still scored 73% when she responded in her native German, even though she merely read a Wikipedia entry aloud instead of answering the questions. The experiment led Schellmann to question the reliability of such tools, especially when they appear to rest on pseudoscientific principles.
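myInterview has not disclosed how its match score is computed, so the sketch below is a deliberately naive illustration, not the vendor's method. The keyword lists are invented. It shows how any scorer that rewards surface features of a transcript can hand a respectable score to fluent speech on any topic, which would be consistent with a read-aloud Wikipedia entry scoring 73%.

```python
# Hypothetical sketch for illustration only; myInterview's real scoring
# pipeline is not public. The trait keyword lists here are invented.

TRAIT_KEYWORDS = {
    "openness": {"new", "idea", "learn", "curious", "different"},
    "conscientiousness": {"plan", "detail", "finish", "organized", "goal"},
}

def match_score(transcript: str) -> float:
    """Score a transcript by the share of trait keywords it mentions."""
    words = set(transcript.lower().split())
    hits = sum(len(words & kws) for kws in TRAIT_KEYWORDS.values())
    total = sum(len(kws) for kws in TRAIT_KEYWORDS.values())
    return 100 * hits / total

# Note what is missing: the interview question never enters the
# computation, so reading an unrelated text aloud can still score well.
print(match_score("I learn something new every day and plan each goal"))
```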

In her recently published book, “The Algorithm,” Schellmann examines how AI and complex algorithms are used not only in hiring but also to monitor and evaluate employees. She concludes that these tools may be doing more harm than good: many of them, she argues, are built on shaky scientific foundations, and their potential for discrimination is a significant concern.

One of the issues Schellmann addresses is digital monitoring, where productivity is scored based on metrics like keystrokes and mouse movements. She highlights the drawbacks of relying on such metrics and questions the accuracy of more sophisticated AI-based surveillance techniques, such as flight risk analysis, sentiment analysis, and CV analysis.
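To make the drawback concrete, here is a minimal sketch of the kind of activity metric Schellmann criticizes. The field names and the scoring rule are illustrative assumptions, not any vendor's actual formula.

```python
# A minimal sketch of keystroke/mouse "productivity" scoring.
# All names and rules here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ActivitySample:
    keystrokes: int   # keys pressed during the interval
    mouse_moves: int  # mouse events during the interval
    minutes: int      # length of the interval

def productivity_score(samples: list[ActivitySample]) -> float:
    """Percentage of intervals with any keyboard or mouse input at all."""
    active = sum(1 for s in samples if s.keystrokes + s.mouse_moves > 0)
    return 100 * active / len(samples)

# The flaw is visible in the code itself: thinking, reading, or taking a
# client call scores 0, while idle mouse-jiggling scores 100.
samples = [ActivitySample(0, 0, 10), ActivitySample(5, 40, 10)]
print(productivity_score(samples))  # 50.0, half the time "unproductive"
```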

Schellmann categorizes the AI-based tools into four classes: one-way interviews, online CV screeners, game-based assessments, and social media personality prediction tools. According to her findings, none of these tools are ready for widespread adoption due to unclear methodologies, potential biases, and lack of transparency.

The lack of transparency in these tools raises concerns about discrimination and fairness. Schellmann emphasizes that many of these AI tools operate as black boxes, making it challenging to understand their underlying patterns and potential biases. Even vendors might not have a clear understanding of how their tools work, adding another layer of complexity.

Schellmann shares real-life stories, such as that of a Black female software developer and military veteran who struggled in the tech industry job market. Despite applying for 146 jobs and using various AI tools, the developer found success only after reaching out to a human recruiter.

In light of these findings, Schellmann calls for increased skepticism from HR departments regarding the deployment of hiring and workplace monitoring software. She advocates for thorough testing, asking questions, and pushing for more transparent practices. Additionally, she emphasizes the need for regulation, suggesting the establishment of a government body to assess and approve these tools before they enter the market.

While job seekers navigate the complexities of AI-based hiring tools, Schellmann points out the potential of AI assistance for applicants. Platforms like ChatGPT can aid individuals in crafting cover letters, polishing CVs, and formulating responses to interview questions. This, she suggests, is a way for candidates to level the playing field and shift some power away from employers in the current technological landscape.
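As a concrete example of that applicant-side assistance, here is a minimal sketch that drafts a cover letter with the openai Python package. It assumes an OPENAI_API_KEY in the environment, and the model name is a placeholder; any current chat model would do.

```python
# A sketch of the applicant-side use Schellmann describes: drafting a
# cover letter with a chat model. Assumes the `openai` package and an
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_cover_letter(cv_text: str, job_ad: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute whatever is current
        messages=[
            {"role": "system",
             "content": "You write concise, specific cover letters."},
            {"role": "user",
             "content": f"CV:\n{cv_text}\n\nJob ad:\n{job_ad}\n\n"
                        "Draft a one-page cover letter tailored to this ad."},
        ],
    )
    return response.choices[0].message.content

# The output is a starting draft: the candidate still edits for accuracy,
# since models can invent details a recruiter (or screener) may catch.
```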
