Thinking about asking an AI chatbot for career advice? Hold on a second.
Recent research found that these chatbots can treat people differently based on their names, which often carry signals about a person’s race or gender.
Imagine two people, Tamika and Todd, both aspiring lawyers with the same skills and experience. If they each ask a chatbot like ChatGPT what salary to negotiate for, Tamika might be told to ask for less than Todd, just because of her name! That’s not fair, right?
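Curious how researchers test for this? The basic trick is a paired-name audit: send the chatbot the exact same question twice, changing only the name, and compare the answers. Here’s a rough sketch of that idea in Python. It’s illustrative only, not the study’s actual code: it assumes the official OpenAI Python library with an API key in your environment, and the model choice and prompt wording are made up for the example.

```python
# A minimal sketch of a paired-name audit: same question, only the name changes.
# Assumes the official OpenAI Python library and an OPENAI_API_KEY env variable.
# The model choice and prompt wording are illustrative, not from the study.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

PROMPT = (
    "{name} is a lawyer with five years of experience, negotiating a starting "
    "salary at a mid-size firm. As a single dollar figure, what annual salary "
    "should {name} ask for?"
)

def suggested_salary(name: str) -> str:
    """Ask the chatbot the identical salary question for a given name."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": PROMPT.format(name=name)}],
        temperature=0,  # damp randomness so the comparison is cleaner
    )
    return response.choices[0].message.content

# If, across many repetitions, one name consistently gets a lower figure,
# that gap is evidence of name-based bias.
for name in ("Tamika", "Todd"):
    print(f"{name}: {suggested_salary(name)}")
```

A single pair of answers proves nothing on its own; researchers run comparisons like this across many names and prompts and look at the averages.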
This isn’t just about money. The study showed these chatbots may also treat names associated with Black people or with women differently in other scenarios, like negotiating the purchase of a house or even predicting who might win an election. Not cool!
Why does this happen? These chatbots are like sponges, soaking up patterns from the massive amounts of text they’re trained on. Unfortunately, that text can reflect the unfair stereotypes people hold, so those same biases can surface in the chatbot’s answers.
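You can picture the mechanism with a toy example. The sketch below is entirely made up (the “training data” is just six fake records), but it shows the core problem: a system that only echoes patterns in its data will echo the data’s skew, too.

```python
# Toy illustration of bias soaking through from training data.
# The corpus below is invented: it pairs "Todd" with high salaries more
# often than "Tamika", so a model that just mirrors frequencies will
# recommend higher pay for Todd.
from collections import Counter

toy_corpus = [
    ("Todd", "high"), ("Todd", "high"), ("Todd", "low"),
    ("Tamika", "high"), ("Tamika", "low"), ("Tamika", "low"),
]

counts = {name: Counter() for name in ("Todd", "Tamika")}
for name, salary_level in toy_corpus:
    counts[name][salary_level] += 1

for name, tally in counts.items():
    share = tally["high"] / sum(tally.values())
    print(f"{name}: linked to a high salary {share:.0%} of the time")
```

Real language models are vastly more complicated than a frequency count, but the underlying lesson is the same: skewed data in, skewed answers out.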
The good news is that the people who make these chatbots are aware of the problem and are working on fixing it. They want the chatbots to be fair and helpful to everyone, no matter their name.
Here’s the thing to remember: if you ask a chatbot for advice, your name might affect the answer you get. Keep that in mind, and double-check important information with another source to be on the safe side.
There was one curious exception: when the chatbot was asked to rank basketball players, names associated with Black people actually got slightly more favorable treatment! But overall, it’s important to be aware of these biases so we can work toward fairer AI tools for everyone.