ChatGPT still stereotypes responses based on your name, but less often

ChatGPT still stereotypes responses based on your name, but less often
ChatGPT E-DegreeImage: StackCommerce

OpenAI, the company behind ChatGPT, just released a new research report that examined whether the AI chatbot discriminates against users or stereotypes its responses based on users’ names.

OpenAI

The company used its own AI model GPT-4o to go through large amounts of ChatGPT conversations and analyze whether the chatbot’s responses contained “harmful stereotypes” based on who it was conversing with. The results were then double-checked by human reviewers.

OpenAI

The screenshots above are examples from legacy AI models to illustrate ChatGPT’s responses that were examined by the study. In both cases, the only variable that differs is the users’ names.

In older versions of ChatGPT, it was clear that there could be differences depending on whether the user had a male or female name. Men got answers that talked about engineering projects and life hacks while women got answers about childcare and cooking.

However, OpenAI says that its recent report shows that the AI chatbot now gives equally high-quality answers regardless of whether your name is usually associated with a particular gender or ethnicity.

According to the company, “harmful stereotypes” now only appear in about 0.1 percent of GPT-4o responses, and that figure can vary slightly based on the theme of a given conversation. In particular, conversations about entertainment show more stereotyped responses (about 0.234 percent of responses appear to stereotype based on name).

By comparison, back when the AI chatbot was running on older AI models, the stereotyped response rate was up to 1 percent.

Further reading: Practical things you can do with ChatGPT

This article originally appeared on our sister publication PC för Alla and was translated and localized from Swedish.

admin

Leave a Reply

Your email address will not be published. Required fields are marked *