The Prompt: Demand For AI Girlfriends Is On The Rise

The Prompt is a weekly rundown of AI’s buzziest startups, biggest breakthroughs, and business deals. To get it in your inbox, subscribe here.

Welcome back to The Prompt.

Popular large language models continue to produce racist stereotypes, specifically against speakers of the African American English (AAE) dialect, according to a new study from Stanford University's Institute for Human-Centered Artificial Intelligence and the Allen Institute for AI. Despite efforts to add guardrails and prevent the models from generating harmful content, systems like OpenAI’s GPT-3.5 and GPT-4 and Google’s T5 model demonstrated “covert racism” when making decisions related to employment, legal or academic matters, the study found.

In different experimental situations, the LLMs were more likely to assign speakers of African American English lower-prestige jobs, describe them as “lazy,” “stupid” and “dirty,” or even determine that they should be convicted of a crime, compared with speakers of Standard American English.

“This study suggests that instead of steady improvement, the corporations are playing whack-a-mole – they’ve just gotten better at the things that they’ve been critiqued for,” Stanford researcher Pratyusha Ria Kalluri said.

Now let’s get into the headlines.

On Monday, Apple unveiled the iPhone 16, the first line of iPhones that are purpose-built for AI. The new phones will be embedded with “Apple Intelligence,” the tech giant’s label for an array of generative AI features like summarizing audio notes or writing…
