AILSA CHANG, HOST:
There is no question that AI search functions have fundamentally changed the way we use the internet. In fact, a major player in the game, OpenAI's ChatGPT, just unveiled its own web browser this week. And while many users swear by AI searches to answer their pressing questions, other users understandably have questions as to their accuracy, and not just, you know, in a two-plus-two-equals-four kind of way. There's also a growing concern about bias in AI search results. NPR's It's Been A Minute podcast recently dug into this issue, and host Brittany Luse is here to help us think all about it. Hi, Brittany.
BRITTANY LUSE, BYLINE: Hi, Ailsa.
CHANG: So what did you find?
LUSE: OK, so to frame this story up properly, I got to go back a few months. As you might remember, earlier this year, Elon Musk's AI chatbot Grok had a pretty embarrassing malfunction.
CHANG: Oh, yeah.
LUSE: Yeah. Spurred by prompts from X users, Grok started posting a lot of wildly racist and antisemitic messages and referring to itself as MechaHitler, a reference to the video game Wolfenstein. So this happened as a result of an updated training protocol. Here's Kelsey Piper, who reported on this for Vox magazine.
(SOUNDBITE OF ARCHIVED NPR CONTENT)
KELSEY PIPER: The way that modern AIs are trained, modern large language models, is they are fed enormous amounts of text - so all of the books we have recorded, all of the texts of the internet, Reddit, you know, everything like that. Out of this comes a sort of distinctive political perspective, you know, a little bit politically liberal. Elon Musk didn't like this, so he started this work to train Grok to be - you know, he framed it as more truth-seeking, but really to have more of Elon Musk's opinions. This didn't go well. When they released the new Grok, people realized very quickly they could provoke the new Grok into, you know, expressing not just right-wing opinions but, like, horrific, like, white supremacist reactionary opinions.
CHANG: Well, that's pretty gross.
LUSE: Yeah. xAI did apologize for the malfunction, and according to X, these issues were, quote, "investigated and mitigated." Now, this was an extreme and extremely public situation, but the incident drew a lot of attention for highlighting just how susceptible these emerging AI models are to the biases of their makers.
CHANG: Yeah, totally. So what can AI companies do to make search results less biased, or in the case you cite above, less racist, less antisemitic?
LUSE: Yeah. Well, when I spoke to Kelsey Piper, she said transparency in how these models are being trained would be a great place to start. But according to NPR correspondent Bobby Allyn, transparency is not so easy to achieve.
(SOUNDBITE OF ARCHIVED NPR CONTENT)
BOBBY ALLYN: This is always really, really tough because there's so many examples of individual outputs, say from ChatGPT, that show one world view or another. But then if you take that example, as I and other reporters do, knock on the door of OpenAI and say, hey, guys, how did this happen? - they'll give it to their engineers, and literally, Brittany, their engineers will go, we don't know.
LUSE: Wait, wait, wait. Hold on. What do you mean, they don't know?
ALLYN: They know, like, the underlying algebra, like the math that's powering it, but how all of that creates an individual output, they can't tell you half the time. It's sort of a black box that even the black-box engineers can't fully explain.
CHANG: That is so weird and so concerning.
LUSE: It is. And not just to you and me. Even President Donald Trump is wondering how to solve this, although he's mostly concerned with preventing what he calls, quote-unquote, "woke" AI. In July, he released an executive order, quote, "requiring artificial intelligence companies that do business with the federal government to strip AI models of ideological agendas," end quote. Here's Bobby again.
(SOUNDBITE OF ARCHIVED NPR CONTENT)
ALLYN: So any transgender issues, critical race theory, diversity, equity and inclusion - it's kind of a grab bag of the right's favorite culture war issues. But when there have been these, like, studies, there haven't been super, super clear ideological biases one way or the other in terms of, like, the system being programmed to push pro-Trump or anti-Trump.
CHANG: I mean, it feels like this whole conversation has really big implications - right? - like, not only for our everyday lives, but for federal policy. But then we don't have clear insight into how AI companies even moderate the bias or the potential bias in their products. So where does that leave all of us?
LUSE: Well, Ailsa, that's a great question, and we're all very slowly living out the answer. But one thing Bobby said to me gave me a bit of peace. He mentioned that a lot of industries are finding that when they try to incorporate AI, oftentimes it's not creating efficiencies or making better work. It won't be that way forever, but perhaps that gives us a small window of opportunity to allow our policies to catch up with our tech. And, of course, when using any internet search tools, experts recommend that you vet your sources carefully and lean on more traditional search resources, especially for important information.
CHANG: Seek the truth, people. Brittany, thank you so much for sharing all of this with all of us.
LUSE: Oh, thanks for having me, Ailsa.
CHANG: That was Brittany Luse. She is the host of NPR's It's Been A Minute, a show about what's going on in culture. Transcript provided by NPR, Copyright NPR.