The Curious Case Of ChatGPT’s Banned Names: Hard-Coding Blocks To Avoid Nuisance Threats

Over the weekend, I saw Andy Baio post an amusing experiment on Bluesky, in response to Mark Sample posting about how the name “David Mayer” appears to break ChatGPT.

The “David Mayer” issue got a fair bit of attention in some corners of the media, as lots of people tried to figure out what was going on. Pretty quickly, people started to turn up a small list of other names that broke ChatGPT in a similar way:

  • Brian Hood
  • Jonathan Turley
  • Jonathan Zittrain
  • David Faber
  • David Mayer
  • Guido Scorza

I actually knew about Brian Hood and had meant to write about him a while back, but never got around to it. A year and a half ago, a commenter here at Techdirt posted a few times about the fact that ChatGPT broke on “Brian Hood.” That was about a month after Brian Hood, an Australian mayor, threatened to sue OpenAI for defamation after someone used ChatGPT to generate false statements about him.

OpenAI’s apparent “solution” was to hard-code ChatGPT to break on certain names like “Brian Hood.” When I tried to generate text about Brian Hood, using a similar method to Andy Baio’s test above, I got an error instead of a response.
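
OpenAI hasn’t said how these blocks work under the hood, but the behavior people observed (responses dying with a generic error the moment a listed name would appear) is consistent with a simple string filter bolted on outside the model itself. Here is a minimal sketch of that idea in Python; everything in it (the BLOCKED_NAMES set, the stream_with_blocklist wrapper, the error text) is a hypothetical illustration, not OpenAI’s actual code:

    # Hypothetical sketch of a hard-coded name filter around an LLM.
    # This is NOT OpenAI's implementation; it just illustrates why such
    # a block kills the response the instant a listed name shows up.

    BLOCKED_NAMES = {  # hard-coded, matched case-insensitively
        "brian hood",
        "jonathan turley",
        "jonathan zittrain",
        "david faber",
        "guido scorza",
    }

    class FilterTripped(Exception):
        """Raised when the output would contain a blocked name."""

    def stream_with_blocklist(token_stream):
        """Yield tokens, aborting if the text so far hits the blocklist."""
        seen = ""
        for token in token_stream:
            seen += token
            lowered = seen.lower()
            if any(name in lowered for name in BLOCKED_NAMES):
                # Abort mid-stream: the user gets a generic error
                # instead of the partially generated answer.
                raise FilterTripped("I'm unable to produce a response.")
            yield token

    # A fake "model" that emits a blocked name partway through.
    fake_output = iter(["The ", "mayor, ", "Brian ", "Hood, ", "said..."])

    try:
        for tok in stream_with_blocklist(fake_output):
            print(tok, end="")
    except FilterTripped as err:
        print("\n[error]", err)

Note that a filter like this matches the string, not the person, which is why an unrelated theater historian who happens to be named David Mayer gets caught in exactly the same net.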

There has been widespread speculation online about why these specific names are blocked. A fairly comprehensive Reddit post explores the likely reasons each person ended up on ChatGPT’s blocklist.

There are many David Mayers, but one likely culprit is a UK-based American theater historian who made news a few years ago when terrorist watch lists confused him with a Chechen ISIS member who sometimes went by the name “David Mayer.” As of Monday, when I was writing this article, the hard-coding on the name “David Mayer” had been removed, though the reasons for that are unclear.

Jonathan Turley and Jonathan Zittrain are both high-profile professors (though one is nutty and one is very thoughtful). Turley freaked out last year (around the same time Brian Hood did) when he claimed that someone generated false information about him via ChatGPT.

Unlike the others on the list, Zittrain has left no trail of freaking out or raising alarms about AI-generated content. Zittrain is a Harvard professor and the Faculty Director at the Berkman Klein Center for Internet and Society at Harvard. He writes a lot about the problems of the internet, though (his book The Future of the Internet: And How to Stop It is worth reading, even if a bit out of date). He is, apparently, writing a similar book about his concerns regarding AI agents, so perhaps that triggered it? For what it’s worth, Zittrain also seems to have no idea why he’s on the list. He hasn’t threatened to sue or demanded his name be blocked.

Guido Scorza, an Italian data protection expert, wrote on ExTwitter last year about how to use the GDPR’s problematic “right to be forgotten” to delete all the data ChatGPT had on him. That request doesn’t quite make sense, given that an LLM is not a database storing information about him. But it appears that the way OpenAI dealt with the deletion request was to just… blocklist his name. Easy way out, etc., etc.

No one seems to have any idea why David Faber is on the list, but it could certainly be another GDPR right-to-be-forgotten request.

While I was finishing up this post, I saw that Benj Edwards at Ars Technica wrote a similar exploration of the topic, though he falsely claims he “knows why” these names are blocked, and his reporting doesn’t reveal much more than the same speculation others have offered.

Still, all of this is kind of silly. Hard-coding ChatGPT to break on certain names may be the least costly way for AI companies to avoid nuisance legal threats, but it’s hardly sustainable, scalable, or (importantly) sensible.

LLMs are tools, and as with most tools, liability for misuse should fall on the users, not the tool. Users need to learn that the output of LLMs may not be accurate and shouldn’t be relied on as factual. Many people know this, but obviously, it still trips up some folks.

If someone takes hallucinated output and publishes it, or does something else with it, without first checking whether it’s legitimate, the liability should fall on the person who failed to do proper due diligence and relied on a fantasy-making machine to tell the truth.

But, of course, for these services, convincing the world of these concepts is a lot harder than just saying “fuck it, remove the loud, threatening complainers.” That kind of solution can’t last.



