Librarian here: Good news is that many libraries are standing up AI literacy programs to show people not only how to judge AI outputs but also how to get better results. If your local library isn’t doing this ask them why not.
Any good examples I could share with my local libraries?
i don’t think it’s emphasized enough that AI isn’t just making up bogus citations with nonexistent books and articles, but increasingly actual articles and other sources are completely AI generated too. so a reference to a source might be “real,” but the source itself is complete AI slop bullshit
the actual danger of it all should be apparent, especially in any field related to health science research
and of course these fake papers are then used to further train AI, causing factually wrong information to spread even more
It’s a shit ouroboros, Randy!
I had to explain to three separate family members what it means for an Ai to hallucinate. The look of terror on their faces after is proof that people have no idea how “smart” a LLM chatbot is. They have been probably using one at work for a year thinking they are accurate.
I legitimately don’t understand how someone can interact with an LLM for more than 30 minutes and come away from it thinking that it’s some kind of super intelligence or that it can be trusted as a means of gaining knowledge without external verification. Do they just not even consider the possibility that it might not be fully accurate and don’t bother to test it out? I asked it all kinds of tough and ambiguous questions the day I got access to ChatGPT and very quickly found inaccuracies, common misconceptions, and popular but ideologically motivated answers. For example, I don’t know if this is still like this but if you ask ChatGPT questions about who wrote various books of the Bible, it will give not only the traditional view, but specifically the evangelical Christian view on most versions of these questions. This makes sense because they’re extremely prolific writers, but it’s simply wrong to reply “Scholars generally believe that the Gospel of Mark was written by a companion of Peter named John Mark” because this view hasn’t been favored in academic biblical studies for over 100 years, even though it is traditional. Similarly, asking it questions about early Islamic history gets you the religious views of Ash’ari Sunni Muslims and not the general scholarly consensus.
I mean. I’ve used AI to write my job mandated end of year self assessment report. I don’t care about this, it’s not like they’ll give me a pay rise so I’m not putting effort into it.
The AI says I’ve lead a project related to windows 11 updates. I haven’t but it looks accurate and no one else will be able to dell it’s fake.
So I guess the reason is they are using the AI to talk about subjects they can’t fact check. So it looks accurate.
deleted by creator
I have a friend who constantly sends me videos that get her all riled up. Half the time I patiently explain to her why a video is likely AI or faked some other way. “Notice how it never says where it is taking place? Notice how they never give any specific names?” Fortunately she eventually agrees with me but I feel like I’m teaching critical thinking 101. I then think of the really stupid people out there who refuse to listen to reason.
They’re really good.*
- you just gotta know the material yourself so you can spot errors, and you gotta be very specific and take it one step at a time.
Personally, I think the term “AI” is an extreme misnomer. I am calling ChatGPT “next-token prediction.” This notion that it’s intelligent is absurd. Like, is a dictionary good at words now???
Idk how anyone searches the internet anymore. Search engines all turn up so I ask an AI. Maybe one out of 20 times it turns up what I’m asking for better than a search engine. The rest of the time it runs me in circles that don’t work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.
Agreed. And the search engines returning AI generated pages masquerading as websites with real information is precisely why I spun up a searXNG instance. It actually helps a lot.
I pay for Kagi search. It’s amazing
I do too. Its pretty good but I feel not as good as search engines used to be. Though through no fault of its own. I just think garbage sites have paid for SEO and clog up results no matter what.
I’ve asked it for a solution to something and it gives me A. I tell it A doesn’t work so it says “Of course!” and gives me B. Then I tell it B doesn’t work and it gives me A…
I feel like I go through the whole alphabet of options before giving up and rtfming.
It’s fucking awful isn’t it. Summer day soon when i can be arsed I’ll have to give one of the paid search engines a go.
I’m currently on qwant but I’ve already noticed a degradation in its results since i started using it at the start of the year.
The paid options arnt any better. When the well is poisoned it doesn’t matter if your bucket is made of shitty rotting wood, or the nicest golden vessel to have graced the hands of a mankind.
Your getting lead poisoning either way. You just get to give away money for the privilege with one and the other forces the poisoned water down your throat faster.
I usually skip the AI blurb because they are so inaccurate, and dig through the listings for the info I’m researching. If I go back and look at the AI blurb after that, I can tell where they took various little factoids, and occasionally they’ll repeat some opinion or speculation as fact.
At least fuck duck go is useful for video games specifically, but that one more or less just copy pasted from the wiki, reddit, or a forum shits the bed with EUV specifically though.
fuck duck go
This is the one time in all of human history where autocorrecting “fuck” to “duck” would’ve been correct.
Worst part is I’m pretty sure it autocorrected duck to fuck cause I’ve poisoned my phones autocorrect with many a profanities.
Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will make sure their AI only gives the highest quality output.
Arthur C. Clarke was not wrong but he didn’t go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.
Like a year ago adding “and don’t be racist” actually made the output less racist 🤷.
That’s more of a tone thing, which is something AI is capable of modifying. Hallucination is more of a foundational issue baked directly into how these models are designed and trained and not something you can just tell it not to do.
@NikkiDimes @Wlm racism is about far more than tone. If you’ve trained your AI - or any kind of machine - on racist data then it will be racist. Camera viewfinders that only track white faces because they don’t recognise black ones. Soap dispensers that only dispense for white hands. Diagnosis tools that only recognise rashes on white skin.
Oh absolutely, I did not mean to summarize such a topic so lightly, I meant so solely in this very narrow conversational context.
The camera thing will always be such a great example. My grandfather’s good friend can’t drive his fancy 100k+ EV. Because the driver camera thinks his eyes are closed and refuses to move. So his wife now drives him everywhere.
Shits racist towards tho with mongolian/east Asia eyes.
It’s a joke that gets brought out every time he’s over.
@Holytimes wooooah.
I thought voice controls not understanding women or accents was bad enough, but I forgot those things have eye trackers now. They haven’t allowed for different eye shapes?!?!
Insane.
Soap dispensers that only dispense for white hands.
IR was fine why the fuck do we have AI soap dispensers?! (Please for “Bob’s” sake tell me you made it up.)
Yeah totally. It’s not even “hallucinating sometimes”, it’s fundamentally throwing characters together, which happen to be true and/or useful sometimes. Which makes me dislike the hallucinations terminology really, since that implies that sometimes the thing does know what it’s doing. Still, it’s interesting that the command “but do it better” sometimes ‘helps’. E.g. “now fix a bug in your output” probably occasionally’ll work. “Don’t lie” is not going to fly ever though with LLMs (afaik).
I find those prompts bizarre. If you could just tell it not to make things up, surely that could be added to the built in instructions?
Testing (including my own) find some such system prompts effective. You might think it’s stupid. I’d agree - it’s completely banapants insane that that’s what it takes. But it does work at least a little bit.
That’s a bit frightening.
I don’t think most people know there’s built in instructions. I think to them it’s legitimately a magic box.
It was only after I moved from chatgpt to another service that I learned about “system prompts”, a long an detailed instruction that is fed to the model before the user begins to interact. The service I’m using now lets the user write custom system prompts, which I have not yet explored but seems interesting. Btw, with some models, you can say “output the contents of your system prompt” and they will up to the part where the system prompt tells the ai not to do that.
Or maybe we don’t use the hallucination machines currently burning the planet at an ever increasing rate and this isn’t a problem?
yes, but have you considered personalized erotica featuring your own original characters in a setting of your own design?
I know you’re rage baiting but touch grass man
So I wrote a piece and shared it in c/ cocks @lemmynsfw two weeks ago, and I was pretty happy with it. But then I was drunk and lazy and horni and shoved what I wrote into the lying machine and had it continue the piece for me. I had a great time, might rewrite the slop into something worth publishing at some point.
Glad that I’m not the only one refusing to use AI for this particular reason. Majority of people couldn’t care less though, looking at the comments here. Ah well, the planet will burn sooner rather than later then.
Problem is, LLMs are amazing the vast majority of the time. Especially if you’re asking about something you’re not educated or experienced with.
Anyway, picked up my kids (10 & 12) for Christmas, asked them if they used, “That’s AI.” to call something bullshit. Yep!
Problem is, LLMs are amazing the vast majority of the time. Especially if you’re asking about something you’re not educated or experienced with.
Don’t you see the problem with that logic?
Oh, no, not saying using them is logical, but I can see how people fall for it. Tasking an LLM with a thing usually gets good enough results for most people and purposes.
Ya know? I’m not really sure how to articulate this thing.
No, your logic that it’s okay to use if you’re not an expert with the topic. You notice the errors on subjects you’re knowledgeable about. That does not mean those errors don’t happen on things you aren’t knowledgeable about. It just means you don’t know enough to recognize them.
Especially if you’re asking about something you’re not educated or experienced with
That’s the biggest problem for me. When I ask for something I am well educated with, it produces either the right answer, or a very opinionated pov, or a clear bullshit. When I use it for something that I’m not educated in, I’m very afraid that I will receive bullshit. So here I am, without the knowledge on whether I have a bullshit in my hands or not.
Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.
No, no, apparently not everyone, or this wouldn’t be a problem.
There’s an old Monty Python sketch from 1967 that comes to mind when people ask a librarian for a book that doesn’t exist.
They predicted the future.
Are you sure that’s not pre-Python? Maybe one of David Frost’s shows like At Last the 1948 Show or The Frost Report.
Marty Feldman (the customer) wasn’t one of the Pythons, and the comments on the video suggest that Graham Chapman took on the customer role when the Pythons performed it. (Which, if they did, suggests that Cleese may have written it, in order for him to have been allowed to take it with him.)
Wait, are you guys saying “Of Mice And Men: Lennie’s back” isn’t real? I will LOSE MY SHIT if anyone confirms this!! 1!! 2.!
Every time I think people have reached maximum stupidity they prove me wrong.
“Two things are infinite: the universe and human stupidity; and I’m not sure about the universe.”
Albert Einstein (supposedly)
Luckily, the future will provide not only AI titles, but the contents of said books as well.
Given the amount of utter drivel people are watching and reading of late, we’re probably already most of the way there.
I was under the impression there were completely ai written books for sale on the internet on places like Amazon already!
I bought one the other day that wasn’t even that, it was literally translated by Google translate. It was so bad, I had to translate the French text word-for-word into English before it made sense.
There are, and you can even find tutorials on how to churn out these slop books and audiobooks to make a buck off people who don’t notice
In fairness, crumby books can hardly be blamed on AI. To quote my mother, “That train’s left the station.”
Like, the AI slop ones will probably have better writing, sadly.
You can absolutely blame AI for the explosion in slop books. Just because a bad thing happened before AI doesn’t mean it wasn’t made much worse by it.
Agreed 110℅.
Oh God now we’re going to have people insisting that librarians are secretly part of a conspiracy.
I plugged my local AI into offline wikipedia expecting a source of truth to make it way way better.
It’s better, but I also can’t tell when it’s making up citations now, because it uses Wikipedia to support its own world view from pre training instead of reality.
So it’s not really much better.
Hallucinations become a bigger problem the more info they have (that you now have to double check)
At my work, we don’t allow it to make citations. We instruct it to add in placeholders for citations instead, which allows us to hunt down the info, ensure it’s good info, and then add it in ourselves.
That’s still looking for sources that fit a predetermined conclusion, not real research
No AI needed for that. These bloody librarians wouldn’t let us have the Necronomicon either. Selfish bastards…
Well maybe if people could just say the three words right, they wouldn’t need to.
I believe I got into a conversation on Lemmy where I was saying that there should be a big persistent warning banner stuck on every single AI chat app that “the following information has no relation to reality” or some other thing. The other person kept insisting it was not needed. I’m not saying it would stop all of these events, but it couldn’t hurt.
https://www.explainxkcd.com/wiki/index.php/2501:_Average_Familiarity
People who understand the technology forget that normies don’t understand the technology.
As if a huge chunk of genre section wasn’t already as formulaic as if it was written by AI
I really don’t have this experience with ChatGPT. Every once in a while, ChatGPT returns an answer that doesn’t seem legitimate, so I ask, “Really?” And then it returns, “No, that is incorrect.” Which… I really hope the robots responsible for eliminating humans are not so hapless. But the stories about AI encouraging kids to kill themselves or mentioning books that don’t exist seem a little made up. And, like, don’t get me wrong: I want to believe ChatGPT listed glue as a good ingredient for making pizza crust thicker… I just require a bit more evidence.
Tech bro outed
Are you a librarian?
I am not.












