mechoman444
I am live.
- 0 Posts
- 115 Comments
mechoman444@lemmy.world to News@lemmy.world • USDA warns about Walmart chicken nuggets contaminated with lead
1 · 1 day ago
To be fair, I don’t think the producers of that particular chicken nugget intentionally poisoned it with lead.
Although I honestly don’t know what would be worse at this point: the actual intent to do this, or the unbelievable negligence that caused it in the first place.
mechoman444@lemmy.world to News@lemmy.world • USDA warns about Walmart chicken nuggets contaminated with lead
2 · 1 day ago
Scrolled way too far down to see this kind of post.
Apparently just not right now.
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
11 · 5 days ago
No. That is not what the analogy means. That is what you are choosing to extract from it because it supports the direction you want this exchange to go.
The use of the word “regurgitate” carries a very specific implication. It suggests that LLMs retrieve and repeat stored information verbatim. That is not how they function. We both appear to agree on that point.
LLMs do not rely on stored facts in the way the analogy implies. They generate outputs by modeling patterns in data, producing responses that are often novel rather than retrieved.
Whether or not the model understands or comprehends the content is irrelevant to this distinction. Comprehension is not a requirement for the system to function. So yes, the analogy is overly simplistic and ignores the actual mechanism at work.
To be precise: it does not matter that the model lacks awareness or understanding. It is still capable of analyzing patterns and generating new outputs from its training data. That is not regurgitation.
As concisely as I can: LLMs do not regurgitate data; the analogy fails.
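The distinction being argued here, that a model stores statistical patterns rather than documents, can be illustrated with a toy sketch. This is emphatically not a real LLM (which uses learned neural representations, not bigram counts); it is a minimal stand-in showing that generation samples token by token from learned patterns, which can yield sequences that never appeared verbatim in the training text.

```python
import random

# Toy illustration, NOT a real LLM: the "model" below stores only
# statistical patterns (bigram counts), never whole documents, and
# generates text token by token by sampling from those patterns.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": record which token follows which in the corpus.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start, length, seed=None):
    """Sample a sequence token by token from the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # no known continuation; stop early
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 6, seed=0))
```

Because each step samples from patterns rather than copying a stored sentence, the output can recombine fragments (e.g. pairing "cat" with "rug") in ways the corpus never contained verbatim, which is the sense in which generation differs from retrieval.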
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
1 · 5 days ago
He is claiming the analogy works, then retreating to a more defensible position by admitting the system is more complex.
I am not being overly simplistic or imprecise. I am stating plainly that the analogy fails. LLMs do not regurgitate stored information. They generate novel outputs by statistically modeling and interpreting patterns in their training data. I supported that position with objective facts, and no one has attempted to directly refute them. Instead, the responses rely on vague arguments about “precision” and “simplicity,” which do not address the core claim.
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
11 · 6 days ago
Someone else in the comments said it perfectly. AI is just data regurgitation. It’s like calling me highly intelligent because I read you a paragraph from Wikipedia. I didn’t know anything. I just read a thing and said it out loud.
Christ on a stick.
The original analogy literally states “AI is just data regurgitation.” Now you’re what? Saying it’s more complex? Ever heard of a motte and bailey? Because that’s what you’re doing now.
Once again, for the people in the back: the analogy is a failure. It does not work. LLMs are not regurgitation machines.
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
12 · 6 days ago
I fully understand the analogy being presented. It is a poor analogy and fundamentally incorrect because that is not how LLMs function. They do not “read back Wikipedia pages,” which is a complete misunderstanding of the technology, not a minor lack of precision.
I am not disputing that it is an analogy, nor am I claiming that exact precision is necessary to analyze it. The point remains: the analogy fails.
What is curious is how people focus on my tone, saying I am aggressive or should be more precise, rather than engaging with the substance of my argument. So far, no one has directly refuted my points. This suggests that many responding are simply following the anti-AI bandwagon without understanding the technology, which is both reductive and disappointing.
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
14 · 6 days ago
The analogy is terrible and is not at all, once again, what LLMs do.
This is an objective fact, and I have provided evidence to support it.
How are you saying the analogy is good?
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
33 · 6 days ago
The reason he should learn about it is because he’s talking about it as though he’s informed, and he is not.
I don’t have to be an LLM programmer working at OpenAI to have a working knowledge of how these machines function. It’s literally just a Google search.
He made an unreasonably ignorant comment and I called him out. He should feel ashamed, and I have absolutely no reason to soften what I’m saying under the guise of being nice.
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
23 · 6 days ago
Calling an LLM a Wikipedia regurgitator is factually and objectively incorrect.
Is there anything that you can say to refute the facts that I presented in my above comment?
(I rolled my eyes so hard at your comment that I pulled my back out.)
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
78 · 6 days ago
No. You’re not just wrong, you’re aggressively uninformed.
Repeating the same tired “AI is just regurgitating data” line makes it clear you don’t understand what you’re criticizing. Calling large language models “AI” the way you are doing it just exposes that you do not know what you are talking about. It is like a creationist smugly saying “orangutang” instead of “orangutan” and thinking they sound informed. You are not demonstrating insight. You are advertising ignorance.
What you’re describing, reading a paragraph off Wikipedia, is literal retrieval. That is not how modern language models operate. They are not databases with a search bar attached. They are probabilistic systems trained to model patterns, structure, and relationships across massive datasets. When they generate a response, they are not pulling a stored paragraph. They are constructing output token by token based on learned representations.
If it were just regurgitation, you would constantly see verbatim copies of training data. You do not. What you see instead is synthesis. Concepts are recombined, abstracted, and adapted to context. The system can explain the same idea multiple ways, shift tone, handle novel prompts, and connect ideas that were never explicitly paired in the source material. That is fundamentally different from reading something out loud.
Your analogy fails because it assumes nothing is being transformed. In reality, transformation is the entire mechanism. Information is compressed into weights and then expanded into new outputs.
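The token-by-token construction described above can be sketched in a few lines. The vocabulary and logit values here are made up for illustration; in a real model the logits come from a forward pass over the context. But the decoding step itself is just this: scores over a vocabulary are turned into probabilities, and one token is sampled. Nothing looks up a stored paragraph.

```python
import math
import random

# Hypothetical sketch of one decoding step: the model scores every
# token in its vocabulary (logits), the scores become probabilities
# via softmax, and the next token is sampled from that distribution.
vocab = ["the", "cat", "sat", "mat", "."]

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next(logits, rng):
    """Sample one token from the probability distribution."""
    probs = softmax(logits)
    r = rng.random()
    cum = 0.0
    for token, p in zip(vocab, probs):
        cum += p
        if r < cum:
            return token
    return vocab[-1]  # guard against floating-point rounding

# The logit values below are invented; a real model computes them
# from the preceding context at every step.
rng = random.Random(42)
print(sample_next([2.0, 0.5, 0.1, -1.0, 0.0], rng))
```

Because sampling is probabilistic, the same context can yield different continuations on different runs, which is one reason verbatim repetition of training data is the exception rather than the rule.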
Is it human intelligence? No. Is it perfect? No. But reducing it to “just reading Wikipedia out loud” is not skepticism. It is a basic failure to understand how the technology works.
If you are going to criticize something, at least learn what it is first.
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
49 · 7 days ago
It really isn’t. But you do you, boo.
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
301 · 7 days ago
I know Lemmy’s very anti-AI, but this is really fascinating stuff.
mechoman444@lemmy.world to Technology@lemmy.world • Announcing ARC-AGI-3 - A benchmark that tests if AI can explore, learn, and adapt in unfamiliar situations. Humans score 100%. Frontier AI scores 0.26%. (English)
3 · 7 days ago
Right. Thank you for this explanation; the percentages seemed out of context. So, the LLM was able to complete some levels?
spamming ctrl
mechoman444@lemmy.world to Technology@lemmy.world • Age checks creep into Linux as systemd gets a DOB field (English)
2 · 9 days ago
I am 100% right. The system is open source. The kernel can be edited by literally anyone who chooses to.
There cannot and will not be a law that says you can’t edit the Linux kernel.
Why are all of you just not acknowledging this?!?
mechoman444@lemmy.world to Technology@lemmy.world • Age checks creep into Linux as systemd gets a DOB field (English)
1 · 9 days ago
Yes, you are correct. Those of you who are concerned about this are not wrong to question it.
However, the point that keeps being ignored is that laws like this have very limited enforceability when it comes to platforms like Linux and other open-source software.
The reason is simple: anyone can modify the source code. There is no practical way to permanently embed restrictions like age verification into something that can be freely forked and redistributed. If a Linux distribution introduces age verification, a fork removing it will appear almost immediately. That is not hypothetical; that is how the open-source ecosystem functions.
Even if you personally install a version that includes such a feature, it is often trivial to bypass or remove it through system-level access.
Yes, the laws themselves are poorly conceived. They attempt to impose control in an environment that does not respond well to centralized regulation. But focusing on something like a birthday field in a Linux distribution misses the point. In that context, it is effectively meaningless and not something that warrants serious concern.
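As a hypothetical illustration of why a self-reported date-of-birth field enforces nothing: any check that trusts a value the user, or a fork of the software, controls can be satisfied simply by supplying a different value. The function below is an invented example, not anything systemd actually ships.

```python
from datetime import date

# Hypothetical age gate that trusts a locally stored date of birth.
# Because the user (or a fork) controls that value, the check only
# constrains honest input; it enforces nothing against dishonest input.
def is_adult(dob: date, today: date) -> bool:
    """Return True if the person born on `dob` is 18+ as of `today`."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18

today = date(2025, 1, 1)
real_dob = date(2015, 6, 1)     # a nine-year-old's actual birthday
claimed_dob = date(1990, 6, 1)  # whatever the user chooses to enter

print(is_adult(real_dob, today))     # honest input: gate closes
print(is_adult(claimed_dob, today))  # dishonest input: gate opens
```

The same logic applies one level up: even if a distribution hard-codes the check, a fork can delete the function entirely, which is why client-side age verification in open-source software is unenforceable in practice.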
This post and its entire comment section are hilarious because the vast majority of people browse the internet on their phones, usually through Safari or Chrome.
What I find funny is that some people arrogantly and confidently turn their noses up at Windows users for not using Linux, yet they themselves are still using either an iPhone or an Android device.