

We aren't in active physical danger.
You aren't in perceived active physical danger.
If Trump launches a nuke, boy, that sure will change lickety-split though.


You know, programmers who use LLMs believe they're much more productive because they keep getting that dopamine hit, but when you actually measure it, they're slower by about 20%.
Everyone keeps citing this preliminary study and ignoring its setup.
It's the equivalent of taking 12 seasoned carpenters with very little experience in industrial painting, handing them industrial-grade paint guns that are misconfigured and uncalibrated, asking them to paint some of their work, watching them struggle... and then going "wow, look at that, industrial-grade paint guns are so bad."
Anyone with any sense should look at that and go "that's a bogus study."
But people with an intense anti-AI bias cling to that shoddy-ass study with religious fervor. It's cringe.
Every professional developer with actual training and proper tooling can confirm that they are indeed tremendously more productive.


Lovely, Anthropic MCP. Make sure you give Anthropic lots of money and use their tools.
It's becoming clear you have no clue wtf you are talking about.
Model Context Protocol is a protocol, like HTTP or JSON.
It's just an open-source format for data that anyone can use. Models are trained to invoke MCP tools to perform actions, and anyone can make their own MCP tools; it's incredibly simple and easy. I have a pretty powerful one I personally maintain myself.
Anthropic doesn't make any money off me. In fact, I don't use any of their shit, except maybe whatever licensing fees Microsoft pays them to use Claude Sonnet, but Microsoft Copilot is my preferred service overall.
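For the skeptical: at the wire level, an MCP tool invocation is just a JSON-RPC 2.0 message. Here's a minimal sketch of dispatching a `tools/call` request, with a made-up `say_hello` tool standing in for a real one (a real server would use an SDK and speak over stdio or HTTP, but the message shape is the point):

```python
import json

# Hypothetical tool registry; a real MCP server's SDK manages this for you.
TOOLS = {
    "say_hello": lambda args: f"Hello, {args['name']}!",
}

def handle(message: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call' request to a registered tool."""
    req = json.loads(message)
    assert req["method"] == "tools/call"
    params = req["params"]
    result = TOOLS[params["name"]](params.get("arguments", {}))
    # MCP tool results come back as typed content blocks.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": result}]},
    })

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "say_hello", "arguments": {"name": "world"}},
})
print(handle(request))
```

There's no vendor lock-in anywhere in that exchange; any model trained to emit `tools/call` messages can drive any server that answers them.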
I bet you your contract with them says they're not liable for shit their LLM does to your files.
Setting aside the fact that I don't even use Anthropic's tools, my Copilot LLMs don't have access to my files either. Full stop.
The only context in which they do have access to files is inside the aforementioned Docker-based sandbox I run them in, which is an ephemeral, immutable system they can do whatever the fuck they want inside of, because even if they manage to delete /var/lib or whatever, I click one button to reboot it and reset it back to a working state.
The working workspace directory they have access to has read-only git access, so they can pull and do work, but they literally don't even have the ability to push. All they can do is pull in the stuff to work on and work on it.
After they finish, I review what changes they made, and only I, the human, have the ability to accept or deny what they have done, and then actually push it myself.
This is all basic shit using tools that have existed for a long time, some of which are core principles of Linux and have existed for decades.
Doing this isn't that hard; it's just that a lot of people are lazy.
The concept of "make a Docker image that runs an 'agent' user in a very low-privilege environment with write access only to its home directory" isn't even that hard.
It took me all of two days to set it up personally, from scratch.
But now my sandbox literally doesn't even expose the ability to do damage to the LLM; it doesn't even have access to those commands.
Let me make this abundantly clear if you can't wrap your head around it: the agent does not have the ability to do damaging actions in the first place.
And it wasn't even that hard to do.
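The lockdown described above can be sketched roughly as the `docker run` invocation the sandbox launches with. The image name (`agent-sandbox`), the `agent` user, and the paths are made up for illustration; the flags themselves (`--rm`, `--read-only`, `--network`, `--user`, `--tmpfs`) are standard Docker options:

```python
# Sketch: assemble the `docker run` argv for an ephemeral, low-privilege
# agent sandbox. Image name, user, and paths are hypothetical.
def sandbox_cmd(workspace: str, image: str = "agent-sandbox") -> list[str]:
    return [
        "docker", "run", "--rm",        # ephemeral: container vanishes on exit
        "--read-only",                  # immutable root filesystem
        "--network", "none",            # no network unless explicitly opted in
        "--user", "agent",              # unprivileged user baked into the image
        "--tmpfs", "/home/agent:rw",    # writable scratch space, lost on reset
        "-v", f"{workspace}:/home/agent/work",  # the one real mount
        image,
    ]

print(" ".join(sandbox_cmd("/srv/agent-workspace")))
```

Resetting back to a working state is then just killing the container and starting a fresh one.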


You'll be the 4753rd guy with the "oops, my LLM trashed my setup and disobeyed my explicit rules for keeping it in check" story.
Read what I wrote.
It's not a matter of "rules" it "obeys".
It's a matter of it literally not even having access to do such things.
This is what I'm talking about. People are complaining about issues that were solved a long time ago.
People are running into issues that were solved long ago because they are too lazy to use the solutions to those issues.
We now live in a world with plenty of PPE in construction, and people are out here raw-dogging tools without any modern protection and pulling a shocked Pikachu face when it fails.
The approach of "I'm gonna tell the LLM not to do stuff in a markdown file" is tech from like two years ago.
People still do that. Stupid people who deserve to have it blow up in their faces.
Use proper tools. Use MCP. Use a sandboxed environment. Use whitelisted, opt-in tooling.
Agents shouldn’t even have the ability to do damaging actions in the first place.


The only people who have these issues are people who are using the tools wrong or poorly.
Using these models in a modern tooling context is perfectly reasonable: go beyond mere guardrails and outright give them access only to approved operations in a proper sandbox.
Unfortunately, that takes effort, know-how, skill, and an understanding of how these tools work.
And unfortunately, a lot of people are lazy and stupid, take the "easy" way out, and then (deservedly) get burned for it.
But I would say, yes, there are safe ways to grant an LLM "access" to data where it does not even have the ability to muck it up.
My typical approach is keeping it sandboxed inside a Docker environment, where even if it goes off the rails and deletes something important, the worst it can do is crash its own Docker instance.
Then I set it up, via MCP tooling, so that the commands and actions it can perform are an explicit opt-in whitelist. It can only run commands I give it access to.
Example: I grant my LLMs access to git commit and status, but not rebase or checkout.
Thus it can only commit stuff forward; it can't change branches, rebase, or push.
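A minimal sketch of that whitelist, as the guard an MCP "run git" tool might apply before executing anything. The exact allowed set here is illustrative (it adds a few obviously-safe read-only subcommands to the commit/status example):

```python
# Opt-in whitelist: anything not listed is refused outright.
# The agent never sees rebase, checkout, or push as options.
ALLOWED = {"status", "add", "commit", "diff", "log", "pull"}

def agent_git(args: list[str]) -> list[str]:
    """Return the git argv to execute, or raise if the subcommand isn't whitelisted."""
    if not args or args[0] not in ALLOWED:
        raise PermissionError(f"git {args[:1]} is not whitelisted")
    return ["git", *args]  # the real tool would subprocess.run(...) this argv

for cmd in (["status"], ["commit", "-m", "wip"], ["push"], ["rebase", "main"]):
    try:
        agent_git(cmd)
        print(cmd[0], "-> allowed")
    except PermissionError:
        print(cmd[0], "-> blocked")
```

The point of the design: denial is the default, so a new damaging command doesn't need to be anticipated; it's blocked until a human adds it.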
This isn't hard, imo, but too many people just yolo it and raw-dog an LLM on their machine like a fuckin' idiot.
These people are playing with fire, imo.


Very true, though there's a certain threshold you can get past where the context is at least usable in size, where the machine can hold enough data at once for common tasks.
One of the pieces of tech we're really missing at the moment is automated filtering of info.
Specifically, for the LLM to be able to "release" info as unimportant and forget it as it goes, or at least have it stored in some form of long-term storage it can look up later with a tool.
But for a given convo, the LLM can do a lot of reasoning, and all that reasoning takes up context.
It'd be nice if, after it reasons, it could discard a bunch of that data and keep only what matters.
This would tremendously lower context pressure and let the LLM last way longer, memory-wise.
I think tooling needs to approach how we manage LLM context very differently to make further advancements.
LLMs would have to be trained to emit different types of output that control whether they'll actually remember it or not.
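A toy sketch of the idea, with invented message "kinds" (real models would need training to tag their output this way): once a reasoning run has produced its conclusion, the intermediate tokens become candidates for eviction, oldest first, while instructions and conclusions survive.

```python
# Hypothetical context pruner: drop 'reasoning' entries oldest-first until the
# live context fits a character budget. Kinds and budget units are invented.
def prune(history: list[dict], budget: int) -> list[dict]:
    pruned = list(history)
    while sum(len(m["text"]) for m in pruned) > budget:
        victim = next((m for m in pruned if m["kind"] == "reasoning"), None)
        if victim is None:
            break  # nothing left that is safe to forget
        pruned.remove(victim)
    return pruned

history = [
    {"kind": "instruction", "text": "never touch main"},
    {"kind": "reasoning",   "text": "step 1... " * 50},   # bulky scratch work
    {"kind": "conclusion",  "text": "use a feature branch"},
    {"kind": "reasoning",   "text": "step 2... " * 50},   # more scratch work
    {"kind": "conclusion",  "text": "open a PR"},
]
kept = prune(history, budget=200)
print([m["kind"] for m in kept])  # ['instruction', 'conclusion', 'conclusion']
```

The conclusions stay cheap to carry forward; the expensive intermediate reasoning is what gets released.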


You: just the cheap ones
I never said that. I just said that the cheap ones are especially shitty.
People on this site really lack reading comprehension, it seems.


LLMs are fundamentally not good at answering fact-based questions. Unless it's an incredibly well-known answer that has never changed (like a math or physics question), they don't magically "know" things.
However, they're way better at summarizing and reasoning.
Give them web search capability via Playwright MCP tooling to go research info, find the answer(s), and then produce output based on the results, and now you can get something useful.
"What's the best way to do (task)?" << prone to failure, as a function of how esoteric the task is.
"Research the top 3 best ways to do (task) for me, report on your results, and include the sources you found" << actually useful output, assuming you have something like Playwright installed for it.


What are you talking about?
No? I never said that.
I just explained /why/ it happened. Literally nowhere in my post did I say, or imply, that someone should pay for more expensive models. What are you smoking?
You just have to be aware that they have very short memory when using a cheap model, and assume anything you wrote a minute ago has already left its memory. That's why they produce pretty dumb output if you try to depend on that... so don't depend on that.


Uh... no, it's just the free models being free; they're lower-cost intentionally, to provide free options for people who don't wanna pay subscription fees.
(context is (V)RAM)
Eh, sort of. It's more about operating costs: the larger the context size, the more expensive the model is to run, literally in terms of power consumption.
Keep in mind we're on the scale of fractions of a cent here, but multiply that by millions of users and it adds up fast.
But the end result is that the agent will fuck stuff up, and will even quickly /forget/ that it fucked up if you don't catch it ASAP.
A lot of them have a context window that can be wiped out within like two minutes of steady busywork...


They don't, lol.
Pretty much always, this is just the fact that cheaper, and especially free, chatbots have very limited context windows.
Which means the initial restrictions you set, like "don't do this, don't touch that," get dropped; the LLM no longer has them loaded. But it does still have in its recent history the very clear and urgent directives to do this task, that it's important, so it'll do whatever it autocompletes it's gotta do to accomplish the task. And then... fucks something up.
When you react to their fuck-up, that reaction loads into the context, so now the LLM has your complaint in its history, and it's going to autocomplete its generated text on top of that: being very apologetic and going on about how it'll never happen again.
That's all there is to it.
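The mechanism is easy to demo with a toy fixed-size window, measured in messages here instead of tokens: the oldest entry, your restriction, is the first thing to fall off under steady busywork.

```python
from collections import deque

# Toy model of a fixed-size context window. Real windows are measured in
# tokens, but the eviction behavior is the same: oldest entries drop first.
WINDOW = 4
context = deque(maxlen=WINDOW)

context.append("SYSTEM: don't touch the prod config")    # the restriction
for i in range(4):
    context.append(f"USER: urgent! do subtask {i} now")  # steady busywork

print(list(context))  # the SYSTEM restriction has been evicted
```

After four busywork messages, the model's "memory" contains only urgent task chatter; the rule it's about to break simply isn't in there anymore.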


My point is, people are throwing huge hissy fits over random, barely-matters stuff like "oh my god, a random asset in some corner of some room looks AI-generated."
Meanwhile, 40%+ of the codebase is AI-generated, but because people don't know about that, they don't give a shit.
They only care because they can see it and notice it.
It comes across as shallow, performative upset.
I don't see anyone who consumes games remotely bringing up the fact that coding IDEs have been AI-autocompleting code for like two years now; no one even gives a shit.
Only when it started showing up in art assets, or hell, being used just in proof-of-concept stuff that devs say won't be in the final game, do people go "oh mah gawd" and stamp their feet.
It's cringe; get over it. The game is either bad quality or good quality; how it GOT that way shouldn't matter. The devs either did a good job, or they didn't.
Let me put it this way:
If you are busy critiquing how the result was achieved and with what tools, instead of WHAT the result was, you are cringe.
If you critique "this looks bad, it's low quality, it looks like garbage," yeah, I have no issue with that; it's a valid critique. Regardless of HOW they made the bad game, a bad game is a bad game.
But if you care about what TOOLS they used, you are incredibly naive.
People need to go look up how many resources (power, water) the data centres for build servers use, which the industry has been relying on for decades. You think AI uses a lot of power and water? Get fucked, dawg; that is NOTHING compared to companies running multi-hour batteries of automated UX testing suites. That shit is where the real power draw is.
BUT the industry has been doing that for DECADES, and yet no one has raised a single eyebrow at it. No one cared; no one even knew it was a thing companies did.
Suddenly companies are using AI, which uses a fraction of that water/power, and everyone is like "oh my gawd, they're killing the planet."
Fuck off, lol. If you weren't complaining about it before, you come across as cringe, uninformed, naive, and dumb for suddenly caring about a 5% uptick in energy/water usage compared to what we were doing before.
So all you're left with is the "stolen property" argument, which STILL doesn't apply if it's not in the final product anyway.
And that's a VERY wobbly hill to stand on and die on, anyway.


It's not an "everyone else is doing it" argument.
It's the reality that a massive fuck-tonne of devs are using it, totally unaware it's AI-generated.
It's just a built-in, enabled-by-default, opt-out feature in all the mainstream IDEs now, and the most popular ones don't even tell you upfront it's AI.
So a huge number of devs are 100% unaware that a huge % of the code they just tab-autocompleted was AI-generated. Every time they hit the tab key to accept an inline suggestion, that was AI.
They just don't even know this; they use it totally unaware.
See my other comment here for further info on what I mean, so I don't have to repeat myself.


Very few indie developers aren't using it. The vast majority of casual devs have literally no idea the "code suggestions" they get from VSCode and other IDEs are AI-generated; they often just assume it's the LSP being fancy, because it'll only suggest a tiny bit of inline code.
Source: I do a lot of technical interviews at the company I work at, and I interview a lot of senior and junior devs across the board.
Almost everyone still has this feature enabled (it ships pre-enabled and is opt-out), and an incredibly high percentage of devs are surprised when I tell them they have to disable it for the test because it's AI assistance. They're often like "wait, THAT'S AI?!" and are genuinely shocked to learn this.
This percentage is very high for juniors and seniors alike. It's never really made explicitly clear that it's a feature you can disable, nor that it's AI; it's just there, already working, when you first install VSCode.
And basically everyone uses VSCode for most programming. There are other IDEs, but VSCode heavily dominates as what pretty much everyone uses for every language, except the small handful that have their own bespoke IDEs for their use case.
But the VAST majority of game dev is C# and Lua now, with a bit of Python, and all of those are first-class VS/VSCode languages; it's the IDE everyone and everything will recommend when you look up getting into it.
So yeah, I'd estimate about 95% of game dev at this point, both amateur and professional, is using VSCode with the AI "IntelliCode" feature still enabled, totally unaware they are injecting a shit-tonne of AI-generated code into their games.
The devs don't know it, the managers don't know it, the PR team doesn't know it, the CEO doesn't know it; no one is even aware this is a thing at most places, lol. Everyone is just like "wow, <language>'s VSCode plugin just has such excellent-quality autocomplete and quick-fix suggestions, I love it!"
Not even joking, this is how most devs are at the moment; they have no fuckin' clue, haha.


So is every other major game company. This company is just open about it.
Only someone living under a rock can convince themselves developers aren't using AI for all sorts of shit.
People are deeply unaware that AI autocomplete for code has been baked into almost every major editor for almost two years now, and it's enabled by default, opt-out.
There are 3 types of game devs now:


So... death camps.
Honestly, I was expecting death camps sooner than 2026, but I'm not surprised to see it arrive nonetheless. Sigh.


Can't go wrong with "Partner."
"'Scuse me, Partner, is this seat taken?" still slaps if you say it confidently enough.


Something some coworkers have started doing that is even more rude, in my opinion, as a new social etiquette issue: AI-summarizing my own writing in response to me, or just outright copy-pasting my question into GPT and then pasting the answer back to me.
Not even "I asked ChatGPT and it said"; they just dump it in the chat @ me.
Sometimes I'll write up a 2-3 paragraph thought on something.
Then I'll get a ping 15 minutes later, go take a look at what someone responded with, annnd... it starts with "Here's a quick summary of what (pixxelkick) said!" followed by AI slop that misquotes me and just gets it wrong.
I find this horribly rude tbh, because:
I have had to very gently respond each time a person does this at work and state that I am perfectly able to AI-summarize my own writing on my own, and while I appreciate their attempt, it's... just coming across as wasting everyone's time.
Go find the richest person in your local city.
You have one, they exist. Probably several in the same area.
Make it their problem.