Two weeks ago, a user on the official Lutris GitHub asked “is lutris slop now”, noting an increasing number of “LLM generated commits”. The Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • Cyv_@lemmy.blahaj.zone · 1 month ago

    I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.

    • Tony Bark@pawb.social (OP) · 1 month ago

I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist, or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.

      • Scrollone@feddit.it · 1 month ago

        Yeah I mean. It’s not like AI can think. It’s just a glorified text predictor, the same you have on your phone keyboard

        • yucandu@lemmy.world · 1 month ago

          It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.

          • daikiki@lemmy.world · 1 month ago

            Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet, and if you run it at home, you’re still using a lot of increasingly unaffordable power, and if you want something smarter than the average American politician, the upfront investment is still very significant.

            • yucandu@lemmy.world · 1 month ago

              Yeah I’m not buying the “proactively destroying the planet” angle. I’d imagine there’s a lot of misinformation around AI, given that the products surrounding it are mostly Western, like vaccines…

      • Vlyn@lemmy.zip · 1 month ago

        You might genuinely be using it wrong.

At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly set up.

        Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output, that will produce a CLAUDE.md file that describes your project (which always gets added to your context).

        Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.

        Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (But just adding that info to the project context file is enough).
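To illustrate the last point: the CLAUDE.md that `/init` produces is just plain Markdown that gets added to the model’s context, so pinning a fact like a framework upgrade is a matter of writing it down. A minimal hand-written sketch (all project details here are invented for illustration) might look like:

```markdown
# Project notes

ASP.NET web service, recently upgraded to .NET 10.

## Build and test
- Build: `dotnet build`
- Test: `dotnet test`

## Conventions
- Target framework is net10.0; do not suggest APIs removed in earlier cleanups.
- Keep controllers thin; business logic lives in `Services/`.
```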

        • P03 Locke@lemmy.dbzer0.com · 1 month ago

          Agreed, I don’t understand people not even giving it a chance. They try it for five minutes, it doesn’t do exactly what they want, they give up on it, and shout how shit it is.

          Meanwhile, I put the work in, see it do amazing shit after figuring out the basics of how the tech works, write rules and skills for it, have it figure out complex problems, etc.

          It’s like handing your 90-year-old grandpa the Internet, and they don’t know what the fuck to do with it. It’s so infuriating.

          Probably because, like your 90-year-old grandpa with the Internet, you have to know how to use the search engine. You have to know how to communicate ideas to an LLM, in detail, with fucking context, not just “me needs problem solvey, go do fix thing!”

          • Vlyn@lemmy.zip · 1 month ago

            It’s not really that simple. Yes, it’s a great tool when it works, but in the end it boils down to being a text prediction machine.

            So a nice helper to throw shit at, but I trust the output as much as a random Stackoverflow reply with no votes :)

            • dream_weasel@sh.itjust.works · 1 month ago

              I feel like there needs to be a dedicated post (and I don’t want to write it, but maybe I eventually will) that outlines what a model really is. It is not just a statistical text prediction machine unless you are being so loose with the definition of “statistical” that it doesn’t even mean anything anymore.

              A decent example of a statistical text prediction machine is the middle word suggested by your phone when you’re using the keyboard. An LLM is not that.

In the most general terms, this kind of language model tokenizes a corpus of text based on a vocabulary (which is probably more than just the words in the dictionary), uses an embedding model to translate these tokens into a vector of semantic “meaning” which minimizes loss in a bidirectional encoding (probably), that is then trained against a rubric for one or more topic area questions, retrained for instruction and explainability, retrained with reinforcement learning and human feedback to provide guardrails, and retrained again to make use of supplemental materials not part of the original training corpus (retrieval-augmented generation), then distilled, then probably scaled and fine-tuned against topic areas of choice (like coding or Korean or whatever) and maybe THEN made available to people to use. There are generally more parts to curriculum learning even than that but it’s a representative-ish start.

              My point being that, yes, it would be nuts to pose ANY question to a predictor that says “with 84% probability, the word that is most likely follows ‘I really like’ is ‘gooning’ on reddit”, but even Grok is wildly more sophisticated than that and Grok is terrible.

              Edit: And also I really like your take at the start of this thread: user error is a pretty huge problem in this space.

              • Vlyn@lemmy.zip · 1 month ago

                The training is sophisticated, but inference is unfortunately really a text prediction machine. Technically token prediction, but you get the idea.

                For every single token/word. You input your system prompt, context, user input, then the output starts.

                The

                Feed the entire context back in and add the reply “The” at the end.

                The capital

                Feed everything in again with “The capital”

                The capital of

                Feed everything in again…

                The capital of Austria

                It literally works like that, which sounds crazy :)

                The only control you as a user can have is the sampling, like temperature, top-k and so on. But that’s just to soften and randomize how deterministic the model is.

                Edit: I should add that tool and subagent use makes this approach a bit more powerful nowadays. But it all boils down to text prediction again. Even the tools are described per text for what they are for.
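The loop described above can be sketched in a few lines of Python. The “model” here is a hypothetical lookup table standing in for a neural network; the shape of the loop is the point — score candidate next tokens, sample one (temperature controls how deterministic that is), append it, and feed the whole context back in:

```python
import math
import random

def toy_model(context):
    # Hypothetical scoring table standing in for a trained network:
    # maps the last token to "logits" over candidate next tokens.
    table = {
        "The": {"capital": 5.0, "cat": 1.0},
        "capital": {"of": 5.0},
        "of": {"Austria": 4.0, "France": 3.0},
        "Austria": {"is": 2.0},
    }
    return table.get(context[-1], {"<eos>": 1.0})

def sample(logits, temperature=1.0):
    # Softmax with temperature: lower temperature -> sharper distribution,
    # i.e. more deterministic output. This is the knob users get to turn.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    r, acc = random.random() * sum(weights.values()), 0.0
    for tok, w in weights.items():
        acc += w
        if acc >= r:
            return tok
    return tok

def generate(prompt, max_tokens=5, temperature=0.1):
    context = prompt.split()
    for _ in range(max_tokens):
        tok = sample(toy_model(context), temperature)
        if tok == "<eos>":
            break
        context.append(tok)  # the ENTIRE context is fed back in every step
    return " ".join(context)
```

With a low temperature, `generate("The")` walks out “The capital of Austria is” one token at a time, re-reading everything it has written so far on each step. A real LLM replaces the lookup table with a transformer over the full context, but the inference loop has this same shape.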

                • dream_weasel@sh.itjust.works · 21 days ago

Unless that’s how people are designing front ends for models, it literally DOESN’T work like that. It works like that until you finish training an embedding model with masking-related tasks, but that’s the tip of the iceberg. The input vector, after being tokenized, is ingested wholesale. Now there’s sometimes funny business to manage the size of a context window effectively, but this isn’t that, unless you’re home-rolling and caching your own inputs or something before you give them to the model.

  • Crozekiel@lemmy.zip · 1 month ago

AI is actively destroying the environment and harming people. Data centers have been caught using methane burner generators (which are banned for use by the EPA), which significantly increase health risks for nearby residents (cancer and asthma rates are already markedly higher). Then you have the ridiculous effects it is having on computer hardware markets, energy and water infrastructure, and prices.

    Then after all of that, the AI themselves are hallucinating somewhere in the neighborhood of 25% of the time, and multiple studies have found that people that use them regularly are losing their own skills.

    I can’t figure out why people would choose to use them. I can’t figure out why programming is the one place where people that might have otherwise been considered experts in the field are excited to use them. Writers, artists, lawyers, doctors, basically every other professional field that AI companies have suggested these would be good for, they get trashed by experts in the fields for making garbage. I have a hard time believing the only thing AI can do well is write code when it sucks so badly at everything else it does. Does development suck this much? Do developers have so little idea what they are doing that this seems like a good idea?

    • antihumanitarian@lemmy.world · 1 month ago

If you’re honestly asking, LLMs are much better at coding than any other skill right now. On one hand there’s a ton of high-quality open source training data that was appropriated; on the other, code is structured language, so it’s very well suited for what models “are”. Plus, code is mechanically verifiable. If you have a bunch of tests, or have the model write tests, it can check its work as it goes.

      Practically, the new high end models, GPT 5.4 or Claude Opus 4.6, can write better code faster than most people can type. It’s not like 2 years ago when the code mostly wouldn’t build, rather they can write hundreds or thousands of lines of code that works first try. I’m no blind supporter of AI, and it’s very emotionally complicated watching it after years honing the craft, but for most tasks it’s simple reality that you can do more with AI than without it. Whether it’s higher quality, higher volume, or integrating knowledge you don’t have.

      Professionally I don’t feel like I have a choice, if I want to stay employed in the field at least.

      • Venia Silente@lemmy.dbzer0.com · 1 month ago

        Professionally I don’t feel like I have a choice, if I want to stay employed in the field at least.

        On the contrary!

        I’ve seen quite a number of “AI cleanup specialist” job offerings so far, and even a few consulting positions on training juniors away from using AI in development.

        (No, I have not seen any position open on training management away from using AI…)

  • PerogiBoi@lemmy.ca · 30 days ago

    Aaaaand just uninstalled lutris. There are many other ways to install windows games and applications that aren’t ensloppified.

  • r1veRRR@feddit.org · 1 month ago

    From his perspective, he’s investing his free time and likely money into a project for people that are 99% of the time just leechers, as in they never contribute back and only complain.

Now he has a tool that he feels helps him with all that FREE labor he is doing for everyone, and the very same people now want to tell him how to do the FREE labor he does for them.

    I completely understand being pissed off by that.

    • qaeta@lemmy.ca · 1 month ago

      I mean, a reasonable person would choose to stop rather than becoming an unethical egotistical fuckwit…

    • Auli@lemmy.ca · 1 month ago

So he is no longer maintaining it, and Claude is. And what bullshit, choosing a company that doesn’t work with the military. Does he know what the military is using right now, at this very instant, for AI?

      • Stitch0815@feddit.org · 1 month ago

        wtf are you talking about

        AI is a tool

        Claude does not take over any maintainer position.

You are just inventing facts to be angry about. Don’t use Lutris if you disagree with him.

        But don’t harass the dev

  • Evotech@lemmy.world · 30 days ago

    Moral of the story is don’t let Claude do commits. It insists on crediting itself

Also stop harassing open source developers.

Also be transparent when you have vibecoded commits. There’s no reason to hide it. Just say that parts of your codebase are vibecoded or coded with AI assist, and those who don’t like it can fork it or use something else.

    • Tony Bark@pawb.social (OP) · 30 days ago

      Also be transparent when you have vibecoded commits. There’s no reason to hide it.

I find it rather ironic that the one thing they are transparent about is covering up the evidence that proves it was vibecoded. Apparently, they never heard of the Streisand effect.

  • Katana314@lemmy.world · 1 month ago

    To admit some context: My company has strongly encouraged some AI usage in our coding. They also encourage us to be honest about how helpful, or not, it is. Usually, I tell them it turns out a lot of garbage and once in a while helps make a lengthy task easier.

    I can believe him about there being a sweet spot; where it’s not used for everything, only for processes that might have taken a night of manual checks. The very real, very reasonable backlash to it is how easily a poor management team or overconfident engineer will fall away from that sweet spot, and merge stuff that hasn’t had enough scrutiny.

Even Bernie Sanders acknowledged on the Senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives. It’s just sad that in 99.9% of cases, we’re not anywhere near that perfect world.

    I don’t totally blame the dev for defending his use of AI backed by industry experience, if he’s still careful about it. But I also don’t blame people who don’t trust it. It’s kind of his call, and if the avoidance of AI is important enough to you, I’d say fork it. I think it’s a small red flag, but not nearly enough of one for me to condemn the project.

    • underisk@lemmy.ml · 1 month ago

      Even Bernie Sanders acknowledged on the senate floor that in a perfect world, where AI is owned by people invested in world benefit, moderate AI use could improve many people’s lives.

I don’t think you should make a claim like this while AI is being heavily subsidized and burning VC cash to stay afloat. The truth is, whatever value it may add to such a society might actually be completely negated by its resource costs. Is even “moderate” AI use ecologically or economically sustainable?

    • deadcade@lemmy.deadca.de · 1 month ago

      It’s still made by the slop machine, the same one that could only be created by stealing every human made artwork that’s ever been published. (And this is not “just one company”, every LLM has this issue.)

      Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.

      If the developer isn’t able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.

      • bookmeat@fedinsfw.app · 1 month ago

        A few years ago we were all arguing about how copyright is unfair to society and should be abolished.

        • wirelesswire@lemmy.zip · 1 month ago

          Sure, but these same companies will drag you to court and rake you over the coals if you infringe on their copyrights.

      • Ganbat@lemmy.dbzer0.com · 1 month ago

        If the developer isn’t able to keep up, they should look for (co-)maintainers.

        Same energy as “Just go on Twitter and ask for free voice actors,” a la Vivziepop. A lot of people think this kind of shit is super easy, but realistically, it’s nearly impossible to get people to dedicate that kind of effort to something that can never be more than a money/time sink.

        • deadcade@lemmy.deadca.de · 1 month ago

Absolutely true, but there’s one clear and obvious way: drop support for the project yourself.

          If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

          FOSS maintainers don’t owe anyone anything. What some developers do is amazing and I want them to keep developing and maintaining their projects, but I don’t fault them for quitting if they do.

          • P03 Locke@lemmy.dbzer0.com · 1 month ago

            XKCD, of course

            If a FOSS project is archived/unmaintained, for a large enough project, someone else will pick up where the original left off.

            No, they won’t. This line of thinking is how we got the above.

            Their line of work is thankless, and nobody wants to do a fucking thankless job, especially when the last maintainer was given a bunch of shit for it.

  • jumjummy@lemmy.world · 1 month ago

    The AI hate crowd on Lemmy is pretty insufferable. Same folks would be complaining about Cloud tech back in the day.

    Know the limits of AI and use it appropriately. Completely shunning AI is just silly.

    • Tony Bark@pawb.social (OP) · 1 month ago

The maintainer openly admitted to suspecting this would become an issue and hid the co-authorship, promptly wishing the “haters” good luck finding the AI-generated code. Who are the insufferable ones here again?

    • Auli@lemmy.ca · 1 month ago

I don’t care how many years of coding you have: if you’re using AI to clear your backlog, you are not going to review everything. And I’m sick of people saying “I’m different, I’m using AI responsibly.” We all know that eventually there will be a bug put in by AI.

  • nialv7@lemmy.world · 1 month ago

You can criticise them, but ultimately they are an unpaid developer making their work freely available for the benefit of us all. At least don’t harass the developer.

    • TrickDacy@lemmy.world · 1 month ago

      You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.

      • Zos_Kia@jlai.lu · 1 month ago

        It’s typical of dev burnout, though. Communication starts becoming more impulsive and less constructive, especially in the face of conflicts of opinions.

I’ve seen it play out a few times already. A toxic community will take a dev who’s already struggling, troll them, screenshot their problematic responses, and use that in a campaign across relevant places such as GitHub, Reddit, Lemmy… Maybe add a little light harassment on the side, as a treat. It’s a fun activity! The dev spirals, posts increasingly unhinged responses and often quits as a result.

        The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.