

Working on it! I should have an update later today




Working on it! Will have an update for you later today


Very cool! I’m actually interested in helping with testing and porting to other architectures. I made a comment on the open issue for ARM support, happy to open a PR if you’re interested


Basically for fun, yes, gamifying development. Anyhoo, yeah, the idea is you’d encounter bugs when discovering bugs or working on bugfix branches, and other types under a few other circumstances.
At the moment the only real rewards are buddies: you can assign specific ones to specific sessions or threads and they’ll chime in like the OG buddy system, and they level up based on goals accomplished. No real effect on the code itself, just emergent complexity from whatever the user is up to.
Just FYI, I’m not using LLMs for art apart from placeholders, at least till I can hire someone or dig up and dust off my old copy of CS6 to start making my own assets. (Inkscape drives me nuts, and nobody has made a PhotoGIMP-like option for Illustrator yet.)
All the software I’m designing is deterministic and local-first, with LLMs and cloud dependencies as fallbacks or gap fillers. And nothing critical happens without a human to review it, etc.


An angel’s dream sounds like a perfectly satisfying ending to this nightmare


I mean, electrically all of those things will just attenuate amplitude, not really affect the signal’s oscillations, which are actually what sound is …
All they’re doing is effectively adding a small series resistance to the signal, which will just lower the volume. Adding an amplifier will fix that
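To put rough numbers on that claim: a small series resistance in front of a load forms a voltage divider, so the only effect is a fixed drop in level. A minimal sketch, with made-up illustrative values (0.5 Ω of contact/cable resistance into an 8 Ω speaker):

```python
import math

def divider_attenuation_db(r_series_ohm: float, r_load_ohm: float) -> float:
    """Level drop (in dB) from a series resistance feeding a load.

    The extra resistance R_s and the load R_L form a voltage divider:
        V_out / V_in = R_L / (R_L + R_s)
    which scales amplitude only; the signal's frequency content is untouched.
    """
    ratio = r_load_ohm / (r_load_ohm + r_series_ohm)
    return 20 * math.log10(ratio)

# Hypothetical numbers: 0.5 ohm of junk resistance into an 8 ohm speaker.
loss = divider_attenuation_db(0.5, 8.0)
print(f"{loss:.2f} dB")  # roughly half a dB of volume loss, nothing more
```

About half a decibel, i.e. barely audible, and trivially recovered with a bit of amplifier gain.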
Thank you for the several new additions to my own list xD
Great list! Why no kiwix? Seems right up your street
This is the nightmare scenario for any team that built their whole workflow around a cloud API. No warning, no clear reason, no real support path. Just a Google form and 60 people sitting on their hands.
The uncomfortable truth is that “terms of service” at this scale is just “we can pull the rug whenever.” Anthropic isn’t unique here either. OpenAI, Google, all of them have the same opaque enforcement problem. It’s a big part of why I’ve been building tools that run on local inference by default. Not because cloud is bad, but because your users shouldn’t be one vague policy complaint away from a complete outage.
Local gives you continuity even when the upstream disappears.