Luckily, I have local AI.
And you should too!
With these RAM prices?
Rather live without AI.
I run it on hardware I already have for other purposes; the data stays with me, offline. I even have a portable solar panel (though in practice I use the wall socket).
Doesn’t AI need like 96 gigs of RAM to be comparable in quality (or lack thereof, depending on how you view it) to the commercial options?
Qwen3 30B A3B, for example, is brilliant for its size, and I can run it on my 8 GB VRAM + 32 GB RAM system at around 20 tokens per second. For lower-powered systems, Qwen3 4B plus a search tool is also insanely good for its size and fits in less than 3 GB of RAM or VRAM at Q5 quantization.
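A rough sanity check on those numbers, as a back-of-the-envelope sketch: model weight size is roughly parameter count times bits per weight. The 5.5 bits/weight figure below is an assumed ballpark for Q5-class GGUF quantization, not an exact value, and it ignores KV cache and runtime overhead.

```python
def quantized_size_gb(params_billion: float, bits_per_weight: float = 5.5) -> float:
    """Approximate quantized weight size in GB: params * bits/weight / 8.

    5.5 bits/weight is an assumed rough figure for Q5-class quants;
    KV cache and runtime overhead are not included.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"Qwen3 4B  @ ~Q5: ~{quantized_size_gb(4):.2f} GB")  # comfortably under 3 GB
print(f"Qwen3 30B @ ~Q5: ~{quantized_size_gb(30):.1f} GB") # more than 8 GB of VRAM,
                                                           # so part of it spills into system RAM
```

Which is consistent with the 4B model fitting in under 3 GB, and the 30B model needing the 8 GB VRAM + 32 GB RAM split.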
Which model do you run, and how? I tried Ollama a few months back, but it was still quite poor.
What for? I can’t think of a single problem I have in my life where the answer is AI.
I am very forgetful, and googling takes forever. So if the answer doesn’t sound like BS, I just accept it. If the stakes are higher, I google its references.
???
Hey, so I’ve got this bridge for sale…
Username checks out