

Step 1: get good at punching.




Sure, nature writ large is resilient and adaptable.
Individual species die off all the time. Sometimes for stupid reasons.


Unless that’s how people are designing front ends for models, it literally DOESN’T work like that. It works like that while you’re training an embedding model on masking-related tasks, but that’s the tip of the iceberg. The input, after being tokenized, is ingested wholesale. Now, there’s sometimes funny business to manage the size of a context window effectively, but this isn’t that, unless you’re home-rolling and caching your own inputs or something before you give them to the model.
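To make the “ingested wholesale” point concrete, here’s a minimal sketch. The vocabulary and tokenizer are made up for illustration (real models use learned subword vocabularies, not a whitespace split), but the shape of the idea is the same: the entire prompt becomes one sequence of token IDs, and the model gets all of it in a single pass.

```python
# Toy illustration only: a made-up vocabulary and a whitespace
# "tokenizer". The point is that every word in the input maps to a
# token ID and the full sequence goes to the model at once -- nothing
# is selectively skimmed or dropped.

VOCAB = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

def tokenize(text: str) -> list[int]:
    """Map every whitespace-separated word to a token ID; unknown words
    fall back to <unk>, but no word is skipped."""
    return [VOCAB.get(word, VOCAB["<unk>"]) for word in text.lower().split()]

prompt = "The cat sat on the mat"
token_ids = tokenize(prompt)
print(token_ids)  # [1, 2, 3, 4, 1, 5] -- one ID per input word
```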


Bzz bzz!
That is incorrect.
Dream_weasel?
“I’d like to solve the puzzle: Slipping in mustard and crying”


I feel like there needs to be a dedicated post (and I don’t want to write it, but maybe I eventually will) that outlines what a model really is. It is not just a statistical text prediction machine unless you are being so loose with the definition of “statistical” that it doesn’t even mean anything anymore.
A decent example of a statistical text prediction machine is the middle word suggested by your phone when you’re using the keyboard. An LLM is not that.
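For contrast, here’s roughly what that keyboard-style predictor amounts to (a sketch with a made-up toy corpus, not any phone’s actual implementation): count which word most often followed the previous word, and suggest it. That’s it. That’s the whole “statistics”.

```python
from collections import Counter, defaultdict

# A genuinely "statistical" next-word predictor: tally bigram counts
# in a corpus and suggest the most frequent follower of the last word.
corpus = "i really like pizza . i really like coffee . i really like pizza".split()

followers: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word: str) -> str:
    """Return the word most often observed right after `word`."""
    return followers[word].most_common(1)[0][0]

print(suggest("like"))  # "pizza" (seen twice, vs. "coffee" once)
```

No embeddings, no attention, no training loop; just a frequency table. That is the thing an LLM is not.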
In the most general terms, this kind of language model tokenizes a corpus of text based on a vocabulary (which is probably more than just the words in the dictionary), then uses an embedding model to translate those tokens into vectors of semantic “meaning” that minimize loss under a bidirectional encoding (probably). That is then trained against a rubric for one or more topic-area questions, retrained for instruction-following and explainability, retrained with reinforcement learning from human feedback to provide guardrails, and retrained again to make use of supplemental materials that weren’t part of the original training corpus (retrieval-augmented generation). Then it’s distilled, then probably scaled and fine-tuned against topic areas of choice (like coding or Korean or whatever), and maybe THEN made available to people to use. There are generally even more parts to curriculum learning than that, but it’s a representative-ish start.
My point being that, yes, it would be nuts to pose ANY question to a predictor that says “with 84% probability, the word most likely to follow ‘I really like’ is ‘gooning’ on reddit”, but even Grok is wildly more sophisticated than that, and Grok is terrible.
Edit: And also I really like your take at the start of this thread: user error is a pretty huge problem in this space.


Supposing the prices they charge are still less than what you would pay for the convenience of purchasing a product with no extra effort, why would you switch?
I have myself had aspirations to buy fewer things from Amazon. However. Even including stuff like this, I am happy to pay $10 extra to not have to dick around.
I hope Amazon has to pay money for this and that it hurts their business model, but as a customer they are still scratching my itch 2 times out of 3.


I would like to remove the word “slop” from common speech for overuse. Sorry for your jack stand experience.


Red alert! Red alert! Danger!
Uh yes. If you can’t touch type, prioritize it. The speedup alone is worth it.


I see your pedantry and raise you my own:
There are absolutely languages nobody speaks in your region even if you live in a city.
I therefore choose sign language or Assembly.
So I find it to actually be a really helpful “barometer” of language skill. When I’m in France, if I go in a store and conduct a full conversation in French, I know my accent, word choice, and general language skill are good. If halfway through the exchange we switch to English, I know I either made an egregious language error or I started sounding like an American. If the conversation switches to English right away, I either made a critical language mistake OR I just happened across a very competent English speaker.


And I’m sitting here wondering how I never heard he died in the first place. Before a few minutes ago I would have said he’s still serving time in Siberia.
I’m honestly not much of a belt buckle guy so it probably wouldn’t get as much use.
Get out of here BBB!
I want the shit out of this pillow


You can think whatever you like. It doesn’t seem cut and dried, even given the well-known and overwhelming prevalence of Texas honor killings and of alcoholics overwhelmed by 2 glasses of wine.
The facts are indistinguishable from those of just an inexperienced idiot gun owner.


Get a better ad blocker, I don’t pay for BBC and I can see the whole thing. Also don’t be a twat, there’s no need for it.


You can believe whatever you like, but the title of the article should match the content, and you should read the content before you decide. It sounds like you did read it so good on you.
Based on my read of it, there’s definitely enough reasonable doubt that I’m not comfortable saying this guy murdered his daughter based on the content provided. I updated my original comment to say why.


The article literally says it’s a Glock 9mm. It also says two 500 mL boxes, of which he claims to have had one. That’s like 2/3 of a bottle, not a box of Franzia. I appreciate being a lifelong shooter (me too), but this was the dude’s first gun, and he’s a British immigrant as an adult. All of it taken together, yeah, seems like reasonable doubt to me.
You could find the details by reading the whole article.
Thought this was c/cartographyanarchy for a second.