@[email protected] @[email protected] The terminology in this area is getting unhelpfully overloaded, it has to be said. I recently nearly skipped over a genuinely interesting piece of machine learning (sorry!) research because it had 'AI' in the title and described something LLMs would obviously be terrible at - but it turned out they weren't using LLMs at all.
> I really mostly saw people correctly take it to just mean "this changes in response to information"
It's interesting to see you (Cat) say this, as one of the issues I have with LLMs is that people assume they will keep learning, when in reality the 'learning' already happened at the model training stage (with bonus confusion over whether or not any context is being stored), and the thing you're actually using doesn't change in response to information. That's true of a lot of machine learning tools in general, not just LLMs, but LLMs have the added disguise of tracking context within a conversation (and frequently now an internet connection), so the interface almost implies that longer-term learning is happening (despite not having 'learning' in the name :facepalm:).
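A toy sketch of what I mean (entirely hypothetical class, not any real library's API): the model's parameters are frozen after training, and the only thing that changes during a conversation is the accumulating context, which vanishes when you start a fresh one.

```python
class FrozenChatModel:
    """Stand-in for a deployed LLM: weights fixed, no updates at inference."""

    def __init__(self):
        # The 'learning' happened here, once, at training time.
        self.weights = {"trained": True}  # never modified again
        self.context = []  # per-conversation transcript, not long-term memory

    def chat(self, user_message: str) -> str:
        # The only thing that changes between turns is the context window.
        self.context.append(("user", user_message))
        reply = f"(reply conditioned on {len(self.context)} context turns)"
        self.context.append(("model", reply))
        return reply


model = FrozenChatModel()
model.chat("My name is Sam.")
model.chat("What's my name?")  # can 'remember' only via the context list
assert model.weights == {"trained": True}  # weights unchanged: no learning

fresh = FrozenChatModel()  # new conversation: the apparent memory is gone
assert fresh.context == []
```

The interface makes the within-conversation memory look like learning, but nothing in `weights` ever moves.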