It's particularly interesting to note, with large language models, that it is possible to play a quite decent game without wielding words as tools, but merely by miming the game. We learn it because we care, because we wield language as a tool; but statistical methods care not, and learn anyway.
This is perhaps the best possible confirmation of words getting their meaning from their use: you don't need to feed dictionaries into LLMs in order for them to have a masterful command of how to structure language at that level.
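The point can be made concrete with a toy version of the same statistical idea. The sketch below is a hypothetical illustration, not how any production LLM actually works: it builds word vectors purely from co-occurrence counts, so that words used in similar contexts end up with similar vectors, with no dictionary anywhere in the pipeline.

```python
from collections import Counter, defaultdict
from math import sqrt

# A toy corpus: no definitions anywhere, only usage.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog . "
    "the dog chased the cat ."
).split()

# Count co-occurrences within a +/-2 word window.
window = 2
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            vectors[word][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" come out similar because they occur in the same
# contexts, not because anything told the program what either word means.
print(cosine(vectors["cat"], vectors["dog"]))  # relatively high
print(cosine(vectors["cat"], vectors["on"]))   # lower
```

Nothing in this little program knows what a cat is; similarity falls out of usage alone, which is the whole point.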
For instance, ChatGPT-4 does a pretty decent job when asked to explain this notoriously difficult quote from Sellars (including sensible explanations of the technical terms): "We have seen that the fact that a sense content is a datum (if, indeed, there are such facts) will logically imply that someone has non-inferential knowledge only if to say that a sense content is given is contextually defined in terms of non-inferential knowledge of a fact about this sense content."
What even is that nonsense? Well, ChatGPT can parse it for you, too: "We have seen that (the fact that (a sense content is a datum) (if, indeed, there are such facts)) will logically imply (that someone has non-inferential knowledge) only if (to say that (a sense content is given) is contextually defined in terms of (non-inferential knowledge of a fact about this sense content))."
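If you want to try the experiment yourself, something like the following works against the openai Python package (v1+); the model name, prompt wording, and of course the reply you get back are all illustrative assumptions, and the output will vary from run to run.

```python
# A minimal sketch for reproducing the experiment, assuming the openai
# Python package (v1+) and an OPENAI_API_KEY set in the environment;
# the model name and prompt wording are illustrative choices.
from openai import OpenAI

SELLARS = (
    "We have seen that the fact that a sense content is a datum (if, indeed, "
    "there are such facts) will logically imply that someone has non-inferential "
    "knowledge only if to say that a sense content is given is contextually "
    "defined in terms of non-inferential knowledge of a fact about this sense content."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Bracket the grammatical structure of this sentence, "
                       "then explain it in plain terms:\n\n" + SELLARS,
        },
    ],
)
print(response.choices[0].message.content)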
It's no longer only humans who can play the game.