It's less clear than it seems. When an LLM has some internal unobserved state that corresponds to an observable focus of multiple attention heads on the key parts of a text that are necessary for "understanding" what it means, is that "semantics (content, based on understanding)"?
They do do that, at least often enough for people to write papers about it.
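For concreteness, here is roughly what "observable focus of attention heads" means in practice--a minimal sketch, assuming PyTorch and the Hugging Face transformers library are available; the model and sentence are just illustrative, not taken from any particular paper:

    # Inspect which tokens each attention head focuses on for an ambiguous pronoun.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

    sentence = "The trophy didn't fit in the suitcase because it was too big."
    inputs = tokenizer(sentence, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    # outputs.attentions is a tuple over layers, each of shape [batch, heads, seq, seq]
    attn = torch.stack(outputs.attentions)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

    # For the token "it", show which token each head in the last layer attends to most.
    it_idx = tokens.index("it")
    last_layer = attn[-1, 0]  # [heads, seq, seq]
    for head, weights in enumerate(last_layer[:, it_idx]):
        focus = tokens[int(weights.argmax())]
        print(f"head {head}: 'it' attends most to '{focus}'")

Whether a head that reliably lands on the right antecedent counts as "content, based on understanding" is exactly the open question.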
It looks the way one would expect it to look if the "content, based on understanding" aspect has generalized--maybe not in exactly the same way that we do it, but it's certainly something that abstracts from particulars to highly derived generalities. How different is that from what we do internally? Really hard to say. Is that the hard part, such that we just need to add the right guard-rails, heuristics, and memory that are in our brains but not yet in LLMs? Also really hard to say.
If I were to guess, I would guess that LLMs in fact do things quite, quite differently than we do. This hunch is motivated by the Church-Turing thesis, which in this context basically suggests "there is more than one way to do it", where "it" is any computational task. I bet we didn't rediscover our own way; we found one of very many others. But our ignorance of the mechanism is so extensive that I don't think we're safe venturing beyond the most tentative of guesses--and even those shouldn't lead us to conclude "no, it's a dead end, can't get to AGI from that". There's presumably more than one way to do AGI, too.
Regarding the computation-is-not-the-same-as-being-physical point: it's true that panpsychism a la Goff and Rosenberg is a formal possibility, but if they were right, that would only distinguish people from p-zombies (and even that is somewhat arguable). The entire point of p-zombies is that you get all the creative conjectures and criticism and whatnot that you get from ordinary humans: you actually can't tell them apart externally. But the issue here is whether or not we can build a p-zombie.
I am suspicious of tendencies in either direction--both the view that physical/chemical substrates are essential and the view that they don't matter at all. Logically, however, it doesn't seem tremendously likely that you can't build an information-only p-zombie. It's remotely possible that it's just too difficult--that there are necessary state changes which are extremely hard to recast in terms of information processing, for example. But that position was a lot more compelling before generative AI.