Do we, though, if we set the goal-posts up in advance and then ask humans and LLMs to meet them?
My read is that almost all of the "good reasons" leaned very heavily on humans having radically different behavioral capacities from any straightforward algorithmic system we could think of.
Searle's Chinese Room (which, incidentally, is a particular implementation of a p-zombie) established that the man following the rules does not "understand" Chinese. Quite right--he doesn't; nor does the GPU on which the transformer is computed. Nor, if we were simulated, would the physics engine computing us. Understanding is emergent, not part of the basic mechanical or computational operations.
And yet there is clearly something that serves the same observable function as understanding. So perhaps the rules themselves, plus the state they act on and update, are what does the understanding?
Preposterous! one might think. We understand, and we're not equivalent to a bunch of internal rules applied to internal states because we're actually...um...actually...well...we don't really know.
This is why I think we're being far too hasty in concluding anything with confidence about understanding or semantics. It's fine to make guesses, but we should be appropriately cautious given how little we actually know.