I'm not sure it's useful to conceptualize this as understanding. It is, for certain, a linguistic computation. And in contrast to our previous, rather laughable attempts to create a linguistic response function, the current ones are getting really good.
But being able to hop between sensible places in language-space, though incredibly useful, is not necessarily sufficiently homologous to what we usually call "understanding" when it occurs in people.
Of course, since you can describe a Turing machine in language, computations on language are in principle Turing-complete (modulo the usual problem of finite representational capacity), so in a trivial and boring sense, anything that is computable is computable linguistically.
In particular, I don't think it is obvious that linguistic context is a rich enough representation of, say, actual reality for us to say that the system "understands". It does compute mappings that correspond to our own understandings once those have been translated into language, but I think the jury is still out on whether that is enough to justify the word "understand".
The reason to be careful is the same reason we try to avoid anthropomorphizing animal behavior too much: it can be misleading to use terms that prompt us to reason in the frame of human behavior when what we're trying to understand is only partially similar. (Then again, it wouldn't be the first time we used a superficially similar term that was somewhat helpful for intuition but simultaneously misleading: "neural network", "hallucination", etc.)