A lot of structure is preserved. It's a lossy embedding--note that this is standard terminology, as dimensionality reduction techniques are often called "embeddings" (LLE, for instance). It's obviously not an injection. The only way to embed your actual in-practice world model into something is to simulate your brain (and probably a good deal of physical sensations so your brain works normally).
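If the terminology is unfamiliar, here's a minimal sketch of a "lossy embedding" in the dimensionality-reduction sense. The choice of scikit-learn's LocallyLinearEmbedding is just illustrative, and the data and parameters are arbitrary:

```python
# Sketch of a lossy "embedding" in the dimensionality-reduction sense.
# The method (LLE), data, and parameters are arbitrary illustrative choices.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))   # 500 points in 10 dimensions

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)         # same 500 points, now in 2 dimensions

# Local neighborhood structure is (approximately) preserved, but the map is
# lossy and not injective in practice: distinct inputs can land essentially on
# top of each other, and the original 10-D coordinates can't be recovered.
print(X.shape, "->", Y.shape)    # (500, 10) -> (500, 2)
```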
The question is: what aspects of the world, or of our models of the world, end up represented in language to a sufficient extent that LLMs implicitly possess that model by virtue of how we talk about it?
In my head I have a physical model of various material properties of objects, how they react under different forces and environmental conditions, and so on.
LLMs tend to be bad at any sort of nontrivial application of such models--presumably because we don't talk very much about throwing packing peanuts and pool noodles at wine glasses and windows.
For instance, they're bad at spatial relationships that we don't normally talk about (e.g. queries about the positioning of elements in flags), and bad at understanding mechanical properties in physically nontrivial situations (e.g. queries about throwing various objects at wine glasses, windows, and glass spheres). It's very, very clear from the random flaws in the answers that the "model" isn't anything like our model.
On the other hand, we talk a lot about finding shade, so LLMs are generally pretty good at telling you that trees are good at making shade while flagpoles aren't. There is no sense in which this isn't a world model. The entire point of a world model is to make predictions about the world given what you know about the world. LLMs absolutely do that. They do a weirdly inhuman job of it, but they can get a lot of things right (again, if we've talked about the relevant pieces of the world: they can "reason" as long as we use compatible language in how we talk about things), and they can get more right than you might initially think if you probe them the right way--in some sense an LLM "has" more of the model than it "knows" how to use.
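For what it's worth, "probing" can be as mundane as scripting a batch of questions against a chat API and reading where the answers go wrong. A minimal sketch, assuming the OpenAI Python client and an arbitrary model name (both just illustrative; swap in whatever you actually use):

```python
# Sketch of probing an LLM's implicit physical "model". The model name and
# prompts are illustrative assumptions, not a record of the experiments above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

probes = [
    "Will throwing a packing peanut at a wine glass break the glass? Answer yes or no, then explain briefly.",
    "Will throwing a pool noodle at a window break the window? Answer yes or no, then explain briefly.",
    "On the flag of France, which color stripe is adjacent to the flagpole side?",
    "Is a flagpole or an oak tree better for finding shade at noon, and why?",
]

for q in probes:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice of model
        messages=[{"role": "user", "content": q}],
    )
    print(q)
    print(" ->", resp.choices[0].message.content.strip(), "\n")
```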
But there's no reason to assume that, because a world model is embedded in the sum total of human language as learned by an LLM, the LLM is conscious. After all, when we're asleep we still have a world model, but we're not conscious. So it's not like the former is sufficient for the latter.