Have you actually tried getting them to do this? I have. They aren't spectacular, but if you sample the space of completely batty combinations of things, it's astoundingly unlikely that the correct answers they produce are correct because they've seen those exact combinations before. They just get way too much stuff right.
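To make "sample the space" concrete, here's a minimal sketch of that kind of probe. Everything in it is illustrative: the `ask()` stub stands in for whatever model client you use, and the word lists are just examples of deliberately unrelated domains.

```python
import random

# Hypothetical stand-in for your actual LLM client; swap in a real call.
def ask(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model of choice")

# Deliberately unrelated domains: any given pairing is vanishingly
# unlikely to appear verbatim in training data.
objects = ["a cast-iron skillet", "a harpsichord", "a tardigrade", "a Saturn V first stage"]
settings = ["at the bottom of the Mariana Trench", "in lunar gravity", "inside a running MRI machine"]

random.seed(0)  # reproducible sampling of the combination space
for _ in range(5):
    obj, setting = random.choice(objects), random.choice(settings)
    prompt = f"What would happen, physically, to {obj} {setting}? Answer briefly."
    print(prompt)
    # print(ask(prompt))  # uncomment once ask() points at a real model
```

The point of the random pairing is that if the model still reasons sensibly about skillets in MRI machines, memorization is a poor explanation.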
It's not that they have "conceptual understanding" in a human-like sense. It's that key relationships are implicit in how we talk about things, so talking something through does in fact track those relationships (though not always very solidly).