I think this article unfortunately makes a lot of implicit assumptions that, if we are careful, we have to admit are unfounded.
How, for instance, do we know to what extent human behavior and capability are themselves just statistical inference, only of a clunky, stupid, less-general sort? It's not as though we understand how human cognition (or fruit-fly behavior) actually works. We have enough psychophysics experiments to be quite confident that our intuitions can be profoundly misguided when it comes to the workings of our own brains (as if the varied and fanciful theories philosophers have offered about perception were not evidence enough of this).
It is true, of course, that LLMs hallucinate and make various other sorts of errors. But it is also true that humans confabulate and exhibit a wide variety of cognitive biases, and somehow, with this pretty flaky set of capabilities, we nonetheless end up with spectacular displays of reasoning and analysis.
Okay, but how, in detail, do we construct this reasoning and analysis? We can talk about our reasoning in terms of models and induction and logic and so forth... but so can LLMs! Have OpenAI and the other LLM companies actually solved the hard part, so that all that remains is to layer on epistemology, evaluation of generated hypotheses, and a working-memory subsystem? How could we possibly know, given that we're comparing a system whose workings we understand very poorly to another system whose workings we also understand rather poorly?
Arguably, a large share of the computation human brains perform leaves its traces in the statistics of language use, and if LLMs are able to generalize away from specific instances (and they are), then... have they maybe generalized our cognition? If not, how would they produce language that matches the language we produce when we are thinking carefully about things? Is it latent in the model? Well... maybe? Maybe not? How would we know?
Machine learning can learn that the earth is both flat and round (as can humans: we can enjoy the Discworld novels' altered geometry perfectly well, thank you very much), and it can also learn that if you are talking about the real earth, it is a vanishingly low-probability event to emit language claiming it is both flat and round at the same time. It has the capacity to learn that both are simultaneously true (in a sense), but it doesn't use that capacity, because that's not what we ask it to learn. "I created a system that was more powerful than I needed" isn't much of a criticism. (Especially since humans embrace obviously contradictory beliefs all the time, as long as they're not confronted with both at once.)
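To make that concrete, here is a toy sketch with entirely made-up numbers, nothing like how a real LLM actually represents anything: a context-conditioned model can give "flat" high probability in a Discworld context and "round" high probability in a real-earth context, while never making "both at once" a likely thing to say in either.

```python
# Toy sketch only: invented conditional probabilities, not a real language model.
# The point: context-conditioned distributions can hold "flat" and "round"
# simultaneously without ever making the contradiction a likely utterance.
toy_model = {
    "In the Discworld novels, the world is": {"flat": 0.90, "round": 0.05, "both flat and round": 0.001},
    "In reality, the earth is":              {"flat": 0.01, "round": 0.95, "both flat and round": 0.001},
}

def p_next(context: str, continuation: str) -> float:
    """Look up the made-up conditional probability P(continuation | context)."""
    return toy_model[context].get(continuation, 0.0)

for ctx in toy_model:
    for cont in ("flat", "round", "both flat and round"):
        print(f"P({cont!r} | {ctx!r}) = {p_next(ctx, cont):.3f}")
```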
Functionally, LLMs can learn (and arguably have learned) that you can't simultaneously accept two contradictory properties for the same object. It isn't even hard. That isn't the kind of mistake that is commonly made. The much more common kind of mistake is the kind made by undergrads who don't really understand what is going on but are just reasoning linguistically: r = 255, g = 128, b = 0, and we know that area is pi r squared, so a = 204282. (This kind of extremely simpleminded confusion of a red-channel value with a radius isn't something LLMs do, but it illustrates the nature of the problem.)
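Spelled out as code, the slip looks like this (a hypothetical sketch of the undergrad's reasoning; the variable names are mine, and no LLM produced this): the arithmetic is flawless, and the error lives entirely in what the symbol r is taken to mean.

```python
import math

# Hypothetical sketch of the "linguistic" reasoning described above.
# The letter r names a red-channel intensity, but it gets fed to a radius formula.
r, g, b = 255, 128, 0      # r is a color value in 0-255, not a length
area = math.pi * r ** 2    # "area is pi r squared", so a = pi * 255^2
print(round(area))         # 204282: the arithmetic is fine, the link is nonsense
```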
"Wow, you really don't get it" is what I think when I see humans making these (completely incorrect) links. But is "getting it" something profound, or something more straightforward, and what is its nature? Could we do something likewise for LLMs? How would we know?
So, anyway, I don't think we are justified in concluding very much of the sort Chomsky et al. suggest. ChatGPT might be a sophist, and a stochastic parrot (for appropriately generous, multidimensional, deeply cross-correlated stochastic distributions), but we really can't determine whether or not, in very fundamental ways, we ourselves are too.