The reason ChatGPT does human-like reasoning isn't that it has the same architecture or computes the reasoning in the same way, though. It uses human reasoning because we showed it our reasoning, as embedded in language, and said "learn this!". And it has the representational capacity--probably quite different in detail from a brain's, but that doesn't matter--to learn a great deal about the structure of the high-dimensional manifold that is human linguistic expression.
(I second the recommendation for Sejnowski's book as an approachable introduction to the history of ML leading up to the current deep learning epoch. It's too old to fully convey the revolutionary capabilities of carefully-guided LLMs, though, and since it's a personal perspective--albeit from someone as well-placed as anyone to give one--it also shouldn't be treated as comprehensive. With those caveats, it's a good way to dig into the history of artificial intelligence, machine learning, and the connection to neuroscience. (And some neuroscience.))