Being equivalent to a Turing machine is not the same as having "understanding". That would require a decent formalization of what "understanding" is (or could be conceived as), one that coheres sufficiently with our intuitions. The paper is really cool, especially in introducing a usefully applicable notion of length complexity, but because it doesn't provide a definition of understanding of this sort, it can't, even in principle, provide much support for the "understanding" claim.
That you could build a system with understanding out of a Turing machine is already widely assumed; the real question is whether, within practical limits on training etc., we have done it and/or are in some sense knowably "close", or whether we are still extremely far away and/or unable to tell how far we are.