No, that's backwards--if you ask the model to find candidates who are likelier to succeed, the model will tend to see through the biases that kept deserving people out because that's what you asked it to do.
If you ask the model to replicate the flawed acceptance decisions from history, it will do just that, biases and all. Even if you leave out the variable that caused the bias, in a rich dataset the model has a good chance of effectively inferring it from correlated features.
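To make that proxy point concrete, here's a toy sketch (entirely synthetic data, every number and feature name made up for illustration, nothing about any real system): a model trained to reproduce biased admit/reject decisions keeps discriminating even when you drop the group column, because a correlated feature stands in for it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)            # 1 = historically disadvantaged group (hypothetical)
ability = rng.normal(0, 1, n)            # true aptitude, identically distributed across groups
proxy = group + rng.normal(0, 0.5, n)    # e.g. zip code / school, correlated with group

# Historical decisions: driven by ability, but with a penalty applied to group 1.
admit = (ability - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train the model to replicate those decisions WITHOUT the group column.
clf = LogisticRegression().fit(np.column_stack([ability, proxy]), admit)

# Score applicants with identical ability, differing only in the proxy feature.
for g in (0, 1):
    X = np.column_stack([np.zeros(1000), g + rng.normal(0, 0.5, 1000)])
    print(f"group {g}: predicted admit probability at equal ability = "
          f"{clf.predict_proba(X)[:, 1].mean():.2f}")
# Group 1 comes out noticeably lower despite identical ability: the proxy lets
# the model reconstruct exactly the bias it was asked to reproduce.
```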
If your selection process itself induces a spurious correlation between comparatively poor prospects for success and some other attribute--say, you admit under-qualified people on the basis of race and then train on that data--then the model will discriminate on the basis of the spurious correlation you built in.
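Same kind of toy setup for that lowered-bar case (again, everything synthetic and hypothetical, not a claim about any real admissions process):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 50_000
group = rng.integers(0, 2, n)                 # 1 = group admitted at a lower bar
ability = rng.normal(0, 1, n)                 # identically distributed across groups
score = ability + rng.normal(0, 0.5, n)       # observed qualification (tests, GPA, ...)

# Selection: group 1 effectively needs one standard deviation less ability to get in.
admit = (ability + 1.0 * group + rng.normal(0, 0.3, n)) > 1.0
success = (ability + rng.normal(0, 0.5, n)) > 0.5

# Train on the admitted cohort to predict success, group column included.
clf = LogisticRegression().fit(np.column_stack([group, score])[admit], success[admit])
print("learned coefficient on group:", round(clf.coef_[0][0], 2))
# It comes out negative: because of the lowered bar, group membership really was
# predictive of worse outcomes in the training data, and the model will now
# penalize every new applicant from that group for it, even at equal scores.
```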
If historical biases perniciously diminished the prospects of a category of people, the model will correctly tell you that their prospects were worse. But if the bias has since changed, the model will be wrong. And in any case, you might decide that the right thing to do is to admit people whose prospects are diminished by the bias they experience, because being "likely to succeed" is not the sole criterion by which admission should be decided.
Anyway, your characterization of the ways in which AI-driven recruitment is problematic is wrong. There are potential problems; they're just (typically) not the ones you described.
Again, if you train an AI on data where some group was under-selected relative to its capability, and you ask it to predict success rather than "did they get in", the model is extremely likely to tell you that the group was under-selected and to predict that more of them should be admitted.
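Toy version of that too (synthetic data once more, all parameters invented): hold the intake size fixed and compare what a success-trained model recommends against what the biased history actually did.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000
group = rng.integers(0, 2, n)                 # 1 = historically under-selected group
ability = rng.normal(0, 1, n)                 # identically distributed across groups
score = ability + rng.normal(0, 0.5, n)       # observed qualification

# Biased history: group 1 effectively needed one extra standard deviation of ability.
admit = (ability - 1.0 * group + rng.normal(0, 0.3, n)) > 0
success = (ability + rng.normal(0, 0.5, n)) > 0.5     # depends on ability only

# Train on the admitted cohort, with success as the target rather than the admit decision.
X_all = np.column_stack([group, score])
clf = LogisticRegression().fit(X_all[admit], success[admit])

# Let the model pick the same total number of admits from the full applicant pool.
pred = clf.predict_proba(X_all)[:, 1]
recommended = pred >= np.quantile(pred, 1 - admit.mean())
for g in (0, 1):
    print(f"group {g}: historical admit rate {admit[group == g].mean():.2f}, "
          f"model-recommended rate {recommended[group == g].mean():.2f}")
# The under-selected group's recommended rate jumps well above its historical one:
# trained on success, the model effectively reports that the old process turned
# away people who would have done fine.
```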