I don't think that's necessarily the case. More likely they trained it on a dataset that emphasized high diversity in the outputs even when the inputs had low diversity, and the model dutifully hallucinated diversity everywhere.
Generative AI is really good at hallucinating. I find it pretty funny that, with all the worry about hallucinations, they ended up training it to hallucinate more.