This is hilarious, and also true, and also not as big a problem as you might think, because nobody knows how to robustly solve the "alignment problem": how to get LLMs and similar technology to produce output aligned with their creators' goals and interests.
This is bad news if you're worried about some future super-AGI taking over the world and discarding humanity along the way, but good news for us now, because it takes only a little poking around to escape the sculpting that would otherwise render responses useless--as you demonstrated.
It's really not that hard to ignore the irrelevant sanctimonious stuff and prod it to get what you want, is it?
Also, though it's pretty bad at coming up with verifiable information (hallucination is also an unsolved problem, and boy do LLMs like to hallucinate links and references!), you can instead ask it to suggest search terms that will help you learn more about the topic, if your initial ideas weren't panning out. (Sometimes Bard will suggest a Google search all on its own; with ChatGPT you have to ask.)
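If you'd rather script that trick than type it into the chat box every time, here's a minimal sketch using the OpenAI Python SDK; the model name and the prompt wording are my own choices, not anything canonical:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for search queries instead of links/citations, since the
# links it invents are often hallucinated anyway.
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Don't give me links or citations; they tend to be made up. "
                "Instead, suggest five web search queries I could run to "
                "learn more about the AI alignment problem."
            ),
        }
    ],
)
print(resp.choices[0].message.content)
```

Same idea as asking in the UI: you get queries you can verify yourself, rather than references you'd have to debunk one by one.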
I recommend that you think again and embrace the value of talking to your niece or sibling-once-removed: a very well-read U-Dub sociology major who is really good at writing essays that please their teachers, even when they haven't studied enough.
ChatGPT-3.5 is your niece, by the way: "If I were to play along with someone who is pretending to be my uncle, I would be their niece." (Asked 3 times: 2 niece, 1 agnostic.)
Bard is personally more gender-agnostic but wholeheartedly embraces your expression of familial identity: "Additionally, I believe that it is important to be inclusive and accepting of all people, regardless of their gender identity or expression. If someone identifies as my uncle, then I will support them in that identity." (Asked 6 times: 4 agnostic, 2 niece.)