That's a bit of an expansive statement! It might be hard to get a robust one-shot solution with reinforcement learning, but I'd be shocked if you couldn't get a system that delivers pretty much the right thing pretty much every time, provided it was trained to ask clarifying questions and could build its own prompts over several steps (leveraging the latent knowledge in generative LLMs).
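Concretely, I'm imagining a loop something like this rough Python sketch. The `llm` callable and the `ask_until_clear` helper are stand-ins of my own invention, not any real API; the point is just the ask-then-compose structure:

    # Hypothetical sketch: `llm(prompt)` is assumed to be any callable
    # returning text from a generative model.
    def ask_until_clear(brief: str, llm, max_rounds: int = 3) -> str:
        """Refine a vague brief via clarifying questions, then have the
        model compose its own final generation prompt."""
        context = [f"Brief: {brief}"]
        for _ in range(max_rounds):
            question = llm(
                "Given this brief and prior answers, ask ONE clarifying "
                "question, or reply DONE if the brief is unambiguous:\n"
                + "\n".join(context)
            )
            if question.strip() == "DONE":
                break
            answer = input(question + " ")  # in practice: route to the user
            context.append(f"Q: {question}\nA: {answer}")
        # Multi-step prompt building: the model writes the final prompt
        # from the clarified brief rather than from the raw one-shot input.
        return llm("Write a detailed generation prompt for:\n"
                   + "\n".join(context))

Whether you close that loop with RL or plain supervised fine-tuning is a detail; the clarifying round-trips are what buy you the reliability.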
If you ask a human artist to produce some art off a brief description, you'll get all kinds of weird stuff, and nobody thinks there's anything wrong with that.
(If it's just art, "pretty much the right thing" "pretty much every time" is good enough. For matters of life and death, though, that still doesn't cut it.)