I really like the idea of the article, but the implementation and especially the examples are awful.
Making completely unsupported claims like the above is incredibly unhelpful when your point is that people need to scrutinize their (model-based) claims more closely.
The reader cannot distinguish what you've done from having an extremely simple internal mental model ("we trust models too much") from which you're reading off wildly wrong answers about which models are flawed.
Now, hypocrisy doesn't mean an idea is wrong, but in this case it does make it much harder to tell whether the idea is right, because you didn't bother to demonstrate even a single case where one can clearly discern the flaws you claim are there. So the competing hypothesis (that people who make models are appropriately careful, and that policy-makers already understand their limitations) hasn't received any serious challenge.
For the record, I think one can make the case that there are instances where people have forgotten the difference between model and reality, gone all-in on following the model, and caused real harm as a result.
But unfortunately you haven't done that here; instead you've relied on bare assertion. Perhaps you would consider a major revision that carefully documents at least one important case (this was predicted, that was done, here's how the model was flawed, there's the harm, and here's how we know in retrospect, because of such-and-so).
(I also think that, across your eleven points, you devote far too much attention to sociology and far too little to model analysis, but that is a far smaller issue than the failure to document the phenomenon you are talking about.)