No, I'm taking you up on it, but I'm not doing the parts that I view as busywork unless you give a better justification for why all of it is necessary.
For instance, if I'm evaluating finite element models of bridge structural integrity during earthquakes, you would have me investigate whether the homeless people who live under the bridge have studied the model in depth.
And I'm saying: if the correctness of an OpenSEES model for bridge safety depends on critical review by homeless people, you've done something very, very wrong.
All we need to do is note that, in fact, people are at least sometimes under the bridge, so our model had better include any non-load-bearing elements that can still cause injury, and it had better predict falling debris, not just complete structural failure.
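To make that concrete, here is a toy sketch of the kind of coverage check I mean. Everything in it is hypothetical (the `Element` fields, the `overhead` flag, the element names); the point is only that "people are sometimes under the bridge" translates into a simple requirement: every overhead element, load-bearing or not, is a potential debris source and must appear in the model.

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    load_bearing: bool
    overhead: bool  # positioned above space people can occupy


def injury_hazards(elements):
    """Elements that can hurt someone below even if the bridge stands.

    Any overhead element is a potential falling-debris source,
    regardless of whether it carries load.
    """
    return [e.name for e in elements if e.overhead]


def unmodeled_hazards(modeled: set, elements):
    """Injury hazards the model omits entirely."""
    return [n for n in injury_hazards(elements) if n not in modeled]


# Hypothetical inventory of one bridge span.
span = [
    Element("girder", load_bearing=True, overhead=True),
    Element("light fixture", load_bearing=False, overhead=True),  # can still fall
    Element("pier footing", load_bearing=True, overhead=False),
]

# A model that only captures load-bearing members misses the fixture.
print(unmodeled_hazards({"girder"}, span))  # → ['light fixture']
```

That one check does the work; no interviews under the bridge required.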
So, again, I'm happy to go through the questions that assess whether the model is likely to reflect part of reality, and whether it's the right part of reality given the situation. But if you want to go through more than the first five, plus a general evaluation of bias, you need to argue compellingly for why the others are necessary.