I think you and Dr. Marvizon have somewhat different perspectives in part because of the nature of your respective disciplines.
Science contains both discovery and discrimination phases: making observations of new phenomena (often with new technology), and trying to explain patterns one sees or reveals in one's observations (new or old).
The more of a handle you have on how the fundamental components act and interact, the easier it is to blend the two productively into something that may, in some foundational sense, depend on deductive reasoning (one is using mathematics, after all); but that is not the hard part, and so it isn't where your attention should go. If you're finding tissue-specific coregulatory pathways by running graph diffusion models over data with imputed missing values, you may be making fantastic and fairly reliable discoveries, but thinking about any of it in terms of falsification is just a distraction.
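(To make that kind of analysis concrete, here is a minimal, entirely hypothetical sketch of what I mean by graph diffusion over imputed data. The toy adjacency matrix, node scores, and restart parameter alpha are all invented for illustration; this is not anyone's actual pipeline.)

```python
import numpy as np

# Toy gene-interaction graph with four nodes and one missing expression score.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = np.array([1.0, np.nan, 0.2, 0.0])

# Naive mean-imputation of the missing value.
scores = np.where(np.isnan(scores), np.nanmean(scores), scores)

# Random-walk-with-restart style diffusion: spread each node's score to its
# neighbours while keeping a fraction (1 - alpha) anchored at the originals.
W = A / A.sum(axis=0, keepdims=True)   # column-normalised transition matrix
alpha, f = 0.5, scores.copy()
for _ in range(100):
    f = alpha * W @ f + (1 - alpha) * scores

print(np.round(f, 3))  # diffused scores; high values suggest co-involvement
```

The point is that nothing in this loop is framed as a test of a hypothesis; it is a machine for surfacing candidate patterns, and the epistemic work happens elsewhere.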
However, when you don't have such a good reductive handle on the system and you still want to say more than nothing, a falsificationist framework is a lot more productive: I have an idea, I think it's explanatory, so let's be rigorous about what the explanation actually claims, as far as it goes, and then measure the extent to which it works.
So I can see very easily how, in your world, a Popperian framework would suit without much modification, whereas in Dr. Marvizon's case that aspect of the logic has been abstracted away several layers down, and one thinks more about statistical support for inferred function.
(Aside: falsification is a special case of model comparison in which one model does an outrageously worse job of explaining the observations than the other. One needn't go to that limiting case to make progress; you can update log-odds ratios or error estimates instead, make your answer conditional on the type of phenomenon, and do just fine.)
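(A small sketch of that aside, with everything invented for illustration: the observations, the two Gaussian models, and the flat prior odds are all assumptions, not anyone's real analysis. It just shows how the log-odds bookkeeping works, and how "falsification" appears as the extreme case of a very large negative log Bayes factor.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=50)   # hypothetical observations

# Model A: no effect (mean 0).  Model B: effect present (mean 0.3).
loglik_a = stats.norm(loc=0.0, scale=1.0).logpdf(data).sum()
loglik_b = stats.norm(loc=0.3, scale=1.0).logpdf(data).sum()

prior_log_odds = 0.0                     # start indifferent between A and B
log_bayes_factor = loglik_b - loglik_a   # evidence contributed by the data
posterior_log_odds = prior_log_odds + log_bayes_factor

print(f"log Bayes factor (B vs A): {log_bayes_factor:.2f}")
print(f"posterior log odds:        {posterior_log_odds:.2f}")
# A huge negative value would be the "outrageously worse" case, i.e. practical
# falsification of B; moderate values are still perfectly usable evidence.
```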