Yes, we absolutely can and should blame them for misusing frequentist statistics!
When you engage in p-hacking, you're violating one of two things: the premises of your statistical model, or the expectation that the actual number of experiments run was reported. On the first point: was the null hypothesis explicitly "run repeated tests until a p-value threshold is exceeded or you give up"? If not, the test is being run against the wrong null hypothesis, which is not okay whether you're a frequentist or a Bayesian--the corresponding Bayesian error is assuming "naive Bayes" is the same as the actual model.
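To make the "wrong null hypothesis" point concrete, here's a minimal simulation sketch (the batch size, sample cap, and seed are all arbitrary assumptions of mine): generate data where the null is genuinely true, run a t-test after every batch, and stop the moment p < 0.05.

```python
# Hypothetical sketch: optional stopping under a true null.
# All settings (batch of 10, cap of 500, seed) are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping_trial(max_n=500, batch=10, alpha=0.05):
    """Draw from N(0, 1) -- so the null 'mean = 0' is TRUE -- but run a
    one-sample t-test after every batch and stop at the first p < alpha."""
    data = []
    while len(data) < max_n:
        data.extend(rng.normal(0.0, 1.0, batch))
        _, p = stats.ttest_1samp(data, popmean=0.0)
        if p < alpha:
            return True  # "significant" purely because we peeked and stopped
    return False

n_sims = 2000
hits = sum(optional_stopping_trial() for _ in range(n_sims))
print(f"False-positive rate with optional stopping: {hits / n_sims:.1%}")
# Comes out well above the nominal 5% (roughly 30% in my runs): the p-value's
# null model assumed one fixed-n test, not "test 50 times and stop when happy".
```

The point isn't that the t-test is broken; it's that the reported p-value answers a question about an experiment nobody actually ran.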
The main way that switching to Bayesian statistics would help is that it gives us another chance to get people to pay attention to how to use statistics properly. For instance, Bayesian methods are also vulnerable to stop-when-it-looks-good-enough abuses: maybe there wouldn't be a p < 0.05 goal people were trying to reach, but it's not hard to set an arbitrary log-odds-ratio threshold either. I fully support having access to a bigger statistical toolbox to better grapple with interpreting data in the face of stochasticity, but it's nowhere near as much of a panacea as simply not misusing methods you don't understand well enough.
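To illustrate the Bayesian version of the abuse, a sketch under assumed toy hypotheses (fair coin for H0, 70% heads for H1, a Bayes-factor threshold of 3; every number is mine): generate data under H0 and stop as soon as the running Bayes factor "looks good enough."

```python
# Hypothetical sketch: "stop when the Bayes factor looks good enough".
# Toy simple-vs-simple hypotheses; every number here is an assumption.
import numpy as np

rng = np.random.default_rng(1)

def bf_stopping_trial(max_flips=1000, threshold=3.0):
    """H0: fair coin, H1: 70% heads. Flip a genuinely FAIR coin, update the
    running Bayes factor BF10, and stop as soon as it clears the threshold."""
    log_bf = 0.0  # log BF10; starts at even odds
    log_threshold = np.log(threshold)
    for _ in range(max_flips):
        heads = rng.random() < 0.5  # data generated under H0
        # per-flip likelihood ratio P(x | H1) / P(x | H0)
        log_bf += np.log(0.7 / 0.5) if heads else np.log(0.3 / 0.5)
        if log_bf > log_threshold:
            return True  # report "3:1 evidence for H1" and quietly stop
    return False

n_sims = 2000
hits = sum(bf_stopping_trial() for _ in range(n_sims))
print(f"Rate of reaching BF10 > 3 under the null: {hits / n_sims:.1%}")
```

To the Bayesian's partial credit, the martingale property of likelihood ratios caps that rate at 1/3 no matter how you stop--but a reader who takes "3:1 evidence for H1" at face value, with the stopping rule undisclosed, is still being misled.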
As scientists, we have to understand how our tools work. If we don't, we'll draw unwarranted conclusions, whether the tool is PCR, single-cell RNAseq, infrared spectroscopy, K-Ar dating, or statistics. "Oh, everyone's misusing this tool" or "it's accepted best practice" is a flimsy excuse.