Rex Kerr
4 min read · Aug 10, 2023

This is a great start and I wholly second basically all of your points in the article. (One tiny quibble with the other part: being able to explain things simply without losing too much essential detail is an art, and it is good to develop our skill in this art! But I fully agree that some requests are simply not reasonable.)

However, there is one really big problem with the approach above, and you usually can't solve it without a lot of comfort with data, which in turn usually requires a good working relationship with math. The problem is that experts are sometimes wrong, and they tell the convincing stories they've told themselves to other experts who, all too often, believe them. The belief can be intellectually fashionable, yet no more objectively correct than the claim that scrunchies are the best hair accessory.

The consequence is that someone needs to check. The best person to check one expert is another expert, but only if the second expert has access to some extra objective, impartial information. This is where replication in the sciences works its miracle: the experts each try to ask reality, "Hey, is it like this?" And reality often answers, "Nope!" If the first expert (experts are vulnerable to confirmation bias too) hears, "Yes, you were right all along, and see, you've learned even more, because you can tweak your ideas to fit this data too!", then the second expert, listening not to the first expert but to the data, is more likely to hear, "Wow, no, that was wrong."

Of course, you can't be an expert in everything. But, at least with open-access journals, you can check whether the experts are basing their beliefs on beliefs which are in turn based on beliefs, etc., or whether those beliefs rather quickly collide with evidence, and lots of it: enough to deeply challenge the ideas, not a little window dressing. (Window dressing looks like deciding, "Men are the worst and I'm gonna prove it," writing a survey that asks, "Men have been responsible for almost all wars throughout history. Are men the worst? [y/n]", and then, when over 50% answer yes, concluding, "Yes, men ARE the worst! It's objectively proven!")

This is where a high degree of comfort with mathematics (statistics especially) as well as experimental design is really helpful.

In some cases, it's obvious that the experts are probably deeply engaged with data, meaning that their ideas (especially the bad ones) are facing stern challenges all the time. That's encouraging. You don't really need to be able to evaluate all the details of absorption spectroscopy to tell that exoplanet astronomers making claims about water in those planets' atmospheres are really trying.

In other cases, it's obvious that the experts aren't really engaged with data at all. Freudian psychoanalysis and Aristotelian mechanics are good examples of this.

In yet other cases (and alas there are a lot of these) you have to dig a bit to tell. Are they saying "this is bimodal" when the data shows a spectrum? Are they saying "x causes y" when the data only shows that x and y are correlated (and weakly at that)? Are they saying "we know p doesn't ever cause q" when the only tests for p causing q were under extremely limited conditions? Is everyone making grandiose statements about universal truths gleaned from 47 undergrads at one university (down from 60, because 13 dropped out of the study partway through)? Are the results plausibly the product of p-hacking? Have they been reproduced? Is there independent confirmation using other methods? Did they average together data that should have been kept separate, or did they keep separate data that should have been averaged together and then conclude "there is no evidence of an effect in case one (p > 0.05) or case two (p > 0.05) or ..."?
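To make that last pitfall concrete, here is a minimal sketch (in Python, assuming NumPy and SciPy are available; the effect size, sample sizes, and two-site setup are all illustrative choices, not anything from the article above) of how slicing data that belongs together can manufacture a string of "no evidence of an effect (p > 0.05)" conclusions even when the effect is perfectly real, simply because each slice is too small to detect it:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_effect = 0.5   # a real but modest effect, in standard-deviation units (assumed)
n_per_site = 25     # sample size per arm, per site (assumed)
n_trials = 2000     # simulated studies

both_sites_ns = 0   # studies where each site separately reports p > 0.05
pooled_sig = 0      # studies where the pooled analysis reports p < 0.05

for _ in range(n_trials):
    # Two sites, each with a control arm and a treatment arm drawn from the same truth.
    ctrl = [rng.normal(0.0, 1.0, n_per_site) for _ in range(2)]
    treat = [rng.normal(true_effect, 1.0, n_per_site) for _ in range(2)]

    # Test each site on its own, then test everything pooled together.
    p_sites = [ttest_ind(t, c).pvalue for t, c in zip(treat, ctrl)]
    p_pooled = ttest_ind(np.concatenate(treat), np.concatenate(ctrl)).pvalue

    both_sites_ns += all(p > 0.05 for p in p_sites)
    pooled_sig += p_pooled < 0.05

print(f"Both sites separately report 'no effect' (p > 0.05): {both_sites_ns / n_trials:.0%}")
print(f"Pooled analysis detects the (real) effect (p < 0.05): {pooled_sig / n_trials:.0%}")
```

The effect is baked into every simulated dataset, yet a sizable fraction of the time each site alone fails to clear p < 0.05 while the pooled analysis succeeds. The "no evidence" conclusion is an artifact of how the data was split, not of the data itself.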

One of the best recent examples of this is physicist Richard Muller's skepticism of the data that climate scientists had claimed showed global warming. I also looked at the state of the art at the time and concluded that he was right to be worried. The methods were too often careless, shrouded in mystery, and/or based on datasets that were unavailable (or available only if you were nice to the right P.I.), normalized in ways difficult to justify from first principles but seemingly justified instead by getting the right answer, and so forth. He got funding to do it again better (at least, better from his perspective), and it turned out that...the climate scientists had been right all along. But his effort wasn't pointless! It was a splendid contribution toward keeping everyone honest, and toward adopting better practices for keeping everyone honest: open data, open code, clearly explained rationales for analytical methods, and so on.

If you dig into things deeply and you have a skeptical, methodological, mathematical kind of toolkit, you can sometimes detect that experts are likely wrong, and, a lot more often than that, detect that experts' expertise is not a good guide to truth, because the standard in the field is to use unreliable methods to obtain knowledge.

Experts are only human. They too want shortcuts, get attached to pet ideas, ignore contradictory evidence, decide that if another field isn't trivially comprehensible to them in two minutes with an explain-like-I'm-five summary then it's irrelevant, know that Dr. Smith has the best reputation so of course those results have to be right, and so on. Those of us who like to delve deeply can help function as an early-warning system to keep them honest.
