Rex Kerr
5 min read · Feb 22, 2023

Awesome! I think you're exactly right on diagnosing the problem--in fact, I have a less-well-written half-finished article saying basically the same thing.

I think you're way off base on the solution, but I don't in the least hold that against you because this is a really hard problem, and maybe I'm the one who is really off base, or, more likely, we both are.

But I'll share some thoughts anyway, because I might never finish that article.

Problems:

(1) Treating social media sites as publishers does not help the social media site itself act like a courtroom. Quite the opposite: a "publisher" gets to decide what appears--unlike now, where sites play at being carriers or some hybrid of the two--so it encourages the site to enforce the perspective it wants to see, decreasing its ability to function as a courtroom. Any adjudication of the truth of something will end up in the actual courts. So I think this is exactly backwards.

(2) Prohibiting the dissemination of demonstrably false claims is difficult to do because any appearance of error or bias is just going to lead to drastic levels of compensation--we see this already!--where people who were already prone to believe falsehoods correctly view the content they see as sculpted by powerful forces. And, not believing in the benevolence of such forces, they then turn to even less reliable ways to gain information, worsening the problem. Either that, or you need China-level intrusiveness into all electronic communications, which barely even manages to work for them. So I think this is also exactly backwards.

(3) The problem with misinformation isn't that high school students believe it; it's that older people with more sway believe it. So while I think media literacy (done well--there are SO many ways to do this badly, e.g. "if you see government propaganda, believe it") and epistemology (at least if you don't let postmodernists at it) would be a fantastic idea, I don't think this helps at all on a timescale that is likely to be relevant.

So, if I'm right, and all this makes it worse or doesn't help (Fairness Doctrine might actually help a little, but network/cable news isn't super important any longer), what do we do instead?

I'm still thinking through the details, but I at least have a pretty strong hunch that a plausible solution will involve two factors.

First: in order to ward off trolls, most sites have a blocking feature. Furthermore, in order to increase engagement, most sites show people things they like (even outrage is better if it's your side expressing outrage about something "they" did). This is actively antagonistic to either a marketplace or a courtroom of ideas. It's as if Ford not only got to compare its cars to its competitors' however it pleased, but, if you're in Ford Country, most people never even hear that the other cars exist. It dials the information asymmetry up to twelve, leaving many people in the infamous echo chamber.

So instead, I think we need to do the opposite. Instead of a right to not hear anyone you don't want to hear, we need a right to reply. This can't be completely ungated, because blocks and bans are there for good reason, and we have ample examples of repeated pointless hostility being the only thing generated by replies. But a site that doesn't have some sort of right to reply should be treated as a publisher, meaning that it is legally responsible for the content. Rather than forcing e.g. Facebook to be a publisher, it should be a threat: be a good channel for discourse, or you're penalized with publisher status.
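Just to make the shape of that concrete, here's a rough sketch (in Python; every name and threshold is invented for illustration, not a proposal for how any real site works) of what one possible gating rule might look like--a block limits notifications and volume rather than erasing the reply:

```python
# Hypothetical sketch of a gated right-to-reply policy.
# All field names and thresholds are made up for illustration.

from dataclasses import dataclass

@dataclass
class Reply:
    author_id: str
    target_post_id: str
    recent_replies_to_target: int  # replies aimed at this target in the last week
    author_flag_rate: float        # fraction of the author's past replies flagged as abusive

@dataclass
class ReplyDecision:
    visible: bool
    demoted: bool  # shown to third parties, but no notification and ranked low
    reason: str

def gate_reply(reply: Reply, blocked_by_target: bool) -> ReplyDecision:
    """One possible gate: a block limits visibility and volume,
    but does not erase the reply from the public record."""
    if reply.author_flag_rate > 0.5:
        # Persistent abusers lose the right to reply entirely.
        return ReplyDecision(False, False, "abuse history")
    if reply.recent_replies_to_target >= 3:
        # Rate-limit pile-ons: a right to reply, not a right to flood.
        return ReplyDecision(False, False, "rate limit")
    if blocked_by_target:
        # The block still protects the target's feed and notifications,
        # but third parties can see that a reply exists.
        return ReplyDecision(True, True, "blocked: visible to others only")
    return ReplyDecision(True, False, "ok")
```

The point isn't these particular numbers; it's that "right to reply" and "protection from harassment" don't have to be all-or-nothing, which is what makes me think some gating mechanism might exist even though I can't name a convincing one.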

The reason that this is only a hunch is that it is not entirely clear to me that an appropriate gating mechanism can actually exist, and I don't have a highly plausible one in mind. If you take an adversarial mindset and explore how this can be abused, there are a lot of abuses. But it's hard to see how anything else could produce anything but echo chambers. So this is still my hunch.

Second: people LOVE reactions. Getting lots of likes feels good! (Also: it's super-easy for a social media company to use this as data for what else to show to people.)

But if that's all there is, your social media reputation has an overall depth of, like, a day or two. You can coast from windfall to windfall of likes by sharing completely bogus stuff--as long as it's not so bogus that people instantly know--and it never comes back to haunt you. This, I think, is the primary psychological mechanism behind lies spreading faster than truth on Twitter and the like. Lies are more likeable (no inconvenient reality to diminish the reaction), and you never face any consequences.

What is missing is the stigma of being wrong, and the reward of digging deeper.

And so my hunch for the second piece is that rather than gamifying the instincts that are worst for a marketplace (or courtroom) of ideas, we gamify the ones that are best. For example, Stack Overflow had two badges of particular note in this regard: Necromancer, awarded to highly-ranked answers to long-abandoned questions, and Reversal (now retired, alas), awarded to spectacular answers to questions that were viewed as terrible.

This obviously wouldn't work as-is for something like Twitter, but the idea would be to have additional ways to react, possibly with content, that engage the same "they love me" emotional machinery but reward evidence, fact-checking, and correctness. Something like this already exists in the culture of Wikipedia (sloppy editors who add rubbish are poorly regarded), so it doesn't seem impossible. If your old tweets got decorated with reactions like "crystal ball" (you saw this before other people found it obvious), "sage" (deep, careful analysis), "facepalm" (this is so wrong--with a link documenting just how wrong), "fencer" (good counterargument), etc., you'd get the same emotional boost as from likes (and dislikes), but in a way that promoted content useful for a marketplace of ideas.

And then--probably the key part--the reactions themselves would also be rated as appropriate or misleading, based on future reactions: if you always heart and share junk, readers who see your stuff get a link to your shameful collection of heavily-facepalmed tweets from before (weighting especially heavily the facepalms that link to tweets with lots of "sage" as counterevidence, and so on).
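To be concrete about this piece too, here's a rough sketch (Python again; the reaction names come from the examples above, and the weighting and squashing are entirely made up) of how a user's track record might be scored, with facepalms counting for more when their linked counterevidence was itself rated "sage":

```python
import math

# Reactions that reward evidence, foresight, and good argument,
# versus the "this turned out to be wrong" reaction.
REWARDED = {"crystal_ball", "sage", "fencer"}
PENALIZED = {"facepalm"}

def facepalm_weight(reaction: dict) -> float:
    """A facepalm counts for more when the counterevidence it links to
    was itself rated 'sage' by later readers."""
    return 1.0 + 0.5 * min(reaction.get("evidence_sage_count", 0), 10)

def reliability(posts: list[dict]) -> float:
    """Rough track-record score for one user: rewarded reactions minus
    weighted facepalms over their whole history, squashed into (0, 1)."""
    raw = 0.0
    for post in posts:
        for reaction in post["reactions"]:
            if reaction["kind"] in REWARDED:
                raw += 1.0
            elif reaction["kind"] in PENALIZED:
                raw -= facepalm_weight(reaction)
    # Logistic squash so no single viral post dominates forever.
    return 1.0 / (1.0 + math.exp(-raw / 25.0))

# A reader who sees a share from someone with a low score could be offered
# a link to that user's most heavily facepalmed old posts.
```

The particular formula is beside the point; what matters is that reactions carry weight that depends on how later readers rated them, so the "depth" of a reputation extends well past a day or two.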

But it's just a hunch for three reasons. First, something as elaborate as this can't possibly be mandated. It would have to not just work, but work in such a rewarding way that people would go to a network specifically in order to get it. That's a very tall order for a feature that at its core is letting people hold each other accountable for spewing nonsense. Second, exactly how to structure it to achieve the desired effect--it would have to work in conjunction with the content-delivery algorithm--is not at all obvious to me. Third, it relies upon complex dynamic interactions between multiple users converging on something truth-like; is there any reason why it couldn't be exploited by bots, trolls, or simply the psychological flaws in humans, to converge on rewarding mass hallucination of nonsense and punishing factual information? But my hunch is that there is some way to get this kind of thing right, so that it takes advantage of rather than suffers from the quirks of human psychology and our tendency to seek approval.

So, anyway: great analysis of the problem. Not convinced about the solutions.
