Rex Kerr
2 min read · Feb 23, 2025

--

I think we're getting close to as much as we can get out of this conversation, but just on this point: the charge of "behaviorism" is, I'm assuming, not a critique of induction in general, but just the claim that "if you look at whole-organism behavior, you're drastically underdetermined".

I agree with that. Behavior is a very important constraint, but it's not enough to determine how the system works. You might find the same solution or you might not; it depends partly on chance and partly on what the search space looks like while you're trying to find a model. It's also important to point out that most of the experimental results that have led scientists to be down on behaviorism came after intentionally selecting extremely impoverished behaviors to study, in order to make scoring easy. Sensible enough in context, but we shouldn't over-interpret the finding that "results of two-alternative forced-choice testing do not, on their own, reveal the entire mechanism of behavioral control in rats". Do we have a model that accurately predicts the behavior of rainbow lorikeets in flocks, up to the statistical variance between the model and real lorikeets? Nope! Not remotely!

My point, however, is that once you start to dig into the behavior of the components that implement the organismal behavior, we can do better. What else could taking into account "what we know from neuroscience, developmental biology, ..." mean? What we know is the result of detailed experimentation and of building models (sometimes mathematical, sometimes only conceptual) to explain our observations.

If you could do experiments on a biological system that would, in principle, tentatively suggest neural mechanisms of "understanding", but you instead do the equivalent experiments on an AI system and get an answer that would count as evidence if it came from the biological system, what do you say about it?

Either this sort of result is a hint that pushes us toward calling this a type of understanding in AI; or the claim is that our prior against behaviorism is so strong that we should instead use the AI result to decide that, if we could do the equivalent experiment in biology, we wouldn't actually be getting at the neural mechanism of understanding.

I think the latter is unfounded, which leaves me with (mostly) the former, plus only a slight caution about the biology should we someday be able to do the equivalent experiment there.
