Does The Brain Think?

I have discussed in previous posts how scientists often use intentional or psychological metaphors to describe the functions of different body parts. For example, autoimmune disease happens because the immune system for some reason “thinks” that body parts are foreigners; perception depends on how higher levels of the nervous system “predict” incoming sense data from lower levels; visual illusions happen when the brain makes a “mistake” about the meaning of sense data from the eyes. And pain is understood as the product of a system that “wants” to protect you from what it “thinks” is a physical threat to the body. This metaphorical thinking, which Dan Dennett calls the “intentional stance,” is ubiquitous in biology, and helps scientists imagine and describe the awesomely complex workings of the body. Of course, metaphors have limits, and using them presents a risk of overreach. Thus, like scientific models, metaphors are always in some sense “wrong” but may be useful.

However, it has been claimed that psychological metaphors in the context of cognitive science are never useful, and represent a fatal flaw called the “mereological fallacy,” which occurs when attributes of a whole (like a person) are ascribed to one of its parts (like a brain). Under this view, it is wrong to say that a brain thinks, because only a person can think, and brains by themselves are just gobs of neural goo. This is true in a literal sense, but don’t scientists already know this? When the world-renowned neuroscientist V.S. Ramachandran says that “pain is an opinion,” is he confused? It doesn’t seem like it - Ramachandran advanced our understanding of phantom limb pain, which is quite a puzzle unless you propose that some unconscious parts of the brain have the wrong “opinions” about the state of the body. But I recently saw arguments on social media that Ramachandran is committing fundamental philosophical errors that undermine his theories, as well as popular models of pain. I disagree, and here’s why.

Some googling reveals that the term “mereological fallacy” does not generate many hits, and most relate more to philosophy than science. Further, nearly all references trace back to a single source of authority - arguments made by Maxwell Bennett and P.M.S. Hacker, who take the radical view that the entire field of cognitive science is plagued by misunderstandings about the difference between persons and parts of persons. Their arguments are highly technical, and perhaps have some merit in particular contexts, but they do not seem to be widely accepted, and have been severely criticized by John Searle and Dan Dennett, two absolute giants in the philosophy of mind and cognitive science.

Below is an extended set of quotes from Dennett, explaining why there is nothing inherently wrong with making statements like “the brain thinks.” In fact, the “poetic license” afforded by this language may be “precisely the enabling move that lets us see how on earth to get whole wonderful persons out of brute mechanical parts.” Here’s Dennett:

“The use of psychological predicates in the theorizing of cognitive scientists is indeed a particular patois of English, quite unlike the way of speaking of Oxford philosophy dons…

When I began to spend my time talking with researchers in computer science and cognitive neuroscience, what struck me was that they unselfconsciously, without any nudges or raised eyebrows, spoke of computers (and programs and subroutines and brain parts and so forth) wanting and thinking and concluding and deciding and so forth.

….

It is an empirical fact, and a surprising one, that our brains - more particularly, parts of our brains - engage in processes that are strikingly like guessing, deciding, believing, jumping to conclusions, etc. And it is enough like these personal level behaviors to warrant stretching ordinary usage to cover it. If you don't study the excellent scientific work that this adoption of the intentional stance has accomplished, you'll think it's just crazy to talk this way. It isn't. … it pays off handsomely, generating hypotheses to test, articulating theories, analyzing distressingly complex phenomena into their more comprehensible parts, and so forth.

It is not just neuroscientists; it is computer scientists (and not just in AI), cognitive ethologists, cell biologists, evolutionary theorists all … teaching their students to think and talk this way … If you asked the average electrical engineer to explain how half the electronic gadgets in your house worked, you'd get an answer bristling with intentional terms that commit the mereological fallacy - if it is a fallacy.

It is not a fallacy. We don't attribute fully fledged belief (or decision or desire - or pain, heaven knows) to the brain parts - that would be a fallacy. No, we attribute an attenuated sort of belief and desire to these parts, belief and desire stripped of many of their everyday connotations (about responsibility and comprehension, for instance).

… For years I have defended such uses of the intentional stance in characterizing complex systems ranging from chess-playing computers to thermostats and in characterizing the brain's subsystems at many levels.

The idea is that, when we engineer a complex system (or reverse engineer a biological system like a person or a person's brain), we can make progress by breaking down the whole wonderful person into subpersons of sorts - agentlike systems that have part of the prowess of a person, and then these homunculi can be broken down further into still simpler, less personlike agents, and so forth - a finite, not infinite, regress that bottoms out when we reach agents so stupid that they can be replaced by a machine.

Far from it being a mistake to attribute hemi, semi, demi, proto, quasi, pseudo intentionality to the mereological parts of persons, it is precisely the enabling move that lets us see how on earth to get whole wonderful persons out of brute mechanical parts. That is a devilishly hard thing to imagine, and the poetic license granted by the intentional stance eases the task substantially.

When [Francis] Crick asserts that "what you see is not what is really there; it is what your brain believes is there," …[this] is intended by Crick to be understood at the sub-personal level. The interpretation in question is not of (personal level) experience but of, say, data from the ventral stream, and the process of interpretation is of course supposed to be a subpersonal process. …

There are also plenty of times when theorists' enthusiasm for their intentional interpretations of their models misleads them. For instance, in the imagery debate, there have been missteps of overinterpretation - by Stephen Kosslyn, for instance - that need correction. It is not that map talk or image talk is utterly forlorn in neuroscience, but that it has to be very carefully introduced, and it sometimes isn't. ...

In conclusion, what I am telling my colleagues in the neurosciences is that there is no case to answer here. The authors claim that just about everybody in cognitive neuroscience is committing a rather simple conceptual howler. I say dismiss all the charges until the authors come through with some details worth considering.”

Here is a link to the full Dennett paper.

Here is a link to a previous post on Dennett’s intentional stance.

Here is a link to a post on the idea that all models are wrong but some are useful.

Here is a post on the hierarchy of different systems in the body (e.g. organelles, cells, organs, brains, people), all of which have some degree of agency and decision-making ability.

Todd Hargrove