Talk:Homunculus argument

Untitled commentary
Okay, so if a person reflects in a cognitive way upon his situation -- i.e., what his senses and reasoning powers tell him -- then it's turtles all the way down, is it?

Maybe so. The homunculus fallacy, it seems to me, suffers under the weight of what we have learned in the past century or so regarding the workings of the universe.

Sure, the mundane world presents itself to our senses in ways that LOOK like everything is causal. Time, after all, flows in a single direction, and events cascade, one after the other. Right? Or not.

In the classical descriptions of existence, a homunculus fallacy makes sense. But what we now suspect and in some cases know is this:

The universe appears to be underlaid with a quantum foam, where probability reigns. Furthermore, energy, matter and gravity are manifestations that connect with one another in space-time in wholly relativistic ways -- i.e., they're fungible. And now come cosmologists to suggest that the entire universe may be the equivalent of a fantastic, projected hologram -- a virtual reality, if you will.

Thus (and these words come as fast as he can write them from a layperson who has taken but a few college physics courses), the mind that leaps from the vastly complex construct of the brain could be holistic, relativistic and quantum-rigged in its own fashion. Do we create existence by our own bootstraps? The evidence points toward that. Is the (or perhaps "an") observer necessary for existence to manifest? Quite possibly. If there are multiple homunculi, there also may be multiple universes, allowing us to almost literally have our cake and eat it, too.

If string theory is right, the entire cosmos is effectively composed of a series of tiny, vibrating, looped strings -- rings, I suppose. You might picture that as a universal reptile which is eating its own tail and yet in the act finds sustenance. Reality is synergistic. The act of observing is only suggestive of a homunculus if you think in classical terms, e.g., one on one, and only in an orderly, seemingly causal progression of discrete events.

But if there are more than four dimensions, if there are very occasionally quantum effects that (I stress the word) appear to represent paradoxes or, worse, to violate physical laws through sheer statistical chance, if a wave collapses into a particle by the mere act of observation, if space and time are malleable and the latter is traversable in more ways than we can at the moment engineer (as for instance in the Aspect experiment, where entangled particles moving apart at light speed nevertheless share information), then mind may indeed be able to reflect upon itself, may be both external and internal as circumstance warrants, without the necessity for troublesome regression.

Having read this article a few times, I, a (I hope) reasonably bright woman with a high-school education (about to enter college), cannot make heads or tails of it. Granted, I've not studied logic overmuch, but I've been reading the logical fallacies section and seem to be able to understand most of them decently... perhaps some easier-to-understand examples could be added? Kuroune 03:17, 10 February 2006 (UTC)

Discussion pages are for determining how to improve topic pages. They are not for My First Philosophy Essays.

Argument or Fallacy
Should this article be moved to "Homunculus argument" and "Homunculus fallacy" made a redirect? The article uses "argument" almost exclusively. Ibn Abihi 00:49, 26 February 2006 (UTC)
 * Never mind I did it myself. If you disagree with this, feel free to change it back. Ibn Abihi 01:09, 26 February 2006 (UTC)

What's wrong with an infinite regress?
For some reason, everyone assumes that just because something goes back infinitely, that means it can't exist.

But if each following step takes half as much time and brain space (and whatever other resources are necessary) as the previous one, it's totally possible to finish an infinite regress with finite resources! See also the calculus solution to Zeno's paradox.


 * The problem's that unlike Zeno's paradox, which resolves itself all the way down to a "smallest conceivable distance" such as a Planck length, and which has a definite end in terms of physical possibility, infinite regress in the context of ever-smaller homunculi has no foreseeable end, since "smaller" homunculi are only conceptual. This is a true infinite regress, because you can make a theoretical mind as "small" as you want to. —Preceding unsigned comment added by 119.95.157.254 (talk) 09:35, 5 July 2008 (UTC)


 * The problem is Supertasks. But in reaction to the other users: never mind whether the fallacy thesis is true or not, it is subjective and still far too controversial to take as given. Today's philosophy cannot provide any hint to the concept of consciousness. --Chlebashořčicí (talk) 08:12, 24 April 2016 (UTC)



From top to bottom, a mixed up, overheated, condescending article
The homunculus bogeyman is not a theory. It's a primitive concept regarding perception and thought. The cart is put well before the horse by falsifying it with infinite regress (which itself is no proof of fallibility -- can you disprove infinity?). The homunculus-type concept emerged as optics became better understood in the 16th and 17th centuries. Lots of experience is directly perceived, without extensive forethought or review. At the limit, direct perception can be modelled as J.J. Gibson does, with sensing organisms strung out on ambient vectors of reflected waves/particles. This is perception as in the Shannon information model, dealing with intuitive probabilities and opportunities. But we think -- sometimes hearing music or seeing mental pictures, most often in words. Sitting bundled next to a warm stove, Descartes dissociated thinking from perception. He ended up believing that mental activity -- words and graphic imaginings -- had nothing to do with what he saw, felt, or heard.

It's easy to criticize Descartes' model. It's also easy to defend it from infinite regress. We perceive surfaces, substances, events as we move among them. It's clear that beneath the chairs is floor, and we can envision foundations, rock, all the way to China (or vice versa). But that's a waste of time. Humans are evolutionary creatures, and the brain's metabolic energy gets conserved.

Likewise, the primitive notion of a visualizer in the brain, monitoring sensations, is an attempt to explain how we can think in words while continuously perceiving. Making sense and thinking occur in time. Neuroscience shows we surf mental fragments, which vanish with their network conditions, which aren't invariant states. The inner voice that talks to oneself isn't a little man, but what it is isn't well understood. Homunculus perception is a blunt idea, but captures the inherent dualism between inner voice and continuous engagement in the world.

Put a different way, the homunculus is similar to the soul: a single entity that exists apart from the sense systems. We now detect unconscious processing in many brain modalities, so that there are many unconsciouses. There are brain regions for vision and areas for thinking in lexicons, areas that spread emotion and areas that control it, and so forth. It's no longer sensible to describe thought and feeling as a single unit, except that it depends on the level of analysis.

A lot of psychological research tries to overcome Cartesian duality by demonstrating how perception and thought interact and impinge. Lots of ink has been spilled promoting mind-body unification. But the homunculus concept, taken as a separate inner observer-thinker, is not a logical fallacy. Consider a set of neurons that interpret vision. They represent their interpretation by spreading it across other cortical areas. Parts of mind monitor other parts, in a semi-closed system. Generalize this system as a unit, and it starts to resemble a being -- us. Brian Coyle

Yes, you can disprove infinity. An infinite number of homunculi cannot exist in a finite volume.

Homunculus argument should not be confused with Homuncular Decomposition
Homuncular Decomposition is the notion that we can explain behavior through a simpler homunculus, which itself can be explained by even simpler homunculi, until we get down to a machine or neural level. Such procedures are essential to progress in the cognitive sciences and AI; the key is to show that each level both explains the behavior of the higher-abstraction homunculus and does not itself invoke a homunculus as complex as those at any higher level of abstraction.
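That decomposition can be sketched in code. This is a toy example of my own; the function names and the edge-detection rule are made up for illustration, and each level is strictly simpler than the one above it:

```python
# Homuncular decomposition in miniature: the top-level "recognizer" is
# explained by a simpler contour-finder, which is explained by a purely
# mechanical edge rule -- no level invokes a full recognizer again.

def detect_edges(pixels):
    # Mechanical bottom level: mark where adjacent brightness jumps.
    return [abs(a - b) > 10 for a, b in zip(pixels, pixels[1:])]

def find_contour(pixels):
    # Middle homunculus: "find a contour" reduces to edge detection.
    return detect_edges(pixels)

def recognize_shape(pixels):
    # Top homunculus: "recognize a shape" reduces to contour finding,
    # not to another shape-recognizer of equal complexity.
    edges = find_contour(pixels)
    return "bar" if any(edges) else "blank"

print(recognize_shape([0, 0, 100, 100, 0]))  # "bar"
print(recognize_shape([0, 0, 0]))            # "blank"
```

The point of the exercise is the direction of travel: complexity strictly decreases at each level, so the regress terminates.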

The Homuncular Fallacy is different
The Homuncular fallacy is not some contentious philosophical issue about metaphysics, reality, truth, etc. It is simply a fallacy, meaning that its use leads to illogical conclusions. The principle of logic observed in recognizing this fallacy is that an explanation of a phenomenon cannot depend on a system which presupposes understanding of the phenomenon itself. For example, if I want to explain sight, I cannot just say there's a little man in my head watching a screen, because then I'm just saying that sight is the result of something else's 'sight'. That's like giving a recipe for cake and listing 'cake' as the primary ingredient. To explain cake, I have to invoke non-cake items (simpler foodstuffs). And I can't then just say "and bake a cake with them," because I have not formalized the baking process in a way which does not presuppose understanding of cake! I have to use objects not of the target kind to explain a target object. The homuncular fallacy has nothing to do with explaining something in terms of other unexplained things. To recognize the fallacy is to realize you can't explain something in terms of itself. That is the logical principle the fallacy violates.
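The cake analogy translates directly into code. A sketch I'm adding for illustration (the function names are hypothetical): a definition that invokes itself never bottoms out, while one built from "non-cake" primitives terminates:

```python
# Circular explanation: "sight is a little man seeing a screen".
# The definition invokes itself, so evaluation never terminates --
# Python surfaces the regress as a RecursionError.
def explain_sight_circular():
    return explain_sight_circular()

# Legitimate explanation: the target described via things that are
# not the target (the "non-cake ingredients" of the analogy).
def explain_cake():
    ingredients = ["flour", "eggs", "sugar"]
    return "combine " + ", ".join(ingredients) + " and apply heat"

try:
    explain_sight_circular()
except RecursionError:
    print("the circular explanation never bottoms out")

print(explain_cake())
```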

Have cleared this up
I have cleared up this page, because, although I understand it, as the first commentator above pointed out, I think it's hard for an outsider to understand. Feel free to play around with what I have written as you see fit. 130.209.6.40 17:33, 24 November 2006 (UTC)

Why is the only relevant question "Who?"
The argument against cognitivist theories is that the supposedly damning question is always "Who?" -- "Who is it who is looking at this 'internal' movie?", "Who uses these rules?", etc. Why are we looking for something that possesses some kind of identity? Why is the question "What?" never asked? Surely there are many functions already acknowledged in the brain that occur irrespective of any trace of identity; why is it a requirement of cognitive theory that the filtering process be some sort of entity? This seems to be a restriction on how the filtering process could take place that is imposed by the critic, not by the theory itself. It smacks of denigration by association, i.e. using the scornful term "homunculus" seems to imply "See? These people think there's a little tiny man inside you interpreting everything. How silly!"

Kakashi64 (talk) 21:43, 27 November 2007 (UTC)

The homunculus fallacy is clearly a fallacy regardless of whether one uses the word 'who' OR the word 'what'. —Preceding unsigned comment added by 86.0.205.114 (talk) 14:58, 6 January 2008 (UTC)


 * Computers do video recognition, like human face recognition.
 * Using this recognition, computers take actions, like warning that a criminal is present.
 * This is how the brain works: it does video recognition and acts accordingly to serve human needs.
 * The homunculus argument is like that old cosmology story with the universe sitting on the back of a turtle :)). Raffethefirst (talk) 10:17, 14 March 2008 (UTC)

Perhaps a little overboard?
I feel this article may present an overly strong (apparent) assertion that all homunculus arguments are false. That is true of them taken on their own, due to the infinite regress, but in the brain many such arguments are physically accurate up to a few levels of recursion.

For example, the retina really does project onto the V1 visual cortex as a coherent image, which is then filtered by the rest of the brain as it is passed up through the cortex. At each stage local parts of the image remain coherent (although the image does break up on the large scale). This leads to the often-pointed-out fact (sadly, this has been done) that if two toothpicks are inserted next to each other in the visual cortex, the resulting areas of lost visual perception are spatially next to each other. Of course the image is eventually interpreted into more abstract things like "tree", "house", "turtle", but, for the first few levels, each section of the cortex acts as a homunculus interpreting the lower levels.

Many other parts of the brain, particularly input and output segments are organized in this manner, although admittedly the vision system is one of the best examples.
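The local-coherence point can be sketched in a few lines. This is a toy 1-D model I'm adding for illustration only (nothing here is from the article or from actual cortical wiring): each stage pools only a small neighborhood, so spatially adjacent outputs come from spatially adjacent inputs, and local structure survives stage after stage:

```python
# Toy "retinotopic" processing stage: every output unit averages only
# its immediate neighborhood, so the stage preserves local coherence
# even though it transforms the signal.

def cortical_stage(signal):
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - 1): i + 2]  # unit pools its neighbours
        out.append(sum(window) / len(window))
    return out

retina = [0, 0, 9, 9, 0, 0]   # a bright patch in the middle of the "retina"
v1 = cortical_stage(retina)   # blurred, but the patch is still central
v2 = cortical_stage(v1)       # and still central one stage later
print(v1)
print(v2)
```

Two toothpicks inserted at adjacent positions in `v1` would knock out adjacent regions of the input, which is the anatomical observation mentioned above.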

Anyway, the point is that I feel this article could be very misleading to anyone not already well versed in cognitive neuroscience.

Unfortunately, as I haven't been well trained in philosophy, I don't feel qualified to rewrite this myself. I might accidentally make it look like Cartesian ideas have value, which would be a real step backward. --Anuran (talk) 14:31, 18 May 2011 (UTC)


 * I agree, and think that the article is just confusing, in general. For example: "In-completeness in this model therefrom that modern science already emphasized:" — that's not valid English grammar, and "incompleteness" is not a hyphenated word. - LesPaul75 talk 20:16, 8 September 2011 (UTC)

This talk page is overboard. Most people are discussing the topic instead of the page. Unacceptable. — Preceding unsigned comment added by 184.179.86.135 (talk) 14:11, 3 July 2018 (UTC)

Written like a school project
I know teachers over the years have assigned students to add to Wikipedia, and I feel this article may have been created this way and then slipped through the revision cracks over the years. If you check the oldest version, I think this is borne out. For instance, in the current revision one paragraph uses "cognate" as a verb, which is simply unnecessary. The first revision was also created in mid-October, which is the perfect time of the sociology semester for more bad writing to suddenly appear on WP.

The article reads like it has never shaken this off, or perhaps has suffered multiple assignment-driven revisions, or perhaps only exists at all as a receptacle for student attempts at coherent writing, but in general I think the whole thing could use a rewrite. Manys (talk) 16:42, 27 September 2019 (UTC)

"In terms of rules"
I think the "in terms of rules" section is potentially original research and at least a little out of date. Maybe I'm just trying to cover up my thought that "It doesn't say what I want it to say".

Empirically, the only thing we see the brain composed of is neurons carrying out rules. (I can probably dig up a lot of primary and secondary sources saying brains are made of neurons, briefly disregarding "support infrastructure" like glial cells, blood vessels, etc.) We've also seen artificial neural nets make large strides in "computer vision" lately.

My own conclusion from the homunculus argument (and I think the way I learned it in neurophysiology) was the opposite of the conclusion presented in this section. To wit: obviously there's no infinite regress of little men. There's no separate vis vitalis (a now-discredited concept), and biology (and thus neurophysiology) must be explainable by mechanistic means alone. That is to say: vision arises as an emergent property of correctly formulated rules, which again seems to be borne out in modern computer vision applications (however imperfect you might still consider those to be).
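To make the "rules alone" claim concrete, here is a toy sketch of my own (weights hand-picked, not trained; nothing here is from the article): a two-layer net of rule-following "neurons" that tells a bright stripe from a blank field, with no viewer anywhere in the loop:

```python
# Each "neuron" is just a rule: weighted sum compared to a threshold.
# A crude visual ability emerges from composing these rules.

def neuron(inputs, weights, threshold):
    """A rule, not a viewer."""
    return sum(i * w for i, w in zip(inputs, weights)) > threshold

def sees_stripe(image):
    # Layer 1: edge-detecting neurons over adjacent pixel pairs.
    edges = [neuron(image[i:i + 2], [-1.0, 1.0], 0.5)
             for i in range(len(image) - 1)]
    # Layer 2: one neuron pooling the edge detectors.
    return neuron([float(e) for e in edges], [1.0] * len(edges), 0.5)

print(sees_stripe([0, 0, 1, 1, 0]))  # stripe present
print(sees_stripe([0, 0, 0, 0, 0]))  # blank field
```

There is no homunculus to point at in this program; the "seeing" is nothing over and above the rules being followed, which is exactly the anti-regress point.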

Then again, biologists and psychologists don't talk with each other very often, so from an encyclopedia perspective, possibly there are papers that go either way in the two different fields. But I would like to see some sources here!

--Kim Bruning (talk) 09:55, 7 June 2023 (UTC)

Variant of above (explaining intuitively):
 * If you take a watch apart (to see how it works), you find a bunch of gears. You gotta explain how it works in terms of gears.
 * Take a car apart and you find cylinders and spark plugs. You gotta explain how it works in terms of cylinders and spark plugs.
 * Take a brain apart and you find neurons (things that process action potentials according to mutable rules). You gotta explain how it works in terms of neurons that process action potentials according to mutable rules.

I've never taken a brain apart and found a homunculus hiding in there. And I'm not going to easily buy into a story that has cars moving on the basis of invisible spirits or vis automobilensis.

Right now this unsourced section seems to be suggesting the exact opposite, so SOMETHING is up. Of course if there's sources to psychology literature I'm not aware of, go ahead and reference them; in which case we might end up with multiple sections with points of view from different sciences.

I'm loath to delete the text outright straight away, because a) there won't be much of an article left, and b) simply because it's been here for a while, and that's usually indicative of ... something (to wit: one form of old-school wp:consensus).

--Kim Bruning (talk) 10:14, 7 June 2023 (UTC)