Talk:Artificial consciousness/Archive 5

I will unprotect this page. For information: tkorrovi, non-sysops cannot protect pages. Anyway, if you guys get in edit wars, I will protect again and start asking for higher actions. DONT REVERT EACH OTHER MORE THAN 3 TIMES. If you have complaints against another's actions, talk to a sysop. ugen64 23:53, Mar 16, 2004 (UTC)

Who is Ataturk?
That is a good summary, and since it would be impossible to get it entirely correct I let lots of small issues pass (being thankful that there is a summary and that its general flavour is faithful to what is summarised), but I am concerned about one particular important point where the view attributed to me in the summary was not what I said. I didn't say (or mean to be understood as saying) that both new proposed versions should be strictly POV. I said that each "side" of the discussion should be free to point out that something in the other side's argument was POV and to make it NPOV. Which is just a restatement of standard Wikipedia behaviour. Ataturk 00:52, 17 Mar 2004 (UTC)

No, I said that. Paul Beardsell 00:57, 17 Mar 2004 (UTC)

Oh, yeah! Ataturk 00:59, 17 Mar 2004 (UTC)

And then you agreed with me. Paul Beardsell 01:17, 17 Mar 2004 (UTC)

So I did. Ataturk 01:19, 17 Mar 2004 (UTC)

But if you were me, that would be underhand. Paul Beardsell 01:23, 17 Mar 2004 (UTC)

Yes, were I you, agreeing with you would be the only part I would regret. Ataturk 01:26, 17 Mar 2004 (UTC)

Are you sure? What about the bit where you said that you too were annoyed? Paul Beardsell 01:28, 17 Mar 2004 (UTC)

Ah! I said I was annoyed in a reply to tkorrovi but he thought I was sympathising with him: I didn't correct him, and I was allowed to improve the article for a while. The same error is made in the summary. Ataturk 01:31, 17 Mar 2004 (UTC)

How similar is my consciousness to yours? Were I to build a machine which has the same characteristics as my brain - artificial neurons with the same latencies, triggering thresholds etc - and I was to scan my brain, take a backup, and load it into the machine, might not that machine be artificially conscious yet more similar to my consciousness than it would be to your consciousness? Paul Beardsell

I see Matt Stan wonders elsewhere if simulated consciousness would be a better-titled article than artificial consciousness. Are you also Matt Stan? Or am I? Paul Beardsell 01:41, 17 Mar 2004 (UTC)

I don't want to argue from a privileged position. I think, on the balance of probabilities, if you look at all the evidence, that I am not Matthew Stannard. I see that some think you are a good friend of his. Is that correct? Ataturk 01:45, 17 Mar 2004 (UTC)

Yes, I always thought so, and we seem to share a sense of humour. I'm sometimes a little more forthright than you so I am prepared to deny being Matthew. And to assert there was no collusion between us in the editing of this article. Paul Beardsell 01:51, 17 Mar 2004 (UTC)

I never said that recognizing oneself in a mirror was an essential test for consciousness. Paul Beardsell 01:56, 17 Mar 2004 (UTC)

That's disconcerting: You talk but I can't see your lips move. Ataturk 01:59, 17 Mar 2004 (UTC)

Look in the mirror! Paul Beardsell 02:09, 17 Mar 2004 (UTC)

Work needed
In all this arguing, the article is now very sloppily written. Things like "it is claimed [by whom?] that the fly has a lot going for it." In an article about a controversial subject--and I think we can agree that there is controversy--we need to say who makes claims. I'll try to give this a copyedit in the next day or two. Vicki Rosenzweig 14:26, 17 Mar 2004 (UTC)

I'm sure that would be very useful. I have said that I think the article currently to be found at Consciousness (artificial) is an improvement over this one. Whether or not others agree with me, at some point one of the two articles will be merged into the other, or will supersede the other, and one of them will be deleted. Comments as to how we ought best to approach this? Paul Beardsell 15:17, 17 Mar 2004 (UTC)


 * They should be merged. Artificial consciousness is the better article title. I just retouched this article, which will probably cause a heated controversy (at least in the mind of a certain user who will remain unnamed). ugen64 21:30, Mar 17, 2004 (UTC)

Both versions now have had useful edits which must not be lost. As I am responsible for creating the new article, I am prepared to take responsibility for merging the two. This might not be acceptable to all concerned, so objections should be raised now. Another issue is that edits by others may lose their attribution. Primarily it was Matt Stan and I who contributed to the other article. Unless Matthew would rather bring his own paras across, I will attempt to bring the paras over verbatim. Then I will attempt to merge, keeping all the recent edits to both versions. I don't think this is a particularly complicated job, so there is probably no need for others to stop doing constructive edits, but I will try to do the work in my evening, i.e. starting 12 hours from now. Paul Beardsell 01:00, 18 Mar 2004 (UTC)

Master editor (originally on Village pump)
''I've moved the following from Village pump to reduce the size of that page; a pointer to here remains. - IMSoP 00:35, 18 Mar 2004 (UTC)''

I am right in thinking there are no "master editors" or owners of articles. It's just that edits are being habitually rolled back at Artificial consciousness and I am not the only one it is happening to. Ataturk 00:14, 14 Mar 2004 (UTC)


 * Some people just do that, because they have their own opinions, they think you're wrong, or they're just trying to cause problems. That's why I protected the page. And you're right, there are no "master editors." ugen64 00:25, Mar 14, 2004 (UTC)


 * Just to clarify, there are Administrators a.k.a. Sysops, who have certain technical abilities that not all contributors have (such as "protecting" pages), but this is not an editorial role, more of a house-keeping one.
 * The ability of anyone to change any article is one of the great strengths of Wikipedia, but, as you have unfortunately discovered, also presents one of its greatest challenges. - IMSoP 01:06, 14 Mar 2004 (UTC)


 * Thanks for freezing the article, ugen64, and for the other info, IMSoP. I am in the privileged position that the version frozen is one I wrote, although I am not happy with it and more work is required. So, whereas I have proposed a way forward on the Talk page that seems to have been embraced by the person who keeps on reverting, that person is NOT editing a new version on a sub-page of the Talk page, as I suggested. Well, that's up to him. But past performance suggests that the moment the page is unfrozen the article is likely to be reverted, either right away or as soon as it is touched by anyone, and it will be reverted to a much earlier version. I suppose the advice is that I regard this as one of Wikipedia's "greatest challenges". I certainly agree that the alternative, that Wikipedia be policed more thoroughly, is likely to have a negative impact overall. Ataturk 01:26, 15 Mar 2004 (UTC)

Something which is perhaps a little unusual has occurred. One editor of Artificial consciousness has refused to work on a new consensus version of the article on a sub-page of the Talk page. He refuses to contribute in any other way. But he says he is unhappy. The new version of the article is patently an improvement and represents even the recalcitrant editor's POV, I think. A new article Consciousness (artificial) has been created as a copy of the proposed one. The recalcitrant editor has said he will not edit that either. I suggest an experiment be allowed. Unprotect Artificial consciousness and allow it and Consciousness (artificial) to co-exist for a while. The latter really is an improvement and must not be lost, but the former is at a disadvantage as it is frozen. I think it will defuse the edit war and the best article will be the one that survives. I know this is a little unusual but I ask that NEITHER article be deleted, for the time being. Ataturk 02:29, 16 Mar 2004 (UTC)

AC Software Program
There's a link (Proposed mechanisms for AC implemented by computer program: absolutely dynamic systems) in the Consciousness (artificial) article to a web page that contains a link to http://tkorrovi.sourceforge.net/di32bin.exe. I downloaded, installed and ran this program on my PC running Windows XP. It loaded without error. I read the associated .rtf document but I'm afraid there is no functional description or user guide for the operation of the program, so it is not clear to me what it can do or what it is supposed to do. Matt Stan 22:41, 16 Mar 2004 (UTC)

Matt, sorry, there is no user guide yet. I have tried to gather information from various forums etc. into my AC forum. You may look at "AC discussion" at the AC forum http://tkorrovi.proboards16.com/index.cgi?board=general&action=display&num=1062781241 where I tried to summarize a discussion at ai-forum with Rob Hoogers, Ribard et al. There is another discussion at the beginning, but at the end is how I explained this program to Rob Hoogers. Different people may find different things hard to understand, so I think there can be a complete user guide only after the program has been explained to many people and a FAQ made from those conversations. tkorrovi

And also read the link page http://www.fortunecity.com/skyscraper/database/2/ carefully; there is some useful information about the program there too. It implements exactly the mechanism that results from the derivation. tkorrovi

Christopher Nolan
The link to Christopher Nolan is to the wrong Christopher Nolan. See. The synopsis of the book Under The Eye of The Clock, taken from amazon, is: ''A book originally published in 1987 about a handicapped child, Joseph Meehan. It tells of Joseph's fight to escape the restrictions and confines of his existence inside a broken body. The book is largely autobiographical.''

Question: what is the wikipedia convention for disambiguating articles about two people who have the same name? Matt Stan 22:41, 16 Mar 2004 (UTC)


 * Whoever wrote the author's short bio at the bottom did it correctly. ugen64 03:14, Mar 19, 2004 (UTC)

Merger
Probably not entirely to everyone's liking. But as they say, edit boldly! Paul Beardsell 16:23, 18 Mar 2004 (UTC)

Average Human?

 * AC must be capable of achieving some of the same abilities as the average human

The example I picked (Christopher Nolan) was of an exceptional human. He suffers severely from cerebral palsy, but even more exceptionally he is a great artist. His genius is actually irrelevant to the argument for AC (and I only used him as an example in deference to other differently abled people). The point is that even the most deficient or, dare I say, disabled people are nevertheless deemed conscious. Therefore there's no need to stipulate that the AC machine needs to aim at any average, even if there were such a thing. I suggest this section be altered to reflect that the lowest level of consciousness ascribed to a human, i.e. anything other than being in a coma, qualifies as consciousness, and that the same should therefore apply in respect of the AC machine. Matt Stan 16:41, 18 Mar 2004 (UTC)

Predictive capability (blinking is UNCONSCIOUS)
I continue to have difficulty with this. Although the introduction to the parent section implies that not all tests need be passed (and what was merely an implication I have recently strengthened), this sub-section still uses the word "must". Should a fully conscious being (not even an artificial one) be placed in a completely alien environment (say, being swept along in a blinding, deafening, tumbling avalanche) then no prediction / anticipation is possible. Nothing can be seen or heard. Not even "up" can be determined. If one blinks then AS USUAL this is UNCONSCIOUS! It seems to me that anticipation is but a luxury of consciousness, not a necessity. Paul Beardsell 06:32, 19 Mar 2004 (UTC)

The ability to "operate coherently in the real world" is also not a defining attribute of consciousness. Perhaps one is most conscious when one is working on a problem, when one is living most in one's own mind and least in the world, when anticipation of events in the world is least likely to be exhibited, when one is not particularly likely to be judged conscious by an outsider. The artificial tennis player might exhibit even super-human anticipatory / predictive abilities yet not be conscious. Paul Beardsell 06:40, 19 Mar 2004 (UTC)


 * Although it would likely be more conscious than a thermostat. Paul Beardsell 09:40, 19 Mar 2004 (UTC)


 * I had difficulty with this too. I agree that blinking is a reflex action, but the manifestation of reflex actions is itself an attribute of consciousness - one stops blinking when one is unconscious. I think the matter is partly resolved by introducing the notion of real-time response. But this is still a response rather than any type of predictive capability - unless you interpret blinking as anticipation that one's eye is about to be damaged. Perhaps predictive capability is covered under coherency. See next comment. Matt Stan 08:55, 19 Mar 2004 (UTC)


 * As for operating coherently, I agree that one could operate incoherently and still be conscious, such as when drunk. This introduces the notion of impaired consciousness. I am suggesting that the thermostat that went wrong and caused its mechanism to overheat is operating incoherently and has thereby become unconscious. Therefore AC does require some level of coherency in order to qualify. Perhaps my phrase should be altered to operate not totally incoherently in the real world or operate with a measurable level of coherency.... Matt Stan 08:55, 19 Mar 2004 (UTC)

Perhaps one does not blink when unconscious because one's eyes are closed? I think that the point needs to be made that consciousness need not be apparent. But the first sentence of the article must then be changed. Paul Beardsell 09:09, 19 Mar 2004 (UTC)


 * I agree that one can be unapparently conscious, but don't want to get into an existentialist discussion on this point. I was aiming to come up with tests. If you refuse to open your eyes or communicate with me at all then you might be conscious or unconscious - I can't tell. If a supposedly artificially conscious machine refuses to make its consciousness apparent then I'm suggesting it wouldn't pass an objective test of AC. Matt Stan 13:43, 19 Mar 2004 (UTC)


 * On the point about a thermostat representing consciousness: I disagree that a thermostat or similar closed-loop control mechanism is indicative of consciousness per se. Whilst such control mechanisms might be an important part of demonstrating consciousness, it is the fact that the machine has this 'personality' (is it an 'ego' or an 'id'?), i.e. the internal watcher that knows about its thermostat, that makes the thing conscious. I have other closed-loop control systems in my metabolism that control my blood sugar levels, etc., of which I am unconscious, as do plants. I am not suggesting that plants are conscious, though I think there are others who do. Matt Stan 13:43, 19 Mar 2004 (UTC)

Transitive or intransitive?
We should insist that when we say something is conscious we say what it is conscious of. We should treat "to be conscious" as a transitive verb. A thermostat is supremely conscious of temperature and nothing else. I am not as conscious of temperature as the thermostat. To say that the thermostat has a belief about temperature, that it is conscious of the temperature, is at worst personification, but I have no problem with that: a thermostat is not a person, and it has a very accurate "belief" about the temperature. We cannot start off with the assumption that we have a privileged position. Appreciation of music is a higher consciousness, if you like, and we may be a long way from artificial consciousness of that type, but each of us has been arguing that there are degrees of consciousness. What is the lower bound? What is the least conscious that consciousness can be? Paul Beardsell 16:15, 19 Mar 2004 (UTC)

As to the question of the "internal watcher", Descartes' homunculus, there is no internal watcher. This is a delusion. Whatever it is, perhaps the echo of myriad feedback loops, the consequence of recursive calls to the subroutines; were there an internal watcher then it itself would need an internal watcher. And so on. The thermostat has one feedback loop: It has no internal watcher either. Paul Beardsell 16:26, 19 Mar 2004 (UTC)
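The thermostat's single feedback loop can be sketched in a few lines of Python. This is a toy illustration only: the function name, the setpoint, and the hysteresis value are all invented for the sketch, not anything proposed in the discussion.

```python
def thermostat_step(current_temp, setpoint, heater_on, hysteresis=0.5):
    """One pass of the thermostat's single feedback loop.

    The only 'belief' the device holds is the comparison between the
    sensed temperature and its setpoint. There is no second loop
    observing the first one -- no internal watcher.
    """
    if current_temp < setpoint - hysteresis:
        return True   # too cold: switch the heater on
    if current_temp > setpoint + hysteresis:
        return False  # too hot: switch the heater off
    return heater_on  # within the dead band: leave things as they are

# The device is "conscious of" exactly one quantity: temperature.
print(thermostat_step(18.0, 21.0, heater_on=False))  # True
print(thermostat_step(24.0, 21.0, heater_on=True))   # False
```

The point of the sketch is structural: the loop reacts to its one sensed quantity, and nothing in the program inspects the loop itself.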


 * I think the consciousness we are talking of, for the purpose of definition, is actually the intransitive version. Sure, the thermostat senses temperature, as humans do, and I read somewhere that humans can detect temperature changes of less than a degree. I can testify to that, working in an environment where other people tweak the air conditioning control without telling me! Consciousness, as I have construed it in this article, is consciousness not of anything in particular, but rather exactly Descartes' notion, which I prefer to translate from the French as I am thinking; therefore I am (since French makes no distinction, unlike English, between the present and present continuous tenses). This is self-awareness. This part that can say I am thinking (even though it may not be, as it has no need to prove to us that it is, as per my contention that it needs no artificial intelligence) implies only one level of indirection. There is no need to be able to say I am s/he that is thinking, therefore I am s/he (respectively for he and she); otherwise Descartes would have said that instead! And you don't have to be able to spout philosophy or even pseudophilosophy in order to claim to prove that you're conscious - otherwise we're back with the psychics again. Matt Stan 21:02, 19 Mar 2004 (UTC)

How would we implement that one level of indirection in computing terms? Could there be one window representing the watcher and one the watched? What would be their API? The "conscious of" parts would be in the watched. The watcher would decide what to take notice of: to decide, when its temperature sonsor told it that it was feeling hot, whether to remain so, as when it's sunbathing, or whether to go and take a cold shower. Matt Stan 21:08, 19 Mar 2004 (UTC)
 * A sonsor, incidentally, is a mis-spelt sensor.
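One way that single level of indirection might be sketched in code, keeping the sunbathing / cold-shower example from the comment above. Everything here is invented for illustration: the class names, the "hot" threshold, and the two policies are assumptions, not an API anyone has proposed.

```python
class Watched:
    """The 'watched' part: raw senses, no decisions."""
    def __init__(self, temperature):
        self.temperature = temperature  # what the temperature sensor reports

    def readings(self):
        return {"temperature": self.temperature}


class Watcher:
    """The 'watcher': one level of indirection above the senses.

    It does not sense the heat itself; it inspects the Watched's
    readings and decides what, if anything, to do about them.
    """
    def __init__(self, watched, sunbathing=False):
        self.watched = watched
        self.sunbathing = sunbathing

    def decide(self):
        temp = self.watched.readings()["temperature"]
        if temp > 30 and not self.sunbathing:
            return "take a cold shower"
        return "remain as is"


hot_day = Watched(temperature=35)
print(Watcher(hot_day, sunbathing=True).decide())   # remain as is
print(Watcher(hot_day, sunbathing=False).decide())  # take a cold shower
```

The API between the two is just `readings()`: the watched exposes what it is "conscious of", and the watcher alone deliberates over it. Adding a third class that watched the Watcher would be the regress of homunculi mentioned earlier.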

An Example
The horrid little paperclip character and variants that appear in odd places in Windows, attempting to appear aware that my search is taking a long time, or of the fact that I have just printed a document (which I knew anyway, and is why I find him irritating). Matt Stan 21:26, 19 Mar 2004 (UTC)

But what is that an example of? Occasionally I have seen colleagues jump up and shout at their computer as if it were a conscious being. From time to time we are fooled into this belief. Paul Beardsell 05:16, 20 Mar 2004 (UTC)

Copernican principle
Why is it that I am not suspicious of other humans' consciousness? Why do some propose a gold standard test of consciousness? Were you to express sympathy that my search (above example) is taking a long time, I would not suspect that you are unconscious. Why not give the paperclip the same benefit of the doubt? Perhaps I suggest this several decades too early.

What if the consciousness on-off switch were found in the human brain? What if, through the administration of a drug or the activation of an electrode, I could turn (the intransitive version of) your consciousness off? What if I did so but your behaviour was the same? The light bulb in the projector of your homunculus's cinema has blown. The internal observer is blind, but you continue regardless.

I think that as the abilities of machines continue to increase there is a danger that the gold standard is modified to maintain our exclusive grip on what many would like to see as our unique ability: Consciousness. To avoid that is why I am suggesting that we look for the lowest possible (set of) standard(s) - not a gold standard.

Paul Beardsell 05:16, 20 Mar 2004 (UTC)

No, disagreeing with myself: what I am really suggesting is that consciousness be considered relative; that there be no pass mark. I am arguing against a set of pass/fail tests if only for the reason that some humans will fail some of those tests through disability (a truism!) or wilful bloody-mindedness. And some machines will too. Oh, yeah, I forgot: we are machines! Paul Beardsell 05:25, 20 Mar 2004 (UTC)

I too was looking for the lowest common denominator. Hence my querying average human as a yardstick. You seem to be, like so many with commercial software house backgrounds, fudging the issue of acceptance criteria. If we can't propose, let alone agree on, acceptance criteria for artificial consciousness, then anybody can claim that anything is conscious, and who's to gainsay them? I don't see how I can be relatively more conscious than you. I am either conscious or unconscious, depending on whether I can remember where I put the vodka. A conscious human cannot fail the test of consciousness - one only needs to assert that one is conscious and say I am thinking, therefore I am in order to be believed, or be examined by a medical doctor, who will aim a light into one's eye, hit one's knee with a hammer and ask who the Prime Minister is, in order to indicate that one is conscious. What will our doctor of artificial consciousness do to deem our machine artificially conscious? Matt Stan 15:05, 20 Mar 2004 (UTC)

I'm not sure that the acceptance criteria point applies to me. You claim a privileged position as a human. If Dr Spark (a C3PO) says that Mr Flash (another C3PO) is conscious, would you consider either of them conscious? Someone (I will have to look this up) said that language is necessary for consciousness. Some humans are better at language than others. Maybe that makes some humans more conscious than others? Paul Beardsell 16:59, 20 Mar 2004 (UTC)

Sometimes I am completely unconscious. Sometimes, dreaming, I am somewhat conscious, but possibly not observably so. At other times my level of consciousness varies. I am, it seems, sometimes perfectly capable of driving a car without being conscious of doing so. But the outside observer would swear I was consciously driving the car. Consciousness is not on or off, it is relative, we are conscious of, not just conscious. So when the thermostat says it is too hot, it is conscious of the temperature; when someone says he thinks, therefore he is, he is conscious of his existence; when a computer says it has no free RAM, it is conscious of the shortage.

I think this allows us a way forward. You think it is fudging the issue. What issue? The one you cannot define!

Paul Beardsell 17:08, 20 Mar 2004 (UTC)

I think you are confusing conscious of with attentive to. The fact that you are not paying attention to your hand moving to the gear lever does not mean that you are unconscious. I'm back at the intransitive consciousness point. You wouldn't be able to change gear at all if you were unconscious. You have to be conscious to do anything deliberate. Attention and the filtering of inputs are covered under another criterion for testing artificial consciousness. The driving example is interesting, though, because it implies another level of indirection. Assuming you are driving the car deliberately (though I concede you might take a whole journey without being aware of doing so because your mind was elsewhere), then certain actions take place autonomously, without you being aware of them, or, as you would put it, without you being conscious of them. I think that these extended actions are linked to one's autonomous motor control system, which is not a defining characteristic of consciousness, but may be a useful extension of the artificially conscious machine. Matt Stan 06:53, 22 Mar 2004 (UTC)

While driving I can be fully attentive and not conscious. Were I hooked up to the equipment used to investigate the fly my responses would be similar. I did the whole journey in a daydream, unaware of the complicated cognitive (not just reflex) driving decisions I made. What I was conscious of, on this particular journey, was the radio programme, or the plot of a film I saw recently. But I was a zombie or an automaton as far as the traffic was concerned. Where "conscious" is the TV channel the "internal watcher" habitually watches, when he is not unconscious! Paul Beardsell 11:45, 22 Mar 2004 (UTC)

We each argue each other's points. Which is good, of course. You said, "You have to be conscious to do anything deliberate." That is why the thermostat is conscious. Or, of course, one has to be human to be deliberate? Or quite human-like? Paul Beardsell 11:54, 22 Mar 2004 (UTC)


 * I am just saying that you couldn't drive a car if you were what doctors call unconscious. What you are conscious of whilst driving, or doing anything else for that matter while conscious, is dependent on what you are being attentive to. And one definition of conscious - the latest one in the SOED - is synonymous with being awake, i.e. attentive to something. Now there are warnings on the motorways against driving whilst asleep, as it tends to result in damage to the wildlife in the central reserve, amongst other things. Matt Stan 12:13, 22 Mar 2004 (UTC)


 * I don't think the thermostat deliberates. Its bimetal strip just clicks when it gets too hot or too cold. It's the entity that set the thermostat to a particular temperature that deliberates, though it could do it accidentally, of course. That's the watcher, or deliberator, which distinguishes consciousness from mere passive existence, such as a rock has. Is the rock that expands slightly when its temperature increases and causes an avalanche effectively acting as a catastrophic thermostat, doing it deliberately and consciously? I suggest not. It was not the conscious decision of the rock to cause an avalanche, even though it did cause an avalanche. But if I placed the rock in a pivotal position in order to leverage other rocks when the thaw started, in order to cause an avalanche, then it was my consciousness that brought that about, and that has no bearing on the status of the rock, which remains unconscious of what is going on and what it is doing at all times. Matt Stan 12:21, 22 Mar 2004 (UTC)