Talk:Eliezer Yudkowsky/Archive 1


Vanity page?

Yudkowsky is extensively discussed in my 2012 book Singularity Rising. According to my book Yudkowsky is a significant AI researcher. I'm a professor at Smith College and am not affiliated with the Singularity Institute. — Preceding unsigned comment added by Economicprof (talkcontribs) 15:09, 10 October 2012 (UTC)

I've got a copy of your book in front of me, James. Not only do you seem to have a crush on Eliezer, but you've also written basically an advertisement for the Singularity Institute and Eliezer's greatness as humanity's savior, so please donate all the money you can to keep Eliezer from working for Walmart like he probably deserves. I predict that your infatuation with Eliezer will embarrass you eventually. Advancedatheist (talk) 04:33, 11 October 2012 (UTC)

As far as I know (someone please correct me if I'm wrong), Yudkowsky has no credentials of any kind -- no degree, no relevant job, no publications (except those he has "published" himself on his own web page, that is), no awards, etc. -- relevant to his "research". In fact, I don't think he has an education of any kind, even a high school diploma. I would be glad if someone who knew a little more about Yudkowsky's background and in what way he qualifies as a researcher (as opposed to just having a layman interest) in the topics he mentions researching could weigh in on this matter, because I strongly suspect that the article is currently misleading. All I've heard about Yudkowsky is that he's good at self-promotion -- like writing Wikipedia articles. --Miai 02:05, 4 October 2005 (UTC)

I didn't write the Eliezer Yudkowsky page on Wikipedia. So whatever it is, it's not self-promotion.
If Bayesian theorists had duels, I'd challenge you to one. I know the math. I just don't have a piece of paper saying I know the math. See e.g. An Intuitive Explanation of Bayesian Reasoning or A Technical Explanation of Technical Explanation.
My paper "Levels of Organization in General Intelligence" will appear in the edited volume "Artificial General Intelligence", forthcoming, eds. Ben Goertzel and Cassio Pennachin, Springer-Verlag.
Those sufficiently in the know regularly cite me as that weird guy who actually cares about friendly Artificial Intelligence. E.g., open up Kurzweil's "The Singularity Is Near" (currently #14 on Amazon.com), flip to the index, and you'll see entries for Eliezer Yudkowsky; look them up, and most of them are discussing Friendly AI (along with a miscellaneous quote or two from my other writings on the Singularity).
This is my full-time day job. I get paid to do this. Does that qualify as a relevant job?
As for whether all that qualifies me for a Wikipedia entry - I don't know, I'm not a regular Wikipedia editor. Someone just pinged me on IRC that this page was here, and I responded. --Eliezer Yudkowsky 4 October 2005.
Yudkowsky: yes, you definitely qualify for a Wikipedia article. --Maru (talk) 03:54, 4 October 2005 (UTC)
Fair enough, you do qualify for an article. However, I still think it should be stated somewhere that you lack formal credentials, and that your work is normally not published in peer-reviewed journals (ok, so you got a paper into a Springer volume, but that's a feat within reach of practically any doctoral student in the world). In short, my objection is simply that the article makes you sound much more important than you really are. If you did not create this article, then of course this is none of your fault, but that does not diminish my considering it misleading. Miai 13:38, 4 October 2005 (UTC)
As far as I can tell he only ever worked as an "AI researcher" at SIAI, which is a nonprofit he himself founded. 78.60.253.249 (talk) 05:43, 12 April 2012 (UTC)
Another thing: I have no idea why on earth you propose a "duel" to show off your math skills. I never said that I, personally, was a better Bayesian than you, or, for that matter, anything about Bayesianism in particular. Nor do I understand why it should matter how good at math you are relative to some random wikipedian (me). That said, I took a glance at your articles. The math in them is not beyond what any college freshman learns during his first week of probability theory. Qualifying as a Bayesian theorist who knows the math must mean something more than simply being able to explain Bayes' theorem to laymen, or just about everyone qualifies, making the label worthless. --Miai 13:50, 4 October 2005 (UTC)
Explaining Bayes's Theorem in such a way that people actually understand it is no small feat, as you'd know if you were familiar with the literature on Bayesian education or rather the failures thereof. A talented math student can pick it up just by looking at the equation, but most people can't. Anyway, if you're already familiar with Bayes, good for you. If you want to see me engaged in a more difficult Bayesian argument with a doctorate-decorated scientist, see this thread. Why is this relevant? Because you questioned whether I know my topic. I do. Any other questions? Right now I am still trying to solve the problems, such as Friendly AI, that I seem to have become known for having pointed out. I've made a deliberate choice to focus my time on solving these open problems, rather than writing papers - I can afford to do that, since I'm not pursuing tenure. I left the formal school system after the eighth grade, but it should be bloody obvious that I'm not operating on an eighth-grade knowledge base. Are you seriously going to argue that it goes up to level XYZ and then stops there, or whatever? If you want to try and explain in the article that I got by without a formal education, then to be fair you should try and convince readers that I know the math anyway, and at that point it stops looking like an encyclopedia article and starts looking like an argument... I think your tone in your first query betrays your own lack of neutrality on this issue. Let's hear from someone with no particular emotional investment what they think the encyclopedia article ought to say. --Eliezer Yudkowsky 4 October 2005.
I took a look over the history of the page. (Jeebers...) Calling me a "self-proclaimed genius" isn't NPOV, but probably neither is referring to me as a "cognitive scientist" at this point in time. I don't think "Artificial intelligence researcher" is particularly controversial; plenty of people working on that problem don't have doctorates in the subject nor extensive publications. I made a quick edit of my own page (for the first time, please note), essentially just deleting stuff that wasn't NPOV (plus a quote and link that were obsolete), and the page now looks a bit more neutral. Not perfect, but more neutral. Other than that, heck, I didn't write it. --Eliezer Yudkowsky 4 October 2005.
My personal opinions on you and your work do not enter the equation here. I'm pointing out that things that are factual, indisputable, and relevant to the article are missing. One of those is that you have no formal education. Another is that your publications normally do not appear in peer-reviewed journals. Are you seriously disputing that such information is relevant in an article about a researcher (self-proclaimed or not)? As for having an "emotional investment" in the matter -- give me a break. Just because I don't hold your research in high esteem that doesn't mean I'm being emotional about it. However, I wholeheartedly agree that someone else should look at this page and correct it. In fact, I even suggested so in my first posting on this talk page. Just scroll up and it's right there. --Miai 22:09, 4 October 2005 (UTC)
It was me who called you a self-proclaimed genius. I clearly remember your calling yourself a genius somewhere in one of your books, probably GISAI. Are you denying that you did so? IIRC, you were saying that some computational task was beyond the scope of even a genius like you.
The reason for my putting that in the article was not to show you in a bad light; IMHO much of the writing in GISAI is more intelligent than anything else I've ever read, and it is quite possible that you are as smart as people traditionally considered geniuses. But typically it is people other than themselves who called them a genius, and so your giving yourself that label is inclusion-worthy in the article. Arvindn 12:18, 23 December 2005 (UTC)
If I remember that quote correctly, it sounded more ironic and tongue-in-cheek than serious; it did not make the same impression as putting "self-proclaimed genius" on the wikipedia page would, at all. If it helps, though, I'm perfectly comfortable with proclaiming him a genius. 84.202.38.64 23:59, 8 September 2007 (UTC)

Wired quote

Added in this edit by Miai:

When Wired News wrote an article about Yudkowsky, and asked researchers to comment on Creating Friendly AI, the only response quoted was by one well-known researcher who deemed it "worthless speculation."[1]

In addition to being factually incorrect (the Wired article in question quotes Alon Halevy of the University of Washington immediately afterwards), no name is given for this comment, so the fact that he is a "well-known" researcher is unverifiable. The opinion is too strong to be NPOV if there's no name attached to it. The criticism from Alon Halevy would be more appropriate, because the author provides a name, credentials, and information on why his opinion should matter—Halevy's an editor at the Journal of Artificial Intelligence Research. -- Schaefer 01:15, 7 October 2005 (UTC)

I disagree with it being POV, since it only reiterated something written in a refereed article published in a serious publication (well, ok, it's Wired, but you get my point). Perhaps your claim is merely that I should have clarified that further by writing "according to Wired News, one well-known researcher deemed it worthless speculation"? As for it being factually incorrect, that's also a little murky. Wired never says that Halevy actually comments on Yudkowsky's paper, only that he's not "worried about friendliness." Unlike the "well-known researcher," there isn't much to indicate that his comment is directed specifically at Yudkowsky's writings; rather, it seems to address the more general issue of whether spending time thinking about "friendly AI" is a worthwhile pursuit. All this said, though, I accept your removing the quote. I have no strong feelings either way, and if you feel that it was inappropriate, I won't bitch about it. PS. Please sign your talk page edits. Miai 00:42, 7 October 2005 (UTC)

Gaping factual errors inserted by Miai

Yudkowsky has no college degree, and did not complete high school. He decided to devote his life to the Singularity at age 11, after reading Vernor Vinge's True Names. In eighth grade he became convinced that the Singularity was so near that there was no time for a traditional adolescence, and therefore quit school.[1] He has since devoted his time to thinking about the Singularity.

I left the school system after the eighth grade, which is hardly the same as failing to complete high school. I read True Names at age 16. Your given explanation for why I left the school system is completely wrong - I had never heard of Vinge's Singularity at that point and I do not in general endorse that kind of short-term thinking. You appear to have made up this 'fact' completely from thin air, which is hardly appropriate for an encyclopedia article. I deleted this paragraph entirely and made several other changes to maintain a more neutral NPOV. I'm not sure how one goes about asking Wikipedia editors to ban someone from editing an article, but making stuff up out of thin air is not an appropriate way to write an encyclopedia article, not to mention major factual errors. I think you've tipped your hand with this one. Please stop editing this page. Thank you. --EliezerYudkowsky 00:47, 7 October 2005 (UTC)

Eliezer, I don't know you. I am relying on the Wired article, which is the only article about you that I know of (which has been published in a semi-respectable publication). If you were misquoted in that article, my apologies, but all my edits had footnotes, and everything I wrote can be traced back to that article, so you should really blame Wired rather than me. I am certainly not "making stuff up out of thin air," so please stop accusing me of things I am quite obviously not guilty of. Now to the rest of your reply. If your idea of a "neutral NPOV" is to write of yourself that you are "one of the foremost experts on the technological singularity" or to remove all information on your schooling, although information about college degrees and lack thereof is present in all other Wikipedia articles about researchers, I'm afraid I can't let that stand. As for you not finishing high school, I never wrote that you "failed to complete high school" -- I wrote that you "did not complete high school." Is this incorrect? Did you in fact complete high school? If you did in fact do so, my apologies. Then please, by all means, change the article to reflect this fact. Do not, however, remove the whole section. Now, if you excuse me, I have some editing to do. Miai 01:10, 7 October 2005 (UTC)
Give me a bloody break! You read one hatchet job full of factual errors, not a primary source but a news article written by someone who spent a couple of hours, and you think you're an expert? If you think Declan's hatchet job is the only thing that's ever been written about me, you're displaying pure ignorance, so great that you're not even aware that other information exists. I'm not particularly fond of it, but you could check the subsection on me in Damien Broderick's The Spike. Also, I am one of the foremost experts on the technological singularity, widely listed as such, and I don't see why you would dispute this. If you don't know me, quit editing an article about me! There's plenty of people who've known me since 1996, both admirers and detractors. Let someone who actually knows what he's talking about edit the article. --EliezerYudkowsky 01:31, 7 October 2005 (UTC)
I am sorry if I have not read everything there is written about you, but surely you can see the danger with the object of an article being the main editor of it himself, especially if his edits are without footnotes. If you are widely credited as "one of the foremost experts on the technological singularity," find someone who has written that in some publication somewhere and then refer to that publication in the article. But you can't just write it because it feels good. In the same vein, I do not agree with your assessment that your article is better written by someone who "knows you". I prefer if it is written by people who have no personal stake in the matter. Speaking of which: you must understand that I have nothing personal against you, and that I am not "out to get you." I came across this article by a coincidence and felt that it was not telling the whole story. For example, the matter of your schooling, which you continue to avoid. Sorry, but I won't stop editing it merely because I include things that you are not comfortable with. If they are accurate and relevant, they belong in the article. BTW, I would be glad if you provide a link to the autobiographical essay where you write of the importance of your work to humanity, etc. if that wasn't also "made up" by the author of the Wired article. Miai 01:50, 7 October 2005 (UTC)
Miai, I never edited this article until someone pinged me on IRC that you had done so. If you want to revert the entire article to its status before either of us came along, that's fine by me.
After the Wired article ran, I was so horrified at the way the quotes had been taken out of context that I took down all autobiographical material about me to prevent further misreporting. If you have never had a reporter write about you, ask a friend who has been reported on. The experience, that first time, is rather like being run over by a truck - you do not realize how truly awful is the state of reporting, how little you should trust reporters, until you have been reported on yourself. Furthermore, even if the quotes were in their original context, I have by now repudiated that entire point of view - my outlook has changed drastically since I was 21 years old. Even in their original context, where they made far more sense, those quotes would be of interest strictly as a history of the ideas I discarded on the way to where I am now. If anyone had so much spare time. There is still plenty of primary source about me on the Net, including other articles than that Wired silliness - just read, read, read.
With regard to academia 'showing little interest' in my work - you have a rather idealized view of academia if you think that they descend on every new idea in existence to approve or disapprove it. It takes a tremendous amount of work to get academia to notice something at all - you have to publish article after article, write commentaries on other people's work from within your reference frame so they notice you, go to conferences and promote your idea, et cetera. Saying that academia has 'shown little interest' implies that I put in that work, and they weren't interested. This is not so. I haven't yet taken my case to academia. And they have not said anything about it, or even noticed I exist, one way or the other. A few academics such as Nick Bostrom and Ben Goertzel have quoted me in their papers and invited me to contribute book chapters - that's about it.
I suppose I can see why you, knowing nothing whatsoever about me except that Wired hatchet job, would be skeptical that I am one of the foremost experts on Vinge's Singularity. If you Google a bit, you will find that any page that lists people involved with the Singularity is more likely to list me than not. If you flip to the index of Kurzweil's The Singularity Is Near, by far the bestselling book on the Singularity, you shall find me there. Et cetera.
I am an academic outsider. I despaired of America's failing educational system at the age of 12, and learned on my own thereafter. The prospect of getting tenure at, say, MIT, doesn't really interest me - I have different goals; that's not what I want to do with my life. However, I regard myself as working mostly within the standard frameworks, such as Bayesian probability theory, Bayesian decision theory, the neo-Darwinian synthesis in evolutionary theory, et cetera, and those few works I have written for an academic audience reflect this. I agree that this should be in the article somewhere as factual information, and I will try to figure out an appropriate NPOV way to include it. Making me out as an ignorant madman railing against an indifferent academia simply isn't true.
In all seriousness - if you have nothing against me, then quit editing the article until you know. There is no reason why you can do better arguing and editing from complete ignorance of this topic than you could editing any other Wikipedia article in ignorance. --EliezerYudkowsky 02:35, 7 October 2005 (UTC)
"If you want to revert the entire article to its status before either of us came along, that's fine by me."—For the record, I support anyone that wishes to reduce this article back to stub or redirect to the article for the Singularity Institute for Artificial Intelligence (I have made such redirects in the past). Yudkowsky's notability does not extend beyond his service in that organization, and everything he's written that needs mention is an official SIAI publication. Describing Yudkowsky's life from a NPOV is proving problematic, because there are very few secondary-source articles about the Singularity that can confirm Yudkowsky's prominence in the field, and the one article that exists about Yudkowsky himself seems to contain misquotations and factual errors. I think this is evidence of a deeper malaise: Yudkowsky's life just isn't an encyclopedic topic. His advocacy of the Singularity is notable. The institute he co-founded is notable. But the details of his education and what repudiated views he once held are, well, what I would call "fancruft" if Yudkowsky were a fictional character. It doesn't need to be here. -- Schaefer 03:19, 7 October 2005 (UTC)
I would support redirecting the article, were it not for the fact that transhumanists form a disproportionately large subgroup on Wikipedia. For that reason, there are many Wikipedia articles about completely non-notable transhumanists, and those of even moderate interest, like Yudkowsky, are likely to just get created again if removed. I agree with your characterization of the problem we're having here, though. If there were some way of proclaiming that a particular page is a stub or redirect by decision, I would support your proposal, but I think that is against the Wikipedia spirit. As for now, I cannot support either a redirect or a revert, but it is clear that something has to be done. Miai 03:53, 7 October 2005 (UTC)
I agree with Schaefer's observation about fancruft. I'm slicing the article down to a stub - call it an anti vanity-edit. If anyone tries to elaborate the article, hopefully they'll check the discussion page first.
Incidentally, Jimmy Wales, founder of Wikipedia, was posting to the SL4 list (which I own and operate) back when Wales was still working on Nupedia. (Yes! I am an actual person; yea, my history stretches back even to the roots of Wikipedia itself! We could always ask Wales to edit this article; he's not a domain expert, but he knows more about me than you do, Miai...) If there's a "disproportionately large" subgroup of transhumanists on Wikipedia, it's because transhumanists are early adopters and helped build Wikipedia at its beginnings. --EliezerYudkowsky 21:49, 14 October 2005 (UTC)
Sure, ask Jimbo to edit the article -- that sounds like a rational solution (rolls eyes). Regarding transhumanists -- I did not pass a value judgement, just noted a fact about the number of them at Wikipedia, without commenting on whether or not this was a good thing. Not everything is some kind of attack on you or your party line, you know. Miai 22:12, 16 October 2005 (UTC)

Recent edits

I support the recent edits by Mr. Yudkowsky. The bulk of the article should consist of only relevant encyclopedic information, such as what's available on Kurzweil's website. In addition, I've listed his photo, of which Mr. Yudkowsky holds the copyright, as a possibly unfree image. Ultimately, whether the photo should remain is up to Mr. Yudkowsky, and whether his personal life and education should be discussed is also his decision. Adraeus 22:11, 14 October 2005 (UTC)

I also support the recent edits by Yudkowsky. We have found good neutral ground here, and it's decent of him to support reverting an article about himself back to stub size. Miai 22:17, 16 October 2005 (UTC)

Piping links

"piping is not always good style, says Mel Etitis"

Hmm, I'm not sure when I said that, but Wikipedia:How to edit a page#Links and URLs certainly does: "Preferred style is to use [a blended link] instead of a piped link, if possible." --Mel Etitis (Μελ Ετητης) 10:11, 16 October 2005 (UTC)

Your minor edits to atheism show that you encourage the use of blended links whenever appropriate. We briefly argued about this too. Adraeus 00:14, 18 October 2005 (UTC)

Ah, I thought that I hadn't actually said it — but you're quite right that I thought it, and acted on it, and would encourage others to do the same. --Mel Etitis (Μελ Ετητης) 21:13, 18 October 2005 (UTC)

notable wikipedian?

A real "notable wikipedian" when all he does is edit his own page joining the ranks of roger ebert.

This article does not describe Yudkowsky as a Wikipedian. Factitious 15:36, 18 December 2005 (UTC)

Recommendations

Gentlemen, I'll thank you to turn the temperature on this down a bit.

Eliezer, it's generally wisest to leave your page for others to edit. If you see a factual error, toss it in here first. Frankly, the most appropriate way to get a more accurate page here may be by giving more interviews, and including your own background, not just the problems that interest you. Once citable references exist, and references to your own work pop up in enough places, things will take shape on their own.

Now...

As it happens, a good chunk of the speakers at the Stanford Singularity Summit (http://sss.stanford.edu) are in the same basket (which is to say, people can't agree what's an appropriate level of detail for them). It seems to me an appropriate step at this point is to create a page to describe the summit, and break out subsections for the speakers. In that context, it's certainly appropriate to break out what they were called on to speak about. Combined with moving whatever information may be more appropriate for the Singularity Institute's page there, this may resolve the issue.

Time permitting, I may be able to do this myself. Breakpoint 2120PST09JUL2006

Turn the temperature down? :) This talk has been cooling down for more than half a year, Breakpoint. --maru (talk) contribs 20:04, 10 July 2006 (UTC)
Yeah, Breakpoint, it appears you are not only late to the party, but the party has been over for a while now. (Cardsplayer4life 22:50, 13 July 2006 (UTC))
Oops. Well, you know what they say-- the best laid plans of mice and men are usually about equal. Last year was a busy one. User:Breakpoint 08JAN2007))
I was wondering... I'm not that familiar with Eliezer Yudkowsky, but I know he wrote some pretty good fanfiction... would it hurt if there was a link to that? Or... if not that's cool too. —Preceding unsigned comment added by 67.208.225.150 (talk) 01:55, 2 May 2011 (UTC)

Merge proposals

I propose that both the Seed AI and Friendly artificial intelligence articles be merged into this article. The reason is that both other articles are theories of Eliezer Yudkowsky and are not widely adopted in the mainstream, so it doesn't make that much sense for them to have their own articles -- also, both articles are based on writings of Yudkowsky that he has self-published, which isn't a strong claim to notability. It is likely that both Seed AI and Friendly artificial intelligence would fail an AfD if they were put up to it -- but rather than outright deletion I favor merging and replacing both articles with redirects. --70.51.233.97 00:55, 5 January 2007 (UTC)

Hmm, I don't necessarily agree or disagree with you, but I have heard both terms discussed apart from Eliezer, so I think they stand on their own, but it might also be because I am friends with lots of nerds. :P Cardsplayer4life 03:07, 5 January 2007 (UTC)
Though Yudkowsky is a prime promoter of these concepts, they are also promoted by the SIAI. Though we don't like too many invented words on Wikipedia, I feel that both Seed AI and Friendly artificial intelligence would survive AfD, as they are concepts used by others, e.g. read [1] by Peter Voss from Adaptive A.I. Inc (which the Seed AI article mentions too). Nick Bostrom refers to Yudkowsky's papers on Friendly AI. So though both Seed AI and friendly AI may be new terms, as concepts they have more than one person or organisation that uses or indicates awareness of the term (though probably no one will really understand what it means until you get an email from a friendly seed AI saying hi! and then we'll all go, ah, what a nice program).
I do think the articles all need a bit of cleanup, as links are broken and we can probably add more wikilinks, and when we do get friendly AI I want to make sure I'm in its good books, as we don't want to make it unhappy now, do we? Ttiotsw 10:30, 5 January 2007 (UTC) (I'm happy to admit I'm a nerd).
I think the path towards fixing/merging a number of these articles might be a good deal clearer if we had a better understanding of whether or not SIAI has really grown much beyond Yudkowsky himself. I find a lot of what he has to say interesting, but I find some of it questionable, and it would help to find some solid peer review work. This was, in my opinion, one of the problems with the Singularity Summit; a lot of interesting ideas were discussed, but thorough responses were not available for all of the speakers (some, but not all). Yudkowsky has taken some pains to, apparently, thoroughly describe some of his theories, but this material is somewhat large and I must confess that I haven't found time to read them. I have a feeling that a number of other people may be in the same boat, and there are perhaps not adequate critiques available yet. Oh, and I apologize for not checking my messages frequently; I'm rather busy these days. Sorry folks. Breakpoint 08JAN2007
Well, if Goertzel's Real AI: New Approaches to Artificial General Intelligence ever gets published, we might be gifted with some criticism of Yudkowsky's "Levels of Organization in General Intelligence". --Gwern (contribs) 01:24 9 January 2007 (GMT)
  • I Oppose the merge. Seed AI is a mainstream concept that I had heard of and used years before ever hearing of this guy. Concepts are important, not people.--Procrastinating@talk2me 10:44, 23 February 2007 (UTC)
  • I Oppose the merge.
I think that both concepts are distinct and important (if both don't happen at the same time, humanity is likely to face a holocaust, but that's another issue entirely). I also think that Mr. Yudkowsky himself is important and worth a distinct and distinguished bio on Wikipedia. I agree that his importance is difficult to estimate in the sense that he hasn't won the Nobel prize or anything... but I think this is more due to the structure of the world rather than his actual unimportance. For example, the academic literature on AI is (in some sense) irrelevant to the real world and not worth publishing in if you're serious about doing something in AI that's good for anything other than gaining prominence within the academic community. Much of it has a sort of "think shallowly for a week, spend a year working on a math problem, publish without source code" feel. Flowing prose giving original insight into the issues of cognition, its mechanical basis, the processes which give rise to it, and the pragmatic ethical implications of this thinking isn't really welcomed in any journals I know of. Mr. Yudkowsky seems deeply similar (in wanting to write deeply about AI but not having a venue where deep thought on the subject is welcome) to Douglas Hofstadter (except that Yudkowsky isn't yet a "grand old man of AI" like Hofstadter), whose page currently says:
Hofstadter has not published much in conventional academic journals (except during his early physics career, see below), preferring the freedom of expression of large books of collected ideas. As such, his great influence on computer science is somewhat subversive and underground — his work has inspired countless research projects but is not always formally referenced.
On top of this, Yudkowsky is an autodidact (with all the "possibly a crank, possibly a genius" connotations that involves - it's a high-risk life strategy that many people are emotionally biased against) and is offering a revolutionary paradigm to the field of AI when it's not experiencing a crisis. Scientists doing "normal AI research" are faced with no crisis of empirical disconfirmation of previously working theories (nor are they especially pinched for grant money), so they aren't likely to take well to someone who has written[2]:
"Friendly AI is not a module you can instantly invent at the exact moment when it is first needed, and then bolt on to an existing, polished design which is otherwise completely unchanged.
"The field of AI has techniques, such as neural networks and evolutionary programming, which have grown in power with the slow tweaking of decades. But neural networks are opaque - the user has no idea how the neural net is making its decisions - and cannot easily be rendered unopaque; the people who invented and polished neural networks were not thinking about the long-term problems of Friendly AI. Evolutionary programming (EP) is stochastic, and does not precisely preserve the optimization target in the generated code; EP gives you code that does what you ask, most of the time, under the tested circumstances, but the code may also do something else on the side. EP is a powerful, still maturing technique that is intrinsically unsuited to the demands of Friendly AI. Friendly AI, as I have proposed it, requires repeated cycles of recursive self-improvement that precisely preserve a stable optimization target.
"The most powerful current AI techniques, as they were developed and then polished and improved over time, have basic incompatibilities with the requirements of Friendly AI as I currently see them. The Y2K problem - which proved very expensive to fix, though not global-catastrophic - analogously arose from failing to foresee tomorrow's design requirements. The nightmare scenario is that we find ourselves stuck with a catalog of mature, powerful, publicly available AI techniques which combine to yield non-Friendly AI, but which cannot be used to build Friendly AI without redoing the last three decades of AI work from scratch.
"In the field of AI it is daring enough to openly discuss human-level AI, after the field's past experiences with such discussion. There is the temptation to congratulate yourself on daring so much, and then stop. Discussing transhuman AI would seem ridiculous and unnecessary, after daring so much already. (But there is no privileged reason why AIs would slowly climb all the way up the scale of intelligence, and then halt forever exactly on the human dot.) Daring to speak of Friendly AI, as a precaution against the global catastrophic risk of transhuman AI, would be two levels up from the level of daring that is just daring enough to be seen as transgressive and courageous."
In other words, any AI researcher whose career succeeded in the last three decades would have to agree that "redoing the last three decades of AI work from scratch" (including everything their success was based on) is potentially a good idea, because these efforts might have been not just unproductive toward good goals but actively conducive to terrible catastrophe.
I've been following Yudkowsky's thinking since 2000 or so. His writings have influenced my career choices. I came looking for Wikipedia's article on him thinking it would be interesting and worthwhile to read given my current research project, and instead of a worthwhile article I found squabbling and accusations of a "vanity page"... :-(
-JenniferForUnity 20:22, 5 March 2007 (UTC)
Oppose the merge - these concepts exist outside of Yudkowsky's work/life. Many transhumanists think these are valid and valuable ideas. Paranoid 18:21, 13 May 2007 (UTC)
Oppose the merge - The interest is not just transhumanist. This is about the risk of human extinction or the destruction of civilization. Please read Existential_risk and the links therein. In my OPINION, every occupation that is not addressing this threat is, at best, analogous to providing impeccable customer service aboard RMS Titanic. I have nothing against impeccable customer service, but what would have made a difference would be educating First Officer Murdoch in how to handle, "Iceberg right ahead!" I lay no fault on Murdoch. Had he known to just go ahead and hit the iceberg head-on, that would have made a difference. RickJS (talk) 16:48, 3 July 2008 (UTC)