Talk:Superintelligence: Paths, Dangers, Strategies

Untitled
How are WaitButWhy and Tim Urban more mention-worthy here than other random bloggers on the internet? — Jeraphine Gryphon (talk) 10:20, 31 March 2015 (UTC)

Input from AI/physics/neuroscience experts needed
This article needs input from AI/ML, physics and neuroscience experts. The recent great successes of the "new AI" are in machine learning (ML), and most experts feel the key to intelligence is learning (either biological or artificial) - see for example http://mcgovern.mit.edu/principal-investigators/tomaso-poggio However, most ML experts are, naturally, gung-ho about the field, and have no incentive to consider the potential limits. These limits generally reflect two issues. First, the adverse effects of high dimensionality n, with computational needs for universal intelligence growing exponentially with n. Brains have ~1 quadrillion synapses which are updated in parallel every millisecond; currently planned computers have ~10 billion transistors serially updated every nanosecond, giving an overall shortfall of order a quintillion. Second, Moore's law is now slowing as it hits physics limits, so it seems unlikely that quintillion-fold increases in density are possible. One could argue that in principle brains are proof that an approximation to UI is possible, perhaps at a time scale much faster than the learning that has occurred since humans evolved. Until we thoroughly understand the neural basis of human intelligence - in particular the detailed operation of the neocortex - that hope is like believing in God: no real evidence. Paulhummerman (talk) 13:13, 24 October 2015 (UTC)
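A quick sanity check of the comparison above, using the comment's own round numbers (a sketch, not measured figures; note that whether the gap is read as the absolute rate a serial machine would need, ~10^18 updates/s, or as the ratio to what a serial chip delivers, ~10^9, depends on how the parallelism is counted):

```python
# Back-of-envelope check of the brain-vs-computer update-rate comparison.
# All figures are the commenter's assumptions, not established measurements.

SYNAPSES = 1e15          # ~1 quadrillion synapses, updated in parallel
BRAIN_STEP_S = 1e-3      # one parallel update pass per millisecond
CHIP_STEP_S = 1e-9       # one serial update per nanosecond

# Brain throughput: all synapses update each step.
brain_updates_per_s = SYNAPSES / BRAIN_STEP_S   # ~1e18 (a quintillion) per second

# Serial chip throughput: one update per step, regardless of transistor count.
chip_updates_per_s = 1 / CHIP_STEP_S            # ~1e9 per second

shortfall = brain_updates_per_s / chip_updates_per_s  # ~1e9 ratio
print(f"brain: {brain_updates_per_s:.0e}/s, chip: {chip_updates_per_s:.0e}/s, "
      f"ratio: {shortfall:.0e}")
```

Under these assumptions the brain's raw synaptic-update rate is on the order of a quintillion per second, which is presumably the figure the comment has in mind.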

This is the article on a book by Nick Bostrom, the philosopher. Other expert opinions should be included only if they reacted specifically to the book. Otherwise, refer to the more general Wikipedia pages on these topics, such as Superintelligence Andre908436 (talk) 17:11, 25 April 2020 (UTC)

"the dominant lifeform on Earth"
Does Bostrom really consider superintelligence to be, in reality or potentiality, a lifeform? This seems counterintuitive and, if not a misrepresentation, could do with explaining. – Arms & Hearts (talk) 22:49, 19 November 2019 (UTC)


 * That is the fundamental problem with all these AI speculators. How can you convince a machine that it is 'alive', and how can you convince it that it should want to be alive? Over and over the AI 'experts' say we should be worried about an AI that wants 'self-preservation', because any 'intelligent' being would want that. Well, no! Any LIVING being should want that. Why should a computer care if we turn it off? Are we going to program a belief in computer-heaven into it? AI theorizing is SO ridiculous. It's almost as if the people doing it are using their own less-than-real intelligence, like they live so far removed from 'natural life', with so many artifices in their lives, that they are no longer capable of pondering the fundamental realities of life. They're the artificially intelligent trying to imagine programming real live intelligence into a machine, anthropomorphizing to the degree of cliché. — Preceding unsigned comment added by 154.5.212.157 (talk) 09:24, 1 March 2021 (UTC)