User:The Transhumanist/Sandbox11a

Weak AI (also known as narrow AI) is non-sentient artificial intelligence, typically focused on a single narrow task. In 2011, Singularity Hub wrote: "As robots and narrow artificial intelligences creep into roles traditionally occupied by humans, we’ve got to ask ourselves: is all this automation good or bad for the job market?"

Siri is a good example of narrow intelligence. Siri operates within a limited, pre-defined range; despite being a sophisticated example of weak AI, it has no genuine intelligence, no self-awareness, and no life. In Forbes (2011), Ted Greenwald wrote: "The iPhone/Siri marriage represents the arrival of hybrid AI, combining several narrow AI techniques plus access to massive data in the cloud." On his blog in 2010, AI researcher Ben Goertzel described Siri as "VERY narrow and brittle", as evidenced by the frustrating results it returns when asked questions outside the limits of the application.

Some commentators think weak AI could be dangerous. In 2013, George Dvorsky stated via io9: "Narrow AI could knock out our electric grid, damage nuclear power plants, cause a global-scale economic collapse, misdirect autonomous vehicles and robots..." The Stanford Center for Internet and Society contrasts strong AI with weak AI, arguing that the growth of narrow AI presents "real issues."

The following two excerpts from Singularity Hub summarise weak (narrow) AI:

 * Weak AI, an artificial intelligence system that is intended to be applicable only to a specific kind of problem (e.g. computer chess) and is not intended to display human-like intelligence in general; see strong AI.
 * Weak AI hypothesis, the position in the philosophy of artificial intelligence that machines can demonstrate intelligence but do not necessarily have a mind, mental states, or consciousness.