User:CharlesGillingham/More/AI effect

AI is whatever hasn't been done yet
As soon as AI successfully solves a problem, the problem is no longer a part of AI.

AI applications become mainstream
Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI.

Newsmaker interview with Rodney Brooks, director of MIT's CSAIL and CTO of iRobot: Sizing up the coming robotics revolution. By Candace Lombardi. CNET News.com (May 15, 2007). "[Q] What do you think are the greatest achievements in AI right now? Brooks: I think our whole lives are surrounded by artificial intelligence, but we don't think of it that way. Google--you know, all the techniques that Google uses."

Take a moment and raise a glass to the wonderful, underappreciated AI. Andrew Kantor's CyberSpeak column. USAToday.com (June 1, 2006). "AI does more than make better games ... What Far Cry illustrates is how far artificial intelligence has come. It's so sophisticated that we almost dismiss it. In a way, that's a sign of its quality. Invisible tech is often the best tech. ... Because Google doesn't talk like HAL 9000, we don't think of it as AI. Working with its own algorithm and the data input by millions of users every time they search, Google is able to help you find information on the billions of pages of the Web in a matter of seconds. Or less. ... Another example: When I check my e-mail, Thunderbird deletes almost all of the incoming spam. It does this not by looking for obvious spam words, but by using artificial intelligence - in this case Bayesian filtering - to create a detailed profile of each message. Based on what it's learned - yes, learned - about the mail I receive, it can tell how likely any given message is to be legit. If you drive a modern car, your vehicle's artificial intelligence is doing a lot for you - quietly and behind the scenes, of course. ... So while we're waiting for our computers to have meaningful conversations with us, take a moment to appreciate the underappreciated AI - and be glad it's not trying to kill us - much."
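The Bayesian filtering Kantor describes, scoring each message against word statistics learned from past mail, can be sketched as a toy naive Bayes classifier. This is an illustrative sketch, not Thunderbird's actual implementation; the class and method names here are invented.

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Toy Bayesian spam filter: learns word frequencies from labeled
    messages and scores new mail by the posterior probability of spam."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # Build the per-class "profile": how often each word appears.
        for word in text.lower().split():
            self.counts[label][word] += 1
        self.totals[label] += 1

    def spam_probability(self, text):
        # Log-prior from class frequencies plus log-likelihood per word,
        # with add-one (Laplace) smoothing so unseen words don't zero out.
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            n_words = sum(self.counts[label].values())
            score = math.log(self.totals[label] / sum(self.totals.values()))
            for word in text.lower().split():
                score += math.log((self.counts[label][word] + 1) / (n_words + vocab))
            scores[label] = score
        # Convert log-scores back to a normalized probability of spam.
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])

f = NaiveBayesSpamFilter()
f.train("cheap pills buy now", "spam")
f.train("win money now click", "spam")
f.train("meeting agenda attached", "ham")
f.train("lunch tomorrow with the team", "ham")
print(f.spam_probability("buy cheap pills"))       # close to 1
print(f.spam_probability("team meeting tomorrow")) # close to 0
```

After a handful of training messages the filter already separates the two toy classes, which is the "learned - yes, learned" behavior the column is pointing at.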

AI Knows It’s Out There - Artificial intelligence may not be living up to Sci-Fi visions, but it has gone underground into many day-to-day systems. Red Herring (August 22, 2005 print issue). "Many people think of artificial intelligence (AI) as a high-flying 1980s tech concept that crashed and burned back in the early 1990s after a good deal of hype. The fact is, AI technology has become pervasive in much of the software we use today. Take the word processor. Start to write a memo, and your word processor will try to decide which words you really mean to type, and which icons to hide because you rarely use them. Or do an online search, and notice the ads that the search engine displays based on the topics it decides must interest you. 'The big picture is that AI is almost everywhere, but we don't call it such,' says Alex Linden, vice president in the Frankfurt office of research firm Gartner. Turn up your nose at AI and you’ll be ignoring some of the latest technologies and business opportunities. AI experts point to exciting innovations in fields such as machine vision, data mining, and the semantic web, while old-school AI technologies like neural networking and expert systems still soldier on."

Machines are catching up to human intelligence. By Robert S. Boyd. Knight Ridder Washington Bureau (October 24, 2005). "Some AI systems are famous, such as Deep Blue, the computer that beat the world chess champion Garry Kasparov, or Predators, the unmanned spy planes hovering over Afghanistan. But the machine intelligence that underlies most such systems is largely invisible, so people take their cleverness for granted. AI experts grouse that once one of their projects succeeds, people no longer consider it to be AI. According to Rodney Brooks, the director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, 'AI is everywhere around you every second of the day. People just don't notice it.'"

Digital Dullard. By Steven Cherry. IEEE Spectrum Online (January 2005). "To be sure, AI has its successes. Factory robots use machine vision to track parts. Automotive suspension systems and camcorders use fuzzy logic to smooth out jarring motions. Hospitals use large knowledge bases of drug effects and interactions to ensure that prescribed drugs don't conflict with one another. Computer programs now repeatedly beat the world's chess champions. Part of AI's image problem stems from the fact that whenever a development moves from lab to market, it's no longer artificial intelligence; it's just software."

Legacy of the AI winter
Many AI researchers find that they can procure more funding and sell more software if they avoid the tarnished name of "artificial intelligence" and instead pretend their work has nothing to do with intelligence at all.

Are you talking to me? Speech recognition: Technology that understands human speech could be about to enter the mainstream. The Economist Technology Quarterly (June 7, 2007). "Another difficulty will be encouraging sceptical consumers to give the technology another try. 'People have a lot of negative perceptions of speech technology, because the speech systems deployed first were pretty bad,' says Mr Hong. Mr Castro agrees. 'There's a history of disappointment and failed expectations,' he says. When setting up his firm, he presented his idea to some venture capitalists. They were impressed by the technology but were put off by the term 'voice recognition' which, like 'artificial intelligence', is associated with systems that have all too often failed to live up to their promises."

Behind Artificial Intelligence, a Squadron of Bright Real People. By John Markoff. The New York Times (October 14, 2005). "The five robots that successfully navigated a 132-mile course in the Nevada desert last weekend demonstrated the re-emergence of artificial intelligence, a technology field that for decades has overpromised and underdelivered. At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers. ... The feat, which won a $2 million prize from the Pentagon's Defense Advanced Research Projects Agency, was compared by exuberant Darpa officials to the Wright brothers' accomplishment at Kitty Hawk, because it was clear that it was not a fluke. ... Until recently, progress in artificial intelligence lagged so far behind computing technology that some in the field talked about an 'A.I. winter,' after commercial and government funding evaporated in the mid-1980's. Now there is talk about an A.I. spring among researchers like Sebastian Thrun, the director of the Stanford lab."

Hype cycle
How to identify high-impact future technologies. By Nestor E. Arellano. IT World Canada (October 2, 2006). "So how does one identify high-impact technologies of the future? That's the million dollar question. In fact, the 'right' answer can be worth far more than a million bucks. The challenge is finding it. Some industry insiders believe a review of the popularity cycle that ideas or products go through can provide valuable clues to tech prognosticators. Research firm Gartner Research Inc. has developed what it calls a Hype Cycle for Emerging Technologies, which is essentially a graph that charts the movement of specific technologies from inception to productivity. ... Much like a movie star, 'every technology goes through a cycle of development, initial awareness, popularity, slump, and if the technology has got what it takes…re-emergence,' said [Leo] Marland during a presentation at the recently concluded 2006 Showcase Ontario in Toronto. ... The introduction of a technology tends to build an expectation about what it could offer to users. If this expectation is not met, [Paul] Johnston said, the technology will probably disappear from the public's radar screen. 'The idea of artificial intelligence, for instance, first appeared in the 1950s, when it was thought day-to-day appliances would somehow become human-like in helping people make decisions.' Johnston noted that the term disappeared until the late 1980's when gadgets such as microwave ovens, computers and VCRs with so called 'built-in intelligence' came out. The Gartner Hype Cycle identifies such a hibernation period as the 'Trough of Disillusionment', when a product fails to meet expectations and becomes unfashionable. Even during this period, some businesses or organizations might continue working on the technology waiting for the time when its benefits and applications are further understood -- the 'Slope of Enlightenment' phase.
As a technology evolves to its more stable second and third generations, it hits the 'Plateau of Productivity'. It is at this stage that the technology achieves acceptance either by the broad public or by a niche market."

More AI Effect notes from AI topics
This material is stolen directly from AIEffect at AI Topics

Plan: sort this material by topic, write a paragraph or two about each topic (using the best quote or two), and then treat the remainder as references.

AI Revisited - Pieces of the AI Puzzle are Already Deployed, but Much Remains to be Done. Bart Eisenberg's Pacific Connection series in Software Design Magazine (December 2004). "'There's a joke in the AI community that as soon as AI works, it is no longer called AI,' says Sara Hedberg, a spokeswoman for the American Association for Artificial Intelligence. Hedberg, who has written about AI for the past 20 years or so, has done her share of trying to enlighten reporters who are ready to declare AI dead. 'Once a technology leaves the research labs and gets proven, it becomes ubiquitous to the point where it is almost invisible,' she says."

Will AI Reach the Mainstream? Artificial intelligence has so far carried high expectations and little reality, but research is slowly carrying through to the business marketplace, says analyst firm. By Jim Ericson. Line56.com (September 14, 2004). "Computers and software can now perform tasks that were impossible five years ago, so it pays to keep an open mind, according to Amreetha Vijayakumar, Frost & Sullivan Technical Insights research analyst. 'AI is slowly starting to propagate in the normal business case, especially in applications risk assessment, CRM, data mining, these applications are starting to reach users.' ... In some cases she says, AI goes unnoticed because developers don't accept that AI is used in their products."

An apple for the computer - Machines are so sophisticated they can be used to grade essays. But in some ways, artificial intelligence still lacks common sense. By Faye Flam. Philadelphia Inquirer (August 30, 2004). "[Henry] Lieberman and other artificial intelligence researchers say computers could become dramatically smarter and more humanlike in the future. The brain is just a physical machine, albeit a complicated one we don't yet understand, they argue. 'People have this illusion that what we do is magic and it will never be automated,' said University of Pennsylvania computer science professor Lyle Ungar. When he first started studying artificial intelligence, he said, no one thought a computer could play chess well enough to beat the masters. Today, computers can beat everyone at chess, he said, and we're no longer impressed."

Talking to Bill. Interview by Gary Stix. Scientific American (May 24, 2004). "On the occasion of the fourth TechFest at Microsoft Research--an event at which researchers demonstrate their work to the company’s product developers--Bill Gates talked with Scientific American’s Gary Stix on topics ranging from artificial intelligence to cosmology to the innate immune system. A slightly edited version of the conversation follows. (An Innovations column on Microsoft Research, accompanied by a shorter version of this interview, appears in the June issue of Scientific American on page 18.)" "SA: Do you see a continued relevance to the idea of artificial intelligence [AI]? The term is not used very much anymore. Some people say that's because it's ubiquitous, it's incorporated in lots of products. There are plenty of neuroscientists who say that computers are still clueless. BG: And so are neuroscientists, too. No, seriously, we don't understand the plasticity of the neurons. How does that work? There's even this recent proposal that there is, you know, prion-type shaping as part of that plasticity. We don't understand why a neuron behaves differently a day later than before. What is it that the accumulation of signals on it causes? So whenever somebody says to me, 'Oh, this is like a neural network,' well, how can someone say that? We don't really understand exactly what the state function is and even at a given point in time what the input-to-output equation looks like. So there is a part of AI that we're still in the early stages of, which is true learning. Now, there's all these peripheral problems--vision, speech, things like that--that we're making huge progress in. If you just take Microsoft Research alone in those areas, those used to be defined as part of AI. Playing games used to be defined as part of AI. For particular games, it's going pretty well, but we did it without a general theory of learning.
And the reason we worked on chess was really not because we needed somebody to play chess with other than humans; it was because we thought it might tell us about general learning. But instead we just did this minimax, high-speed static evaluation, a minimax search on trees. Fine. I am an AI optimist. We've got a lot of work in machine learning, which is sort of the polite term for AI nowadays because it got so broad that it's not that well defined. But the real core piece is this machine-learning work. We have people who do Bayesian models, Support Vector Machines, lots of things that we think will be the foundation of true general-purpose AI."
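The chess technique Gates names, minimax search over a game tree with a static evaluation applied at the cutoff, fits in a few lines. A minimal sketch, assuming positions are plain dicts with precomputed leaf values rather than a real chess engine:

```python
def minimax(node, depth, maximizing):
    """Classic minimax: alternate between the maximizing and minimizing
    player down the tree, applying a static evaluation at leaves or at
    the depth cutoff."""
    if depth == 0 or not node.get("children"):
        return node["value"]  # static evaluation of the position
    values = [minimax(child, depth - 1, not maximizing)
              for child in node["children"]]
    return max(values) if maximizing else min(values)

# Tiny two-ply game tree: the maximizer picks a branch, then the
# minimizer picks the leaf that is worst for the maximizer within it.
leaf = lambda v: {"value": v}
tree = {"children": [
    {"children": [leaf(3), leaf(5)]},   # minimizer yields 3
    {"children": [leaf(2), leaf(9)]},   # minimizer yields 2
]}
print(minimax(tree, 3, True))  # 3
```

Real chess programs add move generation, alpha-beta pruning and a hand-tuned evaluation function, but the "high-speed static evaluation, a minimax search on trees" Gates describes is exactly this recursion run very fast.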

Artificial intellect remains elusive. By Fred Reed. The Washington Times (April 22, 2004). "Whatever happened to artificial intelligence? There was a time, a couple of decades ago, when computers were expected soon to be able to behave intelligently -- to talk to people in English, answer questions, and make complex decisions. What people really had in mind was an artificial human. HAL, the computer in the movie '2001: A Space Odyssey,' comes to mind. It didn't happen. Today, although computers have advanced phenomenally in power, we see them doing very little that reasonably could be called intelligent. We still can't talk to computers about the meaning of art or why Rome fell. Why? ... First, it's harder than many thought it would be. ... Another reason for the apparent lack of machine intelligence is that, if you know how a computer does something, it no longer seems intelligent. ... An example of what might be regarded as intelligent behavior is automated translation of language. This is done by Google, for example. ... Finally, the use in connection with computers of words such as 'memory,' 'language' and 'logic' raised expectations of potential human likeness that weren't supported by reality."

What tomorrow may bring - Government challenges industry to develop software to help predict terrorist actions. By Alan Joch. Homeland Security Special Supplement to Federal Computer Week (February 23, 2004). "'Pattern recognition is linked to [artificial intelligence], which was very hyped in the '70s and '80s, and that was very detrimental,' said Sameer Samat, chief technology officer at Kofax Image Products Inc., which bought pattern-recognition software maker Mohomine Inc. last year. 'For a time, if you mentioned pattern recognition, people just hung up the phone.' But new interest, based on security necessities arising after the 2001 terrorist attacks, may bring more popularity to pattern recognition."

Stuart Russell on the Future of Artificial Intelligence. Ubiquity; Volume 4, Issue 43 (December 24 - January 6, 2004). "There's a cliché that as soon as something starts to work people no longer call it AI. There's some truth to that because once it starts to work then people can explain how it works. Once the mystery is no longer there, people say that's just an algorithm. There is a misconception that AI is only AI if it has a black box that produces intelligence in a mysterious way."

Finding gold among the speculative ideas. By Mark Samuels. Computing (January 14, 2004). "The Cambridge centre is also working on leading-edge machine learning and information retrieval. Much of the research here represents a progression from the UK's strong artificial intelligence (AI) focus. 'AI has become a term that most people don't use anymore because some of the early methods didn't prove to be as scalable as people would have wanted,' says [Andrew] Herbert [managing director of Microsoft Research in Cambridge]. 'The current revolution is to look at the problems of AI as statistical problems, and find new ways of representing images and documents and use statistics to undertake classification.'"

AI Founder Blasts Modern Research. By Mark Baard. Wired News (May 13, 2003). "AI researchers also may be the victims of their own success. The public takes for granted that the Internet is searchable and that people can make airline reservations over the phone -- these are examples of AI at work. 'It's a crazy position to be in,' said Martha Pollack, a professor at the Artificial Intelligence Laboratory at the University of Michigan and executive editor of the Journal of Artificial Intelligence Research. 'As soon as we solve a problem,' said Pollack, 'instead of looking at the solution as AI, we come to view it as just another computer system.'"

Machine visionary - Author and inventor Ray Kurzweil is an authority on artificial intelligence. Interviewed by Hamish Mackintosh. The Guardian (February 6, 2003). Here's a sample of what you'll find: "[Q:] 'Is AI experiencing a renaissance?' [A:] 'We're in an era of what I'd call 'narrow AI', where systems are performing intelligent functions that used to require human intelligence. Intelligent systems can fly and land airplanes or make financial investment decisions. These were research projects 10 years ago and are now in widespread practical application and have become integrated into our information infrastructure. Every time an application works, it's no longer called AI - it becomes a separate field. It's speech recognition, character recognition, robotics, machine vision, etc.'"

Europe - Are robots after your job? After the hype, a new generation of artificial intelligence systems shows promise for solving real business problems, says Business Europe. Available from ebusinessforum.com (December 11, 2002). "This practical business focus is not the only reason AI is undergoing a renaissance. 'Today companies prefer to avoid the AI moniker,' said Shashi Buluswar, co-author of the McKinsey report. 'Now that the technology can demonstrate its applicability to real business issues where in the past its appeal was more conceptual, the term 'business intelligence' is preferred.'"

Darpa puts thought into cognitive computing. By R. Colin Johnson. EE Times (December 10, 2002). "'People say that neural networks and AI were not successful because we don't have humanoid robots walking around, but they don't realize that there are hundreds of applications of this technology that we use every day without thinking,' [Ronald] Brachman said. 'Machine-learning techniques are now built into a variety of commercial systems, finding credit card fraud, evaluating mortgage applications, detecting illegal telephone calls and recognizing speech.' He maintained that 'AI planning algorithms were successful in Desert Storm and are being used every day by the military in complicated logistic situations.'"

Artificial Intelligence. By Kristen Kennedy. Technology & Learning (November 2002). "Just as virtual reality applications have become so much a part of our daily lives we don't even recognize the science behind the display, so too have artificial intelligence-based technologies. For instance, voice and character recognition are now invaluable aids in assisting struggling readers and writers with text entry and word recognition. Script writing and recognition intelligence is powering your handheld, translating the chicken scratch of Graffiti into readable form. Toward the goal of making computers that think like humans, AI is now making new inroads into K-12 education with writing assessment engines and smart tutoring systems."

AI Center brings hi-tech degrees to University [of Georgia]. By Steve Saussy. Red and Black (September 18, 2002). "'Most people don't realize there is lots of artificial intelligence in, for example, Microsoft Windows,' [Michael] Covington said. Many people only think of robots when artificial intelligence is brought up, he said, but most of the current software available today uses artificial intelligence."

Blinded by Science. By Patricia Leigh Brown. The New York Times (July 14, 2002: Week in Review, Section 4, page 3). "In his forthcoming book 'I'm Working on That: A Trek From Science Fiction to Science Fact,' William Shatner explores the reciprocity between Starship Enterprise fantasy and real-life scientific breakthroughs. 'What was suggested 30 years ago in 'Star Trek' is now old hat,' he said in a telephone interview. ... As a culture, we have become writers of our own fantasy saga in which pacemakers, cloning, the Internet, speech recognition software and the like are merely part of the scenery."

The return of artificial intelligence. By Corey Booth and Shashi Buluswar. The McKinsey Quarterly, 2002 Number 2 Web exclusive (no-fee reg. req'd). "Artificial intelligence (AI) has come in and out of vogue more times than Madonna in the past 20 years: it has been hyped and then, having failed to live up to the hype, been discredited until being revived again. In the late 1990s, an observer at a World Wide Web technology conference reported that most of the proposals there had been floated, several years earlier, under the AI moniker and were now being recycled - good technology solutions looking for real business problems to solve."

Lord of the Robots: Q&A with Rodney Brooks. Technology Review (April 2002). "TR: What is the state of A.I. research? BROOKS: There's this stupid myth out there that A.I. has failed, but A.I. is everywhere around you every second of the day. People just don't notice it. You've got A.I. systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an A.I. scheduling system. Every time you use a piece of Microsoft software, you've got an A.I. system trying to figure out what you're doing, like writing a letter, and it does a pretty damned good job. Every time you see a movie with computer-generated characters, they're all little A.I. characters behaving as a group. Every time you play a video game, you're playing against an A.I. system."

AI by another name. The Economist (March 14, 2002). "Ironically, in some ways, AI was a victim of its own success. Whenever an apparently mundane problem was solved, such as building a system that could land an aircraft unattended, or read handwritten postcodes to speed mail sorting, the problem was deemed not to have been AI in the first place. 'If it works, it can't be AI,' as Dr Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to blue-sky research that was still years away from commercialisation. Researchers joked that AI stood for 'almost implemented'. Meanwhile, the technologies that worked well enough to make it on to the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. Yet all three once fell well within the umbrella of AI research. ... Not everyone is rushing to embrace this once-stigmatised term, however. ...Max Thiercy, head of development at Albert, a French firm that produces natural-language search software, also avoids the term AI. 'I consider the term a bit obsolete,' he says. 'It can make our customers frightened.' This seems odd, because the firm's search technology uses a classic AI technique, applying multiple algorithms to the same data, and then evaluates the results to see which approach was most effective. Even so, the firm prefers to use such terms as 'natural language processing' and 'machine learning'."
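The Economist describes Albert's "classic AI technique" as applying multiple algorithms to the same data and then evaluating the results to see which approach was most effective. A generic sketch of that selection strategy (the function and the toy candidate algorithms here are invented for illustration, not Albert's actual software):

```python
def best_approach(data, labels, approaches):
    """Run several candidate algorithms over the same labeled data and
    keep the one whose predictions score best."""
    def accuracy(predict):
        return sum(predict(x) == y for x, y in zip(data, labels)) / len(data)
    return max(approaches.items(), key=lambda kv: accuracy(kv[1]))

# Two toy "algorithms" for classifying numbers as large (True) or not.
approaches = {
    "threshold": lambda x: x > 10,   # correct on all four examples
    "parity": lambda x: x % 2 == 0,  # wrong on 8
}
data = [3, 8, 12, 20]
labels = [False, False, True, True]
name, fn = best_approach(data, labels, approaches)
print(name)  # threshold
```

The same pattern, run every candidate and let an evaluation metric pick the winner, shows up throughout machine learning as model selection.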

It's Alive! From airport tarmacs to online job banks to medical labs, AI is everywhere. By Jennifer Kahn. WIRED (March 2002 / 10.03; pages 72 - 77). "Quietly, though, AI researchers were making more than progress - they were making products. It's a trend that's been easy to miss, because once the technology is in use, nobody thinks of it as AI anymore. 'Every time we figure out a piece of it, it stops being magical; we say, "Oh, that's just a computation,"' laments Rodney Brooks, the director of MIT's Artificial Intelligence Laboratory. ... In truth, we may never chat up a computer at a cocktail party. But in smaller yet significant ways, artificial intelligence is already here: in the cruise control of cars... The future is all around us."

Computers try to outthink terrorists. By Bruce V. Bigelow. The San Diego Union-Tribune (January 13, 2002). Also available from UC San Diego. "Once known as 'artificial intelligence' -- a term that many computer scientists disdain -- such technology now is used to detect fraudulent financial transactions, such as money laundering, and to monitor industrial processes for irregularities."

Trapped In The Future - How Long Before AI Apps Really Hit Their Stride? By William Van Winkle. Computer Power User. Volume 2, Issue 1 (January 2002): pages 50-54 in print issue. "In fact, AI -- or what was once considered AI until it became commonplace -- is now almost everywhere."

Almost Human? Artificial Intelligence is back in the hearts and minds of technology gurus. By Robert J. Derocher. Insight (The Magazine of the Illinois CPA Society, September 2001). "'Software is just getting smarter and smarter and smarter.' [Carol] Brown agrees, saying that accounting firms have 'integrated AI into their normal software, so they don't think about it as AI anymore.'"

On the bleeding edge? Call 911, or a VAD. An editorial by Alison Eastwood. CBiz Magazine (Channel Business, formerly Canadian Computer Reseller; May 26, 1999). "I asked a woman at a systems management software development firm whether her company's solution used AI to diagnose problems. [Pause.] 'We don't like to use the words 'artificial intelligence,'' she said brightly. Oh? Well, how about the words 'neural networks'? 'Uh, no. Because it means so many different things, and then we get all these technical people asking us what algorithms we use.'"

"Our expectations for a technology rise with its advancement." - Henry Petroski. From page 83 of his book, The Evolution of Useful Things. (New York: Vintage Books, 1994).

The Innovative Applications of Artificial Intelligence Conference. By Bruce Buchanan and Sam Uthurusamy. AI Magazine 20(1): Spring 1999, 11-12. "One measure of the growth of practical applications is the number of U.S. patents mentioning the term artificial intelligence and related terms (knowledge based, fuzzy logic, expert system, genetic algorithm). According to the primary examiner for AI in the U.S. Patent Office, Robert Downs, a decade ago only about 100 patents mentioned AI specifically; last year, about 1700 mentioned artificial intelligence, with another 3900 or so mentioning related terms. About 2200 patents are specifically classified in the Patent Office's class for artificial intelligence, which means that the invention or technique is specifically directed to something new in knowledge-based systems, machine learning, fuzzy logic, or neural networks. Other patents using AI techniques might be classified in an area of application such as medicine. These numbers confirm another important trend, which was noted by Reid Smith and others in the context of earlier IAAI conferences: AI technology is more likely to be embedded in some larger system than embodied in a stand-alone system. The difference between the 5600 patents mentioning AI and the 2200 specifically classified as AI is about 3400 patents in which AI contributes something in a larger context. ... Successful applications of AI are part of, and buried in, larger systems that probably do not carry the label AI inside." [Emphasis added.]
