Thomas Dean (computer scientist)

Thomas L. Dean (born 1950) is an American computer scientist known for his work in robot planning, probabilistic graphical models, and computational neuroscience. He was one of the first to introduce ideas from operations research and control theory to artificial intelligence. In particular, he introduced the idea of the anytime algorithm and was the first to apply the factored Markov decision process to robotics. He has authored several influential textbooks on artificial intelligence.

He was a professor at Brown University from 1993 to 2007, holding roles including department chair, acting vice president for computing and information services, and deputy provost. In 2006 he started working at Google, where he was instrumental in helping the Google Brain project get its start. He is currently an emeritus professor at Brown and a lecturer and research fellow at Stanford.

Control
Dean and Wellman's book Planning and Control provided a much-needed bridge between research in AI on discrete-time symbolic methods for goal-directed planning and decision making and continuous-time control-theoretic methods for robotics and industrial control systems. Basic control concepts including "observability", "stability", and "optimality" are introduced, and many of the most important theoretical results are presented and explained. In a book review in the Artificial Intelligence Journal, James Hendler wrote that the book serves as a 'Rosetta Stone' for translation between the fields of robotics and AI.

Anytime Algorithms
The term anytime algorithm was coined by Dean and Boddy in the late 1980s. The focus of Dean and Boddy's work in this area has been on deliberation scheduling applied to time-dependent planning problems. Deliberation scheduling is the explicit allocation of resources to tasks (in most cases anytime algorithms) so as to maximize the total value of an agent's computation. Time-dependent planning problems are defined to be planning problems where the time available for responding to events varies from situation to situation. In addition to defining the basic concepts, Dean and Boddy provided theoretical analyses and applications in robotics and operations research.
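The defining property of an anytime algorithm is that it can be interrupted at any point and still return its best answer so far, with answer quality improving monotonically as it is given more time. A minimal sketch of that contract (a hypothetical candidate-evaluation loop, not an algorithm from Dean and Boddy's papers):

```python
import random
import time

def anytime_maximize(candidates, score, deadline):
    """Illustrative anytime search: evaluate candidates one at a time,
    always keeping the best answer seen so far, so the loop can be cut
    off at the deadline and still return a usable, monotonically
    improving result."""
    best, best_score = None, float("-inf")
    for c in candidates:
        if time.monotonic() >= deadline:
            break  # interrupted: fall back to the best answer so far
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best, best_score

# Usage: grant the algorithm 10 ms of deliberation time; the quality of
# the returned answer depends on how many candidates it evaluated.
random.seed(0)
cands = [random.random() for _ in range(100_000)]
result, quality = anytime_maximize(cands, lambda x: x, time.monotonic() + 0.01)
```

Deliberation scheduling then sits one level above loops like this, deciding how much of the available time each such task should receive so that the agent's overall expected utility is maximized.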

Markov Processes
Dean played a leading role in the adoption of the framework of Markov decision processes (MDPs) as a foundational tool in artificial intelligence. In particular, he pioneered the use of AI representations and algorithms for factoring complex models and problems into weakly-interacting subparts to improve computational efficiency. His work in state estimation emphasized temporal causal reasoning and the integration with probabilistic graphical models. His work in control includes state-space partitioning, hierarchical methods, and model minimization. This line of work is clearly summarized by a highly influential paper jointly written with Craig Boutilier and Steve Hanks.
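As a minimal illustration of the MDP framework itself, the following sketch runs plain value iteration on a hypothetical two-state, two-action problem (a flat MDP, not one of Dean's factored representations):

```python
# Hypothetical two-state MDP. P[a][s] lists (next_state, probability)
# pairs for taking action a in state s; R[s][a] is the immediate reward.
P = {
    "stay": {0: [(0, 0.9), (1, 0.1)], 1: [(1, 0.9), (0, 0.1)]},
    "move": {0: [(1, 0.8), (0, 0.2)], 1: [(0, 0.8), (1, 0.2)]},
}
R = {0: {"stay": 0.0, "move": 1.0}, 1: {"stay": 2.0, "move": 0.0}}
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality backup
# V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].
V = {0: 0.0, 1: 0.0}
for _ in range(200):
    V = {
        s: max(
            R[s][a] + gamma * sum(p * V[s2] for s2, p in P[a][s])
            for a in P
        )
        for s in V
    }

# Extract the greedy policy with respect to the converged values.
policy = {
    s: max(P, key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[a][s]))
    for s in V
}
```

Factored approaches of the kind Dean championed avoid enumerating the state space explicitly, as this loop does, by exploiting structure in the transition and reward models; on large problems that structure is what makes solution computationally feasible.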

AI Textbook
Working with his collaborators James Allen and Yiannis Aloimonos, specializing respectively in natural language processing and computer vision, Dean wrote one of the first modern AI textbooks incorporating probability theory, machine learning, and robotics, and placing traditional AI topics such as symbolic reasoning and knowledge representation using the predicate calculus within a broader context. The first and only edition, published in December 1994, initially competed with the first edition of Russell and Norvig's Artificial Intelligence: A Modern Approach that came out in 1995, but was eclipsed by the second edition of the Russell and Norvig text released in 2003.

Robotics
As co-chair of the 1991 AAAI Conference, Dean organized a press event featuring mobile robots carrying trays of canapés and barely avoiding the participants. The coverage on the evening news was enthusiastically positive, and in 1992, Dean and Peter Bonasso, with feedback from the robotics community, created the AAAI Robotics Competition, featuring events in which robots performed tasks in the home, the office, and disaster sites. The competition was still being held in 2010.

Stanford Course
After starting as a research scientist at Google, Dean was appointed as a consulting professor at Stanford and began teaching a course with the title Computational Models of the Neocortex. During the next fifteen years he invited top neuroscientists from all over the world to give talks and advise students working on class projects. Several of the classes resulted in papers coauthored by students that led to research projects at Google.

Neuromancer Project
In an effort to create a team focusing on scalable computational neuroscience, Dean and his students at Stanford produced a white paper entitled Technology Prospects and Investment Opportunities for Scalable Neuroscience that served as the basis for building a team of software engineers and computational neuroscientists focusing on connectomics. Early on, Dean worked with Christof Koch, then chief scientist at the Allen Institute for Brain Science, to develop a partnership, and hired Viren Jain from HHMI to serve as the technical lead for the project.

Dean and Jain expanded the team to more than ten software engineers and participated in the planning of the NIH BRAIN Initiative. As their computer vision and machine learning tools improved, the team sought out and developed additional partnerships with Gerry Rubin at HHMI's Janelia Research Campus, Jeff Lichtman at Harvard, and Winfried Denk at the Max Planck Institute of Neurobiology. Each of these collaborations would lead to high-accuracy, dense reconstructions of neural tissue samples in different organisms, repeatedly surpassing the state of the art in size and quality. Viren Jain is currently the project manager and lead scientist for the ongoing effort at Google. The resulting data on brain connectivity, including the 'hemibrain' connectome, a highly detailed map of neuronal connectivity in the fly brain, and the 'H01' dataset, a 1.4 petabyte rendering of a small sample of human brain tissue, were publicly released.

Google Brain
Dean led some of the earliest investigations into the use of neural networks at Google, work that directly led to the creation of the Google Brain project. He experimented with approaches for using hardware acceleration to overcome performance limitations in building industrial-scale web services, and collaborated with Dean Gaudet on the Google Infrastructure and Platforms Team to make the case for introducing graphics processing units (GPUs) in Google data centers. He worked closely with Vincent Vanhoucke, who led the perception research and speech recognition quality team, to demonstrate the value of GPUs for training and deploying deep neural network architectures in the cloud, focusing on speech recognition for Google Search by Voice.

University Administration
Dean served as the Deputy Provost of Brown University from 2003 to 2005, as the chair of Brown's Computer Science Department from 1997 until 2002, and as the Acting Vice President for Computing and Information Services from 2001 until 2002. As Deputy Provost he helped develop and launch new multidisciplinary programs in genomics and the brain sciences as well as oversee substantial changes in the medical school and university libraries.

Professional Leadership
Dean was named a fellow of AAAI in 1994 and an ACM fellow in 2009. He has served on the Executive Council of AAAI and the Computing Research Association Board of Directors. He was a recipient of an NSF Presidential Young Investigator Award in 1989. He served as program co-chair for the 1991 National Conference on Artificial Intelligence and the program chair for the 1999 International Joint Conference on Artificial Intelligence held in Stockholm. He was a founding member of the Academic Alliance of the National Center for Women and Information Technology and a former member of the IJCAI Inc. Board of Trustees.