Adrian Walker (computer scientist)

Adrian David Walker is an American computer scientist, born in London, England.

Education
Adrian Walker attended Dartington Hall School, an experimental boarding school in England where attendance at classes was optional. He obtained a bachelor's degree in electrical engineering at Sheffield University (where he also chaired the Arts Society and edited a poetry magazine), and a master's degree in systems engineering from the University of Surrey. He next obtained a PhD in computer science from the State University of New York.

Career
He was an assistant professor at Rutgers University in New Jersey, then a Member of Technical Staff at Bell Labs. He moved to the IBM Almaden Research Center in California as a Research Staff Member, then to the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, as manager of Principles and Applications of Logic Programming. After 17 years at IBM, he formed his own company, where he works on Internet Business Logic, a system for social knowledge acquisition and use in executable English.

Selected work

Walker's early work established a novel correspondence between stable patterns in formalised biological systems and the well-known Chomsky hierarchy of languages: regular, context-free, and context-sensitive. He continued in grammar-based research by showing how Bayes' theorem can be used to fit a stochastic regular grammar to a collection of data, a result that can be used to inductively infer hidden Markov models.

Walker next showed that, under certain practically useful assumptions, it is possible to compute the semantics of sets of syllogism-like rules in open-vocabulary, largely open-syntax English, in such a way as to answer English questions put to databases. This relaxes an onerous assumption made in many computational natural language understanding systems, namely that the vocabulary must be narrowly restricted to obtain a useful level of understanding.

A logical theory of knowledge developed in this line of work is applied in a system on the Web (reference 7, below) that combines three kinds of semantics: (a) data, as in SQL or the Resource Description Framework; (b) inference; and (c) English. The system answers questions over networked databases and explains the results in hypertexted English. The subject knowledge needed to do this (e.g. knowledge about the oil industry, or about energy independence) can be captured in social network style, by typing executable English into browsers. This contrasts with other social media, such as Twitter and Facebook, in which knowledge written in English is readable but cannot be executed as a computer program.