Dimitri Bertsekas

Dimitri Panteli Bertsekas (Greek: Δημήτρης Παντελής Μπερτσεκάς; born 1942, Athens) is an applied mathematician, electrical engineer, and computer scientist. He is a McAfee Professor in the Department of Electrical Engineering and Computer Science of the School of Engineering at the Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, and also a Fulton Professor of Computational Decision Making at Arizona State University, Tempe.

Biography
Bertsekas was born in Greece and spent his childhood there. He studied for five years at the National Technical University of Athens, then for about a year and a half at The George Washington University in Washington, D.C., where he obtained his M.S. in electrical engineering in 1969, and for about two years at MIT, where he obtained his doctorate in system science in 1971. Prior to joining the MIT faculty in 1979, he taught for three years in the Engineering-Economic Systems Department of Stanford University and for five years in the Electrical and Computer Engineering Department of the University of Illinois at Urbana-Champaign. In 2019, he was appointed a full-time professor in the School of Computing and Augmented Intelligence at Arizona State University, Tempe, while maintaining a research position at MIT.

He is known for his research and for his twenty textbooks and monographs on theoretical and algorithmic optimization and control, reinforcement learning, and applied probability. His work ranges from theoretical and foundational studies, to algorithmic analysis and design for optimization problems, to applications such as data communication and transportation networks and electric power generation. He is featured among the top 100 most cited computer science authors in the CiteSeer academic database and digital library, and has been ranked among the top 40 scientists in the world (top 20 in the USA) in Engineering and Technology, as well as among the top 50 in the world (top 30 in the USA) in Mathematics. In 1995, he co-founded the publishing company Athena Scientific, which, among other titles, publishes most of his books.

In the late 1990s Bertsekas developed a strong interest in digital photography. His photographs have been exhibited on several occasions at MIT.

Awards and honors
Bertsekas was elevated to the grade of IEEE Fellow in 1984 for contributions to optimization, data communications networks, and distributed control. He was awarded the 1997 INFORMS Prize for Research Excellence in the Interface Between Operations Research and Computer Science for his book "Neuro-Dynamic Programming" (co-authored with John N. Tsitsiklis), the 2000 Greek National Award for Operations Research, and the 2001 John R. Ragazzini Award for outstanding contributions to education. In 2001, he was elected to the US National Academy of Engineering for "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks". In 2009, he received the INFORMS Expository Writing Award for his ability to "communicate difficult mathematical concepts with unusual clarity, thereby reaching a broad audience across many disciplines." In 2014, he received the Richard E. Bellman Control Heritage Award from the American Automatic Control Council and the Khachiyan Prize for lifetime achievements in optimization from the INFORMS Optimization Society. He also received the 2015 Dantzig Prize from SIAM and the Mathematical Optimization Society, the 2018 INFORMS John von Neumann Theory Prize (jointly with Tsitsiklis) for the books "Neuro-Dynamic Programming" and "Parallel and Distributed Computation: Numerical Methods", and the 2022 IEEE Control Systems Award for "fundamental contributions to the methodology of optimization and control" and "outstanding monographs and textbooks".

Textbooks

 * Dynamic Programming and Optimal Control (1996)
 * Data Networks (1989, co-authored with Robert G. Gallager)
 * Nonlinear Programming (1996)
 * Introduction to Probability (2003, co-authored with John N. Tsitsiklis)
 * A Course in Reinforcement Learning (2023)

Monographs

 * "Stochastic Optimal Control: The Discrete-Time Case" (1978, co-authored with S. E. Shreve), a mathematically advanced work that established the measure-theoretic foundations of dynamic programming and stochastic control.
 * "Constrained Optimization and Lagrange Multiplier Methods" (1982), the first monograph that addressed comprehensively the algorithmic convergence issues around augmented Lagrangian and sequential quadratic programming methods.
 * "Parallel and Distributed Computation: Numerical Methods" (1989, co-authored with John N. Tsitsiklis), which, among other contributions, established the fundamental theoretical structures for the analysis of distributed asynchronous algorithms.
 * "Linear Network Optimization" (1991) and "Network Optimization: Continuous and Discrete Models" (1998), which, among other topics, comprehensively discuss the class of auction algorithms for assignment and network flow optimization, developed by Bertsekas over a period of 20 years starting in 1979.
 * "Neuro-Dynamic Programming" (1996, co-authored with Tsitsiklis), which laid the theoretical foundations for suboptimal approximations of highly complex sequential decision-making problems.
 * "Convex Analysis and Optimization" (2003, co-authored with A. Nedic and A. Ozdaglar) and "Convex Optimization Theory" (2009), which provided a new line of development for optimization duality theory, a new connection between the theory of Lagrange multipliers and nonsmooth analysis, and a comprehensive development of incremental subgradient methods.
 * "Abstract Dynamic Programming" (2013), which aims at a unified development of the core theory and algorithms of total cost sequential decision problems, based on the strong connections of the subject with fixed point theory. A 3rd edition of this monograph, which extends the framework for applications to sequential zero-sum games and minimax problems, was published in 2022.
 * "Reinforcement Learning and Optimal Control" (2019), which aims to explore the common boundary between dynamic programming/optimal control and artificial intelligence, and to form a bridge that is accessible by workers with background in either field.
 * "Rollout, Policy Iteration, and Distributed Reinforcement Learning" (2020), which focuses on the fundamental idea of policy iteration, its one-iteration counterpart, rollout, and their distributed and multiagent implementations. Some of these methods have been the backbone of high-profile successes in games such as chess, Go, and backgammon.
 * "Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control" (2022), which introduces a new conceptual framework for reinforcement learning, based on off-line training and on-line play algorithms that are designed independently of each other but operate in synergy through the powerful mechanism of Newton's method.
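The auction algorithms discussed in the network optimization monographs above can be illustrated with a minimal sketch for the n-by-n assignment problem, in which persons repeatedly bid up the prices of their most valuable objects until everyone is assigned. This is a simplified illustration, not code from the books; the function name, variable names, and the example benefit matrix below are hypothetical.

```python
def auction_assignment(benefit, eps):
    """Minimal auction-algorithm sketch for the n-by-n assignment problem.

    benefit[i][j] is the benefit of matching person i with object j; with
    integer benefits and eps < 1/n, the final assignment is optimal.
    """
    n = len(benefit)
    prices = [0.0] * n        # current price of each object
    owner = [None] * n        # owner[j]: person currently holding object j
    assigned = [None] * n     # assigned[i]: object currently held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # net value of each object to person i at current prices
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=values.__getitem__)
        second = max((values[j] for j in range(n) if j != j_best),
                     default=values[j_best])
        # bid: raise the best object's price by its edge over the runner-up, plus eps
        prices[j_best] += values[j_best] - second + eps
        if owner[j_best] is not None:  # displace the previous owner
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned
```

For example, `auction_assignment([[10, 5], [7, 3]], 0.1)` returns `[0, 1]`: person 0 takes object 0 and person 1 takes object 1, the matching with optimal total benefit 13.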
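The rollout idea described above can be illustrated with a small deterministic shortest-path sketch: a base heuristic (here, a greedy rule that always follows the cheapest outgoing edge) is improved by one-step lookahead that uses the heuristic's cost as a cost-to-go estimate. The graph, function names, and costs below are purely illustrative, not drawn from the monograph.

```python
def greedy_cost(graph, node):
    """Cost of the base heuristic: always follow the cheapest outgoing edge."""
    total = 0
    while graph[node]:
        node, cost = min(graph[node], key=lambda e: e[1])
        total += cost
    return total

def rollout_path(graph, node):
    """Rollout: one-step lookahead using greedy_cost as the cost-to-go estimate."""
    path, total = [node], 0
    while graph[node]:
        # choose the edge minimizing immediate cost plus base-heuristic cost-to-go
        node, cost = min(graph[node], key=lambda e: e[1] + greedy_cost(graph, e[0]))
        total += cost
        path.append(node)
    return path, total

# Hypothetical graph on which the greedy base heuristic is suboptimal:
# each entry maps a node to a list of (successor, edge cost) pairs.
example = {'A': [('B', 1), ('C', 2)], 'B': [('D', 10)], 'C': [('D', 1)], 'D': []}
```

On this example the greedy heuristic alone reaches the terminal node with cost 11 (via B), while rollout finds the cost-3 path A-C-D; that the rollout policy performs no worse than its base heuristic is the cost-improvement property central to the monograph.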