Margarita Chli

Margarita Chli is an assistant professor and leader of the Vision for Robotics Lab at ETH Zürich in Switzerland. Chli is a leader in the field of computer vision and robotics and was on the team of researchers that developed the first fully autonomous helicopter with onboard localization and mapping. Chli is also the Vice Director of the Institute of Robotics and Intelligent Systems and an Honorary Fellow of the University of Edinburgh in the United Kingdom. Her research currently focuses on developing visual perception and intelligence in flying autonomous robotic systems.

Early life and education
Chli grew up in Cyprus and Greece. She pursued her undergraduate degree in Information and Computer Engineering at Trinity College, University of Cambridge, in the United Kingdom. After receiving her bachelor's degree, she remained at Trinity to complete a master's degree in engineering.

In 2006, Chli began her graduate work at Imperial College London under the mentorship of Andrew Davison. In the Robot Vision Group, she worked on novel ways to manipulate data to enable efficient autonomous navigation of mobile devices. Since vision-based methods are key to enabling autonomous navigation, Chli addressed the challenge of preserving precision while achieving efficient information processing. She used the principles of information theory to guide the estimation-based decisions made after gathering information from the environment, and showed that these principles improved the efficiency and consistency of the algorithms used to estimate motion and form probabilistic maps of the environment. Her algorithms also enabled dense feature mapping even in the presence of ambiguity and inconsistencies in camera dynamics. Chli completed her graduate work in 2009 and worked for one year as a research associate in the Robot Vision Group.

Career and research
After completing her PhD, Chli joined the Autonomous Systems Lab at ETH Zürich for her postdoctoral research, and soon became the lab's Deputy Director. While at ETH Zürich, she taught the Autonomous Mobile Robots course, which was later turned into a free online course that has trained thousands of researchers worldwide. In 2013, Chli was awarded a Chancellor's Fellowship and became an assistant professor at the Institute of Perception, Action and Behaviour at the University of Edinburgh, holding the fellowship for two years.

In 2015, Chli was appointed Swiss National Science Foundation (SNF) Assistant Professor in Vision for Robotics at ETH Zürich and relocated her lab from Edinburgh. She still holds an Honorary Fellowship at the University of Edinburgh. Her lab, the Vision for Robotics Lab (V4RL), focuses on developing intelligent robots to improve the quality and safety of human life, and pursues several lines of research toward these goals. With the SHERPA project, Chli aims to use intelligent and autonomous robotic systems to help with alpine search and rescue. Chli also participates in research on the myCopter project, whose goal is to design personal automated aerial transportation systems so that one could travel from work to home by air at low altitudes. Lastly, through the SFly project (Swarm of micro flying robots), Chli's team develops methods to enable micro aerial vehicles to map out unknown environments. All of these projects require substantial innovation in computer vision and robotics, demanding that robots handle large amounts of data efficiently to "see" their environments and respond quickly and autonomously. Chli's team aims to develop these intelligent systems with the capability of visual perception through the use of deep learning and robot collaboration.

Improving methods of computer vision
Chli's early work helped to improve computer vision approaches that enable the construction of autonomous robotic systems. Chli first tackled the problem of simultaneous localization and mapping (SLAM), in which a robotic system must estimate a new and changing environment while also keeping track of its own location. Since dividing the map into smaller submaps allows individual parts of a scene to be processed independently, and thus more efficiently, Chli created an innovative method to perform submap division on SLAM maps. She used hierarchical clustering to group similar features into subgroups, revealing novel insight into the structure of visual maps that helped guide the field in addressing the computational issues associated with SLAM.
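The idea of partitioning a map by clustering nearby features can be illustrated with a toy sketch. The snippet below greedily merges 2-D landmark positions by single-linkage distance; it is purely illustrative and does not reproduce Chli's actual method, which clusters features by their statistical correlations rather than raw position:

```python
import math

def cluster_landmarks(landmarks, max_dist):
    """Greedy single-linkage clustering of 2-D landmark positions
    into submaps (illustrative only; not Chli's algorithm)."""
    clusters = [[p] for p in landmarks]

    def gap(a, b):
        # single-linkage distance: closest pair across two clusters
        return min(math.dist(p, q) for p in a for q in b)

    merged = True
    while merged and len(clusters) > 1:
        merged = False
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = gap(clusters[i], clusters[j])
                if d <= max_dist and (best is None or d < best[0]):
                    best = (d, i, j)
        if best:
            _, i, j = best
            clusters[i].extend(clusters.pop(j))  # merge closest pair
            merged = True
    return clusters
```

Landmarks that fall within `max_dist` of an existing group end up in the same submap, so each submap can then be processed independently.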

The next computer vision problem Chli tackled, during her time as a postdoctoral fellow at ETH Zürich, was keypoint detection in images. She co-developed a method called BRISK (Binary Robust Invariant Scalable Keypoints), which performed much faster and at a much lower computational cost than previous keypoint detection algorithms such as SURF and SIFT.
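A key reason binary descriptors like BRISK's are fast is that they can be matched with the Hamming distance, computed with a single XOR and bit count, instead of the floating-point distances required by SIFT or SURF. The sketch below shows this matching step on descriptors packed into Python integers; it is a simplified illustration, not the BRISK implementation itself:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as ints:
    XOR the bits, then count the ones."""
    return bin(d1 ^ d2).count("1")

def match(query, train, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors,
    keeping matches below a distance threshold."""
    matches = []
    for qi, q in enumerate(query):
        ti, d = min(((i, hamming(q, t)) for i, t in enumerate(train)),
                    key=lambda pair: pair[1])
        if d <= max_dist:
            matches.append((qi, ti, d))
    return matches
```

Real BRISK descriptors are 512 bits long, but the matching logic is the same; OpenCV exposes the detector as `cv2.BRISK_create()`.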

Autonomous helicopter
During Chli's postdoctoral work at ETH Zürich, she was part of a team that developed the first autonomously flying small helicopter. The helicopter used a monocular camera as its only exteroceptive sensor, combined with inertial measurements, and was able to navigate in novel environments. It achieved SLAM with extreme robustness, enabling its autonomous flight.

Robot navigation
Chli has conducted fundamental research to improve the methods of autonomous robot navigation. One way Chli and her team worked toward improving robot navigation is by creating an algorithm to better integrate information from the multiple sensors on a robot. Previously, multi-sensor integration was challenging due to sensor outages and differences in measurement rates and delays. To address this, Chli and her team developed a framework called the MultiSensor-Fusion Extended Kalman Filter (MSF-EKF), which can process delayed, relative, and absolute information from an arbitrary number of sensors and sensor types. They tested the framework on a micro aerial vehicle (MAV) equipped with a GPS receiver as well as visual, inertial, and pressure sensors, and found that it was able to self-calibrate and efficiently re-linearize in response to state updates.
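The predict/update cycle at the heart of any Kalman-filter-based fusion scheme can be shown in one dimension. The sketch below fuses two measurements with different noise levels, standing in for, say, a barometer and a GPS reading; it is a minimal illustration, not the MSF-EKF framework, which fuses full 6-DoF states and handles delayed and relative measurements:

```python
class KalmanFilter1D:
    """Minimal 1-D Kalman filter: illustrative only, showing the
    predict/update cycle underlying multi-sensor fusion."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01):
        self.x = x0   # state estimate
        self.p = p0   # estimate variance
        self.q = q    # process noise

    def predict(self):
        # motion model is identity here; uncertainty grows over time
        self.p += self.q

    def update(self, z, r):
        # r is the measurement noise variance of whichever sensor fired;
        # a noisier sensor (larger r) gets a smaller Kalman gain
        k = self.p / (self.p + r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
```

Because each `update` takes its own noise variance `r`, sensors with different rates and accuracies can feed the same filter whenever their measurements arrive.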

With improvements and advances in SLAM capabilities, Chli became interested in creating multi-robot collaborative SLAM. Collaborative scene perception and mapping by a group of autonomous robots would serve a broad spectrum of uses, from environmental data collection to surveillance and rescue. In her framework, each individual unmanned aerial vehicle (UAV) runs a local SLAM of limited capacity, owing to its constrained onboard computational power, while a central ground server collects the information from each UAV. The central server also distributes the merged information back to the individual UAVs so that they can update their maps as well.
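The centralized architecture described above can be sketched as two roles: agents push local map data up, and the server merges and broadcasts it back. The class and method names below are illustrative placeholders, not the API of Chli's CCM-SLAM system:

```python
class CentralServer:
    """Toy ground server for collaborative SLAM: collects keyframes
    from all UAVs into one global map, then redistributes it.
    (Illustrative sketch; names are hypothetical.)"""

    def __init__(self):
        self.global_map = {}  # (uav_id, keyframe_id) -> pose

    def receive(self, uav_id, keyframes):
        for kf_id, pose in keyframes.items():
            self.global_map[(uav_id, kf_id)] = pose

    def broadcast(self, uavs):
        for uav in uavs:
            uav.update_map(self.global_map)

class UAV:
    """Each agent keeps only a limited local SLAM map onboard."""

    def __init__(self, uav_id):
        self.uav_id = uav_id
        self.local_map = {}   # keyframe_id -> pose
        self.shared_map = {}  # merged map pushed down by the server

    def track(self, kf_id, pose):
        self.local_map[kf_id] = pose

    def update_map(self, global_map):
        self.shared_map = dict(global_map)
```

The design choice this illustrates is the asymmetry: expensive map merging and storage live on the ground server, while each UAV carries only its lightweight local estimator.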

Chli has recently been working with convolutional neural networks (CNNs) to improve their ability to perform place recognition for use in robot navigation. She proposed novel CNN-based image features for place recognition, created by forming regional representations of salient regions directly from convolutional-layer activations. Her team found that the system has improved robustness to viewpoint and appearance variations, and they shared insights about the feature-encoding process that makes it robust to such external variations.
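The core idea of pooling convolutional activations over regions into a compact place descriptor, then comparing descriptors by similarity, can be sketched as follows. This grid-based max-pooling is a deliberate simplification of the landmark-based encoding in Chen et al. (2017), which mines salient regions rather than using a fixed grid:

```python
import math

def region_descriptor(activation_map, grid=2):
    """Max-pool a (H x W) convolutional activation map over a
    grid x grid set of regions to form a compact place descriptor.
    (Simplified sketch; the original work mines salient landmarks.)"""
    h, w = len(activation_map), len(activation_map[0])
    desc = []
    for gi in range(grid):
        for gj in range(grid):
            rows = activation_map[gi * h // grid:(gi + 1) * h // grid]
            desc.append(max(max(r[gj * w // grid:(gj + 1) * w // grid])
                            for r in rows))
    return desc

def cosine(a, b):
    """Cosine similarity between two descriptors; 1.0 means identical."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

Place recognition then reduces to computing the descriptor of the current view and retrieving the stored descriptor with the highest similarity.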

Awards and honors

 * 2017 Zonta Prize
 * 2016 25 Women in Robotics You Need to Know
 * 2016 International Conference on Robotics and Automation Best Associate Editor Award
 * 2001-2005 Scholarship for outstanding qualifications, Trinity College, University of Cambridge, UK
 * 2001-2005 Scholarship for outstanding performance, Cyprus State Scholarship Foundation

Select publications

 * Marco Karrer, Mohit Agarwal, Mina Kamel, Roland Siegwart, Margarita Chli: Collaborative 6DoF Relative Pose Estimation for Two UAVs with Overlapping Fields of View. ICRA 2018: 6688-6693
 * Patrik Schmuck, Margarita Chli: CCM-SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams. J. Field Robotics 36(4): 763-781 (2019)
 * Z. Chen, F. Maffra, I. Sa and M. Chli, "Only look once, mining distinctive landmarks from ConvNet for visual place recognition," 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, 2017, pp. 9–16, doi: 10.1109/IROS.2017.8202131.
 * P. Schmuck and M. Chli, "Multi-UAV collaborative monocular SLAM," 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 2017, pp. 3863–3870, doi: 10.1109/ICRA.2017.7989445.
 * S. Lynen, M. W. Achtelik, S. Weiss, M. Chli and R. Siegwart, "A robust and modular multi-sensor fusion approach applied to MAV navigation," 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, 2013, pp. 3923–3929, doi: 10.1109/IROS.2013.6696917.
 * M. W. Achtelik, S. Lynen, S. Weiss, L. Kneip, M. Chli and R. Siegwart, "Visual-inertial SLAM for a small helicopter in large outdoor environments," 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, 2012, pp. 2651–2652, doi: 10.1109/IROS.2012.6386270.
 * S. Leutenegger, M. Chli and R. Y. Siegwart, "BRISK: Binary Robust invariant scalable keypoints," 2011 International Conference on Computer Vision, Barcelona, 2011, pp. 2548–2555, doi: 10.1109/ICCV.2011.6126542.
 * M. Chli and A. J. Davison, "Automatically and efficiently inferring the hierarchical structure of visual maps," 2009 IEEE International Conference on Robotics and Automation, Kobe, 2009, pp. 387–394, doi: 10.1109/ROBOT.2009.5152530.
 * Margarita Chli, Andrew J. Davison: Active matching for visual tracking. Robotics Auton. Syst. 57(12): 1173-1187 (2009)