Australian Centre for Robotic Vision

The Australian Centre for Robotic Vision, formerly known as the ARC Centre of Excellence for Robotic Vision (Australian Research Council Centre of Excellence for Robotic Vision), is an unincorporated collaborative venture with funding of A$25.6m over seven years to pursue a research agenda tackling the critical and complex challenge of applying robotics in the real world.

The centre won the 2017 Amazon Robotics Challenge with their robot Cartman.

History
The centre was funded in 2014 by the Australian Research Council (ARC) to conduct research in robotic vision, increase research capacity, train researchers, and engage with the wider community to help people learn about robotics, vision and coding.

A former collaborator was Data61 (previously known as NICTA or National ICT Australia Ltd).

Research organisations
The centre is made up of an interdisciplinary team from four Australian universities:
 * Queensland University of Technology (QUT),
 * The University of Adelaide (UoA),
 * The Australian National University (ANU), and
 * Monash University

and six international partner institutions:
 * INRIA Rennes Bretagne,
 * Georgia Institute of Technology,
 * Imperial College London,
 * The Swiss Federal Institute of Technology Zurich,
 * The University of Oxford, and
 * University of Toronto

Goals
The centre aims to achieve breakthrough science and technology in robotic vision by addressing four key research objectives: robust vision, vision and action, semantic vision, and algorithms and architectures. Together the four research objectives form the centre's research themes, which serve as organisational groupings of the centre's research projects.

Robust vision
This theme will develop new sensing technologies and robust algorithms that allow robots to use visual perception in all viewing conditions: night and day, rain or shine, summer or winter, fast moving or static.

Vision and action
This theme will create new theory and methods for using image data to control robotic systems that navigate through space, grasp objects, interact with humans and use motion to assist in seeing.

Semantic vision
This theme will produce novel learning algorithms that can both detect and recognise a large, and potentially ever-increasing, number of object classes from robotically acquired images, with increasing reliability over time.

Algorithms and architectures
This theme will create novel technologies and techniques to ensure that the algorithms developed across the themes can run in real time on robotic systems deployed in large-scale real-world applications.