Contact region

A contact region is a concept in robotics describing the area of contact between an object and a robot's end effector. It is used in object manipulation planning and, when sensors are built into the manipulation system, can be used to produce a surface map or contact model of the object being grasped.

In Robotics
For a robot to autonomously grasp an object, it must have an understanding of its own construction and movement capabilities (described through the mathematics of inverse kinematics) and an understanding of the object to be grasped. The relationship between the two is described through a contact model: a set of potential points of contact between the robot and the object being grasped. This, in turn, is used to create a more concrete mathematical representation of the grasp to be attempted, which can then be computed through path planning techniques and executed.
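The contact model described above can be sketched minimally as a set of candidate contact points, each with a position and surface normal, filtered down to those the robot can reach. All names, coordinates, and the reachability test below are illustrative assumptions, not a standard API:

```python
import math

# Illustrative sketch: a contact model as a set of candidate contact
# points on the object surface, each a (position, outward normal) pair.
# The reachability check is a deliberately crude stand-in for inverse
# kinematics.

def reachable(point, base=(0.0, 0.0, 0.0), max_reach=1.0):
    """Crude reachability test: is the point within the arm's reach?"""
    return math.dist(base, point) <= max_reach

def build_contact_model(candidate_points):
    """Keep only the candidate contacts the robot can actually reach."""
    return [(p, n) for p, n in candidate_points if reachable(p)]

# Candidate contacts on a small object near the robot base.
candidates = [
    ((0.3, 0.0, 0.2), (1.0, 0.0, 0.0)),   # within reach
    ((2.0, 1.0, 0.5), (0.0, 1.0, 0.0)),   # out of reach
]
model = build_contact_model(candidates)
print(len(model))  # 1
```

In a real planner, the reachability test would be replaced by a full inverse-kinematics solution for each candidate contact.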

In Mathematics
Depending on the complexity of the end effector, or through the use of external sensors such as lidar or a depth camera, a more detailed model of the surfaces of the object being grasped can be produced. In particular, sensors embedded in the fingertips of an end effector have been demonstrated to be an effective approach for producing a surface map from a given contact region. From knowledge of the position of each individual finger, the location of the sensors in each finger, and the amount of force exerted by the object onto each sensor, points of contact can be calculated. These points of contact can then be fitted to a three-dimensional ellipsoid, producing a surface map of the object.
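The two steps above, extracting contact points from fingertip force readings and fitting an ellipsoid to them, can be sketched as follows. This is a simplified illustration assuming an axis-aligned ellipsoid and a made-up force threshold; real systems would transform sensor readings through the hand's kinematics first:

```python
import numpy as np

# Illustrative sketch: fit an axis-aligned ellipsoid
#   x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
# to contact points gathered from fingertip force sensors.
# Names and the force threshold are assumptions for the example.

def contact_points(fingertips, forces, threshold=0.1):
    """Keep fingertip positions whose sensed force indicates contact."""
    return np.array([p for p, f in zip(fingertips, forces) if f > threshold])

def fit_ellipsoid(points):
    """Least-squares fit of u*x^2 + v*y^2 + w*z^2 = 1; returns semi-axes."""
    A = points ** 2
    b = np.ones(len(points))
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return 1.0 / np.sqrt(coeffs)  # semi-axes (a, b, c)

# Six contacts lying on an ellipsoid with semi-axes (2, 1, 0.5).
pts = [(2, 0, 0), (-2, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 0.5), (0, 0, -0.5)]
forces = [1.0] * 6
axes = fit_ellipsoid(contact_points(pts, forces))
print(axes)  # semi-axes approximately (2, 1, 0.5)
```

A general (non-axis-aligned) ellipsoid fit would solve for the full quadric coefficients, but the least-squares structure is the same.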

Applications
In-hand manipulation is a typical use case. A robot hand interacts with rigid and deformable objects, the latter described with soft-body dynamics. Sometimes, additional tools have to be controlled by the robot hand, for example a screwdriver. Such interaction produces a complex situation in which the robot hand maintains multiple contact points with the tool.

Apart from robotics control, tactile models are also calculated in virtual environments. When a human operator touches an object with a data glove, a heatmap of the contact points with the object is produced. This surface can be displayed in real time and allows a better understanding of motion models.
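A contact heatmap like the one described above can be sketched as a grid over a parameterized object surface, incremented each time a glove sensor registers a touch. The grid size and (u, v) surface coordinates below are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch: accumulate data-glove contact events into a 2D
# heatmap over a parameterized object surface. Contacts arrive as
# (u, v) surface coordinates in [0, 1); grid resolution is arbitrary.

def update_heatmap(heatmap, contacts):
    """Increment the heatmap cell containing each (u, v) contact."""
    n = heatmap.shape[0]
    for u, v in contacts:
        heatmap[int(u * n), int(v * n)] += 1
    return heatmap

grid = np.zeros((8, 8), dtype=int)
touches = [(0.1, 0.1), (0.12, 0.1), (0.9, 0.5)]  # two touches near one spot
grid = update_heatmap(grid, touches)
print(grid.max())  # 2
```

For real-time display, such a grid can be rendered directly as a texture on the object's surface mesh.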