User:L.kamburov

'''Sensor Path Planning''' is a topic in computer vision which is discussed in CVonline

Introduction
An important area of autonomous robotics is the development of methods that produce motion plans for accomplishing specific goals whilst respecting environmental limitations and constraints. The aim of motion planning is to find a sequence of states that carries the robot from the start state to the goal state.

Classical motion planning is typically based on a known configuration space, which entails complete knowledge of both the robot kinematics and the environment – i.e. an a priori map of the environment.
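Under these assumptions, classical planning reduces to a search over the known map. A minimal sketch (the grid encoding, 4-neighbourhood, and breadth-first search are illustrative choices, not part of any particular system):

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a known occupancy grid (the a priori map).

    grid[r][c] == 0 means free space, 1 means an obstacle.
    Returns the chain of (row, col) states from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # each state's predecessor
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Reconstruct the chain of states back to the start.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None  # no collision-free path exists

# A toy map with a wall that forces a detour.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

Because the map is fully known in advance, the whole path can be computed before the robot moves – exactly the situation sensor-based planning relaxes.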

On the other hand, sensor-based path planning (in this case vision-based path planning) is a more challenging and practical approach to robot control. A robot motion plan should both utilize the sensor system's feedback and incorporate criteria for maximizing the quality of this feedback.

Sensor-based path planning is used when:
 * 1) The robot lacks a priori knowledge of the world;
 * 2) The robot has only partial and insufficient knowledge of the world;
 * 3) The world model is bound to contain inaccuracies, which can be overcome with sensor-based planning strategies;
 * 4) The world is subject to unexpected occurrences or is changing constantly.

Theory
The idea behind vision-based path planning is to obtain visual data, then process and interpret that data in order to decide where to sense next. Several strategies support this:


 * Occlusions in the image. Occlusions can serve as useful data for finding the next view. Since occlusions may cause ambiguities, a laser scanning system that observes the scene from several views has to be integrated. In the general case, occlusions occur when the reflected laser beam does not reach the camera or when the direct laser beam does not reach the scene surface.
 * Intermediate objects. In recent years, the idea of exploiting spatial relationships between objects has spread rapidly. The concept improves the search for a specific object by using intermediate objects, which helps in cases where a direct search for the object would take too long or be too complicated. Such indirect search repeatedly finds an intermediate object that has a known spatial relationship with the target object, and then searches for the target in the restricted region specified by this relationship.
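The indirect-search idea in the last bullet can be sketched as follows. The scene layout, the "sits on top of" relation, and all coordinates are hypothetical; a real system would use actual detectors rather than a dictionary of known boxes.

```python
def region_above(box, height=50):
    """Region directly above a bounding box (x, y, w, h); y grows downward."""
    x, y, w, h = box
    return (x, y - height, w, height)

def contains(region, point):
    """True if the point (px, py) lies inside the region (x, y, w, h)."""
    x, y, w, h = region
    px, py = point
    return x <= px < x + w and y <= py < y + h

def indirect_search(scene, target, intermediate):
    """Find `target` by first locating `intermediate` (a large, easy-to-find
    landmark) and then searching only the region implied by their spatial
    relationship -- here, 'target sits on top of intermediate'."""
    landmark = scene.get(intermediate)
    if landmark is None:
        return None
    region = region_above(landmark["box"])   # restrict the search space
    for name, obj in scene.items():
        if name == target and contains(region, obj["center"]):
            return obj
    return None

# Toy scene: boxes are (x, y, w, h); the cup rests on the table.
scene = {
    "table": {"box": (100, 200, 300, 80), "center": (250, 240)},
    "cup":   {"box": (180, 170, 30, 30),  "center": (195, 185)},
}
found = indirect_search(scene, "cup", "table")
```

The payoff is that the expensive target search runs over a small region above the table rather than the whole image.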


 * Automatic sensor placement. This approach improves the acquisition of visual data. Because the sensor can be moved automatically, it can provide better and multiple viewpoints of the environment. Several studies present different strategies for finding the right sensor position. One of them describes an approach that uses models of the object and the camera, subject to the following requirements: the spatial resolution is above a minimum value, all surface points are in focus, all surfaces lie within the sensor's field of view, and no surface points are occluded. The next step is to compute the three-dimensional region of viewpoints that satisfies the geometric constraint produced by each sensing requirement.
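A minimal sketch of computing such a viewpoint region for a single surface point, assuming a fixed optical axis and illustrative thresholds (the occlusion requirement would need a full scene model and is omitted):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def admissible(viewpoint, point, normal, optical_axis,
               focus_range=(0.5, 3.0),
               half_fov=math.radians(30),
               min_grazing=math.radians(30)):
    """Test one candidate viewpoint against the sensing requirements:
    the surface point must be in focus, inside the field of view, and
    viewed at an angle at least `min_grazing` above the surface plane
    (a proxy for keeping the spatial resolution usable)."""
    ray = [p - v for p, v in zip(point, viewpoint)]
    d = norm(ray)
    if not (focus_range[0] <= d <= focus_range[1]):          # in focus
        return False
    if dot(ray, optical_axis) / (d * norm(optical_axis)) < math.cos(half_fov):
        return False                                         # inside the FOV
    # Resolution: reject grazing views of the surface.
    facing = dot([-r for r in ray], normal) / (d * norm(normal))
    return facing >= math.sin(min_grazing)

# Sweep a grid of candidate camera positions in front of a surface point
# at the origin with normal +z; keep the admissible ones as the region.
point, normal = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
region = [(x * 0.5, 0.0, z * 0.5)
          for x in range(-6, 7) for z in range(1, 7)
          if admissible((x * 0.5, 0.0, z * 0.5), point, normal,
                        optical_axis=(0.0, 0.0, -1.0))]
```

Intersecting the per-requirement constraints in one predicate mirrors the idea of computing the region of viewpoints that satisfies all sensing requirements simultaneously.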


 * Planning the optimal set of views using the Max-Min principle. Since a single viewpoint may not be sufficient to obtain the data needed to accomplish a given task, additional viewpoints are introduced while acquiring a minimal number of images. The next-view planning is based on the Max-Min principle. To select a new viewing direction for the next image:
 * From the list of all possible viewing directions from which the necessary data can be taken, evaluate the amount of new information each direction guarantees under the worst-case scenario.
 * Select the viewing direction that maximizes this worst-case amount of new information.
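A minimal sketch of this worst-case selection rule. The candidate directions, the scene hypotheses, and the information-gain numbers are purely illustrative:

```python
def next_view(candidates, info_gain):
    """Max-Min next-view selection: for each candidate viewing direction,
    take the new information it yields under the worst-case hypothesis
    about the unknown scene, then pick the candidate whose worst case
    is largest.

    info_gain[view][hypothesis] -> amount of new data for that pair.
    """
    return max(candidates, key=lambda v: min(info_gain[v].values()))

# Toy example: three viewing directions, two scene hypotheses.
info_gain = {
    "left":  {"h1": 5, "h2": 1},   # great in one case, poor in the other
    "front": {"h1": 3, "h2": 3},   # guaranteed moderate gain
    "right": {"h1": 4, "h2": 2},
}
best = next_view(info_gain.keys(), info_gain)
```

Here "left" has the highest best case but the lowest worst case, so the Max-Min rule prefers "front", whose gain of 3 is guaranteed whichever hypothesis holds.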

Applications

 * Robot navigation
 * Automation
 * Driverless cars
 * Robotic surgery
 * Digital character animation
 * Protein folding
 * Safety and accessibility in [[computer-aided architectural design]]