User talk:Furkanbuet06


 * Your contribution to industrial robot was far too long. It exceeded even the original article in size. It included opinions beyond facts and covered issues concerned with general robotics rather than industrial robots in particular (such as wall following, mobility, and specific designs such as Robonaut). Many issues, such as accuracy and drives, were already covered. You wrote a good, self-contained article that is not appropriate in an encyclopedia; you should try a magazine.
 * You need to talk to other contributors before you make a major edit. Robotics1 (talk) 10:17, 29 April 2011 (UTC)

Robot & Automation

Industrial robots are neither as fast nor as efficient as special-purpose automated machine tools. However, they are easily retrained or reprogrammed to perform an array of different tasks, whereas an automated special-purpose machine tool can work on only a very limited class of tasks and is designed to do one task very efficiently.

Choosing among Humans, Robots, and Automation

Some rules can help suggest significant factors to keep in mind:

 * The first rule to consider is known as the Four D's of Robotics: is the task dirty, dull, dangerous, or difficult?
 * The second rule recalls the fourth law of robotics: a robot may not leave a human jobless.
 * A third rule involves asking whether you can find people who are willing to do the job.
 * A fourth rule is that the use of robots or automation must make short-term and long-term economic sense.

A task that has to be done only once or a few times, and is not dangerous, is probably best done by a human. A task that has to be done a few hundred to a few hundred thousand times, however, is probably best done by a flexible automated machine such as an industrial robot. A task that has to be done a million times or more is probably best handled by building a special-purpose hard automated machine to do it.

Industrial Robot Applications

Material handling applications, like:

 * Material transfer applications
 * Machine loading/unloading applications

Processing applications, for example:

 * Welding
 * Painting
 * Assembly
 * Inspection
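The repetition-count rule of thumb above can be captured in a short decision function. This is an illustrative sketch only: the enum, the function name, and the cutoff of one hundred repetitions are assumptions filling the gap the text leaves between "a few times" and "a few hundred times"; the one-million cutoff comes from the passage.

```c
#include <stdbool.h>

/* Hypothetical encoding of the rule of thumb from the text:
 * pick an executor based on how often a task must be done and
 * whether it is dangerous (one of the Four D's). */
typedef enum { HUMAN, INDUSTRIAL_ROBOT, HARD_AUTOMATION } executor_t;

executor_t choose_executor(long repetitions, bool dangerous)
{
    if (repetitions >= 1000000L)
        return HARD_AUTOMATION;    /* high volume: build a dedicated machine */
    if (repetitions >= 100L || dangerous)
        return INDUSTRIAL_ROBOT;   /* medium volume, or unsafe for people */
    return HUMAN;                  /* one-off, safe tasks */
}
```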

A sensor is an electronic device that converts a physical quantity (temperature, pressure, humidity, etc.) into an electrical signal. Sensors in robotics are used both for internal feedback control and for external interaction with the outside environment.

Desirable Features of Sensors

 * Accuracy
 * Precision
 * Operating range
 * Speed of response
 * Calibration
 * Reliability
 * Cost
 * Ease of operation

Potentiometers

The general idea is that the device consists of a movable tap along two fixed ends. As the tap is moved, the resistance changes. The resistance between the two ends is fixed, but the resistance between the movable part and either end varies as the part is moved. In robotics, pots are commonly used to sense and tune position for sliding and rotating mechanisms.

Switch Sensors

Switches are the simplest sensors of all. They work without processing, at the electronics level. Switches measure physical contact. Their general underlying principle is that of an open vs. closed circuit: if a switch is open, no current can flow; if it is closed, current can flow and be detected.

Principle of Switch Sensors

 * Contact sensors: detect when the sensor has contacted another object.
 * Limit sensors: detect when a mechanism has moved to the end of its range.
 * Shaft encoder sensors: detect how many times a shaft turns by having a switch click (open/close) every time the shaft turns.
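Since a pot is a voltage divider whose output is proportional to shaft position, reading one reduces to a single scaling step. The following sketch assumes a hypothetical 10-bit ADC and 270 degrees of electrical travel; neither value comes from the text.

```c
#define ADC_MAX        1023.0  /* full-scale count of an assumed 10-bit ADC */
#define POT_TRAVEL_DEG  270.0  /* assumed electrical travel of the pot */

/* convert a raw potentiometer reading into a joint angle in degrees */
double pot_angle_deg(int raw_reading)
{
    return (raw_reading / ADC_MAX) * POT_TRAVEL_DEG;
}
```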

The actions of the individual joints must be controlled in order for the manipulator to perform a desired motion. The robot's capacity to move its body, arm, and wrist is provided by the drive system used to power the robot. The joints are moved by actuators powered by a particular form of drive system. Common drive systems used in robotics are electric drive, hydraulic drive, and pneumatic drive.

Drive Systems

The drive system determines the speed of the arm movement, the strength of the robot, dynamic performance, and, to some extent, the kinds of application.

Robot Actuator Qualities

An actuator should have enough power to accelerate and decelerate the links and carry the loads, and be light, economical, accurate, responsive, reliable, and easy to maintain.

Characteristics of Actuating Systems: Stiffness vs. Compliance

Stiffness is the resistance of a material against deformation. Hydraulic systems are very stiff and noncompliant; pneumatic systems are compliant.

Use of Reduction Gears

Gears are used to increase the torque and reduce the speed. Hydraulic actuators can be directly attached to the links. This simplifies the design, reduces the weight, reduces the cost, reduces the rotating inertia of the joints, reduces backlash, reduces noise, and increases the reliability of the system. Electric motors are normally used in conjunction with reduction gears to increase their torque and to decrease their speed. This increases the cost, the number of parts, the backlash, and the inertia of the rotating body, but it increases the resolution of the system.

Applications

Electric motors are the most commonly used actuators. Hydraulic systems were very popular for large robots. Pneumatic cylinders are used in on/off type joints, as well as for insertion purposes.

Types of Actuators
 * Electric Motors, like: Servomotors, Stepper motors or Direct-drive electric motors
 * Hydraulic actuators
 * Pneumatic actuators
 * Weight, Power-to-weight Ratio.
 * Operating Pressure.
 * Stiffness vs. Compliance.
 * Use of reduction gears.
 * The stiffer the system, the larger the load that is needed to deform it.
 * The more compliant the system, the more easily it deforms under the load.
 * Stiffness is directly related to the modulus of elasticity of the material.
 * Stiff systems have a more rapid response to changing loads and pressures and are more accurate.
 * A working balance is needed between these two competing characteristics.
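The reduction-gear trade-off described above (more torque, less speed) follows a simple ideal relationship, sketched below with losses ignored; the function names are illustrative.

```c
/* Ideal reduction gear of ratio N:1 -- torque is multiplied
 * by N while speed is divided by N (friction and other
 * losses ignored). */
double output_torque(double motor_torque, double ratio)
{
    return motor_torque * ratio;
}

double output_speed(double motor_speed_rpm, double ratio)
{
    return motor_speed_rpm / ratio;
}
```

For example, a 100:1 gearhead turns 0.5 units of motor torque into 50 at the output while cutting 3000 rpm to 30 rpm.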

Components of an Industrial Robot

 * Physical parts or anatomy
 * Built-in instructions or instinct
 * Learned behavior or task programs

Physical Parts of an Industrial Robot

 * Mechanical part or manipulator (body, arm, wrist)
 * End effector (tool or gripper)
 * Actuators
 * Controller (sensors, processor)
 * Power supply
 * Vehicle (optional)

Robot Anatomy

The manipulator is constructed of a series of joints and links. A joint provides relative motion between the input link and the output link. Each joint provides the robot with one degree of freedom.

Robot Joints

Linear, rotational, twisting, and revolving.

Degrees of Freedom

A point location in space is specified by three coordinates (P). An object's location in space is specified by the location of a selected point on it (P) and the orientation of the object (R). Six degrees of freedom (P, R) are needed to fully place the object in space and orient it.

Robot Hand Location

The arm joints are used to position the end effector; the wrist joints are used to orient it.

Robot Languages

Robotic languages range from machine-level to high-level languages. High-level languages are either interpreter based or compiler based.

Levels of Robot Languages

 * Microcomputer machine language level
 * Point-to-point level
 * Primitive motion level
 * Structured programming level
 * Task-oriented level

Industrial Robot Characteristics

Lifting power (payload), reach (workspace), repeatability, reliability, manual/automatic control, memory, library of programs, safety interlocks, speed of operation, computer interface, and ease of maintenance.
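The degrees-of-freedom idea above says six numbers fully place an object: three position coordinates (P) and three orientation values (R). A sketch of that representation, with illustrative field names (roll/pitch/yaw is one common but not unique choice of orientation parameters):

```c
/* Six degrees of freedom: position P plus orientation R. */
typedef struct {
    double x, y, z;            /* position P */
    double roll, pitch, yaw;   /* orientation R */
} pose_t;

int pose_dof(void)
{
    return 3 + 3;   /* 3 for position, 3 for orientation */
}
```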

Basic Concepts of Robot Control

Robot Control System Task

The task of a robot control system is to execute the planned sequence of motions and forces in the presence of unforeseen errors. Errors can arise from:

 * inaccuracies in the model of the robot,
 * tolerances in the workpiece,
 * static friction in joints,
 * mechanical compliance in linkages,
 * electrical noise on transducer signals, and
 * limitations in the precision of computation.

Controlled Variables

In both Cartesian and joint spaces, we require precise control of position, velocity, force, and torque.

Robot Control Techniques

Open-Loop Control (Nonservo Control)

No feedback! This is basic control suitable for systems with simple loads where tight speed control is not required. There are no position or rate-of-change sensors; on each axis there is a fixed mechanical stop to set the endpoint of the robot. Such systems are called "stop-to-stop" or "pick-and-place" systems. The desired change in a parameter (such as the joint angles) is calculated, the actuator energy needed to achieve that change is determined, and that amount of energy is applied to the actuator. If the model is correct and there are no disturbances, the desired change is achieved.

Feedback Control

A feedback control loop determines rotor position and/or speed from one or more sensors. The position of the robot arm is monitored by a position sensor, and power to the actuator is altered so that the movement of the arm conforms to the desired path in terms of direction and/or velocity. Errors in positioning are corrected.

Feedforward Control

Feedforward control uses a model to predict how much action to take, or the amount of energy to use. It is used to predict actuator settings for processes where feedback signals are delayed and in processes where the dynamic effects of disturbances must be reduced.

Adaptive Control

Adaptive control uses feedback to update the model of the process based on the results of previous actions. The measurements of the results of previous actions are used to adapt the process model to correct for changes in the process and errors in the model.
This type of adaptation corrects for errors in the model due to long-term variations in the environment, but it cannot correct for dynamic changes caused by local disturbances.
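The distinction between feedforward and feedback control described above can be sketched in one line of arithmetic: the command combines a model-based prediction with a correction proportional to the measured error. All names and gains here are illustrative, not from the text.

```c
/* One step of a combined feedforward + proportional feedback
 * controller. With a perfect model and no disturbance, the
 * measured value equals the desired one, the feedback term is
 * zero, and the command is pure feedforward. */
double control_command(double desired, double measured,
                       double model_gain, double feedback_gain)
{
    double feedforward = model_gain * desired;                 /* predicted effort */
    double feedback    = feedback_gain * (desired - measured); /* error correction */
    return feedforward + feedback;
}
```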

Robot Control

The most common kind of robot failure is not mechanical or electronic failure but rather failure of the software that controls the robot. For example, if a robot were to run into a wall and its front touch sensor did not trigger, the robot would get stuck trying to drive through the wall (unless the robot is a tank). The robot is not physically stuck, but it is "mentally stuck": its control program does not account for this situation and does not provide a way for the robot to get free. Many robots fail in this way. This chapter will discuss some of the problems typically encountered when using robot sensors and present a framework for thinking about control that may help prevent control failure of ELEC 201 robots.

A few words of advice: most people severely underestimate the amount of time necessary to write control software. A program can be hacked together in a couple of nights, but if a robot is to deal with a spectrum of situations in a capable way, more work will be required. Also, it is very difficult to develop final software while still making hardware changes. Any hardware change will necessitate software changes; some of these changes may be obvious, but others will not. The message is to finalize mechanical and sensor design early enough to develop software based upon a stable hardware platform.

Basic Control Methods

Feedback Control

Figure 11.1: Driving along a Wall Edge

Suppose the robot should be programmed to drive with its left side near a wall, following the wall edge (see Figure 11.1). Several options exist to accomplish this task. One solution is to orient the robot exactly parallel to the wall and then drive it straight ahead. However, this simple solution has two problems: if the robot is not initially oriented properly, it will fail; also, unless the robot is extremely proficient at driving straight, it will eventually veer from its path and drive either into the wall or into the game board. The common and effective solution is to build a negative feedback loop. With continuous monitoring and correction, a goal state (in this case, maintaining a constant distance from a wall) can be achieved.

Figure 11.2: Using Two Hall Effect Sensors to Follow Wall

Several of the sensors provided in the ELEC 201 kit can be used to control the distance between the robot and the wall. For example, two Hall effect sensors could be mounted on the robot as shown in Figure 11.2. In this example the wall contains a magnetic strip (as is sometimes the case on the ELEC 201 game board). The two magnetic sensors are mounted on the robot as shown. Since the A sensor is closer to the wall, it will trigger first as the robot moves toward the wall, followed by B if the robot continues to move toward the wall. As the robot moves away from the wall, B will release first, followed by A if the robot continues to move away from the wall. A decision process making use of this information is depicted in Figure 11.3.

Figure 11.3: Control Process with Two Hall Effect Sensors

Notice that the situation with A off and B on is indicative of some failure of the sensor or its mounting.

Figure 11.4: Using a Proximity Sensor to Measure Distance to a Wall

Other sensors provided in the ELEC 201 kit can be used to measure the distance between the robot and the wall (see Figure 11.4). For example, a magnetic field intensity sensor can be used if the wall contains a magnetic strip. In this case the magnetic field sensor would produce a higher value as the robot got closer to the wall. A light source/photocell pair could also be used. In this case the light source (shielded from stray light, perhaps by a cardboard tube) would be aimed at the wall, and the photocell (also shielded from stray light) would produce a value proportional to the distance from a reflective wall. A "bend" sensor could also be used, although the ELEC 201 kit does not contain any of these useful sensors. In this case, the shorter the distance, the more the bend sensor is bent (see explanation of bend sensors). Suppose a function were written using the two Hall effect sensors to discern four states: TOO_CLOSE, TOO_FAR, JUST_RIGHT (from the wall), and SENSOR_ERROR. Here is a possible definition of the function, called wall_distance:

int TOO_CLOSE    = -1;
int JUST_RIGHT   = 0;
int TOO_FAR      = 1;
int SENSOR_ERROR = -99;

int wall_distance()
{
    /* get reading on A & B sensors */
    int A_value = digital(A_SENSOR);
    int B_value = digital(B_SENSOR);

    /* assume "ON" means the sensor reads zero */
    if ((A_value == 0) && (B_value == 0)) return TOO_CLOSE;
    if ((A_value == 0) && (B_value == 1)) return JUST_RIGHT;
    if ((A_value == 1) && (B_value == 0)) return SENSOR_ERROR;
    /* if ((A_value == 1) && (B_value == 1)) */
    return TOO_FAR;
}

Suppose instead a function were written using a proximity sensor to discern the three states TOO_CLOSE, TOO_FAR, and JUST_RIGHT. Here is a possible definition of this function, called wall_dist_prox:

int TOO_CLOSE  = -1;
int JUST_RIGHT = 0;
int TOO_FAR    = 1;

/* Embedding threshold constants in this manner in a real
   program is not good programming practice. Instead, they
   should be placed in a separate file. */
int TOO_CLOSE_THRESHOLD = 50;
int TOO_FAR_THRESHOLD   = 150;

int wall_dist_prox()
{
    /* get reading on proximity sensor */
    int prox_value = analog(PROXIMITY_SENSOR);

    /* assume smaller values mean closer to wall */
    if (prox_value < TOO_CLOSE_THRESHOLD) return TOO_CLOSE;
    if (prox_value > TOO_FAR_THRESHOLD) return TOO_FAR;
    return JUST_RIGHT;
}

Now, a function to drive the robot making use of the wall_distance function would create the feedback. In this example, the functions veer_away_from_wall, veer_toward_wall, and drive_straight are used to actually move the robot, as shown in Figure 11.5.
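The listing of follow_wall (Figure 11.5) is not reproduced in this text. The sketch below reconstructs the decision step it describes, with the motion commands abstracted into return codes so the logic can be tested in isolation; the enum and function name are assumptions.

```c
/* mirror the chapter's distance codes */
#define TOO_CLOSE  (-1)
#define JUST_RIGHT   0
#define TOO_FAR      1

/* which motion routine the feedback loop would invoke */
typedef enum { VEER_AWAY, VEER_TOWARD, GO_STRAIGHT, STOP } wall_action;

wall_action follow_wall_step(int distance_state)
{
    switch (distance_state) {
    case TOO_CLOSE:  return VEER_AWAY;    /* veer_away_from_wall() */
    case TOO_FAR:    return VEER_TOWARD;  /* veer_toward_wall()    */
    case JUST_RIGHT: return GO_STRAIGHT;  /* drive_straight()      */
    default:         return STOP;         /* SENSOR_ERROR: halt    */
    }
}
```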

Even if the function to drive the robot straight were not exact (maybe one of the robot's wheels performs better than the other), this function should still accomplish its goal. Suppose the "drive straight" routine actually veered a bit toward the wall. Then after driving straight for a bit, the "follow wall" routine would notice that the robot was too close to the wall and execute the "veer away" function. The actual performance of this algorithm would be influenced by many things, including:

 * How sharply the "veer away" and "veer toward" functions make the robot turn.
 * The accuracy of the Hall effect switching thresholds, or how well the proximity sensors measure the distance to the wall.
 * For proximity sensors, the settings of the TOO_CLOSE_THRESHOLD and TOO_FAR_THRESHOLD values.
 * The rate at which the follow_wall function makes corrections to the robot's path.

Still, use of a negative feedback loop ensures basically stable and robust performance once the parameters are tuned properly. The type of feedback just described is called negative feedback because the corrections subtract from the error, making it smaller. With positive feedback, corrections add to the error; such systems tend to be unstable.

Open-Loop Control

Suppose now the robot has been following the wall, and a touch sensor indicates that it has reached the far edge. The robot needs to turn clockwise ninety degrees to continue following the edge of the wall (see Figure 11.6). How should this be accomplished? One simple method would be to back up a little and execute a turn command that was timed to accomplish a ninety degree rotation. The following code fragment illustrates this idea:

....
robot_backward();
sleep(0.25);   /* go backward for 1/4 second */
robot_spin_clockwise();
sleep(1.5);    /* 1.5 sec = 90 degrees */
....

This method will work reliably only when the robot is very predictable. For example, one cannot assume that a turn command of 1.5 seconds will always produce a rotation of 90 degrees. Many factors affect the performance of a timed turn, including the battery strength, traction on the surface, and friction in the geartrain. This method of using a timed turn is called open-loop control (as compared to closed-loop control) because there is no feedback from the commanded action about its effect on the state of the system. If the command is tuned properly and the system is very predictable, open-loop commands can work fine, but generally closed-loop control is necessary for good performance.

How could the corner-negotiation action be made into a closed-loop system? One approach is to have the robot make little turns, drive straight ahead, hit the wall, back up, and repeat (see Figure 11.7), dealing with the corner in a series of little steps.

Feed-Forward Control

There are certain advantages to open-loop control, most notably speed. Clearly a single timed turn would be much faster than a set of small turns, bonks, and back-ups. One approach when using open-loop control is to use feed-forward control, where the commanded signal is a function of some parameters measured in advance. For the timed turn action, battery strength is probably one of the most significant factors determining the turn's required time. Using feed-forward control, a battery strength measurement would be used to "predict" how much time is needed for the turn. Note that this is still open-loop control -- the feedback is not based on the actual result of a movement command -- but a computation is made to make the control more accurate. For this example, the battery strength could be measured or estimated based on usage since the last charge.

Summary

For the types of activities commonly performed by ELEC 201 robots, feedback control proves very useful in:

 * Wall following, as discussed in this section.
 * Line following, using one or more reflectance sensors aimed at the surface of the ELEC 201 game board.
 * Infrared tracking, homing in on a source of infrared light using the IR sensors.

Open-loop control should probably be used sparingly and in time-critical applications. Small segments of open-loop actions interspersed between feedback activities should work well. Feed-forward techniques can enhance the performance of open-loop control when it is used.

Sensor Calibration

Manual Sensor Calibration

The function wall_dist_prox (one of the examples in Section 11.1.1) used threshold variables (TOO_FAR_THRESHOLD and TOO_CLOSE_THRESHOLD) to interpret the data from the proximity sensor.
Depending on the actual reading from the proximity sensor and the settings of these threshold variables, wall_dist_prox determined whether the robot was "too close," "too far," or "just the right" distance from the wall. Proper calibration of these threshold values is necessary for good robot performance.

Often it is convenient to write a routine that allows interactive manipulation of the robot's sensors to determine the proper calibration settings. For a given proximity sensor, a calibration routine could allow placing the sensor a fixed distance from the wall (the TOO_FAR_THRESHOLD) and then pressing one of the user buttons. The routine would then "capture" the value of the proximity sensor at that point and use this value as the appropriate threshold. Similarly, the sensor could be placed closer to the wall and the reading captured as the TOO_CLOSE_THRESHOLD value. Later, the values of these thresholds could be noted when the robot is performing particularly well, and these "optimal" settings could be hard-coded as default values. The calibration routine could be kept for use under certain circumstances or if other parameters affecting the robot's performance necessitate readjustment of the calibration settings.
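The "capture on button press" idea just described can be reduced to a small routine. In this sketch the sensor read is passed in as a function pointer so the capture logic can be exercised without hardware; the names, and the stand-in sensor, are illustrative assumptions.

```c
typedef int (*read_fn)(void);

/* In the real routine, the robot would first be held at the
 * calibration distance and the code would wait for a user
 * button press before sampling. */
int capture_threshold(read_fn read_sensor)
{
    return read_sensor();   /* "capture" the current reading */
}

/* stand-in sensor for demonstration only */
int demo_sensor(void)
{
    return 137;
}
```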

Dealing with Changing Environmental Conditions

Calibration routines are particularly important when environmental conditions cause fluctuations in sensor values. Two sensor types are strongly affected either by external environmental conditions or by the robot's internal state:

 * Light sensors, which are heavily affected by room lighting (ambient light) unless extremely well shielded.
 * Motor force sensing, which depends on battery voltage: when the battery weakens, force readings increase.

Light Sensors

Any light sensor will operate differently in different amounts of ambient (e.g., room) lighting. For best results when using light sensors, they should be physically shielded from room lighting as much as possible, but this shielding is not usually perfect. Given that room lighting will affect nearly all light sensors to some degree, software should be designed to compensate for it. When using reflectance-type or break-beam light sensing, controlling the sensor's own illumination source is a good strategy. If a sensor reading is taken with the sensor's own illumination off, the reading due to ambient light alone is measured. If a reading is then taken with the illumination on, a value combining ambient light plus the sensor's own illumination results. By subtracting these two values, the sensor reading due to its illumination alone can be obtained. The illumination source control method will not wholly eliminate the influence of ambient light; further calibration in an actual performance environment will probably be necessary.

Motor Force Sensing

Direct measurement of the battery voltage can be used in a function to compensate for its effect on the motor force readings. However, a simple calibration sequence might suffice. When the motor is trying to turn but cannot, the motor current increases. The RoboBoard's motor force sensing circuitry allows this current to be measured. Set the wheels of your robot in motion at the speed you intend to drive. Hold the wheel to keep it from turning and take a motor force reading. This reading should be significantly higher than the free-spinning motor force. If you want to see whether your robot is stuck, take a motor force reading; if you get a value near the stalled reading, your robot is stuck. This calibration sequence would need to be performed periodically over the life cycle of the motor battery.

Using Persistent Global Variables

A persistent global variable (PGV) is a type of global variable that keeps its state despite pressing reset or turning the robot on and off. PGVs are ideal for keeping track of calibration settings: after calibrating the robot once, it would not need to be recalibrated until after a new program were downloaded (in general, downloading code will destroy the previous values of a persistent global, although this can be circumvented as explained in Section 10.7.3). Using persistent globals requires the creation of an initialization program to allow interactive setting of persistent variable values. A menuing program could be written to use the two user buttons (CHOOSE and ESCAPE) and the RoboKnob variable resistor on the RoboBoard (VR1) to navigate a series of menus. This program could allow the selection and modification or calibration of any number of parameters. By exiting the initialization program without making any changes, or simply not calling it at all, the robot can operate under the previous settings of the persistent variables. The routine could also allow restoration of the default values of all of the globals, returning them to tested and known-to-work values.

Robot Control

This section presents some ideas about designing software for controlling a robot. The focus is not on low-level coding issues but on high-level concepts about the special situations robots will encounter and ways to address these peculiarities. The approach taken here proposes and examines some control software architectures that will comprise the brains of the robot.

Probably the biggest problem facing a robot is overall system reliability. A robot might face any combination of the following failure modes:

 * Mechanical failures. These might range from temporarily jammed movements to wedged geartrains or a serious mechanical breakdown.
 * Electrical failures. We hope it is safe to assume that the computer itself will not fail, but loose connections of motors and sensors are a common problem.
 * Sensor unreliability. Sensors will provide noisy data (data that is sometimes accurate, sometimes not) or data that is simply incorrect (a touch sensor fails to be triggered).

The first two of these problems can be minimized with careful design, but the third category, sensor unreliability, warrants a closer look. Before discussing control ideas further, here is a brief analysis of the sensor problem.

Sensor Unreliability

A variety of problems afflict typical robot sensors:

 * Spurious sensor data. Most sensors will occasionally generate noise in their output. For example, an infrared sensor might indicate that infrared light is present when actually no light is present, or a proximity sensor might give a questionable reading. If the noise is predictable enough, it can be filtered out in software; the noisy IR sensor might not be trusted until it gives some number of consecutive readings in agreement with one another. However, if the noise problem is very bad, a sensor might be rendered useless -- or worse, dangerous -- if the program running the robot places too much trust in the sensor reading.
 * Missed sensor data. Related to the problem of noisy data is missed data, where for either electrical or software reasons a sensor reading is not detected -- a light sensor changes state twice before the software can count it, or a touch sensor jams and fails to trigger.
 * Corrupted sensor data. As discussed in the previous section on calibration, sensor data can be adversely affected by ambient environmental conditions or battery strength.

To some extent, unruly sensor data can be filtered or otherwise processed "at the source," that is, before higher-level control routines see it. The following example uses the function wall_dist_prox introduced at the beginning of the chapter: the wall distance routine gets its data directly from the proximity sensor and then outputs an interpretation of that data. The routine does not process the sensor data in any way -- it does not check for unreasonable data samples, for example. Suppose that the proximity sensor should never report a value above 250, and for some reason a bogus value is detected. This probably indicates some type of sensor failure, such as an unplugged sensor. It makes sense to intercept this failure locally, where the sensor data first enters the software system. In a similar way, sensor data could be averaged, smoothed, or otherwise processed before interpretation. It is logical to assign individual routines to perform this activity for any sensor that might need to be dealt with in a particular way. Using the multi-tasking capabilities of IC, each sensor or sensor sub-system could be assigned its own C process to perform this activity.

Task-Oriented Control

With so many problems facing a robot, how can it get anything done? Usually, one assumes that these problems do not exist. The insidious part is that most of the time, ignoring the failure modes will work. However, when the failures do occur, they will return to inflict crippling damage on a robot's performance. Returning again to the wall-following example (as implemented by the function follow_wall; see Figure 11.5), in a worst-case scenario, what could happen while a robot was merrily running along, following a wall? Several possibilities:

 1. The robot could run into an object or a corner, properly triggering a touch sensor.
 2. The robot could run into an object or corner, not triggering a touch sensor.
 3. The robot could wander off away from the wall.
 4. The robot could slam into the wall, get stuck, and conditionally trigger a touch sensor.
 5. The proximity sensor could fall off its mount, causing a series of incorrect sensor readings.

Ideally, control software should expect occurrences of cases like #1 through #4 and be able to detect case #5.
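Case #5, a dislodged sensor producing bogus readings, is exactly the failure that intercepting bad data "at the source" addresses. A minimal sketch: the 250 ceiling comes from the passage, while the fault code and function name are illustrative assumptions.

```c
#define PROX_MAX_VALID    250   /* per the text, readings should never exceed 250 */
#define PROX_SENSOR_FAULT (-1)  /* illustrative error code */

/* pass plausible readings through; flag impossible ones locally,
 * before higher-level control routines ever see them */
int validate_prox(int raw)
{
    if (raw < 0 || raw > PROX_MAX_VALID)
        return PROX_SENSOR_FAULT;   /* likely unplugged or fallen off */
    return raw;
}
```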

Suppose the wall-following activity is treated as a discrete robot task with initial conditions, an activity to perform (perhaps repetitively), exit conditions, and a return value.

Task Analysis of Simple Wall Follow Function

Exit Conditions

Within this framework, the simple wall-following function could be extended so that it could deal with several of the potential problems it might face while following a wall. Some of these "problems" actually must be dealt with; if a robot doesn't run into an obstacle sooner or later, either something is wrong or the robot is following a very long (circular?) wall. By adding a test for touch sensors inside the loop code of follow_wall, a function that exits upon detection of a collision can be created. This new function, follow_wall2, is shown in Figure 11.8. Note the new sensor function robot_stuck, which is expected to return a boolean true if it believes that the robot is stuck. To double-check that the robot is stuck, the function can get additional data from any of the robot's sensors -- including touch sensors, shaft encoders, and motor force sensors.

Timeouts

Detecting collisions can only be as good as the collision sensors. Since such sensors are not perfect, it may be a good idea to add some kind of timer-based exit condition. This will prevent the case of a robot getting stuck without its touch sensor being depressed. Often a robot simply does not "believe" that it is stuck -- its program stays stuck in some loop and does not properly react. Timeouts can solve these problems and provide other information as well. In a typical application, the maximum time that the robot is allowed to take in performing a particular task would be determined. When the function to perform the task is invoked, it would be given this maximum time. If (continuing the example) the wall-following task failed to exit before the time limit had expired, the timeout would trigger and cause the function to exit (with an appropriate error return value). Additionally, the timing information could be used to verify that the task had exited normally. If a robot is supposed to take six seconds to get from the start of the wall-follow to another wall and in one instance takes only three seconds, then an obstacle probably caused the premature exit.
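The timeout pattern just described might be sketched as a generic wrapper. Here the clock and the task's completion test are passed in as function pointers (the real code would poll the controller's millisecond clock); all names, return codes, and the demonstration stand-ins are assumptions.

```c
#include <stdbool.h>

#define TASK_DONE     0
#define TASK_TIMEOUT  1   /* illustrative error return value */

typedef long (*clock_ms_fn)(void);
typedef bool (*done_fn)(void);

/* loop until the task reports completion or the time limit expires */
int run_with_timeout(done_fn task_done, clock_ms_fn now_ms, long max_ms)
{
    long start = now_ms();
    while (!task_done()) {
        if (now_ms() - start > max_ms)
            return TASK_TIMEOUT;    /* give up with an error value */
    }
    return TASK_DONE;
}

/* demonstration stand-ins: a clock advancing 100 ms per call,
 * a task that finishes at once, and one that never finishes */
static long demo_time = 0;
long demo_clock(void)  { demo_time += 100; return demo_time; }
bool always_done(void) { return true; }
bool never_done(void)  { return false; }
```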

Figure 11.9 lists the third wall-following function, with added timeout capability. Note that the timing variables are declared as long integers (timing units in milliseconds). Floating point variables could also have been used (with the more intuitive units of seconds), but long integers are much more efficient. A task analysis of the follow_wall3 function shows a much better set of specifications, as shown in Figure 11.10.

Monitoring State Transitions inside a Feedback Loop

The third version of the wall-following function ensures that the robot will not wedge and get stuck forever. Going a step further, however, a program can be written to detect failure situations in advance of the overall task timeout. The key is in the guts of the feedback loop, associated with the functions veer_away_from_wall, veer_toward_wall, and drive_straight. When the robot is following a wall normally, these functions should alternate control, each being operative for only a short period of time. Said another way: the robot will not simply drive straight for a long time; it will veer into the wall for a bit, veer away from the wall for a bit, drive straight for a bit, and so on. If, on the other hand, the robot wandered away from the wall, the veer_toward_wall output would be asserted continuously. Monitoring for a normal exchange of control among these feedback outputs thus confirms that the feedback loop is operating properly; conversely, looking for an abnormal exchange of control -- in particular, one output being asserted for too long a period of time -- detects failure conditions.
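As a sketch of this monitoring idea (the real code appears in Figures 11.11 and 11.12; the helper below and its thresholds are hypothetical), a small function can be called once per feedback pass with whichever output is currently selected. It resets a timer on every change of control and flags an error when one output has held control too long:

```c
/* Sketch of the per-output timer behind follow_wall4.  Function name and
 * timeout values are hypothetical; the constants mirror those named in
 * the text. */

/* Feedback outputs. */
enum { OUT_STRAIGHT = 0, OUT_VEER_IN = 1, OUT_VEER_OUT = 2 };
/* Result codes: OK, or one exit error per part of the feedback loop. */
enum { FW_OK = 0, FW_ERR_STRAIGHT = 1, FW_ERR_VEER_IN = 2, FW_ERR_VEER_OUT = 3 };

#define DRIVE_STRAIGHT_MAXTIME 500L   /* milliseconds */
#define VEER_IN_MAXTIME        300L
#define VEER_OUT_MAXTIME       300L

static const long maxtime[] = { DRIVE_STRAIGHT_MAXTIME, VEER_IN_MAXTIME, VEER_OUT_MAXTIME };
static const int  errcode[] = { FW_ERR_STRAIGHT, FW_ERR_VEER_IN, FW_ERR_VEER_OUT };

/* check_transition: call once per loop pass with the currently selected
 * output and the current clock reading (ms).  Resets the timer whenever
 * control changes hands; returns an error code if one output has held
 * control longer than its limit, FW_OK otherwise. */
int check_transition(int output, long now)
{
    static int  last_output = -1;
    static long entered = 0;
    if (output != last_output) {      /* new output selected: reset timer */
        last_output = output;
        entered = now;
        return FW_OK;
    }
    if (now - entered > maxtime[output])
        return errcode[output];       /* same output asserted too long */
    return FW_OK;
}
```

For example, selecting OUT_STRAIGHT continuously from time 0 past 500 ms yields FW_ERR_STRAIGHT, while switching to another output at any point resets the timer.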

The code to implement this idea works as follows: each time a new feedback output is selected, a timer is reset. The timer measures the time spent in consecutive selections of the same feedback output. If the same output is selected repeatedly for too long, an exit error condition is generated. Several constants adjust the parameters of the timeout: DRIVE_STRAIGHT_MAXTIME, VEER_IN_MAXTIME, and VEER_OUT_MAXTIME. Three new exit error conditions report which part of the feedback loop failed. The final program, follow_wall4, is shown in Figures 11.11 and 11.12.

One potential problem: it is conceivable that it would be correct for the loop to stay in one state for an unusually long period of time, in which case this method would incorrectly cause a premature exit. In the wall-following example, if the robot were to drive very straight and were oriented exactly parallel to the wall, it would be proper to stay in the "drive straight" state for a long while. Increasing the timeout constants would minimize this problem -- at the expense of the method's effectiveness. It is probably best to deal with these circumstances on a case-by-case basis. One possibility is to deliberately handicap the feedback control so that it oscillates a bit; clearly this has disadvantages too.

Coordination of Tasks

The robot task model just presented should prove a useful way to make a robot's behavior more reliable. But further questions come up: how should the selection and execution of different tasks be done? This question is often asked by contemporary robot researchers. In addition to a variety of ways of thinking about robot tasks, there are many different approaches to organizing the higher-level control of mobile robots.
Unlike robots used more generally in research, ELEC 201 robots have special requirements: these robots must be fast; many research robots can sit and compute for a while. ELEC 201 robots must be reliable, whereas for other demonstrations, a robot might be videotaped until it does what the programmers want it to. ELEC 201 robots have only a few chances to perform correctly for the competition; in some research experiments, software robots are "evolved" through many generations until they behave appropriately.

Still, some of the ideas from the research field may be helpful. Extending the task model developed in this chapter, here are several different approaches that could be used to coordinate and control task execution.

Task Sequencing

In this model, only one task executes at a time. A "task manager" is responsible for selecting tasks based on a predetermined task sequence, with alternative sequences to deal with exceptional circumstances. This can be visualized as a connected graph of tasks, with the path of traversal determined by the task manager. A simple example of task sequencing would be a program to make a robot follow the inside wall of a rectangular area: the task manager would alternately invoke "follow wall" and "negotiate corner" tasks, assuming there were no errors.

Concurrent and Non-Competing Tasks

This model builds on the task sequencing model by allowing concurrent execution of tasks that are essentially non-interfering. For example, a task to control a "radar dish" sensor (one that locates sources of infrared light) can be operated independently from a task that drives the robot. There may be some communication between the tasks (the radar dish task may wish to know that the robot's base is moving), but there is no direct interference or need for coordination between them.

Concurrent and Competing Tasks

In the most general situation, concurrent tasks might interfere with each other or compete for resources on the robot (such as control of the drive motors or of an active sensor). In this case, some method, either explicit or implicit, must be devised to resolve resource conflicts. One method is to give each task a priority level; if two or more tasks compete for the same resource, the task with the highest priority wins. A method for dealing with ties is needed as well.

Robot Metacognition

A sophisticated task manager might have a separate module that acts as its "overseer."
To use Marvin Minsky's idea and terminology from his Society of Mind, the main part of our brains (the "A brain") might be observed by a separate part of the brain (the "B brain"). The B brain, or overseer, checks the A brain (which is mostly in control) for things like non-productive loops and other wedged conditions. If it detects one of these undesirable states, it makes an intervention to provoke a different response from the A brain.

Here is an example to bring this metaphor back to our robots. Suppose a task manager (of the sequencer variety) is trying to drive the robot around the inside of a rectangle, as suggested earlier. It is alternating between two tasks: a "follow wall" task and a "negotiate corner" task. But suppose the corner routine is failing, and unbeknownst to the sequencer, the robot is stuck in the same corner. The robot's "B-brain controller" might notice an especially tight loop between the execution of the two tasks (much as the wall-follower noticed trouble in shifting between the feedback outputs). The B brain would conclude that something had gone wrong and execute an emergency "get unwedged" task.

Control of an ELEC 201 Robot

Most robots have been designed with the "task sequencer" model in mind. Occasionally concurrent, non-competing tasks are employed, but only rarely are concurrent and competing tasks considered. Rather than advocating a specific method, it is left for the reader to think about these issues and decide what is best for his or her own robot. Some of these methods were developed to make robots more "creature-like," which is not necessarily a desirable characteristic for an ELEC 201 robot. For example, it would not be ideal for a robot to suddenly decide that it did not really want to follow that wall. A task sequencing method, perhaps with a few checks for unproductive loops, should be more than adequate for most ELEC 201 robot designs.
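One such check for unproductive loops, in the spirit of the B-brain discussion above, might time how long each task runs before handing control back to the sequencer: a task that keeps finishing almost instantly is probably failing. The function name and thresholds below are hypothetical, not from the book's figures:

```c
/* Sketch of a "B-brain" check: detect a suspiciously tight loop between
 * tasks by timing how long each task runs before handing off.  Name and
 * threshold values are hypothetical. */

#define MIN_TASK_MS 200L   /* a task finishing faster than this is suspicious */
#define MAX_SUSPECT 3      /* this many fast hand-offs in a row => wedged */

/* b_brain_check: call each time the sequencer switches tasks, passing the
 * elapsed run time (ms) of the task that just finished.  Returns nonzero
 * once MAX_SUSPECT consecutive tasks have each finished suspiciously fast,
 * signalling that an emergency "get unwedged" task should run. */
int b_brain_check(long task_run_ms)
{
    static int suspect_count = 0;
    if (task_run_ms < MIN_TASK_MS)
        suspect_count++;            /* another suspiciously quick task */
    else
        suspect_count = 0;          /* a normal-length run resets suspicion */
    return suspect_count >= MAX_SUSPECT;
}
```

In the stuck-in-a-corner scenario, "follow wall" and "negotiate corner" would each return within a few tens of milliseconds, the suspect count would climb, and the overseer would intervene after the third fast hand-off.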

Controller

The controller is the robot's brain: it directs the robot's movements. It is usually a computer of some type, used to store information about the robot and its work environment and to store and execute the programs that operate the robot. The control system contains programs, data, algorithms, logic analysis, and various other processing activities that enable the robot to perform. The picture above shows an AARM motion control system. AARM stands for Advanced Architecture Robot and Machine Motion; it is a commercial product from American Robot for industrial machine motion control.

Industrial controllers are either non-servo, point-to-point servo, or continuous-path servo. A non-servo robot usually moves parts from one area to another and is called a "pick and place" robot. The non-servo robot's motion is started by the controller and stopped by a mechanical stop switch; the stop switch sends a signal back to the controller, which starts the next motion. A point-to-point servo moves to exact points, so only the stops in the path are programmed. A continuous-path servo is appropriate when a robot must proceed along a specified path in a smooth, constant motion.

More sophisticated robots have more sophisticated control systems. The brain of the Mars Sojourner rover was made of two electronics boards interconnected with flex cables. One board was called the "CPU" board and the other the "Power" board; together they contained the elements responsible for power generation, power conditioning, power distribution and control, analog and digital I/O control and processing, computing (i.e., the CPU), and data storage (i.e., memory). The control boards for Sojourner are shown below. For more info, visit Rover Control and Navigation at JPL.

Mobile robots can operate by remote control or autonomously. A remote control robot receives instructions from a human operator.
In a direct remote control situation, the robot relays information to the operator about the remote environment, and the operator then sends the robot instructions based on the information received. This sequence can occur immediately (real time) or with a time delay. Autonomous robots are programmed to understand their environment and take independent action based on the knowledge they possess. Some autonomous robots are able to "learn" from their past encounters: they can identify a situation, recall which actions produced successful or unsuccessful results, and modify their behavior to optimize success. This activity takes place in the robot controller.

________________________________________

Body

The body of a robot is related to the job it must perform. Industrial robots often take the shape of a bodyless arm, since their job requires them to remain stationary relative to the task. Space robots have many different body shapes -- a sphere, a platform with wheels or legs, or a balloon -- depending on the job. The free-flying rover Sprint AERCam is a sphere, to minimize damage if it were to bump into the shuttle or an astronaut. Some planetary rovers have solar platforms driven by wheels to traverse terrestrial environments. Aerobot bodies are balloons that will float through the atmospheres of other worlds collecting data. When evaluating what body type is right for a robot, remember that form follows function.

________________________________________

Mobility

How do robots move? It all depends on the job they have to do and the environment they operate in.

In the Water: Conventional unmanned submersible robots are used in science and industry throughout the oceans of the world. You probably saw the Jason vehicle at work when pictures of the Titanic discovery were broadcast. To get around, autonomous underwater vehicles (AUVs) use propellers and rudders to control their direction of travel. One area of research suggests that an underwater robot like RoboTuna could propel itself as a fish does, using its natural undulatory motion. It is thought that robots that move like fish would be quieter, more maneuverable, and more energy efficient.

On Land: Land-based rovers can move around on legs, tracks, or wheels. Dante II is a frame-walking robot that is able to descend into volcano craters by rappelling down the crater wall. Dante has eight legs: four on each of two frames. The frames are separated by a track along which they slide relative to each other, so in most cases Dante II has at least one frame (four legs) touching the ground. An example of a track-driven robot is Pioneer, a robot developed to clear rubble, make maps, and acquire samples at the Chornobyl Nuclear Reactor site. Pioneer is track-driven like a small bulldozer, which makes it suitable for driving over and through rubble; the wide track footprint gives good stability and platform capacity to deploy payloads.

Many robots use wheels for locomotion. One of the first US roving vehicles used for space exploration went to the moon on Apollo 15 (July 30, 1971) and was driven by astronauts David R. Scott and James B. Irwin. Two other Lunar Roving Vehicles (LRVs) also went to the moon, on Apollo 16 and 17. These rovers were battery powered and had radios and antennas just like the Mars Pathfinder rover Sojourner. But unlike Sojourner, these rovers were designed to seat two astronauts and be driven like a dune buggy.
The Sojourner rover's wheels and suspension use a rocker-bogie system that is unique in that it does not use springs; rather, its joints rotate and conform to the contour of the ground, which helps the rover traverse rocky, uneven surfaces. Six-wheeled vehicles can overcome obstacles three times larger than those crossable by four-wheeled vehicles. For example, one side of Sojourner could tip as much as 45 degrees as it climbed over a rock without tipping over. The wheels are 13 centimeters (5 inches) in diameter and made of aluminum; stainless steel treads and cleats on the wheels provide traction, and each wheel can move up and down independently of all the others.

In the Air/Space: Robots that operate in the air or in space use engines and thrusters to get around. One example is Cassini, an orbiter on its way to Saturn. Movement and positioning are accomplished either by firing small thrusters or by applying a force to speed up or slow down one or more of three "reaction wheels." The thrusters and reaction wheels orient the spacecraft in three axes, which are maintained with great precision. The propulsion system carries approximately 3000 kilograms (6600 lb) of propellant that is used by the main rocket engine to change the spacecraft's velocity, and hence its course; a total velocity change of over 2000 meters per second (6560 ft/s) is possible. In addition, Cassini will be propelled on its way by two "gravity assist" flybys of Venus, one each of Earth and Jupiter, and three dozen flybys of Saturn's moon Titan. These planetary flybys will provide twenty times the propulsion provided by the main engine.

Deep Space 1 is an experimental spacecraft sent into deep space to analyze comets and demonstrate new technologies. One of its new technologies is a solar electric (ion) propulsion engine that provides about ten times the specific impulse of chemical propulsion. The ion engine works by giving an electrical charge to, or ionizing, a gas called xenon.
The xenon is electrically accelerated to a speed of about 30 km/s. When the xenon ions are emitted at such high speed as exhaust from the spacecraft, they push the spacecraft in the opposite direction. The ion propulsion system requires a source of energy; for DS1, that energy comes from electrical power generated by its solar arrays.

________________________________________

Power

Power for industrial robots can be electric, pneumatic, or hydraulic. Electric motors are efficient, require little maintenance, and are not very noisy. Pneumatic robots use compressed air and come in a wide variety of sizes; a pneumatic robot requires another source of energy, such as electricity, propane, or gasoline, to provide the compressed air. Hydraulic robots use oil under pressure and generally perform heavy-duty jobs. This power type is noisier, larger, and heavier than the other power sources, and a hydraulic robot also needs another source of energy to move the fluids through its components. Pneumatic and hydraulic robots require maintenance of the tubes, fittings, and hoses that connect the components and distribute the energy.

Two important sources of electric power for mobile robots are solar cells and batteries. There are many types of batteries: carbon-zinc, lithium-ion, lead-acid, nickel-cadmium, nickel-hydrogen, silver-zinc, and alkaline, to name a few. Battery capacity is measured in amp-hours, which is the current (in amps) multiplied by the time in hours that the current flows from the battery. For example, a two amp-hour battery can supply 2 amps of current for one hour. Solar cells make electrical power from sunlight; if you hook enough solar cells together in a solar panel, you can generate enough power to run a robot. Solar cells are also used to recharge batteries.

Deep space probes must use alternate power sources, because beyond Mars the solar arrays required would be infeasibly large, and the lifespan of batteries would be exceeded at those distances as well. Power for deep space probes is traditionally generated by radioisotope thermoelectric generators, or RTGs, which use heat from the natural decay of plutonium to generate direct-current electricity. RTGs have been used on 25 space missions, including Cassini, Galileo, and Ulysses.

________________________________________

Sensors

Sensors are the perceptual system of a robot. They measure physical quantities like contact, distance, light, sound, strain, rotation, magnetism, smell, temperature, inclination, pressure, or altitude, and provide the raw signals that must be processed by the robot's computer brain to yield meaningful information. Robots are equipped with sensors so they can understand their surrounding environment and change their behavior based on the information gathered. Sensors can give a robot an adequate field of view, a useful detection range, and the ability to detect objects in real or near-real time, within its power and size limits. A robot might have an acoustic sensor to detect sound, motion, or location; infrared sensors to detect heat sources; contact sensors; tactile sensors to give a sense of touch; or optical/vision sensors. For almost any environmental situation, a robot can be equipped with an appropriate sensor. A robot can also monitor itself with sensors.

The Big Signal robot NOMAD uses sensing instruments including a camera, a spectrometer, and a metal detector. The high-resolution video camera can identify dark objects (rocks, meteorites) against the white background of the Antarctic snow; variations in color and shade allow the robot to tell the difference between dark grey rocks and shadows. NOMAD uses a laser range finder to measure the distance to objects and a metal detector to help determine the composition of the objects it finds.

Very complex robots like Cassini have full sets of sensing equipment, much like human senses. Cassini's skeleton must be light and sturdy, able to withstand extreme temperatures and monitor those temperatures. Cassini determines its orientation using three hemispherical resonant gyroscopes, or HRGs, which measure the vibrations of quartz crystals.
The eyes of Cassini are the Imaging Science Subsystem (ISS), which can take pictures in the visible, near-ultraviolet, and near-infrared ranges of the electromagnetic spectrum.

________________________________________

Tools

As working machines, robots have defined job duties and carry all the tools they need to accomplish their tasks onboard their bodies. Many robots carry their tools at the end of a manipulator: a series of segments, jointed or sliding relative to one another, for the purpose of moving objects. The manipulator includes the arm, wrist, and end-effector. An end-effector is a tool or gripping mechanism attached to the end of a robot arm to accomplish some task; it often encompasses a motor or a driven mechanical device. An end-effector can be a sensor, a gripping device, a paint gun, a drill, an arc welding device, etc. There are many examples of robot tools that you will discover as you examine the literature associated with this site. To get you going, two good examples are described below. Tools are unique to the task the robot must perform.

The goal of the robot mission Stardust is to capture both cometary samples and interstellar dust. The trick is to capture the high-velocity comet and dust particles without physically changing them. Scientists developed aerogel, a silicon-based solid with a porous, sponge-like structure in which 99.8 percent of the volume is empty space. When a particle hits the aerogel, it buries itself in the material, creating a carrot-shaped track up to 200 times its own length; this slows the particle down and brings it to a relatively gradual stop. Since aerogel is mostly transparent -- with a distinctive smoky blue cast -- scientists will use these tracks to find the tiny particles.

Robonaut has one of the many groundbreaking dexterous robot hands developed over the past two decades. These hand devices make it possible for a robot manipulator to grasp and manipulate objects that are not designed to be robotically compatible. While several grippers have been designed for space use, and some even tested in space, no dexterous robotic hand has been flown in Extra Vehicular Activity (EVA) conditions.
The Robonaut Hand is one of the first under development for space EVA use and the closest in size and capability to a suited astronaut's hand. It has a total of fourteen degrees of freedom: a two-degree-of-freedom wrist and a five-fingered, twelve-degree-of-freedom hand, plus a forearm that houses the motors and drive electronics. The forearm, which measures four inches in diameter at its base and is approximately eight inches long, houses all fourteen motors, 12 circuit boards, and all of the wiring for the hand.