Real-time Control System

Real-time Control System (RCS) is a reference model architecture, suitable for many software-intensive, real-time computing control problem domains. It defines the types of functions needed in a real-time intelligent control system, and how these functions relate to each other.

RCS is not a system design, nor is it a specification of how to implement specific systems. RCS prescribes a hierarchical control model based on a set of well-founded engineering principles to organize system complexity. All the control nodes at all levels share a generic node model.

RCS also provides a comprehensive methodology for designing, engineering, integrating, and testing control systems. Architects iteratively partition system tasks and information into finer, finite subsets that are controllable and efficient. RCS focuses on intelligent control that adapts to uncertain and unstructured operating environments. The key concerns are sensing, perception, knowledge, costs, learning, planning, and execution.
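As a concrete illustration of the generic node model, the following minimal Python sketch (all names are illustrative assumptions, not part of any NIST interface) shows a single node type reused at every level of a control hierarchy: each node pairs sensory processing and a local world model with behavior generation that delegates subcommands to subordinate nodes.

```python
# Minimal sketch of a generic hierarchical control node (illustrative only).

class ControlNode:
    def __init__(self, name, subordinates=()):
        self.name = name
        self.subordinates = list(subordinates)
        self.world_model = {}  # node-local estimate of relevant state

    def sensory_processing(self, observations):
        # Update this node's world model from incoming observations.
        self.world_model.update(observations)

    def behavior_generation(self, command):
        # Decompose the input command into one subcommand per subordinate.
        # A real node would plan against its world model; here we simply
        # tag the command for each child.
        return {sub.name: f"{command}/{sub.name}" for sub in self.subordinates}

    def cycle(self, command, observations):
        # One control cycle: sense, then decompose and delegate.
        self.sensory_processing(observations)
        subcommands = self.behavior_generation(command)
        for sub in self.subordinates:
            sub.cycle(subcommands[sub.name], observations)

# The same node model is reused at every level of the hierarchy:
servo = ControlNode("servo")
primitive = ControlNode("primitive", [servo])
task = ControlNode("task", [primitive])
task.cycle("move-part", {"joint_angles": [0.1, 0.2]})
```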

Overview
A reference model architecture is a canonical form, not a system design specification. The RCS reference model architecture combines real-time motion planning and control with high level task planning, problem solving, world modeling, recursive state estimation, tactile and visual image processing, and acoustic signature analysis. In fact, the evolution of the RCS concept has been driven by an effort to include the best properties and capabilities of most, if not all, the intelligent control systems currently known in the literature, from subsumption to SOAR, from blackboards to object-oriented programming.

RCS has developed into an intelligent agent architecture designed to enable any level of intelligent behavior, up to and including human levels of performance. RCS was inspired by a theoretical model of the cerebellum, the portion of the brain responsible for fine motor coordination and the control of conscious motions. It was originally designed for sensory-interactive, goal-directed control of laboratory manipulators and, over three decades, has evolved into a real-time control architecture for intelligent machine tools, factory automation systems, and intelligent autonomous vehicles.

RCS applies to many problem domains, including manufacturing and vehicle systems. Systems based on the RCS architecture have been designed and implemented, to varying degrees, for a wide variety of applications, including loading and unloading of parts and tools in machine tools, controlling machining workstations, performing robotic deburring and chamfering, and controlling space station telerobots, multiple autonomous undersea vehicles, unmanned land vehicles, coal mining automation systems, postal service mail handling systems, and submarine operational automation systems.

History
RCS has evolved through a variety of versions over a number of years as understanding of the complexity and sophistication of intelligent behavior has increased. The first implementation was designed for sensory-interactive robotics by Barbera in the mid-1970s.

RCS-1
In RCS-1, the emphasis was on combining commands with sensory feedback so as to compute the proper response to every combination of goals and states. The application was to control a robot arm with a structured-light vision system in visual pursuit tasks. RCS-1 was heavily influenced by biological models such as the Marr-Albus model and the Cerebellar Model Arithmetic Computer (CMAC) model of the cerebellum.

CMAC becomes a state machine when some of its outputs are fed directly back to the input, so RCS-1 was implemented as a set of state machines arranged in a hierarchy of control levels. At each level, the input command effectively selects a behavior that is driven by feedback in stimulus-response fashion. CMAC thus became the reference model building block of RCS-1, as shown in the figure.

A hierarchy of these building blocks was used to implement a hierarchy of behaviors such as those observed by Tinbergen and others. RCS-1 is similar in many respects to Brooks' subsumption architecture, except that RCS selects behaviors before the fact through goals expressed in commands, rather than after the fact through subsumption.
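The feedback construction can be made concrete with a small sketch (illustrative, not the original implementation): a lookup table whose output state is fed back as part of its next input behaves as a finite state machine, and the input command selects which behavior the table drives.

```python
# Illustrative sketch: a table lookup with output fed back to the input
# is a finite state machine. The command selects the active behavior;
# sensory stimuli then drive state transitions in stimulus-response fashion.

# (command, state, stimulus) -> (next_state, response)
TABLE = {
    ("track", "idle",   "target_seen"): ("pursue", "move_toward"),
    ("track", "pursue", "target_seen"): ("pursue", "move_toward"),
    ("track", "pursue", "target_lost"): ("idle",   "stop"),
    ("track", "idle",   "target_lost"): ("idle",   "stop"),
}

def step(command, state, stimulus):
    # One stimulus-response cycle; next_state is fed back on the next call.
    return TABLE[(command, state, stimulus)]

state = "idle"
for stimulus in ["target_seen", "target_seen", "target_lost"]:
    state, response = step("track", state, stimulus)
    print(state, response)
```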

RCS-2
The next generation, RCS-2, was developed by Barbera, Fitzgerald, Kent, and others for manufacturing control in the NIST Automated Manufacturing Research Facility (AMRF) during the early 1980s. The basic building block of RCS-2 is shown in the figure.

The H function remained a finite state machine state-table executor. The new feature of RCS-2 was the inclusion of the G function, consisting of a number of sensory processing algorithms, including structured-light and blob analysis algorithms. RCS-2 was used to define an eight-level hierarchy consisting of Servo, Coordinate Transform, E-Move, Task, Workstation, Cell, Shop, and Facility levels of control.

Only the first six levels were actually built. Two of the AMRF workstations fully implemented five levels of RCS-2. The control system for the Army Field Material Handling Robot (FMR) was also implemented in RCS-2, as was the Army TMAP semi-autonomous land vehicle project.
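A hedged sketch of the RCS-2 building block described above, with illustrative names: a G function reduces raw sensor data to symbolic conditions, and an H function, the state-table executor, maps each (command, state, condition) combination to a next state and an output command for the level below.

```python
# Illustrative sketch of the RCS-2 G/H building block (names assumed).

def g_function(pixels):
    # Toy stand-in for blob analysis: report whether a bright blob exists.
    return "blob_present" if sum(pixels) > 10 else "no_blob"

# (command, state, condition) -> (next_state, output command)
H_TABLE = {
    ("acquire", "searching", "blob_present"): ("grasping",  "close_gripper"),
    ("acquire", "searching", "no_blob"):      ("searching", "scan"),
    ("acquire", "grasping",  "blob_present"): ("done",      "lift"),
    ("acquire", "grasping",  "no_blob"):      ("searching", "open_gripper"),
    ("acquire", "done",      "blob_present"): ("done",      "hold"),
    ("acquire", "done",      "no_blob"):      ("done",      "hold"),
}

def h_function(command, state, condition):
    # State-table executor: look up the transition and output command.
    return H_TABLE[(command, state, condition)]

state = "searching"
for frame in ([0, 1, 2], [5, 6, 7], [5, 6, 7]):
    state, output = h_function("acquire", state, g_function(frame))
    print(state, output)
```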

RCS-3
RCS-3 was designed for the NBS/DARPA Multiple Autonomous Undersea Vehicle (MAUV) project and was adapted for the NASA/NBS Standard Reference Model Telerobot Control System Architecture (NASREM), developed for the space station Flight Telerobotic Servicer. The basic building block of RCS-3 is shown in the figure.

The principal new features introduced in RCS-3 are the World Model and the operator interface. The inclusion of the World Model provides the basis for task planning and for model-based sensory processing. This led to refinement of the task decomposition (TD) modules so that each has a job assigner, plus a planner and an executor for each of the subsystems assigned a job. This corresponds roughly to Saridis' three-level control hierarchy.
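The refined TD module can be sketched as follows (a minimal, illustrative fragment; the subsystem names, world model contents, and function signatures are assumptions): a job assigner splits a task across subsystems, and a per-subsystem planner and executor turn each job into commands, consulting a shared world model.

```python
# Illustrative sketch of an RCS-3 task decomposition (TD) module.

world_model = {"obstacle_ahead": True}  # shared, model-based state estimate

def job_assigner(task, subsystems):
    # Divide the task into one job per subsystem.
    return {s: (task, s) for s in subsystems}

def planner(job):
    task, subsystem = job
    # Model-based planning: consult the world model before committing.
    if subsystem == "steering" and world_model["obstacle_ahead"]:
        return ["steer_left", "steer_straight"]
    return [f"{task}:{subsystem}"]

def executor(plan):
    # Issue each planned step as a command to the subsystem.
    for step in plan:
        print("executing", step)

for subsystem, job in job_assigner("transit", ["propulsion", "steering"]).items():
    executor(planner(job))
```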

RCS-4
RCS-4 has been developed since the 1990s by the NIST Robot Systems Division. The basic building block is shown in the figure. The principal new feature in RCS-4 is the explicit representation of the Value Judgment (VJ) system. VJ modules provide to the RCS-4 control system the type of functions provided to the biological brain by the limbic system. The VJ modules contain processes that compute the cost, benefit, and risk of planned actions, and that place value on objects, materials, territory, situations, events, and outcomes. Value state-variables define what goals are important and what objects or regions should be attended to, attacked, defended, assisted, or otherwise acted upon. Value judgments, or evaluation functions, are an essential part of any form of planning or learning. The application of value judgments to intelligent control systems has been addressed by George Pugh. The structure and function of VJ modules are developed more completely in Albus (1991).

RCS-4 also uses the term behavior generation (BG) in place of the RCS-3 term task decomposition (TD). The purpose of this change is to emphasize the degree of autonomous decision making. RCS-4 is designed to address highly autonomous applications in unstructured environments where high-bandwidth communications are impossible, such as unmanned vehicles operating on the battlefield, deep undersea, or on distant planets. These applications require autonomous value judgments and sophisticated real-time perceptual capabilities. RCS-3 will continue to be used for less demanding applications, such as manufacturing, construction, or telerobotics for near-space or shallow undersea operations, where environments are more structured and communication bandwidth to a human interface is less restricted. In these applications, value judgments are often represented implicitly in task planning processes, or in human operator input.
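The role of a VJ module described above can be sketched as an evaluation function over candidate plans. In the illustrative fragment below, the weights and plan attributes are assumptions for illustration, not a published RCS interface: cost, benefit, and risk are combined into a single score that behavior generation can use to select among plans.

```python
# Hedged sketch of a value judgment (VJ) evaluation function.

def value_judgment(plan, weights=(1.0, 1.0, 2.0)):
    # Score a candidate plan: reward benefit, penalize cost and risk.
    w_benefit, w_cost, w_risk = weights
    return (w_benefit * plan["benefit"]
            - w_cost * plan["cost"]
            - w_risk * plan["risk"])

candidates = [
    {"name": "direct_route", "benefit": 10.0, "cost": 4.0, "risk": 3.0},
    {"name": "detour_route", "benefit": 10.0, "cost": 6.0, "risk": 0.5},
]

best = max(candidates, key=value_judgment)
print(best["name"])  # detour_route: lower risk outweighs the extra cost
```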

Methodology
The figure summarizes, in six steps, an example of the RCS methodology: designing a control system for autonomous on-road driving under everyday traffic conditions.
 * Step 1 consists of an intensive analysis of domain knowledge from training manuals and subject matter experts. Scenarios are developed and analyzed for each task and subtask. The result of this step is a structuring of procedural knowledge into a task decomposition tree, with simpler and simpler tasks at each echelon. For each echelon, a vocabulary of commands (action verbs with goal states, parameters, and constraints) is defined to evoke task behavior.
 * Step 2 defines a hierarchical structure of organizational units that will execute the commands defined in step 1. For each unit, its duties and responsibilities in response to each command are specified. This is analogous to establishing a work breakdown structure for a development project, or defining an organizational chart for a business or military operation.
 * Step 3 specifies the processing that is triggered within each unit upon receipt of an input command. For each input command, a state-graph (or state-table, or extended finite state automaton) is defined that provides a plan (or a procedure for making a plan) for accomplishing the commanded task. The input command selects (or causes to be generated) an appropriate state-table, the execution of which generates a series of output commands to units at the next lower echelon. The library of state-tables contains a set of state-sensitive procedural rules that identify all the task branching conditions and specify the corresponding state transitions and output command parameters (a minimal executor sketch follows this list).

The result of step 3 is that each organizational unit has for each input command a state-table of ordered production rules, each suitable for execution by an extended finite state automaton (FSA). The sequence of output subcommands required to accomplish the input command is generated by situations (i.e., branching conditions) that cause the FSA to transition from one output subcommand to the next.
 * In step 4, each of the situations defined in step 3 is analyzed to reveal its dependencies on world and task states. This step identifies the detailed relationships between entities, events, and states of the world that cause a particular situation to be true.
 * In step 5, we identify and name all of the objects and entities together with their particular features and attributes that are relevant to detecting the above world states and situations.
 * In step 6, we use the context of the particular task activities to establish the distances and, therefore, the resolutions at which the relevant objects and entities must be measured and recognized by the sensory processing component. This establishes a set of requirements and/or specifications for the sensor system to support each subtask activity.
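The executor sketch referenced in step 3 follows: a minimal extended finite state automaton executing a state-table of ordered production rules for a hypothetical "pass vehicle" command. The states, situations, and subcommands are illustrative assumptions, not taken from any published RCS driving implementation.

```python
# Minimal sketch of an extended FSA executing a state-table of ordered
# production rules. Situations (branching conditions) trigger transitions
# and emit output subcommands to the next lower echelon.

PASS_VEHICLE_TABLE = [
    # (state, situation) -> (next_state, output subcommand)
    (("following",      "left_lane_clear"),   ("changing_left",  "change_to_left_lane")),
    (("following",      "left_lane_blocked"), ("following",      "maintain_gap")),
    (("changing_left",  "in_left_lane"),      ("passing",        "accelerate_past")),
    (("passing",        "vehicle_cleared"),   ("changing_right", "change_to_right_lane")),
    (("changing_right", "in_right_lane"),     ("done",           "resume_cruise")),
]

def fsa_step(state, situation):
    # Ordered rule lookup: the first matching rule fires.
    for (s, cond), (next_state, subcommand) in PASS_VEHICLE_TABLE:
        if s == state and cond == situation:
            return next_state, subcommand
    return state, "continue"  # no rule fired: keep the current behavior

state = "following"
for situation in ["left_lane_blocked", "left_lane_clear",
                  "in_left_lane", "vehicle_cleared", "in_right_lane"]:
    state, subcommand = fsa_step(state, situation)
    print(state, subcommand)
```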

Software
Based on the RCS reference model architecture, NIST has developed the Real-time Control System Software Library. This is an archive of free C++, Java, and Ada code, scripts, tools, makefiles, and documentation developed to aid programmers of software for real-time control systems, especially those using the Reference Model Architecture for Intelligent Systems Design.

Applications

 * The ISAM Framework is an RCS application to the manufacturing domain.
 * The 4D-RCS Reference Model Architecture is an RCS application to the vehicle domain.
 * The NASA/NBS Standard Reference Model for Telerobot Control Systems Architecture (NASREM) is an RCS application to the space domain.