We describe an algorithm that generates a smooth trajectory (position, velocity, and acceleration at uniformly sampled instants of time) for a car-like vehicle autonomously navigating within the constraints of lanes in a road. The technique models both vehicle paths and lane segments as straight line segments and circular arcs for mathematical simplicity and elegance, which we contrast with cubic spline approaches. We develop the path in an idealized space, warp the path into real space and compute its length, generate a one-dimensional trajectory along the path length that achieves target speeds and positions, and finally warp, translate, and rotate the one-dimensional trajectory points onto the path in real space. The algorithm moves a vehicle within its lane safely and efficiently, respecting speed and acceleration limits, and operates alongside other autonomous driving functions within a carefully designed vehicle control hierarchy.
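The one-dimensional trajectory stage can be pictured as an ordinary speed-profile computation along the path length. The sketch below is only an illustration under assumed trapezoidal-profile behavior, not the paper's implementation; the class and method names (`TrapezoidalProfile`, `plan`, `Sample`) are hypothetical. It samples an accelerate/cruise/decelerate profile at uniform time steps within given speed and acceleration limits, producing the (time, position, velocity, acceleration) tuples that would then be mapped onto the path in real space.

```java
import java.util.ArrayList;
import java.util.List;

public final class TrapezoidalProfile {
    /** One sampled instant along the path length. */
    public record Sample(double t, double s, double v, double a) {}

    /**
     * Builds a profile that accelerates at +aMax, cruises at vMax, and
     * decelerates at -aMax so the vehicle stops exactly at pathLength.
     */
    public static List<Sample> plan(double pathLength, double vMax,
                                    double aMax, double dt) {
        // Peak speed is vMax unless the path is too short to reach it.
        double vPeak = Math.min(vMax, Math.sqrt(pathLength * aMax));
        double tAccel = vPeak / aMax;
        double sAccel = 0.5 * aMax * tAccel * tAccel;
        double sCruise = pathLength - 2.0 * sAccel;
        double tCruise = sCruise / vPeak;
        double tTotal = 2.0 * tAccel + tCruise;

        List<Sample> samples = new ArrayList<>();
        for (double t = 0.0; t <= tTotal; t += dt) {
            double s, v, a;
            if (t < tAccel) {                        // accelerating
                a = aMax; v = aMax * t; s = 0.5 * aMax * t * t;
            } else if (t < tAccel + tCruise) {       // cruising
                a = 0.0; v = vPeak; s = sAccel + vPeak * (t - tAccel);
            } else {                                 // decelerating
                double td = t - tAccel - tCruise;
                a = -aMax; v = vPeak - aMax * td;
                s = sAccel + sCruise + vPeak * td - 0.5 * aMax * td * td;
            }
            samples.add(new Sample(t, s, v, a));
        }
        return samples;
    }
}
```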
The Real-time Control System (RCS) Methodology has evolved over a number of years as a technique to capture task knowledge and organize it into a framework conducive to implementation in computer control systems. The fundamental premise of this methodology is that the present state of the task activities sets the context that identifies the requirements for all of the support processing. In particular, the task context at any time determines what is to be sensed in the world, what world model states are to be evaluated, which situations are to be analyzed, what plans should be invoked, and which behavior generation knowledge is to be accessed. This methodology concentrates on the task behaviors explored through scenario examples to define a task decomposition tree that clearly represents the branching of tasks into layers of simpler and simpler subtask activities. There is a named branching condition/situation identified for every fork of this task tree. These become the input conditions of the if-then rules of the knowledge set that define how the task is to respond to input state changes. Detailed analysis of each branching condition/situation is used to identify antecedent world states, and these, in turn, are further analyzed to identify all of the entities, objects, and attributes that have to be sensed to determine if any of these world states exist. This paper explores the use of the 4D/RCS methodology in some detail for the particular task of autonomous on-road driving. This work was funded under the Defense Advanced Research Projects Agency (DARPA) Mobile Autonomous Robot Software (MARS) effort (Doug Gage, Program Manager).
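As a minimal sketch of the if-then rule structure described above, the fragment below maps named branching conditions/situations to the subtask activities they select. The situation and subtask names are invented for illustration only; the paper's actual rule sets for on-road driving are far more detailed.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Each named branching condition/situation becomes the input condition of an
// if-then rule that selects the next subtask activity.
public final class TaskRules {
    private final Map<String, String> rules = new LinkedHashMap<>();

    public TaskRules() {
        rules.put("lane-clear-ahead", "follow-lane-at-target-speed");
        rules.put("slower-vehicle-ahead", "follow-vehicle-at-safe-gap");
        rules.put("intersection-approaching", "decelerate-and-evaluate-right-of-way");
        rules.put("pedestrian-in-crosswalk", "stop-before-crosswalk");
    }

    /** Returns the subtask selected by the first situation that currently holds. */
    public String selectSubtask(Predicate<String> situationHolds) {
        for (Map.Entry<String, String> rule : rules.entrySet()) {
            if (situationHolds.test(rule.getKey())) {
                return rule.getValue();
            }
        }
        return "continue-current-subtask";
    }
}
```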
Sensory processing for real-time, complex, and intelligent control systems is costly, so it is important to perform only the sensory processing required by the task. In this paper, we describe a straightforward metric for precisely defining sensory processing requirements. We then apply that metric to a complex, real-world control problem, autonomous on-road driving. To determine these requirements, the system designer must precisely and completely define 1) the system behaviors, 2) the world model situations that the system behaviors require, 3) the world model entities needed to generate all those situations, and 4) the resolutions, accuracy tolerances, detection timing, and detection distances required of all world model entities.
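A minimal sketch (not taken from the paper) of how the per-entity requirements enumerated above might be recorded is shown below: each world model entity that a behavior depends on carries the resolution, accuracy tolerance, detection timing, and detection distance the task demands of it. The field names and example values are illustrative assumptions only.

```java
public record EntitySensingRequirement(
        String entityName,               // e.g. "lane-marking", "stop-line", "lead-vehicle"
        double resolutionMeters,         // smallest feature size that must be resolved
        double accuracyToleranceMeters,  // allowable position error
        double detectionLatencySeconds,  // how quickly a detection must be reported
        double detectionDistanceMeters   // how far away the entity must be detected
) {
    /** Hypothetical requirement for detecting a stop line while driving at speed. */
    public static EntitySensingRequirement exampleStopLine() {
        return new EntitySensingRequirement("stop-line", 0.05, 0.10, 0.1, 40.0);
    }
}
```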
Virtual objects in a web-based environment can be interfaced to and controlled by external real world controllers. A Virtual Reality Modeling Language (VRML) inspection cell was created that models a real-time inspection system. The testbed consists of a Cordax Coordinate Measuring Machine (CMM), a vision system for determining the part position and orientation, and an open architecture controller. Because of the open architecture, data such as the probe position and the part position and orientation can be obtained from the controller to drive a VRML model of the system. The VRML CMM is driven using a socket connection between the collaborator's web browser and the real world controller. The current probe position, which is stored in a world model buffer in the controller, is collected by a Java applet running on the web page. The applet updates the VRML model of the CMM via the External Authoring Interface of the VRML plug-in. The part position and orientation are obtained from the vision system, and the part in the VRML model is updated to reflect its real-world position and orientation. The remote access web site also contains a client-controlled pan/tilt/zoom camera, which sends video to the client, allowing the user to monitor a remote inspection with only a PC and an Internet connection.
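A minimal sketch of the client-side polling loop described above is given below: a client opens a socket to the controller, reads the current probe position from the controller's world model buffer, and hands it to whatever updates the VRML scene. The host name, port, and line-oriented "x y z" message format are assumptions for illustration, not the system's actual protocol, and the scene-update call is only a placeholder for the real External Authoring Interface path.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

public final class ProbePositionPoller {
    public static void main(String[] args) throws Exception {
        // Hypothetical controller endpoint; in the real system the applet
        // connects back to the open architecture controller's socket server.
        try (Socket socket = new Socket("cmm-controller.example.org", 5001);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Assumed message format: three whitespace-separated coordinates.
                String[] xyz = line.trim().split("\\s+");
                double x = Double.parseDouble(xyz[0]);
                double y = Double.parseDouble(xyz[1]);
                double z = Double.parseDouble(xyz[2]);
                updateVrmlProbe(x, y, z);
            }
        }
    }

    /** Placeholder for the call that moves the probe node in the VRML scene
     *  (in the real system this goes through the plug-in's External
     *  Authoring Interface). */
    private static void updateVrmlProbe(double x, double y, double z) {
        System.out.printf("probe at (%.3f, %.3f, %.3f)%n", x, y, z);
    }
}
```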