A relatively simple and inexpensive near-infrared (IR) ranging system is being developed for mobile robot navigation and collision avoidance. Active triangulation ranging is employed with about 5 degree spatial resolution over a nominal field-of-regard of 100° in azimuth and 30° in elevation. Under typical indoor conditions, fairly accurate target detection and range measurements are obtained to about 8 meters in the dark and about 5 meters in the light. No mechanical scanning is employed, and the entire field-of-regard can be scanned in 0.1 to 1 second, depending upon the required accuracy, allowing range measurements to be taken in real-time while the robot is in motion. The transmitter consists of a number of high-power near-IR light-emitting diodes (LEDs) arranged in a partial spherical array behind a spherical lens, so as to produce a corresponding number of narrow, evenly spaced beams that interrogate the field-of-regard. The LEDs in the array are sequentially activated at a particular repetition rate, and a synchronous receiver detects reflected energy from targets within its field-of-view (FOV). The receiver consists of two identical units, each covering a FOV of about 50° by 50°. Each unit contains a Fresnel lens, an optical bandpass filter, a lateral-effect position-sensing detector, and the associated electronics to process and digitize the analog signals. The location of the centroid of reflected energy focused on the position-sensing detector is a function of the particular beam that is active and the range to the target being illuminated by that beam. The position signals from the detector (resulting from the sequential activation of LEDs in the transmitter) are collectively processed by a dedicated microcomputer to determine the ranges to valid targets throughout the sensor's FOV. Target azimuth and elevation are a function of the LED position in the transmitter array that is active at the time of detection. A look-up table derived from calibration data is used to perform the position-to-range conversions and to compensate for receiver nonuniformities. Detected-target ranges can be compared to a previously stored range-map of the area under surveillance for use in navigational or collision-avoidance algorithms.
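As a rough illustration of the position-to-range conversion described above, the sketch below interpolates a per-beam calibration table to turn a position-sensing-detector (PSD) centroid reading into a range. This is not the authors' code: the table values, the one-beam scope, and the piecewise-linear form are assumptions made for illustration.

```python
import bisect

# Hypothetical calibration for one LED beam: PSD centroid position (mm)
# recorded at known target ranges (m). A real table would also compensate
# receiver nonuniformities, per beam, as the abstract notes.
calib_positions = [1.2, 2.5, 4.0, 6.1, 9.8]   # PSD centroid, mm (ascending)
calib_ranges    = [8.0, 5.0, 3.0, 2.0, 1.0]   # target range, m

def position_to_range(x_mm):
    """Piecewise-linear interpolation of the calibration table."""
    i = bisect.bisect_left(calib_positions, x_mm)
    i = min(max(i, 1), len(calib_positions) - 1)
    x0, x1 = calib_positions[i - 1], calib_positions[i]
    r0, r1 = calib_ranges[i - 1], calib_ranges[i]
    t = (x_mm - x0) / (x1 - x0)
    return r0 + t * (r1 - r0)

print(position_to_range(3.0))  # about 4.3 m for this made-up table
```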
This paper describes an actual laser light striper and associated image processing hardware for use onboard a mobile robot. The sensor is used to locate soda cans in the robot's immediate environment, and to guide the robot toward a suitable grasping position. We explain the design tradeoffs involved in the construction of a portable sensor, present an alignment and approach strategy for grasping objects, and discuss the system's overall performance and limitations.
Recent research in autonomous navigation has demonstrated the feasibility of designing robotic systems that can successfully carry out relatively simple, yet intelligent, tasks such as autonomous, obstacle-free road-following under certain environmental conditions. While encouraging, the technology developed as part of road-following research addresses only a small portion of the immense research efforts required to solve more complex problems of practical value, such as autonomous execution of reconnaissance missions in unknown territory. To carry out such complex tasks, autonomous systems will need to constantly sense and perceive various aspects of their local environment and create an internal representation that is accurate enough to enable successful mission planning and execution. In this paper, we present an overview of passive machine perception research being carried out in our laboratory. This research is targeted primarily at autonomous navigation applications. We review ongoing research in the areas of binocular stereo range detection, motion detection, and image segmentation. Using a custom-configured DATACUBE pipeline processor we match 256 x 256 pixel stereo images in one second and perform segmentation of 128 x 128 pixel images in 15 seconds on a TAAC-1 board.
This paper reports on the principles of an on-board electro-optical system for the guidance of an autonomous mobile robot. Some of the signal processing adopted here was directly inspired by natural visual systems, in particular by the compound eye of the fly. The visual system has compound optics with a panoramic field but relatively low spatial resolution. It makes use of elementary motion detectors (EMDs) to estimate the distance to objects from the optic flow. Each EMD constitutes one mesh of an analog network. It measures the relative angular velocity of any contrast point that passes across its receptive field as a result of the robot's own motion and evaluates the radial distance to this contrast point from the motion parallax. For this purpose, the robot makes translation steps at constant speed during each visual acquisition. An obstacle avoidance algorithm is implemented on a parallel, analog network. This network integrates the numerous data provided by the EMDs and controls the drive motor and steering motor of the robot platform in real time. Other navigation modules may be added without altering the basic hardware architecture of the system. For example, a target detector has been associated with the system. No stringent hypothesis needs to be made as to the shape of objects in the environment. Both the visual processing principles and the obstacle avoidance strategy are described.
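The distance evaluation attributed to each EMD can be summarized by the standard motion-parallax relation; the sketch below is our hedged reading of the abstract, not the authors' analog circuitry. During a translation step at speed v, a contrast point at azimuth theta from the direction of travel and radial distance D sweeps an angular velocity omega = v * sin(theta) / D, which can be inverted for D.

```python
import math

def range_from_parallax(v, theta_deg, omega_deg_per_s):
    """Radial distance to a contrast point from its measured angular velocity.

    Assumes pure translation at speed v (m/s) during the acquisition step.
    """
    theta = math.radians(theta_deg)
    omega = math.radians(omega_deg_per_s)
    return v * math.sin(theta) / omega

# A point 60 degrees off the heading sweeping 5 deg/s while the robot
# translates at 0.2 m/s lies about 2 m away.
print(range_from_parallax(0.2, 60.0, 5.0))
```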
This paper addresses uses of fractals, fuzzy sets and image representation as they pertain to navigation of an autonomous mobile intelligence unit operating in an unstructured natural environment. Sensor fusion is achieved by combining variable rate and resolution sensor data into a model of the environment with which to do navigational planning. The major sensor is assumed to be a high duty cycle laser imaging rangefinder (LIRF). A secondary sensor consisting of a FLIR is also used. The LIRF is assumed to be mounted on a basic Autonomous Mobile Platform, a vehicle about the size of a deer and having quadrupedal locomotion rather than wheels or tracks.
This paper presents a methodology to include obstacles moving with uncertainty in path planning algorithms. Around each moving obstacle, a collision-zone is defined, delimiting the region of high collision likelihood. These zones are treated as stationary obstacles providing the input to a path planning algorithm. Samples of the moving obstacles' positions are assumed to be available. Three models of motion for moving obstacles are considered: 1) obstacles moving randomly, 2) obstacles whose motion is structured but has random parameters, 3) obstacles whose motion is predictable as a function of time. Simulation examples yielding collision-zones are presented.
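One plausible construction of such a zone from position samples (assumed here for illustration, not taken from the paper) is a circle covering the sampled spread, which a planner can then treat as a stationary obstacle:

```python
import math
import statistics

def collision_zone(samples, k=2.0):
    """samples: [(x, y)] sampled positions of one moving obstacle.

    Returns (center, radius) of a circular zone; the mean-plus-k-sigma
    radius is an illustrative choice, not the authors' definition.
    """
    xs = [p[0] for p in samples]
    ys = [p[1] for p in samples]
    cx, cy = statistics.fmean(xs), statistics.fmean(ys)
    dists = [math.hypot(x - cx, y - cy) for x, y in samples]
    radius = statistics.fmean(dists) + k * statistics.stdev(dists)
    return (cx, cy), radius
```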
A fundamental problem in robotics is that of exploring an unknown environment. Most current approaches to exploration make use of a global distance metric that is used to relate past sensory experiences to local measurements. Rather than rely on such an assumption we consider the more general problem of exploration without a distance metric, as is typical of exploring using only visual information: we propose robot exploration as graph building. In earlier papers we have shown that it is not possible for a robot to successfully explore a metricless environment without aid, but that by augmenting the robot with a single marker (which can be put down or picked up at will) it is possible for a robot to map its environment [1]. In this paper we present the extension of our algorithm to the case of k markers, and comment on the resulting decrease in time for exploration. By defining a minimal model for the world and the sensory ability of the robot, we separate spatial reasoning from visual perception. In this paper we deal only with the spatial reasoning component of the exploration problem, and assume that visual perception can identify the marker and the edges incident on the current location.
World modeling and path planning techniques for a mobile robot that can navigate through a three-dimensional world are developed. A crystal map (an extension of the 2D meadow map representation) serves as the world model. A free-space decomposition algorithm is described that produces this representation. Four variations of the A* search technique are applied to the crystal map, producing paths for a robot that can fly and/or crawl through the modeled world. Path improvement strategies are also described. Simulation studies indicate the feasibility of these methods.
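As a point of reference for the search step, here is a generic A* over an abstract graph of free-space cells. Everything beyond the textbook algorithm is a placeholder: the crystal-map structure, the cost and heuristic functions, and the paper's four variants are not reproduced.

```python
import heapq
from itertools import count

def a_star(neighbors, cost, heuristic, start, goal):
    """neighbors(n) -> iterable of nodes; cost(a, b), heuristic(n) -> float."""
    tie = count()                       # tiebreaker so nodes never compare
    frontier = [(heuristic(start), next(tie), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:           # already expanded with a better g
            continue
        came_from[node] = parent
        if node == goal:                # reconstruct path by walking parents
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in neighbors(node):
            g2 = g + cost(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + heuristic(nxt), next(tie), g2, nxt, node))
    return None
```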
Frame walkers are a class of mobile robots that are robust and capable mobility platforms. Variations of the frame walker robot are in commercial use today. Komatsu Ltd. of Japan developed the Remotely Controlled Underwater Surveyor (ReCUS) and Normed Shipyards of France developed the Marine Robot (RM3). Both applications of the frame walker concept satisfied robotic mobility requirements that could not be met by a wheeled or tracked design. One vehicle design concept that falls within this class of mobile robots is the walking beam. A one-quarter scale prototype of the walking beam was built by Martin Marietta to evaluate the potential merits of utilizing the vehicle as a planetary rover. The initial phase of prototype rover testing was structured to evaluate the mobility performance aspects of the vehicle. Performance parameters such as vehicle power, speed, and attitude control were evaluated as a function of the environment in which the prototype vehicle was tested. Subsequent testing phases will address the integrated performance of the vehicle and a local navigation system.
The Autonomous Planetary Rover Project at Carnegie Mellon University is investigating the use of geometric information obtained from terrain elevation maps for mobile robot planning and control. We review how surface geometry has been characterized by surface roughness parameters, and why several of these parameters must be combined to form a vector roughness measurement. Next we propose a technique to localize and extract the intrinsic roughness from terrain elevation maps, and show how this can be used to characterize terrain.
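A hedged sketch of what a vector roughness measurement over an elevation-map window might contain, using standard surface-roughness parameters (RMS height, RMS slope, peak-to-valley). The paper's actual parameter set and its localization technique may differ.

```python
import numpy as np

def roughness_vector(z, cell=0.1):
    """z: 2-D elevation window (m); cell: grid spacing (m).

    Returns an illustrative roughness vector combining several scalar
    parameters, since no single scalar characterizes terrain well.
    """
    zc = z - z.mean()
    rms_height = float(np.sqrt((zc ** 2).mean()))
    gy, gx = np.gradient(z, cell)                  # local slope components
    rms_slope = float(np.sqrt((gx ** 2 + gy ** 2).mean()))
    peak_to_valley = float(z.max() - z.min())
    return np.array([rms_height, rms_slope, peak_to_valley])
```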
Rocks are natural landmarks for navigation by mobile robots through natural terrain, particularly rocky terrains. In order to use rocks as landmarks, the robot must be able to recognize such landmarks. For the robot to navigate effectively, it must also be able to automatically construct models of rocks to be used as new landmarks. This paper presents an approach to the automatic building of qualitative 3-D models for the rocks world: a world in which accurate quantitative models of objects are hard to obtain. The model is a graph in which the nodes represent surface patches on the rock and the arcs represent the adjacency relationships between them. The shapes of surface patches are qualitatively described using a small set of possible types. With a camera mounted on a robot arm, a model is constructed from multiple views of the object. Starting with an initial view, the partial knowledge extracted is used for planning new camera positions. These new positions are needed for acquiring more knowledge about the object, e.g., the shapes and adjacency relationships of surfaces that are on the other side of the rock or that are only partially visible. As new knowledge is acquired, the model is updated and new camera positions are planned as necessary. The process is repeated until no additional knowledge can be acquired from the new positions. An example is used to illustrate how a rock model is built. The robustness and weaknesses of the approach are discussed. Suggestions for improvements are also included.
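A minimal data-structure sketch of the qualitative model as described: typed surface patches as nodes, adjacency as arcs. The type vocabulary below is an assumption; the paper's set of qualitative shape types is not given in the abstract.

```python
PATCH_TYPES = {"planar", "convex", "concave", "ridge", "valley"}  # assumed

class RockModel:
    """Graph of qualitatively typed surface patches (nodes) and adjacency (arcs)."""

    def __init__(self):
        self.patch_type = {}        # patch id -> qualitative shape type
        self.adjacent = {}          # patch id -> set of adjacent patch ids

    def add_patch(self, pid, ptype):
        assert ptype in PATCH_TYPES
        self.patch_type[pid] = ptype
        self.adjacent.setdefault(pid, set())

    def add_adjacency(self, a, b):
        self.adjacent[a].add(b)
        self.adjacent[b].add(a)
```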
The high level of mobility obtained with a four-articulated-track robot has led the authors to develop theoretical models for analyzing the locomotion performance of such platforms. The paper reports on the basic assumptions and the kinematic and static models which will be used either in the control system of the robot or for simulation purposes.
A technique has been developed for estimating the computational load required for a complex robot, and it has been applied to making computational load estimates for a Mars rover. This technique, while providing only gross approximations, is presently superior to the intuitive approach used traditionally. In addition, it is defensible, traceable and easily modified. Several computational load estimates for various computational functions of the Mars rover are provided.
A rule-based command language has been developed for the control of a semi-autonomous planetary rover. This language is embedded within an advanced control concept which uses distributed blackboards, synchronous simulations of the rover and planetary conditions and extensive activity aids. Distributed blackboards make Earth-rover coordination easier and make extended autonomous operations more realistic. In addition, rule-based program flow control may be more effective than classical spaceflight programming models for rover control. Both of these capabilities take advantage of a highly autonomous and capable rover to make the exploration of Mars efficient and cost effective.
A high resolution Mars surface model is being developed for simulation of vehicle dynamics, mobility and navigation capabilities. The model provides a topographical representation of surface features and is suitable for interface with dynamic simulations of Mars Rover vehicles, including models of wheel-soil interaction and vision systems. Portions of the surface model have been completed and can be interfaced with other portions of an overall vehicle performance assessment system also being developed for the Mars Rover program.
A computer model has been developed as a tool for evaluating the use of structured light systems for local navigation of the Mars Rover. The system modeled consists of two laser sources emitting flat, widened beams, with a single camera to detect the stripes on the terrain. The terrain elevation extracted from the stripe information is used to update a local terrain map, which is processed to determine impassable regions. The system operates with the beams and camera fixed, except that the beams are periodically panned vertically to completely refresh the local map. An efficient surface removal algorithm determines the points on the terrain surface hit by rays in the bundle. The power of each reflected ray that falls on each pixel of the camera is computed using well-known optical laws.
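The elevation-extraction step in such a system reduces to ray-plane intersection: the camera ray through a detected stripe pixel is intersected with the known laser light plane. The sketch below shows that step under assumed coordinate conventions; it is not the model's actual implementation.

```python
import numpy as np

def stripe_pixel_to_point(pixel, K, cam_origin, R, plane_n, plane_d):
    """Intersect the viewing ray of `pixel` with the laser plane n . x = d.

    K: 3x3 camera intrinsics; R: camera-to-world rotation (3x3);
    cam_origin: camera center in world frame; plane_n, plane_d: laser
    plane in world frame. All arguments are numpy arrays (assumed frames).
    """
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    ray_world = R @ ray_cam                              # rotate into world
    t = (plane_d - plane_n @ cam_origin) / (plane_n @ ray_world)
    return cam_origin + t * ray_world                    # 3-D terrain point
```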
A Multi-layer Connection Network (MCN) is used to control a one-legged mobile robot. The network has no knowledge of the dynamics of the robot, and learns to develop a control strategy through trial and error. Our results are presented in the form of computer simulations that demonstrate the ability of the MCN to devise a set of proper control signals that develop stable running on flat terrain.
This paper describes a visually-guided robotic system that tracks a moving object in space. In this project two six-degree-of-freedom robots are involved: an "IBM Clean Room" and a "Puma 562". A video CCD camera is attached to the end effector of the gantry IBM robot. The other robot holds a light source in its gripper. We show how the IBM robot dynamically tracks the moving light source in 3D. A personal computer serves as the real-time image processor and the controller for the closed loops. The search for the moving object takes place in a small square window; in this way the computation time is kept short. Three features are extracted from the images: the change in the object's area, and the two distances (x and y) from the window to the center of the image. Using an appropriate controller, three signals are produced to directly control the independent X, Y and Z axes of the IBM robot. Several problems are addressed, in particular the effects of the "digitized" object in the image plane, the control loops, sampling rate, stability and performance. A six-minute video tape demonstrates the results of this project.
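A schematic sketch of the control idea, with invented gains and sign conventions: the three image features named in the abstract (area change and the x/y offsets of the tracking window from the image center) drive the three translational axes with simple proportional control.

```python
def axis_commands(area, area_ref, dx, dy, kx=0.1, ky=0.1, kz=0.5):
    """Return (vx, vy, vz) axis velocity commands from image features.

    area_ref: object area at the desired standoff distance (assumed known);
    dx, dy: window offset from the image center, in pixels.
    """
    vz = kz * (area - area_ref) / area_ref   # object looks too big: back off
    return kx * dx, ky * dy, vz
```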
This work discusses a new method to extract structural information from a sequence of images. The sequence must represent a fixed environment whose changes are caused only by camera motion. The proposed algorithm operates on an estimate of the two-dimensional velocity field, the so-called optical flow. When the camera undergoes pure translation, the result is a depth map of the environment through which the camera has moved.
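Under the stated pure-translation condition, per-pixel depth follows from the translational flow equations of a pinhole camera: u = (x*Tz - f*Tx)/Z and v = (y*Tz - f*Ty)/Z. The sketch below solves these for Z by least squares; it is a standard formulation with the translation T and focal length f assumed known, not the paper's exact algorithm.

```python
def depth_from_flow(x, y, u, v, T, f):
    """x, y: image coordinates; u, v: measured flow; T = (Tx, Ty, Tz).

    Solves u = a/Z, v = b/Z jointly: with w = 1/Z, the least-squares
    estimate is w = (a*u + b*v) / (a*a + b*b).
    """
    Tx, Ty, Tz = T
    a = x * Tz - f * Tx
    b = y * Tz - f * Ty
    w = (a * u + b * v) / (a * a + b * b)
    return 1.0 / w if w != 0 else float("inf")
```

Note that with T known only up to scale, the recovered depth map is likewise defined up to a global scale factor.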
This paper presents a technique of using the Hough space for solving the correspondence problem in stereo matching. This technique is proposed for depth recovery in our robot vision system, which will work for a mobile robot in an office environment. It is shown that the line matching problem in the image space can be readily converted into a point matching problem in the Hough (ρ-θ) space. Dynamic programming can be used for searching for the optimal matching, now in the Hough space. The computational cost for dynamic programming in the Hough space is O(MN), where M and N are the numbers of lines in the left and right images for each θ. Our preliminary results show that the proposed method works well on several sets of test images (Rubik's cubes and corridor scenes).
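The O(MN) dynamic program can be illustrated as a simple ordered alignment of the ρ values for one θ bin. The match cost and gap penalty below are assumptions for the sketch; the paper's cost function is not given in the abstract.

```python
def match_rhos(left, right, gap=1.0):
    """Align two sorted lists of rho values; returns total cost in O(M*N)."""
    M, N = len(left), len(right)
    D = [[0.0] * (N + 1) for _ in range(M + 1)]
    for i in range(1, M + 1):
        D[i][0] = i * gap
    for j in range(1, N + 1):
        D[0][j] = j * gap
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            D[i][j] = min(
                D[i - 1][j - 1] + abs(left[i - 1] - right[j - 1]),  # match
                D[i - 1][j] + gap,      # left-image line left unmatched
                D[i][j - 1] + gap)      # right-image line left unmatched
    return D[M][N]
```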
The purpose of this study was to develop image-processing techniques for detecting lane boundaries and vehicle tracking using an on-board video camera. It resulted in several algorithms which process the image of the road scene to extract the position of lane markers and estimate the position of the lane boundaries and the position of the vehicle within the lane. The following algorithms were developed to process the camera's output: a Hough-transform algorithm, a region-tracing algorithm, and a vehicle-tracking algorithm. These algorithms were successfully tested on 3000 real road images, including some with missing and discontinuous markers. This capability to estimate lane boundaries will play a key role in the development of advanced automotive functions such as collision warning, collision avoidance and automatic vehicle-guidance.
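For concreteness, here is a toy version of the Hough-transform step: accumulate (ρ, θ) votes from edge pixels and keep the strongest lines as lane-marker candidates. Bin sizes, resolution, and the number of candidates kept are invented; the paper's algorithm details are not reproduced.

```python
import math
from collections import Counter

def hough_lines(edge_points, rho_step=2.0, theta_steps=90, top_k=4):
    """edge_points: [(x, y)] edge pixels. Returns top_k (rho, theta) lines."""
    acc = Counter()
    thetas = [math.pi * k / theta_steps for k in range(theta_steps)]
    for x, y in edge_points:
        for ti, th in enumerate(thetas):
            rho = x * math.cos(th) + y * math.sin(th)
            acc[(round(rho / rho_step), ti)] += 1   # vote in quantized bins
    return [(r * rho_step, thetas[ti]) for (r, ti), _ in acc.most_common(top_k)]
```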
Sensing lane boundaries is a core capability for advanced automotive functions such as collision warning, collision avoidance and automatic vehicle-guidance. Part I of this study described special image-processing algorithms for the detection of lane boundaries and vehicle tracking, using images from a video camera. Part II of this study describes a new algorithm for detecting lane boundaries using template matching. This technique was selected because of its speed and its ability to include additional knowledge--two characteristics which are required for real-time, on-board vehicle applications. The algorithm has been tested successfully on over 3000 frames of videotape from interstate highways I-75 and I-94.
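A hedged sketch of what template matching along a scanline could look like: slide a short intensity template across one image row and pick the offset with the best normalized correlation. The row-wise search and template form are our assumptions, not the paper's formulation.

```python
import numpy as np

def best_match(row, template):
    """row, template: 1-D float arrays. Returns (column, score) of the best
    normalized-correlation placement of template within row."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    best = (0, -np.inf)
    for c in range(len(row) - len(template) + 1):
        w = row[c:c + len(template)]
        score = float(t @ ((w - w.mean()) / (w.std() + 1e-9))) / len(t)
        best = max(best, (c, score), key=lambda p: p[1])
    return best
```

Speed, which the abstract cites as a selection criterion, comes from searching only short rows near the predicted marker position rather than the whole frame.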
Map data can provide contextual cues that enhance ATR system performance. Maps, however, sometimes lack scene cues such as secondary trails. In these instances, trail detection requires map-independent approaches. Therefore, we have developed a feature-based trail detection algorithm. Its low-level function extracts features (edges, lines) that characterize trail geometry. A high-level function evaluates scene features, ranks them according to their trail-like attributes, and applies rules. Rules interpret complicated scenarios such as partial trail occlusion and high trail curvature. The algorithm demonstrates robust performance on both FLIR and TV imagery.
We developed an image processing system for an autonomous vehicle. This system can detect white lines drawn on a paved road as well as obstacles. It is based on the video-rate image processing system IDATEN that we have developed; it can therefore perform white line detection and obstacle detection at video rate. White lines are detected as follows: first, an input image is converted to a binary image and segmented into regions; then, white lines are identified from the size and trace of each region. An obstacle is detected as follows: first, edge images are extracted from the input stereo images, and the distances of matching portions between the edge images are computed; then, an obstacle is identified from the detected distance and the positions of the white lines.
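A rough software sketch of the white-line stage, with invented thresholds and scipy's connected-component labeling standing in for IDATEN's video-rate hardware:

```python
from scipy import ndimage  # connected-component labeling stands in for IDATEN

def white_line_regions(gray, thresh=200, min_area=50, min_elong=3.0):
    """gray: 2-D numpy intensity image. Returns bounding slices of regions
    whose size and elongation are consistent with painted lane lines."""
    binary = gray > thresh                      # binarize the input image
    labels, n = ndimage.label(binary)           # split into connected regions
    keep = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        area = int((labels[sl] == i).sum())
        if area >= min_area and max(h, w) >= min_elong * min(h, w):
            keep.append(sl)
    return keep
```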
This paper presents a low-level visual strategy for positioning a mobile robot over short distances (6 feet) using the texture of an artificial landmark. The relative depth of the robot can be recovered from the number of texture-generated edges detected in the landmark region. This technique can be extended to recover orientation as well as depth. In that case, the ratio of the number of edges per unit area on one side of the region to the other determines the orientation. The orientation, taken together with the total number of edges, determines the depth. The use of the number of edges per unit area as the metric enables this strategy to work well under variations in the shape and size of the region, including mild obscurations. Experiments show that depth can be recovered from an appropriate texture with an average error of 5.7% over a range of 73 to 10 inches. If the landmark is not perpendicular to the camera, the orientation can be recovered with an average error of 9.0° and depth with 8.0% over a range of 84 to 60 inches. Motivation and experiments are discussed, including the issues in designing an appropriate texture for an application. Results with our mobile robot using a motor control strategy similar to the controlled movement of the docking behavior are presented.
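A minimal sketch of the idea, with invented calibration numbers: the count of resolved texture edges grows as the robot closes on the landmark, so a monotone calibration maps edge count to depth, and the left/right edge-density asymmetry hints at orientation.

```python
def estimate_depth(edge_count, calib):
    """calib: [(edge_count, depth_inches)] pairs, edge_count ascending.

    Piecewise-linear lookup; values would come from an offline calibration
    run against the chosen landmark texture.
    """
    pts = sorted(calib)
    for (c0, d0), (c1, d1) in zip(pts, pts[1:]):
        if c0 <= edge_count <= c1:
            t = (edge_count - c0) / (c1 - c0)
            return d0 + t * (d1 - d0)
    return pts[0][1] if edge_count < pts[0][0] else pts[-1][1]

def estimate_orientation(left_density, right_density):
    """Signed asymmetry in edges per unit area across the landmark region."""
    return (left_density - right_density) / (left_density + right_density)
```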
A symbolic neural net is described. It uses a multichannel symbolic correlator to produce input neuron data to an optical neural net production system. It has use in obstacle avoidance, navigation, and scene analysis applications. The shift-invariance and ability to handle multiple objects are novel aspects of this symbolic neural net. Initial simulated data are provided and symbolic optical filter banks are discussed. The neural net production system is described. A parallel and iterative set of rules and results for our case study are presented. Its adaptive learning aspects are noted.
Effective sensor integration requires knowledge of the characteristics of all sensor subsystems. This type of meta-knowledge can originate from theoretical models of the physical processes involved in the sensing, from actual testing of the sensory system or from a combination of both. This paper describes the collection and analysis of experimental data from an actual sonar ring. The effective beam pattern is mapped and modeled for the eight possible setting combinations of pulse width and gain profiles, using three different sizes of targets. The beam cross sectional characteristics are also analyzed to show the effective signal strength and its effect upon error in the depth readings. The performance of the system is highly dependent upon surface texture and orientation, and other tests of the sonar ring illustrate the types of artifacts which arise in the actual use of the system. The test results can be used to provide better certainty values in certainty grid representations, or used to build a boundary representation from a composite scan which integrates the data from the scans at different settings. The test results are shown graphically.
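One way the mapped beam pattern could feed certainty-grid updates, sketched under our own assumptions (cosine beam profile, fixed evidence increments): cells along the beam short of the echo lose certainty, cells near the measured range gain it, both weighted by the profile.

```python
import math

def update_grid(grid, pose, r_meas, half_beam=math.radians(12.5),
                cell=0.05, d_occ=0.10):
    """grid: dict[(i, j)] -> certainty in [-1, 1]; pose: (x, y, heading).

    r_meas: measured sonar range (m). Profile, increments, and beam width
    are illustrative stand-ins for the experimentally mapped values.
    """
    x0, y0, hdg = pose
    n_rays, steps = 11, int(r_meas / cell)
    for k in range(n_rays):
        ang = hdg - half_beam + 2.0 * half_beam * k / (n_rays - 1)
        w = math.cos((ang - hdg) / half_beam * math.pi / 2.0)  # beam profile
        for s in range(1, steps + 1):
            r = s * cell
            ij = (int((x0 + r * math.cos(ang)) / cell),
                  int((y0 + r * math.sin(ang)) / cell))
            delta = 0.5 * w if abs(r - r_meas) <= d_occ else -0.2 * w
            grid[ij] = max(-1.0, min(1.0, grid.get(ij, 0.0) + delta))
```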
We present a method for navigation and environment learning based on active sensing using sonar and a compass. The method consists of a set of incrementally designed behaviors for extracting environment features from the sensory data. The features are stored in a totally distributed list of augmented finite state machines which serves as a decentralized world model.
In this paper, a novel approach to solve the mobile robot path planning problem in an unknown environment is presented. Inherently, the obstacle information from robot perception is contaminated with uncertainties, and thus the acquired obstacle knowledge for path planning needs to be updated dynamically. Therefore, the development of an adaptive path planning scheme capable of determining a desired path with uncertain and incomplete obstacle knowledge is necessary. We use the concept of traversability vectors to analyze the spatial relations between the robot and obstacles in the task environment. These analyzed relations are then used to determine the obstacles that must be bypassed by the robot, and the ways to bypass them. Dynamically changing obstacle knowledge can be accommodated by replanning the path whenever a change is reported. The proposed scheme works efficiently because it eliminates the exhaustive search process often required by previous approaches. We have implemented a computer program to simulate the proposed planning scheme. A graphical representation of robot motions guided by the planned paths illustrates the features of the presented work.
Traditional planning and programming techniques for mobile robot control perform poorly in unknown and unstructured environments. The successful implementation of a control strategy for an autonomous mobile robot is presented in which the motion of the vehicle is based strictly upon the integrated response of multiple uncoupled primitive reflexive behaviors that incorporate no planning. We previously demonstrated the resulting motion of a robot based upon this approach using computer simulation. Those results show that such a robot is capable of performing many relatively complex tasks in unknown environments with only a limited set of such primitive behaviors. In this paper, the hardware and software implementation issues required to bring these concepts into reality on an actual machine are discussed. These issues include range sensor interfacing, communications between multiple on-board processors, real-time control within an object-oriented environment, robot safety, and robustness in the presence of sensor error. The resulting motion of an actual mobile robot, Scarecrow, is then compared with the simulation results for a number of different higher-level behaviors. The observed behavior was found to be similar to that predicted by simulation, despite significant sensor limitations.
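A schematic sketch, not Scarecrow's actual software: each primitive reflexive behavior maps the latest range readings to a (speed, turn) vote, and the vehicle command is their weighted combination. The behaviors, weights, and combination rule here are illustrative assumptions.

```python
def avoid(ranges):
    """Turn away from the nearest obstacle; slow down as it gets close."""
    i = min(range(len(ranges)), key=lambda k: ranges[k])
    turn = 1.0 if i < len(ranges) // 2 else -1.0   # obstacle left: turn right
    speed = min(1.0, ranges[i] / 2.0)
    return speed, turn * max(0.0, 1.0 - ranges[i] / 2.0)

def wander(ranges):
    return 1.0, 0.0                                # default: drive straight

def combine(behaviors, weights, ranges):
    """Weighted average of the behaviors' (speed, turn) votes."""
    votes = [b(ranges) for b in behaviors]
    total = sum(weights)
    speed = sum(w * v[0] for w, v in zip(weights, votes)) / total
    turn = sum(w * v[1] for w, v in zip(weights, votes)) / total
    return speed, turn

print(combine([avoid, wander], [2.0, 1.0], [2.5, 0.6, 3.0, 3.0]))
```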
Path planning can be grossly defined as the problem of reaching a goal from a starting position, avoiding collisions and satisfying one or more optimality criteria. A prerequisite to such a plan is the availability of an occupancy map, either as a priori information or generated on-line. Recent work has shown that such information can at best be obtained within a probabilistic framework, hence the exact occupancy status is never known with absolute confidence. This paper presents a formal framework for formulating path planning under uncertainty. It is shown that paths compete not just on the basis of physically measurable parameters but also on the grounds of collision risk. Circumstances emerge that require formulating the underlying subjective trade-offs among competing paths, with risk as an added element. A set of experimental results shows the actual implementation of the proposed path planner inside a certainty grid.
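The trade-off can be made concrete as a cost mixing path length with collision risk read from the certainty grid. The linear weighting below is an assumption made for illustration, not the paper's formulation.

```python
def path_cost(cells, occ_prob, step_len=1.0, lam=10.0):
    """cells: sequence of grid cells along a candidate path;
    occ_prob: dict cell -> P(occupied); lam: assumed risk weight.

    Risk is the probability that at least one traversed cell is occupied,
    assuming (for the sketch) independent cell occupancies.
    """
    length = step_len * len(cells)
    p_safe = 1.0
    for c in cells:
        p_safe *= 1.0 - occ_prob.get(c, 0.0)
    risk = 1.0 - p_safe
    return length + lam * risk
```

Two candidate paths can then be compared by this single cost; varying lam expresses the subjective trade-off between a short path and a safe one.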
The behavior of a system can be defined as its trajectory in state space. A system's behavior comes about through the interaction of its control law with the plant dynamics. In systems where the plant dynamics vary considerably a constant control law can lead to varying behaviors. A more consistent operation of a system can be attained by specifying the desired behavior and then, based on the current plant dynamics, selecting the appropriate control law. Assuming that only the lowest level of a hierarchical controller is responsible for selecting the appropriate control law, the rest of the hierarchy needs only to represent and operate on abstracted state spaces and behaviors. Here a hierarchical control system is presented that operates as a hierarchy of behaviors, rather than control laws, to achieve adaptive real-time control performance. A planning system at each level of the hierarchy composes sequences of behaviors that will implement a more abstract behavior specified by the next higher level. The first intermediate goal state generated by a plan is passed to the next lower level of the hierarchy as the goal state, and the planning sequence repeated. A set of relations is used to translate between the state abstractions at different levels. At the bottom level the behavior is mapped into a control law that will achieve the behavior when applied to the current plant dynamics. The system has been prototyped and demonstrated on a wheeled mobile robot platform with sonar range sensing.
Real-time, three-dimensional television systems have already found applications in the guidance of remotely controlled robots and operations carried out under tele-manipulator control. This paper presents an investigation into the performance of a 3-D television system and its dependence on various parameters. It is shown that the region of space which can be displayed in three dimensions without causing undue eyestrain to the observer does not cover the total overlap area of the fields of view of the two cameras. The extent of this region and the minimum detectable depth interval which influences the accuracy of remote handling and inspection tasks are found to be affected by the geometrical and optical parameters of the system.
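As a hedged aside (standard parallel-axis stereo geometry, not the paper's analysis): disparity d = f * b / Z, so the smallest resolvable depth interval at range Z is approximately dZ = Z^2 * dd / (f * b), which shows how the baseline b and focal length f enter the minimum detectable depth interval the abstract discusses.

```python
def min_depth_interval(Z, baseline, focal_len, disparity_res):
    """All lengths in consistent units; disparity_res is the smallest
    detectable disparity change (e.g., one pixel on the sensor)."""
    return Z ** 2 * disparity_res / (focal_len * baseline)
```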
In this paper, a basic study to establish a new man-machine interface technology using pseudo-workspace concepts is proposed. The pseudo-workspace is built by combining a stereoscopic display and a hand gesture input device. Our design goal for the pseudo-workspace is to utilize this type of interface for a future teleconferencing system. However, such a system can also be used in many other applications, such as telerobotics, real-time simulation, and CAD/CAM systems. Discussed in this paper are the problems to be solved in order to realize this environment, including design strategies, implementation issues and evaluation methods, such as a high fidelity display method for stereoscopic displays and a man-machine interface technique for 3-D image manipulation using the pseudo-workspace.
REACT is a language under development at Newcastle for the programming of autonomous robot systems, which uses AI constructs and sensor information to respond to failures in assumptions about the real world by replanning a task. This paper describes the important features of a REACT-programmed robotic system, and the results of some initial studies on defining an executive language using a concept called visibility sets. The language is then applied to specific examples, e.g., a white line follower and a railway network controller. The applicability of visibility sets to autonomous robots is evaluated.
Mobile robots can be distinguished from automatic guided vehicles by their lack of reliance on structured environments. Instead, mobile robots locate themselves using existing features in man-made environments. In this paper, a wall-following mobile robot relies on the existence of straight walls parallel to its desired path. Ultrasonic range sensors are used to measure range and bearing to the wall. A history of data points is maintained, and a least-squares fit to the wall is computed. If the quality of the data is sufficient, the range and bearing to the wall are used to update the robot's position. The robot then steers to maintain a path parallel to the wall.
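The update step described here reduces to a least-squares line fit over the recent sonar hit points. The sketch below shows how range and bearing to the wall fall out of the fit; the residual-based quality test is an assumed stand-in for the paper's data-quality check.

```python
import math

def fit_wall(points, max_resid=0.05):
    """points: [(x, y)] sonar hits in the robot frame, wall roughly along x.

    Returns (range, bearing) to the wall, or None if the mean squared
    residual (an assumed test) is too large to trust a position update.
    """
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    slope = sxy / sxx                  # least-squares line y = my + slope*(x - mx)
    bearing = math.atan2(slope, 1.0)   # wall direction relative to robot heading
    intercept = my - slope * mx
    rng = abs(intercept) / math.hypot(slope, 1.0)   # perpendicular distance
    resid = sum((y - (my + slope * (x - mx))) ** 2 for x, y in points) / n
    return (rng, bearing) if resid <= max_resid else None
```

The steering law then simply servos the bearing toward zero while holding the range at the desired wall offset.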
This paper describes a design for a man-amplifying exoskeleton, an electrically powered, articulated frame worn by an operator. The design features modular construction and employs anthropomorphic pitch-yaw joints for the arms and legs. These singularity-free designs offer a significant advancement over the simple pivot-type joints used in older designs. Twenty-six degrees of freedom, excluding the hands, give the Man-Amplifier its unique dexterity. A five-hundred-pound load capacity is engineered for a diverse range of tasks. Potential applications in emergency rescue work, restoring functionality to the handicapped, and military applications ranging from material handling to an elite fighting corps are discussed. A bibliography concludes this paper.