The use of chord distributions in pattern recognition is discussed and efficient ways to compute such distributions are noted. New methods to achieve scale and in-plane rotational distortion-invariant multi-class recognition and estimates of the distortion parameters are described. 3-D out-of-plane rotational distortion-invariant methods are reviewed.
A conceptually new algorithm for 3-D object recognition and shape estimation from a single image is presented. Complex 3-D objects are viewed as concatenations of simple surfaces: essentially planar, cylindrical, and spherical. This paper addresses the problem of recognizing these different surfaces and estimating their shape parameters from a single image. The surfaces are assumed to be Lambertian, illuminated by a point source at infinity, and more than one surface may exist in the image. Surface classification and recognition rely on exploiting the contours of constant image intensity associated with each surface. By Lambert's law the image intensity for a plane is simply a constant; for a cylinder the contours are lines parallel to the axis of the cylinder, whereas for a sphere they are concentric circles (ellipses). The image is partitioned into small square blocks, and each data patch is classified as planar or nonplanar (cylindrical) according to whether the ratio of the two eigenvalues of the scatter matrix associated with the contours is close to 1. For nonplanar (cylindrical) blocks the angle θ associated with the direction of the lines is computed. Each nonplanar block is then classified as cylinder or sphere by considering the distribution of the angles θ in a 3x3 neighborhood centered on the block. Based on the classification of the blocks (surface type) as well as their direction (θ), the image is segmented into connected regions. Once a surface region is extracted, shape estimation is performed.
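As a rough illustration of the block-classification step, here is a minimal sketch (not the authors' code) that assumes the iso-intensity contour points inside a block have already been extracted; the function name and the ratio threshold are made up for the example:

```python
import numpy as np

def classify_block(contour_pts, ratio_thresh=3.0):
    """Classify one image block as 'planar' or 'nonplanar' from the
    scatter matrix of its iso-intensity contour points; for nonplanar
    blocks, also return the dominant contour direction theta.

    contour_pts : (N, 2) array of (x, y) points lying on constant-
                  intensity contours inside the block (assumed given).
    """
    pts = np.asarray(contour_pts, dtype=float)
    pts = pts - pts.mean(axis=0)                # center the points
    scatter = pts.T @ pts                       # 2x2 scatter matrix
    evals, evecs = np.linalg.eigh(scatter)      # ascending eigenvalues
    lam_small, lam_big = evals
    # Eigenvalue ratio near 1 -> no dominant direction -> planar patch.
    if lam_big / max(lam_small, 1e-12) < ratio_thresh:
        return "planar", None
    # Dominant eigenvector gives the direction theta of the parallel
    # contour lines; the spread of theta over a 3x3 block neighborhood
    # then separates cylinders (constant theta) from spheres.
    vx, vy = evecs[:, 1]
    return "nonplanar", np.arctan2(vy, vx)
```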
If a dynamic scene is acquired using a translating camera and the camera motion parameters are known, then the analysis of the scene may be facilitated in a transformed space. It is shown in this paper that by using the complex logarithmic mapping with respect to the focus of expansion, the segmentation of the scene into its stationary and nonstationary components and the determination of the depth of stationary components can be achieved easily in the transformed image sequence. An added advantage will be the invariances offered by the complex logarithmic mapping.
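A minimal sketch of the transform itself, assuming the focus of expansion is already known from the camera motion parameters (the function name and array conventions are illustrative only):

```python
import numpy as np

def complex_log_map(points, foe):
    """Map image points into complex-log space about the focus of
    expansion (FOE). Under pure camera translation, a stationary
    point's radial motion away from the FOE becomes a simple shift
    along the log-radius axis, which is what makes segmentation and
    depth estimation easier in the transformed image sequence.

    points : (N, 2) array of (x, y) image coordinates
    foe    : (2,) focus of expansion (known from the camera motion)
    """
    points = np.asarray(points, dtype=float)
    z = (points[:, 0] - foe[0]) + 1j * (points[:, 1] - foe[1])
    w = np.log(z + 1e-12)                       # complex logarithm
    return np.stack([w.real, w.imag], axis=1)   # (log r, angle)
```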
This paper describes an object recognition method that works in nonoptimal conditions. The method does not require a fixed camera position and can be applied to partially occluded objects and noisy image data. It is based on matching local properties of the model contours with the corresponding properties of image contours using a Hough method. A general scheme for three-dimensional recognition is developed. The special case of two-dimensional recognition is carefully studied, and a two-phase, piece-wise algorithm tuned for hardware realization is designed. The performance of the algorithms is shown through experiments.
Many industrial applications of computer vision require fast, accurate classification and orientation of known objects. For those objects which exhibit circular markings or circular surfaces, it is possible to determine object orientation from a single visual image. In this paper, a technique is presented which uses the parameters of an ellipse fit to points in the image to specify the orientation of the corresponding circular object surface. Location of candidate ellipse points in the image is accomplished by exploiting knowledge about object boundaries and image intensity gradients. A second-order ellipse equation is fit to the candidate points using a nonlinear error measure based on the equation of a general conic and an average gradient constraint. The technique presented is applied to the task of estimating the orientation of a discrete transistor against a uniform background, and results are summarized for 138 images.
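The fitting step can be sketched as follows. This uses the simpler algebraic least-squares fit to a general conic rather than the paper's nonlinear error measure with the average gradient constraint, so treat it only as an approximation of the method:

```python
import numpy as np

def fit_conic(pts):
    """Least-squares fit of a general conic
        a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to candidate edge points, by algebraic distance."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # Minimize ||D p|| subject to ||p|| = 1: smallest right singular vector.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

def ellipse_axis_angle(p):
    """Rotation of the fitted ellipse's axes from the conic coefficients;
    for a circle viewed obliquely this constrains the surface orientation
    (up to the usual two-fold ambiguity)."""
    a, b, c = p[0], p[1], p[2]
    return 0.5 * np.arctan2(b, a - c)
```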
The Wigner Distribution (WD) function produces a simultaneous space-spatial frequency description of an image and thus provides an appropriate domain for analyzing spatial non-uniformities in an image. In this paper, the performance of WD-based processors is compared to that of the cross-correlator for a simple pattern recognition problem. A method is also proposed to reduce the computational complexity of the WD-based system. Differences between digital and optical WD-based processors are pointed out.
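For concreteness, here is a minimal (and deliberately brute-force) discrete pseudo-Wigner distribution for a 1-D signal; the paper's processors operate on images, where exactly this kind of computational cost is what motivates the proposed complexity reduction:

```python
import numpy as np

def wigner_distribution(x):
    """Discrete pseudo-Wigner distribution of a 1-D signal: a joint
    space / spatial-frequency map W[n, k] built from the local
    autocorrelation x[n+m] * conj(x[n-m]) followed by an FFT over
    the lag m (frequency axis scaled by the usual factor of 2)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)        # largest symmetric lag at n
        r = np.zeros(N, dtype=complex)
        for m in range(-m_max, m_max + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.real(np.fft.fft(r))    # frequency slice at position n
    return W
```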
We overview progress toward implementing a simulation system, being designed at Honeywell, to demonstrate robust recognition of planar shapes and textures in a scene irrespective of their variable appearance in the image as affected by a priori unknown perspective, position, orientation, size, range, illumination, and partial occlusion. The system is based on a theory that two of the authors have previously proposed for invariant visual form recognition. Besides describing the advanced image representation to be employed, we also introduce a pattern detection/association subsystem that allows one to directly use invariant visual patterns to "address out" related information from a distributed associative memory.
Autonomous, self-guiding mobile robots require special sensors that are not necessary for stationary robots. The purpose of this paper is to describe the design, construction and initial testing of an omnidirectional vision system used to control a mobile robot. A unique sensor and algorithm have been developed to calculate the distance from the center of the vision system to fixed-position targets. The system provides an absolute global positioning capability for the mobile robot. The guidance and position-determination algorithms are described and some experimental results are given. The sensor has been developed for a prototype mobile robot for lawn mowing. However, the method appears to have several other applications, including industrial carts for material transport, military sentry duties and mobile tanks, medical patient-care transport systems, and domestic applications such as vacuum cleaning.
Usually, to find a hole in an image, its shape must be known and corrections made to account for perspective distortion or hole surface orientation. Not so with this algorithm. As a heuristic vision algorithm, this hole finder seeks edge patterns which correspond to holes -- any promising holes. The algorithm uses neighborhood operators applied in both horizontal and vertical strips to locate the hole centers. The power of the heuristic approach lies in the fact that these algorithms can be uninformed about the nature of the part itself. Yet a robot can be directed to pick up the part, using the hole to grasp it.
We present an approach for automatically finding corresponding points between two perspective views (images) of a moving polyhedron when the two images exhibit considerable perspective shape distortions. The approach consists of two phases. In the first phase a heuristic search process is used to extract two line drawings (graphs) from the gray-level images, and in the second phase a relaxation process is used to obtain a reliable matching between the two drawings.
A technique for the recognition of complex three-dimensional objects is presented. The complex 3-D objects are represented in terms of their 3-D moment invariants: algebraic expressions that remain invariant regardless of the 3-D objects' orientations and locations in the field of view. The technique of 3-D moment invariants has been used successfully for simple 3-D object recognition in the past. In this work we extend the method to the representation of more complex objects. Two complex objects are represented digitally; their 3-D moment invariants are calculated, and the invariance of these expressions is verified by changing the orientation and location of the objects in the field of view. The results of this study have significant impact on 3-D robotic vision, 3-D target recognition, scene analysis and artificial intelligence.
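The second-order case can be sketched directly. The three quantities below (trace, sum of principal minors, and determinant of the central second-moment matrix of a voxelized object) are standard rotation- and translation-invariant combinations; the full set of 3-D moment invariants used in the paper is larger:

```python
import numpy as np

def second_order_invariants(vox):
    """Translation- and rotation-invariant functions of the second-order
    central moments of a voxelized 3-D object.

    vox : 3-D array of occupancy values (e.g. 0/1).
    """
    zs, ys, xs = np.nonzero(vox)
    w = vox[zs, ys, xs].astype(float)
    m000 = w.sum()                                   # total mass
    cx, cy, cz = (xs*w).sum()/m000, (ys*w).sum()/m000, (zs*w).sum()/m000
    dx, dy, dz = xs - cx, ys - cy, zs - cz           # central coordinates
    # Second-order central moment matrix, normalized by mass.
    M = np.array([[(w*dx*dx).sum(), (w*dx*dy).sum(), (w*dx*dz).sum()],
                  [(w*dx*dy).sum(), (w*dy*dy).sum(), (w*dy*dz).sum()],
                  [(w*dx*dz).sum(), (w*dy*dz).sum(), (w*dz*dz).sum()]]) / m000
    J1 = np.trace(M)
    J2 = 0.5 * (np.trace(M)**2 - np.trace(M @ M))    # sum of 2x2 principal minors
    J3 = np.linalg.det(M)
    return J1, J2, J3   # unchanged under rotation/translation of the object
```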
A pragmatic 3D visual method is proposed to solve the bin-picking problem. In this approach the localisation and the recognition of an object are done in two steps. The first step consists in finding a simple 3D description of the scene: 3D measurements are provided by a triangulation-based range finder (the sensor consists of a laser plane, deflected by a mirror mounted on a galvanometer, and a camera observing the intersection of the scene and the laser plane). The scene's surface is then represented by flat regions. In the second step a grasping site is sought that the gripper of the robot can fit, reached along a collision-free trajectory.
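The 3-D measurement in the first step reduces to a ray-plane intersection; a minimal sketch, assuming the camera viewing ray and the laser-plane equation for the current mirror angle are known:

```python
import numpy as np

def triangulate(ray_dir, plane_n, plane_d, cam_center=np.zeros(3)):
    """Depth by laser-plane triangulation: the illuminated point lies
    both on the camera's viewing ray and on the known laser plane
    n . X = d, so intersecting the two recovers its 3-D position.
    The galvanometer/mirror angle determines (plane_n, plane_d) for
    each scan position.
    """
    t = (plane_d - plane_n @ cam_center) / (plane_n @ ray_dir)
    return cam_center + t * ray_dir      # 3-D point on the surface
```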
In this paper, an approach is described for recognizing and locating partially hidden objects in an image. In the approach, templates are formed from the edge contours of the objects sought. Segments of each template are matched to segments of the edge contours of the image. A Bayesian approach is used to decide the probability that an object has been located, given the matches that occur.
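The decision step can be sketched with a naive-Bayes-style computation; the conditional independence of segment matches assumed here is a simplification for illustration, not necessarily the paper's exact rule:

```python
def object_posterior(matches, p_match_given_obj, p_match_given_bg, prior):
    """Posterior probability that the sought object is present, given
    which template segments found a match in the image.

    matches          : list of booleans, one per template segment
    p_match_given_*  : per-segment match probabilities under the
                       object-present and background hypotheses
    prior            : prior probability the object is present
    """
    like_obj, like_bg = prior, 1.0 - prior
    for m, p_obj, p_bg in zip(matches, p_match_given_obj, p_match_given_bg):
        like_obj *= p_obj if m else (1.0 - p_obj)
        like_bg  *= p_bg  if m else (1.0 - p_bg)
    return like_obj / (like_obj + like_bg)
```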
In this paper a method for recovering a three-dimensional convex approximation of the shape of objects is presented that exploits multiple occluding contours. A discrete volumetric representation is obtained, and a possible way to convert it into a smooth, continuous representation of the boundary is shown. Full knowledge of the camera parameters is not required.
Situations exist in which groups of similar parts are fed into the assembly process as clusters of randomly oriented components. Very limited success has been achieved, to date, in the development of sensors that can determine the position and orientation of a part within a cluster. Consequently, the parts must first be mechanically separated and presented to a robot for manipulation. This is not always feasible due to the nature of the manufacturing process or due to the nature of the part itself. Locating variably shaped components poses a particularly challenging problem for a vision based sensing unit. In the electronic manufacturing environment, this situation arises when the extreme flexibility of the leads of some axial-leaded discrete components results in their random spatial deformation. This effect combined with the possibility of mutual overlapping complicates the recognition and separation task. An efficient strategy for accomplishing such a task has been developed. A mechanical manipulator, a vision system, and a light table are used to detect the polarity of notched capacitors supplied in disordered random patterns with overlaps. The method is based on the recognition of local features that are extracted as a result of the masking of the binary image with a grid of curvilinear polygons, which fragments the image into a mosaic of dispersed information islands. This paper will describe the algorithms which ultimately lead to the derivation of the position and orientation of each individual component. Image processing takes place in parallel to the robot motion. As a consequence of the algorithm speed, the total time of the task implementation is now only bounded by the speed of the mechanical motion.
In this paper we describe a robot vision system for the fast determination of object type and location. The algorithms are general and powerful enough to be used both in passive and active vision applications. Here, their practical implementation in a passive robot vision system is discussed. A solid state array camera takes images of the objects passing underneath. Local characteristic features are extracted from contours in a binary image and yield the basis for object recognition.
In recent years a critical need has surfaced for the development of methodologies for the evaluation and understanding of mature image processing algorithms. Effective and efficient algorithm evaluation can be accomplished through algorithm and image modeling. Algorithm performance models can be analytical, empirical or hybrid (a combination of the two). These models can determine the best operating points of an algorithm for optimum performance, as well as predict algorithm performance on unavailable image data. This provides a very convenient and cost-effective way to evaluate algorithms on a wide range of scenarios without actually collecting the imagery that represents those scenarios. This paper demonstrates the usefulness of analytical approaches to algorithm modeling.
The increasingly broad scope of intended image understanding (IU) applications is driving IU architectures toward general-purpose designs. This is reflected in the growth of first-generation domain- and function-specific processors into new multi-function/multi-scenario designs. These advances have been enabled by novel algorithm developments, expansion to multi-sensor capability, advances in VLSI/VHSIC circuit technologies, and development of supporting software design methodology. Our paper presents a perspective on ongoing and anticipated developments in military image understanding system architectures. We briefly discuss the types of missions and applications motivating system developments. We overview resulting system requirements and classes of supporting algorithms. We discuss resulting processor requirements and show by case study how we are addressing them in our past and present IU system designs. The trend we establish is development from early application-specific hardwired processors to future-generation modular, reconfigurable, high-level programmable VLSI system architectures.
We propose to use simple geometric primitives like points, lines and planes to represent 3-D shapes for purposes of recognition and positioning. We suggest that the general paradigm of hypothesis prediction and verification is well suited to this task if recognizing and positioning are done simultaneously. We propose a mathematical solution to the problem of estimating best 3-D rigid displacements from such sets of geometric primitives and show that such estimation can be done recursively.
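For the point-primitive case, the batch least-squares displacement estimate has a standard closed form via the SVD; this sketch covers only matched points (the paper also treats lines and planes, and updates the estimate recursively as primitives arrive):

```python
import numpy as np

def best_rigid_displacement(P, Q):
    """Least-squares rotation R and translation t with Q ~ R P + t
    for matched 3-D point primitives (Kabsch/SVD solution).

    P, Q : (N, 3) arrays of corresponding 3-D points.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                        # proper rotation, det = +1
    t = cQ - R @ cP
    return R, t
```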
This paper describes a heuristic route planning system for use in robotic vehicles. The route planner described herein is applicable to a variety of natural terrain ground systems such as autonomous tactical vehicles and mobile robot sentries. The route planner consists of five processing stages: (1) terrain preprocessing, (2) local points-of-interest extraction, (3) postprocessing point reduction, (4) search space criterion graph construction, and (5) heuristic search path generation. Examples of these processing steps are presented and additional system improvements are discussed.
The main function of an evidence accrual system for image understanding is to sequentially update information on scene objects based on new sensor data or on non-sensory information such as intelligence. This paper presents a concept for sequentially updating information on scene objects. Scene objects and background (clutter) are represented by attributed relational graphs in which nodes represent objects of interest and arcs represent inter-object relations. Dynamic recognition/identification of nodes is accomplished by a belief/disbelief measure. Our experimental results with infrared images show improvements in natural scene object recognition over traditional image processing methods.
This paper describes the development of a set of primitives based on analysis of point data derived from different views of four geometric solids: cube, pyramid, step, wedge. The method uses interpoint distances from corners and involves computation of the minimum spanning tree. Threshold-setting rules, primitive-formation algorithms, and computational results are presented. A primitive is an element or unit of a structure. Combinations of primitives make up objects in images in the same way as letters form words. In particular, primitives are combined according to rules analogous to the spelling rule that "q" is followed by "u". Research on characterizing shape has concentrated on line-drawing and edge information. This work contributes to the shape recognition problem when sensor data is derived from corners or is point data. Since our points were not obtained from line data, we developed a completely different set of heuristic rules. The utility of the point descriptors presented here for problems of object description, identification and classification is yet to be demonstrated, but further work seems warranted.
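A minimal sketch of the minimum-spanning-tree computation over corner-point distances (Prim's algorithm on the complete distance graph; the threshold-setting and primitive-formation rules built on top of the MST are not reproduced here):

```python
import numpy as np

def minimum_spanning_tree(points):
    """Prim's algorithm on the complete graph of interpoint (corner)
    distances; the MST edges are the raw material from which the
    shape primitives are formed.

    points : (N, D) array of corner coordinates.
    """
    points = np.asarray(points, float)
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    in_tree, edges = [0], []
    while len(in_tree) < n:
        best = None
        for i in in_tree:                     # cheapest edge leaving the tree
            for j in range(n):
                if j not in in_tree and (best is None or d[i, j] < best[2]):
                    best = (i, j, d[i, j])
        edges.append(best)
        in_tree.append(best[1])
    return edges                              # list of (i, j, length)
```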
Matching successive frames of a dynamic image sequence using area correlation has been studied for many years by researchers in machine vision. Most of these efforts have gone into improving the speed and the accuracy of correlation matching algorithms. Yet, the displacement fields produced by these algorithms are often incorrect in homogeneous areas of the image and in areas which are visible in one frame, but are occluded in the succeeding frames. Further, these displacement fields are often incorrect even at non-occluded areas that border occlusion boundaries. In this paper, we present a confidence measure which indicates the reliability of each displacement vector computed by a specific hierarchical correlation matching algorithm. We also provide an improved hierarchical matching algorithm which performs particularly well near occlusion boundaries. We demonstrate these with experiments performed on real image sequences taken in our robotics laboratory. A more detailed version of this work appears in (Anan84).
Parallel processing is commonly applied in image analysis but is relatively uncommon in graphics. An approach to graphical operations employing a pyramid machine architecture is presented which permits certain kinds of geometric figures to be rendered very rapidly. In addition to conventional lines and circles in a discrete space, methods for plotting pyramidal lines and spheres are presented. The graphical complexity of images and the stylization of images are also discussed.
A set-theoretical model for representing patterns and pattern classes was previously proposed (Gokeri, 1983). In this paper a method is proposed for matching the semantic net model of a given pattern with the elements of a set of pattern class models. Accordingly, for a pattern class a new mathematical model M is defined such that M = <P, ψ>, where P is a semantic net defining the pattern class and ψ: P x P → [0,1] is a probability function. ψ(Fi, Fj) may be interpreted as the conditional probability of occurrence of feature Fi given Fj. Using these values and an empirically developed decision function Δ, a measure of similarity between the model of a pattern class and the model of a sample pattern is determined. The Δ function returns a scalar value in the interval [-1, 1] such that positive values signify similarity. If Δ = 0, no decision can be made regarding the degree of semblance between two semantic net models, and negative values of Δ indicate no likeness. Finally, a method for modifying the decision function is offered.
In spite of much work done in Artificial Intelligence with respect to intelligent robots, we do not believe that robotics has really profited from the results achieved so far. Current robots seem to be more hampered by their difficulties in basic perception and motion control than by a lack of planning abilities. We discuss in our paper a class of AI methods which could already be important for robotics at its current state of development: in AI we invent new execution models, implement them, and explore the way of programming based on them. Such a programming method, based on a particular view of execution, is called a "programming style". AI has seen the invention of diverse programming styles during its development. Some of them have been described as knowledge representation schemes. In this paper, we try to characterize some of the programming styles and to show that they are useful for robotics.
A microprocessor-based multisensor robotic locating system (RLS) was designed and is currently under development. The system uses primarily off-the-shelf components, integrated into a single package applicable to a wide variety of robotic tasks. Range and orientation to a specified target are measured using a combination of video camera, illuminator, ultrasonics, and optical rangefinder. The processing algorithms are designed to operate in real time by exploiting known patterns that are on the target in the form of retroreflective material.
In this paper, we present a novel approach to solving the trajectory planning problem (TPP) in time-varying environments. The essence of our approach lies in a decomposition of the TPP into two sub-problems: (i) planning the path to avoid collision with the static obstacles, and then (ii) planning the velocity along the path to avoid collision with the moving obstacles. We call the first sub-problem the path planning problem (PPP) and the second the velocity planning problem (VPP). Thus, our decomposition is summarized by the equation TPP = PPP + VPP. We use a variation of an existing approach to solve the PPP. Furthermore, we transform the VPP into a 2-dimensional PPP in path-time space. The resulting 2-dimensional PPP is then solved to determine the collision-free velocity profile.
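The VPP-as-2-D-PPP idea can be sketched as a breadth-first search over a path-time grid; the grid discretization and the single-tick speed bound are simplifying assumptions for illustration:

```python
from collections import deque

def plan_velocity(blocked, max_step=1):
    """Solve the VPP as a 2-D path search in path-time space: cell
    (s, t) is blocked when a moving obstacle occupies path station s
    at time t. Each move advances time by one tick and the path
    coordinate by 0..max_step stations (0 = waiting), so the returned
    sequence of stations over time is a collision-free velocity profile.

    blocked : 2-D array/list, blocked[s][t] is True on collision.
    """
    S, T = len(blocked), len(blocked[0])
    parent = {(0, 0): None}
    q = deque([(0, 0)])
    while q:
        s, t = q.popleft()
        if s == S - 1:                        # reached the end of the path
            profile, node = [], (s, t)
            while node is not None:
                profile.append(node)
                node = parent[node]
            return profile[::-1]              # [(station, time), ...]
        for ds in range(max_step + 1):        # wait or advance along the path
            nxt = (min(s + ds, S - 1), t + 1)
            if nxt[1] < T and not blocked[nxt[0]][nxt[1]] and nxt not in parent:
                parent[nxt] = (s, t)
                q.append(nxt)
    return None                               # no feasible velocity profile
```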
This paper deals with task planning where sensing operations play a major role in performing the tasks. These observations must be sufficiently accurate to support execution-time motion by means of a programmable manipulator. A special sensor arrangement is used to estimate the conveyor's position and orientation in real time. Uncertainties in these estimates are combined with uncertainties in estimates of the position, size and shape of the object to predict a nominal position at which to grasp the object. The plan is checked for feasibility. Additional observations may be required by the planner to reduce the uncertainty if the plan is found to be infeasible. Part of the planning function is to schedule these observations and to specify the position and orientation of the manipulator hand to ensure the grasp.
This paper presents a computationally efficient technique for determining the Jacobian of an arbitrary robot arm as a function of its state. Fast computation of the Jacobian permits rapid determination of the new position and orientation of the end effector in base coordinates from prescribed incremental changes in the joint variables. The inverse problem is also considered in this paper. A fast, efficient method for computing the differential changes in the joint variables for a given differential change in the position and orientation of the end effector is outlined. The proposed approach leads to a unique solution if the inverse solution exists. The suggested method can also quickly identify singularities in the Jacobian. One of the main advantages of the methods proposed here is their generality. The methods hold for any robot arm with an arbitrary number of links whose states can be described by homogeneous transformations. The computational simplicity, along with the generality of the approach, makes the proposed methods well suited for real-time applications. The proposed methods can be readily implemented on a computer.
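The forward and inverse uses of the Jacobian can be illustrated on a planar arm; the finite-difference Jacobian below merely stands in for the paper's efficient computation over homogeneous-transform chains, so it shows the interface rather than the proposed method:

```python
import numpy as np

def fk(q, link_lengths):
    """End-effector position of a planar serial arm (a stand-in for the
    general homogeneous-transform chain treated in the paper)."""
    angles = np.cumsum(q)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def jacobian(q, link_lengths, eps=1e-6):
    """Jacobian J = d(fk)/dq, here by central finite differences, as a
    function of the arm state q (array of joint angles)."""
    q = np.asarray(q, float)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (fk(q + dq, link_lengths) - fk(q - dq, link_lengths)) / (2 * eps)
    return J

# Forward use: dx = J @ dq gives the end-effector change for joint
# increments dq. Inverse use: dq = np.linalg.pinv(J) @ dx; a
# rank-deficient J signals a singular configuration.
```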
Biological sensory-motor systems have an extraordinary facility for adaptation. The accurate behavior demonstrated by such systems even under severe informational discrepancy has generated theories proposing altered internal models as the basis for such adaptation. Here we propose a similar perturbed parameter scheme for the low-level control of robotic manipulators. Thus, the dynamic and kinematic parameters in any suitable theoretical model can be perturbed from their true values in order to achieve enhanced performance in the vicinity of a given trajectory. Critical issues in this approach involve selection of parameters for identification and the estimation technique itself. A new approach is also highlighted which permits the self-calibration of the link inertias while executing any desired trajectory.
The paper explores the mutual collision problem for two moving anthropomorphic arms, each grasping a complicatedly shaped object, in the case of constant acceleration in three-dimensional space. A discriminant criterion based on topology theory is proposed for intersection detection. The method is comparatively expressive and straightforward.
The Multi-sensor Kernel System (MKS) has been introduced as a convenient mechanism for specifying multi-sensor systems and their implementations. In this paper, we demonstrate how control issues can be handled in the context of MKS. In particular, the Logical Sensor Specification is extended to include a control mechanism which permits control information to (1) flow from more centralized processing to more peripheral processes, and (2) be generated locally in the logical sensor by means of a micro-expert system specific to the interface represented by the given logical sensor. Examples are given including a scheme for controlling the Utah/MIT dextrous hand.
The capability of vision is desired in many applications in which a machine interacts with its surrounding environment. Detecting and recognizing 3-dimensional objects in a 3-D environment is an especially challenging task. In this paper we describe a high-speed 3-D sensor which produces a dense range map of an object at a high data rate. The high data rate is obtained by using a triangulation scheme which utilizes a holographic scanner to position the laser beam on the object, a digital angle detector to determine the angle to the laser spot on the object, and a pipelined processor to calculate the range to the object. The limits to the resolution of this type of 3-D vision sensor are discussed.
The objective of this paper is to describe a novel development in imaging tactile sensing systems. At the heart of this sensor is a magnetostrictive transducer using the amorphous ferromagnetic material VITROVAC 4040. The principle of the sensor, its construction, performance and prospective applications are described. An imaging tactile sensor with hundreds of force sensors fits into a space the size of a fingertip. Each magnetoelastic force sensor is constructed as a transformer-pressductor type. The sensor yields an array of 256 individual data points with a center-to-center distance of 2.5 mm. A flat elastomeric contact surface is mounted over the sensing array to protect the sites from contamination. The magnetoelastic material is well suited to force-feedback and tactile imaging sensors because of its outstanding sensitivity, wide dynamic range, good linearity, low hysteresis and low temperature error. The sensor has been successfully tested on the recognition of two objects, and the results are illustrated.
This paper describes a new type of tactile sensor intended for robotic applications. It is based upon the frustration of total internal reflection at an optical surface caused by an opaque elastic membrane. An optical image is created in which the intensity is monotonically related to the strains or pressures created by an impressed object. This image is subsequently converted to digital form by a CID camera. The performance characteristics of two planar tactile array sensors are presented. The first sensor is a tactile "table" with an active area measuring 7 x 12cm. A 128 x 128 pixel CID camera is used to image a 3.3 x 3.3cm section of the total active area, thereby resulting in an effective tactile element density of 1500/sq-cm. The second sensor is a small, compact unit designed for use on robot gripper fingers. A coherent cable of optical fibers conveys the strain image to a remotely located CID camera, resulting in a tactile element density of 54/sq-cm over an active area measuring 2.2 x 2.5cm. Such optical tactile array sensors are seen to offer significant promise in the area of robotics where they can provide the advantages of high spatial resolution and non-planar sensor geometries (e.g. cylindrical and hemispherical).
A recurring problem common to most IR systems operating in a closed area is the presence of interfering background events which degrade the detector's detection capability. This paper deals with the effects various light sources may have on the reliable operation of infrared-based instruments and systems in indoor applications.
Tactile sensors which are mounted on a robot gripper are typically much smaller than the objects they touch. If the robot system is to use a tactile sensor to perform such tasks as object recognition, parts inspection, or manipulation, then integration of features extracted from multiple sensing incidents will be necessary. This paper describes ways of acquiring tactile features that may be used in such tasks. Extraction of such features requires knowledge of the inherent advantages and limitations of tactile array sensors, and how the information they provide can be combined with information from position and force sensors. A number of tactile features (such as edge radii and object deformation) are best acquired by active sensing, in which the sensor is moved with respect to the object in a known fashion. Some strategies for extraction of tactile features, in both passive and active sensing paradigms, are presented and discussed.
In the course of the National Bureau of Standards' program in measurements and standards for automated manufacturing and robotics, a tactile sensing array with a high degree of conformability has been developed. The array consists of a pneumatically controlled matrix of displacement pins which provides a deformable grasping surface, and a corresponding array of optoelectronic proximity sensors which determine workpiece orientation and geometry. Regulation of air flow into the finger to control grasping stiffness permits the sensing and handling of very delicate or complex objects. Additional features of the design include programmable array rigidity, zero mechanical hysteresis, and gripper-mounted packaging.
Recognition of three dimensional (3D) objects in low contrast scenes and in scenes where objects are partially occluded is a challenging problem for advanced 3D sensor based robotic systems. Laser ranging systems measure the surface depth directly and therefore avoid the computation required for construction of a depth map from multiple camera views. Since the physical level knowledge of the surfaces is directly available, the capability for real time object recognition in complex scenes is introduced. This paper discusses some elements of a real time 3D pattern recognition system for sensor based robot applications. The investigation includes a conceptual discussion of the sensor and techniques for feature extraction.
Because of the required speed in robot vision applications, it seems obvious to speed up some time-consuming phases of the image processing with special hardware. The preprocessing stage is likely the most suitable for hardware implementation, because here the amount of data to be processed is largest and the computations are not too complex. This paper discusses a few hardware building blocks for image preprocessing that have proved very useful, such as the computation of low-order moments of objects in binary images, run-length encoding, projections and profiles, and gradient operators for edge detection.
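As an example of the kind of computation being pushed into hardware, the low-order moments of a binary image reduce to sums over object pixels; a software sketch of what such a building block computes:

```python
import numpy as np

def low_order_moments(binary):
    """Low-order moments of a binary image. Area, centroid, and the
    principal-axis orientation of the object all fall out of the
    zeroth-, first-, and second-order sums.

    binary : 2-D array, nonzero where the object is.
    """
    ys, xs = np.nonzero(binary)
    m00 = len(xs)                             # area (pixel count)
    cx, cy = xs.mean(), ys.mean()             # centroid from first moments
    mu20 = ((xs - cx) ** 2).sum()             # central second moments
    mu02 = ((ys - cy) ** 2).sum()
    mu11 = ((xs - cx) * (ys - cy)).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # object orientation
    return m00, (cx, cy), theta
```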
An algorithm which converts a surface representation of three-dimensional objects to a cylindrical representation is presented. Given a surface representation which describes the topological relation among the faces, edges and vertices of the objects, the algorithm generates an augmented quadtree which represents a set of square cylinders approximating the volume of the objects. The augmented quadtree is similar to the conventional quadtrees used to describe two-dimensional image regions, except that the z-dimension information of the cylinders is added to each node of the quadtree. The algorithm is described for polyhedral objects but can also be applied to curved objects. Its complexity is proportional to the number of unit cells in the total projected images.
A new concept in passive ranging to moving objects is described which is based on the comparison of multiple image flows. It is well known that if a static scene is viewed by an observer undergoing a known relative translation through space, then the distance to objects in the scene can be easily obtained from the measured image velocities associated with features on the objects (i.e. motion stereo). But in general, individual objects are translating and rotating at unknown rates with respect to a moving observer whose own motion may not be accurately monitored. The net effect is a complicated image flow field in which absolute range information is lost. However, if a second image flow field is produced by a camera whose motion through space differs from that of the first camera by a known amount, the range information can be recovered by subtracting the first image flow from the second. This "difference flow" must then be corrected for the known relative rotation between the two cameras, resulting in a divergent relative flow from a known focus of expansion. This passive ranging process may be termed Dynamic Stereo, the known difference in camera motions playing the role of the stereo baseline. We present the basic theory of this ranging process, along with some examples for simulated scenes. Potential applications are in autonomous vehicle navigation (with one fixed and one movable camera mounted on the vehicle), coordinated motions between two vehicles (each carrying one fixed camera) for passive ranging to moving targets, and in industrial robotics (with two cameras mounted on different parts of a robot arm) for intercepting moving workpieces.
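For the special case where the known difference in camera motions is a translation along the optical axis, the final range recovery from the corrected difference flow is one line; this sketch assumes that case and that the rotation correction has already been applied:

```python
import numpy as np

def depth_from_difference_flow(p, dflow, foe, dTz):
    """Range from the corrected difference flow: once the two flows are
    subtracted and the known relative rotation removed, what remains is
    a divergent field from a known FOE, exactly as in motion stereo.
    For a known axial translation difference dTz, a point at image
    position p with radial difference-flow vector dflow lies at depth
        Z = dTz * |p - foe| / |dflow|.
    """
    r = np.linalg.norm(np.asarray(p, float) - np.asarray(foe, float))
    return dTz * r / np.linalg.norm(dflow)
```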
Absolute localization of a mobile robot is a central problem in improving the autonomy of vehicles in known environments. We have therefore developed five methods using a scanning laser range finder, and we compared the precision, the required computation time, and the robustness of their algorithms for non-polyhedral cylindrical worlds.
A computer-vision-based technique for surface curvature gaging is reported in this paper. This non-contact technique utilizes the projection of a grating upon the surface to be gaged. A general-purpose digital image processor is used to implement the proposed technique. Both simulation and experimental results are presented.
In image-forming optical systems the image of a three-dimensional object consists of a superposition of focused and defocused object layers. For a quantitative evaluation of the object it is necessary to decompose the superposition image into different images corresponding to single object layers. For this purpose the object radiation is measured with different optical transfer functions of the imaging system, for example by simply changing the focus plane. Each image contains focused and defocused parts of the object and can be described as a linear equation of the object layers, assuming linear, space-invariant imaging properties. From these images the real object distribution can be calculated by the evaluation of the resulting linear system of equations in the Fourier domain. Due to noise in the detected images it is only possible to get an estimate of the true object distribution. In our case this estimate is based on an integral minimal mean square error in the reconstructed object. The algorithm is presented and demonstrated by simulation experiments and reconstructions of real human cell images in optical microscopy.
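A sketch of the Fourier-domain solve, using a regularized least-squares (Wiener-style) estimate at each spatial frequency in place of the paper's exact minimal-MSE derivation; the OTFs are assumed known:

```python
import numpy as np

def reconstruct_layers(images, otfs, noise_reg=1e-3):
    """Per-frequency separation of object layers from images taken with
    different known optical transfer functions: at each spatial
    frequency, G = H F + N is a small linear system, solved here as
    F = (H* H + eps I)^-1 H* G (a Wiener-style regularized estimate).

    images : list of K recorded 2-D images
    otfs   : K x L nested list; otfs[i][j] is the 2-D OTF coupling
             object layer j into image i
    """
    K, L = len(images), len(otfs[0])
    G = np.stack([np.fft.fft2(im) for im in images])        # (K, H, W)
    Hm = np.stack([np.stack(row) for row in otfs])          # (K, L, H, W)
    height, width = G.shape[1:]
    F = np.zeros((L, height, width), dtype=complex)
    for u in range(height):                                 # solve per frequency
        for v in range(width):
            Huv = Hm[:, :, u, v]
            A = Huv.conj().T @ Huv + noise_reg * np.eye(L)
            F[:, u, v] = np.linalg.solve(A, Huv.conj().T @ G[:, u, v])
    return [np.real(np.fft.ifft2(f)) for f in F]            # one image per layer
```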
Several industrial applications have been implemented based on a 3-D vision sensor developed by Robotic Vision Systems, Inc. (RVSI). The sensor, known as the Robo Sensor®, uses structured light and optical triangulation to obtain 3-D data in real time on surfaces within its field of view. This paper will describe the Robo Sensor vision system and some of the manufacturing tasks in which the sensors have been installed. The applications range from adaptively positioning welding robots for the General Motors car production to inspection of large propeller surfaces for the U.S. Navy. Extension of the technology to provide 3-D volumetric sensing is also presented.
A general and flexible automatic system for visual inspection of industrial assemblies was developed and applied to a specific assembly, the HGE Hydraulic Steering System of TRW. The system measures constant and variable geometric characteristics of the assembly scene, such as length and height of parts, area of holes, etc., and compares their relations to test values obtained either from direct measurements of the parts or from the design database.
This paper is concerned with the actual implementation of a machining laser robot. The interactions between workpieces and beams over the various possible tasks, and the various optimisation criteria, are so complicated that no general model is available. A "learning-by-doing" iterative process, based on the coding by human experts of the decision system in the form of "behaviour rules", is proposed as a solution when the device is to be given some autonomous behaviour. The example of a restricted experimental variant, an early stage in the robot's life, is described in more detail.
Cracks in railroad wheels cause changes in the pattern of electric current flow. These changes cause localized heating. This paper proposes an automatic inspection system based on infrared detection of increased temperatures in these locations.
Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology to the manufacture of miniaturized electronic components. The concepts of FMS (Flexible Manufacturing Systems), work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.
A system of microprocessors on a high-speed bus offers the potential for faster computer vision than a conventional uniprocessor. A previous study [1] illustrated this potential by showing that a three-microprocessor system, based on the Motorola 68000 processor, could outperform a Vax 11/780 on a computer vision task. The essential idea is to distribute the image to different microprocessors so they can compute in parallel. This work further explores distributed vision by examining three strategies for distributing the image in the context of a location and identification task.
An algorithm is presented for using a robot system with a single camera to position in three-dimensional space a slender object for insertion into a hole; for example, an electrical pin-type termination into a connector hole. The algorithm relies on a specially-designed end effector to achieve the required horizontal translations and rotational motion, and it does not require camera calibration. A force sensor in each fingertip is integrated with the vision system to allow the robot to teach itself new reference points when different connectors and pins are used.
A method for the automatic visual inspection of printed wiring boards is described. Higher-level descriptions are derived from the board during one raster scan. The descriptions are compared to the design data base in which tolerance data for each conductor is stored.