KEYWORDS: Adhesives, Control systems, Dynamical systems, Ultraviolet radiation, Micro optics, Process control, Optical mounts, Mirrors, Fiber couplers, Process modeling, Algorithm development, Space robots, Detection and tracking algorithms, Space operations
Today, the precision of micro-optics assembly is mostly limited by the accuracy of the bonding process, and in the case of adhesive bonding by the prediction and compensation of adhesive shrinkage during curing. In this contribution, we present a novel approach to adhesive bonding based on hybrid control system theory. In hybrid control, dynamic systems are described as "plants" that produce discrete and/or continuous outputs from given discrete and/or continuous inputs, thus yielding a hybrid state-space description of the system. The task of hybrid controllers is to observe the plant and to generate a discrete and/or continuous input sequence that guides or holds the plant in a desired target state region while avoiding invalid or unwanted intermediate states. Our approach is based on a series of experiments carried out to analyze, define and decouple the dependencies of adhesive shrinkage on multiple parameters, such as application geometries, fixture forces and UV intensities. As some of the dependencies describe continuous effects (e.g. shrinkage as a function of UV intensity) and others describe discrete state transitions (e.g. fixture removal during curing), the resulting model of the overall bonding process is, in the general case, a hybrid dynamic system. For this plant model, we then propose a concept of sampling-based parameter search as a basis for designing suitable hybrid controllers, which have the potential to optimize process control for a selection of assembly steps, thus improving the repeatability of related production steps such as the mounting of beam-shaping optics or of turning mirrors for fiber coupling.
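As a minimal sketch of the proposed scheme, a hybrid plant with one discrete transition (fixture removal) and a sampling-based parameter search over its inputs can be illustrated as follows. The shrinkage dynamics, constants and parameter ranges here are purely illustrative assumptions, not the model identified in the experiments:

```python
import random

def simulate_shrinkage(uv_intensity, fixture_time, total_time=10.0, dt=0.1):
    """Toy hybrid plant: continuous shrinkage driven by UV intensity,
    with a discrete state transition when the fixture is removed.
    Model form and constants are illustrative assumptions."""
    shrinkage, fixed, t = 0.0, True, 0.0
    while t < total_time:
        if fixed and t >= fixture_time:
            fixed = False                                # discrete transition: fixture removal
        rate = 0.01 * uv_intensity * (0.5 if fixed else 1.0)
        shrinkage += rate * (1.0 - shrinkage) * dt       # saturating continuous dynamics
        t += dt
    return shrinkage

def sample_search(target, samples=2000, seed=0):
    """Randomly sample control parameters; keep the pair whose final
    shrinkage lands closest to the target state region."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(samples):
        uv = rng.uniform(0.1, 2.0)       # UV intensity (arbitrary units)
        ft = rng.uniform(0.0, 10.0)      # fixture removal time [s]
        err = abs(simulate_shrinkage(uv, ft) - target)
        if err < best_err:
            best, best_err = (uv, ft), err
    return best, best_err
```

In a full controller design, such sampled parameter sets would seed the search for input sequences that keep the hybrid plant inside the target region at all times, not only at the end of curing.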
KEYWORDS: Adhesives, Process control, Model-based design, Laser systems engineering, Polymerization, Polymers, Chemical elements, Systems modeling, Diffusion, Ions
The assembly process of optical components consists of two phases: the alignment phase and the bonding phase. Precision, or more accurately process repeatability, is limited by the latter. The alignment precision is limited by the measurement equipment and the manipulation technology applied. Today's micromanipulators, in combination with beam-imaging setups, allow for alignment well below 100 nm. Once precisely aligned, however, the optics need to be fixed in their position. The state of the art in optics bonding for laser systems is adhesive bonding with UV-curing adhesives. Adhesive bonding is a multi-factorial process and thus subject to statistical process deviations. In particular, UV-curing adhesives exhibit shrinkage during their curing process, making offsets for shrinkage compensation mandatory. Enhancing the process control of the adhesive bonding process is the major goal of the activities described in this paper. To improve the precision of shrinkage compensation, a dynamic shrinkage prediction is envisioned by Fraunhofer IPT. Intense research activities are being conducted to gain a deeper understanding of the parameters influencing adhesive shrinkage behavior. These parameters are of different natures: the raw adhesive material itself and its condition, the bonding geometry, environmental parameters such as ambient temperature and, of course, process parameters such as curing properties. Understanding the major parameters and linking them in a model-based shrinkage-prediction environment is the basis for improved process control. The results are being deployed by Fraunhofer in prototyping as well as in volume-production solutions for laser systems.
In science and industry, the alignment of beam-shaping optics is usually a manual procedure. Many industrial applications utilizing beam-shaping optical systems require more scalable production solutions, and therefore effort has been invested in research on the automation of optics assembly. In previous works, the authors and other researchers have proven the feasibility of automated alignment of beam-shaping optics such as collimation lenses or homogenization optics. Nevertheless, the planning efforts, as well as the additional knowledge from the fields of automation and control required for such alignment processes, are immense. This paper presents a novel approach to planning active alignment processes of beam-shaping optics with the focus on minimizing the planning effort for active alignment. The approach utilizes optical simulation and the genetic programming paradigm from computer science to automatically extract, from a simulated data basis, features with a high correlation coefficient regarding the individual degrees of freedom of alignment. The approach is capable of finding active alignment strategies that can be executed by an automated assembly system. The paper presents a tool making the algorithm available to end-users and discusses the results of planning the active alignment of the well-known assembly of a fast-axis collimator. The paper concludes with an outlook on the transferability to other use cases, such as application-specific intensity distributions, which will benefit from reduced planning efforts.
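The core idea of scoring candidate features by their correlation with an alignment degree of freedom can be sketched as below. The simulated beam profile, the fixed feature pool and all parameters are illustrative assumptions; a full genetic programming run would evolve and recombine such feature expressions rather than evaluate a fixed pool:

```python
import math

# Toy "simulated data basis": 1D intensity profiles of a beam at various
# lateral misalignments dx (the degree of freedom to align).
def profile(dx, n=32):
    return [math.exp(-((i / n - 0.5 - dx) ** 2) / 0.02) for i in range(n)]

# Candidate feature primitives a GP run would combine and mutate.
FEATURES = {
    "peak":     lambda p: max(p),
    "centroid": lambda p: sum(i * v for i, v in enumerate(p)) / sum(p),
    "width":    lambda p: sum(1 for v in p if v > 0.5 * max(p)),
}

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def best_feature(dxs):
    """Rank candidate features by |correlation| with the alignment DOF."""
    profiles = [profile(dx) for dx in dxs]
    scores = {name: abs(pearson(dxs, [f(p) for p in profiles]))
              for name, f in FEATURES.items()}
    return max(scores, key=scores.get), scores
```

For a lateral misalignment, the beam centroid tracks the offset almost linearly, so it scores near 1, while peak height and width are nearly invariant; this is the selection signal the planning tool exploits.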
In this contribution, we present a novel approach to enable virtual commissioning for process developers in micro-optical assembly. Our approach aims at supporting micro-optics experts in effectively developing assisted or fully automated assembly solutions without detailed prior experience in programming, while at the same time enabling them to easily implement their own libraries of expert schemes and algorithms for handling optical components. Virtual commissioning is enabled by a 3D simulation and visualization system in which the functionalities and properties of automated systems are modeled, simulated and controlled based on multi-agent systems. For process development, our approach supports event-, state- and time-based visual programming techniques for the agents and allows for their kinematic motion simulation in combination with looped-in simulation results for the optical components. First results have been achieved for simply switching the agents to command the real hardware setup after successful process implementation and validation in the virtual environment. We evaluated and adapted our system to meet the requirements set by industrial partners, laser manufacturers as well as hardware suppliers of assembly platforms. The concept is applied to the automated assembly of optical components for optically pumped semiconductor lasers and the positioning of optical components for beam shaping.
In remote sensing data, trees have a low interspecies variability and show a high variability within the tree species. Therefore, specific features that distinguish between unique properties of two tree species are required for a single-tree-based genus classification. To improve classification results, the suitability of seven surface roughness features, calculated on single tree crown regions, is studied. The algorithms developed to provide roughness parameters can be validated and prototyped in a Virtual Forest testbed. The features are extracted from a normalized digital surface model with a resolution of 0.4 m per pixel. Within the test area of 340 km², more than 4000 single trees of eleven different species and additionally 200 buildings are available as reference data. Technical standards define several parameters to describe surface properties. These roughness features are evaluated in the context of single tree crowns. All of these features are based on the deviation of the height values of the tree crown from its mean height. As an additional feature, the relationship between the crown's surface area and its occupied ground area is used. The evaluation results of these features regarding the discrimination of tree species on different levels (eleven single tree species, seven tree classes, deciduous and coniferous) and also the discrimination of trees from buildings will be presented.
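A minimal sketch of such roughness features, assuming standard Ra/Rq/Rz-style definitions over the crown height values and a simple facet-based surface estimate (the paper's exact feature set may differ):

```python
import math

def roughness_features(heights, mean_h=None):
    """Roughness descriptors over the height values of one segmented
    tree crown (normalized digital surface model pixels), all based on
    the deviation from the mean crown height."""
    if mean_h is None:
        mean_h = sum(heights) / len(heights)
    dev = [h - mean_h for h in heights]
    ra = sum(abs(d) for d in dev) / len(dev)            # mean absolute deviation
    rq = math.sqrt(sum(d * d for d in dev) / len(dev))  # RMS deviation
    rz = max(heights) - min(heights)                    # peak-to-valley height
    return {"Ra": ra, "Rq": rq, "Rz": rz}

def surface_to_ground_ratio(grid, cell=0.4):
    """Approximate crown surface area over occupied ground area for a
    2D height grid with `cell` meters per pixel (0.4 m as in the data).
    Each cell is treated as a tilted plane facet of area
    A * sqrt(1 + (dz/dx)^2 + (dz/dy)^2)."""
    rows, cols = len(grid), len(grid[0])
    surface = 0.0
    for r in range(rows - 1):
        for c in range(cols - 1):
            dzx = grid[r][c + 1] - grid[r][c]
            dzy = grid[r + 1][c] - grid[r][c]
            surface += cell * cell * math.sqrt(
                1 + (dzx / cell) ** 2 + (dzy / cell) ** 2)
    ground = (rows - 1) * (cols - 1) * cell * cell
    return surface / ground
```

A perfectly flat crown yields a surface-to-ground ratio of 1; rougher (e.g. coniferous) crowns yield larger ratios, which is what makes the feature discriminative.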
Although fire is very common in our daily environment, as a source of energy at home or as a tool in industry, most people cannot estimate the danger of a conflagration. It is therefore important to train people in combating fire. Besides training with propane simulators or with real fires and real extinguishers, fire training can be performed in virtual reality, which offers a pollution-free and fast way of training. In this paper we describe how to enhance a virtual-reality environment with a real-time fire simulation and visualisation in order to establish a realistic emergency-training system. The presented approach supports extinguishing of the virtual fire, including recordable performance data as needed in teletraining environments. We will show how to create realistic impressions of fire using advanced particle simulation, and how to use the advantages of particles to trigger states in a modified cellular automaton used for the simulation of fire behaviour. Using particle systems that interact with cellular automata, it is possible to simulate a developing, spreading fire and its reaction to different extinguishing agents like water, CO2 or oxygen. The methods proposed in this paper have been implemented and successfully tested on Cosimir, a commercial robot and VR simulation system.
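The interplay of a cellular automaton for fire spread with particle-triggered state changes can be sketched as follows; the states, ignition probability and water effect are illustrative assumptions, not the paper's actual rules:

```python
import random

# Toy cellular automaton for fire spread with particle-triggered
# state changes (illustrative states and probabilities).
FUEL, BURNING, BURNT = 0, 1, 2

def step(grid, ignite_p=0.3, rng=None):
    """Advance the automaton one time step: burning cells burn out and
    may ignite each of their four fuel neighbours with probability
    ignite_p."""
    rng = rng or random.Random(0)
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == BURNING:
                nxt[r][c] = BURNT
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == FUEL
                            and rng.random() < ignite_p):
                        nxt[nr][nc] = BURNING
    return nxt

def extinguish(grid, r, c):
    """A water 'particle' hitting a burning cell quenches it; in the
    full system, particles from the simulation trigger such state
    transitions in the automaton."""
    if grid[r][c] == BURNING:
        grid[r][c] = FUEL
```

Different agents would map to different transition rules, e.g. CO2 particles lowering the ignition probability in a neighbourhood rather than resetting single cells.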
In 2004, the European COLUMBUS module is to be attached to the International Space Station. On the way to the successful planning, deployment and operation of the module, computer-generated and animated models are being used to optimize performance. Under contract of the German Space Agency DLR, it has become IRF's task to provide a Projective Virtual Reality system: a virtual world built after the planned layout of the COLUMBUS module that lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise the possibility for distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Through the capability to share the virtual world, cooperative operations can be practiced easily, and trainers and trainees can also work together more effectively in the shared virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online in order to realistically interact with the science reference model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs of the science reference model hardware; the measured data is then in turn fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data through the use of metaphors and augmentations of the virtual world may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Currently, finishing touches are being put to the system. In November 2001 the virtual world shall be operational, so that besides the design and the key ideas, first experimental results can be presented.
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors of different levels of abstraction in real-time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
At the Institute of Robotics Research (IRF) in Germany, an excavator and construction machine simulator based on the latest virtual reality technology has been developed. The main issues of the realization so far have been real-time capability, the close-to-reality presentation of the environment and the physically correct simulation of the process, i.e. the simulation of the flow of the material handled with, e.g., the simulated excavator. In the next step, it is envisaged to enhance the system so that it can not only be used for training, but also to command and supervise large construction machines in real-world applications by means of virtual reality and automatic action planning components. Experience gained from the control of space robots by methods of “projective virtual reality” will be introduced into this application to allow one driver to remotely control several excavators, e.g. in a mining environment. In the paper, we describe how the simulation of the excavator and its interaction with the handled material can be treated mathematically, and we explain the basic ideas of how to “project” actions that were carried out in the virtual world onto physical excavators by employing the methods of projective virtual reality.
The symbiosis between computer graphics and modern planning and control methodologies is the basis for the development of projective virtual reality based telepresence techniques at the IRF. Virtual worlds appear close to reality for the user because of the interaction modeling techniques borrowed from the field of robotics research. These techniques provide an intuitively operable user environment for a greater range of applications. The underlying idea of projective virtual reality is to first let the user work in a virtual world modeled after the physical plant to be controlled and supervised. A projective virtual reality system then automatically deduces the impact of the user's actions on the state of the virtual plant and in turn employs action planning methods to generate the equivalent impact on the physical plant using robots or other means of automation. Thus, the robots are projecting the user's actions from the virtual into the physical world. In the German/Japanese project GETEX (German ETS-VII Experiment), the IRF realized the telerobotic ground station for the free-flying robot ERA on board the Japanese satellite ETS-VII. During the mission in April 1999, the virtual reality based command interface turned out to be an ideally suited platform for the intuitive commanding and supervision of the robot in space.
As part of the cooperation between the University of Southern California (USC) and the Institute of Robotics Research (IRF) of the University of Dortmund, experiments regarding the control of robots over long distances by means of virtual reality based man-machine interfaces have been successfully carried out. In this paper, the newly developed virtual reality system that is used for the control of a multi-robot system for space applications as well as for the control and supervision of industrial robotics and automation applications is presented. The general aim of the development was to provide the framework for Projective Virtual Reality, which allows users to project their actions in the virtual world into the real world, primarily by means of robots but also by other means of automation. The framework is based on a new approach which builds on the task deduction capabilities of a newly developed virtual reality system and a task planning component. The advantage of this new approach is that robots which work at great distances from the control station can be controlled as easily and intuitively as robots that work right next to it. Robot control technology now provides the user in the virtual world with a prolonged arm into the physical environment, thus paving the way for a new quality of user-friendly man-machine interfaces for automation applications. Lately, this work has been enhanced by a new structure that allows the virtual reality application to be distributed over multiple computers. With this new step, it is now possible for multiple users to work together in the same virtual room, although they may physically be thousands of miles apart. They only need an Internet or ISDN connection to share this new experience.
Last but not least, the distribution technology has been further developed not just to allow users to cooperate, but also to run the virtual world on many synchronized PCs, so that a panorama projection or even a cave can be driven by ten synchronized PCs instead of high-end workstations, thus cutting the costs for such a visualization environment drastically and allowing for a new range of applications.
Virtual Reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds which the user can not only look at but also actively interact with using a data glove and a data helmet. The main emphasis of the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the Virtual Reality system and, by means of new and intelligent control software, projected onto automation components such as robots, which then perform the necessary actions in reality to execute the user's task. In this operation mode, the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual Reality methods are thus ideally suited for universal man-machine interfaces for the control and supervision of a large class of automation components, interactive training and visualization systems. The Virtual Reality system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL and GETEX, the realization of a comprehensive development tool for the International Space Station and, last but not least, the realistic simulation of fire extinguishing, forest machines and excavators, which will be presented in the final paper in addition to the key ideas of this Virtual Reality system.
Intelligent autonomous robotic systems require efficient safety components to assure system reliability during the entire operation. Especially if commanded over long distances, the robotic system must be able to plan safe and collision-free movements independently. Therefore, the IRF developed a new collision avoidance methodology satisfying the needs of autonomous safety systems while considering the dynamics of the robots to be protected. To do this, the collision avoidance system cyclically calculates the current collision danger of the robots with respect to all static and dynamic obstacles in the environment. If a robot gets into collision danger, the methodology immediately starts an evasive action to avoid the collision and guides the robot around the obstacle to its target position. This evasive action is calculated in real-time in a mathematically exact way by solving a quadratic convex optimization problem. The secondary conditions of this optimization problem include the potential collision danger of the robot's kinematic chain, including all temporarily attached grippers and objects, as well as the dynamic constraints of the robots. The results of the optimization procedure are joint accelerations to apply in order to prevent the robot from colliding and to guide it to its target position. This methodology was tested very successfully during the Japanese/German space robot project GETEX in April 1999. During the mission, the collision avoidance system successfully protected the free-flying Japanese robot ERA on board the satellite ETS-VII at all times. The experiments showed that the developed system is fully capable of ensuring the safety of such autonomous robotic systems by actively preventing collisions and generating evasive actions in cases of collision danger.
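The evasive action as the solution of a convex quadratic program can be illustrated in a heavily reduced form. The sketch below assumes a single linearized collision constraint; the real system includes the whole kinematic chain and the dynamic limits as further constraints:

```python
def evasive_acceleration(a_des, g, b):
    """Solve the one-constraint special case of the convex QP
        min ||a - a_des||^2   s.t.   g . a >= b,
    where g and b encode a linearized collision-danger constraint
    (closest-obstacle direction and required clearance rate) and
    a_des is the acceleration toward the target position.
    Closed form: project a_des onto the half-space if it violates it."""
    dot = sum(gi * ai for gi, ai in zip(g, a_des))
    if dot >= b:
        return list(a_des)               # desired motion is already safe
    gg = sum(gi * gi for gi in g)
    lam = (b - dot) / gg                 # Lagrange multiplier of the active constraint
    return [ai + lam * gi for ai, gi in zip(a_des, g)]
```

With several active constraints (one per endangered link, gripper or carried object, plus acceleration bounds), the same QP no longer has this closed form and is solved numerically each control cycle, yielding the joint accelerations described above.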
Commanding complex robotic systems over long distances in an intuitive manner requires new techniques of man-machine interaction. A first disadvantage of conventional approaches is that the user has to be a robotics expert, because he has to command the robots directly. He is often part of the real-time control loop while moving the robot and thus has to cope with long delays. Experience with space robot missions showed that it is very difficult to control a robot from camera images alone. At the IRF, a new approach to overcome such problems was developed. By means of Projective Virtual Reality, we introduce a new, intuitive way of man-machine communication based on a combination of action planning and Virtual Reality methods. Using a data helmet and a data glove, the user can immerse into the virtual world and interact with the virtual objects as he would do in reality. The Virtual Reality system derives the user's intention from his actions and then projects the tasks into the physical world by means of robots. The robots physically carry out the actions that are equivalent to the user's actions in the virtual world. The developed Projective Virtual Reality system is of especially great use for space applications. During the joint project GETEX (German ETS-VII Experiment), the IRF realized the telerobotic ground station for the free-flying robot ERA on board the Japanese satellite ETS-VII. During the mission in April 1999, the Virtual Reality based command interface turned out to be an ideally suited platform for the intuitive commanding and supervision of the robot in space. During the mission, it first had to be verified that the system was fully operational, but then our Japanese colleagues allowed full control over the real robot to be taken by the Projective Virtual Reality system. The final paper will describe key issues of this approach and the results and experiences gained during the GETEX mission.
Developing realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as a simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and a realistic simulation of the complex motion of the vehicle even in rough terrain, in order to be able to use the simulator for operator training under close-to-reality conditions. The realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of education of the driver. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e. the real control devices such as the joysticks or the board computer system used to control the crane, the aggregate etc. In addition, the fusion of the board computer system and the simulation system is realized by means of sensors, i.e. digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information from the control devices and the sensors. So, while the driver is practicing, the instructor can immerse into the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.
A major problem when using off-line programming systems in today's robot-based workcells is that for more complex tasks, such as arc welding, general coating, grinding or laser-based applications, the support provided by these programming tools is poor. Particularly with intricate workpieces designed from free-form surfaces (e.g. NURBS), programming the desired robot motion still requires a lot of manual, teach-in-like work by the programmer. This paper presents two approaches that now support the programmer in a comprehensive manner to solve this problem efficiently.
When autonomous systems with multiple agents are considered, conventional control and supervision technologies are often inadequate, because the amount of information available is often presented in a way that effectively overwhelms the user with displayed data. New virtual reality (VR) techniques can help to cope with this problem, because VR offers the chance to convey information in an intuitive manner and can combine supervision capabilities with new, intuitive approaches to the control of autonomous systems. In the approach taken, control and supervision issues were equally stressed and finally led to the new ideas and the general framework of Projective Virtual Reality. The key idea of this new approach to an intuitively operable man-machine interface for decentrally controlled multi-agent systems is to let the user act in the virtual world, detect the changes, and have an action planning component automatically generate task descriptions for the agents involved, in order to project actions that have been carried out by users in the virtual world into the physical world, e.g. with the help of robots. Thus, the Projective Virtual Reality approach splits the job between the task deduction in the VR and the task `projection' onto the physical automation components by the automatic action planning component. Besides describing the realized Projective Virtual Reality system, the paper also describes in detail the metaphors and visualization aids used to present different types of (e.g. sensor) information in an intuitively comprehensible manner.
The use of multi-robot systems reveals new chances and perspectives for industrial, space and underwater applications. At the IRF, a versatile multi-robot control, which fully exploits the inherent flexibility of a multi-robot system, has been developed. In order to guarantee optimized system throughput and increased autonomy, the system builds on a new resource-based action planning approach to coordinate the different robot manipulators and other automation components in the workcell. An important prerequisite for the realized action planning component to be applicable to real-world problems is that it is realized as an integral part of the hierarchical multi-robot control structure IRCS (Intelligent Robot Control System). The IRCS is the multi-robot control that was chosen by the German Space Agency (DLR) for major space automation projects. In this structure, the resource-based action planning component is tightly integrated with components for coordinated task execution and collision avoidance to guarantee safe operation of all agents in the multi-robot system. As the action planning component `understands' task descriptions on a high level of abstraction, it is also the perfect counterpart for a Projective Virtual Reality (VR) system. The paper describes the mechanism of resource-based action planning, the practical experiences gained from the implementation for the IRCS, as well as its services to support VR-based man-machine interfaces.
Smart man-machine interfaces are turning out to be a key technology for service robots, for automation applications in industrial environments, as well as for future scenarios in space applications. In all of these fields, the use of virtual reality (VR) techniques has shown great potential. At the IRF, a virtual reality system was developed and implemented which allows the intuitive control of a multi-robot system and different automation systems under one unified VR framework. As the developed multi-robot system is also employed for space applications, the intuitive commanding of inspection and teleoperation sequences is of great interest. In order to facilitate teleoperation and inspection, we make use of several metaphors and a vision system as an `intelligent sensor'. One major metaphor to be presented in the paper is the `TV-view into reality', where a TV set is displayed in the virtual world with images of the real world mapped onto its screen as textures. The user can move the TV set in the virtual world and, as the image-generating camera is carried by a robot, the camera viewpoint changes accordingly. Thus, the user can explore the physical world `behind' the virtual world, which is ideal for inspection and teleoperation tasks. By means of real-world images and the different measurement services provided by the underlying 3D vision system, the user can thus interactively build up or refine the virtual world according to the physical world he is watching through the TV set.