At the end of 2012, ALMA software development will be completed. While new releases are still being prepared
following an incremental development process, the ALMA software has been in daily use since 2008. Last year it was
successfully used for the first science observations proposed by and released to the ALMA scientific community. This
included the whole project life cycle from proposal preparation to data delivery, taking advantage of the software being
designed as an end-to-end system. This presentation will report on software management aspects that became relevant in
the last couple of years. These include a new feature-driven development cycle, an improved software verification
process, and a more realistic test environment at the observatory. It will also present a forward look at the planned
transition to full operations, given that upgrades, optimizations and maintenance will continue for a long time.
KEYWORDS: Antennas, Software development, Observatories, Optical correlators, Astronomy, Software engineering, Prototyping, Information technology, Solar thermal energy, Control systems
Starting in 2009, the ALMA project entered one of the most exciting phases of its construction: the first antenna
from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and
the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself in the front
line of the project's software deployment and integration effort. Among the group's main responsibilities are the
deployment, configuration and support of the observation systems, in addition to infrastructure administration,
all of which needs to be done in close coordination with the development groups in Europe, North America
and Japan. Software support has been the primary point of interaction with the current users (mainly scientists,
operators and hardware engineers), as the software is normally the most visible part of the system.
During this first year of work with the production hardware, three consecutive software releases have been
deployed and commissioned. Also, the first three antennas have been moved to the Array Operations Site, at
5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the
experience of this 15-person group, part of the construction team at the ALMA site and working together
with the Computing IPT, and reports on the achievements and the problems overcome during this period. It explores the excellent
results of teamwork, and also some of the troubles that such a complex and geographically distributed project
can run into. Finally, it addresses the challenges still to come with the transition to the ALMA operations
plan.
KEYWORDS: Observatories, Software development, Data archive systems, Computing systems, Antennas, Information technology, Control systems, Interfaces, Interferometers, Data processing
The ALMA Software (~ 80% completed) is in daily use at the ALMA Observatory and has been developed as an end-to-end
system including: proposal preparation, dynamic scheduling, instrument control, data handling and formatting, data
archiving and retrieval, automatic and manual data processing, and support for observatory operations. This presentation
will expand on some software management aspects, procedures for releases, integrated system testing and deployment in
Chile. The need for a realistic validation environment, now achieved with a two-antenna interferometer at the
observatory, and the balance between incremental development and stability of the software (a challenge at the moment)
will be explained.
The Atacama Large Millimeter/Submillimeter Array (ALMA) is a large radio interferometric telescope consisting of
66 antennas with variable positions, to be located at the Chajnantor high site (5000 m) in Chile. ALMA
commissioning has now started with the arrival of several antennas in Chile and will continue for the next 4 years.
The ALMA Software has from the beginning been developed as an end-to-end system including: proposal
preparation, dynamic scheduling, instrument control, data handling and formatting, data archiving and retrieval,
automatic and manual data processing systems, and support for observatory operations. This presentation will expand
mostly on the ALMA software issues on which we are concentrating in this phase: management, procedures,
testing and validation. While software development was based on a common software infrastructure (ALMA
Common Software - ACS) from the beginning, end-to-end testing was limited by the hardware available, and was
for years possible only on computer models. Although the control software was available early in
prototype stand-alone form to support testing of the prototype antennas, it was only recently that
interferometry was achieved and the software could be tested end to end on a somewhat stable hardware platform. The
lessons learned so far will be explained, in particular the need for a realistic validation environment, the balance to be
achieved between incremental development and the need for stability and usability, and the way to achieve all the
above with a development team distributed over four continents. Some general lessons can be drawn on
the potential conflicts between software and system (hardware) testing, or in other words on the danger of taking
short-cuts in software testing and validation.
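To illustrate the point about end-to-end testing on computer models before production hardware is available, the sketch below shows one common pattern: the control software talks to an antenna through an abstract interface, and a pure-software simulator can stand in for the real device during tests. All class and function names here (AntennaInterface, SimulatedAntenna, point_antenna) are hypothetical and are not taken from the actual ALMA/ACS code base.

```python
# Minimal sketch of hardware-abstracted testing with a simulated antenna.
# Hypothetical names; not the real ALMA API.
from abc import ABC, abstractmethod


class AntennaInterface(ABC):
    """Abstract view of an antenna as seen by the control software."""

    @abstractmethod
    def slew_to(self, azimuth_deg: float, elevation_deg: float) -> None:
        ...

    @abstractmethod
    def position(self) -> tuple[float, float]:
        ...


class SimulatedAntenna(AntennaInterface):
    """Pure-software model used when no production hardware is available."""

    def __init__(self) -> None:
        self._az, self._el = 0.0, 90.0

    def slew_to(self, azimuth_deg: float, elevation_deg: float) -> None:
        # A real simulator would model slew rates and limits; here we just jump.
        self._az, self._el = azimuth_deg, elevation_deg

    def position(self) -> tuple[float, float]:
        return self._az, self._el


def point_antenna(antenna: AntennaInterface, az: float, el: float) -> bool:
    """Control-software logic under test, unaware of whether the hardware is real."""
    antenna.slew_to(az, el)
    actual_az, actual_el = antenna.position()
    return abs(actual_az - az) < 0.01 and abs(actual_el - el) < 0.01


if __name__ == "__main__":
    # End-to-end style check run entirely against the simulator.
    assert point_antenna(SimulatedAntenna(), az=120.0, el=45.0)
```

The same test can later be pointed at a real antenna implementation of the interface, which is where the conflicts between software testing and system (hardware) testing mentioned above tend to surface.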
KEYWORDS: Computing systems, Antennas, Adaptive optics, Data archive systems, Optical correlators, Switches, Data acquisition, Lithium, Data communications, Software development
The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating from millimeter to sub-millimeter wavelengths. ALMA will be located at an altitude of about 5000 m in the Chilean Atacama desert. The main challenge for the development of the ALMA software, which will support the whole end-to-end operation, is the fact that the computing group is extremely distributed. Groups at different institutes have started the design of all subsystems based on the ALMA Common Software (ACS) framework, which provides the necessary standardization.
The operation of ALMA by a community of astronomers distributed over several continents will need an adequate network infrastructure. The operation centers in Chile are split between the ALMA high-altitude site, a lower-altitude control center, and a support center in Santiago. These centers will be complemented by ALMA Regional Centers (ARCs) in Europe, North America, and Japan.
All this will require computing and communications equipment at more than 5000m in a radio-quiet area. This equipment must be connected to high bandwidth and reliable links providing access to the ARCs. The design of a global computing and communication infrastructure is on-going and aims at providing an integrated system addressing both the operational computing needs and normal IT support. The particular requirements and solutions foreseen for ALMA in terms of computing and communication systems will be explained.
The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating in the millimeter and sub-millimeter range. It will be located at an altitude of about 5000 m in the Chilean Atacama desert.
The primary challenge to the development of the software architecture is the fact that both its development and runtime environments will be distributed. Groups at different institutes will develop the key elements such as Proposal Preparation tools, Instrument operation, On-line calibration and reduction, and Archiving. The Proposal Preparation software will be used primarily at scientists' home institutions (or on their laptops), while Instrument Operations will execute on a set of networked computers at the ALMA Operations Support Facility. The ALMA Science Archive, itself to be replicated at several sites, will serve astronomers worldwide.
Building upon the existing ALMA Common Software (ACS), the system architects will prepare a robust framework that will use XML-encoded entity objects to provide an effective solution to the persistence needs of this system, while remaining largely independent of any underlying DBMS technology. Independence of distributed subsystems will be facilitated by an XML- and CORBA-based pass-by-value mechanism for exchange of objects. Proof of concept (as well as a guide to subsystem developers) will come from a prototype whose details will be presented.
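To make the pass-by-value idea concrete, here is a minimal sketch: an entity object is serialized to an XML document, that XML string is what crosses the subsystem boundary (for example over CORBA), and the receiver rebuilds a local copy, with the same document usable as a DBMS-independent persistence format. The entity name (SchedBlock) and its fields are illustrative assumptions, not the actual ALMA schemas or APIs.

```python
# Sketch of XML-encoded entity objects exchanged by value between subsystems.
# The entity name and fields are hypothetical, not ALMA's real schemas.
import xml.etree.ElementTree as ET
from dataclasses import dataclass


@dataclass
class SchedBlock:
    uid: str
    source_name: str
    frequency_ghz: float

    def to_xml(self) -> str:
        """Serialize the entity; this string is what gets passed by value."""
        root = ET.Element("SchedBlock", attrib={"uid": self.uid})
        ET.SubElement(root, "sourceName").text = self.source_name
        ET.SubElement(root, "frequencyGHz").text = str(self.frequency_ghz)
        return ET.tostring(root, encoding="unicode")

    @classmethod
    def from_xml(cls, xml_text: str) -> "SchedBlock":
        """Rebuild a local copy on the receiving side or out of the archive."""
        root = ET.fromstring(xml_text)
        return cls(
            uid=root.attrib["uid"],
            source_name=root.findtext("sourceName"),
            frequency_ghz=float(root.findtext("frequencyGHz")),
        )


if __name__ == "__main__":
    wire_format = SchedBlock("uid://X1/X2", "NGC253", 230.5).to_xml()
    # The XML string, not an object reference, travels between subsystems,
    # and can be stored in any DBMS as an opaque document.
    copy = SchedBlock.from_xml(wire_format)
    print(copy)
```

Because subsystems only exchange and store self-describing documents, they stay largely decoupled from each other and from the underlying database technology, which is the independence the architecture aims for.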
This paper details our experience with the development and installation of the Very Large Telescope (VLT) control software, covering standardization, iterative development, the release concept, testing, and configuration control, all of which were elements of our approach.
Very Large Telescope (VLT) software commissioning started some time ago, before any VLT subsystems were ready for integration. This was possible thanks to the New Technology Telescope (NTT) upgrade (reported in a separate paper, Ref. 3), which shares most of its software with the VLT. The integration tests with the main VLT structure also represent another fundamental milestone in the software commissioning process (see also separate paper, Ref. 4). The whole control software is based on a highly distributed computer architecture. The final layout of the computers (workstations and microprocessors), networking devices and underlying concepts has been tested both at the NTT and on the so-called VLT computer control model, a significant off-line subset of the computer equipment to be used in the VLT control room and telescope area for one unit telescope. The VLT common software, including a real-time database, is the stable core of the whole VLT control software. It also comprises high-level applications, like the real-time display (RTD), the panel editor and the CCD software to be used for technical CCDs. It is distributed with a policy of regular releases, subject to automatic regression tests, and is also used by VLT consortia and contractors. New modules have been added to integrate the VLT control software into the data flow context, interfacing it in particular to the scheduler and the archive. The VLT software support team will soon start regular operation at the VLT Paranal site, providing continuity between the integration activities of the various subsystems. They will be the front end of the commissioning effort, relying also on background support provided over links from the European headquarters.
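The abstract does not describe the automatic regression tests in detail; as a purely illustrative sketch (using Python's standard unittest, not the actual VLT test tooling), a per-module regression check typically compares the current output of a released module against a reference captured when the previous release was accepted:

```python
# Illustrative regression check (not the actual VLT test framework): compare a
# module's current output against a reference from the last accepted release.
# In practice the reference would live in a versioned data file.
import unittest

# Hypothetical reference captured when the previous release was accepted.
REFERENCE_INPUT_COUNTS = [0, 800, 1600, 2400]
REFERENCE_OFFSETS_DEG = [0.0, 1.0, 2.0, 3.0]


def compute_pointing_offsets(raw_encoder_counts: list[int]) -> list[float]:
    """Stand-in for a released control-software module under test."""
    return [round(c * 0.00125, 5) for c in raw_encoder_counts]


class PointingRegressionTest(unittest.TestCase):
    def test_matches_reference_release(self) -> None:
        current = compute_pointing_offsets(REFERENCE_INPUT_COUNTS)
        self.assertEqual(current, REFERENCE_OFFSETS_DEG)


if __name__ == "__main__":
    unittest.main()
```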
KEYWORDS: Telescopes, Control systems, Local area networks, Software development, Electronics, Observatories, Interfaces, Databases, Computer programming
The ESO VLT control software consists of all the software that will be used to directly control the VLT Observatory and its associated instrumentation. It is now in the implementation phase, performed to a large extent by ESO staff in the VLT software group. Consortia of institutes responsible for some ESO instruments, and contractors who implement some of the telescope subsystems, are also involved. The main foundation of the VLT control software, called the VLT common software, is basically complete in its main functions. It has a size of about 500 K lines of code. This software is used in all the developments for the VLT telescopes and instruments and is distributed by ESO to all the collaborating consortia and contractors. The key components of the telescope control software (TCS) have also been implemented. They make use of the VLT common software and were field-tested in a first version in December 1995 at the ESO New Technology Telescope (NTT) in Chile. The NTT is being upgraded in parallel with the VLT development, using the same software. At the end of this conference a scheduled period of 6 months of site tests with the VLT main structure is going to start in Milan, Italy. This will allow us to perform hardware-specific tests on this software. The Alt/Az axes control and hydraulic bearing subsystems are also part of the tests, which involve a set-up of two workstations and three VME/VxWorks based controllers. In parallel, the first enclosure, including its software, is going to be accepted at the VLT site in Chile. This will mark the beginning of the control system implementation at the Paranal site. This paper gives an overview of the VLT control software, and describes its main components and characteristics.
The New Technology Telescope (NTT) installed at the Observatory of ESO, La Silla, Chile, employs a 3.5 m active primary mirror and foresees that two instruments are mounted at all times at the Nasmyth foci. Remote observations from Germany will also be possible.
The NTT is in many ways a prototype for ESO's Very Large Telescope (VLT), an array of four 8 m telescopes now in the construction phase. This applies also to the control/acquisition system software developed for the NTT, i.e. the software environment in which specific control programs for the telescope, adapters and instruments run.
Characteristic aspects of this system, which allows simultaneous multi-instrument and multi-user operation, are described in this paper. The system is intrinsically distributed both in hardware (several microprocessors on Ethernet) and in software (CPU-independent program communication).
System-wide information and complete decoupling of control programs from the user end are obtained via a central parameters database (Pool), which supports data both on disc and in memory, the latter for time-critical operations. This allows local and remote users to access in real time all information on the telescope and instruments without directly interfering with the control programs. It also yields a truly open system that expands coherently as soon as new modules are added (a sketch of this idea follows below).
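To illustrate the role of such a central parameters database, here is a minimal sketch of a "Pool" that keeps time-critical parameters in memory and the rest on disc, and lets readers query values without touching the control programs that wrote them. This is an illustrative model only, with hypothetical names; it is not the actual NTT/VLT implementation.

```python
# Minimal sketch of a central parameter pool: in-memory storage for
# time-critical values, a simple on-disc store for the rest.
# Hypothetical names; not the actual NTT/VLT software.
import json
from pathlib import Path


class ParameterPool:
    def __init__(self, disk_path: str = "pool_disk.json") -> None:
        self._memory: dict[str, object] = {}      # time-critical parameters
        self._disk_path = Path(disk_path)         # everything else
        if not self._disk_path.exists():
            self._disk_path.write_text("{}")

    def write(self, name: str, value: object, time_critical: bool = False) -> None:
        """Called by control programs; readers never talk to them directly."""
        if time_critical:
            self._memory[name] = value
        else:
            data = json.loads(self._disk_path.read_text())
            data[name] = value
            self._disk_path.write_text(json.dumps(data))

    def read(self, name: str) -> object:
        """Called by local or remote user-end programs."""
        if name in self._memory:
            return self._memory[name]
        return json.loads(self._disk_path.read_text()).get(name)


if __name__ == "__main__":
    pool = ParameterPool()
    # A telescope control program publishes its state into the pool...
    pool.write("tel.alt.position", 54.3, time_critical=True)
    pool.write("instrument.filter", "R")
    # ...and a user-end display reads it without touching that program.
    print(pool.read("tel.alt.position"), pool.read("instrument.filter"))
```

Because every reader and writer goes through the pool rather than through each other, new modules can be added without changing the existing control programs, which is the openness claimed above.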
The distribution of the Pool over many CPUs, together with remote access methods, will allow the development of a portable user end for the remote use of the NTT and the VLT. This will be implemented on a workstation accommodating the user end both for image processing and for control, and will be software-configurable to act as a local control console for one or several telescopes and instruments, or as a remote observing tool usable from several astronomical institutes in Europe.