Special Section Guest Editorial: Lessons Learned from the James Webb Space Telescope Program
Abstract
Guest Editors Jonathan W. Arenberg, John M. O’Meara, and Paul H. Geithner summarize the Special Section on Lessons Learned from the James Webb Space Telescope Program.

This special section of JATIS was originally intended to collect lessons learned across a wide variety of large astronomical projects: ground and space, studies and flight programs. While that was the intention, it was not the result. All of the submissions for this section come from experience on a single program, NASA’s James Webb Space Telescope.

The most important lesson that we can learn from and about the Webb telescope is that it is functioning as intended. Heading into its third year of science operations, it is performing at or beyond required levels. In this section, Feinberg et al. indicate that the delivered wavefront is twice as good as the specification, allowing for significantly more science. The clear lesson is that with good, careful engineering, even the seemingly impossible is, in fact, possible. Webb’s achievements have been recognized with numerous awards, including the highest award in U.S. aerospace, the Robert J. Collier Trophy (a partial list of honors earned by JWST includes: 2023 John L. “Jack” Swigert Jr. Award for Space Exploration; 2023 Engineers Council Project of the Year; 2023 NSCF Robert H. Goddard Memorial Trophy; 2023 NASM Michael Collins Trophy; 2023 Top Honor in Space on Fast Company’s Most Innovative List; 2023 AIAA Goddard Astronautics Award; 2022 NAA Collier Trophy; 2022 Aviation Week Grand Laureate Award; listed as a 2022 TIME Invention of the Year; listed as a 2022 TIME Photo of the Year; listed in the 2022 Bloomberg Top 50; 2022 Popular Science “Best of What’s New” Award; 2022 Science Magazine Breakthrough of the Year; 2022 Project Management Institute’s 50 Most Influential Projects).

This is all very good news for the extremely challenging mission NASA is beginning to study: the Astrophysics Decadal Survey’s top large mission selection, the Habitable Worlds Observatory (HWO). The editors hope that, by collecting these relevant lessons in one place, this section will serve as a reference for the HWO mission studies now being organized and as encouragement to those involved.

The papers in the special section are contributed by authors with deep, detailed knowledge of the Webb program and with a variety of viewpoints and experiences. As a collection, they provide a nuanced set of views of the challenges and triumphs of Webb’s development. The papers come from NASA and industry, cover a wide range of perspectives, from narrowly focused topics to the system level, and should provide food for thought for anyone contemplating the design of a novel and complex astronomical system.

Menzel et al. provide a detailed history of the major systems engineering activities in Webb’s development. This detailed look provides insight into the history of the systems design, verification, and initial operation of Webb. The authors communicate lessons learned from the Webb experience, with a particular aim toward, and recommended application to, future large space telescopes, in particular HWO.

Systems engineering should be part of the design process at every phase, from mission conceptualization through operations. Ample margins and burn-down plans offer the flexibility to meet program needs and deal with the challenges that are a natural part of development. Webb was challenged with tight volume and mass constraints from early on. The next generation of large launch vehicles, with greater fairing volumes and lift capabilities, should allow for more payload design flexibility and provide greater design margins, which can be harvested during development to solve problems with less finesse required. Future missions would be wise to design for compatibility with the spectrum of future large launchers. Integrated modeling is essential to credibly predict system performance. Such integrated modeling played a central role in Webb system design, and it will do so again, to an even greater extent, on HWO. On Webb, integrated modeling was used to validate designs and verify performance pre-flight. Coordinating modeling and testing is key to a productive and successful design process. Useful models require solid experimental validation, achieved by leveraging high-fidelity test beds and executing fit checks when practical. In the future, the benefits of a more nuanced modeling campaign should be examined, namely one that models a nominal system and not just the worst case. Such nominal-case modeling might keep the design process from being overly driven by so-called “corner cases.”
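To make that contrast concrete, here is a minimal, hypothetical sketch in Python (not drawn from any Webb model) comparing a worst-case stack-up of performance error terms with a Monte Carlo view of the nominal system; the term names, values, and the 2-sigma interpretation of the bounds are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical wavefront-error contributors (nm RMS); values are illustrative.
terms = {
    "polishing_residual": 20.0,
    "alignment": 15.0,
    "thermal_drift": 10.0,
    "gravity_release": 8.0,
}

# Worst case: every contributor simultaneously at its not-to-exceed value,
# combined by root-sum-square (RSS).
worst_case = np.sqrt(sum(v**2 for v in terms.values()))

# Nominal case: treat each not-to-exceed value as a ~2-sigma bound, let the
# contributors vary independently, and examine the resulting distribution.
n_trials = 100_000
totals = np.sqrt(
    sum(rng.normal(0.0, v / 2.0, n_trials) ** 2 for v in terms.values())
)

print(f"worst-case RSS:        {worst_case:.1f} nm")
print(f"nominal (median):      {np.median(totals):.1f} nm")
print(f"nominal (95th pctile): {np.percentile(totals, 95):.1f} nm")
```

The gap between the worst-case number and even the high percentiles of the simulated distribution is the headroom that a purely corner-case analysis never reveals.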

Do perform detailed failure modes and effects analysis to mitigate the risk of unavoidable single-point failures. Given the accumulating damage to the Webb primary mirror from sporadic micrometeoroids, and knowing that precise thermal control will be part of achieving the mechanical ultra-stability necessary for ultra-high-contrast coronagraphy on HWO, the designers of HWO should consider the addition of a barrel around the telescope’s exposed optics (the primary and secondary mirrors).

For future large space telescopes like HWO, in-space servicing is strongly recommended as an integral design element to manifest the “mountaintop observatory” model in space. Because Webb is largely cryogenic and was severely mass and volume constrained, the conscious decision was made early on not to design for extensive servicing. However, given that HWO will be “warm,” combined with the coming generation of large launchers and advances in robotics, designing the mission for planned servicing makes sense. Designing for servicing modularizes the system, which can facilitate a more flexible integration and test plan and a faster build schedule. Moreover, planned servicing enables advances in instrumentation and routine maintenance of spacecraft bus subsystems over the course of a long mission life, leveraging the hard-won optical infrastructure of the space observatory. Much will be learned about the as-built HWO telescope during initial operations with the first-generation coronagraph that can be applied to subsequent generations of instruments. Features that support serviceability, such as camera systems, should be considered early as part of the baseline design definition and evaluated in architectural and performance trades.

Finally, mechanism requirements should be well understood for the specific case of the future flagship. Develop a mechanism life test protocol that is designed for the needs of flagship missions. Failure assessment of critical items, such as single faults, demands detailed and rigorous analysis of the effects of hardware and workmanship. This knowledge will facilitate the development and inspection of these critical items.

After clearly noting that JWST is meeting or exceeding all mission and science requirements, Menzel and co-authors remind readers that Webb was accomplished by a large, diverse team over a long period of time.

Whitman concentrates on the value of “pathfinder” equipment. He presents the two extremal options: no pathfinder equipment and a complete replication of the flight hardware. The author succinctly defines the trades between the two options and their risks. The key risk of the first option, no pathfinders, is that issues are found on the flight hardware, and regression testing under those constraints can be more expensive and time consuming than a pathfinder. It is worth noting that one of the lessons learned from Chandra was to keep the pathfinders in the program, even though they were always on the “chopping block.”1 The Webb pathfinders for the telescope and for the telescope with integrated science instruments are discussed.

Whitman describes the two main uses of pathfinders for the Webb optical telescope element (OTE). The first is to help with integration, reduce risk, ensure safety of the flight hardware, and build proficiency. The author cites all of the pathfinder effort as key to the telescope integration completing two months early with no incidents or errors.

The other main use of pathfinders on the telescope program was in systems testing. The paper recounts how, over a series of tests using the pathfinder, potential issues were identified and corrected before the arrival of the flight hardware for test. The preparation paved the way for a safe and effective system test.

Whitman’s paper illustrates the power of using pathfinders and rehearsals to improve integration, and reinforces the dictum, “don’t do anything for the first time on the flight hardware.”

Stahl recounts the development of the Webb Space Telescope’s primary mirror segment technology. After setting the stage by describing Webb’s primary mirror functional and performance challenge and relating it to other large telescope engineering developments, the author explains how the mirror technology was developed and demonstrated to be of sufficient maturity to support mission preliminary design review (PDR)—effectively summarizing three previous papers identified and cited by the author—and lists 21 discrete lessons extracted from the experience:

  • 1. Start with very clear specifications and performance metrics.

  • 2. Examine a wide solution trade space—do not limit your trade space too early.

  • 3. Use a competitive down-select process to rapidly and cost effectively develop technology.

  • 4. Place the effort under a single Government Principal Investigator and Insight/Oversight Team.

  • 5. Use a single Government Team to certify compliance with performance metrics.

  • 6. Do not trust models to validate performance—validate performance by testing at a relevant scale in a relevant environment. Then iterate until the model matches the data within the allocated error budget uncertainty.

  • 7. It is nearly impossible to have sufficient “as-built” information to model a mirror’s performance to optical specifications. For example, coefficient of thermal expansion (CTE) homogeneity is critical for achieving stable thermal performance, but it is nearly impossible to achieve a high-resolution 3D as-built CTE map.

  • 8. Plan for failure and statistically improbable events. Mirrors break, bend, or fracture; mechanisms fail; micrometeoroids happen.

  • 9. Technology development costs more and takes longer than what anyone estimates—maybe as much as 2× more and longer.

  • 10. Stiffness is more important than areal density.

  • 11. CTE homogeneity and uniform properties are critical for stable thermal performance.

  • 12. Avoid complexity; it is expensive and risky. The simplest solution is always the best solution.

  • 13. Make the mirror as large as possible. Polishing edges is hard. Mechanisms are complex and have had infant mortality up to 30%.

  • 14. Large mirrors are harder to make than small mirrors. Demonstrate technology and processes on the smallest relevant mirror and then scale up by factors of 2×.

  • 15. You cannot manufacture something that you cannot test, and you cannot be certain that you are testing it right unless you have an independent confirming test.

  • 16. Things do not behave the same at 30 K as they do at 300 K and—without experience—your intuition about how they will behave is probably wrong.

  • 17. Iterate the design, and then iterate again.

  • 18. Full-scale pathfinders and engineering development units (EDUs) are extremely valuable. If possible, make the flight spares before starting flight mirror production.

  • 19. Manage the transition to production to maximize learning and minimize forgetting.

  • 20. Transparently include all stakeholders and consider alternatives to gain a consensus decision.

  • 21. Most importantly, there is no substitute for relevant experience.

Barto et al. provide both technical and programmatic lessons learned from the authors’ work on multiple flagship programs.

The technical lessons learned begin with an emphasis on developing and carrying a complete error budget, including the typically small items that do not influence the top line. The lesson of carrying these “nuisance” terms is that doing so demonstrates they have not been forgotten and avoids unproductive discussions of those terms. In building the comprehensive error budget, build one that meets customer needs. Consider using a non-worst-case approach. Avoid “how to” requirements that limit design flexibility and do not enhance the probability of mission success. Carefully consider interfaces and coupling in the design. Develop and deploy lower-fidelity design tools that run quickly, to increase the speed and productivity of design. Consider the advantages of serviceability to enhance the design on the ground and in flight. Finally, we are implored to exploit, rather than fear, complexity.
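As a toy illustration of carrying “nuisance” terms explicitly, the sketch below rolls up a hypothetical error budget in which every term, however small, is tracked by name; the entries, values, and allocation are invented for illustration and do not come from any of the papers.

```python
import math

# Hypothetical error-budget entries in nm RMS; names and values are invented.
budget = {
    "primary_mirror_figure": 25.0,
    "secondary_alignment": 12.0,
    "instrument_internal": 10.0,
    # "Nuisance" terms: individually negligible, but carried so that reviews
    # can confirm they were considered rather than forgotten.
    "harness_strain": 1.5,
    "coating_nonuniformity": 0.8,
    "gravity_sag_residual": 0.5,
}

allocation = 32.0  # top-level allocation, nm RMS (illustrative)

rss = math.sqrt(sum(v**2 for v in budget.values()))
reserve = allocation - rss

for name, value in sorted(budget.items(), key=lambda kv: -kv[1]):
    print(f"{name:24s} {value:6.1f} nm  ({100 * (value / rss) ** 2:4.1f}% of RSS^2)")
print(f"{'RSS total':24s} {rss:6.1f} nm")
print(f"{'reserve vs allocation':24s} {reserve:6.1f} nm")
```

Even though the nuisance terms together shift the root-sum-square total by well under a nanometer, listing them by name closes off the recurring “did anyone consider X?” discussion at reviews.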

The programmatic lessons focus on team dynamics. The first lesson is to build teams that are sustainable and maintainable, that is, designed for the long haul that is flagship development. Work to create an environment in which a badgeless team focused on mission success can emerge. Develop a team-wide sense of optimism. This will carry the team through the challenges and the ups and downs of the long gestation period of a flagship.

Feinberg et al. present the story of the wavefront performance and its stability, giving great insight into how and why Webb’s performance is much better than expected. The driving requirements and the wavefront error budget are discussed in detail. The authors cover extensively the approach to analysis and verification. Issues related to soft structure, the membranes that provide the optical and stray-light closeouts, are highlighted. In the cases discussed, a membrane had insufficient size, so an anomaly was detected at operating temperature. The authors present the on-orbit performance and compare it with predictions.

The paper concludes with a specific set of lessons, presented here in highly paraphrased form:

  • 1. Inspection of workmanship is a foundation of stability verification; model validation is only part of the picture.

  • 2. The design of soft structure requires adequate review to ensure that proper slack is included, so it does not bind.

  • 3. Sufficient time needs to be included in the schedule to remediate issues found in items 1 and 2. Reserve should be included in the error budget in case a remediation cannot be accomplished.

  • 4. A conservative worst-case approach to analysis, including uncertainty factors, did bound on-orbit performance. It also provided performance margin to absorb the impacts of unknowns such as workmanship issues, micrometeoroids, and end-of-life degradations. The approach for structural dynamics jitter analyses was generally conservative.

  • 5. Deformation affecting stability is possible through secondary load paths such as harnesses and should be tested as part of workmanship verification.

  • 6. Mockups and careful attention to detail should be used to evaluate soft structure.

  • 7. Verification architecture should be considered from the start of architecture development. An approach that makes maximum use of active controls can provide a simpler verification strategy, one in which even workmanship surprises can be mitigated.

  • 8. Most of the soft structure surprises on JWST could have been avoided through alternative designs or by better inspections. Avoid soft structure when possible and add workmanship testing when it cannot be avoided.

Stahl reviews the development of the Hubble and Webb space telescope programs. The author draws attention to the fact that technology development enables the leaps that flagship missions must achieve and reviews the many challenges that had to be faced to develop these revolutionary systems. The author notes that NASA has never flown a flagship as recommended and that it is likely that as-yet-unknown reasons will cause some sort of descope. Cost growth in the science instruments is also cited. A necessary element for the success of these missions is sustained support from both industry and the scientific community to overcome political challenges. Launch vehicles are cited as the single most important factor in architecture development.

Stahl also details the history of, and approaches to, metrology as performed on JWST. This history is used to develop a rubric for future efforts. The seven steps defined are: (1) fully understand the task, (2) develop an error budget, (3) maintain continuous metrology coverage, (4) know where you are, (5) test like you fly, (6) perform independent cross-checks, and (7) understand all anomalies.

Arenberg et al. develop the concept of “the lesson of newness,” the fundamental challenge of designing a complex, high-performance system with little to no tolerance for risk in performance. The lesson of newness is developed from an analytic framework of model development, and the authors argue that this situation is a natural consequence of the doctrine of successive refinement in system development. The lesson of newness is intrinsic to flagship development: a flagship, by definition, must make a leap in performance using new system elements. Arenberg and co-authors present the growth in size of the JWST system thermal model to make their point. The analytic development of the lesson of newness is amplified by selected anecdotes from the authors’ experiences on Webb.

The paper offers a list of seven strategic lessons.

  • 1. The full nature of the system is not known until late in the program, as evidenced by the example of the Webb observatory system thermal model.

  • 2. The design and therefore the system model will naturally change over development as problems are uncovered and solved. If the design is not changing, technical and management leadership should inquire as to the reasons.

  • 3. Given the challenges of designing a new system, a future flagship will do well to always have a “requirements check” run of the (integrated) model, in which all parameters are set to their acceptable extreme values and system performance is reconfirmed, as standard practice (see the sketch after this list).

  • 4. Expect a continuing resolution and plan the program accordingly.

  • 5. Verification is an integral part of program design.

  • 6. Hold regular lessons learned meetings with the team and record the results, throughout all stages of development. This will help with identifying changes that need to be made on the program as it evolves as well as capturing lessons from every phase for future missions.

  • 7. Model development is a time-consuming and expensive effort. Reuse of designs allows for reuse of models. Reuse can be complete when the design is identical, or partial when the same design is used for an “adjacent” purpose.
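Lesson 3’s “requirements check” can be pictured with the minimal sketch below: push every parameter of a stand-in integrated model to its acceptable extreme and reconfirm the requirement. The surrogate model, parameter names, limits, and requirement are all hypothetical, for illustration only.

```python
# Minimal "requirements check" sketch: evaluate a stand-in integrated model
# with every parameter at its acceptable extreme value and confirm that the
# top-level requirement still holds. All names and numbers are hypothetical.

def wavefront_error_nm(temperature_drift_mk, alignment_um, vibration_nm):
    """Toy surrogate for an integrated performance model (returns nm RMS)."""
    return 0.8 * temperature_drift_mk + 1.2 * alignment_um + 0.5 * vibration_nm

# Acceptable extreme value of each parameter (illustrative limits).
extremes = {
    "temperature_drift_mk": 50.0,  # millikelvin
    "alignment_um": 20.0,          # micrometers
    "vibration_nm": 30.0,          # nanometers
}

requirement_nm = 100.0  # illustrative top-level requirement

predicted = wavefront_error_nm(**extremes)
margin = requirement_nm - predicted
print(f"predicted {predicted:.1f} nm vs requirement {requirement_nm:.1f} nm: "
      f"{'PASS' if margin >= 0 else 'FAIL'} (margin {margin:.1f} nm)")
```

Running such a check routinely, every time the integrated model changes, turns the lesson from an aspiration into an inexpensive regression test.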

Each of the papers contributes lessons from its authors’ perspectives. In considering this special section as an integrated set of lessons, some themes are strongly evident. The central role of the error budget, as a means of documenting and communicating the current state of the design and of identifying and managing performance reserve, resonates most loudly. This special section also repeatedly identifies verification as a subject needing early and substantial thought. The central role of modeling is another major theme. Many of the contributed papers also note various aspects of team dynamics as key to success, which was noted as foundational to previous flagship success.1

JWST’s performance meets or exceeds all mission and science requirements. This was accomplished by a dedicated team working together diligently for over twenty years. The lessons learned during this process and presented here are the legacy of that effort and a gift that the Webb team gives to the future.

The path to develop the Habitable Worlds Observatory may seem long and challenging, but with the example of the Webb team’s achievements and these lessons lighting the way, the journey to HWO just got a little easier.

Let’s go!

Reference

1. J. Arenberg et al., “Lessons we learned designing and building the Chandra telescope,” Proc. SPIE 9144, 91440Q (2014). https://doi.org/10.1117/12.2055515
© 2024 Society of Photo-Optical Instrumentation Engineers (SPIE)
Jonathan W. Arenberg, John M. O'Meara, and Paul H. Geithner "Special Section Guest Editorial: Lessons Learned from the James Webb Space Telescope Program," Journal of Astronomical Telescopes, Instruments, and Systems 10(1), 011201 (29 March 2024). https://doi.org/10.1117/1.JATIS.10.1.011201