Rules for optical metrology: Webb primary mirror case study

H. Philip Stahl
Abstract

The Webb Space Telescope’s on-orbit performance was made possible by successful in-process optical testing and cryogenic requirement compliance certification, verification, and validation of the Webb optical components. This was accomplished by the hard work of dozens of optical metrologists, the development and qualification of multiple custom test setups, and several inventions, including the 4D PhaseCam interferometer and the Leica absolute distance meter. We define a set of rules for optical metrology and summarize how they were applied to the metrology tools, test setups, and processes used to characterize the Webb Space Telescope primary mirror segment assemblies.

1. Introduction

The Webb Space Telescope’s on-orbit performance was made possible by successful in-process optical testing and cryogenic requirement compliance certification, verification, and validation of its optical components. This was accomplished by the hard work of dozens of optical metrologists (named in the Acknowledgments), the development and qualification of multiple custom test setups, and several inventions, including the 4D PhaseCam interferometer and the Leica absolute distance meter (ADM).1 Finally, as the cognizant NASA technical authority, this author performed his insight/oversight responsibility in accordance with seven rules for optical metrology.2 This paper defines these rules and summarizes how they were applied to the metrology tools, test setups, and processes used to characterize the Webb Space Telescope primary mirror segment assemblies (PMSAs).

2. Rules for Optical Testing

No matter how small or large your optical or metrology task, following these principles will ensure success: (1) fully understand the task, (2) develop an error budget, (3) maintain continuous metrology coverage, (4) know where you are, (5) test like you fly, (6) perform independent cross-checks, and (7) understand all anomalies. These rules are based on this author’s many years of optical testing experience (since 1980) and are lessons learned from the author’s failures and successes.

2.1. Fully Understand the Task

Before accepting a task, make sure you fully understand it: what is your customer’s application; what parameters do you need to quantify and to what level of uncertainty; and who is your manufacturing interface? Study your customer’s requirements and understand how they relate to the final system application. Then summarize all requirements into a simple table that can be shared with your customer and your manufacturing methods engineer. Make sure that your customer agrees that what you will quantify satisfies their requirements, and the manufacturing methods engineer agrees that they can make the part based upon the data you will be providing.

Zernike polynomial coefficients are a good example of how not following this rule can cause trouble. Many optical designers use Zernike coefficients to specify optical components, and most optical metrologists use Zernike coefficients to describe surface shape. However, while there is an international standard for Zernike coefficient order (ISO 10110), almost no one uses it (Table 1). Most interferometer manufacturers use the University of Arizona FRINGE sequence (descended from Itek), many optical design programs use the University of Rochester Born and Wolf sequence (descended from Zernike), and Kodak had its own sequence. The problem is compounded because, while most use peak-to-valley normalization (surface PV is 2× the coefficient value), some (such as Perkin-Elmer) use RMS normalization. So, if the customer specifies that an optical component needs to have less than 10 nm of Z8, is that X-coma (Born and Wolf), Y-coma (FRINGE), spherical (ISO), or trefoil (Kodak)?

Table 1. International standard Zernike polynomial coefficient index compared with common alternative orderings (first nine terms, piston through primary spherical).

Description | Polynomial | ISO | FRINGE | Born and Wolf | Kodak | RMS-to-PV ratio
Piston | 1 | 0 | 1 | 1 | 0 | 1
X-tilt | r cos θ | 1 | 2 | 2 | 1 | 1/2
Y-tilt | r sin θ | 2 | 3 | 3 | 2 | 1/2
Power | 2r² − 1 | 3 | 4 | 5 | 3 | 1/√3
X-astigmatism | r² cos 2θ | 4 | 5 | 4 | 4 | 1/√6
Y-astigmatism | r² sin 2θ | 5 | 6 | 6 | 5 | 1/√6
X-coma | (3r² − 2) r cos θ | 6 | 7 | 8 | 6 | 1/√8
Y-coma | (3r² − 2) r sin θ | 7 | 8 | 9 | 7 | 1/√8
Spherical | 6r⁴ − 6r² + 1 | 8 | 9 | 13 | 10 | 1/√5
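As a concrete illustration, the mapping in Table 1 can be encoded directly so that every incoming coefficient index is translated before use. The following Python sketch is not from the original paper; its dictionary is transcribed from Table 1, and it answers the "Z8" question programmatically (Kodak's Z8, trefoil per the text, lies beyond the nine terms tabulated here).

```python
# A lookup built directly from Table 1 that makes index-convention
# mismatches explicit before any data exchange.

# Each entry: description -> (ISO, FRINGE, Born & Wolf, Kodak, RMS-to-PV ratio)
ZERNIKE_TABLE = {
    "piston":        (0, 1, 1, 0, 1.0),
    "x-tilt":        (1, 2, 2, 1, 0.5),
    "y-tilt":        (2, 3, 3, 2, 0.5),
    "power":         (3, 4, 5, 3, 1 / 3**0.5),
    "x-astigmatism": (4, 5, 4, 4, 1 / 6**0.5),
    "y-astigmatism": (5, 6, 6, 5, 1 / 6**0.5),
    "x-coma":        (6, 7, 8, 6, 1 / 8**0.5),
    "y-coma":        (7, 8, 9, 7, 1 / 8**0.5),
    "spherical":     (8, 9, 13, 10, 1 / 5**0.5),
}
CONVENTIONS = {"ISO": 0, "FRINGE": 1, "BornWolf": 2, "Kodak": 3}

def describe(index: int, convention: str) -> str:
    """Return the aberration name that a given index means in a given convention."""
    col = CONVENTIONS[convention]
    for name, row in ZERNIKE_TABLE.items():
        if row[col] == index:
            return name
    return "not among the first nine tabulated terms"

# "Z8" means a different aberration to each tool -- exactly the trap in the text:
for conv in CONVENTIONS:
    print(f"Z8 in {conv}: {describe(8, conv)}")
```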

2.2. Develop an Error Budget

An error budget must be developed for every specification and its tolerance. In addition, every element of the error budget must be certified by absolute calibration and verified by independent test.

An error budget has multiple functions. First, it tells you whether you can actually measure the required parameters to the required tolerances. It drives the required accuracy and reproducibility (not repeatability) of the metrology tools. It defines which test conditions have the greatest impact on test uncertainty. It also identifies risks and technical problems to be overcome. Second, it is necessary to convince your customer that you know what you are doing. And third, it provides a tool for monitoring the test process: if the variability in the test data exceeds the error budget prediction, then you must stop and understand why.

To construct an error budget, perform a propagation-of-error analysis. Start with the equation for the specification value and take its partial derivative with respect to each variable. Square each result and multiply it by the uncertainty (i.e., the variance of the data) for that variable. Then take the square root of the sum. For example, assume that a requirement R is a function of variables (a, b, c), i.e., R = f(a, b, c). The uncertainty in the knowledge of the requirement R is given as

\[
\sigma_R = \sqrt{\left(\frac{\partial f(a,b,c)}{\partial a}\right)^2 \sigma_a^2
+ \left(\frac{\partial f(a,b,c)}{\partial b}\right)^2 \sigma_b^2
+ \left(\frac{\partial f(a,b,c)}{\partial c}\right)^2 \sigma_c^2}.
\]

If the defining equation is a linear sum, then the result is a simple root sum square (RSS) of the individual standard deviations. But if the equation is not linear, then there will be cross terms and scaling factors. In calculating standard deviations, use reproducibility, not repeatability. Repeatability gives an “optimistic” result; reproducibility gives a realistic one. Repeatability is the ability to get the same answer twice if nothing in the test setup is changed. Reproducibility is the ability to get the same answer twice if the mirror is completely removed from and reinstalled into the test setup. From a real-world perspective, reproducibility is much more important than repeatability.3,4
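A minimal numeric sketch of this recipe, using sympy to take the partial derivatives automatically. The defining equation, operating point, and uncertainties are hypothetical illustration values; the last two lines anticipate the 30% RSS reserve suggested next.

```python
# Propagation-of-error sketch per Sec. 2.2 (illustrative values, not Webb numbers).
import sympy as sp

a, b, c = sp.symbols("a b c", real=True)
f = a + b * c  # hypothetical defining equation R = f(a, b, c); the b*c term is nonlinear

sigma = {a: 2.0, b: 0.5, c: 0.1}   # reproducibility (not repeatability!) of each variable
point = {a: 10.0, b: 4.0, c: 3.0}  # operating point at which the partials are evaluated

# Sum of (partial derivative)^2 * variance, then square root:
var_R = sum(float(sp.diff(f, v).subs(point)) ** 2 * s**2 for v, s in sigma.items())
sigma_R = var_R ** 0.5
print(f"sigma_R = {sigma_R:.3f}")   # the b*c term couples sigma_b and sigma_c via the partials

# One reading of a "30% RSS reserve": carry reserve as one more root-sum-squared term.
budget = (sigma_R**2 + (0.30 * sigma_R) ** 2) ** 0.5
print(f"budget with 30% RSS reserve = {budget:.3f}")
```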

Possibly the most important element of an error budget is its reserve. Mistakes happen and it is prudent to plan for them in advance. If possible, I suggest a 30% RSS reserve.

Error budget reserve saved me on the ITTT program (which became Spitzer), where I was the metrology engineer responsible for the secondary mirror. I had a complete error budget, but some elements were allocations. The secondary mirror was manufactured against a Hindle sphere test, and the optician achieved an excellent result. Unfortunately, I waited too long to validate my error budget: I did not calibrate the Hindle sphere until it was time to perform the final certification. To my horror, it had a trefoil mount distortion. Because the secondary mirror had a three-point mount, every time it was inserted into the test setup, the bumps introduced by the optician exactly matched the holes in the Hindle sphere. Fortunately, because of my error budget reserve, the mirror still met its figure specification; it just was no longer spectacular.5 The moral of the story is not only to validate your error budget early but also, as much as possible, to randomize your alignment from test to test. Sometimes bad things happen from being too meticulous. (This could almost be an eighth rule.)

Error budget reserve was also important for Webb. For two mirror segments, instead of polishing the cryo-deformation into the mirror, a “negative” cryo-deformation was accidentally polished into the mirror—thus doubling the error. Fortunately, because of reserve, the primary mirror still met its specification.

2.3. Continuous Metrology Coverage

The old adage is correct: “you cannot make what you cannot test.” The key to implementing this rule is simple: every step of the manufacturing process must have metrology feedback, and there must be overlap between the metrology tools for a verifiable transition. Failure to implement this rule typically results in one of two outcomes: either very slow convergence or negative convergence.

2.4. Know Where You Are

It might seem simple, but if you do not know where a feature is located on the mirror, you cannot correct it. To solve this problem, you must use fiducials. There are two types of fiducials: data fiducials and distortion fiducials. Data fiducials are used to define a coordinate system and locate the measured data in that coordinate system. Sometimes this coordinate system is required to subtract calibration files; other times it is required to produce hit maps. Distortion fiducials are used to map out pupil distortion in the test setup. Many test setups, particularly those with null optics, can have radial as well as lateral pupil distortion. Distortion can cause tool mis-registration errors of 10 to 50 mm or more.

Fiducials can be as simple as a piece of tape or black ink marks on the surface under test, or as sophisticated as mechanical “fingers” attached to the edge and protruding into the clear aperture. While I have used tape fiducials for simple reproducibility of difference tests, or to register a calibration alignment, I do not recommend them for computer-controlled process metrology. In these cases, fiducials define your coordinate system and need to be applied with a mechanical accuracy greater than the required prescription alignment to the substrate. In addition, because the interferometer imaging system might invert the image, or because fold mirrors in the test setup might introduce lateral flips, I highly recommend an asymmetric pattern. The pattern that I have always used places fiducials at 0 deg, 30 deg (or 120 deg), 90 deg, and 180 deg. The 0/180-deg fiducials produce a central axis for the data set, the 90-deg fiducial defines left/right, and the 30- or 120-deg fiducial defines top/bottom; a parity check against this pattern is sketched below. In addition, for test setups with null optics, pupil distortion can be a problem. In these cases, distortion fiducials are required. One option is to place multiple fiducial marks along a radius. For null tests with anamorphic distortion, a grid of fiducial marks is recommended. Finally, if you have a clear aperture requirement, make sure to place fiducial marks inside and outside of the required clear aperture distance; this way you can certify whether or not the requirement is achieved.
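To make the flip check concrete, here is a minimal sketch (hypothetical coordinates, not the Webb software) that uses the asymmetric marks: the sign of a cross product from the 0-to-180-deg central axis to the 90-deg fiducial changes if, and only if, the data have been flipped, because reflections reverse orientation while rotations preserve it.

```python
# Detect a lateral flip/inversion from the asymmetric fiducial pattern.
import numpy as np

def parity(f0, f180, f90):
    """Sign of the cross product from the 0->180 central axis to the 90-deg fiducial.
    A sign change relative to the as-designed pattern means the data were flipped."""
    f0, f180, f90 = (np.asarray(p, float) for p in (f0, f180, f90))
    axis, to90 = f180 - f0, f90 - f0
    return int(np.sign(axis[0] * to90[1] - axis[1] * to90[0]))

designed = parity((1, 0), (-1, 0), (0, 1))   # fiducials at 0, 180, 90 deg on a unit circle
measured = parity((-1, 0), (1, 0), (0, 1))   # the same marks seen through an x-flip
if measured != designed:
    print("lateral flip detected: re-register data before building hit maps")
```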

Another problem is software coordinate conventions. Most interferometer analysis software assumes that the positive optical (Z) axis points from the surface under test toward the interferometer, such that a feature that is higher than desired is positive. However, many optical design programs define the positive optical axis to be into the surface. The problem occurs because both programs will typically define the Y-axis as up, so it is critical to understand which direction the +X-axis follows. (I have also seen a software program that used a left-handed coordinate system; talk about confusing.) The problem is further complicated when interfacing with the optical shop. A good metrologist needs to know the coordinate system of every computer-controlled grinding and polishing machine. Every optical metrologist I know, including myself, has a story of the optical shop doubling the height or depth of a bump or hole because of a sign error, or adding a hole or bump to a surface because of a flip or inversion.
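One defensive practice is to make the convention conversion an explicit, testable step rather than a mental note. The sketch below is illustrative only; the two flags are assumptions to be set per tool, not any particular shop's actual interface.

```python
# Explicit conversion layer between metrology and design coordinate conventions.
import numpy as np

def to_design_frame(surface_map, z_into_surface=True, x_flip=False):
    """surface_map: 2D array of surface heights measured with +Z toward the interferometer.
    Returns the map in the design tool's assumed convention. Getting these two booleans
    wrong is exactly how bumps become holes and errors double on the next polishing run."""
    out = np.array(surface_map, float)
    if z_into_surface:   # design tool counts height into the surface: negate
        out = -out
    if x_flip:           # a fold mirror or imaging inversion flipped left/right
        out = np.fliplr(out)
    return out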

2.5. Test Like You Fly

“Test like you fly” covers a wide range of situations. For example, space telescopes operate over a range of potential temperatures: Spitzer’s performance had to be certified at 4 K and Webb’s at 30 K, while UV/optical telescopes such as Hubble operate between 270 K and 300 K. Also, space mirrors do not experience gravity, so their “zero-g” shape must be characterized and certified. But this rule is not limited to space telescopes. Large ground-based telescopes can have large gravity sags; therefore, they must be tested in their final structure (or a suitable surrogate) at an operational gravity orientation. Gravity is not typically a problem for small, stiff mirrors, but it can be a problem if the mirror is not stiff. Another problem is non-kinematic mounts. Once, I had a task to test an “egg-crate” 0.75-m diameter flat mirror to 30 nm PV. After some initial characterization tests with the customer, I declined: the customer-provided “metrology” mount was unsuitable. The mirror was so “floppy” (i.e., low stiffness) that simply picking it up and setting it back down onto the metrology mount resulted in a 100 nm PV shape change (both astigmatic bending and local mount-induced stress).

2.6. Independent Cross-Checks

Probably the single most “famous” lesson learned from the Hubble Space Telescope is to never rely on a single test.

2.7. Understand All Anomalies

Of all the rules, this one may be the most important, and it must be followed with rigor. No matter how small the anomaly, one must resist the temptation of sweeping a discrepancy under the metaphorical error budget rug.

3. Case Study: Webb Primary Mirror Segment Assemblies

Given its complexity, the Webb primary mirror provides an excellent case study for how to apply the “rules for metrology” for in-process optical testing and cryogenic requirement compliance certification, verification, and validation.

3.1. Fully Understand the Task

The Webb optical telescope element (OTE) is a three-mirror anastigmat with primary, secondary, and tertiary mirrors plus a fine steering mirror. Because of mass and thermal stability considerations, all components are manufactured out of beryllium. Figure 1 summarizes its in-process and final cryogenic specifications. The Webb 6.5-m primary mirror is a near-parabola with a conic constant of −0.9967 and a radius of curvature of 15.880 m at 30 K. The primary mirror is divided into 18 segments with three different prescriptions (A, B, and C). The primary difference between segment types is the off-axis distance (and hence the aspheric departure): “A” segments are closest to the central hole, and “C” segments are at the corners of the hexagonal aperture. The exact radius of the primary mirror is allowed to vary about the requirement specification by ±1 mm, but all 18 segments must match that value to ±0.1 mm at 30 K. Webb is diffraction limited at 2 μm, which translates into a transmitted wavefront specification of 156 nm rms. Of that amount, 131 nm rms is allocated to the telescope and 62 nm rms to the primary mirror. Each segment is allocated 22 nm rms surface error. The PMSA surface figure error is divided among three spatial frequency bands: 20 nm rms is allocated to low/mid-spatial-frequency errors with spatial periods longer than 222 mm/cycle, 7 nm rms to spatial periods from 222 mm down to 0.08 mm/cycle, and 4 nm rms to surface roughness. The primary mirror has a collecting area specification of 25 square meters at 30 K. When this requirement is flowed down to the segment level, accounting for all potential obscuration losses and material shrinkage, it yields a 1.48 square meter requirement per segment, which translates into a clear aperture specification of 7 mm from the physical edge.
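As a sanity check on this flowdown, using the author's numbers from this section, the three spatial-frequency band allocations should root-sum-square to the 22 nm rms segment allocation:

```python
# RSS consistency check of the PMSA surface figure allocations (values from Sec. 3.1).
low, mid, rough = 20.0, 7.0, 4.0                # nm rms per spatial-frequency band
print((low**2 + mid**2 + rough**2) ** 0.5)      # ~21.6 nm rms, inside the 22 nm rms allocation
```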

Fig. 1. PMSA ambient and cryogenic specifications.

At each step of the manufacturing process, there are optical specification “gates” that must be achieved before the mirror can move to the next step. Process documents define the gate specifications that allow a given mirror to pass from machining to generation, generation to grinding, grinding to initial polishing, initial polishing to cryo-testing, and from final cryo-null polishing to coating. During figuring at Tinsley, conic constant, radius of curvature, prescription alignment, and surface figure error were measured and controlled simultaneously. The clear aperture, high-spatial-frequency figure error, and surface roughness specifications were each measured and controlled separately. As noted in Fig. 1, the certification of conic constant, radius of curvature, prescription alignment, and surface figure error (low/mid and part of high) was accomplished at 30 K. Clear aperture, high-spatial-frequency figure error, and surface roughness were not certified at 30 K. Instead, they were certified at ambient with the Tinsley high-spatial and surface roughness test station, under the assumption that these parameters are independent of temperature.

All OTE optical components experienced the same fabrication process. Mirror blanks were manufactured by Brush Wellman and machined into substrates by AXSYS Technologies. Figuring was accomplished via an iterative process. All mirrors were ground and polished at Tinsley. During these processing steps, the mirrors were in “configuration 1,” i.e., the bare substrate mounted on three tooling balls. After initial figuring, they were sent to Ball Aerospace & Technologies Corp. (BATC) for integration into “configuration 2” and “configuration 3” (Fig. 2). Configuration 2 is an ambient manufacturing/metrology mount; configuration 3 is the final flight mount. Once on the flight mount, the mirror’s cryogenic figure was tested at the Marshall Space Flight Center (MSFC) X-Ray and Cryogenic Facility (XRCF). From these data, a cryo-deformation hit map was calculated. The mirror was then returned to BATC for conversion back into configuration 2 and sent to Tinsley for cryo-null figuring. Once the predicted ambient figure was achieved, the mirror was sent to Quantum Coating Inc. for gold coating and then back to the XRCF for final cryo-testing on the flight mount.

Fig. 2. All mirrors (PM, SM, and TM) are processed in one of three configuration states. Config 1 is the bare substrate on tooling balls. Config 2 is an ambient metrology and manufacturing mount. Config 3 is the flight mount.

All OTE optical components are manufactured in Observatory Coordinate Space as defined by “master datums” on the back of each mirror. In configuration 1, the tooling balls are attached to the mirrors at the master datums. The optical surface figure is registered to the mirror substrate and to the observatory coordinate system via data fiducials placed on the front surface of each mirror and secondary fiducials on the sides of each, which are used to transfer the coordinate system from the master datums to the data fiducials.

3.2. Develop an Error Budget

Once the specifications were known, the next step was to determine whether they could be quantified. Thus, an error budget was developed for every specification and its tolerance (Fig. 3). There were multiple optical testing challenges. The primary challenge was how to ensure that components manufactured at ambient temperature satisfied their requirements at cryogenic temperatures to the required tolerances. This was accomplished by measuring the mirrors at 30 K and correcting them at ambient. To achieve the optical figure requirements, it was necessary to perform 10-nm rms absolute accuracy interferometry over a 16-m optical path. For in-process testing, the optical path was under ambient conditions on a vibration-isolated table. For cryogenic testing, the optical path was in vacuum in a less benign vibration environment. A complete understanding of each metrology tool’s test uncertainty is critical.

Fig. 3. Each Webb PMSA specification had a separate error budget, i.e., surface figure, radius of curvature, conic constant, decenter, and clocking of the prescription on the substrate. For every item in this figure, there was a highly detailed error budget.

The first problem encountered by the test team in 1999 was vibration. At the NASA MSFC XRCF, while the test optic and the interferometer were each isolated from the building, they were not physically connected to each other. Thus, they experienced relative motion of 5 to 10 μrad of tilt and 5 to 15 μm of piston. This magnitude of motion made it virtually impossible to acquire data using conventional temporal phase-shifting interferometry. The solution was found in a breadboard concept at MetroLaser, which, after NASA MSFC development funding, yielded the first-ever PhaseCam and resulted in an entire product line of 4D PhaseCam interferometers. These interferometers were fundamental technology in enabling the manufacture of Webb.

The next problem was how to measure and certify a 16-m radius of curvature to a precision of 10 μm at 30 K. For small optics, radius can be measured by either an inside micrometer or a distance measuring interferometer (DMI). But in this case, neither option was viable. It is not possible to insert a calibrated mechanical “meter” into a 30 K environment, and DMIs are not absolute. DMIs measure relative distance change, i.e., the motion of the mirror or the interferometer or a cat’s eye reflector from the mirror vertex to the mirror center of curvature (CoC). They also require an uninterrupted beam. None of these are possible when testing a mirror at 30 K in a cryo-vacuum chamber through an optical window. The first half of the solution was the ADM developed by Leica via NASA MSFC funding. The second half was two 0.5-m massive (i.e., solid) ULE (Corning ultra-low expansion glass) spherical-surface radius of curvature optics (ROCOs) manufactured by Coastal Optics. Both ROCOs were absolutely characterized by the University of Arizona to an accuracy of better than 0.050 mm. The ROCOs were used to calibrate and inter-compare radius of curvature measurements between the two Tinsley optical test stations (OTS), the BATC OTS (BOTS), and the MSFC XRCF OTS.

The third key problem to be solved by the metrology team was thermal stability. This problem became important only in the last few years of fabrication as the mirrors neared their final quality specifications. Achieving a 20-nm rms class mirror required that the mirror figure be thermally stable to better than 5 nm. While beryllium (Be) has a very low coefficient of thermal expansion (CTE) below 90 K, it has a large CTE at 300 K (11.3 μm m⁻¹ K⁻¹). And while Be is a metal with high thermal conductivity, a highly light-weighted mirror such as Webb’s lacks sufficient thermal capacity to maintain a uniform constant temperature under ambient conditions. Therefore, it is very easy for small thermal gradients to cause significant surface figure errors. Achieving 10-nm rms metrology required that thermal gradients in the mirrors be kept at the 0.01 K level. In this case, the solution was application of proven precision metrology principles: test in an extremely stable thermal environment and monitor the mirror’s bulk temperature and gradients.
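A back-of-envelope sketch of why 0.01 K matters, using the ambient CTE quoted above; the structural depth is an assumed, hypothetical value for illustration, not a Webb dimension:

```python
# Differential thermal expansion across an assumed substrate depth (illustrative only).
alpha = 11.3e-6          # Be CTE near 300 K, 1/K (11.3 um/m/K from the text)
depth = 0.05             # assumed structural depth, m (hypothetical)
dT = 0.01                # front-to-back thermal gradient, K
print(alpha * depth * dT * 1e9, "nm")   # ~5.7 nm, the scale of the 5-nm stability requirement
```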

Finally, because the PMSAs moved back and forth between manufacturing and test at Tinsley, between Tinsley and BATC, and between BATC and the MSFC XRCF, a complete understanding of each metrology tool’s test uncertainty was critical. Data from Tinsley, BATC, and the MSFC XRCF had to reproduce each other within the test uncertainty. Certified cryo-data had to be traceable from the XRCF at 30 K in configuration 3 (on the flight mount), to BATC at 300 K where the mirrors were converted from configuration 3 to configuration 2, to Tinsley where they were polished on their fabrication mount at 300 K. This required that BATC demonstrate an ability to convert a PMSA from configuration 2 to configuration 3 with an uncertainty of <10 nm rms.

3.3. Continuous Metrology Coverage

Tinsley developed overlapping metrology tools to measure and control conic constant, radius of curvature, prescription alignment, and surface figure error throughout the fabrication process. During rough grinding this was accomplished using a Leitz coordinate measuring machine (CMM) (Fig. 4). The CMM was the primary tool used to establish radius of curvature and conic constant.

Fig. 4. Leitz CMM was used at Tinsley during generation and rough polishing to control radius of curvature, conic constant, and aspheric figure for PMSAs, secondary mirrors, and tertiary mirror.

Ordinarily, optical fabricators try to move directly from the CMM to optical testing during fine grinding. But, given the size of the Webb PMSAs, there was concern that the CMM could not control the mid-spatial-frequency specification. Thus, a Wavefront Sciences scanning Shack–Hartmann sensor (SSHS) (Fig. 5) was designed and built to provide bridge data. The SSHS is an auto-collimation test: an infrared (10 μm) source is placed at the focus for each PMSA prescription (A, B, or C) to produce a collimated beam, and an infrared Shack–Hartmann sensor is then scanned across the collimated beam to produce a full-aperture map of the PMSA surface. Its infrared wavelength allowed it to test surfaces in a fine-grind state, and its large dynamic range (0 to 4.6 mrad of surface slope) allowed it to measure surfaces that were outside the interferometer’s capture range. The SSHS was only certified to provide mid-spatial-frequency data from 2 to 222 mm. Figure 6 shows an example of the excellent agreement between CMM and SSHS data. Eventually, Tinsley’s process control with the CMM became sufficient that they could go directly to interferometry.

Fig. 5. Scanning Shack–Hartmann sensor tested PMSA mirrors in auto-collimation during coarse grinding. Photo shows sensor (white) mounted on scanning gantry (black).

Fig. 6. Comparison of CMM and SSHS data (for 222 to 2 mm spatial frequencies) after smooth-out grind of the EDU (8/1/2006).

For the fine grinding and polishing processes, metrology feedback was provided by a custom-built optical test station (OTS) (Fig. 7). The OTS is a multi-purpose test station combining the infrared SSHS, a CoC interferometric test with a computer-generated hologram (CGH), and an interferometric auto-collimation test. The OTS simultaneously controlled conic constant, radius of curvature, prescription alignment, and surface figure error. The CoC test pallet contains a 4D PhaseCam, a Diffraction International CGH on a rotary mount, and a Leica ADM. The ADM places the test pallet at the PMSA radius of curvature with an uncertainty of 0.1 mm, which meets the radius knowledge requirement. (Note that this uncertainty is itself an error budget built up from many contributing factors.) Once in this position, if the PMSA were perfect, its surface would exactly match the wavefront produced by the CGH. Any deviation from this null is a surface figure error to be corrected.

Fig. 7. OTS CoC interferometric CGH null test simultaneously measures conic constant, radius of curvature, prescription alignment, and surface figure error. The pallet contains a 4D PhaseCam, a Diffraction International CGH on a rotation mount, and a Leica ADM. The PMSA mount rotates for six-position tests. Quad-cell alignment aids attach to the PMSA.

Other parameters that required overlapping measurement methods were the mid- and high-spatial-frequency figure error and surface roughness specifications. Full-aperture interferometry with the OTS can get partway into the mid-spatial-frequency regime, but fully measuring compliance with these specifications required special metrology tools. A high-spatial-frequency and surface roughness test station was designed and built (Fig. 8). On one end was a Fizeau interferometer, which measured high-spatial-frequency error over small regions of the mirror with high resolution. On the other end was a Chapman profilometer. Data were acquired via the method of multiple independent sub-apertures. The PSDs measured by the two tools overlapped for validation (Fig. 9).

Fig. 8. High-spatial-frequency and surface roughness test station.

Fig. 9. Overlapping high-spatial-frequency and roughness data are required to confirm that a PMSA complies with its high-spatial-frequency requirements.

3.4. Know Where You Are

Because the CoC test is a null test, the key to controlling PMSA conic, radius, and figure simultaneously is controlling the prescription alignment: both knowing where the prescription is on the substrate and knowing where the prescription is in the test setup. Prescription alignment (off-axis distance and clocking) is controlled by aligning the PMSA in the test setup with an uncertainty that is smaller than the decenter and clocking tolerances. This is made possible with fiducials. PMSAs are manufactured in Observatory Coordinate Space as defined by “master datums” on the back of each substrate. The optical surface figure is registered to the mirror substrate and to the observatory coordinate system via data fiducials placed on the front surface of each mirror. The CMM was the primary tool for establishing compliance with prescription alignment: starting with the master datums, the CMM defines “transfer” fiducials on the side of the mirror and then establishes the data fiducials based on these secondary fiducials. Figure 10 shows fiducialized mirrors being loaded into the MSFC XRCF for cryogenic testing. Some of the mirrors have only the data fiducials; others have both data fiducials and distortion fiducials (a 2D grid of dots). Distortion fiducials are necessary to compensate for anamorphic distortion introduced by the CGH.
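The standard reduction for such a fiducial grid, sketched below with hypothetical data (this is the generic technique, not the Webb pipeline), is to fit a low-order polynomial mapping from the detector coordinates of the distortion fiducials to their known mirror coordinates, then push all measured data through that fit before computing hit maps.

```python
# Fit a quadratic 2D distortion map from fiducial positions (standard least-squares technique).
import numpy as np

def basis(px, py):
    """Quadratic polynomial basis in detector pixel coordinates."""
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_distortion(px, py, mx, my):
    """px, py: fiducial centers in detector pixels; mx, my: known mirror coords (mm).
    Returns coefficient vectors mapping pixels to mirror coordinates."""
    A = basis(px, py)
    cx, *_ = np.linalg.lstsq(A, mx, rcond=None)
    cy, *_ = np.linalg.lstsq(A, my, rcond=None)
    return cx, cy

def pixels_to_mirror(px, py, cx, cy):
    A = basis(px, py)
    return A @ cx, A @ cy

# Hypothetical check: a 3x3 fiducial grid seen through mild anamorphic distortion.
gx, gy = np.meshgrid(np.linspace(-1, 1, 3), np.linspace(-1, 1, 3))
mx, my = gx.ravel() * 100, gy.ravel() * 100          # true mirror coordinates, mm
px, py = mx * 1.02 + 0.5, my * 0.98 - 0.3            # what the detector "sees"
cx, cy = fit_distortion(px, py, mx, my)
print(np.allclose(pixels_to_mirror(px, py, cx, cy), (mx, my)))  # True
```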

Fig. 10. PMSA mirrors with data and distortion fiducials are ready for loading into the MSFC XRCF.

3.5. Test Like You Fly

Webb is an infrared space telescope; therefore, it operates at cryogenic temperatures below 50 K. But because Webb’s mirrors were fabricated at room temperature (300 K), it was necessary to measure their shape change from 300 K to 30 K, generate a “hit map,” and cryo-null polish the mirrors such that they satisfy their required figure specification at 30 K. The MSFC XRCF (Fig. 11) was used to measure the in-process cryogenic shape of each mirror and to certify its cryogenic optical performance specifications (Fig. 1) at 30 K. The XRCF test is an in-line CoC test with a CGH null, and the facility can accommodate up to six PMSAs in a single test. The XRCF interferometer measures each PMSA individually with a CGH matched to its prescription. In addition, each mirror’s radius of curvature was set at 30 K in the XRCF using an ADM calibrated against a solid mirror radius standard.6 After coating, all mirrors underwent a final cryo-certification test of conic constant, radius of curvature, prescription alignment, and surface figure error (low/mid and part of high) at 30 K in the MSFC XRCF, cross-checked at 30 K in the Johnson Space Center (JSC) Chamber A. Clear aperture, high-spatial-frequency figure error, and surface roughness were certified at ambient with the Tinsley high-spatial and surface roughness test station, under the assumption that these parameters are independent of temperature.

Fig. 11. The MSFC X-Ray and Cryogenic Facility (XRCF), with its 7-m diameter and 23-m length, can test up to six Webb PMSAs. Test equipment is located outside a window, at ambient temperature and atmospheric conditions.

Finally, because Webb operates in the micro-gravity of space but was manufactured in the gravity of Earth, it was necessary to remove gravity sag from the measured shape. This was accomplished using a standard six-rotation test: exploiting symmetry, each PMSA is tested in six 60-deg rotation positions, and the CGH is rotated in its mount to match. To maintain prescription alignment, quad-cell sensors are mounted to the secondary fiducials on the side of the PMSAs.
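A minimal sketch of the textbook N-position rotation test follows (this is the generic technique, not the Webb reduction code): de-rotate each map to a common orientation and average, so the part figure adds coherently while the fixed setup and gravity errors, at azimuthal orders that are not multiples of N, average toward zero.

```python
# N-position rotation test: separate part figure from fixed setup/gravity error.
import numpy as np
from scipy.ndimage import rotate

def n_position_average(measurements, angles_deg):
    """measurements: 2D surface maps taken with the part rotated by angles_deg.
    De-rotates each map back to a common orientation and averages. Note that the
    rotation sign convention must match how the part was physically clocked."""
    derot = [rotate(m, -a, reshape=False, order=1, mode="nearest")
             for m, a in zip(measurements, angles_deg)]
    return np.mean(derot, axis=0)

angles = [0, 60, 120, 180, 240, 300]  # the six 60-deg positions described above
```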

3.6. Independent Cross-Checks

Every Webb optical component specification had a primary certification test and at least one confirming test (Fig. 12). The PMSA prescription had multiple cross-check tests. The prescription was defined during fabrication at ambient using the Tinsley CoC interferometer CGH test. To confirm the results of this test, an independent auto-collimation test was performed (Fig. 13). The PMSA prescription was further tested via an independent ambient test at BATC and the MSFC XRCF 30 K test. The prescription received a final confirmation test at 30 K when the entire assembled primary mirror was tested at CoC with a refractive null corrector at JSC.

Fig. 12. PMSA final cryogenic optical performance requirements. Cryo-testing is performed in configuration 3, i.e., on the flight mount.

Fig. 13. PMSA interferometric auto-collimation prescription cross-check test.

3.7. Understand All Anomalies: Three Examples

3.7.1. Clear aperture anomaly

There was a significant discrepancy between the clear aperture measured by the CoC interferometer and the clear aperture measured by the high-spatial-frequency (HS) interferometer (Fig. 14). The CoC interferometer was measuring a “good” edge, while the HS interferometer was measuring a significantly “down” edge. Using the wrong data would result in, at best, a poor convergence rate and, at worst, a mirror that failed to meet its specification. Clear aperture was important because it is a key factor in the telescope’s on-orbit performance.

Fig. 14. Clear aperture data from the high-spatial interferometer and the CoC interferometer did not match until the mirror surface reached its final specification. It was necessary to use the HS data to control the edge fabrication process.

Initially, the CoC test was assumed to be correct, and the mirrors were processed using its data. But as the mirrors became better, the CoC test reported that it was measuring valid data several millimeters outside of the mechanical clear aperture. A quick test with edge fiducials revealed that, instead of seeing to the required 7 mm from the edge, the CoC test was only seeing to within 15 to 25 mm of the mechanical aperture. It should be noted that while the final undistorted and interpolated data have 0.75-mm pixels, in raw or distorted space the pixel “footprint” on the mirror can be as large as 1 mm × 2 mm. Once the HS data were used to control the process, convergence improved, and the mirror clear aperture met the required specification.

The sources of the edge discrepancy are interesting and important to optical metrologists. An early candidate for the discrepancy, but ultimately a non-factor, was the distorted test image viewed through the CGH. It was thought that an error in the un-distortion algorithm was making the data set appear to cover more of the mirror than it really did. One contributing effect was geometric retrace error: slope error on the rolled edge can cause the reflected ray to return to the test setup with an outward radial shear. This effect could have been mitigated via a field lens, but our test setup did not have one. The real source of the edge error was depth of focus. The aspheric departure of the PMSA was so great that it was not possible to have the center and the edge of the mirror in focus simultaneously. Fresnel diffraction from an out-of-focus edge coherently added to the reflected wavefront and obscured the true shape of the PMSA surface at the edge of the mirror. Interestingly, gravity sag also had a role: astigmatic bending caused the mirror to be “flatter” in one direction than in the other, and thus more in focus in one direction than the other.

3.7.2. Test station anomaly

As discussed in Sec. 3.3, in accordance with rule 2, achieving the specified rms surface figure error required the two OTS CoC test setups to have 10-nm rms absolute reproducibility and agreement with each other. Detailed error budgets were developed, vetted, and validated by test before the OTSs were certified to take metrology data.

Initially, OTS#1 did not achieve the required reproducibility. One problem was non-reproducibility in the PMSA mount itself, which resulted in a non-reproducible gravity sag. Another problem was that the PMSAs were experiencing unacceptably large thermal gradients. Because of the high CTE of beryllium at 300 K (i.e., room temperature) and the extreme light-weighting of the PMSAs, small thermal gradients can introduce significant figure distortions. In addition, temporal variations of these gradients (as well as of the bulk temperature) resulted in an inability to acquire data with the required precision. Figure 15 shows the reproducibility of two separate tests on OTS#1.

Fig. 15. Test reproducibility for OTS#1 (VC6GA294-VC6HA270) validates that its performance complies with its predicted error budget.

Certification was also delayed because OTS#2 had a systematic 0.2-mm radius of curvature discrepancy relative to OTS#1. While the discrepancy was small, might have been caused by a radius bias in the OTS#2 fold mirror calibration, could have been “calibrated” out of the measurement, and, if real, would have resulted in a very small residual figure error, rule 7 was rigorously enforced. The anomaly was eventually tracked down to a slipping translation motor. Once it was corrected, OTS#2 repeated with OTS#1 at the required 10-nm rms level.

3.7.3. Tinsley versus BATC anomaly

PMSAs moved between Tinsley and BATC, where they were de-integrated from their manufacturing mount (Config. 2) and integrated into their flight mount (Config. 3) for cryo-testing in the MSFC XRCF. They were then converted back onto their fabrication mount (Config. 2) for cryo-null polishing at Tinsley. To facilitate these tasks, BATC built the BATC optical test station (BOTS). BOTS (Fig. 16) used an in-line configuration with a CGH (the same as at Tinsley and the XRCF) in an insulated, temperature-controlled enclosure. BOTS performed incoming/outgoing inspection to verify data handoff. It quantified any change in these values as a function of integration into Config. 2 or Config. 3, or as a result of vibration testing or thermal-vacuum testing. It also performed a rotation test to create a gravity sag back-out file for use at the XRCF. To achieve the required error budget specification, the BOTS Config. 2 measurement had to agree with the Tinsley OTS (TOTS) to less than 10 nm rms.

Fig. 16. BOTS is an in-line CoC test with a CGH null. Thermal stability is produced by an insulated, temperature-controlled enclosure.

Initially, even though the BOTS and TOTS had been calibrated using the ROCOs, the two did not agree on radius of curvature (Fig. 17). This is because beryllium has a high CTE at ambient, and just a few mK of temperature difference between test environments produces a measurable radius error. This discrepancy was easily fixed via a calibrated thermal deformation model. Once a thermal radius back-out model was created, BOTS and TOTS agreed to the 10-nm rms level.
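The underlying arithmetic is simple isothermal scaling: every dimension, including the radius of curvature, scales with the CTE. A minimal sketch (an assumed linear model, not BATC's calibrated deformation model):

```python
# First-order thermal radius shift: dR = alpha * R * dT (illustrative, linear model only).
alpha_be = 11.3e-6                 # Be CTE near 300 K, 1/K (from Sec. 3.2)
R = 15.880                         # primary mirror radius of curvature, m
for dT_mK in (1, 5, 10):           # assumed bulk temperature offsets between test sites
    dR_um = alpha_be * R * (dT_mK * 1e-3) * 1e6
    print(f"{dT_mK} mK -> {dR_um:.2f} um radius shift")
```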

Fig. 17. Initially, BOTS and TOTS radius of curvature measurements did not match. This was traced to a bulk temperature difference.

In addition, in the process of reconciling the BOTS and TOTS data, a systematic prescription alignment discrepancy was discovered. The source of this discrepancy was a difference in how BATC and Tinsley installed their alignment fiducials on the secondary fiducial interfaces. Again, once this was corrected, the prescription alignments matched.

4. Conclusion

The Webb Space Telescope’s on-orbit performance is nearly identical to its final compliance certification predicted performance (Fig. 18).7 This success was made possible by the hard work of dozens of optical metrologists (named in the Acknowledgments), the development and qualification of multiple custom test setups, and several inventions, including the 4D PhaseCam and the Leica ADM, and in part because the optical components’ in-process optical testing and cryogenic requirement compliance certification, verification, and validation testing were conducted according to this author’s rules for optical metrology. No matter how small or large your optical or metrology task, these rules are a useful rubric for ensuring success: (1) fully understand the task, (2) develop an error budget, (3) maintain continuous metrology coverage, (4) know where you are, (5) test like you fly, (6) perform independent cross-checks, and (7) understand all anomalies.

Fig. 18. (a) Webb’s pre-flight predicted wavefront is nearly identical to (b) the on-orbit wavefront.7

Acknowledgments

The author thanks NASA MSFC: Ron Eng and W. Scott Smith; BATC: Bob Brown, Dave Chaney, Ben Gallagher, Jake Lewis, John Schwenker, and Koby Smith; L3-Tinsley: Chris Alongi, Andrea Arneson, Rob Bernier, Jay Daniel, Lee Dettmann, Glen Cole, Robert Garfield, Patrick Johnson, Allen Lee, Adam Magruder, Ankit Patel, and Martin Seilonen; UAH: James Hadaway and Pat Reardon; GSFC: Doug Leviton; and LLNL: Michael Messerly. This author declares no potential conflicts of interest with respect to the research, authorship, or publication of this article.

References

1. H. P. Stahl et al., “Survey of interferometric techniques used to test Webb optical components,” Proc. SPIE 7790, 779002 (2010). https://doi.org/10.1117/12.862234

2. H. P. Stahl, “Rules for optical metrology,” Proc. SPIE 8011, 80111B (2011). https://doi.org/10.1117/12.902826

3. H. P. Stahl, “Phase-measuring interferometry performance parameters,” Proc. SPIE 680 (1987). https://doi.org/10.1117/12.939588

4. H. P. Stahl and J. A. Tome, “Phase-measuring interferometry: performance characterization and calibration,” Proc. SPIE 954 (1989). https://doi.org/10.1117/12.947576

5. H. P. Stahl et al., “Fabrication and testing of the ITTT beryllium secondary mirror,” Proc. SPIE 3134, 62–71 (1997). https://doi.org/10.1117/12.295156

6. D. Chaney, J. B. Hadaway, and J. Lewis, “Cryogenic radius-of-curvature matching for the Webb primary mirror segments,” Proc. SPIE 7439, 743951C (2009). https://doi.org/10.1117/12.845115

7. L. Feinberg, “UV-Optical Telescope Technology,” AAS presentation (2023).

Biography

H. Philip Stahl, senior optical physicist at NASA MSFC, is a leading authority in optical systems engineering and metrology. He matures technology for large space telescopes, was responsible for the Webb Telescope mirrors, and developed the “Stahl” telescope cost model. He is a recipient of NASA’s Distinguished Service Medal, a fellow of SPIE and OSA, and was the 2014 SPIE president. He received his PhD (1985) and MS (1983) in optical science from the University of Arizona and his BA (1979) in physics/mathematics from Wittenberg University.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
H. Philip Stahl "Rules for optical metrology: Webb primary mirror case study," Journal of Astronomical Telescopes, Instruments, and Systems 10(1), 011202 (2 November 2023). https://doi.org/10.1117/1.JATIS.10.1.011202
Received: 3 April 2023; Accepted: 3 October 2023; Published: 2 November 2023
KEYWORDS: Mirrors, Metrology, Optical metrology, Cryogenics, Optical testing, Interferometers, Mirror surfaces
