KEYWORDS: Stars, Point spread functions, Sensors, Large Synoptic Survey Telescope, Calibration, Data modeling, Equipment, Signal to noise ratio, Modeling, Edge detection, Galaxy evolution, Galaxy groups and clusters
We present the phase one report of the Bright Star Subtraction (BSS) pipeline for the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST). The pipeline builds an extended PSF model from observed stars and then subtracts this model from the bright stars present in LSST data. Running the pipeline on Hyper Suprime-Cam (HSC) data reveals a correlation between the shape of the extended PSF model and the position of the detector within the camera's focal plane: detectors closer to the edge of the focal plane yield models with reduced circular symmetry. To mitigate this effect, we present an algorithm that lets users account for the location dependence of the model. Our analysis also indicates that the choice of normalization annulus is crucial for modeling the extended PSF. Smaller annuli can exclude stars because they overlap saturated regions, while larger annuli may compromise data quality because of lower signal-to-noise ratios; finding the optimal annulus size is therefore a challenging but essential task for the BSS pipeline. Applying the BSS pipeline to HSC exposures allows the subtraction of, on average, 100 to 700 stars brighter than 12th magnitude in the g band across a full exposure, a full HSC exposure comprising ≈100 detectors.
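To make the normalization trade-off concrete, the sketch below stacks annulus-normalized star cutouts into an extended PSF model and subtracts a scaled copy of that model at a bright star's position. This is a minimal illustration in plain NumPy under stated assumptions, not the actual BSS pipeline code: the function names, the annulus radii, and the convention of marking saturated pixels as NaN are all assumptions made for this example.

```python
# Minimal sketch of annulus normalization for an extended PSF model.
# Hypothetical names and conventions; not the LSST Science Pipelines API.
# Assumes square cutouts centered on each star, NaN = saturated/masked pixel.
import numpy as np

def annulus_flux(cutout, r_in, r_out):
    """Mean unmasked pixel value in an annulus [r_in, r_out) around the center."""
    ny, nx = cutout.shape
    y, x = np.mgrid[:ny, :nx]
    r = np.hypot(x - (nx - 1) / 2, y - (ny - 1) / 2)
    return np.nanmean(cutout[(r >= r_in) & (r < r_out)])

def build_extended_psf(cutouts, r_in=40.0, r_out=50.0):
    """Median-stack star cutouts after dividing each by its annulus flux.

    Stars whose annulus is fully saturated (all NaN) are dropped, mirroring
    the trade-off above: a small annulus overlaps saturation, while a large
    one has a lower signal-to-noise ratio.
    """
    normalized = []
    for c in cutouts:
        flux = annulus_flux(c, r_in, r_out)
        if np.isfinite(flux) and flux > 0:
            normalized.append(c / flux)
    if not normalized:
        raise ValueError("no usable stars for the chosen annulus")
    return np.nanmedian(np.stack(normalized), axis=0)

def subtract_star(image, model, x0, y0, scale):
    """Subtract a scaled model centered at (x0, y0).

    Assumes the star is far enough from the image edge that the model
    footprint fits entirely inside the image.
    """
    ny, nx = model.shape
    ys, xs = int(y0) - ny // 2, int(x0) - nx // 2
    image[ys:ys + ny, xs:xs + nx] -= scale * model
    return image
```

A real implementation must also handle mask-plane propagation, per-star flux fitting, and cutouts that run off the detector edge, all of which this sketch omits.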
The Vera C. Rubin Observatory will advance many areas of astronomy over the next decade with its unique wide-fast-deep multi-color imaging survey, the Legacy Survey of Space and Time (LSST). The LSST will produce approximately 20 TB of raw data per night, which will be automatically processed by the LSST Science Pipelines to generate science-ready data products: processed images, catalogs, and alerts. To ensure that these data products enable transformative science with LSST, stringent requirements have been placed on their quality and scientific fidelity, for example on image quality and depth, astrometric and photometric performance, and object recovery completeness. In this paper we introduce faro, a framework for automatically and efficiently computing scientific performance metrics on the LSST data products for units of data of varying granularity, ranging from single-detector to full-survey summary statistics. By measuring and monitoring metrics, we are able to evaluate trends in algorithmic performance, conduct regression testing during development, compare the performance of one algorithm against another, and verify that the LSST data products will meet performance requirements by comparing them to specifications. We present initial results using faro to characterize the performance of the data products produced on simulated and precursor data sets, and discuss plans to use faro to verify the performance of the LSST commissioning data products.
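The sketch below illustrates the pattern this abstract describes: compute a metric per unit of data at one granularity, aggregate it into a survey-level summary statistic, and verify the result against a specification. It is a schematic in plain NumPy, not the faro API; the metric definition, the simulated per-detector residuals, and the 15 mas threshold are hypothetical stand-ins chosen for this example.

```python
# Schematic of metric computation at varying granularity, in the spirit of
# faro. Hypothetical names and thresholds; not the LSST Science Pipelines API.
import numpy as np

def astrometric_rms_mas(ra_resid_mas, dec_resid_mas):
    """RMS astrometric residual (milliarcseconds) for one unit of data."""
    return float(np.sqrt(np.mean(ra_resid_mas**2 + dec_resid_mas**2)))

def verify(metric_value, specification):
    """Verification check: does the measured value meet the specification?"""
    return metric_value <= specification

# Finest granularity: one metric value per detector (simulated residuals).
rng = np.random.default_rng(0)
per_detector = [
    astrometric_rms_mas(rng.normal(0, 8, 500), rng.normal(0, 8, 500))
    for _ in range(100)  # roughly one full focal plane of detectors
]

# Coarser granularity: a full-survey summary statistic over all detectors.
survey_median = float(np.median(per_detector))

# Compare against a hypothetical design specification of 15 mas.
print(f"median astrometric RMS: {survey_median:.1f} mas,",
      "PASS" if verify(survey_median, 15.0) else "FAIL")
```

Tracking such values over successive pipeline versions is what enables the regression testing and algorithm-to-algorithm comparisons described above.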