Team
- Luiz Fernando Guedes dos Santos (NASA Goddard Space Flight Center)
- Souvik Bose (University of Oslo)
- Valentina Salvatelli (SETI Institute)
- Brad Neuberg (SETI Institute)
- Mark Cheung (Lockheed Martin)
- Miho Janvier (Université Paris-Saclay)
- Meng Jin (Lockheed Martin)
- Yarin Gal (University of Oxford)
- Paul Boerner (Lockheed Martin)
- Atılım Güneş Baydin (University of Oxford)
Abstract
Context. Solar activity plays a quintessential role in influencing the interplanetary medium and space weather around Earth. Remote sensing instruments on-board heliophysics space missions provide a pool of information about the Sun’s activity, via the measurement of its magnetic field and the emission of light from the multi-layered, multi-thermal, and dynamic solar atmosphere. Extreme UV (EUV) wavelength observations from space help in understanding the subtleties of the outer layers of the Sun, namely the chromosphere and the corona. Unfortunately, such instruments, like the Atmospheric Imaging Assembly (AIA) on-board NASA’s Solar Dynamics Observatory (SDO), suffer from time-dependent degradation that reduces their sensitivity. Current state-of-the-art calibration techniques rely on sounding rocket flights to maintain absolute calibration, which are infrequent, complex, and limited to a single vantage point.
Aims. We aim to develop a novel method based on machine learning (ML) that exploits spatial patterns on the solar surface across multi-wavelength observations to auto-calibrate the instrument degradation.
Methods. We establish two convolutional neural network (CNN) architectures that take either single-channel or multi-channel input, and train the models using the SDOML dataset. The dataset is further augmented by randomly degrading images at each epoch, with the training and test datasets spanning non-overlapping months. We also develop a non-ML baseline model to assess the gain of the CNN models. With the best-trained models, we reconstruct the AIA multi-channel degradation curves of 2010–2020 and compare them with the sounding-rocket-based degradation curves.
Results. Our results indicate that the CNN-based models significantly outperform the non-ML baseline model in calibrating instrument degradation. Moreover, the multi-channel CNN outperforms the single-channel CNN, which suggests the importance of cross-channel relations between different EUV channels for recovering the degradation profiles. The CNN-based models reproduce the degradation corrections derived from the sounding rocket cross-calibration measurements within the experimental measurement uncertainty, indicating that they perform on par with current techniques.
Conclusions. Our approach establishes the framework for a novel technique based on CNNs to calibrate EUV instruments. We envision that this technique can be adapted to other imaging or spectral instruments operating at other wavelengths.
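To illustrate the augmentation idea from the Methods above, the sketch below dims each channel of a synthetic multi-channel image by a random factor, which is the kind of degraded input a model learns to invert. This is a minimal hypothetical NumPy example, not the authors' implementation; the function names and the factor range are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(images, rng):
    """Multiply each channel by a random dimming factor in [0.1, 1.0),
    mimicking time-dependent loss of instrument sensitivity."""
    n_channels = images.shape[0]
    factors = rng.uniform(0.1, 1.0, size=n_channels)
    return images * factors[:, None, None], factors

# Toy multi-channel "observation": 3 channels of 8x8 pixels.
clean = rng.uniform(1.0, 100.0, size=(3, 8, 8))
dimmed, true_factors = degrade(clean, rng)

# A model trained on (dimmed, factors) pairs learns to predict the
# per-channel factors; given a prediction, correction is division.
recovered = dimmed / true_factors[:, None, None]
assert np.allclose(recovered, clean)
```

At training time, drawing fresh factors at each epoch means the network never sees the same degraded image twice, which is the augmentation described in the abstract.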
Press Releases and Media Coverage
- Jul 2021: Artificial Intelligence Helps Improve NASA’s Eyes on the Sun, NASA front page
Video
Publications
- Guedes dos Santos, Luiz Fernando, Souvik Bose, Valentina Salvatelli, Brad Neuberg, Mark Cheung, Miho Janvier, Meng Jin, Yarin Gal, Paul Boerner, and Atılım Güneş Baydin. 2021. “Multi-Channel Auto-Calibration for the Atmospheric Imaging Assembly Using Machine Learning.” Astronomy & Astrophysics 648: A53. doi:10.1051/0004-6361/202040051.
Context. Solar activity plays a quintessential role in influencing the interplanetary medium and space weather around Earth. Remote sensing instruments on-board heliophysics space missions provide a pool of information about the Sun’s activity, via the measurement of its magnetic field and the emission of light from the multi-layered, multi-thermal, and dynamic solar atmosphere. Extreme UV (EUV) wavelength observations from space help in understanding the subtleties of the outer layers of the Sun, namely the chromosphere and the corona. Unfortunately, such instruments, like the Atmospheric Imaging Assembly (AIA) on-board NASA’s Solar Dynamics Observatory (SDO), suffer from time-dependent degradation that reduces their sensitivity. Current state-of-the-art calibration techniques rely on sounding rocket flights to maintain absolute calibration, which are infrequent, complex, and limited to a single vantage point. Aims. We aim to develop a novel method based on machine learning (ML) that exploits spatial patterns on the solar surface across multi-wavelength observations to auto-calibrate the instrument degradation. Methods. We establish two convolutional neural network (CNN) architectures that take either single-channel or multi-channel input, and train the models using the SDOML dataset. The dataset is further augmented by randomly degrading images at each epoch, with the training and test datasets spanning non-overlapping months. We also develop a non-ML baseline model to assess the gain of the CNN models. With the best-trained models, we reconstruct the AIA multi-channel degradation curves of 2010–2020 and compare them with the sounding-rocket-based degradation curves. Results. Our results indicate that the CNN-based models significantly outperform the non-ML baseline model in calibrating instrument degradation.
Moreover, the multi-channel CNN outperforms the single-channel CNN, which suggests the importance of cross-channel relations between different EUV channels for recovering the degradation profiles. The CNN-based models reproduce the degradation corrections derived from the sounding rocket cross-calibration measurements within the experimental measurement uncertainty, indicating that they perform on par with current techniques. Conclusions. Our approach establishes the framework for a novel technique based on CNNs to calibrate EUV instruments. We envision that this technique can be adapted to other imaging or spectral instruments operating at other wavelengths.
@article{dossantos-2021-multi, title = {Multi-Channel Auto-Calibration for the Atmospheric Imaging Assembly using Machine Learning}, author = {{Guedes dos Santos}, Luiz Fernando and Bose, Souvik and Salvatelli, Valentina and Neuberg, Brad and Cheung, Mark and Janvier, Miho and Jin, Meng and Gal, Yarin and Boerner, Paul and Baydin, Atılım Güneş}, journal = {Astronomy \& Astrophysics}, year = {2021}, volume = {648}, pages = {A53}, doi = {10.1051/0004-6361/202040051}, url = {https://doi.org/10.1051/0004-6361/202040051} }
- Salvatelli, Valentina, Souvik Bose, Brad Neuberg, Luiz F. Guedes dos Santos, Mark Cheung, Miho Janvier, Atılım Güneş Baydin, Yarin Gal, and Meng Jin. 2019. “Using U-Nets to Create High-Fidelity Virtual Observations of the Solar Corona.” In Second Workshop on Machine Learning and the Physical Sciences (NeurIPS 2019), Vancouver, Canada.
Understanding and monitoring the complex and dynamic processes of the Sun is important for a number of human activities on Earth and in space. For this reason, NASA’s Solar Dynamics Observatory (SDO) has been continuously monitoring the multi-layered Sun’s atmosphere in high-resolution since its launch in 2010, generating terabytes of observational data every day. The synergy between machine learning and this enormous amount of data has the potential, still largely unexploited, to advance our understanding of the Sun and extend the capabilities of heliophysics missions. In the present work, we show that deep learning applied to SDO data can be successfully used to create a high-fidelity “virtual telescope” that generates synthetic observations of the solar corona by image translation. Towards this end we developed a deep neural network, structured as an encoder-decoder with skip connections (U-Net), that reconstructs the Sun’s image of one instrument channel given temporally aligned images in three other channels. The approach we present has the potential to reduce the telemetry needs of SDO, enhance the capabilities of missions that have less observing channels, and transform the concept development of future missions.
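The encoder-decoder with skip connections described above can be sketched as a tiny PyTorch module that maps three input channels to one synthetic output channel. This is a hypothetical minimal sketch of the architecture family (class name, layer widths, and image size are all assumptions), not the network from the paper:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal encoder-decoder with one skip connection: translates a
    3-channel input image into a 1-channel synthetic observation."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # The decoder sees upsampled features concatenated with the
        # encoder features (the skip connection), hence 32 channels in.
        self.dec = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1))

    def forward(self, x):
        e = self.enc(x)                       # full-resolution features
        b = self.down(e)                      # bottleneck at half resolution
        u = self.up(b)                        # back to full resolution
        return self.dec(torch.cat([u, e], dim=1))

net = TinyUNet()
x = torch.randn(2, 3, 64, 64)  # batch of temporally aligned 3-channel images
y = net(x)                     # predicted single-channel image
assert y.shape == (2, 1, 64, 64)
```

Training such a network on temporally aligned channel tuples, with a pixel-wise reconstruction loss, is the image-translation setup the abstract describes; a real U-Net stacks several such down/up stages with a skip at each scale.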
@inproceedings{salvatelli-2019-virtual, title = {Using U-Nets to create high-fidelity virtual observations of the solar corona}, author = {Salvatelli, Valentina and Bose, Souvik and Neuberg, Brad and {Guedes dos Santos}, Luiz F. and Cheung, Mark and Janvier, Miho and Baydin, Atılım Güneş and Gal, Yarin and Jin, Meng}, booktitle = {Second Workshop on Machine Learning and the Physical Sciences (NeurIPS 2019), Vancouver, Canada}, year = {2019} }
- Neuberg, Brad, Souvik Bose, Valentina Salvatelli, Luiz F. Guedes dos Santos, Mark Cheung, Miho Janvier, Atılım Güneş Baydin, Yarin Gal, and Meng Jin. 2019. “Auto-Calibration of Remote Sensing Solar Telescopes with Deep Learning.” In Second Workshop on Machine Learning and the Physical Sciences (NeurIPS 2019), Vancouver, Canada.
As a part of NASA’s Heliophysics System Observatory (HSO) fleet of satellites, the Solar Dynamics Observatory (SDO) has continuously monitored the Sun since 2010. Ultraviolet (UV) and Extreme UV (EUV) instruments in orbit, such as SDO’s Atmospheric Imaging Assembly (AIA) instrument, suffer time-dependent degradation which reduces instrument sensitivity. Accurate calibration for (E)UV instruments currently depends on periodic sounding rockets, which are infrequent and not practical for heliophysics missions in deep space. In the present work, we develop a Convolutional Neural Network (CNN) that auto-calibrates SDO/AIA channels and corrects sensitivity degradation by exploiting spatial patterns in multi-wavelength observations to arrive at a self-calibration of (E)UV imaging instruments. Our results remove a major impediment to developing future HSO missions of the same scientific caliber as SDO but in deep space, able to observe the Sun from more vantage points than just SDO’s current geosynchronous orbit. This approach can be adopted to perform autocalibration of other imaging systems exhibiting similar forms of degradation.
@inproceedings{neuberg-2019-autocalibration, title = {Auto-Calibration of Remote Sensing Solar Telescopes with Deep Learning}, author = {Neuberg, Brad and Bose, Souvik and Salvatelli, Valentina and {Guedes dos Santos}, Luiz F. and Cheung, Mark and Janvier, Miho and Baydin, Atılım Güneş and Gal, Yarin and Jin, Meng}, booktitle = {Second Workshop on Machine Learning and the Physical Sciences (NeurIPS 2019), Vancouver, Canada}, year = {2019} }
- Salvatelli, Valentina, Luiz Fernando Guedes dos Santos, Mark Cheung, Souvik Bose, Brad Neuberg, Miho Janvier, Meng Jin, Yarin Gal, and Atılım Güneş Baydin. 2021. “Self-Supervised Deep Learning for Reducing Data Transmission Needs in Multi-Wavelength Space Instruments: a Case Study Based on the Solar Dynamics Observatory.” In American Geophysical Union (AGU) Fall Meeting, December 13–17, 2021. https://agu.confex.com/agu/fm21/meetingapp.cgi/Paper/984065.
The Solar Dynamics Observatory (SDO), a NASA mission that has been producing terabytes of observational data every day for more than ten years, has been used as a use case to demonstrate the potential of particular methodologies and pave the way for future deep-space mission planning. In deep space, multispectral high-resolution missions like SDO would face two major challenges: (1) a low rate of telemetry and (2) constrained hardware (i.e., a limited number of observational channels). This project investigates the potential, and the limitations, of using a deep learning approach to reduce the data transmission needs and data latency of a multi-wavelength satellite instrument. Namely, we use multi-channel data from SDO’s Atmospheric Imaging Assembly (AIA) to show how self-supervised deep learning models can be used to synthetically produce, via image-to-image translation, images of the solar corona, and how this can be leveraged to reduce the downlink requirements of similar space missions. In this regard, we focus on encoder-decoder-based architectures and study how morphological traits and the brightness of the solar surface affect the neural network predictions. We also investigate the limitations that these virtual observations might have and their impact on science. Finally, we discuss how the proposed method can be used to create a data transmission schema that is both efficient and automated.
@inproceedings{salvatelli-2021-selfsupervised, title = {Self-supervised Deep Learning for Reducing Data Transmission Needs in Multi-Wavelength Space Instruments: a case study based on the Solar Dynamics Observatory}, author = {Salvatelli, Valentina and {Guedes dos Santos}, Luiz Fernando and Cheung, Mark and Bose, Souvik and Neuberg, Brad and Janvier, Miho and Jin, Meng and Gal, Yarin and Baydin, Atılım Güneş}, booktitle = {American Geophysical Union (AGU) Fall Meeting, December 13--17, 2021}, year = {2021}, url = {https://agu.confex.com/agu/fm21/meetingapp.cgi/Paper/984065} }
- Guedes dos Santos, Luiz Fernando, Souvik Bose, Valentina Salvatelli, Brad Neuberg, Mark Cheung, Miho Janvier, Meng Jin, Yarin Gal, Paul Boerner, and Atılım Güneş Baydin. 2020. “Multi-Channel Auto-Calibration for the Atmospheric Imaging Assembly Instrument with Deep Learning.” In American Geophysical Union (AGU) Fall Meeting, December 1–17, 2020. https://agu2020fallmeeting-agu.ipostersessions.com/Default.aspx?s=58-34-12-15-E8-F1-7E-63-04-54-FB-78-A5-C9-FF-B4&pdfprint=true&guestview.
Solar activity plays a major role in influencing the interplanetary medium and space weather around us. Understanding the complex mechanisms that govern such a dynamic phenomenon is important and challenging. Remote-sensing instruments onboard heliophysics missions can provide a wealth of information on the Sun’s activity, especially via the measurement of magnetic fields and the emission of light from the multi-layered solar atmosphere. NASA currently operates the Heliophysics System Observatory (HSO), which consists of a fleet of satellites constantly monitoring the Sun, its extended atmosphere, and the space environments around the Earth and other planets of the solar system. One of the flagship missions of the HSO is NASA’s Solar Dynamics Observatory (SDO). Launched in 2010, it consists of three instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the EUV Variability Experiment (EVE). The SDO has been generating terabytes of observational data every day and has constantly monitored the Sun with the highest temporal and spatial resolution for full-disk observations. Unfortunately, the (E)UV instruments in orbit suffer time-dependent degradation, which reduces instrument sensitivity. Accurate calibration for EUV instruments currently depends on sounding rocket flights (e.g., for SDO/EVE and SDO/AIA), which are infrequent. Since SDO is in a geosynchronous orbit, sounding rockets can be used for calibration, but such calibration experiments may not be practical for deep-space missions (e.g., the STEREO satellites). In the present work, we develop a neural network that auto-calibrates the SDO/AIA channels, correcting sensitivity degradation, by exploiting spatial patterns in multi-wavelength observations to arrive at a self-calibration of (E)UV imaging instruments. This removes a major impediment to developing future HSO missions that can deliver solar observations from different vantage points beyond Earth orbit.
@inproceedings{dossantos-2020-multi, title = {Multi-Channel Auto-Calibration for the Atmospheric Imaging Assembly instrument with Deep Learning}, author = {{Guedes dos Santos}, Luiz Fernando and Bose, Souvik and Salvatelli, Valentina and Neuberg, Brad and Cheung, Mark and Janvier, Miho and Jin, Meng and Gal, Yarin and Boerner, Paul and Baydin, Atılım Güneş}, booktitle = {American Geophysical Union (AGU) Fall Meeting, December 1--17, 2020}, year = {2020}, url = {https://agu2020fallmeeting-agu.ipostersessions.com/Default.aspx?s=58-34-12-15-E8-F1-7E-63-04-54-FB-78-A5-C9-FF-B4&pdfprint=true&guestview} }
- Cheung, Mark, Luiz Fernando Guedes dos Santos, Souvik Bose, Brad Neuberg, Valentina Salvatelli, Atılım Güneş Baydin, Miho Janvier, and Meng Jin. 2019. “Auto-Calibration and Reconstruction of SDO’s Atmospheric Imaging Assembly Channels with Deep Learning.” In American Geophysical Union (AGU) Fall Meeting, San Francisco, CA, United States, December 9–13, 2019. https://agu.confex.com/agu/fm19/meetingapp.cgi/Paper/628427.
Solar activity has a major role in influencing space weather and the interplanetary medium. Understanding the complex mechanisms that govern such a dynamic phenomenon is important and challenging. Remote-sensing instruments on board heliophysics missions can provide a wealth of information on the Sun’s activity, especially via the measurement of magnetic fields and the emission of light from the Sun’s multi-layered atmosphere. Ever since its launch in 2010, NASA’s Solar Dynamics Observatory (SDO) has generated terabytes of observational data every day and has constantly monitored the Sun 24x7 with the highest time cadence and spatial resolution for full-disk observations. Using the enormous amount of data SDO provides, this project, developed at NASA’s Frontier Development Lab (FDL 2019), focuses on algorithms that enhance our understanding of the Sun, as well as the observation potential of present and future heliophysics missions, with the aid of machine learning. In the present work, we use deep learning to increase the capabilities of NASA’s SDO and focus primarily on two aspects: (1) developing a neural network that auto-calibrates the SDO/AIA channels, which suffer from steady degradation over time; and (2) developing a “virtual telescope” that enlarges the mission’s possibilities by synthetically generating desired EUV channels derived from actual physical equipment flown on other missions. Towards this end, we use a deep neural network structured as an encoder-decoder to artificially generate images in different wavelengths from a limited number of observations. This approach can also benefit existing missions, as well as the concept development of future missions, that do not have as many observing instruments as SDO.
@inproceedings{cheung-2019-auto, title = {Auto-calibration and reconstruction of {SDO}’s Atmospheric Imaging Assembly channels with Deep Learning}, author = {Cheung, Mark and {Guedes dos Santos}, Luiz Fernando and Bose, Souvik and Neuberg, Brad and Salvatelli, Valentina and Baydin, Atılım Güneş and Janvier, Miho and Jin, Meng}, booktitle = {American Geophysical Union (AGU) Fall Meeting, San Francisco, CA, United States, December 9--13, 2019}, year = {2019}, url = {https://agu.confex.com/agu/fm19/meetingapp.cgi/Paper/628427} }
Acknowledgments
This project was partially conducted during the 2019 Frontier Development Lab (FDL) program, a co-operative agreement between NASA and the SETI Institute. We wish to thank IBM for providing computing power through access to the Accelerated Computing Cloud, as well as NASA, Google Cloud, and Lockheed Martin for supporting this project. L.F.G.S. was supported by the National Science Foundation under Grant No. AGS-1433086. M.C.M.C. and M.J. acknowledge support from NASA’s SDO/AIA contract (NNG04EA00C) to the LMSAL. S.B. acknowledges support from the Research Council of Norway, project number 250810, and through its Centres of Excellence scheme, project number 262622. This project was also partially performed with funding from the Google Cloud Platform research credits program. We thank NASA’s Living With a Star Program, of which SDO, with its AIA and HMI instruments on board, is a part. CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA), the University of Cambridge (UK), and NASA Goddard Space Flight Center (USA). A.G.B. is supported by EPSRC/MURI grant EP/N019474/1 and by Lawrence Berkeley National Lab. Software: For CUDA processing we acknowledge cuDNN (Chetlur et al. 2014); for data analysis and processing we used SunPy (Mumford et al. 2020), NumPy (van der Walt et al. 2011), pandas (Wes McKinney 2010), SciPy (Virtanen et al. 2020), scikit-image (van der Walt et al. 2014), and scikit-learn (Pedregosa et al. 2011). Finally, all plots were made using Matplotlib (Hunter 2007) and Astropy (Price-Whelan et al. 2018).