Astroinformatics


Diffusive nested sampling
We introduce a general Monte Carlo method based on Nested Sampling (NS), for sampling complex probability distributions and estimating the normalising constant. The method uses one or more particles, which explore a mixture of nested probability distributions, each successive distribution occupying ∼ e−1 times the enclosed prior mass of the previous distribution. While NS technically requires independent generation of particles, Markov Chain Monte Carlo (MCMC) exploration fits naturally into this technique. We illustrate the new method on a test problem and find that it can achieve four times the accuracy of classic MCMC-based Nested Sampling, for the same computational effort; equivalent to a factor of 16 speedup. An additional benefit is that more samples and a more accurate evidence value can be obtained simply by continuing the run for longer, as in standard MCMC.
DeepSky: Identifying Absorption Bumps via Deep Learning
Pervasive interstellar dust grains provide significant insights that help us understand the formation and evolution of stars, planetary systems, and galaxies, and could potentially lead us to the secret of the origin of life. One of the most effective ways to analyze the dust is via its interaction with and interference on background light. The observable extinction curves and spectral features carry information about the size and composition of the dust grains. Among these features, the broad 2175 Å absorption bump is one of the most significant spectroscopic interstellar extinction features. Traditionally, astronomers apply conventional statistical and signal processing techniques to detect the existence of absorption bumps. These approaches require labor-intensive preprocessing and the co-existence of other reference features to alleviate the influence of noise. Conventional approaches not only involve substantial labor cost in complicated workflows, but also demand well-trained experts to make subtle and error-prone conditional decisions. In this paper, we propose to leverage deep learning to automate the detection workflow without laborious feature engineering. We design and analyze deep convolutional neural networks for detecting absorption bumps. We further propose the framework of deep learning mechanisms and models (collectively called DeepSky) for scientific discovery in astronomy. The prototype of DeepSky demonstrates efficient and effective results using limited labeled data. With well-designed data augmentation, our trained model achieved about 99% accuracy in prediction on real-world data.
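As a rough illustration of the approach described above, the sketch below builds a small one-dimensional convolutional classifier that maps a resampled spectrum segment to the probability that an absorption bump is present. The layer sizes, input length, and training call are assumptions for illustration only, not the authors' DeepSky architecture.

```python
# Hypothetical 1D-CNN "bump vs. no bump" classifier; layer sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

n_bins = 1024  # assumed length of each resampled spectrum segment

model = tf.keras.Sequential([
    layers.Conv1D(16, 7, activation="relu", input_shape=(n_bins, 1)),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(absorption bump present)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(spectra[..., None], labels, epochs=10, validation_split=0.2)
```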
Celeste: Variational inference for a generative model of astronomical images
We present a new, fully generative model of optical telescope image sets, along with a variational procedure for inference. Each pixel intensity is treated as a Poisson random variable, with a rate parameter dependent on latent properties of stars and galaxies. Key latent properties are themselves random, with scientific prior distributions constructed from large ancillary data sets. We check our approach on synthetic images. We also run it on images from a major sky survey, where it exceeds the performance of the current state-of-the-art method for locating celestial bodies and measuring their colors.
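The pixel-level likelihood described above can be written down compactly. The sketch below assumes a known background level and per-source PSF images that sum to one; it is a minimal illustration of a Poisson image model, not Celeste's actual variational implementation.

```python
# Minimal Poisson pixel model: expected counts = background + sum of PSF-scaled fluxes.
import numpy as np
from scipy.stats import poisson

def pixel_rates(background, fluxes, psf_images):
    """Expected photon count per pixel (psf_images: one normalised image per source)."""
    rate = np.full(psf_images.shape[1:], background, dtype=float)
    for flux, psf in zip(fluxes, psf_images):
        rate += flux * psf
    return rate

def image_log_likelihood(counts, background, fluxes, psf_images):
    """log p(observed counts | latent source properties) under the Poisson model."""
    return poisson.logpmf(counts, pixel_rates(background, fluxes, psf_images)).sum()
```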
A Flight through the Universe
The authors describe the creation of a three-dimensional fly-through animation across the largest map of galaxies to date. The project represented a challenge: creating a scientifically accurate representation of the galaxy distribution that is also aesthetically pleasing. The animation shows almost half a million galaxies as the viewer travels through the vast intergalactic regions, giving a glimpse of the sheer size of the universe.
Streaming Algorithms for Halo Finders
Cosmological N-body simulations are essential for studies of the large-scale distribution of matter and galaxies in the Universe. This analysis often involves finding clusters of particles and retrieving their properties. Detecting such “halos” among a very large set of particles is a computationally intensive problem, usually executed on the same supercomputers that produced the simulations, requiring huge amounts of memory. Recently, a new area of computer science has emerged. This area, called streaming algorithms, provides new theoretical methods to compute data analytics in a scalable way using only a single pass over a data set and logarithmic memory. The main contribution of this paper is a novel connection between N-body simulations and streaming algorithms. In particular, we investigate a link between halo finders and the problem of finding frequent items (heavy hitters) in a data stream, which should greatly reduce the computational resource requirements, especially the memory needs. Based on this connection, we can build a new halo finder by running efficient heavy hitter algorithms as a black box. We implement two representatives of the family of heavy hitter algorithms, the Count-Sketch algorithm (CS) and Pick-and-Drop sampling (PD), and evaluate their accuracy and memory usage. Comparison with other halo-finding algorithms from [1] shows that our halo finder can locate the largest haloes using significantly less memory and with comparable running time. This streaming approach makes it possible to run and analyze extremely large data sets from N-body simulations on a smaller machine, rather than on supercomputers. Our findings demonstrate the connection between the halo search problem and streaming algorithms as a promising initial direction for further research.
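The heavy-hitter idea can be sketched in a few lines: map each particle to a grid cell, stream the cell ids through a small-memory sketch, and read off the cells with the largest estimated counts as halo candidates. The example below uses a simple Count-Min sketch for clarity; it is not the Count-Sketch or Pick-and-Drop implementation evaluated in the paper, and the grid mapping is assumed.

```python
# Toy heavy-hitter sketch over a stream of particle cell ids (Count-Min variant).
import numpy as np

class CountMinSketch:
    def __init__(self, depth=5, width=2048, seed=0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.salts = rng.integers(1, 2**31 - 1, size=depth)
        self.width = width

    def _cols(self, key):
        return [hash((int(s), key)) % self.width for s in self.salts]

    def add(self, key):
        for row, col in enumerate(self._cols(key)):
            self.table[row, col] += 1

    def estimate(self, key):
        return min(self.table[row, col] for row, col in enumerate(self._cols(key)))

# One pass, O(depth * width) memory:
# for cell_id in particle_cell_stream:
#     sketch.add(cell_id)
# Cells with the largest estimated counts mark the densest regions (halo candidates).
```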
StarWatch 2.0: RFI Filter for SETI Signals
We extend our system for radio astronomical monitoring with a cross-validation filter that separates near-Earth radio frequency interference (RFI) from deep space signals. The filter searches for similar signals in a nearby frequency band coming from a different spatial direction than the tested signal, and passes only those signals that do not have such duplicates. We apply this technique to a database of the SETI Institute (setilive.org) containing 1.5 million sky observations in the 0.5-11.2 GHz frequency range, where our primary selection identified 28 strong signals possessing an extraterrestrial (ET) signature. Cross-validation allows us to filter out 24 of these signals as satellite RFI. We present parameters for the remaining 4 signals and discuss the statistical significance of these findings.
Machine learning classification of SDSS transient survey images
We show that multiple machine learning algorithms can match human performance in classifying transient imaging data from the Sloan Digital Sky Survey (SDSS) supernova survey into real objects and artefacts. This is a first step in any transient science pipeline and is currently still done by humans, but future surveys such as the Large Synoptic Survey Telescope (LSST) will necessitate fully machine-enabled solutions. Using features trained from eigenimage analysis (principal component analysis, PCA) of single-epoch g, r and i difference images, we can reach a completeness (recall) of 96 per cent, while only incorrectly classifying at most 18 per cent of artefacts as real objects, corresponding to a precision (purity) of 84 per cent. In general, random forests performed best, followed by the k-nearest neighbour and the SkyNet artificial neural net algorithms, compared to other methods such as naive Bayes and kernel support vector machine. Our results show that PCA-based machine learning can match human success levels and can naturally be extended by including multiple epochs of data, transient colours and host galaxy information which should allow for significant further improvements, especially at low signal-to-noise.
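A minimal version of such a pipeline (PCA features feeding a random forest) can be put together with scikit-learn as below. The random arrays stand in for flattened g, r and i difference-image cutouts and their human labels; the dimensions and hyperparameters are illustrative, not the paper's exact configuration.

```python
# Sketch of a PCA + random-forest real/artefact classifier on difference-image cutouts.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3 * 21 * 21))   # placeholder: stacked g, r, i cutouts
y = rng.integers(0, 2, size=1000)          # placeholder labels: 1 = real, 0 = artefact

clf = make_pipeline(PCA(n_components=20),
                    RandomForestClassifier(n_estimators=300, random_state=0))
print(cross_val_score(clf, X, y, scoring="recall", cv=5))  # completeness on "real"
```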
Dynamic temperature selection for parallel tempering in Markov chain Monte Carlo simulations
Modern problems in astronomical Bayesian inference require efficient methods for sampling from complex, high-dimensional, often multimodal probability distributions. Most popular methods, such as MCMC sampling, perform poorly on strongly multimodal probability distributions, rarely jumping between modes or settling on just one mode without finding others. Parallel tempering addresses this problem by sampling simultaneously with separate Markov chains from tempered versions of the target distribution with reduced contrast levels. Gaps between modes can be traversed at higher temperatures, while individual modes can be efficiently explored at lower temperatures. In this paper, we investigate how one might choose the ladder of temperatures to achieve more efficient sampling, as measured by the autocorrelation time of the sampler. In particular, we present a simple, easily implemented algorithm for dynamically adapting the temperature configuration of a sampler while sampling. This algorithm dynamically adjusts the temperature spacing to achieve a uniform rate of exchanges between chains at neighbouring temperatures. We compare the algorithm to conventional geometric temperature configurations on a number of test distributions and on an astrophysical inference problem, reporting efficiency gains by a factor of 1.2-2.5 over a well-chosen geometric temperature configuration and by a factor of 1.5-5 over a poorly chosen configuration. On all of these problems, a sampler using the dynamical adaptations to achieve uniform acceptance ratios between neighbouring chains outperforms one that does not.
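One simple way to implement this kind of adaptation, not necessarily the authors' exact rule, is to nudge the log-temperature spacing of each neighbouring pair towards the ladder-average swap acceptance rate, so that over time all pairs exchange at a similar rate:

```python
# Sketch: adapt a temperature ladder so neighbouring chains swap at similar rates.
import numpy as np

def adapt_ladder(betas, swap_accept, step=0.1):
    """betas: inverse temperatures, decreasing from beta_0 = 1.
    swap_accept[i]: recent acceptance rate for swaps between chains i and i+1."""
    log_T = -np.log(betas)                    # log temperatures, increasing from 0
    log_spacing = np.log(np.diff(log_T))      # one spacing per neighbouring pair
    # Widen spacings where swaps are accepted more often than average, shrink
    # them where swaps are rarer, driving the acceptance rates towards uniformity.
    log_spacing += step * (swap_accept - np.mean(swap_accept))
    new_log_T = np.concatenate([[0.0], np.cumsum(np.exp(log_spacing))])
    return np.exp(-new_log_T)

# Typically called periodically during burn-in with a step size that decays to zero,
# so the ladder freezes and detailed balance holds for the final samples.
```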
ELM: an Algorithm to Estimate the Alpha Abundance from Low-resolution Spectra
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to the spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSS spectra and estimated α abundances with a precision around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and results show good agreement with previous studies (within 1σ). A comparison of the ELM with other widely used methods including support vector machine, Gaussian process regression, artificial neural networks, and linear least-squares regression shows that ELM is efficient with computational resources and more accurate than other methods.
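The core of an ELM is short enough to state directly: the hidden-layer weights are drawn at random and never trained, and only the output weights are fitted, in closed form, by regularised least squares. The sketch below is a generic ELM regressor with illustrative sizes, not the authors' trained model.

```python
# Minimal extreme learning machine (ELM) regressor.
import numpy as np

def elm_fit(X, y, n_hidden=500, ridge=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random hidden weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# X: continuum-normalised spectra (or Lick indices); y: the alpha-abundance labels.
```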
The Palomar kernel-phase experiment: testing kernel phase interferometry for ground-based astronomical observations
At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth’s turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.
Automated detection of solar eruptions
Observation of the solar atmosphere reveals a wide range of motions, from small-scale jets and spicules to global-scale coronal mass ejections (CMEs). Identifying and characterizing these motions are essential to advancing our understanding of the drivers of space weather. Both automated and visual identifications are currently used to identify CMEs. To date, eruptions near the solar surface, which may be precursors to CMEs, have been identified primarily by visual inspection. Here we report on Eruption Patrol (EP): a software module designed to automatically identify eruptions from data collected by the Atmospheric Imaging Assembly on the Solar Dynamics Observatory (SDO/AIA). We describe the method underlying the module and compare its results to previous identifications found in the Heliophysics Event Knowledgebase. EP identifies eruption events that are consistent with those found by human annotations, but in a significantly more consistent and quantitative manner. Eruptions are found to be distributed within 15 Mm of the solar surface. They possess peak speeds ranging from 4 to 100 km/s and display a power-law probability distribution over that range. These characteristics are consistent with previous observations of prominences.
An improved SPH scheme for cosmological simulations
We present an implementation of smoothed particle hydrodynamics (SPH) with improved accuracy for simulations of galaxies and the large-scale structure. In particular, we implement and test a vast majority of SPH improvement techniques in the developer version of GADGET-3. We use the Wendland kernel functions, a particle wake-up time-step limiting mechanism and a time-dependent scheme for artificial viscosity, which includes high-order gradient computation and a shear flow limiter. Additionally, we include a novel prescription for time-dependent artificial conduction, which corrects for gravitationally induced pressure gradients and improves the SPH performance in capturing the development of gas-dynamical instabilities. We extensively test our new implementation in a wide range of hydrodynamical standard tests including weak and strong shocks as well as shear flows, turbulent spectra, gas mixing, hydrostatic equilibria and self-gravitating gas clouds. We jointly employ all modifications; however, when necessary we study the performance of individual code modules. We approximate hydrodynamical states more accurately and with significantly less noise than standard GADGET-SPH. Furthermore, the new implementation promotes the mixing of entropy between different fluid phases, also within cosmological simulations. Finally, we study the performance of the hydrodynamical solver in the context of radiative galaxy formation and non-radiative galaxy cluster formation. We find galactic discs to be colder and more extended, and galaxy clusters to show entropy cores instead of steadily declining entropy profiles. In summary, we demonstrate that our improved SPH implementation overcomes most of the undesirable limitations of standard GADGET-SPH, thus becoming the core of an efficient code for large cosmological simulations.
Cosmicflows Constrained Local UniversE Simulations
This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h⁻¹ Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s⁻¹, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h⁻¹ Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h⁻¹ Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
Improving the convergence properties of the moving-mesh code AREPO
Accurate numerical solutions of the equations of hydrodynamics play an ever more important role in many fields of astrophysics. In this work, we reinvestigate the accuracy of the moving-mesh code AREPO and show how its convergence order can be improved for general problems. In particular, we clarify that for certain problems AREPO only reaches first-order convergence in its original formulation. This can be rectified by simple modifications we propose to the time integration scheme and the spatial gradient estimates, both of which improve the accuracy of the code. We demonstrate that the new implementation is indeed second-order accurate under the L1 norm, and in particular substantially improves conservation of angular momentum. Interestingly, whereas these improvements can significantly change the results of smooth test problems, we also find that cosmological simulations of galaxy formation are unaffected, demonstrating that the numerical errors eliminated by the new formulation do not impact these simulations. In contrast, simulations of binary stars followed over a large number of orbital times are strongly affected, as here it is particularly crucial to avoid a long-term build up of errors in angular momentum conservation.
The infrared luminosities of ~332 000 SDSS galaxies predicted from artificial neural networks and the Herschel Stripe 82 survey
The total infrared (IR) luminosity (LIR) can be used as a robust measure of a galaxy’s star formation rate (SFR), even in the presence of an active galactic nucleus (AGN), or when optical emission lines are weak. Unfortunately, existing all sky far-IR surveys, such as the Infrared Astronomical Satellite (IRAS) and AKARI, are relatively shallow and are biased towards the highest SFR galaxies and lowest redshifts. More sensitive surveys with the Herschel Space Observatory are limited to much smaller areas. In order to construct a large sample of LIR measurements for galaxies in the nearby Universe, we employ artificial neural networks (ANNs), using 1136 galaxies in the Herschel Stripe 82 sample as the training set. The networks are validated using two independent data sets (IRAS and AKARI) and demonstrated to predict the LIR with a scatter σ ~ 0.23 dex, and with no systematic offset. Importantly, the ANN performs well for both star-forming galaxies and those with an AGN. A public catalogue is presented with our LIR predictions which can be used to determine SFRs for 331 926 galaxies in the Sloan Digital Sky Survey (SDSS), including ~129 000 SFRs for AGN-dominated galaxies for which SDSS SFRs have large uncertainties.
An implicit scheme for solving the anisotropic diffusion of heat and cosmic rays in the RAMSES code
Astrophysical plasmas are subject to a tight connection between magnetic fields and the diffusion of particles, which leads to an anisotropic transport of energy. Under the fluid assumption, this effect can be reduced to an advection-diffusion equation, thereby augmenting the equations of magnetohydrodynamics. We introduce a new method for solving the anisotropic diffusion equation using an implicit finite-volume method with adaptive mesh refinement and adaptive time-stepping in the ramses code. We apply this numerical solver to the diffusion of cosmic ray energy and diffusion of heat carried by electrons, which couple to the ion temperature. We test this new implementation against several numerical experiments and apply it to a simple supernova explosion with a uniform magnetic field.
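To see why an implicit treatment is attractive, consider the simplest possible analogue: a backward-Euler step for one-dimensional, isotropic diffusion. The linear solve below remains stable for any time step, which is the property the implicit solver exploits; the actual anisotropic, adaptive-mesh scheme in RAMSES is of course far more involved.

```python
# Backward-Euler (implicit) step for u_t = kappa * u_xx with zero-flux boundaries.
import numpy as np

def implicit_diffusion_step(u, kappa, dx, dt):
    n = len(u)
    r = kappa * dt / dx**2
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1.0 + 2.0 * r)
    A[np.arange(n - 1), np.arange(1, n)] = -r   # upper diagonal
    A[np.arange(1, n), np.arange(n - 1)] = -r   # lower diagonal
    A[0, 0] = A[-1, -1] = 1.0 + r               # zero-flux (Neumann) boundary rows
    return np.linalg.solve(A, u)                # stable for arbitrarily large dt
```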
The EAGLE simulations of galaxy formation: the importance of the hydrodynamics scheme
We present results from a subset of simulations from the `Evolution and Assembly of GaLaxies and their Environments’ (EAGLE) suite in which the formulation of the hydrodynamics scheme is varied. We compare simulations that use the same subgrid models without recalibration of the parameters but employing the standard GADGET flavour of smoothed particle hydrodynamics (SPH) instead of the more recent state-of-the-art ANARCHY formulation of SPH that was used in the fiducial EAGLE runs. We find that the properties of most galaxies, including their masses and sizes, are not significantly affected by the details of the hydrodynamics solver. However, the star formation rates of the most massive objects are affected by the lack of phase mixing due to spurious surface tension in the simulation using standard SPH. This affects the efficiency with which AGN activity can quench star formation in these galaxies and it also leads to differences in the intragroup medium that affect the X-ray emission from these objects. The differences that can be attributed to the hydrodynamics solver are, however, likely to be less important at lower resolution. We also find that the use of a time-step limiter is important for achieving the feedback efficiency required to match observations of the low-mass end of the galaxy stellar mass function.
Newtonian CAFE: a new ideal MHD code to study the solar atmosphere
We present a new code designed to solve the equations of classical ideal magnetohydrodynamics (MHD) in three dimensions, subject to a constant gravitational field. The purpose of the code centres on the analysis of solar phenomena within the photosphere-corona region. We present 1D and 2D standard tests to demonstrate the quality of the numerical results obtained with our code. As solar tests we present the transverse oscillations of Alfvénic pulses in coronal loops using a 2.5D model, and as 3D tests we present the propagation of impulsively generated MHD-gravity waves and vortices in the solar atmosphere. The code is based on high-resolution shock-capturing methods and uses the Harten-Lax-van Leer-Einfeldt (HLLE) flux formula combined with Minmod, MC, and WENO5 reconstructors. The divergence-free magnetic field constraint is controlled using the Flux Constrained Transport method.
sick: The Spectroscopic Inference Crank
There exists an inordinate amount of spectral data in both public and private astronomical archives that remains severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal-to-noise ratio spectra of M67 stars reveals atomic diffusion processes on the order of 0.05 dex, previously only measurable with differential analysis techniques in high-resolution spectra. sick is easy to use, well-tested, and freely available online through GitHub under the MIT license.
Classification of large-scale stellar spectra based on the non-linearly assembling learning machine
An important problem with traditional classification methods is that they cannot deal with large-scale classification because of their very high time complexity. To solve this problem, and inspired by the idea of collaborative management, the non-linearly assembling learning machine (NALM) is proposed and applied to large-scale stellar spectral classification. In NALM, the large-scale data set is first divided into several subsets; a traditional classifier such as the support vector machine (SVM) then runs on each subset; finally, the classification results from the subsets are assembled to obtain the overall classification decision. In comparative experiments, we investigate the performance of NALM in stellar spectral subclass classification relative to SVM. We apply SVM and NALM to classify the four subclasses of K-type spectra, three subclasses of F-type spectra, and three subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS). The results show that NALM performs much better than SVM in terms of both classification accuracy and computation time.
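The divide-classify-assemble idea can be sketched as below: split the training set into subsets, fit one SVM per subset, and combine the per-subset predictions (here by a simple majority vote; the actual non-linear assembling rule of NALM is not reproduced).

```python
# Sketch of subset-wise SVM training followed by an assembled (majority-vote) decision.
import numpy as np
from sklearn.svm import SVC

def fit_subset_svms(X, y, n_subsets=10, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    return [SVC(kernel="rbf").fit(X[part], y[part])
            for part in np.array_split(idx, n_subsets)]

def predict_by_vote(models, X):
    votes = np.stack([m.predict(X) for m in models]).T    # (n_samples, n_subsets)
    winners = []
    for row in votes:
        labels, counts = np.unique(row, return_counts=True)
        winners.append(labels[np.argmax(counts)])
    return np.array(winners)
```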