Manuel Gnida wrote this article for SLAC National Accelerator Laboratory in Menlo Park, California.
Researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have for the first time shown that neural networks – a form of artificial intelligence – can accurately analyze the complex distortions in spacetime known as gravitational lenses, 10 million times faster than traditional methods. Postdoctoral fellow Laurence Perreault Levasseur, a co-author of a study published August 30, 2017, in Nature, said:
Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way and, in principle, on a cell phone’s computer chip.
The team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford, used neural networks to analyze images of strong gravitational lensing, where the image of a faraway galaxy is multiplied and distorted into rings and arcs by the gravity of a massive object, such as a galaxy cluster, that’s closer to us. The distortions provide important clues about how mass is distributed in space and how that distribution changes over time – properties linked to invisible dark matter that makes up 85 percent of all matter in the universe and to dark energy that’s accelerating the expansion of the universe.
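As a concrete example of the kind of quantity such an analysis recovers (a standard textbook relation, not a result of this study): for a point-like lens of mass M, the angular radius of the ring it produces – the Einstein radius – is

\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{ls}}{D_l D_s}}

where D_l, D_s and D_ls are the distances to the lens, to the source, and between lens and source. Measuring the size of the observed ring thus directly constrains the lensing mass.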
Until now, this type of analysis has been a tedious process that involves comparing actual images of lenses with a large number of computer simulations of mathematical lensing models. It can take weeks to months for a single lens.
But with neural networks, the researchers were able to do the same analysis in a few seconds, a result they demonstrated using both real images from NASA’s Hubble Space Telescope and simulated ones.
To teach the neural networks what to look for, the researchers showed them about half a million simulated images of gravitational lenses over the course of about a day. Once trained, the networks were able to analyze new lenses almost instantaneously, with a precision comparable to that of traditional analysis methods. In a separate paper, submitted to The Astrophysical Journal Letters, the team reports how these networks can also determine the uncertainties of their analyses.
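In broad strokes, what the article describes is supervised regression: simulated lens images go in, lens-model parameters come out. The sketch below illustrates that general recipe; it is not the authors’ code, and the architecture, image size, parameter count and the random stand-in data are assumptions made purely for illustration.

```python
# Minimal sketch of the training setup described above (hypothetical, not the
# authors' code): a convolutional network regresses lens-model parameters
# (e.g. Einstein radius, ellipticity, source position) from lens images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

N_PARAMS = 5  # number of lens-model parameters to predict (assumed)

class LensRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Stacked convolutional layers extract progressively more abstract
        # image features; a small fully connected head maps them to parameters.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, N_PARAMS),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Stand-in for the ~500,000 simulated 64x64 lens images and their true
# parameters; a real pipeline would generate these with a lensing simulator.
images = torch.randn(1024, 1, 64, 64)
params = torch.randn(1024, N_PARAMS)
loader = DataLoader(TensorDataset(images, params), batch_size=64, shuffle=True)

model = LensRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(3):  # in practice, training ran for about a day
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)  # penalize parameter-estimation error
        loss.backward()
        opt.step()
```

The separate Astrophysical Journal Letters paper on uncertainties is not detailed in this article; one common way to get error bars from a network like this is to add dropout layers and sample the network repeatedly at test time (Monte Carlo dropout), though whether that matches the team’s method is not stated here.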
In the video below, KIPAC researcher Phil Marshall explains the optical principles of gravitational lensing using a wineglass.
The study’s lead author is Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC. He said:
The neural networks we tested – three publicly available neural nets and one that we developed ourselves – were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy.
This goes far beyond recent applications of neural networks in astrophysics, which were limited to solving classification problems, such as determining whether an image shows a gravitational lens or not.
The ability to sift through large amounts of data and perform complex analyses very quickly and in a fully automated fashion could transform astrophysics in a way that is much needed for future sky surveys that will look deeper into the universe – and produce more data – than ever before.
The Large Synoptic Survey Telescope (LSST), for example, whose 3.2-gigapixel camera is currently under construction at SLAC, will provide unparalleled views of the universe and is expected to increase the number of known strong gravitational lenses from a few hundred today to tens of thousands. Perreault Levasseur commented:
We won’t have enough people to analyze all these data in a timely manner with the traditional methods. Neural networks will help us identify interesting objects and analyze them quickly. This will give us more time to ask the right questions about the universe.
Neural networks are inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information.
In the artificial version, the neurons are single computational units that are associated with the pixels of the image being analyzed. The neurons are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature, and so on. KIPAC staff scientist Phil Marshall, a co-author of the paper, said:
The amazing thing is that neural networks learn by themselves what features to look for. This is comparable to the way small children learn to recognize objects. You don’t tell them exactly what a dog is; you just show them pictures of dogs.
But in this case, Hezaveh said:
It’s as if they not only picked photos of dogs from a pile of photos, but also returned information about the dogs’ weight, height and age.
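To make Marshall’s layer-by-layer picture concrete, here is a toy fragment (again illustrative only, with all sizes assumed) that passes one image through a stack of layers and prints how the representation changes: the spatial extent shrinks while the number of learned feature channels grows, features found within features.

```python
# Toy illustration of hierarchical feature extraction (not from the paper):
# each layer consumes the previous layer's output, trading spatial detail
# for a richer set of learned feature channels.
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
)

x = torch.randn(1, 1, 64, 64)  # one grayscale 64x64 lens image (assumed size)
for layer in features:
    x = layer(x)  # hand this layer's output to the next one
    print(type(layer).__name__, tuple(x.shape))
# Shapes go from (1, 1, 64, 64) through (1, 16, 64, 64) and (1, 16, 32, 32)
# down to (1, 32, 16, 16): fewer pixels, more learned features.
```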
Although the KIPAC scientists ran their tests on the Sherlock high-performance computing cluster at the Stanford Research Computing Center, they could have done their computations on a laptop or even on a cell phone, they said. In fact, one of the neural networks they tested was designed to work on iPhones. KIPAC faculty member Roger Blandford, who was not a co-author on the paper, explained:
Neural nets have been applied to astrophysical problems in the past with mixed outcomes. But new algorithms combined with modern graphics processing units, or GPUs, can produce extremely fast and reliable results, as the gravitational lens problem tackled in this paper dramatically demonstrates. There is considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics and other fields.
Part of this work was funded by the DOE Office of Science.
Bottom line: Researchers at SLAC and Stanford have for the first time shown that neural networks – a form of artificial intelligence – can accurately analyze distortions in spacetime known as gravitational lenses, 10 million times faster than traditional methods. They say the ability to sift through large amounts of data and perform complex analyses quickly and in an automated fashion could transform astrophysics.
from EarthSky http://ift.tt/2gJ7h9H