Machine learning has increasingly accelerated human progress: it lets people finish repetitive tasks faster and can even surface systematic insights. Astronomers at the University of California, Berkeley, after modeling gravitational microlensing events with such a system, were surprised to find it turning up cases that neither established explanation could account for, leading them to propose a new unified theory of the phenomenon.
Gravitational lensing occurs when light from a distant star or other celestial object bends around a massive object lying between it and the observer, temporarily providing a brighter but distorted view of the distant source. From how the light bends (and from what we know about the distant source), we can also learn a great deal about the star, planet, or system that bent it.
For example, an instantaneous surge in brightness indicates that a planetary body is crossing the line of sight, and anomalous readings of this kind, despite an inherent ambiguity astronomers call a "degeneracy," have been used to find thousands of exoplanets.
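For a single point lens, the brightening described above has a standard closed form (the Paczyński curve): the magnification depends only on the separation u between source and lens, measured in units of the lens's Einstein radius. The article doesn't give the formula, so this is a minimal sketch of the standard result; the function name and sample values are ours, for illustration:

```python
import numpy as np

def point_lens_magnification(u):
    """Paczynski magnification of a point source by a point lens,
    where u is the source-lens separation in Einstein radii."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# The closer the alignment, the stronger the apparent brightening:
for u in (1.0, 0.5, 0.1):
    print(f"u = {u}: magnification = {point_lens_magnification(u):.2f}")
```

As u shrinks toward zero (near-perfect alignment), the magnification grows without bound, which is why even a small planet can produce a detectable spike in brightness.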
Because of the limits on how these events can be observed, the events and objects involved are hard to quantify beyond a few basic properties such as mass. A degeneracy is generally thought to take one of two forms: the distant light passed close to either the star in the system or one of its planets. Ambiguous cases are usually resolved with other observational data, for instance if we know by other means that the planet is too small to cause the distortion we see.
Keming Zhang, a doctoral student at UC Berkeley, has been studying ways to rapidly analyze and classify such lensing events, since they will turn up in huge numbers as we survey the sky more frequently and in greater detail. He and his colleagues trained a machine learning model on data from known gravitational microlensing events, then turned it loose on others that were not so easy to quantify.
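The article doesn't describe the team's actual architecture, so purely as an illustrative stand-in, here is a toy version of the general idea: simulate light curves with and without a lensing bump, then apply a trivially simple detector. Every detail here (the noise level, parameter ranges, the 1.2 threshold, all names) is our own assumption, not the team's method:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(-50, 50, 200)  # observation times in days

def magnification(t, t0, tE, u0):
    # Paczynski point-lens magnification along the source's track,
    # with peak time t0, timescale tE, and impact parameter u0
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def simulate(has_event):
    """Toy light curve: flat baseline plus noise, optionally with a lensing bump."""
    flux = 1.0 + rng.normal(0.0, 0.02, t.size)
    if has_event:
        flux += magnification(t, rng.uniform(-10, 10), rng.uniform(5, 20),
                              rng.uniform(0.1, 0.5)) - 1.0
    return flux

labels = np.arange(200) % 2 == 0              # alternate event / no-event
curves = np.array([simulate(h) for h in labels])

# Stand-in "model": flag any curve whose peak rises well above baseline
predictions = curves.max(axis=1) > 1.2
accuracy = (predictions == labels).mean()
print(f"toy detector accuracy: {accuracy:.2f}")
```

A real analysis would fit a trained neural network to full light curves and infer physical parameters, not just a yes/no flag; the point here is only the shape of the pipeline: simulate, label, fit, classify.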
The results were unexpected: besides neatly determining when an observed event belonged to one of the two main degeneracy types, the model also found many events that fit neither.
"The two previous theories of degeneracy deal with cases where the background star appears to pass close to the foreground star or the foreground planet," Zhang said in a Berkeley press release. "The AI algorithm showed us hundreds of examples not only from these two cases, but also from situations where the star doesn't pass close to either the star or the planet and cannot be explained by either previous theory."
Now, this could have been due to a poorly tuned model, or a model simply lacking confidence in its own calculations. But Zhang seemed convinced the AI had noticed something human observers were systematically overlooking. The team ultimately put forward a new "unified" theory of how degeneracy shows up in these observations, one in which the two known theories are simply the most common cases.
Reviewing more than 20 recent papers describing microlensing observations, they found that astronomers had been mistakenly classifying what they saw as one type or the other, and that the new theory fit the existing data better than either.
"People were seeing these microlensing events that actually exhibited this new degeneracy, but they just didn't realize it. It was really just machine learning looking at thousands of events, where it became impossible to miss," said Scott Gaudi, a professor of astronomy at Ohio State University and co-author of the paper.
It should be noted that the AI did not formulate or propose the new theory itself; that was entirely human intellect. But without the AI's systematic, confident calculations, the simplified, incorrect theory would likely have persisted for years to come. Just as people learned to trust calculators and later computers, we are learning to trust some AI models to output an interesting truth free of preconceptions and assumptions, that is, so long as we have not encoded our own preconceptions and assumptions into them.
A paper published in the journal Nature Astronomy describes the new theory and the process that led to it. It may not be news to astronomers (a preprint appeared last year), but fans of machine learning and of ordinary science alike may appreciate this interesting development.