How James Webb's exoplanet measurements may have been interpreted wrong
NASA's James Webb Space Telescope has been bringing us images of the universe with unprecedented clarity. Astronomers have also been harnessing the telescope's light-parsing precision to decode the atmospheres of nearby worlds.
Not as accurate as previously assumed
However, a new MIT study suggests that the method for interpreting the telescope's data may not be as accurate as previously believed, according to a press release by the institution published Thursday.
The researchers warn that the properties of planetary atmospheres derived from the telescope's data, such as temperature, pressure, and elemental composition, could be off by an order of magnitude. Their findings appear in a study in Nature Astronomy.
"There is a scientifically significant difference between a compound like water being present at 5 percent versus 25 percent, which current models cannot differentiate," said study co-leader Julien de Wit, assistant professor in MIT's Department of Earth, Atmospheric, and Planetary Sciences (EAPS).
"Currently, the model we use to decrypt spectral information is not up to par with the precision and quality of data we have from the James Webb telescope," adds EAPS graduate student Prajwal Niraula. "We need to up our game and tackle together the opacity problem."
The errors all trace back to opacity models, the tools that describe how light interacts with matter as a function of the matter's properties. The researchers argue that these models need significant retuning to match the precision of Webb's data.
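To make the idea concrete, here is a minimal Python sketch of the kind of relationship an opacity model encodes: the Beer-Lambert law, which relates a molecule's absorption cross-section to how much starlight an atmospheric layer lets through. Every number here (the cross-section, density, and path length) is invented for illustration; real opacities come from laboratory measurements and molecular line lists, not from this toy.

```python
import numpy as np

# Toy opacity calculation via the Beer-Lambert law:
#   transmission = exp(-sigma(wavelength) * n * L)
# where sigma is the absorption cross-section, n the number density of
# the absorbing molecule, and L the path length through the atmosphere.

wavelengths = np.linspace(1.0, 5.0, 500)  # microns, roughly Webb's near-infrared range

# Invented cross-section: a flat continuum plus a Gaussian absorption band
# near 2.7 microns (loosely evoking a water feature). Real cross-sections
# are measured in the lab or computed from quantum mechanics.
sigma = 1e-26 + 5e-25 * np.exp(-((wavelengths - 2.7) / 0.2) ** 2)  # cm^2 per molecule

n = 1e18  # number density of the absorber, molecules per cm^3 (assumed)
L = 1e7   # path length through the atmosphere, cm (assumed)

transmission = np.exp(-sigma * n * L)

# The dip in transmission at the band is what the telescope reads as the
# molecule's signature; its depth encodes abundance, pressure, and temperature.
i = np.argmin(np.abs(wavelengths - 2.7))
print(f"Transmission in the band: {transmission[i]:.3f}; continuum: {transmission[0]:.3f}")
```

If the assumed cross-section is even slightly wrong, every property inferred from the depth of that dip inherits the error, which is why the models' accuracy matters so much.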

"So far, this Rosetta Stone has been doing OK," de Wit says. "But now that we're going to the next level with Webb's precision, our translation process will prevent us from catching important subtleties, such as those making the difference between a planet being habitable or not."
Putting models to the test
The researchers put the most commonly used opacity model to the test to see what atmospheric properties the model would derive if it were tweaked to assume certain limitations in our understanding of how light and matter interact.
Based on their analysis, the team concluded that existing opacity models applied to Webb's data hit an "accuracy wall": they were not sensitive enough to tell whether a planet has an atmospheric temperature of 300 Kelvin or 600 Kelvin, or whether a certain gas makes up 5 percent or 25 percent of an atmospheric layer.
"That difference matters in order for us to constrain planetary formation mechanisms and reliably identify biosignatures," said Niraula.
More troubling still, the team found that every perturbed model produced a "good fit" with the data: each generated a light spectrum from its assumed chemical composition that was close enough to the original spectrum to pass as correct.
"We found that there are enough parameters to tweak, even with a wrong model, to still get a good fit, meaning you wouldn't know that your model is wrong and what it's telling you is wrong," de Wit explained.
The researchers argued that there is a need for more laboratory measurements and theoretical calculations to refine the models' assumptions of how light and various molecules interact.
"There is so much that could be done if we knew perfectly how light and matter interact," Niraula says. "We know that well enough around the Earth's conditions, but as soon as we move to different types of atmospheres, things change, and that's a lot of data, with increasing quality, that we risk misinterpreting."