One of the most exciting discoveries in astronomy in recent years was the measurement of an acceleration term in the universe's rate of expansion. Announced by both the Supernova Cosmology Project at the Lawrence Berkeley National Laboratory and the High-z Supernova Search Team, these results at once confirmed one another and revolutionized how astronomers view the universe. This discovery meant, quite simply, that our universe will expand forever, tearing itself apart at ever increasing rates. Someday, the expansion of space will carry everything we are not gravitationally bound to so far away, so fast, that the light will get red-shifted beyond all easy (and perhaps even all possible) reach. (image credit: NASA, ESA, CXC, JPL-Caltech, J. Hester and A. Loll (Arizona State Univ.), R. Gehrz (Univ. Minn.), and STScI)
Needless to say, not everyone embraced the change.
These results rested solely on our understanding of Type Ia supernovae, which is an uncomfortable place to be (but that understanding seems to be solid, despite a misleading press release earlier today).
There has always been an active question: Are the results real, or is something going on with the supernovae that we don't understand that makes it look like the acceleration is there? Astronomers have taken two tacks in dealing with this problem: 1) look for other lines of evidence that confirm the supernova result, and 2) try to understand supernovae better.
The new evidence hasn't been easy to find, but since the supernova results were announced in the late 1990s, other evidence has turned up in the cosmic microwave background, in the large-scale structure of the universe, and in gravitational lenses. So… even if we don't fully understand supernovae, everyone (well, almost everyone at least) grudgingly acknowledges the universe is accelerating apart.
But the degree to which it is accelerating apart, and at what period of time, can (so far) only be easily measured with supernovae. This means we really need to understand supernovae.
Now, at the most simplistic level (where accuracy is given up for clarity), Type Ia supernovae are formed when a white dwarf star gravitationally consumes too much mass from a companion star and collapses and explodes in one violent step. Since all white dwarfs become explosive (to first order) at the same mass, they all have the same amount of material to contribute to the explosion, and they all create an explosion of the same size. Thanks to the luminosity-distance relationship, we can measure how bright a supernova appears with a telescope, compare that to how luminous it actually is, and calculate how far away it is. (This is the same thing you do when you estimate the distance to a motorcycle based on how bright its headlight appears.) Once you know how far away a supernova is, you can measure its recession rate with a spectrograph. (These are actually very hard things to do technically, but that's what graduate students are for.)
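The bookkeeping above can be sketched in a few lines using the distance modulus, m − M = 5 log10(d / 10 pc). This is an illustrative sketch only, not a cosmology-grade calculation (no K-corrections, no dust extinction); the peak absolute magnitude M ≈ −19.3 is the usual first-order figure for Type Ia supernovae:

```python
def luminosity_distance_mpc(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5*log10(d / 10 pc)."""
    d_parsecs = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_parsecs / 1e6  # convert parsecs to megaparsecs

# To first order, every Type Ia peaks near M ~ -19.3.
# A supernova that peaks at apparent magnitude m = 24 is then roughly
# 4600 Mpc away:
d = luminosity_distance_mpc(24.0, -19.3)
```

The motorcycle-headlight analogy is exactly this formula: a known intrinsic luminosity plus a measured apparent brightness gives you the distance.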
At the next level of complexity, we know that Type Ia supernovae actually show some variation. Some take longer to fade away and give off more energy. Others fade faster and are fainter. Now, since we know how supernovae's total light output changes as a function of their light curves' shapes (with error bars), we can correct for this second-order effect. No big deal. This is like knowing that when you put giant wheels on your car, you have to recalibrate your odometer to compensate for the car going farther with every turn of the tires. Like I said, no big deal; this is just something astronomers have to be aware of. We even think we understand why this is happening (see link). Okay, nothing to worry about, move along.
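The brighter-slower correction can be sketched as a stretch-style standardization, in the spirit of the light-curve fitters the surveys use. The coefficient alpha below is a plausible placeholder, not a fitted value, and real pipelines also fit a color term; this is only a sketch of the idea:

```python
def standardized_mag(observed_peak_mag, stretch, alpha=1.5):
    """Standardize a Type Ia peak magnitude using its light-curve width.

    stretch > 1: wide, slow-fading light curve -> intrinsically brighter,
    so we push the magnitude fainter (numerically larger) to match the
    s = 1 template. alpha is a fit coefficient; 1.5 is illustrative only.
    """
    return observed_peak_mag + alpha * (stretch - 1.0)

# A slow-fader (s = 1.1) observed at m = 24.0 standardizes to 24.15;
# a fast-fader (s = 0.9) observed at the same m standardizes to 23.85.
```

This is the odometer recalibration in code: once the light-curve width is measured, the family of explosions collapses back onto one standard candle (with error bars).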
But there is still another problem. It is a fact that there are more metals (defined by astronomers as anything heavier than helium) in the universe today than in the past. There are suns creating carbon, iron, and everything in between all across the cosmos, and every exploding star releases a variety of everything and anything nuclear reactions can create. This means the stars forming today are forming out of materials that were just a twinkle in a young giant star's eye some time in the past. The first stars were almost pure hydrogen and helium. Those stars have very different physics from today's stars. Metals moderate the formation of stars, making stars form smaller and burn in a more controlled way. When white dwarf stars first started forming, they had fewer metals than modern white dwarfs, and that could have affected how their supernovae explode, causing supernovae to vary as a function of time in ways that we don't know about.
Earlier today a press release crossed my inbox that said, “distant supernovae were an average of 12 per cent brighter. The distant supernovae were brighter because they were younger.” On first read, it would seem to imply that supernovae have gotten brighter as a function of going back in time. This would imply that supernova luminosities are a function of both light curve shape and when the supernovae exploded. Eek! Things got much more complicated! But the press release goes on to say (in the words of my former classmate at U-Texas and now U-Toronto Postdoctoral Fellow, Andy Howell), “We found that the early-universe type 1a supernovae had a higher wattage, but as long as we can figure out the wattage, we should be able to correct for that. Learning more about Dark Energy is going to take very precise corrections though, and we aren’t sure how well we can do that yet.” This seems to imply that yes, things are getting complex, but we can indeed cope; there will be error bars.
Not liking this new reality, I went and found the actual paper (subscription required to get beyond the abstract, but please read the abstract). Ummm, if I’m reading this right, the average luminosity of supernovae is changing because the ratio of brighter ones to fainter ones is changing, but the Type Ia supernovae themselves are still totally predictable within error bars. The faint ones act the same (but there are fewer of them), the bright ones act the same (but there are more of them), and the average changes while the physics (within error bars) is respectably well (within error bars) understood.
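That reading, two well-behaved subpopulations whose mix changes with redshift, is simple arithmetic. Here is a toy sketch; the luminosities and fractions are made up for illustration, not taken from the paper:

```python
def mean_luminosity(frac_bright, lum_bright=1.12, lum_faint=0.95):
    """Mean luminosity of a population mixing two fixed subtypes.

    lum_bright and lum_faint never change: each subtype stays a
    predictable candle. Only the mixing fraction evolves.
    """
    return frac_bright * lum_bright + (1.0 - frac_bright) * lum_faint

local_mean = mean_luminosity(0.5)    # e.g. half bright nearby
distant_mean = mean_luminosity(0.8)  # e.g. more bright ones early on
# distant_mean > local_mean: the *average* brightens with redshift
# even though neither subtype has changed (within error bars).
```

So the headline “distant supernovae were brighter” can be true while every individual supernova remains standardizable.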
No, Type Ia supernovae aren’t totally standard candles. They don’t all give off the exact same amount of light, and they don’t all explode in the same way. They are a family of candles, and we know how bright they all are, each in their own unique way (with error bars).
Thanks for digging into that article for us; I was afraid I was going to have to grab a subscription, or at least find it at the campus library. I also heartily approve of your copious use of “within error bars” (within error bars ;)).
For those without access to ApJ, a PDF can be obtained from arXiv: http://arxiv.org/PS_cache/astro-ph/pdf/0701/0701912v2.pdf
Realizing that the distant population of ‘Type Ia’ supernovae is not ‘in family’ with local events is just the first step. The next step is to go back to 1994, when it was assumed (without spectral detail) that the distant population was identical to the brightest local events. We have to take the current crop of hypernovae back to 1994 with us, and determine how similar the distant events observed in 1994 were to these brighter, longer-burning novae we now know exist.
The real crux of the matter is this: if Riess & Co. were comparing local supernovae with distant hypernovae, the proof of time dilation in the distant sample is lost; it is in fact nullified, at least as an astrophysical proof of a relativistic concept.
The next step is to look very hard at the Goldhaber & Perlmutter papers between 1999 and 2002. The ‘stretch factor’, the single parameter used to scale the light curve width and magnitude, is clearly no longer applicable to this complex family of events. Why, then, didn’t the error analysis reveal a bias, this increase in absolute magnitude with increasing distance which is now becoming quite evident? The evidence was hidden by this same parametric assumption: that the light curve widths were time dilated, and in making the ‘correction’, the magnitude of more distant events has been scaled down by shortening the light curves.
It may take another generation of telescopes to fully test the hypothesis contained in the last three paragraphs, but this is the only reasonable interpretation I can find for why Goldhaber’s supernovae appeared to behave so uniformly, yet today they do not.
There is ample reason to question whether the Universe is accelerating. It seems to violate the First Law of Thermodynamics.
Another paper, another challenge:
http://arxiv.org/PS_cache/arxiv/pdf/…710.3896v1.pdf
Ellis et al.:
“We analyze the mean rest-frame ultraviolet (UV) spectrum of Type Ia Supernovae (SNe Ia) and its dispersion using high signal-to-noise Keck-I/LRIS-B spectroscopy for a sample of 36 events at intermediate redshift (z=0.5) discovered by the Canada-France-Hawaii Telescope Supernova Legacy Survey (SNLS)…
Although the mean SN Ia spectrum has not evolved significantly over the past 40% of cosmic history, precise evolutionary constraints are limited by the absence of a comparable sample of high quality local spectra. Within the high-redshift sample, we discover significant UV spectral variations and exclude dust extinction as the primary cause by examining trends with the optical SN color. Although progenitor metallicity may drive some of these trends, the variations we see are much larger than predicted in recent models and do not follow expected patterns.”
This finding makes a certain sense from a physical point of view, and it can be explained by Aether Wave Theory in the following way:
Our universe’s generation is formed by the interior of a giant dense star, which is collapsing gradually, thus making itself more dense. We are formed from the dense material of such a star (the so-called Aether) as standing waves, which gradually move more and more slowly in such a dense environment, so that our Universe expands uniformly in all directions from our perspective. This is considerably easier to imagine than the classical dotted-balloon example, isn’t it?
But this is not all. When we observe remote objects, we see them in a less dense environment than those in our proximity. Because observable matter is more dense than the vacuum, it collapses too, but more slowly, being “precollapsed” already. So there is an apparent difference between vacuum density and matter density here and at the locations of remote objects. This can be interpreted as a gradual increase of the gravitational constant (and other physical constants) and as a dilatation of matter with respect to the vacuum.
There are observable consequences in the case where we can observe the movement of bodies in a gravitational field at a distance. Because the gravitational force decreases, we see remote objects as interacting more strongly than the closer ones. Such an effect affects the shape of large galaxies, for example, and the MOND theory takes it into account. This theory suggests that gravitational force decreases with distance more slowly than Newtonian theory predicts, and this effect keeps giant galaxies more compact, so all the stars inside them rotate as a single body.
The decreasing intensity of standard-candle supernovae with distance is related to such phenomena too. In a denser environment, the intensity of supernova explosions gets more and more subtle, which effectively means that older supernovae appear stronger and closer than they really are.
Surprisingly enough, albeit subtle, the expansion of matter with respect to the vacuum is directly observable, too. By the latest optical measurements, the iridium prototype of the meter expands slightly (http://www.physorg.com/news64.html). This means all distances will elongate slightly in the future, which can be interpreted as a dilatation of time, which gradually disappears from our universe.