If you’ve ever heard of Albert Einstein, chances are you know at least one equation that he himself is famous for deriving: E = mc^2. This simple equation details a relationship between the energy (E) of a system, its rest mass (m), and a fundamental constant that relates the two, the speed of light squared (c^2). Despite the fact that this equation is one of the simplest ones you can write down, what it means is dramatic and profound.
At a fundamental level, there is an equivalence between the mass of an object and the inherent energy stored within it. Mass is only one form of energy among many, such as electrical, thermal, or chemical energy, and therefore energy can be transformed from any of these forms into mass, and vice versa. The profound implications of Einstein’s equation touch us in many ways in our day-to-day lives. Here are the five lessons everyone should learn.
This iron-nickel meteorite, examined and photographed by Opportunity, represents the first such object ever found on the Martian surface. If you were to take this object and chop it up into its individual, constituent protons, neutrons, and electrons, you would find that the whole is actually less massive than the sum of its parts.
NASA / JPL / Cornell
1.) Mass is not conserved. When you think about the things that change versus the things that stay the same in this world, mass is one of those quantities we typically hold constant without thinking about it too much. If you take a block of iron and chop it up into a bunch of iron atoms, you fully expect that the whole equals the sum of its parts. That expectation seems self-evident, but it holds only if mass is conserved.
In the real world, though, according to Einstein, mass is not conserved at all. If you were to take an iron atom, containing 26 protons, 30 neutrons, and 26 electrons, and were to place it on a scale, you’d find some disturbing facts.
An iron atom with all of its electrons weighs slightly less than an iron nucleus and its electrons do separately,
An iron nucleus weighs significantly less than 26 protons and 30 neutrons do separately.
And if you try to fuse an iron nucleus into a heavier one, it will require you to input more energy than you get out.
Iron-56 may be the most tightly-bound nucleus, with the greatest amount of binding energy per nucleon. In order to get there, though, you have to build up element-by-element. Deuterium, the first step up from free protons, has an extremely low binding energy, and thus is easily destroyed by relatively modest-energy collisions.
Each one of these facts is true because mass is just another form of energy. When you create something that’s more energetically stable than the raw ingredients that it’s made from, the process of creation must release enough energy to conserve the total amount of energy in the system.
When you bind an electron to an atom or molecule, or allow those electrons to transition to the lowest-energy state, those binding transitions must give off energy, and that energy must come from somewhere: the mass of the combined ingredients. This is even more severe for nuclear transitions than it is for atomic ones, with the former class typically being about 1000 times more energetic than the latter class.
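To put numbers on this, here’s a back-of-the-envelope check (a sketch in Python, using rounded standard particle masses) of just how much lighter an iron-56 nucleus is than the free protons and neutrons that make it up:

```python
# Mass deficit of the iron-56 nucleus, using rounded standard values
# (all masses in MeV/c^2; electron binding energies are neglected).
M_PROTON = 938.272
M_NEUTRON = 939.565
M_ELECTRON = 0.511
U_TO_MEV = 931.494                # 1 atomic mass unit in MeV/c^2
FE56_ATOMIC_MASS_U = 55.9349375   # mass of a neutral Fe-56 atom

parts = 26 * M_PROTON + 30 * M_NEUTRON                       # free nucleons
nucleus = FE56_ATOMIC_MASS_U * U_TO_MEV - 26 * M_ELECTRON    # strip off the electrons

binding_energy = parts - nucleus      # the "missing" mass, in MeV
per_nucleon = binding_energy / 56

print(f"Total binding energy: {binding_energy:.1f} MeV")
print(f"Per nucleon:          {per_nucleon:.2f} MeV")
```

The result, roughly 8.8 MeV per nucleon, is why iron-56 sits at the peak of the binding-energy curve: the nucleus really is about half a percent lighter than the sum of its parts.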
In fact, leveraging the consequences of E = mc^2 is how we get the second valuable lesson out of it.
Countless scientific tests of Einstein’s general theory of relativity have been performed, subjecting the idea to some of the most stringent constraints ever obtained by humanity. Einstein’s first solution was for the weak-field limit around a single mass, like the Sun; he applied these results to our Solar System with dramatic success. We can view this orbit as Earth (or any planet) being in free-fall around the Sun, traveling in a straight-line path in its own frame of reference. All masses and all sources of energy contribute to the curvature of spacetime.
LIGO scientific collaboration / T. Pyle / Caltech / MIT
2.) Energy is conserved, but only if you account for changing masses. Imagine the Earth as it orbits the Sun. Our planet orbits quickly: with an average speed of around 30 km/s, the speed required to keep it in a stable, elliptical orbit at an average distance of 150,000,000 km (93 million miles) from the Sun. If you put the Earth and Sun both on a scale, independently and individually, you would find that they weighed more than the Earth-Sun system as it is right now.
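Both of those claims are easy to sanity-check. The sketch below (Python, with rounded values for the constants) recovers the ~30 km/s orbital speed and estimates how much lighter the bound Earth-Sun system is than its separated parts:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
R = 1.496e11         # average Earth-Sun distance, m
C = 2.998e8          # speed of light, m/s

# Circular-orbit speed: v = sqrt(GM/r)
v = math.sqrt(G * M_SUN / R)        # ~3.0e4 m/s, i.e. ~30 km/s

# The total orbital energy of a (nearly) circular orbit is -GMm/(2r);
# the bound system is lighter than its parts by |E|/c^2.
binding_energy = G * M_SUN * M_EARTH / (2 * R)   # joules
mass_deficit = binding_energy / C**2             # kg

print(f"Orbital speed: {v / 1e3:.1f} km/s")
print(f"Mass deficit:  {mass_deficit:.2e} kg")
```

The deficit comes out to roughly 3 × 10^16 kg: utterly negligible compared to the Earth or Sun themselves, but very real, and exactly what a sufficiently sensitive scale would register.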
When you have any attractive force that binds two objects together — whether that’s the electric force holding an electron in orbit around a nucleus, the nuclear force holding protons and neutrons together, or the gravitational force holding a planet to a star — the whole is less massive than the individual parts. And the more tightly you bind these objects together, the more energy the binding process emits, and the lower the rest mass of the end product.
Whether in an atom, molecule, or ion, the transitions of electrons from a higher energy level to a lower energy level will result in the emission of radiation at a very particular wavelength. This produces the phenomenon we see as emission lines, and is responsible for the variety of colors we see in a fireworks display. Even atomic transitions such as this must conserve energy, and that means losing mass in the correct proportion to account for the energy of the produced photon.
When you bring a free electron in from a large distance away to bind to a nucleus, it’s a lot like bringing in a free-falling comet from the outer reaches of the Solar System to bind to the Sun: unless it loses energy, it will come in, make a close approach, and slingshot back out again.
However, if there’s some other way for the system to shed energy, things can become more tightly bound. Electrons do bind to nuclei, but only if they emit photons in the process. Comets can enter stable, periodic orbits, but only if another planet steals some of their kinetic energy. And protons and neutrons can bind together in large numbers, producing a much lighter nucleus and emitting high-energy photons (and other particles) in the process. That last scenario is at the heart of perhaps the most valuable and surprising lesson of all.
A composite of 25 images of the Sun, showing solar outburst/activity over a 365 day period. Without the right amount of nuclear fusion, which is made possible through quantum mechanics, none of what we recognize as life on Earth would be possible. Over its history, approximately 0.03% of the mass of the Sun, or around the mass of Saturn, has been converted into energy via E = mc^2.
NASA / Solar Dynamics Observatory / Atmospheric Imaging Assembly / S. Wiessinger; post-processing by E. Siegel
3.) Einstein’s E = mc^2 is responsible for why the Sun (like any star) shines. Inside the core of our Sun, where temperatures exceed a critical value of 4,000,000 K (and climb to nearly four times that), the nuclear reactions powering our star take place. Protons are fused together under such extreme conditions that they can form a deuteron — a bound state of a proton and neutron — while emitting a positron and a neutrino to conserve energy.
Additional protons and deuterons can then bombard the newly formed particle, fusing these nuclei in a chain reaction until helium-4, with two protons and two neutrons, is created. This process occurs naturally in all main-sequence stars, and is where the Sun gets its energy from.
The proton-proton chain is responsible for producing the vast majority of the Sun’s power. Fusing two He-3 nuclei into He-4 is perhaps the greatest hope for terrestrial nuclear fusion, and a clean, abundant, controllable energy source, but all of these reactions occur naturally in the Sun.
Borb / Wikimedia Commons
If you were to put this end product of helium-4 on a scale and compare it to the four protons that were used up to create it, you’d find that it was about 0.7% lighter: helium-4 has only 99.3% of the mass of four protons. Even though two of these protons have converted into neutrons, the binding energy is so strong that approximately 28 MeV of energy gets emitted in the process of forming each helium-4 nucleus.
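You can verify both the 0.7% figure and the ~28 MeV figure directly from published particle masses. A minimal sketch in Python (masses in MeV/c^2, rounded):

```python
M_PROTON = 938.272        # MeV/c^2
M_NEUTRON = 939.565
M_ELECTRON = 0.511
U_TO_MEV = 931.494        # 1 atomic mass unit in MeV/c^2
HE4_ATOMIC_MASS_U = 4.002602

# Strip the two electrons off a neutral He-4 atom to get the bare nucleus.
he4_nucleus = HE4_ATOMIC_MASS_U * U_TO_MEV - 2 * M_ELECTRON

# Mass deficit relative to the four protons that went in:
four_protons = 4 * M_PROTON
deficit_fraction = (four_protons - he4_nucleus) / four_protons   # ~0.7%

# Binding energy relative to two free protons plus two free neutrons:
binding_energy = 2 * M_PROTON + 2 * M_NEUTRON - he4_nucleus      # ~28.3 MeV

print(f"He-4 is lighter than 4 protons by {deficit_fraction:.2%}")
print(f"Binding energy: {binding_energy:.1f} MeV")
```

Both numbers drop straight out of the mass tables: about 0.69% of the input mass disappears, and the helium-4 nucleus is bound by about 28.3 MeV.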
In order to produce the energy we see it produce, the Sun needs to fuse 4 × 10^38 protons into helium-4 every second. The result of that fusion is that 596 million tons of helium-4 are produced with each second that passes, while 4 million tons of mass are converted into pure energy via E = mc^2. Over the lifetime of the entire Sun, it’s lost approximately the mass of the planet Saturn due to the nuclear reactions in its core.
A nuclear-powered rocket engine, preparing for testing in 1967. This rocket is powered by Mass/Energy conversion, and is underpinned by the famous equation E=mc^2.
4.) Converting mass into energy is the most energy-efficient process in the Universe. What could be better than 100% efficiency? Absolutely nothing; 100% is the greatest energy gain you could ever hope for out of a reaction.
Well, if you look at the equation E = mc2, it tells you that you can convert mass into pure energy, and tells you how much energy you’ll get out. For every 1 kilogram of mass that you convert, you get a whopping 9 × 1016 joules of energy out: the equivalent of 21 Megatons of TNT. Whenever we experience a radioactive decay, a fission or fusion reaction, or an annihilation event between matter and antimatter, the mass of the reactants is larger than the mass of the products; the difference is how much energy is released.
Nuclear weapon test Mike (yield 10.4 Mt) on Enewetak Atoll. The test was part of the Operation Ivy. Mike was the first hydrogen bomb ever tested. A release of this much energy corresponds to approximately 500 grams of matter being converted into pure energy: an astonishingly large explosion for such a tiny amount of mass.
National Nuclear Security Administration / Nevada Site Office
In all cases, the energy that comes out — in all its combined forms — is exactly equal to the energy equivalent of the mass loss between products and reactants. The ultimate example is the case of matter-antimatter annihilation, where a particle and its antiparticle meet and produce two photons of the exact rest energy of the two particles.
Take an electron and a positron and let them annihilate, and you’ll always get two photons of exactly 511 keV of energy out. It’s no coincidence that the rest mass of electrons and positrons are each 511 keV/c2: the same value, just accounting for the conversion of mass into energy by a factor of c2. Einstein’s most famous equation teaches us that any particle-antiparticle annihilation has the potential to be the ultimate energy source: a method to convert the entirety of the mass of your fuel into pure, useful energy.
The top quark is the most massive particle known in the Standard Model, and is also the shortest-lived of all the known particles, with a mean lifetime of 5 × 10^-25 s. When we produce it in particle accelerators by having enough free energy available to create them via E = mc^2, we produce top-antitop pairs, but they do not live for long enough to form a bound state. They exist only as free quarks, and then decay.
Raeky / Wikimedia Commons
5.) You can use energy to create matter — massive particles — out of nothing but pure energy. This is perhaps the most profound lesson of all. If you took two billiard balls and smashed one into the other, you’d always expect the results to have something in common: they’d always result in two and only two billiard balls.
With particles, though, the story is different. If you take two electrons and smash them together, you’ll get two electrons out, but with enough energy, you might also get a new matter-antimatter pair of particles out, too. In other words, you will have created two new, massive particles where none existed previously: a matter particle (electron, muon, proton, etc.) and an antimatter particle (positron, antimuon, antiproton, etc.).
Whenever two particles collide at high enough energies, they have the opportunity to produce additional particle-antiparticle pairs, or new particles as the laws of quantum physics allow. Einstein’s E = mc^2 is indiscriminate this way. In the early Universe, enormous numbers of neutrinos and antineutrinos are produced this way in the first fraction-of-a-second of the Universe, but they neither decay nor are efficient at annihilating away.
E. Siegel / Beyond The Galaxy
This is how particle accelerators successfully create the new particles they’re searching for: by providing enough energy to create those particles (and, if necessary, their antiparticle counterparts) from a rearrangement of Einstein’s most famous equation. Given enough free energy, you can create any particle(s) with mass m, so long as there’s enough energy to satisfy the requirement that there’s enough available energy to make that particle via m = E/c2. If you satisfy all the quantum rules and have enough energy to get there, you have no choice but to create new particles.
The production of matter/antimatter pairs (left) from pure energy is a completely reversible reaction (right), with matter/antimatter annihilating back to pure energy. When a photon is created and then destroyed, it experiences those events simultaneously, while being incapable of experiencing anything else at all.
Dmitri Pogosyan / University of Alberta
Einstein’s E = mc2 is a triumph for the simple rules of fundamental physics. Mass isn’t a fundamental quantity, but energy is, and mass is just one possible form of energy. Mass can be converted into energy and back again, and underlies everything from nuclear power to particle accelerators to atoms to the Solar System. So long as the laws of physics are what they are, it couldn’t be any other way. As Einstein himself said:
It followed from the special theory of relativity that mass and energy are both but different manifestations of the same thing — a somewhat unfamiliar conception for the average mind.
More than 60 years after Einstein’s death, it’s long past time to bring his famous equation down to Earth. The laws of nature aren’t just for physicists; they’re for every curious person on Earth to experience, appreciate, and enjoy.
Starts With A Bang is dedicated to exploring the story of what we know about the Universe as well as how we know it, with a focus on physics, astronomy, and the scientific story that the Universe tells us about itself. Written by Ph.D. scientists and edited/created by astrophysicist Ethan Siegel, our goal is to share the joy, wonder and awe of scientific discovery.
The phenomenon known as “tunneling” is one of the best-known predictions of quantum physics, because it so dramatically confounds our classical intuition for how objects ought to behave. If you create a narrow region of space that a particle would have to have a relatively high energy to enter, classical reasoning tells us that low-energy particles heading toward that region should reflect off the boundary with 100% probability. Instead, there is a tiny chance of finding those particles on the far side of the region, with no loss of energy. It’s as if they simply evaded the “barrier” region by making a “tunnel” through it.
It’s very important to note that this phenomenon is absolutely and unquestionably real, demonstrated in countless ways. The most dramatic of these is sunlight— the Sun wouldn’t be able to fuse hydrogen into helium without quantum tunneling— but it’s also got more down-to-earth technological applications. Tunneling serves as the basis for Scanning Tunneling Microscopy, which uses the tunneling of electrons across a tiny gap between a sharp tip and a surface to produce maps of that surface that can readily resolve single atoms. It’s also essential for the Josephson effect, which is the basis of superconducting detectors of magnetic fields and some of the superconducting systems proposed for quantum computing.
So, there is absolutely no debate among physicists about whether quantum tunneling is a thing that happens. Physicists get a bit twitchy without something to argue over, though, and you don’t have to dig into tunneling (heh) very far to find a disputed question, namely “How long does quantum tunneling take?”
This is an active area of research, and one I’ve written about before. The tricky part is that the distances involved in quantum tunneling are necessarily very small, making the times involved extremely short. It’s also very difficult to ensure that you know where and when the process starts, because, again, the whole business needs to be quantum, with all the measurement and uncertainty issues that brings in.
In the old post linked above, I talked about a couple of experiments involving intense and ultra-fast laser pulses, which rip an electron out of an atom, and then deflect its path in a direction that varies in time. This is a really clever trick, and the experiments are impressive technical achievements; unfortunately, they don’t entirely agree, with some experiments suggesting a short but definitely not zero tunneling time, and others finding a time so short it might as well be zero. So the question isn’t completely settled…
The latest contribution to the ongoing argument showed up on the arxiv just last night, in the form of a new tunneling-time paper from Aephraim Steinberg’s group at the University of Toronto. This one uses the internal states of atoms tunneling through a barrier to make a kind of clock that only “ticks” while the atoms are inside the barrier region.
As with so many things involving atomic physics these days, the key enabling technology here is Bose-Einstein Condensation. They’re able to measure the tunneling of rubidium atoms (which many thousands of times bigger and heavier than the electrons in the pulsed-laser experiments) across a barrier a bit more than a micron thick (several thousand times the distance in the pulsed-laser experiments) because the atoms are incredibly cold and slow-moving. The temperature of their atom cloud is just a few billionths of a degree above absolute zero, and they push them into the barrier at speeds of just a few millimeters per second.
The big advantage this offers is that unlike electrons, which are point particles, atoms have complicated internal structure and can be put in a bunch of different states. This lets them make an energy barrier out of a thin sheet of laser light that increases the energy of the atom in the light. They can control the energy shift by adjusting the laser parameters to get any height they want— they can even “turn off” the barrier without turning off the laser, by making a small shift in the laser frequency, which is crucial for establishing the timing.
The laser also changes the internal state of the atoms in a way that varies in time, letting them use the atoms as a kind of clock. They prepare a sample that’s exclusively in one particular state, and set the laser up in such a way that it drives a slow evolution into a different internal state. They separate the two different states on the far side of the barrier, and measure the probability of changing states. Once they have that, it’s relatively easy to convert that into a measurement of how much time the atoms spent interacting with the laser.
They end up with a number that’s definitely not zero— between 0.55ms and 0.69ms— that agrees well with one of the quantum methods for predicting tunneling time, and disagrees with a “semiclassical” model very badly. It’s always nice to get this kind of discrimination between models; their method also gives them a nice way to separate out the perturbation that comes from making the measurement from the “clock” they’re using, which is a nice bonus.
As a fellow cold-atom guy, I find this experiment very impressive and convincing, and there’s potential to extend this to other cool tunneling-related measurements, maybe even tracking the atoms as they move through the barrier. Physicists being physicists, though, I expect the argument over what, exactly, this all means will continue— I’d be a little surprised if zero-tunneling-time partisans gave up without finding some feature of this system to claim as a loophole.
Arcane disputes aside, though, it’s worth taking a step back to note how absolutely incredible it is that we can even have a sensible conversation about something as arcane as the amount of time a tunneling atom spends in places where classical physics says it can’t possibly be. The technology we’ve developed for probing the weirdest of quantum phenomena over the last few decades is mind-boggling, and continues to get better all the time.
Disclosure: Steinberg and I worked in the same research group at NIST in the late 1990’s— he was a postdoc working on BEC and I was a grad student on a different project. I actually had dinner with him a week ago in Toronto, but we didn’t discuss this experiment.
I’m an Associate Professor in the Department of Physics and Astronomy at Union College, and I write books about science for non-scientists. I have a BA in physics from Williams College and a Ph.D. in Chemical Physics from the University of Maryland, College Park (studying laser cooling at the National Institute of Standards and Technology in the lab of Bill Phillips, who shared the 1997 Nobel in Physics). I was a post-doc at Yale, and have been at Union since 2001. My books _How to Teach Physics to Your Dog_ and _How to teach Relativity to Your Dog_ explain modern physics through imaginary conversations with my German Shepherd; _Eureka: Discovering Your Inner Scientist_ (Basic, 2014), explains how we use the process of science in everyday activities, and my latest, _Breakfast With Einstein: The Exotic Physics of Everyday Objects_ (BenBella 2018) explains how quantum phenomena manifest in the course of an ordinary morning. I live in Niskayuna, NY with my wife, Kate Nepveu, our two kids, and Charlie the pupper.
An illustration of our cosmic history, from the Big Bang until the present, within the context of the expanding Universe. We cannot be certain, despite what many have contended, that the Universe began from a singularity. We can, however, break the illustration you see into the different eras based on properties the Universe had at those particular times. We are already in the Universe’s 6th and final era.
NASA / WMAP science team
The Universe is not the same today as it was yesterday. With each moment that goes by, a number of subtle but important changes occur, even if many of them are imperceptible on measurable, human timescales. The Universe is expanding, which means that the distances between the largest cosmic structures are increasing with time.
A second ago, the Universe was slightly smaller; a second from now, the Universe will be slightly larger. But those subtle changes both build up over large, cosmic timescales, and affect more than just distances. As the Universe expands, the relative importance of radiation, matter, neutrinos, and dark energy all change. The temperature of the Universe changes. And what you’d see in the sky would change dramatically as well. All told, there are six different eras we can break the Universe into, and we’re already in the final one.
The reason for this can be understood from the graph above. Everything that exists in our Universe has a certain amount of energy in it: matter, radiation, dark energy, etc. As the Universe expands, the volume that these forms of energy occupy changes, and each one will have its energy density evolve differently. In particular, if we define the observable horizon by the variable a, then:
matter will have its energy density evolve as 1/a3, since (for matter) density is just mass over volume, and mass can easily be converted to energy via E = mc2,
radiation will have its energy density evolve as 1/a4, since (for radiation) the number density is the number of particles divided by volume, and the energy of each individual photon stretches as the Universe expands, adding an additional factor of 1/a relative to matter,
and dark energy is a property of space itself, so its energy density remains constant (1/a0), irrespective of the Universe’s expansion or volume.
A Universe that has been around longer, therefore, will have expanded more. It will be cooler in the future and was hotter in the past; it was gravitationally more uniform in the past and is clumpier now; it was smaller in the past and will be much, much larger in the future.
By applying the laws of physics to the Universe, and comparing the possible solutions with the observations and measurements we’ve obtained, we can determine both where we came from and where we’re headed. We can extrapolate our past history all the way back to the beginning of the hot Big Bang and even before, to a period of . We can extrapolate our current Universe into the far distant future as well, and foresee the ultimate fate that awaits everything that exists.
When we draw the dividing lines based on how the Universe behaves, we find that there are six different eras that will come to pass.
Inflationary era: which preceded and set up the hot Big Bang.
Primordial Soup era: from the start of the hot Big Bang until the final transformative nuclear & particle interactions occur in the early Universe.
Plasma era: from the end of non-scattering nuclear and particle interactions until the Universe cools enough to stably form neutral matter.
Dark Ages era: from the formation of neutral matter until the first stars and galaxies reionize the intergalactic medium of the Universe completely.
Stellar era: from the end of reionization until the gravity-driven formation and growth of large-scale structure ceases, when the dark energy density dominates over the matter density.
Dark Energy era: the final stage of our Universe, where the expansion accelerates and disconnected objects speed irrevocably and irreversibly away from one another.
We already entered this final era billions of years ago. Most of the important events that will define our Universe’s history have already occurred.
1.) Inflationary era. Prior to the hot Big Bang, the Universe wasn’t filled with matter, antimatter, dark matter or radiation. It wasn’t filled with particles of any type. Instead, it was filled with a form of energy inherent to space itself: a form of energy that caused the Universe to expand both extremely rapidly and relentlessly, in an exponential fashion.
It stretched the Universe, from whatever geometry it once had, into a state indistinguishable from spatially flat.
It expanded a small, causally connected patch of the Universe to one much larger than our presently visible Universe: larger than the current causal horizon.
It took any particles that may have been present and expanded the Universe so rapidly that none of them are left inside a region the size of our visible Universe.
And the quantum fluctuations that occurred during inflation created the seeds of structure that gave rise to our vast cosmic web today.
And then, abruptly, some 13.8 billion years ago, inflation ended. All of that energy, once inherent to space itself, got converted into particles, antiparticles, and radiation. With this transition, the inflationary era ended, and the hot Big Bang began.
2.) Primordial Soup era. Once the expanding Universe is filled with matter, antimatter and radiation, it’s going to cool. Whenever particles collide, they’ll produce whatever particle-antiparticle pairs are allowed by the laws of physics. The primary restriction comes only from the energies of the collisions involved, as the production is governed by E = mc2.
As the Universe cools, the energy drops, and it becomes harder and harder to create more massive particle-antiparticle pairs, but annihilations and other particle reactions continue unabated. 1-to-3 seconds after the Big Bang, the antimatter is all gone, leaving only matter behind. 3-to-4 minutes after the Big Bang, stable deuterium can form, and nucleosynthesis of the light elements occurs. And after some radioactive decays and a few final nuclear reactions, all we have left is a hot (but cooling) ionized plasma consisting of photons, neutrinos, atomic nuclei and electrons.
3.) Plasma era. Once those light nuclei form, they’re the only positively (electrically) charged objects in the Universe, and they’re everywhere. Of course, they’re balanced by an equal amount of negative charge in the form of electrons. Nuclei and electrons form atoms, and so it might seem only natural that these two species of particle would find one another immediately, forming atoms and paving the way for stars.
Unfortunately for them, they’re vastly outnumbered — by more than a billion to one — by photons. Every time an electron and a nucleus bind together, a high-enough energy photon comes along and blasts them apart. It isn’t until the Universe cools dramatically, from billions of degrees to just thousands of degrees, that neutral atoms can finally form. (And even then, it’s only possible because of a special atomic transition.)
At the beginning of the Plasma era, the Universe’s energy content is dominated by radiation. By the end, it’s dominated by normal and dark matter. This third phase takes us to 380,000 years after the Big Bang.
S. G. Djorgovski et al., Caltech Digital Media Center
4.) Dark Ages era. Filled with neutral atoms, at last, gravitation can begin the process of forming structure in the Universe. But with all these neutral atoms around, what we presently know as visible light would be invisible all throughout the sky.
Why’s that? Because neutral atoms, particularly in the form of cosmic dust, are outstanding at blocking visible light.
In order to end these dark ages, the intergalactic medium needs to be reionized. That requires enormous amounts of star-formation and tremendous numbers of ultraviolet photons, and that requires time, gravitation, and the start of the cosmic web. The first major regions of reionization take place 200-250 million years after the Big Bang, but reionization doesn’t complete, on average, until the Universe is 550 million years old. At this point, the star-formation rate is still increasing, and the first massive galaxy clusters are just beginning to form.
NASA, ESA, A. Koekemoer (STScI), M. Jauzac (Durham University), C. Steinhardt (Niels Bohr Institute), and the BUFFALO team
5.) Stellar era. Once the dark ages are over, the Universe is now transparent to starlight. The great recesses of the cosmos are now accessible, with stars, star clusters, galaxies, galaxy clusters, and the great, growing cosmic web all waiting to be discovered. The Universe is dominated, energy-wise, by dark matter and normal matter, and the gravitationally bound structures continue to grow larger and larger.
The star-formation rate rises and rises, peaking about 3 billion years after the Big Bang. At this point, new galaxies continue to form, existing galaxies continue to grow and merge, and galaxy clusters attract more and more matter into them. But the amount of free gas within galaxies begins to drop, as the enormous amounts of star-formation have used up a large amount of it. Slowly but steadily, the star-formation rate drops.
As time goes forward, the stellar death rate will outpace the birth rate, a fact made worse by the following surprise: as the matter density drops with the expanding Universe, a new form of energy — dark energy — begins to appear and dominate. 7.8 billion years after the Big Bang, distant galaxies stop slowing down in their recession from one another, and begin speeding up again. The accelerating Universe is upon us. A little bit later, 9.2 billion years after the Big Bang, dark energy becomes the dominant component of energy in the Universe. At this point, we enter the final era.
NASA & ESA
6.) Dark Energy age. Once dark energy takes over, something bizarre happens: the large-scale structure in the Universe ceases to grow. The objects that were gravitationally bound to one another before dark energy’s takeover will remain bound, but those that were not yet bound by the onset of the dark energy age will never become bound. Instead, they will simply accelerate away from one another, leading lonely existences in the great expanse of nothingness.
The individual bound structures, like galaxies and groups/clusters of galaxies, will eventually merge to form one giant elliptical galaxy. The existing stars will die; new star formation will slow down to a trickle and then stop; gravitational interactions will eject most of the stars into the intergalactic abyss. Planets will spiral into their parent stars or stellar remnants, owing to decay by gravitational radiation. Even black holes, with extraordinarily long lifetimes, will eventually decay from Hawking radiation.
Image courtesy of Jeff Bryant
In the end, only black dwarf stars and isolated masses too small to ignite nuclear fusion will remain, sparsely populated and disconnected from one another in this empty, ever-expanding cosmos. These final-state corpses will exist even googols of years onward, continuing to persist as dark energy remains the dominant factor in our Universe.
This last era, of dark energy domination, has already begun. Dark energy became important for the Universe’s expansion 6 billion years ago, and began dominating the Universe’s energy content around the time our Sun and Solar System were being born. The Universe may have six unique stages, but for the entirety of Earth’s history, we’ve already been in the final one. Take a good look at the Universe around us. It will never be this rich — or this easy to access — ever again.
Cosmic rays, which are ultra-high energy particles originating from all over the Universe, strike protons in the upper atmosphere and produce showers of new particles. The fast-moving charged particles also emit light due to Cherenkov radiation as they move faster than the speed of light in Earth’s atmosphere, and produce secondary particles that can be detected here on Earth.
Simon Swordy (U. Chicago), NASA
When you hold out your palm and point it towards the sky, what is it that’s interacting with your hand? You might correctly surmise that there are ions, electrons and molecules all colliding with your hand, as the atmosphere is simply unavoidable here on Earth. You might also remember that photons, or particles of light, must be striking you, too.
But there’s something more striking your hand that, without relativity, simply wouldn’t be possible. Every second, approximately one muon — the unstable, heavy cousin of the electron — passes through your outstretched palm. These muons are made in the upper atmosphere, created by cosmic rays. With a mean lifetime of 2.2 microseconds, you might think the ~100+ km journey to your hand would be impossible. Yet relativity makes it so, and the palm of your hand can prove it. Here’s how.
While cosmic ray showers are common from high-energy particles, it’s mostly the muons which make it down to Earth’s surface, where they are detectable with the right setup.
Alberto Izquierdo; courtesy of Francisco Barradas Solas
Individual subatomic particles are almost always invisible to human eyes, as the wavelengths of light we can see are unaffected by particles passing through our bodies. But if you create a supersaturated vapor of pure alcohol, a charged particle passing through it will leave a trail that can be visually detected by even as primitive an instrument as the human eye.
As a charged particle moves through the alcohol vapor, it ionizes a path of alcohol particles, which act as centers for the condensation of alcohol droplets. The trail that results is both long enough and long-lasting enough that human eyes can see it, and the speed and curvature of the trail (if you apply a magnetic field) can even tell you what type of particle it was.
This principle was first applied in particle physics in the form of a cloud chamber.
A completed cloud chamber can be built in a day out of readily-available materials and for less than $100. You can use it to prove the validity of Einstein’s relativity, if you know what you’re doing!
Instructables user ExperiencingPhysics
Today, a cloud chamber can be built, by anyone with commonly available parts, for a day’s worth of labor and less than $100 in parts. (I’ve published a guide here.) If you put the mantle from a smoke detector inside the cloud chamber, you’ll see particles emanate from it in all directions and leave tracks in your cloud chamber.
That’s because a smoke detector’s mantle contains radioactive elements such as Americium, which decays by emitting α-particles. In physics, α-particles are made up of two protons and two neutrons: they’re the same as a helium nucleus. With the low energies of the decay and the high mass of the α-particles, these particles make slow, curved tracks and can even be occasionally seen bouncing off of the cloud chamber’s bottom. It’s an easy test to see if your cloud chamber is working properly.
For an extra bonus of radioactive tracks, add the mantle of a smoke detector to the bottom of your cloud chamber, and watch the slow-moving particles emanating outward from it. Some will even bounce off the bottom!
If you build a cloud chamber like this, however, those α-particle tracks aren't the only things you'll see. In fact, even if you leave the chamber completely empty (i.e., you don't put a source of any type inside or nearby), you'll still see tracks: they'll be mostly vertical and appear to be perfectly straight.
This is because of cosmic rays: high-energy particles that strike the top of Earth’s atmosphere, producing cascading particle showers. Most of the cosmic rays are made up of protons, but move with a wide variety of speeds and energies. The higher-energy particles will collide with particles in the upper atmosphere, producing particles like protons, electrons, and photons, but also unstable, short-lived particles like pions. These particle showers are a hallmark of fixed-target particle physics experiments, and they occur naturally from cosmic rays, too.
Although there are four major types of particles that can be detected in a cloud chamber, the long and straight tracks are the cosmic ray muons, which can be used to prove that special relativity is correct.
Wikimedia Commons user Cloudylabs
The thing about pions is that they come in three varieties: positively charged, neutral, and negatively charged. When you make a neutral pion, it just decays into two photons on very short (~10⁻¹⁶ s) timescales. But charged pions live longer (around 10⁻⁸ s), and when they decay, they primarily decay into muons, which are point particles like electrons but have 206 times the mass.
Muons also are unstable, but they’re the longest-lived unstable fundamental particle as far as we know. Owing to their relatively small mass, they live for an astoundingly long 2.2 microseconds, on average. If you were to ask how far a muon could travel once created, you might think to multiply its lifetime (2.2 microseconds) by the speed of light (300,000 km/s), getting an answer of 660 meters. But that leads to a puzzle.
Cosmic ray shower and some of the possible interactions. Note that if a charged pion (left) strikes a nucleus before it decays, it produces a shower, but if it decays first (right), it produces a muon that will reach the surface.
Konrad Bernlöhr of the Max-Planck-Institute at Heidelberg
I told you earlier that if you hold out the palm of your hand, roughly one muon per second passes through it. But if they can only live for 2.2 microseconds, they’re limited by the speed of light, and they’re created in the upper atmosphere (around 100 km up), how is it possible for those muons to reach us?
You might start to think of excuses. You might imagine that some of the cosmic rays have enough energy to continue cascading and producing particle showers during their entire journey to the ground, but that's not the story the muons tell when we measure their energies: the lowest ones are still created some 30 km up. You might imagine that the 2.2 microseconds is just an average, and maybe the rare muons that live for 3 or 4 times that long will make it down. But when you do the math, only 1-in-10⁵⁰ muons would survive down to Earth; in reality, nearly 100% of the created muons arrive.
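You can check that arithmetic yourself. This toy calculation uses the round numbers quoted above (a ~100 km journey at essentially the speed of light, a 2.2 microsecond mean lifetime, and simple exponential decay) and a representative ultra-relativistic speed; real muon energy spectra and creation altitudes vary, but the contrast it shows is the whole point:

```python
import math

c = 2.998e8          # speed of light, m/s
tau = 2.2e-6         # muon mean lifetime at rest, s
altitude = 100e3     # creation altitude, m (round number from the text)
v = 0.99999 * c      # a representative ultra-relativistic speed

t_lab = altitude / v                     # travel time in our reference frame
survival_naive = math.exp(-t_lab / tau)  # if the muon's clock ran at our rate

gamma = 1 / math.sqrt(1 - (v / c) ** 2)  # Lorentz factor, ~224
survival_dilated = math.exp(-t_lab / (gamma * tau))  # with time dilation

print(f"without dilation: {survival_naive:.1e}")   # effectively zero
print(f"with dilation:    {survival_dilated:.2f}") # roughly half survive
```

Without time dilation, essentially no muons survive a 100 km trip; with it, a substantial fraction do, and the most energetic muons (with even larger Lorentz factors) fare better still.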
A light-clock, formed by a photon bouncing between two mirrors, will define time for any observer. Although the two observers may not agree with one another on how much time is passing, they will agree on the laws of physics and on the constants of the Universe, such as the speed of light. When relativity is applied correctly, their measurements will be found to be equivalent to one another, as the correct relativistic transformation will allow one observer to understand the observations of the other.
John D. Norton
How can we explain such a discrepancy? Sure, the muons are moving close to the speed of light, but we’re observing them from a reference frame where we’re stationary. We can measure the distance the muons travel, we can measure the time they live for, and even if we give them the benefit of the doubt and say that they’re moving at (rather than near) the speed of light, they shouldn’t even make it for 1 kilometer before decaying.
But this misses one of the key points of relativity! Unstable particles don't experience time as you, an external observer, measure it. They experience time according to their own onboard clocks, which run slower the closer they move to the speed of light. Time dilates for them, which means that we observe them living longer than 2.2 microseconds in our reference frame. The faster they move, the farther we'll see them travel.
One revolutionary aspect of relativistic motion, put forth by Einstein but previously built up by Lorentz, FitzGerald, and others, was that rapidly moving objects appear to contract in space and dilate in time. The faster you move relative to someone at rest, the more your lengths appear contracted, and the more time appears to dilate for the outside world. This picture of relativistic mechanics replaced the old Newtonian view of classical mechanics, and it can explain the lifetime of a cosmic ray muon.
How does this work out for the muon? From its reference frame, time passes normally, so it will only live for 2.2 microseconds according to its own clocks. But it will experience reality as though it hurtles towards Earth’s surface extremely close to the speed of light, causing lengths to contract in its direction of motion.
If a muon moves at 99.999% the speed of light, every 660 meters outside of its reference frame will appear as though it’s just 3 meters in length. A journey of 100 km down to the surface would appear to be a journey of 450 meters in the muon’s reference frame, taking up just 1.5 microseconds of time according to the muon’s clock.
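Those numbers all follow from one quantity, the Lorentz factor. A quick sketch, assuming the 99.999%-of-light-speed figure used above:

```python
import math

c = 2.998e8                         # speed of light, m/s
beta = 0.99999                      # v/c for the muon
gamma = 1 / math.sqrt(1 - beta**2)  # Lorentz factor, ~224

naive_range = 2.998e5 * 2.2e-6      # km: lifetime x c, the "660 meter" limit
contracted_660 = 660 / gamma        # m: how 660 m appears in the muon's frame
journey = 100e3 / gamma             # m: the 100 km trip, length-contracted
proper_time = journey / (beta * c)  # s: elapsed time on the muon's own clock

print(f"gamma = {gamma:.0f}")
print(f"660 m contracts to {contracted_660:.1f} m")
print(f"100 km contracts to {journey:.0f} m")
print(f"the trip takes {proper_time * 1e6:.1f} microseconds, muon-clock time")
```

A Lorentz factor of about 224 shrinks the 100 km atmosphere to roughly 450 meters in the muon's frame, comfortably traversable within a 2.2 microsecond lifetime.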
At high enough energies and velocities, relativity becomes important, allowing many more muons to survive than would without the effects of time dilation.
Frisch/Smith, Am. J. of Phys. 31 (5): 342–355 (1963) / Wikimedia Commons user D.H
This teaches us how to reconcile things for the muon: from our reference frame here on Earth, we see the muon travel 100 km in a timespan of about a third of a millisecond. This works out just fine, because time is dilated for the muon and lengths are contracted for it: it sees itself as traveling 450 meters in 1.5 microseconds, and hence it can remain alive all the way down to its destination of Earth's surface.
Without the laws of relativity, this cannot be explained! But at high velocities, which correspond to high particle energies, the effects of time dilation and length contraction enable not just a few but most of the created muons to survive. This is why, even all the way down here at the surface of the Earth, one muon per second still appears to pass through your upturned, outstretched hand.
The V-shaped track in the center of the image arises from a muon decaying to an electron and two neutrinos. The high-energy track with a kink in it is evidence of a mid-air particle decay. By colliding positrons and electrons at a specific, tunable energy, muon-antimuon pairs could be produced at will. The necessary energy for making a muon/antimuon pair from high-energy positrons colliding with electrons at rest is almost identical to the energy from electron/positron collisions necessary to create a Z-boson.
The Scottish Science & Technology Roadshow
If you ever doubted relativity, it's hard to fault you: the theory itself seems so counterintuitive, and its effects are thoroughly outside the realm of our everyday experience. But there is an experimental test you can perform right at home, cheaply and with just a single day's effort, that allows you to see the effects for yourself.
You can build a cloud chamber, and if you do, you will see those muons. If you applied a magnetic field, you'd see the muon tracks curve according to their charge-to-mass ratio: you'd immediately know they weren't electrons. On rare occasions, you'd even see a muon decaying in mid-air. And, finally, if you measured their energies, you'd find that they were moving ultra-relativistically, at 99.999%+ the speed of light. If not for relativity, you wouldn't see a single muon at all.
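To put that speed in energy terms: here is a rough estimate, assuming the 99.999%-of-light-speed figure and using the muon's rest energy of about 105.7 MeV:

```python
import math

rest_energy_mev = 105.7             # muon rest mass energy, MeV
beta = 0.99999                      # v/c, the figure assumed above
gamma = 1 / math.sqrt(1 - beta**2)  # Lorentz factor

total_energy_gev = gamma * rest_energy_mev / 1000  # E = gamma * m * c^2

print(f"a muon at {beta:.5f}c carries about {total_energy_gev:.0f} GeV")
```

That's on the energetic end of the distribution; typical sea-level muons carry a few GeV, but even those have Lorentz factors of a few dozen, which is plenty for the survival argument above.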
Time dilation and length contraction are real, and the fact that muons survive, from cosmic ray showers all the way down to Earth, proves it beyond a shadow of a doubt.
One of the most astonishing facts about science is how universally applicable the laws of nature are. Every particle obeys the same rules, experiences the same forces, and sees the same fundamental constants, no matter where or when it exists. Gravitationally, every single entity in the Universe experiences, depending on how you look at it, either the same gravitational acceleration or the same curvature of spacetime, no matter what properties it possesses. At least, that's what things are like in theory. In practice, some things are notoriously difficult to measure.
In 1915, Einstein’s theory of General Relativity gave us a brand new theory of gravity, based on the geometrical concept of curved spacetime. Matter and energy told space how to curve; curved space told matter and energy how to move. By 1922, scientists had discovered that if you fill the Universe uniformly with matter and energy, it won’t remain static, but will either expand or contract. By the end of the 1920s, led by the observations of Edwin Hubble, we had discovered our Universe was expanding, and had our first measurement of the expansion rate.
The journey to pin down exactly what that rate is has now hit a snag, with two different measurement techniques yielding inconsistent results. It could be an indicator of new physics. But there could be an even simpler solution, and nobody wants to talk about it.
Standard candles (L) and standard rulers (R) are two different techniques astronomers use to measure the expansion of space at various times/distances in the past. Based on how quantities like luminosity or angular size change with distance, we can infer the expansion history of the Universe.
NASA / JPL-Caltech
The controversy is as follows: when we see a distant galaxy, we’re seeing it as it was in the past. But it isn’t simply that you look at light that took a billion years to arrive and conclude that the galaxy is a billion light years away. Instead, the galaxy will actually be more distant than that.
Why’s that? Because the space that makes up our Universe itself is expanding. This prediction of Einstein’s General Relativity, first recognized in the 1920s and then observationally validated by Edwin Hubble several years later, has been one of the cornerstones of modern cosmology.
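The mismatch between light-travel time and present-day distance can be made concrete with a short numerical integration. This sketch assumes a flat Universe with Planck-like parameters (H0 = 67.4 km/s/Mpc, Ω_m = 0.315, Ω_Λ = 0.685) and evaluates a galaxy at redshift z = 1:

```python
import numpy as np

H0 = 67.4                      # expansion rate today, km/s/Mpc (Planck-like)
omega_m, omega_de = 0.315, 0.685
c = 2.998e5                    # speed of light, km/s

z = np.linspace(0.0, 1.0, 10001)
E = np.sqrt(omega_m * (1 + z) ** 3 + omega_de)   # H(z)/H0 for flat LCDM

def trapezoid(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Comoving distance: d_C = (c/H0) * integral of dz/E(z), converted Mpc -> Gly
d_c_gly = trapezoid(1 / E, z) * (c / H0) * 3.2616e-3

# Lookback time: t = (1/H0) * integral of dz/((1+z)E(z)); 1/H0 = 977.8/H0 Gyr
t_lb_gyr = trapezoid(1 / ((1 + z) * E), z) * 977.8 / H0

print(f"light left this z=1 galaxy {t_lb_gyr:.1f} billion years ago,")
print(f"but the galaxy is now {d_c_gly:.1f} billion light-years away")
```

The light took roughly 8 billion years to arrive, yet the galaxy now sits roughly 11 billion light-years away, precisely because space kept expanding while the light was in transit.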
A plot of the apparent expansion rate (y-axis) vs. distance (x-axis) is consistent with a Universe that expanded faster in the past, but where distant galaxies are accelerating in their recession today. This is a modern version of Hubble's original work, extending thousands of times farther. Note the fact that the points do not form a straight line, indicating the expansion rate's change over time.
Ned Wright, based on the latest data from Betoule et al. (2014)
The big question is how to measure it. How do we measure how the Universe is expanding? All methods invariably rely on the same general rules:
you pick a point in the Universe’s past where you can make an observation,
you measure the properties you can measure about that distant point,
and you calculate how the Universe would have had to expand from then until now to reproduce what you see.
This could be from a wide variety of methods, ranging from observations of the nearby Universe to objects billions of light years away.
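In the nearby Universe, where Hubble's law v = H0·d holds, step 3 reduces to a straight-line fit. Here is a minimal mock example; the "data" are fabricated for illustration, generated around an assumed true rate of 70 km/s/Mpc with velocity noise, to show how the slope recovers the expansion rate:

```python
import numpy as np

rng = np.random.default_rng(42)

true_h0 = 70.0                             # km/s/Mpc, assumed for this mock
distances = rng.uniform(20, 400, size=50)  # Mpc: mock galaxy distances
velocities = true_h0 * distances + rng.normal(0, 300, size=50)  # km/s, noisy

# Least-squares slope through the origin: H0 = sum(v*d) / sum(d^2)
h0_fit = np.sum(velocities * distances) / np.sum(distances**2)

print(f"fitted expansion rate: {h0_fit:.1f} km/s/Mpc")
```

The hard part in real analyses isn't this fit; it's knowing the distances in the first place, and propagating the calibration uncertainty of each rung of the ladder.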
The Planck satellite’s data, combined with the other complementary data suites, gives us very tight constraints on the allowed values of cosmological parameters. The Hubble expansion rate today, in particular, is tightly constrained to be between 67 and 68 km/s/Mpc, with very little wiggle-room. The measurements from the Cosmic Distance Ladder method (Riess et al., 2018) are not consistent with this result.
Planck 2018 results. VI. Cosmological parameters; Planck Collaboration (2018)
For many years now, there’s been a controversy brewing. Two different measurement methods — one using the cosmic distance ladder and one using the first observable light in the Universe — give results that are mutually inconsistent. The tension has enormous implications that something may be wrong with how we conceive of the Universe.
There is another explanation, however, that's much simpler than the idea that something is wrong with the Universe or that new physics is required. Instead, it's possible that one (or more) of the methods has a systematic error associated with it: an inherent, not-yet-identified flaw in the method that biases its results. Either method (or even both) could be at fault. Here's how.
The Variable Star RS Puppis, with its light echoes shining through the interstellar clouds. Variable stars come in many varieties; one of them, Cepheid variables, can be measured both within our own galaxy and in galaxies up to 50-60 million light years away. This enables us to extrapolate distances from our own galaxy to far more distant ones in the Universe.
NASA, ESA, and the Hubble Heritage Team
The cosmic distance ladder is the oldest method we have to compute the distances to faraway objects. You start by measuring something close by: the distance to the Sun, for example. Then you use direct measurements of distant stars using the motion of the Earth around the Sun — known as parallax — to calculate the distance to nearby stars. Some of these nearby stars will include variable stars like Cepheids, which can be measured accurately in nearby and distant galaxies, and some of those galaxies will contain events like type Ia supernovae, which are some of the most distant objects of all.
Make all of these measurements, and you can derive distances to galaxies many billions of light years away. Put it all together with easily-measurable redshifts, and you’ll arrive at a measurement for the rate of expansion of the Universe.
The construction of the cosmic distance ladder involves going from our Solar System to the stars to nearby galaxies to distant ones. Each “step” carries along its own uncertainties, especially the Cepheid variable and supernovae steps; it also would be biased towards higher or lower values if we lived in an underdense or overdense region.
NASA, ESA, A. Feild (STScI), and A. Riess (STScI/JHU)
This is how dark energy was first discovered, and our best methods of the cosmic distance ladder give us an expansion rate of 73.2 km/s/Mpc, with an uncertainty of less than 3%.
Universal light-curve properties for Type Ia supernovae. This result, first obtained in the late 1990s, has recently been called into question; supernovae may not, in fact, have light curves that are as universal as previously thought.
S. Blondin and Max Stritzinger
On the other hand, we have measurements of the Universe’s composition and expansion rate from the earliest available picture of it: the Cosmic Microwave Background. The minuscule, 1-part-in-30,000 temperature fluctuations display a very specific pattern on all scales, from the largest all-sky ones down to 0.07° or so, where its resolution is limited by the fundamental astrophysics of the Universe itself.
The final results from the Planck collaboration show an extraordinary agreement between the predictions of a dark energy/dark matter-rich cosmology (blue line) with the data (red points, black error bars) from the Planck team. All 7 acoustic peaks fit the data extraordinarily well.
Planck 2018 results. VI. Cosmological parameters; Planck Collaboration (2018)
Based on the full suite of data from Planck, we have exquisite measurements for what the Universe is made of and how it’s expanded over its history. The Universe is 31.5% matter (where 4.9% is normal matter and the rest is dark matter), 68.5% dark energy, and just 0.01% radiation. The Hubble expansion rate, today, is determined to be 67.4 km/s/Mpc, with an uncertainty of only around 1%. This creates an enormous tension with the cosmic distance ladder results.
An illustration of clustering patterns due to Baryon Acoustic Oscillations, where the likelihood of finding a galaxy at a certain distance from any other galaxy is governed by the relationship between dark matter and normal matter. As the Universe expands, this characteristic distance expands as well, allowing us to measure the Hubble constant, the dark matter density, and even the scalar spectral index. The results agree with the CMB data.
Zosia Rostomian
In addition, there's a third, independent handle on the expansion, based on the way that galaxies cluster together on large scales. When you have a galaxy, you can ask a simple-sounding question: what is the probability of finding another galaxy a specific distance away?
Based on what we know about dark matter and normal matter, there’s an enhanced probability of finding a galaxy 500 million light years distant from another versus 400 million or 600 million. This is for today, and so as the Universe was smaller in the past, the distance scale corresponding to this probability enhancement changes as the Universe expands. This method is known as the inverse distance ladder, and gives a third method to measure the expanding Universe. It also gives an expansion rate of around 67 km/s/Mpc, again with a small uncertainty.
Modern measurement tensions from the distance ladder (red) with CMB (green) and BAO (blue) data. The red points are from the distance ladder method; the green and blue are from ‘leftover relic’ methods. Note that the errors on red vs. green/blue measurements do not overlap.
É. Aubourg et al., Phys. Rev. D 92 (2015) no. 12, 123516
Now, it’s possible that both of these measurements have a flaw in them, too. In particular, many of these parameters are related, meaning that if you try to increase one, you have to adjust others up or down to compensate. While the data from Planck indicates a Hubble expansion rate of 67.4 km/s/Mpc, that rate could be higher, like 72 km/s/Mpc. If it were, that would simply mean we needed a smaller amount of matter (26% instead of 31.5%), a larger amount of dark energy (74% instead of 68.5%), and a larger scalar spectral index (n_s) to characterize the density fluctuations (0.99 instead of 0.96).
This is deemed highly unlikely, but it illustrates how one small flaw, if we overlooked something, could keep these independent measurements from aligning.
Before Planck, the best-fit to the data indicated a Hubble parameter of approximately 71 km/s/Mpc, but a value of approximately 70 or above would now be too great for both the dark matter density (x-axis) we’ve seen via other means and the scalar spectral index (right side of the y-axis) that we require for the large-scale structure of the Universe to make sense.
P. A. R. Ade et al. and the Planck Collaboration (2015)
There are a lot of problems that arise for cosmology if the teams measuring the Cosmic Microwave Background and the inverse distance ladder are wrong. The Universe, from the measurements we have today, should not have the low dark matter density or the high scalar spectral index that a large Hubble constant would imply. If the value truly is closer to 73 km/s/Mpc, we may be headed for a cosmic revolution.
Correlations between certain aspects of the magnitude of temperature fluctuations (y-axis) as a function of decreasing angular scale (x-axis) show a Universe that is consistent with a scalar spectral index of 0.96 or 0.97, but not 0.99 or 1.00.
P. A. R. Ade et al. and the Planck Collaboration
On the other hand, if the cosmic distance ladder team is wrong, owing to a fault in any rung on the distance ladder, the crisis is completely evaded. There was one overlooked systematic, and once it’s resolved, every piece of the cosmic puzzle falls perfectly into place. Perhaps the value of the Hubble expansion rate really is somewhere between 66.5 and 68 km/s/Mpc, and all we had to do was identify one astronomical flaw to get there.
The fluctuations in the CMB, the formation of large-scale structure and the correlations within it, and modern observations of gravitational lensing, among many others, all point towards the same picture: an accelerating Universe dominated by dark matter and dark energy.
Chris Blake and Sam Moorfield
The possibility of needing to overhaul many of the most compelling conclusions we’ve reached over the past two decades is fascinating, and is worth investigating to the fullest. Both groups may be right, and there may be a physical reason why the nearby measurements are skewed relative to the more distant ones. Or both groups may be wrong, each with its own overlooked error.
But this controversy could end with the astronomical equivalent of a loose OPERA cable. The distance ladder group could have a flaw, and our large-scale cosmological measurements could be as good as gold. That would be the simplest solution to this fascinating saga. But until the critical data comes in, we simply don’t know. Meanwhile, our scientific curiosity demands that we investigate. No less than the entire Universe is at stake.