As far as raw explosive power goes, no other cataclysm in the Universe is both as common and as destructive as a core-collapse supernova. In one brief event lasting only seconds, a runaway reaction causes a star to give off as much energy as our Sun will emit over its entire 10-12 billion year lifetime. While many supernovae have been observed both historically and since the invention of the telescope, humanity has never witnessed one up close.
Recently, the nearby red supergiant star, Betelgeuse, has started exhibiting interesting signs of dimming, leading some to suspect that it might be on the verge of going supernova. While our Sun isn’t massive enough to experience that same fate, it’s a fun and macabre thought experiment to imagine what would happen if it did. Yes, we’d all die in short order, but not from either the blast wave or from radiation. Instead, the neutrinos would get us first. Here’s how.
An animation sequence of the 17th century supernova in the constellation of Cassiopeia.
NASA, ESA, and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration. Acknowledgement: Robert A. Fesen (Dartmouth College, USA) and James Long (ESA/Hubble)
A supernova — specifically, a core-collapse supernova — can only occur when a star many times more massive than our Sun runs out of nuclear fuel to burn in its core. All stars start off doing what our Sun does: fusing the most common element in the Universe, hydrogen, into helium through a series of chain reactions. During this part of a star’s life, it’s the radiation pressure from these nuclear fusion reactions that prevents the star’s interior from collapsing under the enormous force of gravitation.
So what happens, then, when the star burns through all the hydrogen in its core? The radiation pressure drops and gravity starts to win in this titanic struggle, causing the core to contract. As it contracts, it heats up, and if the temperature can pass a certain critical threshold, the star will start fusing the next-lightest element in line, helium, to produce carbon.
This cutaway showcases the various regions of the surface and interior of the Sun.
Wikimedia Commons user Kelvinsong
This will occur in our own Sun some 5-to-7 billion years in the future, causing it to swell into a red giant. Our parent star will expand so much that Mercury, Venus, and possibly even Earth will be engulfed, but let’s instead imagine that we come up with some clever plan to migrate our planet to a safe orbit, while mitigating the increased luminosity to prevent our planet from getting fried. This helium burning will last for hundreds of millions of years before our Sun runs out of helium and the core contracts and heats up once again.
For our Sun, that’s the end of the line, as we don’t have enough mass to move to the next stage and begin carbon fusion. In a star far more massive than our Sun, however, hydrogen-burning only takes millions of years to complete, and the helium-burning phase lasts merely hundreds of thousands of years. After that, the core’s contraction will enable carbon fusion to proceed, and things will move very quickly.
As it nears the end of its evolution, heavy elements produced by nuclear fusion accumulate inside the star.
NASA / CXC / S. Lee
Carbon fusion can produce elements such as oxygen, neon, and magnesium, but only takes hundreds of years to complete. When carbon becomes scarce in the core, it again contracts and heats up, leading to neon fusion (which lasts about a year), followed by oxygen fusion (lasting for a few months), and then silicon fusion (which lasts less than a day). In that final phase of silicon-burning, core temperatures can reach ~3 billion K, some 200 times the hottest temperatures currently found at the center of the Sun.
And then the critical moment occurs: the core runs out of silicon. Again, the pressure drops, but this time there’s nowhere to go. The elements that are produced from silicon fusion — elements like cobalt, nickel and iron — are more stable than the heavier elements that they’d conceivably fuse into. Instead, nothing there is capable of resisting gravitational collapse, and the core implodes.
Artist’s illustration of the interior of a massive star in the final, pre-supernova stages.
This is where the core-collapse supernova happens. A runaway fusion reaction occurs, producing what’s basically one giant atomic nucleus made of neutrons in the star’s core, while the outer layers have a tremendous amount of energy injected into them. The fusion reaction itself lasts for only around 10 seconds, liberating about 10^44 joules of energy, or the mass-equivalent (via Einstein’s E = mc^2) of about 10^27 kg: as much as you’d release by transforming two Saturns into pure energy.
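As a quick sanity check, we can run the mass-equivalence arithmetic in a few lines of Python (the constants are approximate, and the Saturn mass is the standard catalog value):

```python
# Back-of-the-envelope check: how much mass corresponds to 10^44 joules?
C = 2.998e8          # speed of light, m/s
M_SATURN = 5.683e26  # mass of Saturn, kg

energy_released = 1e44                    # joules liberated by the collapse
mass_equivalent = energy_released / C**2  # E = mc^2 rearranged to m = E/c^2

print(f"mass equivalent: {mass_equivalent:.2e} kg")           # ~1.1e27 kg
print(f"in Saturn masses: {mass_equivalent / M_SATURN:.1f}")  # ~2 Saturns
```

The result, roughly 1.1 × 10^27 kg, is indeed about two Saturn masses.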
That energy goes into a mix of radiation (photons), the kinetic energy of the now-exploding stellar material, and neutrinos. All three of these are more than capable of ending any life that’s managed to survive on an orbiting planet up to that point, but how we’d all die if the Sun went supernova depends on the answer to one question: who gets there first?
The anatomy of a very massive star throughout its life, culminating in a Type II Supernova.
Nicole Rager Fuller/NSF
When the runaway fusion reaction occurs, the only delay in the light getting out comes from the fact that it’s produced in the core of this star, and the core is surrounded by the star’s outer layers. It takes a finite amount of time for that signal to propagate to the outermost surface of the star — the photosphere — where it’s then free to travel in a straight line at the speed of light.
As soon as it gets out, the radiation will scorch everything in its path, blowing the atmosphere (and any remaining ocean) clean off of the star-facing side of an Earth-like planet immediately, while the night side would last for seconds-to-minutes longer. The blast wave of the matter would follow soon afterwards, engulfing the remnants of our scorched world and quite possibly, dependent on the specifics of the explosion, destroying the planet entirely.
But any living creature would surely die even before the light or the blast wave from the supernova arrived; they’d never see their demise coming. Instead, the neutrinos — which interact with matter so rarely that an entire star, to them, functions like a pane of glass does to visible light — simply speed away omnidirectionally, from the moment of their creation, at speeds indistinguishable from the speed of light.
Moreover, neutrinos carry an enormous fraction of a supernova’s energy away: approximately 99% of it. Even with our paltry Sun emitting just ~4 × 10^26 joules of energy each second, approximately 70 trillion (7 × 10^13) neutrinos pass through your hand every second. The probability that any one of them will interact is tiny, but occasionally one does, depositing the energy it carries into your body. Only a few neutrinos actually do this over the course of a typical day with our current Sun, but if it went supernova, the story would change dramatically.
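The scale of the solar neutrino flux can be estimated from the Sun’s luminosity alone. The sketch below assumes all of the Sun’s power comes from the proton-proton chain, which releases roughly 26.7 MeV and two neutrinos per helium-4 nucleus formed:

```python
# Rough estimate of the solar neutrino flux at Earth's distance.
import math

MEV = 1.602e-13   # joules per MeV
L_SUN = 3.8e26    # solar luminosity, W
AU = 1.496e11     # Earth-Sun distance, m

fusions_per_sec = L_SUN / (26.7 * MEV)   # helium-4 nuclei formed per second
neutrinos_per_sec = 2 * fusions_per_sec  # two neutrinos per completed chain
flux = neutrinos_per_sec / (4 * math.pi * AU**2)  # per m^2 per second

print(f"{flux:.1e} neutrinos per square meter per second")  # ~6e14
```

That’s of order 10^14–10^15 neutrinos per square meter each second, consistent with the tens of trillions quoted above streaming through a hand-sized area.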
A neutrino event, identifiable by the rings of Cerenkov radiation that show up in the detector.
Super Kamiokande collaboration
When a supernova occurs, the neutrino flux increases by approximately a factor of 10 quadrillion (10^16), while the energy-per-neutrino goes up by around a factor of 10, increasing the probability of a neutrino interacting with your body tremendously. When you work through the math, you’ll find that even with their extraordinarily low probability of interaction, any living creature — from a single-celled organism to a complex human being — would be boiled from the inside out by neutrino interactions alone.
This is the scariest outcome imaginable, because you’d never see it coming. In 1987, we observed a supernova from 168,000 light-years away in both light and neutrinos. The neutrinos arrived at three different detectors across the world, spanning about 10 seconds from the earliest to the latest. The light from the supernova, however, didn’t begin arriving until hours later. Had that supernova been our own Sun, everything on Earth would have been vaporized hours before the first visual signatures arrived.
A supernova explosion enriches the surrounding interstellar medium with heavy elements.
ESO / L. Calçada
Perhaps the scariest part of neutrinos is how there’s no good way to shield yourself from them. Even if you tried to block their path to you with lead, or a planet, or even a neutron star, more than 50% of the neutrinos would still get through. According to some estimates, not only would all life on an Earth-like planet be destroyed by neutrinos, but any life anywhere in a comparable solar system would meet that same fate, even out at the distance of Pluto, before the first light from the supernova ever arrived.
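The classic way to quantify how unstoppable neutrinos are is to ask how far one travels through solid lead before interacting. The sketch below assumes a cross-section of ~10^-45 m^2, a rough value for ~10 MeV supernova neutrinos; real cross-sections depend strongly on energy:

```python
# Order-of-magnitude mean free path of a supernova neutrino in solid lead.
N_A = 6.022e23        # Avogadro's number
RHO_PB = 11340.0      # density of lead, kg/m^3
A_PB = 0.2072         # molar mass of lead, kg/mol
SIGMA = 1e-45         # assumed neutrino-nucleus cross-section, m^2
LIGHT_YEAR = 9.46e15  # meters

n = RHO_PB * N_A / A_PB         # number density of lead nuclei, per m^3
mean_free_path = 1 / (n * SIGMA)

print(f"mean free path: {mean_free_path / LIGHT_YEAR:.1f} light-years")
```

A typical neutrino would sail through a few light-years of lead before scattering even once, which is why no planet-sized shield makes any difference.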
The only early-warning system we could ever install is a sufficiently sensitive neutrino detector, which could identify the distinct neutrino signatures generated by each of the carbon, neon, oxygen, and silicon burning phases. We would know when each of these transitions happened, giving any living thing a few hours during the silicon-burning phase to say its final goodbyes before the supernova occurred.
There are many natural neutrino signatures produced by stars and other processes in the Universe.
IceCube collaboration / NSF / University of Wisconsin
It’s horrifying to think that an event as fascinating and destructive as a supernova, despite all the spectacular effects it produces, would kill anything nearby before a single perceptible signal arrived, but that’s absolutely the case with neutrinos. Produced in the core of a supernova and carrying away 99% of its energy, neutrinos would deliver a lethal dose to all life on an Earth-like planet within 1/20th of a second of every other location on the planet. No amount of shielding, even being on the opposite side of the planet from the supernova, would help at all.
Whenever a star goes supernova, neutrinos are the first signal that can be detected from it, but by the time they arrive, it’s already too late. Even as rarely as they interact, they’d sterilize the entire solar system before the light or matter from the blast ever arrived. At the moment of a supernova’s ignition, the fate of death is sealed by the stealthiest killer of all: the elusive neutrino.
When Danielle Wuchenich hatched the idea for measurement startup Liquid Instruments, she was not chasing worldly success but a faster process for discovering the secrets of space. Her solution—a tool which jams 12 different electrical signal and frequency instruments into a single device—ended up being useful on Earth, with Apple, NASA and Texas Instruments employing the tool to ensure that the electronics they’re developing work.
Now Liquid Instruments’ chief strategy officer, Wuchenich was a graduate student at Australian National University, working on creating a tool called a phasemeter to measure gravitational waves in space, something only of use to high-level researchers. But in conducting the routine electrical measurements required for her research, she encountered a problem:
Every time she wanted to measure voltage over time, signal frequency or signal transmission, Wuchenich had to rely on separate devices with separate software and user interfaces, each with hefty price tags. To avoid this headache, Wuchenich programmed the high-tech phasemeter to do multiple kinds of measurements. In so doing, Wuchenich landed on a universally viable application for an otherwise esoteric product.
Over three years, a twelve-person founding team—consisting of Wuchenich, her lab mates and principal investigator CEO Daniel Shaddock—turned prototype into product. In 2017, Liquid Instruments began selling its device, an 8-inch tool dubbed Moku:Lab that the company argues is not only more efficient than the competition, but cheaper. Moku:Lab costs $6,500, whereas the tools the device replaces cost up to $60,000, the company estimates. Shaddock says the product has the potential to fundamentally change the test and measurement industry.
“In the old days you had a typewriter for writing letters and a calculator for calculating. And they did the job pretty well. Then along came the computer, and it can write letters, it can calculate things, but it can do a whole lot more,” says Shaddock. “We’ve stumbled upon the formula for the computer for the test and measurement industry.”
So far, investors and scientists are buying it. The startup has raised $10.1 million from Anzu Partners, ANU Connect Ventures and Australian Capital Ventures Limited at a valuation of $33.7 million, with its 2018 revenue coming to around $750,000, according to Wuchenich. And Liquid boasts some big-name customers, including NASA, Texas Instruments, Apple and Nvidia.
Despite this early success, Robert W. Baird & Co. analyst Richard Eastman says Liquid Instruments faces a tough challenge breaking into an oligopoly dominated by five major companies—Keysight, Rohde & Schwarz, Tektronix, National Instruments and Anritsu. With several of these large players also selling single pieces of hardware that can make multiple measurements, Eastman is skeptical Liquid Instruments can make a dent. “I’m not sure it looks disruptive,” Eastman says.
Also, Liquid Instruments will need to prove it offers comparable precision to its rivals. J. Max Cortner, president of the Instrument & Measurement Society, says while Liquid Instruments offers a unique product, its specs are in mainstream ranges, which may not be good enough for its customer base of highly trained researchers. “That’s going to be their dividing line, their frontier. How do they expand this easy-to-use concept into the physical extremes?” Cortner says.
Wuchenich is hoping Moku:Lab’s ready-to-use software and a specialized computer chip called an FPGA will separate it from the competition. She notes whatever Liquid Instruments loses on precision, it more than compensates with its low price point. “Bottom line—customers don’t want/can’t afford to overpay for specs they don’t need,” she wrote in an email.
It’ll be an uphill battle for a small startup like Liquid Instruments to compete with behemoths whose customers have been loyal for decades. But for Colonel Brian Neff, who heads the department of electrical engineering at the U.S. Air Force Academy and uses Moku:Lab to train his students, Liquid Instruments is a formidable challenger.
“There are advantages to this new way of thinking that I’d love to see some of the other players adopt, and if they don’t adopt, then I think that’s just more promising for a company like Liquid Instruments to be able to come in and innovate a solution that hasn’t really been done to this point,” Neff says.
I cover billionaires and venture capital for Forbes. I’ve covered startups and debates in the business world for Inc. and breaking news for The Associated Press, WBUR and Metro Boston. I recently graduated from Tufts University, where I served as editor-in-chief of The Tufts Daily.
Compared to the unsolved mysteries of the universe, far less gets said about one of the most profound facts to have crystallized in physics over the past half-century: To an astonishing degree, nature is the way it is because it couldn’t be any different. “There’s just no freedom in the laws of physics that we have,” said Daniel Baumann, a theoretical physicist at the University of Amsterdam.
Since the 1960s, and increasingly in the past decade, physicists like Baumann have used a technique known as the “bootstrap” to infer what the laws of nature must be. This approach assumes that the laws essentially dictate one another through their mutual consistency — that nature “pulls itself up by its own bootstraps.” The idea turns out to explain a huge amount about the universe.
When bootstrapping, physicists determine how elementary particles with different amounts of “spin,” or intrinsic angular momentum, can consistently behave. In doing this, they rediscover the four fundamental forces that shape the universe. Most striking is the case of a particle with two units of spin: As the Nobel Prize winner Steven Weinberg showed in 1964, the existence of a spin-2 particle leads inevitably to general relativity — Albert Einstein’s theory of gravity. Einstein arrived at general relativity through abstract thoughts about falling elevators and warped space and time, but the theory also follows directly from the mathematically consistent behavior of a fundamental particle.
“I find this inevitability of gravity [and other forces] to be one of the deepest and most inspiring facts about nature,” said Laurentiu Rodina, a theoretical physicist at the Institute of Theoretical Physics at CEA Saclay who helped to modernize and generalize Weinberg’s proof in 2014. “Namely, that nature is above all self-consistent.”
How Bootstrapping Works
A particle’s spin reflects its underlying symmetries, or the ways it can be transformed that leave it unchanged. A spin-1 particle, for instance, returns to the same state after being rotated by one full turn. A spin-1/2 particle must complete two full rotations to come back to the same state, while a spin-2 particle looks identical after just half a turn. Elementary particles can only carry 0, 1/2, 1, 3/2 or 2 units of spin.
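This rotation behavior can be illustrated with a toy calculation (not bootstrap math): under a rotation by angle θ about the z-axis, a state of definite spin projection m picks up the phase factor exp(−imθ).

```python
# Toy illustration of how spin determines behavior under rotations.
import cmath

def rotate_phase(m, theta):
    """Phase picked up by a state of spin projection m, rotated by theta."""
    return cmath.exp(-1j * m * theta)

tau = 2 * cmath.pi  # one full turn

# Spin-1 (m = 1): back to itself after one full turn.
print(rotate_phase(1, tau))
# Spin-1/2 (m = 1/2): picks up a minus sign after one full turn...
print(rotate_phase(0.5, tau))
# ...and needs two full turns to return to the same state.
print(rotate_phase(0.5, 2 * tau))
# Spin-2 (m = 2): already identical after just half a turn.
print(rotate_phase(2, tau / 2))
```

The minus sign after one turn is exactly why spin-1/2 particles need two full rotations, while spin-2 states repeat after half a turn.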
To figure out what behavior is possible for particles of a given spin, bootstrappers consider simple particle interactions, such as two particles annihilating and yielding a third. The particles’ spins place constraints on these interactions. An interaction of spin-2 particles, for instance, must stay the same when all participating particles are rotated by 180 degrees, since they’re symmetric under such a half-turn.
Interactions must obey a few other basic rules: Momentum must be conserved; the interactions must respect locality, which dictates that particles scatter by meeting in space and time; and the probabilities of all possible outcomes must add up to 1, a principle known as unitarity. These consistency conditions translate into algebraic equations that the particle interactions must satisfy. If the equation corresponding to a particular interaction has solutions, then these solutions tend to be realized in nature.
For example, consider the case of the photon, the massless spin-1 particle of light and electromagnetism. For such a particle, the equation describing four-particle interactions — where two particles go in and two come out, perhaps after colliding and scattering — has no viable solutions. Thus, photons don’t interact in this way. “This is why light waves don’t scatter off each other and we can see over macroscopic distances,” Baumann explained. The photon can participate in interactions involving other types of particles, however, such as spin-1/2 electrons. These constraints on the photon’s interactions lead to Maxwell’s equations, the 154-year-old theory of electromagnetism.
Or take gluons, particles that convey the strong force that binds atomic nuclei together. Gluons are also massless spin-1 particles, but they represent the case where there are multiple types of the same massless spin-1 particle. Unlike the photon, gluons can satisfy the four-particle interaction equation, meaning that they self-interact. Constraints on these gluon self-interactions match the description given by quantum chromodynamics, the theory of the strong force.
A third scenario involves spin-1 particles that have mass. Mass came about when a symmetry broke during the universe’s birth: A constant — the value of the omnipresent Higgs field — spontaneously shifted from zero to a positive number, imbuing many particles with mass. The breaking of the Higgs symmetry created massive spin-1 particles called W and Z bosons, the carriers of the weak force that’s responsible for radioactive decay.
Then “for spin-2, a miracle happens,” said Adam Falkowski, a theoretical physicist at the Laboratory of Theoretical Physics in Orsay, France. In this case, the solution to the four-particle interaction equation at first appears to be beset with infinities. But physicists find that this interaction can proceed in three different ways, and that mathematical terms related to the three different options perfectly conspire to cancel out the infinities, which permits a solution.
That solution is the graviton: a spin-2 particle that couples to itself and all other particles with equal strength. This evenhandedness leads straight to the central tenet of general relativity: the equivalence principle, Einstein’s postulate that gravity is indistinguishable from acceleration through curved space-time, and that gravitational mass and intrinsic mass are one and the same. Falkowski said of the bootstrap approach, “I find this reasoning much more compelling than the abstract one of Einstein.”
Thus, by thinking through the constraints placed on fundamental particle interactions by basic symmetries, physicists can understand the existence of the strong and weak forces that shape atoms, and the forces of electromagnetism and gravity that sculpt the universe at large.
In addition, bootstrappers find that many different spin-0 particles are possible. The only known example is the Higgs boson, the particle associated with the symmetry-breaking Higgs field that imbues other particles with mass. A hypothetical spin-0 particle called the inflaton may have driven the initial expansion of the universe. These particles’ lack of angular momentum means that fewer symmetries restrict their interactions. Because of this, bootstrappers can infer less about nature’s governing laws, and nature itself has more creative license.
Spin-1/2 matter particles also have more freedom. These make up the family of massive particles we call matter, and they are individually differentiated by their masses and couplings to the various forces. Our universe contains, for example, spin-1/2 quarks that interact with both gluons and photons, and spin-1/2 neutrinos that interact with neither.
The spin spectrum stops at 2 because the infinities in the four-particle interaction equation kill off all massless particles that have higher spin values. Higher-spin states can exist if they’re extremely massive, and such particles do play a role in quantum theories of gravity such as string theory. But higher-spin particles can’t be detected, and they can’t affect the macroscopic world.
Spin-3/2 particles could complete the 0, 1/2, 1, 3/2, 2 pattern, but only if “supersymmetry” is true in the universe — that is, if every force particle with integer spin has a corresponding matter particle with half-integer spin. In recent years, experiments have ruled out many of the simplest versions of supersymmetry. But the gap in the spin spectrum strikes some physicists as a reason to hold out hope that supersymmetry is true and spin-3/2 particles exist.
In his work, Baumann applies the bootstrap to the beginning of the universe. A recent Quanta article described how he and other physicists used symmetries and other principles to constrain the possibilities for those first moments.
It’s “just aesthetically pleasing,” Baumann said, “that the laws are inevitable — that there is some inevitability of the laws of physics that can be summarized by a short handful of principles that then lead to building blocks that then build up the macroscopic world.”
If you’ve ever heard of Albert Einstein, chances are you know at least one equation he’s famous for deriving: E = mc^2. This simple equation details a relationship between the energy (E) of a system, its rest mass (m), and a fundamental constant that relates the two, the speed of light squared (c^2). Despite the fact that this equation is one of the simplest ones you can write down, what it means is dramatic and profound.
At a fundamental level, there is an equivalence between the mass of an object and the inherent energy stored within it. Mass is only one form of energy among many, such as electrical, thermal, or chemical energy, and therefore energy can be transformed from any of these forms into mass, and vice versa. The profound implications of Einstein’s equation touch us in many ways in our day-to-day lives. Here are the five lessons everyone should learn.
This iron-nickel meteorite, examined and photographed by Opportunity, represents the first such object ever found on the Martian surface. If you were to take this object and chop it up into its individual, constituent protons, neutrons, and electrons, you would find that the whole is actually less massive than the sum of its parts.
NASA / JPL / Cornell
1.) Mass is not conserved. When you think about the things that change versus the things that stay the same in this world, mass is one of those quantities we typically hold constant without thinking about it too much. If you take a block of iron and chop it up into a bunch of iron atoms, you fully expect that the whole equals the sum of its parts. That assumption seems obviously true, but it only holds if mass is conserved.
In the real world, though, according to Einstein, mass is not conserved at all. If you were to take an iron atom, containing 26 protons, 30 neutrons, and 26 electrons, and were to place it on a scale, you’d find some disturbing facts.
- An iron atom with all of its electrons weighs slightly less than an iron nucleus and its electrons do separately.
- An iron nucleus weighs significantly less than 26 protons and 30 neutrons do separately.
- And if you try to fuse an iron nucleus into a heavier one, it will require you to input more energy than you get out.
Iron-56 may be the most tightly-bound nucleus, with the greatest amount of binding energy per nucleon. In order to get there, though, you have to build up element-by-element. Deuterium, the first step up from free protons, has an extremely low binding energy, and thus is easily destroyed by relatively modest-energy collisions.
Each one of these facts is true because mass is just another form of energy. When you create something that’s more energetically stable than the raw ingredients that it’s made from, the process of creation must release enough energy to conserve the total amount of energy in the system.
When you bind an electron to an atom or molecule, or allow those electrons to transition to the lowest-energy state, those binding transitions must give off energy, and that energy must come from somewhere: the mass of the combined ingredients. This is even more severe for nuclear transitions than it is for atomic ones, with the former class typically being about 1000 times more energetic than the latter class.
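This binding-energy bookkeeping can be checked numerically using tabulated atomic masses (a sketch; the mass values below are rounded, and 1 u is equivalent to 931.494 MeV). Atomic masses include the electrons, which cancel between the two sides of each comparison:

```python
# Nuclear mass deficits from tabulated atomic masses (in atomic mass units).
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV
M_H1   = 1.0078250   # hydrogen-1 atom
M_N    = 1.0086649   # free neutron
M_H2   = 2.0141018   # deuterium atom
M_FE56 = 55.9349375  # iron-56 atom

def binding_energy_mev(z, n, atomic_mass):
    """Total binding energy: mass of the parts minus mass of the bound atom."""
    return (z * M_H1 + n * M_N - atomic_mass) * U_TO_MEV

be_d = binding_energy_mev(1, 1, M_H2)
be_fe = binding_energy_mev(26, 30, M_FE56)
print(f"deuterium: {be_d:.2f} MeV total, {be_d / 2:.2f} MeV per nucleon")
print(f"iron-56:   {be_fe:.1f} MeV total, {be_fe / 56:.2f} MeV per nucleon")
```

Deuterium comes out at about 1.1 MeV per nucleon versus roughly 8.8 MeV per nucleon for iron-56, confirming both that bound nuclei weigh less than their parts and that iron-56 sits near the top of the binding-energy curve.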
In fact, leveraging the consequences of E = mc^2 is how we get the second valuable lesson out of it.
Countless scientific tests of Einstein’s general theory of relativity have been performed, subjecting the idea to some of the most stringent constraints ever obtained by humanity. Einstein’s first solution was for the weak-field limit around a single mass, like the Sun; he applied these results to our Solar System with dramatic success. We can view this orbit as Earth (or any planet) being in free-fall around the Sun, traveling in a straight-line path in its own frame of reference. All masses and all sources of energy contribute to the curvature of spacetime.
LIGO scientific collaboration / T. Pyle / Caltech / MIT
2.) Energy is conserved, but only if you account for changing masses. Imagine the Earth as it orbits the Sun. Our planet orbits quickly: with an average speed of around 30 km/s, the speed required to keep it in a stable, elliptical orbit at an average distance of 150,000,000 km (93 million miles) from the Sun. If you put the Earth and Sun both on a scale, independently and individually, you would find that they weighed more than the Earth-Sun system as it is right now.
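To get a feel for how small this gravitational mass deficit is, here is a sketch that treats Earth’s orbit as circular, where the binding energy is GMm/(2a) and the equivalent mass is that energy divided by c^2 (constants are approximate):

```python
# "Missing mass" of the bound Earth-Sun system.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_EARTH = 5.972e24 # kg
A = 1.496e11       # orbital radius, m
C = 2.998e8        # speed of light, m/s

binding_energy = G * M_SUN * M_EARTH / (2 * A)  # ~2.7e33 J
mass_deficit = binding_energy / C**2            # ~3e16 kg

print(f"binding energy: {binding_energy:.1e} J")
print(f"mass deficit:   {mass_deficit:.1e} kg")
```

The deficit is about 3 × 10^16 kg: enormous in absolute terms, yet only a few parts in a trillion of Earth’s mass, which is why we never notice it on everyday scales.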
When you have any attractive force that binds two objects together — whether that’s the electric force holding an electron in orbit around a nucleus, the nuclear force holding protons and neutrons together, or the gravitational force holding a planet to a star — the whole is less massive than the individual parts. And the more tightly you bind these objects together, the more energy the binding process emits, and the lower the rest mass of the end product.
Whether in an atom, molecule, or ion, the transitions of electrons from a higher energy level to a lower energy level will result in the emission of radiation at a very particular wavelength. This produces the phenomenon we see as emission lines, and is responsible for the variety of colors we see in a fireworks display. Even atomic transitions such as this must conserve energy, and that means losing mass in the correct proportion to account for the energy of the produced photon.
When you bring a free electron in from a large distance away to bind to a nucleus, it’s a lot like bringing in a free-falling comet from the outer reaches of the Solar System to bind to the Sun: unless it loses energy, it will come in, make a close approach, and slingshot back out again.
However, if there’s some other way for the system to shed energy, things can become more tightly bound. Electrons do bind to nuclei, but only if they emit photons in the process. Comets can enter stable, periodic orbits, but only if another planet steals some of their kinetic energy. And protons and neutrons can bind together in large numbers, producing a much lighter nucleus and emitting high-energy photons (and other particles) in the process. That last scenario is at the heart of perhaps the most valuable and surprising lesson of all.
A composite of 25 images of the Sun, showing solar outburst/activity over a 365 day period. Without the right amount of nuclear fusion, which is made possible through quantum mechanics, none of what we recognize as life on Earth would be possible. Over its history, approximately 0.03% of the mass of the Sun, or around the mass of Saturn, has been converted into energy via E = mc^2.
NASA / Solar Dynamics Observatory / Atmospheric Imaging Assembly / S. Wiessinger; post-processing by E. Siegel
3.) Einstein’s E = mc^2 is responsible for why the Sun (like any star) shines. Inside the core of our Sun, where temperatures exceed a critical value of 4,000,000 K (and climb to nearly four times that), the nuclear reactions powering our star take place. Protons are fused together under such extreme conditions that they can form a deuteron — a bound state of a proton and neutron — while emitting a positron and a neutrino to conserve energy.
Additional protons and deuterons can then bombard the newly formed particle, fusing these nuclei in a chain reaction until helium-4, with two protons and two neutrons, is created. This process occurs naturally in all main-sequence stars, and is where the Sun gets its energy from.
The proton-proton chain is responsible for producing the vast majority of the Sun’s power. Fusing two He-3 nuclei into He-4 is perhaps the greatest hope for terrestrial nuclear fusion, and a clean, abundant, controllable energy source, but all of these reactions must occur in the Sun.
Borb / Wikimedia Commons
If you were to put this end product of helium-4 on a scale and compare it to the four protons that were used up to create it, you’d find that it was about 0.7% lighter: helium-4 has only 99.3% of the mass of four protons. Even though two of these protons have converted into neutrons, the binding energy is so strong that approximately 28 MeV of energy gets emitted in the process of forming each helium-4 nucleus.
In order to produce the energy we see it produce, the Sun needs to fuse 4 × 10^38 protons into helium-4 every second. The result of that fusion is that 596 million tons of helium-4 are produced with each second that passes, while 4 million tons of mass are converted into pure energy via E = mc^2. Over the lifetime of the entire Sun, it’s lost approximately the mass of the planet Saturn due to the nuclear reactions in its core.
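The arithmetic behind those two paragraphs is easy to check. The sketch below uses standard rest energies for the proton, electron, and helium-4 nucleus (my assumed values, not figures from the article) to recover the ~0.7% deficit, the ~28 MeV per fusion, and a power output of the right magnitude:

```python
# Mass deficit of the proton-proton chain, from standard rest energies (MeV).
M_PROTON = 938.272      # proton
M_HE4 = 3727.379        # helium-4 nucleus
M_ELECTRON = 0.511      # electron (= positron)

# 4 protons -> He-4 + 2 positrons + 2 neutrinos; the two positrons then
# annihilate with two ambient electrons, adding 4 electron rest energies.
deficit = 4 * M_PROTON - M_HE4          # ~25.7 MeV of binding
released = deficit + 4 * M_ELECTRON     # ~27.8 MeV: "approximately 28 MeV"

fraction = deficit / (4 * M_PROTON)     # ~0.0069: helium-4 is ~0.7% lighter

# Power from fusing 4e38 protons (1e38 helium-4 nuclei) per second:
MEV_TO_J = 1.602e-13
luminosity = 1e38 * released * MEV_TO_J  # ~4e26 W, the Sun's ballpark output
```

The result lands within about 15% of the measured solar luminosity of 3.8 × 10^26 W, which is as close as round-number inputs can get you.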
A nuclear-powered rocket engine, preparing for testing in 1967. This rocket is powered by Mass/Energy conversion, and is underpinned by the famous equation E=mc^2.
4.) Converting mass into energy is the most energy-efficient process in the Universe. What could be better than 100% efficiency? Absolutely nothing; 100% is the greatest energy gain you could ever hope for out of a reaction.
Well, if you look at the equation E = mc^2, it tells you that you can convert mass into pure energy, and tells you how much energy you’ll get out. For every 1 kilogram of mass that you convert, you get a whopping 9 × 10^16 joules of energy out: the equivalent of 21 Megatons of TNT. Whenever we experience a radioactive decay, a fission or fusion reaction, or an annihilation event between matter and antimatter, the mass of the reactants is larger than the mass of the products; the difference is how much energy is released.
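To make those numbers concrete, here is the conversion run in both directions. The joules-per-megaton value is the usual TNT convention, an assumption on my part:

```python
# Converting mass to energy and back, with standard constants.
C = 2.998e8                  # speed of light, m/s
MEGATON_TNT = 4.184e15       # joules per megaton of TNT (standard convention)

energy_per_kg = 1.0 * C**2               # ~9e16 J from 1 kilogram
yield_mt = energy_per_kg / MEGATON_TNT   # ~21 megatons of TNT

# Run in reverse for a 10.4-megaton explosion (the Ivy Mike yield):
mass_converted = 10.4 * MEGATON_TNT / C**2   # ~0.5 kg of matter
```

Half a kilogram of converted mass for a 10-megaton blast: the same back-of-the-envelope figure quoted for the first hydrogen bomb test.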
Nuclear weapon test Mike (yield 10.4 Mt) on Enewetak Atoll. The test was part of the Operation Ivy. Mike was the first hydrogen bomb ever tested. A release of this much energy corresponds to approximately 500 grams of matter being converted into pure energy: an astonishingly large explosion for such a tiny amount of mass.
National Nuclear Security Administration / Nevada Site Office
In all cases, the energy that comes out — in all its combined forms — is exactly equal to the energy equivalent of the mass loss between products and reactants. The ultimate example is the case of matter-antimatter annihilation, where a particle and its antiparticle meet and produce two photons that carry away the exact rest energy of the two particles.
Take an electron and a positron and let them annihilate, and you’ll always get two photons of exactly 511 keV of energy out. It’s no coincidence that the rest mass of the electron and the positron is 511 keV/c^2 apiece: the same value, just accounting for the conversion of mass into energy by a factor of c^2. Einstein’s most famous equation teaches us that any particle-antiparticle annihilation has the potential to be the ultimate energy source: a method to convert the entirety of the mass of your fuel into pure, useful energy.
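That 511 keV figure is nothing more than the electron’s rest mass pushed through E = mc^2, as a quick check with standard values shows:

```python
M_ELECTRON = 9.109e-31   # electron rest mass, kg
C = 2.998e8              # speed of light, m/s
J_PER_KEV = 1.602e-16    # joules per keV

photon_energy = M_ELECTRON * C**2 / J_PER_KEV  # ~511 keV per photon
pair_total = 2 * photon_energy                 # both photons together
```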
The top quark is the most massive particle known in the Standard Model, and is also the shortest-lived of all the known particles, with a mean lifetime of 5 × 10^-25 s. When we produce it in particle accelerators by having enough free energy available to create them via E = mc^2, we produce top-antitop pairs, but they do not live for long enough to form a bound state. They exist only as free quarks, and then decay.
Raeky / Wikimedia Commons
5.) You can use energy to create matter — massive particles — out of nothing but pure energy. This is perhaps the most profound lesson of all. If you took two billiard balls and smashed one into the other, you’d always expect the results to have something in common: they’d always result in two and only two billiard balls.
With particles, though, the story is different. If you take two electrons and smash them together, you’ll get two electrons out, but with enough energy, you might also get a new matter-antimatter pair of particles out, too. In other words, you will have created two new, massive particles where none existed previously: a matter particle (electron, muon, proton, etc.) and an antimatter particle (positron, antimuon, antiproton, etc.).
Whenever two particles collide at high enough energies, they have the opportunity to produce additional particle-antiparticle pairs, or new particles as the laws of quantum physics allow. Einstein’s E = mc^2 is indiscriminate this way. In the early Universe, enormous numbers of neutrinos and antineutrinos were produced this way in its first fraction of a second, but they neither decay nor are efficient at annihilating away.
E. Siegel / Beyond The Galaxy
This is how particle accelerators successfully create the new particles they’re searching for: by providing enough energy to create those particles (and, if necessary, their antiparticle counterparts) from a rearrangement of Einstein’s most famous equation. Given enough free energy, you can create any particle(s) with mass m, so long as there’s enough available energy to make that particle via m = E/c^2. If you satisfy all the quantum rules and have enough energy to get there, you have no choice but to create new particles.
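Creation is annihilation run backwards: to make a particle-antiparticle pair at rest, you need at least twice the particle’s rest energy. A minimal sketch, with standard rest energies as my assumed inputs:

```python
# Minimum (threshold) energy to create a particle-antiparticle pair at rest.
def pair_threshold(rest_energy_mev):
    """E = 2 m c^2, expressed directly in rest-energy units (MeV)."""
    return 2.0 * rest_energy_mev

electron_pair = pair_threshold(0.511)    # ~1.02 MeV
muon_pair = pair_threshold(105.66)       # ~211 MeV
top_pair = pair_threshold(172_760.0)     # ~3.5e5 MeV: why top quarks need colliders
```

Any collision energy above the threshold can go into the pair’s kinetic energy; below it, the pair simply cannot appear.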
The production of matter/antimatter pairs (left) from pure energy is a completely reversible reaction (right), with matter/antimatter annihilating back to pure energy. When a photon is created and then destroyed, it experiences those events simultaneously, while being incapable of experiencing anything else at all.
Dmitri Pogosyan / University of Alberta
Einstein’s E = mc^2 is a triumph for the simple rules of fundamental physics. Mass isn’t a fundamental quantity, but energy is, and mass is just one possible form of energy. Mass can be converted into energy and back again, and underlies everything from nuclear power to particle accelerators to atoms to the Solar System. So long as the laws of physics are what they are, it couldn’t be any other way. As Einstein himself said:
It followed from the special theory of relativity that mass and energy are both but different manifestations of the same thing — a somewhat unfamiliar conception for the average mind.
More than 60 years after Einstein’s death, it’s long past time to bring his famous equation down to Earth. The laws of nature aren’t just for physicists; they’re for every curious person on Earth to experience, appreciate, and enjoy.
Starts With A Bang is dedicated to exploring the story of what we know about the Universe as well as how we know it, with a focus on physics, astronomy, and the scientific story that the Universe tells us about itself. Written by Ph.D. scientists and edited/created by astrophysicist Ethan Siegel, our goal is to share the joy, wonder and awe of scientific discovery.
The phenomenon known as “tunneling” is one of the best-known predictions of quantum physics, because it so dramatically confounds our classical intuition for how objects ought to behave. If you create a narrow region of space that a particle would have to have a relatively high energy to enter, classical reasoning tells us that low-energy particles heading toward that region should reflect off the boundary with 100% probability. Instead, there is a tiny chance of finding those particles on the far side of the region, with no loss of energy. It’s as if they simply evaded the “barrier” region by making a “tunnel” through it.
It’s very important to note that this phenomenon is absolutely and unquestionably real, demonstrated in countless ways. The most dramatic of these is sunlight— the Sun wouldn’t be able to fuse hydrogen into helium without quantum tunneling— but it’s also got more down-to-earth technological applications. Tunneling serves as the basis for Scanning Tunneling Microscopy, which uses the tunneling of electrons across a tiny gap between a sharp tip and a surface to produce maps of that surface that can readily resolve single atoms. It’s also essential for the Josephson effect, which is the basis of superconducting detectors of magnetic fields and some of the superconducting systems proposed for quantum computing.
So, there is absolutely no debate among physicists about whether quantum tunneling is a thing that happens. Physicists get a bit twitchy without something to argue over, though, and you don’t have to dig into tunneling (heh) very far to find a disputed question, namely “How long does quantum tunneling take?”
This is an active area of research, and one I’ve written about before. The tricky part is that the distances involved in quantum tunneling are necessarily very small, making the times involved extremely short. It’s also very difficult to ensure that you know where and when the process starts, because, again, the whole business needs to be quantum, with all the measurement and uncertainty issues that brings in.
In the old post linked above, I talked about a couple of experiments involving intense and ultra-fast laser pulses, which rip an electron out of an atom, and then deflect its path in a direction that varies in time. This is a really clever trick, and the experiments are impressive technical achievements; unfortunately, they don’t entirely agree, with some experiments suggesting a short but definitely not zero tunneling time, and others finding a time so short it might as well be zero. So the question isn’t completely settled…
The latest contribution to the ongoing argument showed up on the arxiv just last night, in the form of a new tunneling-time paper from Aephraim Steinberg’s group at the University of Toronto. This one uses the internal states of atoms tunneling through a barrier to make a kind of clock that only “ticks” while the atoms are inside the barrier region.
As with so many things involving atomic physics these days, the key enabling technology here is Bose-Einstein Condensation. They’re able to measure the tunneling of rubidium atoms (which are many thousands of times bigger and heavier than the electrons in the pulsed-laser experiments) across a barrier a bit more than a micron thick (several thousand times the distance in the pulsed-laser experiments) because the atoms are incredibly cold and slow-moving. The temperature of their atom cloud is just a few billionths of a degree above absolute zero, and they push the atoms into the barrier at speeds of just a few millimeters per second.
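To get a feel for those scale comparisons, here is a back-of-the-envelope sketch. The masses are standard values; the ~1.3 µm barrier and ~2 mm/s approach speed are round numbers I am assuming for illustration, not figures from the paper:

```python
# Rough scale comparisons for the cold-atom tunneling experiment.
M_ELECTRON = 9.109e-31            # electron mass, kg
M_RB87 = 86.909 * 1.6605e-27      # rubidium-87 mass, kg

mass_ratio = M_RB87 / M_ELECTRON  # ~1.6e5: "many thousands of times heavier"

barrier = 1.3e-6                  # optical barrier thickness, m (assumed)
atomic_gap = 1e-10                # atomic-scale tunneling distance, m
length_ratio = barrier / atomic_gap   # ~1e4: "several thousand times"

speed = 2e-3                      # atoms pushed in at a few mm/s (assumed)
crossing_time = barrier / speed   # ~0.65 ms: sub-millisecond timescales
```

Notice that the classical crossing time at that speed is already a fraction of a millisecond, which sets the natural timescale the experiment has to resolve.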
The big advantage this offers is that unlike electrons, which are point particles, atoms have complicated internal structure and can be put in a bunch of different states. This lets them make an energy barrier out of a thin sheet of laser light that increases the energy of the atom in the light. They can control the energy shift by adjusting the laser parameters to get any height they want— they can even “turn off” the barrier without turning off the laser, by making a small shift in the laser frequency, which is crucial for establishing the timing.
The laser also changes the internal state of the atoms in a way that varies in time, letting them use the atoms as a kind of clock. They prepare a sample that’s exclusively in one particular state, and set the laser up in such a way that it drives a slow evolution into a different internal state. They separate the two different states on the far side of the barrier, and measure the probability of changing states. Once they have that, it’s relatively easy to convert that into a measurement of how much time the atoms spent interacting with the laser.
They end up with a number that’s definitely not zero— between 0.55 ms and 0.69 ms— that agrees well with one of the quantum methods for predicting tunneling time, and disagrees with a “semiclassical” model very badly. It’s always nice to get this kind of discrimination between models; their method also gives them a nice way to separate out the perturbation that comes from making the measurement from the “clock” they’re using, which is a nice bonus.
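The probability-to-time conversion can be illustrated with a bare-bones two-level (Rabi) model. To be clear, this is my own simplified stand-in, not the group’s actual analysis, and the drive rate is an arbitrary assumed number:

```python
import math

# While an atom sits inside the barrier, the laser slowly drives it from
# state |1> toward state |2>; after a time t the transfer probability is
# P = sin^2(omega * t / 2). Measuring P therefore pins down t.
def time_from_probability(p, omega):
    """Invert P = sin^2(omega * t / 2) for small rotations (omega * t < pi)."""
    return 2.0 * math.asin(math.sqrt(p)) / omega

OMEGA = 2 * math.pi * 100.0          # assumed drive rate, rad/s
t_inside = 0.61e-3                   # suppose the atom spends 0.61 ms inside

p_measured = math.sin(OMEGA * t_inside / 2) ** 2   # what you'd observe
t_inferred = time_from_probability(p_measured, OMEGA)  # recovers 0.61 ms
```

The design choice matters: because the laser only addresses atoms inside the barrier, the "clock" only ticks during the tunneling itself.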
As a fellow cold-atom guy, I find this experiment very impressive and convincing, and there’s potential to extend this to other cool tunneling-related measurements, maybe even tracking the atoms as they move through the barrier. Physicists being physicists, though, I expect the argument over what, exactly, this all means will continue— I’d be a little surprised if zero-tunneling-time partisans gave up without finding some feature of this system to claim as a loophole.
Arcane disputes aside, though, it’s worth taking a step back to note how absolutely incredible it is that we can even have a sensible conversation about something as arcane as the amount of time a tunneling atom spends in places where classical physics says it can’t possibly be. The technology we’ve developed for probing the weirdest of quantum phenomena over the last few decades is mind-boggling, and continues to get better all the time.
Disclosure: Steinberg and I worked in the same research group at NIST in the late 1990’s— he was a postdoc working on BEC and I was a grad student on a different project. I actually had dinner with him a week ago in Toronto, but we didn’t discuss this experiment.
I’m an Associate Professor in the Department of Physics and Astronomy at Union College, and I write books about science for non-scientists. I have a BA in physics from Williams College and a Ph.D. in Chemical Physics from the University of Maryland, College Park (studying laser cooling at the National Institute of Standards and Technology in the lab of Bill Phillips, who shared the 1997 Nobel in Physics). I was a post-doc at Yale, and have been at Union since 2001. My books _How to Teach Physics to Your Dog_ and _How to Teach Relativity to Your Dog_ explain modern physics through imaginary conversations with my German Shepherd; _Eureka: Discovering Your Inner Scientist_ (Basic, 2014), explains how we use the process of science in everyday activities, and my latest, _Breakfast With Einstein: The Exotic Physics of Everyday Objects_ (BenBella 2018) explains how quantum phenomena manifest in the course of an ordinary morning. I live in Niskayuna, NY with my wife, Kate Nepveu, our two kids, and Charlie the pupper.
An illustration of our cosmic history, from the Big Bang until the present, within the context of the expanding Universe. We cannot be certain, despite what many have contended, that the Universe began from a singularity. We can, however, break the illustration you see into the different eras based on properties the Universe had at those particular times. We are already in the Universe’s 6th and final era.
NASA / WMAP science team
The Universe is not the same today as it was yesterday. With each moment that goes by, a number of subtle but important changes occur, even if many of them are imperceptible on measurable, human timescales. The Universe is expanding, which means that the distances between the largest cosmic structures are increasing with time.
A second ago, the Universe was slightly smaller; a second from now, the Universe will be slightly larger. But those subtle changes both build up over large, cosmic timescales, and affect more than just distances. As the Universe expands, the relative importance of radiation, matter, neutrinos, and dark energy all change. The temperature of the Universe changes. And what you’d see in the sky would change dramatically as well. All told, there are six different eras we can break the Universe into, and we’re already in the final one.
The reason for this can be understood from the graph above. Everything that exists in our Universe has a certain amount of energy in it: matter, radiation, dark energy, etc. As the Universe expands, the volume that these forms of energy occupy changes, and each one will have its energy density evolve differently. In particular, if we define the scale of the expanding Universe by the variable a (the scale factor), then:
matter will have its energy density evolve as 1/a^3, since (for matter) density is just mass over volume, and mass can easily be converted to energy via E = mc^2,
radiation will have its energy density evolve as 1/a^4, since (for radiation) the number density is the number of particles divided by volume, and the energy of each individual photon stretches as the Universe expands, adding an additional factor of 1/a relative to matter,
and dark energy is a property of space itself, so its energy density remains constant (1/a^0), irrespective of the Universe’s expansion or volume.
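Those three scaling laws fit in a few lines of code. The present-day density fractions below (~31% matter, ~0.009% radiation, ~69% dark energy, in units of the critical density) are rough standard values I am assuming, not numbers from the article:

```python
# Energy density of each component as a function of the scale factor a,
# normalized so that a = 1 today.
def densities(a):
    return {
        "matter": 0.31 / a**3,        # dilutes with volume
        "radiation": 9e-5 / a**4,     # volume dilution plus redshift
        "dark_energy": 0.69,          # constant: a property of space itself
    }

early = densities(1e-4)   # small Universe: radiation-dominated
today = densities(1.0)    # dark energy already dominates
late = densities(100.0)   # far future: dark energy utterly dominates
```

Running the same function at different scale factors reproduces the hand-off the article describes: radiation first, then matter, then dark energy forever after.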
A Universe that has been around longer, therefore, will have expanded more. It will be cooler in the future and was hotter in the past; it was gravitationally more uniform in the past and is clumpier now; it was smaller in the past and will be much, much larger in the future.
By applying the laws of physics to the Universe, and comparing the possible solutions with the observations and measurements we’ve obtained, we can determine both where we came from and where we’re headed. We can extrapolate our past history all the way back to the beginning of the hot Big Bang and even before, to a period of cosmic inflation. We can extrapolate our current Universe into the far distant future as well, and foresee the ultimate fate that awaits everything that exists.
When we draw the dividing lines based on how the Universe behaves, we find that there are six different eras that will come to pass.
Inflationary era: which preceded and set up the hot Big Bang.
Primordial Soup era: from the start of the hot Big Bang until the final transformative nuclear & particle interactions occur in the early Universe.
Plasma era: from the end of those transformative nuclear and particle interactions until the Universe cools enough to stably form neutral matter.
Dark Ages era: from the formation of neutral matter until the first stars and galaxies reionize the intergalactic medium of the Universe completely.
Stellar era: from the end of reionization until the gravity-driven formation and growth of large-scale structure ceases, when the dark energy density dominates over the matter density.
Dark Energy era: the final stage of our Universe, where the expansion accelerates and disconnected objects speed irrevocably and irreversibly away from one another.
We already entered this final era billions of years ago. Most of the important events that will define our Universe’s history have already occurred.
1.) Inflationary era. Prior to the hot Big Bang, the Universe wasn’t filled with matter, antimatter, dark matter or radiation. It wasn’t filled with particles of any type. Instead, it was filled with a form of energy inherent to space itself: a form of energy that caused the Universe to expand both extremely rapidly and relentlessly, in an exponential fashion.
It stretched the Universe, from whatever geometry it once had, into a state indistinguishable from spatially flat.
It expanded a small, causally connected patch of the Universe to one much larger than our presently visible Universe: larger than the current causal horizon.
It took any particles that may have been present and expanded the Universe so rapidly that none of them are left inside a region the size of our visible Universe.
And the quantum fluctuations that occurred during inflation created the seeds of structure that gave rise to our vast cosmic web today.
And then, abruptly, some 13.8 billion years ago, inflation ended. All of that energy, once inherent to space itself, got converted into particles, antiparticles, and radiation. With this transition, the inflationary era ended, and the hot Big Bang began.
2.) Primordial Soup era. Once the expanding Universe is filled with matter, antimatter and radiation, it’s going to cool. Whenever particles collide, they’ll produce whatever particle-antiparticle pairs are allowed by the laws of physics. The primary restriction comes only from the energies of the collisions involved, as the production is governed by E = mc2.
As the Universe cools, the energy drops, and it becomes harder and harder to create more massive particle-antiparticle pairs, but annihilations and other particle reactions continue unabated. 1-to-3 seconds after the Big Bang, the antimatter is all gone, leaving only matter behind. 3-to-4 minutes after the Big Bang, stable deuterium can form, and nucleosynthesis of the light elements occurs. And after some radioactive decays and a few final nuclear reactions, all we have left is a hot (but cooling) ionized plasma consisting of photons, neutrinos, atomic nuclei and electrons.
3.) Plasma era. Once those light nuclei form, they’re the only positively (electrically) charged objects in the Universe, and they’re everywhere. Of course, they’re balanced by an equal amount of negative charge in the form of electrons. Nuclei and electrons form atoms, and so it might seem only natural that these two species of particle would find one another immediately, forming atoms and paving the way for stars.
Unfortunately for them, they’re vastly outnumbered — by more than a billion to one — by photons. Every time an electron and a nucleus bind together, a high-enough energy photon comes along and blasts them apart. It isn’t until the Universe cools dramatically, from billions of degrees to just thousands of degrees, that neutral atoms can finally form. (And even then, it’s only possible because of a special atomic transition.)
At the beginning of the Plasma era, the Universe’s energy content is dominated by radiation. By the end, it’s dominated by normal and dark matter. This third phase takes us to 380,000 years after the Big Bang.
S. G. Djorgovski et al., Caltech Digital Media Center
4.) Dark Ages era. Filled with neutral atoms, at last, gravitation can begin the process of forming structure in the Universe. But with all these neutral atoms around, what we presently know as visible light would be invisible all throughout the sky.
Why’s that? Because neutral atoms, particularly in the form of cosmic dust, are outstanding at blocking visible light.
In order to end these dark ages, the intergalactic medium needs to be reionized. That requires enormous amounts of star-formation and tremendous numbers of ultraviolet photons, and that requires time, gravitation, and the start of the cosmic web. The first major regions of reionization take place 200-250 million years after the Big Bang, but reionization doesn’t complete, on average, until the Universe is 550 million years old. At this point, the star-formation rate is still increasing, and the first massive galaxy clusters are just beginning to form.
NASA, ESA, A. Koekemoer (STScI), M. Jauzac (Durham University), C. Steinhardt (Niels Bohr Institute), and the BUFFALO team
5.) Stellar era. Once the dark ages are over, the Universe is now transparent to starlight. The great recesses of the cosmos are now accessible, with stars, star clusters, galaxies, galaxy clusters, and the great, growing cosmic web all waiting to be discovered. The Universe is dominated, energy-wise, by dark matter and normal matter, and the gravitationally bound structures continue to grow larger and larger.
The star-formation rate rises and rises, peaking about 3 billion years after the Big Bang. At this point, new galaxies continue to form, existing galaxies continue to grow and merge, and galaxy clusters attract more and more matter into them. But the amount of free gas within galaxies begins to drop, as the enormous amounts of star-formation have used up a large amount of it. Slowly but steadily, the star-formation rate drops.
As time goes forward, the stellar death rate will outpace the birth rate, a fact made worse by the following surprise: as the matter density drops with the expanding Universe, a new form of energy — dark energy — begins to appear and dominate. 7.8 billion years after the Big Bang, distant galaxies stop slowing down in their recession from one another, and begin speeding up again. The accelerating Universe is upon us. A little bit later, 9.2 billion years after the Big Bang, dark energy becomes the dominant component of energy in the Universe. At this point, we enter the final era.
NASA & ESA
6.) Dark Energy era. Once dark energy takes over, something bizarre happens: the large-scale structure in the Universe ceases to grow. The objects that were gravitationally bound to one another before dark energy’s takeover will remain bound, but those that were not yet bound by the onset of the dark energy era will never become bound. Instead, they will simply accelerate away from one another, leading lonely existences in the great expanse of nothingness.
The individual bound structures, like galaxies and groups/clusters of galaxies, will eventually merge to form one giant elliptical galaxy. The existing stars will die; new star formation will slow down to a trickle and then stop; gravitational interactions will eject most of the stars into the intergalactic abyss. Planets will spiral into their parent stars or stellar remnants, owing to decay by gravitational radiation. Even black holes, with extraordinarily long lifetimes, will eventually decay from Hawking radiation.
Image courtesy of Jeff Bryant
In the end, only black dwarf stars and isolated masses too small to ignite nuclear fusion will remain, sparsely populated and disconnected from one another in this empty, ever-expanding cosmos. These final-state corpses will exist even googols of years onward, continuing to persist as dark energy remains the dominant factor in our Universe.
This last era, of dark energy domination, has already begun. Dark energy became important for the Universe’s expansion 6 billion years ago, and began dominating the Universe’s energy content around the time our Sun and Solar System were being born. The Universe may have six unique stages, but for the entirety of Earth’s history, we’ve already been in the final one. Take a good look at the Universe around us. It will never be this rich — or this easy to access — ever again.
Cosmic rays, which are ultra-high energy particles originating from all over the Universe, strike protons in the upper atmosphere and produce showers of new particles. The fast-moving charged particles also emit light due to Cherenkov radiation as they move faster than light travels in Earth’s atmosphere, and produce secondary particles that can be detected here on Earth.
Simon Swordy (U. Chicago), NASA
When you hold out your palm and point it towards the sky, what is it that’s interacting with your hand? You might correctly surmise that there are ions, electrons and molecules all colliding with your hand, as the atmosphere is simply unavoidable here on Earth. You might also remember that photons, or particles of light, must be striking you, too.
But there’s something more striking your hand that, without relativity, simply wouldn’t be possible. Every second, approximately one muon — the unstable, heavy cousin of the electron — passes through your outstretched palm. These muons are made in the upper atmosphere, created by cosmic rays. With a mean lifetime of 2.2 microseconds, you might think the ~100+ km journey to your hand would be impossible. Yet relativity makes it so, and the palm of your hand can prove it. Here’s how.
While cosmic ray showers are common from high-energy particles, it’s mostly the muons which make it down to Earth’s surface, where they are detectable with the right setup.
Alberto Izquierdo; courtesy of Francisco Barradas Solas
Individual, subatomic particles are almost always invisible to human eyes, as the wavelengths of light we can see are unaffected by particles passing through our bodies. But if you create a supersaturated vapor of pure alcohol, a charged particle passing through it will leave a trail that can be visually detected by even as primitive an instrument as the human eye.
As a charged particle moves through the alcohol vapor, it ionizes a path of alcohol particles, which act as centers for the condensation of alcohol droplets. The trail that results is both long enough and long-lasting enough that human eyes can see it, and the speed and curvature of the trail (if you apply a magnetic field) can even tell you what type of particle it was.
This principle was first applied in particle physics in the form of a cloud chamber.
A completed cloud chamber can be built in a day out of readily-available materials and for less than $100. You can use it to prove the validity of Einstein’s relativity, if you know what you’re doing!
Instructables user ExperiencingPhysics
Today, a cloud chamber can be built, by anyone with commonly available parts, for a day’s worth of labor and less than $100 in parts. (I’ve published a guide here.) If you put the mantle from a smoke detector inside the cloud chamber, you’ll see particles emanate from it in all directions and leave tracks in your cloud chamber.
That’s because a smoke detector’s mantle contains radioactive elements such as americium, which decays by emitting α-particles. In physics, α-particles are made up of two protons and two neutrons: they’re the same as a helium nucleus. With the low energies of the decay and the high mass of the α-particles, these particles make slow, curved tracks and can even be occasionally seen bouncing off of the cloud chamber’s bottom. It’s an easy test to see if your cloud chamber is working properly.
For an extra bonus of radioactive tracks, add the mantle of a smoke detector to the bottom of your cloud chamber, and watch the slow-moving particles emanating outward from it. Some will even bounce off the bottom!
If you build a cloud chamber like this, however, those α-particle tracks aren’t the only things you’ll see. In fact, even if you leave the chamber empty (i.e., you don’t put a source of any type inside or nearby), you’ll still see tracks: they’ll be mostly vertical and appear to be perfectly straight.
This is because of cosmic rays: high-energy particles that strike the top of Earth’s atmosphere, producing cascading particle showers. Most of the cosmic rays are made up of protons, but move with a wide variety of speeds and energies. The higher-energy particles will collide with particles in the upper atmosphere, producing particles like protons, electrons, and photons, but also unstable, short-lived particles like pions. These particle showers are a hallmark of fixed-target particle physics experiments, and they occur naturally from cosmic rays, too.
Although there are four major types of particles that can be detected in a cloud chamber, the long and straight tracks are the cosmic ray muons, which can be used to prove that special relativity is correct.
Wikimedia Commons user Cloudylabs
The thing about pions is that they come in three varieties: positively charged, neutral, and negatively charged. When you make a neutral pion, it just decays into two photons on very short (~10^-16 s) timescales. But charged pions live longer (for around 10^-8 s) and when they decay, they primarily decay into muons, which are point particles like electrons but have 206 times the mass.
Muons also are unstable, but they’re the longest-lived unstable fundamental particle as far as we know. Owing to their relatively small mass, they live for an astoundingly long 2.2 microseconds, on average. If you were to ask how far a muon could travel once created, you might think to multiply its lifetime (2.2 microseconds) by the speed of light (300,000 km/s), getting an answer of 660 meters. But that leads to a puzzle.
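That 660-meter estimate is one line of arithmetic:

```python
C = 2.998e8              # speed of light, m/s
TAU = 2.2e-6             # muon mean lifetime at rest, s

naive_range = C * TAU    # ~660 m: far short of a journey from the upper atmosphere
```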
Cosmic ray shower and some of the possible interactions. Note that if a charged pion (left) strikes a nucleus before it decays, it produces a shower, but if it decays first (right), it produces a muon that will reach the surface.
Konrad Bernlöhr of the Max-Planck-Institute at Heidelberg
I told you earlier that if you hold out the palm of your hand, roughly one muon per second passes through it. But if they can only live for 2.2 microseconds, they’re limited by the speed of light, and they’re created in the upper atmosphere (around 100 km up), how is it possible for those muons to reach us?
You might start to think of excuses. You might imagine that some of the cosmic rays have enough energy to continue cascading and producing particle showers during their entire journey to the ground, but that’s not the story the muons tell when we measure their energies: the lowest ones are still created some 30 km up. You might imagine that the 2.2 microseconds is just an average, and maybe the rare muons that live for 3 or 4 times that long will make it down. But when you do the math, only 1-in-10^50 muons would survive down to Earth; in reality, nearly 100% of the created muons arrive.
A light-clock, formed by a photon bouncing between two mirrors, will define time for any observer. Although the two observers may not agree with one another on how much time is passing, they will agree on the laws of physics and on the constants of the Universe, such as the speed of light. When relativity is applied correctly, their measurements will be found to be equivalent to one another, as the correct relativistic transformation will allow one observer to understand the observations of the other.
John D. Norton
How can we explain such a discrepancy? Sure, the muons are moving close to the speed of light, but we’re observing them from a reference frame where we’re stationary. We can measure the distance the muons travel, we can measure the time they live for, and even if we give them the benefit of the doubt and say that they’re moving at (rather than near) the speed of light, they shouldn’t even make it for 1 kilometer before decaying.
But this misses one of the key points of relativity! Unstable particles don’t experience time as you, an external observer, measures it. They experience time according to their own onboard clocks, which will run slower the closer they move to the speed of light. Time dilates for them, which means that we will observe them living longer than 2.2 microseconds from our reference frame. The faster they move, the farther we’ll see them travel.
One revolutionary aspect of relativistic motion, put forth by Einstein but previously built up by Lorentz, FitzGerald, and others, is that rapidly moving objects appear to contract in space and dilate in time. The faster you move relative to someone at rest, the more your lengths appear to be contracted, and the more time appears to dilate for the outside world. This picture of relativistic mechanics replaced the old Newtonian view of classical mechanics, and can explain the lifetime of a cosmic ray muon.
How does this work out for the muon? From its reference frame, time passes normally, so it will only live for 2.2 microseconds according to its own clocks. But it will experience reality as though it hurtles towards Earth’s surface extremely close to the speed of light, causing lengths to contract in its direction of motion.
If a muon moves at 99.999% the speed of light, every 660 meters outside of its reference frame will appear as though it’s just 3 meters in length. A journey of 100 km down to the surface would appear to be a journey of 450 meters in the muon’s reference frame, taking up just 1.5 microseconds of time according to the muon’s clock.
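The numbers above follow directly from the Lorentz factor. A quick check, using the 99.999%-of-c speed from the text:

```python
import math

C = 299_792_458.0   # speed of light, m/s

beta = 0.99999                          # speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - beta**2)  # Lorentz factor, roughly 224

# Lengths along the direction of motion contract by a factor of gamma:
print(660.0 / gamma)    # ~3 m: what 660 m looks like to the muon
print(100e3 / gamma)    # ~450 m: the full 100 km journey, contracted

# Proper time elapsed on the muon's own clock for that journey:
print((100e3 / gamma) / (beta * C))   # ~1.5 microseconds
```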
At high enough energies and velocities, relativity becomes important, allowing many more muons to survive than would without the effects of time dilation.
Frisch/Smith, Am. J. of Phys. 31 (5): 342–355 (1963) / Wikimedia Commons user D.H
This teaches us how to reconcile things for the muon: from our reference frame here on Earth, we see the muon travel 100 km in a timespan of about 334 microseconds. This is just fine, because time is dilated for the muon and lengths are contracted for it: it sees itself as traveling 450 meters in 1.5 microseconds, and hence it can remain alive all the way down to its destination of Earth’s surface.
Without the laws of relativity, this cannot be explained! But at high velocities, which correspond to high particle energies, the effects of time dilation and length contraction enable not just a few but most of the created muons to survive. This is why, even all the way down here at the surface of the Earth, one muon per second still appears to pass through your upturned, outstretched hand.
The V-shaped track in the center of the image arises from a muon decaying to an electron and two neutrinos. The high-energy track with a kink in it is evidence of a mid-air particle decay. By colliding positrons and electrons at a specific, tunable energy, muon-antimuon pairs could be produced at will. The necessary energy for making a muon/antimuon pair from high-energy positrons colliding with electrons at rest is almost identical to the energy from electron/positron collisions necessary to create a Z-boson.
The Scottish Science & Technology Roadshow
If you ever doubted relativity, it’s hard to fault you: the theory itself seems so counterintuitive, and its effects are thoroughly outside the realm of our everyday experience. But there is an experimental test you can perform right at home, cheaply and with just a single day’s effort, that allows you to see the effects for yourself.
You can build a cloud chamber, and if you do, you will see those muons. If you applied a magnetic field, you’d see those muon tracks curve according to their charge-to-mass ratio: you’d immediately know they weren’t electrons. On rare occasion, you’d even see a muon decaying in mid-air. And, finally, if you measured their energies, you’d find that they were moving ultra-relativistically, at 99.999%+ the speed of light. If not for relativity, you wouldn’t see a single muon at all.
Time dilation and length contraction are real, and the fact that muons survive, from cosmic ray showers all the way down to Earth, proves it beyond a shadow of a doubt.
One of the most astonishing facts about science is how universally applicable the laws of nature are. Every particle obeys the same rules, experiences the same forces, and sees the same fundamental constants, no matter where or when they exist. Gravitationally, every single entity in the Universe experiences, depending on how you look at it, either the same gravitational acceleration or the same curvature of spacetime, no matter what properties it possesses. At least, that’s what things are like in theory. In practice, some things are notoriously difficult to measure.
In 1915, Einstein’s theory of General Relativity gave us a brand new theory of gravity, based on the geometrical concept of curved spacetime. Matter and energy told space how to curve; curved space told matter and energy how to move. By 1922, scientists had discovered that if you fill the Universe uniformly with matter and energy, it won’t remain static, but will either expand or contract. By the end of the 1920s, led by the observations of Edwin Hubble, we had discovered our Universe was expanding, and had our first measurement of the expansion rate.
The journey to pin down exactly what that rate is has now hit a snag, with two different measurement techniques yielding inconsistent results. It could be an indicator of new physics. But there could be an even simpler solution, and nobody wants to talk about it.
Standard candles (L) and standard rulers (R) are two different techniques astronomers use to measure the expansion of space at various times/distances in the past. Based on how quantities like luminosity or angular size change with distance, we can infer the expansion history of the Universe.
NASA / JPL-Caltech
The controversy is as follows: when we see a distant galaxy, we’re seeing it as it was in the past. But it isn’t simply that you look at light that took a billion years to arrive and conclude that the galaxy is a billion light years away. Instead, the galaxy will actually be more distant than that.
Why’s that? Because the space that makes up our Universe itself is expanding. This prediction of Einstein’s General Relativity, first recognized in the 1920s and then observationally validated by Edwin Hubble several years later, has been one of the cornerstones of modern cosmology.
A plot of the apparent expansion rate (y-axis) vs. distance (x-axis) is consistent with a Universe that expanded faster in the past, but where distant galaxies are accelerating in their recession today. This is a modern version of, extending thousands of times farther than, Hubble’s original work. Note the fact that the points do not form a straight line, indicating the expansion rate’s change over time.
Ned Wright, based on the latest data from Betoule et al. (2014)
The big question is how to measure it. How do we measure how the Universe is expanding? All methods invariably rely on the same general rules:
you pick a point in the Universe’s past where you can make an observation,
you measure the properties you can measure about that distant point,
and you calculate how the Universe would have had to expand from then until now to reproduce what you see.
This could be from a wide variety of methods, ranging from observations of the nearby Universe to objects billions of light years away.
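At modest distances, the recipe above reduces to Hubble’s law, v = H0 × d. A toy comparison using the two central values at the heart of this controversy (the galaxy’s distance is made up purely for illustration):

```python
# Hubble's law at modest distances: recession velocity v = H0 * d.
# Real analyses fit the full expansion history; this is illustrative.

H0_LADDER = 73.2   # km/s/Mpc, distance-ladder value quoted in the text
H0_CMB = 67.4      # km/s/Mpc, Planck CMB value quoted in the text

def recession_velocity(distance_mpc, h0):
    """Recession velocity, in km/s, of a galaxy at distance_mpc."""
    return h0 * distance_mpc

# The same hypothetical galaxy, 100 Mpc away, under each rate:
print(recession_velocity(100.0, H0_LADDER))   # about 7320 km/s
print(recession_velocity(100.0, H0_CMB))      # about 6740 km/s
```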
The Planck satellite’s data, combined with the other complementary data suites, gives us very tight constraints on the allowed values of cosmological parameters. The Hubble expansion rate today, in particular, is tightly constrained to be between 67 and 68 km/s/Mpc, with very little wiggle-room. The measurements from the Cosmic Distance Ladder method (Riess et al., 2018) are not consistent with this result.
PLANCK 2018 RESULTS. VI. COSMOLOGICAL PARAMETERS; PLANCK COLLABORATION (2018)
For many years now, there’s been a controversy brewing. Two different measurement methods — one using the cosmic distance ladder and one using the first observable light in the Universe — give results that are mutually inconsistent. The tension has enormous implications: something may be wrong with how we conceive of the Universe.
There is another explanation, however, that’s much simpler than the idea that either something is wrong with the Universe or that some new physics is required. Instead, it’s possible that one (or more) method has a systematic error associated with it: an inherent flaw to the method that hasn’t been identified yet that’s biasing its results. Either method (or even both methods) could be at fault. Here’s the story of how.
The Variable Star RS Puppis, with its light echoes shining through the interstellar clouds. Variable stars come in many varieties; one of them, Cepheid variables, can be measured both within our own galaxy and in galaxies up to 50-60 million light years away. This enables us to extrapolate distances from our own galaxy to far more distant ones in the Universe.
NASA, ESA, and the Hubble Heritage Team
The cosmic distance ladder is the oldest method we have to compute the distances to faraway objects. You start by measuring something close by: the distance to the Sun, for example. Then you use direct measurements of distant stars using the motion of the Earth around the Sun — known as parallax — to calculate the distance to nearby stars. Some of these nearby stars will include variable stars like Cepheids, which can be measured accurately in nearby and distant galaxies, and some of those galaxies will contain events like type Ia supernovae, which are some of the most distant objects of all.
Make all of these measurements, and you can derive distances to galaxies many billions of light years away. Put it all together with easily-measurable redshifts, and you’ll arrive at a measurement for the rate of expansion of the Universe.
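That final step can be sketched in a couple of lines: at low redshift, v ≈ c·z, and dividing by the ladder-derived distance gives an expansion rate. The galaxy here is hypothetical, chosen only to illustrate the arithmetic:

```python
C_KM_S = 299_792.458   # speed of light in km/s

def hubble_rate(redshift, distance_mpc):
    """Expansion rate inferred from one object, using the low-redshift
    approximation v = c * z (real fits combine many objects)."""
    velocity_km_s = C_KM_S * redshift
    return velocity_km_s / distance_mpc   # km/s/Mpc

# A hypothetical galaxy: redshift 0.0244, ladder distance 100 Mpc.
print(hubble_rate(0.0244, 100.0))   # about 73 km/s/Mpc
```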
The construction of the cosmic distance ladder involves going from our Solar System to the stars to nearby galaxies to distant ones. Each “step” carries along its own uncertainties, especially the Cepheid variable and supernovae steps; it also would be biased towards higher or lower values if we lived in an underdense or overdense region.
NASA, ESA, A. FEILD (STSCI), AND A. RIESS (STSCI/JHU)
This is how dark energy was first discovered, and our best methods of the cosmic distance ladder give us an expansion rate of 73.2 km/s/Mpc, with an uncertainty of less than 3%.
Universal light-curve properties for Type Ia supernovae. This result, first obtained in the late 1990s, has recently been called into question; supernovae may not, in fact, have light curves that are as universal as previously thought.
S. Blondin and Max Stritzinger
On the other hand, we have measurements of the Universe’s composition and expansion rate from the earliest available picture of it: the Cosmic Microwave Background. The minuscule, 1-part-in-30,000 temperature fluctuations display a very specific pattern on all scales, from the largest all-sky ones down to 0.07° or so, where its resolution is limited by the fundamental astrophysics of the Universe itself.
The final results from the Planck collaboration show an extraordinary agreement between the predictions of a dark energy/dark matter-rich cosmology (blue line) with the data (red points, black error bars) from the Planck team. All 7 acoustic peaks fit the data extraordinarily well.
PLANCK 2018 RESULTS. VI. COSMOLOGICAL PARAMETERS; PLANCK COLLABORATION (2018)
Based on the full suite of data from Planck, we have exquisite measurements for what the Universe is made of and how it’s expanded over its history. The Universe is 31.5% matter (where 4.9% is normal matter and the rest is dark matter), 68.5% dark energy, and just 0.01% radiation. The Hubble expansion rate, today, is determined to be 67.4 km/s/Mpc, with an uncertainty of only around 1%. This creates an enormous tension with the cosmic distance ladder results.
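How severe is that tension, numerically? A rough estimate, assuming Gaussian error bars at the uncertainty levels quoted above (3% for the distance ladder, 1% for Planck):

```python
# Rough size of the Hubble tension, using the central values and
# approximate percentage uncertainties quoted in the text.

h0_ladder = 73.2
err_ladder = 0.03 * h0_ladder   # ~3% uncertainty

h0_cmb = 67.4
err_cmb = 0.01 * h0_cmb         # ~1% uncertainty

gap = h0_ladder - h0_cmb
combined_err = (err_ladder**2 + err_cmb**2) ** 0.5

print(gap)                  # about 5.8 km/s/Mpc
print(gap / combined_err)   # a crude significance, in "sigma"
```

With these assumed error bars, the gap works out to a few sigma; the tighter uncertainties in the published analyses make the discrepancy look even more significant.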
An illustration of clustering patterns due to Baryon Acoustic Oscillations, where the likelihood of finding a galaxy at a certain distance from any other galaxy is governed by the relationship between dark matter and normal matter. As the Universe expands, this characteristic distance expands as well, allowing us to measure the Hubble constant, the dark matter density, and even the scalar spectral index. The results agree with the CMB data.
ZOSIA ROSTOMIAN
In addition, we have another, independent measurement from the distant Universe, based on the way that galaxies cluster together on large scales. When you have a galaxy, you can ask a simple-sounding question: what is the probability of finding another galaxy a specific distance away?
Based on what we know about dark matter and normal matter, there’s an enhanced probability of finding a galaxy 500 million light years away from another, versus 400 million or 600 million. That distance scale applies today; because the Universe was smaller in the past, the characteristic scale of this probability enhancement stretches as the Universe expands. This method is known as the inverse distance ladder, and gives a third way to measure the expanding Universe. It, too, yields an expansion rate of around 67 km/s/Mpc, again with a small uncertainty.
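The standard-ruler logic can be sketched numerically: the characteristic clustering scale is fixed in comoving coordinates, so its physical size in the past was smaller by the same factor the Universe has expanded since. Taking the roughly 500-million-light-year figure quoted above as today’s value:

```python
# The BAO "standard ruler": a comoving scale whose physical size
# simply tracks the expansion of the Universe.

BAO_SCALE_TODAY_MLY = 500.0   # million light years, rough value from the text

def physical_bao_scale(z):
    """Physical size of the clustering scale at redshift z, when the
    Universe's scale factor was 1/(1+z) of today's."""
    return BAO_SCALE_TODAY_MLY / (1.0 + z)

print(physical_bao_scale(0.0))   # 500.0 Mly today
print(physical_bao_scale(1.0))   # 250.0 Mly when distances were halved
```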
Modern measurement tensions from the distance ladder (red) with CMB (green) and BAO (blue) data. The red points are from the distance ladder method; the green and blue are from ‘leftover relic’ methods. Note that the errors on red vs. green/blue measurements do not overlap.
AUBOURG, ÉRIC ET AL. PHYS.REV. D92 (2015) NO.12, 123516.
Now, it’s possible that both of these measurements have a flaw in them, too. In particular, many of these parameters are correlated, meaning that if you try to increase one, you have to adjust the others in response. While the data from Planck indicates a Hubble expansion rate of 67.4 km/s/Mpc, that rate could be higher, like 72 km/s/Mpc. If it were, that would simply mean we needed a smaller amount of matter (26% instead of 31.5%), a larger amount of dark energy (74% instead of 68.5%), and a larger scalar spectral index (ns) to characterize the density fluctuations (0.99 instead of 0.96).
This is deemed highly unlikely, but it illustrates how one small flaw, if we overlooked something, could keep these independent measurements from aligning.
Before Planck, the best-fit to the data indicated a Hubble parameter of approximately 71 km/s/Mpc, but a value of approximately 70 or above would now be too great for both the dark matter density (x-axis) we’ve seen via other means and the scalar spectral index (right side of the y-axis) that we require for the large-scale structure of the Universe to make sense.
P.A.R. ADE ET AL. AND THE PLANCK COLLABORATION (2015)
There are a lot of problems that arise for cosmology if the teams measuring the Cosmic Microwave Background and the inverse distance ladder are wrong. The Universe, from the measurements we have today, should not have the low dark matter density or the high scalar spectral index that a large Hubble constant would imply. If the value truly is closer to 73 km/s/Mpc, we may be headed for a cosmic revolution.
Correlations between certain aspects of the magnitude of temperature fluctuations (y-axis) as a function of decreasing angular scale (x-axis) show a Universe that is consistent with a scalar spectral index of 0.96 or 0.97, but not 0.99 or 1.00.
P.A.R. ADE ET AL. AND THE PLANCK COLLABORATION
On the other hand, if the cosmic distance ladder team is wrong, owing to a fault in any rung on the distance ladder, the crisis is completely evaded. There was one overlooked systematic, and once it’s resolved, every piece of the cosmic puzzle falls perfectly into place. Perhaps the value of the Hubble expansion rate really is somewhere between 66.5 and 68 km/s/Mpc, and all we had to do was identify one astronomical flaw to get there.
The fluctuations in the CMB, the formation of and correlations within large-scale structure, and modern observations of gravitational lensing, among many others, all point towards the same picture: an accelerating Universe full of dark matter and dark energy.
Chris Blake and Sam Moorfield
The possibility of needing to overhaul many of the most compelling conclusions we’ve reached over the past two decades is fascinating, and is worth investigating to the fullest. Both groups may be right, and there may be a physical reason why the nearby measurements are skewed relative to the more distant ones. Or both groups may be wrong, each having erred in its own way.
But this controversy could end with the astronomical equivalent of a loose OPERA cable. The distance ladder group could have a flaw, and our large-scale cosmological measurements could be as good as gold. That would be the simplest solution to this fascinating saga. But until the critical data comes in, we simply don’t know. Meanwhile, our scientific curiosity demands that we investigate. No less than the entire Universe is at stake.