Why did the markets move? Most investors, analysts and even financial journalists will look, first and foremost, for news. Perhaps the jobs data were published, a firm announced it was being acquired or a central banker gave a sombre speech. Yet a small, dedicated cult of “chartists” or “technical analysts” believes that the movement of stocks, bonds and currencies can be divined by the making and interpreting of charts.
Their methods are many, varied and wackily named. A “death cross” is when a short-term moving average of an asset’s price falls below a long-term moving average. “Fibonacci retracement levels” rely on the idea that an asset climbing in price will fall back before rising again. Such backsliding is supposed to stop at levels based on Fibonacci numbers, like a 61.8% drop. The “ichimoku cloud”, loved by Japanese traders, sees the construction of a cloud by—bear with this—shading the area between two averages of high and low prices over the past week, month or two months.
A price above the cloud is auspicious; one below it is ominous. A true chartist needs only such information and “does not even care to know what business or industry a company is in, as long as he or she can study its chart”, as Burton Malkiel, an economist at Princeton and author of “A Random Walk Down Wall Street”, has noted.
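Of the methods above, the death cross at least has a precise definition: one average crossing below another. A minimal sketch in pure Python (the 50- and 200-day windows are the conventional chartist parameters, assumed here for illustration rather than taken from the column):

```python
# Minimal death-cross detector. The 50/200-day windows are the
# conventional chartist choice, assumed here for illustration.

def sma(prices, window):
    """Simple moving average; None until a full window is available."""
    return [
        sum(prices[i + 1 - window:i + 1]) / window if i + 1 >= window else None
        for i in range(len(prices))
    ]

def death_crosses(prices, short=50, long=200):
    """Indices where the short-term average falls below the long-term one."""
    s, l = sma(prices, short), sma(prices, long)
    return [
        i for i in range(1, len(prices))
        if None not in (s[i - 1], l[i - 1], s[i], l[i])
        and s[i - 1] >= l[i - 1] and s[i] < l[i]
    ]
```

Reversing the inequality gives the “golden cross”. Whether either signal predicts anything is, as the column argues, another matter.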
These methods, though patently mad, have attracted attention lately because of how the S&P 500, the leading index of American stocks, has wiggled around. After slumping to a low of 3,637 on June 17th the index began to climb. On August 16th it peaked at an intraday high of around 4,325, a whisker away from its 200-day moving average of 4,326—a supposedly critical technical level.
An asset that has fallen in price but is rising is supposed to meet “resistance” at such levels. To chartists it is concerning when an asset fails to “break through” a resistance barrier—it is an indication of a bear-market rally, rather than a true bull market. And so, this time, it appears to have been: stocks have slumped by around 8% since August 16th.
Plenty of mainstream investors use some version of trend-following. Factor investing, invented by Eugene Fama, the Nobel prize-winning economist, and Kenneth French, is used by successful quantitative funds, like AQR Capital Management. It breaks down returns into component factors like “size” (small companies earn better returns than bigger ones) or “quality” (low-debt, stable businesses earn better returns than riskier ones).
Another such factor is momentum: stocks that are rising tend to keep rising. Still, their approach is a little more sophisticated than looking at a price chart. AQR’s algorithms tend to combine factors like momentum with others. They might buy, say, a small or high-quality firm whose share price has recently risen.
It is nevertheless possible to understand the chartists’ obsession with levels and trends. There is no real difference between a euro being worth $1.0001 or $0.9999, but these “big figures” in foreign-exchange markets assume importance. This is in part symbolic and in part practical: clients tend to place orders near round numbers and derivatives tend to be sold with round “strike prices”. That means it will take a lot more activity for the euro to fall from $1.0001 to $0.9999 than for it to fall from $1.0487 to $1.0485.
When placing orders, investors try to figure out where others are placing theirs. That can help them place a stop-loss order, to close a trade that moves against them, at a sensible level. If enough investors look at technical levels to inform their behaviour, then they begin to matter. Perhaps the real value of technical analysis is what its use tells you about market conditions.
No one bothers with the chartists’ pretty drawings when the economy is good, profits are high and stocks are moving smoothly higher—nor, indeed, in the depths of a frantic bear market, when prices will plunge through any and all levels technical analysts are wont to draw. Much as people who are feeling restless about the direction of their lives are more prone to become interested in astrology, investors who are uneasy about the direction of the markets will reach for the easy reassurance of an eye-catching diagram.
That some are laying the blame for the end of the summer rally on a technical tripwire suggests they have little idea what is really going on. Perhaps Buttonwood should derive a technical indicator of her own: the more regularly chartist analysis lands in her inbox, the clearer it is that no one has any clue as to why the markets are moving.
Correction (September 2nd 2022): An earlier version of this article said Eugene Fama and Kenneth French won the Nobel prize for their work on factor investing. They did not. Mr Fama won a Nobel prize for his work on the efficient-markets hypothesis. Mr French is yet to win a Nobel. Sorry.
Is it really that simple? I’ll fully admit that most of my understanding of the stock market is based on intuition and reading books with flashy titles. In other words, I’m far above the moving-charlatan average in most aspects of my life, and you also sound like you might actually know what you’re talking about, based on how definitive your answer is. I’ve also noticed that if you back-test a moving-average strategy it usually ends up terrible: either making almost no money or losing money.
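For what it’s worth, the kind of back-test in question can be sketched in a few lines; everything here (the trend rule, the window, ignoring costs and dividends) is a crude assumption for illustration, which is partly why such tests so often disappoint:

```python
def backtest_ma(prices, window=200):
    """Toy trend-following back-test: hold the asset only while the
    previous close sits above its moving average, else sit in cash.
    Ignores trading costs, slippage and dividends entirely."""
    equity = 1.0
    for i in range(window, len(prices)):
        avg = sum(prices[i - window:i]) / window
        if prices[i - 1] > avg:              # yesterday's close above trend
            equity *= prices[i] / prices[i - 1]
    return equity - 1.0                      # total return of the strategy
```

In a smooth uptrend it stays invested; in a steady downtrend it sits out. Real price series whipsaw back and forth across the average, which is where the losses creep in.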
Yet I really like watching the stock market. It’s my favorite hobby, in fact. It certainly feels like a death cross is going to be a bad time when it happens, but maybe the truth is more complicated than the cryptic silly lines I gain so much joy from.
You might already know this, but the stock market death-crossed around January and then went down for a long time afterwards. But it also doesn’t always go down so dramatically every time there is a death cross. I’m not keeping accurate tallies on the whole thing, but it seems like it’s often bad when those two lines cross and often good when they cross in the opposite direction—the so-called “golden cross”.
I did actually put a little money on a stock because it golden-crossed recently, and it went way up today, which was pretty exciting. And I’m also avoiding a lot of stocks because they are about to death-cross, or have just recently on the weekly charts, but, you know, I can also understand how the language I’m using might be mistaken for a séance organized by a bunch of teenagers dressed like vampires.
Mostly I listen to smart people like Warren Buffett and buy value stocks with a good P/E, good price-to-book, lots of equity, less debt, good earnings, etc. It would be silly not to listen to one of the richest men in the world about what kind of stocks to buy. Yet it still feels like there’s something else there, even if my little hobby is about as realistic as playing a round of Magic: The Gathering. It’s, you know, my little hour of make-believe that I put some money into in order to escape my otherwise mundane and factual existence.
The answers, according to a paper published in The Astrophysical Journal, are 42,777 and sometime in the next 2,000 years. It’s a decent stab at the Fermi Paradox, which asks why we still haven’t received any messages from other civilizations despite there being a high probability of them existing.
It estimates the number of possible CETIs—communicating extraterrestrial intelligent civilizations—within our Milky Way galaxy. It also looks at how probable it is that one of them could contact us, and when. There are, of course, some huge unknowns behind these seemingly very precise estimates that, if known, would make a massive difference to the results:
The probability of life appearing on rocky planets and eventually evolving into a civilization advanced enough to contact another.
At what stage of their host star’s evolution such advanced civilizations would be born.
So the figure of 42,777—which has an uncertainty of a few hundred either side—is on the optimistic side. It’s based on an estimate that only 0.1% of civilizations could become advanced enough to contact another. This is where the Great Filter comes in.
It also takes into account the idea that any civilization would need to survive for about three million years, give or take, to reach that point.
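Strung together, assumptions like these behave as a Drake-style product, which is why each one moves the answer so violently. The sketch below is illustrative only; the factor values are placeholders, not the ones Song and Gao actually use (their model simulates star-formation history rather than multiplying fixed fractions):

```python
def ceti_estimate(n_stars, f_rocky, f_civ, f_communicating):
    """Drake-style product: expected communicating civilizations.
    All inputs are illustrative guesses, not the paper's values."""
    return n_stars * f_rocky * f_civ * f_communicating

# ~400 billion stars; placeholder fractions for each filtering step
optimistic = ceti_estimate(4e11, 0.1, 0.01, 0.001)     # 0.1% communicate
pessimistic = ceti_estimate(4e11, 0.1, 0.01, 0.00001)  # 0.001% communicate
```

Shrinking the last factor a hundredfold shrinks this toy estimate a hundredfold; in the real paper other parameters shift at the same time, which is why 42,777 falls to 111 rather than by a neat factor of 100.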
Even if a message is ever sent to us from an advanced civilization elsewhere in the Milky Way the question remains as to whether humans can survive long enough to receive it. The authors suggest that we will need to wait as little as 2,000 years to receive one alien signal.
Those are the optimistic calculations. The authors’ pessimistic estimate is for just 0.001% of civilizations—about 111—to become advanced enough to contact another. The upshot of that would be that humans would need to wait 400,000 years to receive a message.
“The minimum value (0.001%) we take may also be overestimated,” write the authors, Wenjie Song and He Gao at Beijing Normal University’s Department of Astronomy. “If so, the number of CETIs would become even lower, and the opportunities for communication between CETIs would become extremely small.”
The only message ever received on Earth that could have come from an extraterrestrial intelligence is the Wow! Signal, which was received in 1977 at the Big Ear radio telescope, Ohio. It was heard for 72 seconds—the maximum possible at the time—and was never repeated.
The source of that signal remains unknown, though a recent paper found only one Sun-like star (called 2MASS 19281982-2640123) in a sample of 66 in the region of the night sky that Wow! came from. It’s 18,000 light-years away.
The authors note that astronomers did send the “Arecibo message” to the Great Hercules Globular Cluster (M13) in 1974 using the now-collapsed Arecibo radio telescope. However, it wasn’t much good. “If there are indeed CETIs in M13, their detection ability needs to be 21 orders of magnitude higher than ours to detect our signal,” write the authors. “Conversely, if they transmit a similar signal, we need to improve the detection ability by 21 orders of magnitude to detect it.”
Space is big—really big—and even in-galaxy messaging is completely impractical. Even if we’re not alone, it’s doubtful we’ll ever find out. But that doesn’t stop us looking for Earth-like exoplanets around 2MASS 19281982-2640123. Wishing you clear skies and wide eyes.
Here’s a good sign for alien hunters: More than 300 million worlds with similar conditions to Earth are scattered throughout the Milky Way galaxy. A new analysis concludes that roughly half of the galaxy’s sunlike stars host rocky worlds in habitable zones where liquid water could pool or flow over the planets’ surfaces.
“This is the science result we’ve all been waiting for,” says Natalie Batalha, an astronomer with the University of California, Santa Cruz, who worked on the new study.
The number of sunlike stars with worlds similar to Earth “could have been one in a thousand, or one in a million—nobody really knew,” says Seth Shostak, an astronomer at the Search for Extraterrestrial Intelligence (SETI) Institute who was not involved with the new study.
Astronomers estimated the number of these planets using data from NASA’s planet-hunting Kepler spacecraft. For nine years, Kepler stared at the stars and watched for the brief twinkles produced when orbiting planets blot out a portion of their star’s light. By the end of its mission in 2018, Kepler had spotted some 2,800 exoplanets—many of them nothing like the worlds orbiting our sun.
But Kepler’s primary goal was always to determine how common planets like Earth are. The calculation required help from the European Space Agency’s Gaia spacecraft, which monitors stars across the galaxy. With Gaia’s observations in hand, scientists were finally able to determine that the Milky Way is populated by hundreds of millions of Earth-size planets orbiting sunlike stars—and that the nearest one is probably within 20 light-years of the solar system.
At dusk on Monday, April 26, 2021, right across the world, the year’s second of four “supermoons” will rise. It will be ever-so-slightly bigger than most full Moons because of its closeness to Earth in its egg-shaped orbit, but not so much that you’ll notice.
It will still look spectacular as it appears on your eastern horizon at dusk—as all full Moons do—but while the effect on you will be slight, the effects of Monday’s “supermoon” on the natural world will be dramatic.
Routinely derided by astronomers they may be, but geographers know only too well that “supermoons” are actual physical phenomena with consequences for the natural world.
The most recent published research, a 25-year study, reveals that “supermoons” cause bigger tidal ranges, higher water levels and more severe erosion.
April’s pink full moon is also a supermoon. It will be the biggest and brightest in 2021. It is called the pink moon after the flower phlox, the pink flower that blooms in spring.
A “supermoon” is a full moon that appears slightly larger than a normal full Moon. Technically they’re known as perigee full Moons by astronomers. The Moon’s orbital path around Earth is a slight ellipse, so each month there’s a near-point (perigee) and a far-point (apogee). At perigee it appears a little larger than the average apparent size (a “supermoon”), and at apogee, a little smaller (a “micromoon”).
The second of four “supermoons” or “perigee full Moons” of 2021, April’s full Moon will appear about 6% larger than an average full Moon.
The daily rise and fall of sea levels are called tides. They are caused by the Moon’s gravitational pull on the oceans as it orbits Earth, and also by the Sun’s gravitational pull. The two pulls combine during a New Moon and a full Moon.
The main physical effect of a supermoon is a king tide, which increases the risk of coastal inundation. A king tide is an unusually high tide that results from a stronger lunar gravitational force than normal.
It’s also known as a perigean spring tide and is an entirely predictable astronomical tide.
Although the effect is magnified the closer the Moon is to the Earth, a supermoon can occur at both New Moon and full Moon. In practice, a supermoon at New Moon is barely mentioned in the media, though its physical effects are just as strong.
That’s because the Moon aligns with the Sun and the Earth roughly every 14 days. At full Moon the Earth is in between the Sun and the Moon, while at New Moon the Moon is between the Sun and the Earth. At both times of the month the resulting alignment strengthens the tidal force. When the Moon is closer to the Earth than normal during a New Moon or a full Moon—so, during a supermoon—that tidal force is increased even further.
The distance to the Moon from Earth’s center changes from 406,000 km at apogee to about 357,000 km at perigee.
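Those two distances are enough to check the figures quoted in pieces like this one: apparent diameter scales as one over distance, and tidal force as one over distance cubed. A quick sketch (the scaling laws are standard physics, not claims from the article):

```python
apogee_km, perigee_km = 406_000, 357_000     # figures quoted above

size_ratio = apogee_km / perigee_km          # angular size ~ 1 / distance
tidal_ratio = (apogee_km / perigee_km) ** 3  # tidal force ~ 1 / distance**3

print(f"perigee Moon looks {(size_ratio - 1) * 100:.0f}% bigger than at apogee")
print(f"and exerts {(tidal_ratio - 1) * 100:.0f}% more tidal force")
```

The size gap comes out at about 14%, while the tidal force jumps by nearly half, which is why king tides track the lunar distance so closely.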
The research showed a long-term correlation between erosion across the beach and the Moon’s cycles, and suggested that a supermoon increases the risk of more severe beach erosion near the shoreline. These supermoon-driven king tides are more likely to cause coastal disasters when they occur simultaneously with storm surges and high waves.
So as you gaze at the beautiful “supermoon” appearing in the east on Monday evening, bear in mind that its greater gravitational force is what really makes it an important event for our planet. As rising sea levels kick in, supermoons and the king tides they bring could mean even worse flooding for coastal communities.
I’m an experienced science, technology and travel journalist and stargazer writing about exploring the night sky, solar and lunar eclipses, moon-gazing, astro-travel, astronomy and space exploration. I’m the editor of WhenIsTheNextEclipse.com and the author of “A Stargazing Program for Beginners: A Pocket Field Guide” (Springer, 2015), as well as many eclipse-chasing guides.
Some astronomers complain about the name supermoon. They like to call supermoons hype. But supermoons aren’t hype. They’re special. Many people now know and use the word supermoon. We notice even some diehards are starting to use it now. Such is the power of folklore.
Before we called them supermoons, we in astronomy called these moons perigean full moons, or perigean new moons. Perigee just means near Earth.
The moon is full, or opposite Earth from the sun, once each month. It’s new, or more or less between the Earth and sun, once each month. And, every month, as it orbits Earth, the moon comes closest to Earth, or to perigee. The moon naturally swings farthest away once each month, too; that point is called apogee.
No doubt about it. Supermoon is a catchier term than perigean new moon or perigean full moon. That’s probably why the term supermoon has entered the popular culture. For example, Supermoon is the title track of Sophie Hunger’s 2015 album. It’s a nice song! Check it out.
The hype aspect of supermoons probably stems from an erroneous impression people had when the word supermoon came into popular usage … maybe a few decades ago? Some people mistakenly believed a full supermoon would look much, much bigger to the eye. It doesn’t. Full supermoons don’t look bigger to the eye than ordinary full moons, although experienced observers say they can detect a difference.
But supermoons do look brighter than ordinary full moons! The angular diameter of a supermoon is about 7% greater than that of the average-size full moon and 14% greater than the angular diameter of a micro-moon (year’s farthest and smallest full moon). Yet, a supermoon exceeds the area (disk size) and brightness of an average-size full moon by some 15% – and the micro-moon by some 30%. For a visual reference, the size difference between a supermoon and micro-moon is proportionally similar to that of a U.S. quarter versus a U.S. nickel.
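Those percentages hang together because brightness follows apparent disk area, which grows as the square of the angular diameter. Checking the arithmetic:

```python
# Disk area (hence roughly brightness) scales as diameter squared.
super_vs_average = 1.07 ** 2 - 1   # 7% wider diameter
super_vs_micro = 1.14 ** 2 - 1     # 14% wider diameter

print(f"{super_vs_average:.1%} more area than an average full moon")
print(f"{super_vs_micro:.1%} more area than a micro-moon")
```

So 7% in diameter really does become the quoted “some 15%” in brightness, and 14% becomes “some 30%”.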
So go outside on the night of a full supermoon, and – if you’re a regular observer of nature – you’ll surely notice the supermoon is exceptionally bright!
In September 2019, my colleague Anna Kapinska gave a presentation showing interesting objects she’d found while browsing our new radio astronomical data. She had started noticing very weird shapes she couldn’t fit easily to any known type of object.
Among them, labelled by Anna as WTF?, was a picture of a ghostly circle of radio emission, hanging out in space like a cosmic smoke-ring. None of us had ever seen anything like it before, and we had no idea what it was. A few days later, our colleague Emil Lenc found a second one, even more spooky than Anna’s.
Our survey, the Evolutionary Map of the Universe (EMU), plans to boldly probe parts of the Universe where no telescope has gone before. It can do so because its telescope, the Australian Square Kilometre Array Pathfinder (ASKAP), can survey large swathes of the sky very quickly, probing to a depth previously only reached in tiny areas of sky, and being especially sensitive to faint, diffuse objects like these.
I predicted a couple of years ago this exploration of the unknown would probably make unexpected discoveries, which I called WTFs. But none of us expected to discover something so unexpected, so quickly. Because of the enormous data volumes, I expected the discoveries would be made using machine learning. But these discoveries were made with good old-fashioned eyeballing.
Our team searched the rest of the data by eye, and we found a few more of the mysterious round blobs. We dubbed them ORCs, which stands for “odd radio circles”. But the big question, of course, is: “what are they?”
At first we suspected an imaging artefact, perhaps generated by a software error. But we soon confirmed they are real, using other radio telescopes. We still have no idea how big or far away they are. They could be objects in our galaxy, perhaps a few light-years across, or they could be far away in the Universe and maybe millions of light years across.
When we look in images taken with optical telescopes at the position of ORCs, we see nothing. The rings of radio emission are probably caused by clouds of electrons, but why don’t we see anything in visible wavelengths of light? We don’t know, but finding a puzzle like this is the dream of every astronomer.
We have ruled out several possibilities for what ORCs might be.
Could they be supernova remnants, the clouds of debris left behind when a star in our galaxy explodes? No. They are far from most of the stars in the Milky Way and there are too many of them.
Could they be the rings of radio emission sometimes seen in galaxies undergoing intense bursts of star formation? Again, no. We don’t see any underlying galaxy that would be hosting the star formation.
Could they be the giant lobes of radio emission we see in radio galaxies, caused by jets of electrons squirting out from the environs of a supermassive black hole? Not likely, because the ORCs are very distinctly circular, unlike the tangled clouds we see in radio galaxies.
Could they be Einstein rings, in which radio waves from a distant galaxy are being bent into a circle by the gravitational field of a cluster of galaxies? Still no. ORCs are too symmetrical, and we don’t see a cluster at their centre.
A genuine mystery
In our paper about ORCs, which is forthcoming in the Publications of the Astronomical Society of Australia, we run through all the possibilities and conclude these enigmatic blobs don’t look like anything we already know about.
So we need to explore things that might exist but haven’t yet been observed, such as a vast shockwave from some explosion in a distant galaxy. Such explosions may have something to do with fast radio bursts, or the neutron star and black hole collisions that generate gravitational waves.
Or perhaps they are something else entirely. Two Russian scientists have even suggested ORCs might be the “throats” of wormholes in spacetime.
From the handful we’ve found so far, we estimate there are about 1,000 ORCs in the sky. My colleague Bärbel Koribalski notes the search is now on, with telescopes around the world, to find more ORCs and understand their cause.
It’s a tricky job, because ORCs are very faint and difficult to find. Our team is brainstorming all these ideas and more, hoping for the eureka moment when one of us, or perhaps someone else, suddenly has the flash of inspiration that solves the puzzle.
It’s an exciting time for us. Most astronomical research is aimed at refining our knowledge of the Universe, or testing theories. Very rarely do we get the challenge of stumbling across a new type of object which nobody has seen before, and trying to figure out what it is.
Is it a completely new phenomenon, or something we already know about but viewed in a weird way? And if it really is completely new, how does that change our understanding of the Universe? Watch this space!
By Ray Norris, Professor, School of Science, Western Sydney University
A new study using observations from NASA’s Fermi Gamma-ray Space Telescope reveals the first clear-cut evidence that the expanding debris of exploded stars produces some of the fastest-moving matter in the universe. This discovery is a major step toward meeting one of Fermi’s primary mission goals. Cosmic rays are subatomic particles that move through space at nearly the speed of light. About 90 percent of them are protons, with the remainder consisting of electrons and atomic nuclei.
In their journey across the galaxy, the electrically charged particles become deflected by magnetic fields. This scrambles their paths and makes it impossible to trace their origins directly. Through a variety of mechanisms, these speedy particles can lead to the emission of gamma rays, the most powerful form of light and a signal that travels to us directly from its sources. Two supernova remnants, known as IC 443 and W44, are expanding into cold, dense clouds of interstellar gas.
This material emits gamma rays when struck by high-speed particles escaping the remnants. Scientists have been unable to ascertain which particle is responsible for this emission because cosmic-ray protons and electrons give rise to gamma rays with similar energies. Now, after analyzing four years of data, Fermi scientists see a gamma-ray feature from both remnants that, like a fingerprint, proves the culprits are protons. When cosmic-ray protons smash into normal protons, they produce a short-lived particle called a neutral pion.
The pion quickly decays into a pair of gamma rays. This emission falls within a specific band of energies associated with the rest mass of the neutral pion, and it declines steeply toward lower energies. Detecting this low-end cutoff is clear proof that the gamma rays arise from decaying pions formed by protons accelerated within the supernova remnants.
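The energy scale of that fingerprint follows directly from the pion’s rest mass (about 135 MeV, a standard particle-physics value not quoted above): in the pion’s rest frame each photon carries half of it.

```latex
\pi^0 \longrightarrow \gamma + \gamma, \qquad
E_\gamma = \tfrac{1}{2}\, m_{\pi^0} c^2 \approx \tfrac{1}{2}(135\ \mathrm{MeV}) \approx 67.5\ \mathrm{MeV}
```

The pions’ motion Doppler-smears this line into a broad bump, but the spectrum remains symmetric about 67.5 MeV on a logarithmic energy axis and falls away steeply below it; that is the low-end cutoff Fermi resolved.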
In 1981, many of the world’s leading cosmologists gathered at the Pontifical Academy of Sciences, a vestige of the coupled lineages of science and theology located in an elegant villa in the gardens of the Vatican. Stephen Hawking chose the august setting to present what he would later regard as his most important idea: a proposal about how the universe could have arisen from nothing.
Before Hawking’s talk, all cosmological origin stories, scientific or theological, had invited the rejoinder, “What happened before that?” The Big Bang theory, for instance — pioneered 50 years before Hawking’s lecture by the Belgian physicist and Catholic priest Georges Lemaître, who later served as president of the Vatican’s academy of sciences — rewinds the expansion of the universe back to a hot, dense bundle of energy. But where did the initial energy come from?
The Big Bang theory had other problems. Physicists understood that an expanding bundle of energy would grow into a crumpled mess rather than the huge, smooth cosmos that modern astronomers observe. In 1980, the year before Hawking’s talk, the cosmologist Alan Guth realized that the Big Bang’s problems could be fixed with an add-on: an initial, exponential growth spurt known as cosmic inflation, which would have rendered the universe huge, smooth and flat before gravity had a chance to wreck it. Inflation quickly became the leading theory of our cosmic origins. Yet the issue of initial conditions remained: What was the source of the minuscule patch that allegedly ballooned into our cosmos, and of the potential energy that inflated it?
Hawking, in his brilliance, saw a way to end the interminable groping backward in time: He proposed that there’s no end, or beginning, at all. According to the record of the Vatican conference, the Cambridge physicist, then 39 and still able to speak with his own voice, told the crowd, “There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?”
The “no-boundary proposal,” which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock — the so-called “wave function of the universe” that encompasses the entire past, present and future at once — making moot all contemplation of seeds of creation, a creator, or any transition from a time before.
“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death. “It would be like asking what lies south of the South Pole.”
Hartle and Hawking’s proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, “We didn’t have birds in the very early universe; we have birds later on. … We didn’t have time in the early universe, but we have time later on.”
The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking’s. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing, analyzed the theories’ various predictions and ways to test them, and interpreted their philosophical meaning. The no-boundary wave function, according to Hartle, “was in some ways the simplest possible proposal for that.”
But two years ago, a paper by Turok, Job Feldbrugge of the Perimeter Institute, and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question. The proposal is, of course, only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said — “deeply ambiguous.”
In their 2017 paper, published in Physical Review Letters, Turok and his co-authors approached Hartle and Hawking’s no-boundary proposal with new mathematical techniques that, in their view, make its predictions much more concrete than before. “We discovered that it just failed miserably,” Turok said. “It was just not possible quantum mechanically for a universe to start in the way they imagined.” The trio checked their math and queried their underlying assumptions before going public, but “unfortunately,” Turok said, “it just seemed to be inescapable that the Hartle-Hawking proposal was a disaster.”
The paper ignited a controversy. Other experts mounted a vigorous defense of the no-boundary idea and a rebuttal of Turok and colleagues’ reasoning. “We disagree with his technical arguments,” said Thomas Hertog, a physicist at the Catholic University of Leuven in Belgium who closely collaborated with Hawking for the last 20 years of the latter’s life. “But more fundamentally, we disagree also with his definition, his framework, his choice of principles. And that’s the more interesting discussion.”
After two years of sparring, the groups have traced their technical disagreement to differing beliefs about how nature works. The heated — yet friendly — debate has helped firm up the idea that most tickled Hawking’s fancy. Even critics of his and Hartle’s specific formula, including Turok and Lehners, are crafting competing quantum-cosmological models that try to avoid the alleged pitfalls of the original while maintaining its boundless allure.
Garden of Cosmic Delights
Hartle and Hawking saw a lot of each other from the 1970s on, typically when they met in Cambridge for long periods of collaboration. The duo’s theoretical investigations of black holes and the mysterious singularities at their centers had turned them on to the question of our cosmic origin.
In 1915, Albert Einstein discovered that concentrations of matter or energy warp the fabric of space-time, causing gravity. In the 1960s, Hawking and the Oxford University physicist Roger Penrose proved that when space-time bends steeply enough, such as inside a black hole or perhaps during the Big Bang, it inevitably collapses, curving infinitely steeply toward a singularity, where Einstein’s equations break down and a new, quantum theory of gravity is needed. The Penrose-Hawking “singularity theorems” meant there was no way for space-time to begin smoothly, undramatically at a point.
Hawking and Hartle were thus led to ponder the possibility that the universe began as pure space, rather than dynamical space-time. And this led them to the shuttlecock geometry. They defined the no-boundary wave function describing such a universe using an approach invented by Hawking’s hero, the physicist Richard Feynman. In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this “path integral” gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision.
Likewise, Hartle and Hawking expressed the wave function of the universe — which describes its likely states — as the sum of all possible ways that it might have smoothly expanded from a point. The hope was that the sum of all possible “expansion histories,” smooth-bottomed universes of all different shapes and sizes, would yield a wave function that gives a high probability to a huge, smooth, flat universe like ours. If the weighted sum of all possible expansion histories yields some other kind of universe as the likeliest outcome, the no-boundary proposal fails.
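The construction can be written schematically. In the notation below (which is illustrative, not taken from the paper itself, and suppresses all details of the measure and the action), the no-boundary wave function is a Feynman sum over every compact four-dimensional geometry whose only boundary is the universe's present three-dimensional shape:

```latex
% Schematic no-boundary wave function (notation illustrative):
%   h = the three-geometry of the universe "today"
%   g = a four-geometry (an expansion history) whose only boundary is h
\Psi_{\mathrm{NB}}[h]
  \;\sim\; \int_{\substack{\text{compact } g,\\ \partial g \,=\, h}}
  \mathcal{D}g \; e^{\,i S[g]/\hbar}
```

In Hartle and Hawking's original, Euclidean formulation the weight was instead $e^{-S_E[g]/\hbar}$, with $S_E$ the Euclidean action; much of the later controversy turns on which weighting, and which contour through the space of geometries, is the right one.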
The problem is that the path integral over all possible expansion histories is far too complicated to calculate exactly. Countless different shapes and sizes of universes are possible, and each can be a messy affair. “Murray Gell-Mann used to ask me,” Hartle said, referring to the late Nobel Prize-winning physicist, “if you know the wave function of the universe, why aren’t you rich?” Of course, to actually solve for the wave function using Feynman’s method, Hartle and Hawking had to drastically simplify the situation, ignoring even the specific particles that populate our world (which meant their formula was nowhere close to being able to predict the stock market). They considered the path integral over all possible toy universes in “minisuperspace,” defined as the set of all universes with a single energy field coursing through them: the energy that powered cosmic inflation. (In Hartle and Hawking’s shuttlecock picture, that initial period of ballooning corresponds to the rapid increase in diameter near the bottom of the cork.)
Even the minisuperspace calculation is hard to solve exactly, but physicists know there are two possible expansion histories that potentially dominate the calculation. These rival universe shapes anchor the two sides of the current debate.
The rival solutions are the two “classical” expansion histories that a universe can have. Following an initial spurt of cosmic inflation from size zero, these universes steadily expand according to Einstein’s theory of gravity and space-time. Weirder expansion histories, like football-shaped universes or caterpillar-like ones, mostly cancel out in the quantum calculation.
One of the two classical solutions resembles our universe. On large scales, it’s smooth and randomly dappled with energy, due to quantum fluctuations during inflation. As in the real universe, density differences between regions form a bell curve around zero. If this possible solution does indeed dominate the wave function for minisuperspace, it becomes plausible to imagine that a far more detailed and exact version of the no-boundary wave function might serve as a viable cosmological model of the real universe.
The other potentially dominant universe shape is nothing like reality. As it widens, the energy infusing it varies more and more extremely, creating enormous density differences from one place to the next that gravity steadily worsens. Density variations form an inverted bell curve, where differences between regions approach not zero, but infinity. If this is the dominant term in the no-boundary wave function for minisuperspace, then the Hartle-Hawking proposal would seem to be wrong.
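The contrast between the two candidate solutions can be caricatured with a single fluctuation variable $\delta$ (the density difference between regions) and an assumed width $\sigma$; the labels here are illustrative shorthand, not part of the original calculation:

```latex
% "Good" solution: fluctuations form a bell curve peaked at zero
P_{\text{smooth}}(\delta) \;\propto\; e^{-\delta^2 / 2\sigma^2}
% "Bad" solution: an inverted bell curve -- the largest fluctuations dominate
P_{\text{wild}}(\delta) \;\propto\; e^{+\delta^2 / 2\sigma^2}
```

The second distribution blows up as $\delta$ grows and cannot be normalized to a total probability of 1, which is the crux of the normalizability objection that no-boundary's defenders raise.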
The two dominant expansion histories present a choice in how the path integral should be done. If the dominant histories are two locations on a map, megacities in the realm of all possible quantum mechanical universes, the question is which path we should take through the terrain. Which of the two dominant expansion histories should our “contour of integration” pick up? There can be only one, and researchers have forked down different paths.
In their 2017 paper, Turok, Feldbrugge and Lehners took a path through the garden of possible expansion histories that led to the second dominant solution. In their view, the only sensible contour is one that scans through real values (as opposed to imaginary values, which involve the square roots of negative numbers) for a variable called “lapse.” Lapse is essentially the height of each possible shuttlecock universe — the distance it takes to reach a certain diameter. Lacking a causal element, lapse is not quite our usual notion of time. Yet Turok and colleagues argue partly on the grounds of causality that only real values of lapse make physical sense. And summing over universes with real values of lapse leads to the wildly fluctuating, physically nonsensical solution.
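In the minisuperspace setting the contour question can be stated explicitly. Schematically (again in illustrative notation, with $q$ standing for the universe's size and $N$ for the lapse), the two camps disagree about the contour $\mathcal{C}$ in:

```latex
% Minisuperspace path integral, schematically:
%   q(t) = scale factor (size) of the universe, N = lapse
\Psi(q_1) \;=\; \int_{\mathcal{C}} dN
  \int_{q(0)=0}^{q(1)=q_1} \mathcal{D}q \; e^{\,i S[q;\,N]/\hbar}
% Turok, Feldbrugge and Lehners: C runs over real, positive values of N
% Hartle, Hawking and their successors: C may pass through complex N
```

Restricting $\mathcal{C}$ to real lapse values, as Turok and colleagues insist, is what steers the calculation toward the wildly fluctuating solution; allowing complex lapse is what lets the contour pick up the smooth one.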
“People place huge faith in Stephen’s intuition,” Turok said by phone. “For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”
Jonathan Halliwell, a physicist at Imperial College London, has studied the no-boundary proposal since he was Hawking’s student in the 1980s. He and Hartle analyzed the issue of the contour of integration in 1990. In their view, as well as Hertog’s, and apparently Hawking’s, the contour is not fundamental, but rather a mathematical tool that can be placed to greatest advantage. It’s similar to how the trajectory of a planet around the sun can be expressed mathematically as a series of angles, as a series of times, or in terms of any of several other convenient parameters. “You can do that parameterization in many different ways, but none of them are any more physical than another one,” Halliwell said.
He and his colleagues argue that, in the minisuperspace case, only contours that pick up the good expansion history make sense. Quantum mechanics requires probabilities to add to 1, or be “normalizable,” but the wildly fluctuating universe that Turok’s team landed on is not. That solution is nonsensical, plagued by infinities and disallowed by quantum laws — obvious signs, according to no-boundary’s defenders, to walk the other way.
It’s true that contours passing through the good solution sum up possible universes with imaginary values for their lapse variables. But apart from Turok and company, few people think that’s a problem. Imaginary numbers pervade quantum mechanics. To team Hartle-Hawking, the critics are invoking a false notion of causality in demanding that lapse be real. “That’s a principle which is not written in the stars, and which we profoundly disagree with,” Hertog said.
According to Hertog, Hawking seldom mentioned the path integral formulation of the no-boundary wave function in his later years, partly because of the ambiguity around the choice of contour. He regarded the normalizable expansion history, which the path integral had merely helped uncover, as the solution to a more fundamental equation about the universe posed in the 1960s by the physicists John Wheeler and Bryce DeWitt. Wheeler and DeWitt — after mulling over the issue during a layover at Raleigh-Durham International — argued that the wave function of the universe, whatever it is, cannot depend on time, since there is no external clock by which to measure it. And thus the amount of energy in the universe, when you add up the positive and negative contributions of matter and gravity, must stay at zero forever. The no-boundary wave function satisfies the Wheeler-DeWitt equation for minisuperspace.
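The Wheeler-DeWitt requirement can be stated compactly: with no external clock, the wave function of the universe carries no time dependence, so the total Hamiltonian (matter plus gravity) must annihilate it rather than evolve it. In schematic form:

```latex
% Wheeler-DeWitt equation: the Hamiltonian constraint of quantum cosmology
\hat{H}\,\Psi \;=\; 0
% Contrast with ordinary quantum mechanics, where states evolve in time:
%   i\hbar\, \partial_t \Psi = \hat{H}\,\Psi
% Here there is no i\hbar\,\partial_t term: total energy is zero, forever.
```

The normalizable, smooth expansion history that Hawking favored is, in this view, simply a solution of this constraint equation; the path integral was one tool for finding it, not the definition of the proposal.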
In the final years of his life, to better understand the wave function more generally, Hawking and his collaborators started applying holography — a blockbuster new approach that treats space-time as a hologram. Hawking sought a holographic description of a shuttlecock-shaped universe, in which the geometry of the entire past would project off of the present.
That effort is continuing in Hawking’s absence. But Turok sees this shift in emphasis as changing the rules. In backing away from the path integral formulation, he says, proponents of the no-boundary idea have made it ill-defined. What they’re studying is no longer Hartle-Hawking, in his opinion — though Hartle himself disagrees.
For the past year, Turok and his Perimeter Institute colleagues Latham Boyle and Kieran Finn have been developing a new cosmological model that has much in common with the no-boundary proposal. But instead of one shuttlecock, it envisions two, arranged cork to cork in a sort of hourglass figure with time flowing in both directions. While the model is not yet developed enough to make predictions, its charm lies in the way its lobes realize CPT symmetry, a seemingly fundamental mirror in nature that simultaneously reflects matter and antimatter, left and right, and forward and backward in time. One disadvantage is that the universe’s mirror-image lobes meet at a singularity, a pinch in space-time that requires the unknown quantum theory of gravity to understand. Boyle, Finn and Turok take a stab at the singularity, but such an attempt is inherently speculative.
There has also been a revival of interest in the “tunneling proposal,” an alternative way that the universe might have arisen from nothing, conceived in the ’80s independently by the Russian-American cosmologists Alexander Vilenkin and Andrei Linde. The proposal, which differs from the no-boundary wave function primarily by way of a minus sign, casts the birth of the universe as a quantum mechanical “tunneling” event, similar to when a particle pops up beyond a barrier in a quantum mechanical experiment.
Questions abound about how the various proposals intersect with anthropic reasoning and the infamous multiverse idea. The no-boundary wave function, for instance, favors empty universes, whereas significant matter and energy are needed to power hugeness and complexity. Hawking argued that the vast spread of possible universes permitted by the wave function must all be realized in some larger multiverse, within which only complex universes like ours will have inhabitants capable of making observations. (The recent debate concerns whether these complex, habitable universes will be smooth or wildly fluctuating.) An advantage of the tunneling proposal is that it favors matter- and energy-filled universes like ours without resorting to anthropic reasoning — though universes that tunnel into existence may have other problems.
No matter how things go, perhaps we’ll be left with some essence of the picture Hawking first painted at the Pontifical Academy of Sciences 38 years ago. Or perhaps, instead of a South Pole-like non-beginning, the universe emerged from a singularity after all, demanding a different kind of wave function altogether. Either way, the pursuit will continue. “If we are talking about a quantum mechanical theory, what else is there to find other than the wave function?” asked Juan Maldacena, an eminent theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has mostly stayed out of the recent fray. The question of the wave function of the universe “is the right kind of question to ask,” said Maldacena, who, incidentally, is a member of the Pontifical Academy. “Whether we are finding the right wave function, or how we should think about the wave function — it’s less clear.”