One of the most important open questions in science is how our consciousness is established. In the 1990s, long before winning the 2020 Nobel Prize in Physics for showing that black hole formation is a robust prediction of the general theory of relativity, physicist Roger Penrose teamed up with anaesthesiologist Stuart Hameroff to propose an ambitious answer.
They claimed that the brain’s neuronal system forms an intricate network and that the consciousness it produces should obey the rules of quantum mechanics – the theory that determines how tiny particles like electrons move around. This, they argued, could explain the mysterious complexity of human consciousness.
Penrose and Hameroff were met with incredulity. Quantum mechanical laws are usually only found to apply at very low temperatures. Quantum computers, for example, currently operate at around -272°C. At higher temperatures, classical mechanics takes over. Since our body works at room temperature, you would expect it to be governed by the classical laws of physics. For this reason, the quantum consciousness theory has been dismissed outright by many scientists – though others are convinced supporters.
Instead of entering into this debate, I decided to join forces with colleagues from China, led by Professor Xian-Min Jin at Shanghai Jiaotong University, to test some of the principles underpinning the quantum theory of consciousness.
In our new paper, we’ve investigated how quantum particles could move in a complex structure like the brain – but in a lab setting. If our findings can one day be compared with activity measured in the brain, we may come one step closer to validating or dismissing Penrose and Hameroff’s controversial theory.
Brains and fractals
Our brains are composed of cells called neurons, and their combined activity is believed to generate consciousness. Each neuron contains microtubules, which transport substances to different parts of the cell. The Penrose-Hameroff theory of quantum consciousness argues that microtubules are structured in a fractal pattern which would enable quantum processes to occur.
Fractals are structures that are neither two-dimensional nor three-dimensional, but are instead some fractional value in between. In mathematics, fractals emerge as beautiful patterns that repeat themselves infinitely, generating what is seemingly impossible: a structure that has a finite area, but an infinite perimeter.
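That seemingly impossible combination can be checked numerically. The sketch below (a toy illustration, not taken from the research described here) iterates the classic Koch snowflake construction, whose perimeter grows without bound while its enclosed area converges to a finite limit:

```python
def koch_snowflake(n):
    """Perimeter and area of the Koch snowflake after n iterations (unit sides)."""
    tri_area = 3 ** 0.5 / 4              # area of a unit equilateral triangle
    perimeter = 3 * (4 / 3) ** n         # each step multiplies the perimeter by 4/3
    area = tri_area
    for k in range(1, n + 1):
        # step k adds 3 * 4**(k-1) new triangles, each of side (1/3)**k
        area += 3 * 4 ** (k - 1) * tri_area * (1 / 9) ** k
    return perimeter, area

for n in (0, 5, 10, 20):
    p, a = koch_snowflake(n)
    print(f"n={n:2d}  perimeter={p:12.1f}  area={a:.6f}")
```

Running this shows the perimeter diverging while the area settles toward its finite limit of 2√3/5 for a unit starting triangle.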
This might sound impossible to visualise, but fractals actually occur frequently in nature. If you look closely at the florets of a cauliflower or the branches of a fern, you’ll see that they’re both made up of the same basic shape repeating itself over and over again, but at smaller and smaller scales. That’s a key characteristic of fractals.
The same happens if you look inside your own body: the structure of your lungs, for instance, is fractal, as are the blood vessels in your circulatory system. Fractals also feature in the enchanting repeating artworks of MC Escher and Jackson Pollock, and they’ve been used for decades in technology, such as in the design of antennas. These are all examples of classical fractals – fractals that abide by the laws of classical physics rather than quantum physics.
It’s easy to see why fractals have been used to explain the complexity of human consciousness. Because they’re infinitely intricate, allowing complexity to emerge from simple repeated patterns, they could be the structures that support the mysterious depths of our minds.
But if this is the case, it could only be happening on the quantum level, with tiny particles moving in fractal patterns within the brain’s neurons. That’s why Penrose and Hameroff’s proposal is called a theory of “quantum consciousness”.
We’re not yet able to measure the behaviour of quantum fractals in the brain – if they exist at all. But advanced technology means we can now measure quantum fractals in the lab. In recent research involving a scanning tunnelling microscope (STM), my colleagues at Utrecht and I carefully arranged electrons in a fractal pattern, creating a quantum fractal.
When we then measured the wave function of the electrons, which describes their quantum state, we found that they too lived at the fractal dimension dictated by the physical pattern we’d made. In this case, the pattern we used on the quantum scale was the Sierpiński triangle, which is a shape that’s somewhere between one-dimensional and two-dimensional.
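The fractional dimension quoted here follows from a simple relation: an exactly self-similar shape built from N copies of itself, each scaled down by a factor s, has similarity dimension log N / log s. A minimal sketch:

```python
import math

def similarity_dimension(copies: int, scale: float) -> float:
    """Dimension d solving copies = scale ** d for an exactly self-similar fractal."""
    return math.log(copies) / math.log(scale)

# Sierpinski triangle: 3 self-similar copies, each scaled down by a factor of 2
print(similarity_dimension(3, 2))   # ~1.585, between one and two dimensions
```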
This was an exciting finding, but STM techniques cannot probe how quantum particles move – which would tell us more about how quantum processes might occur in the brain. So in our latest research, my colleagues at Shanghai Jiaotong University and I went one step further. Using state-of-the-art photonics experiments, we were able to reveal the quantum motion that takes place within fractals in unprecedented detail.
We achieved this by injecting photons (particles of light) into an artificial chip that was painstakingly engineered into a tiny Sierpiński triangle. We injected photons at the tip of the triangle and watched how they spread throughout its fractal structure in a process called quantum transport. We then repeated this experiment on two different fractal structures, both shaped as squares rather than triangles. And in each of these structures we conducted hundreds of experiments.
Our observations from these experiments reveal that quantum fractals actually behave in a different way to classical ones. Specifically, we found that the spread of light across a fractal is governed by different laws in the quantum case compared to the classical case.
This new knowledge of quantum fractals could provide the foundations for scientists to experimentally test the theory of quantum consciousness. If quantum measurements are one day taken from the human brain, they could be compared against our results to definitively decide whether consciousness is a classical or a quantum phenomenon.
Our work could also have profound implications across scientific fields. By investigating quantum transport in our artificially designed fractal structures, we may have taken the first tiny steps towards the unification of physics, mathematics and biology, which could greatly enrich our understanding of the world around us as well as the world that exists in our heads.
In September 2019, my colleague Anna Kapinska gave a presentation showing interesting objects she’d found while browsing our new radio astronomical data. She had started noticing very weird shapes she couldn’t fit easily to any known type of object.
Among them, labelled by Anna as WTF?, was a picture of a ghostly circle of radio emission, hanging out in space like a cosmic smoke-ring. None of us had ever seen anything like it before, and we had no idea what it was. A few days later, our colleague Emil Lenc found a second one, even more spooky than Anna’s.
EMU, the Evolutionary Map of the Universe survey, plans to boldly probe parts of the Universe where no telescope has gone before. It can do so because the ASKAP telescope can survey large swathes of the sky very quickly, probing to a depth previously reached only in tiny areas of sky, and it is especially sensitive to faint, diffuse objects like these.
I predicted a couple of years ago this exploration of the unknown would probably make unexpected discoveries, which I called WTFs. But none of us expected to discover something so unexpected, so quickly. Because of the enormous data volumes, I expected the discoveries would be made using machine learning. But these discoveries were made with good old-fashioned eyeballing.
Our team searched the rest of the data by eye, and we found a few more of the mysterious round blobs. We dubbed them ORCs, which stands for “odd radio circles”. But the big question, of course, is: “what are they?”
At first we suspected an imaging artefact, perhaps generated by a software error. But we soon confirmed they are real, using other radio telescopes. We still have no idea how big or far away they are. They could be objects in our galaxy, perhaps a few light-years across, or they could be far away in the Universe and maybe millions of light years across.
When we look in images taken with optical telescopes at the position of ORCs, we see nothing. The rings of radio emission are probably caused by clouds of electrons, but why don’t we see anything in visible wavelengths of light? We don’t know, but finding a puzzle like this is the dream of every astronomer.
We have ruled out several possibilities for what ORCs might be.
Could they be supernova remnants, the clouds of debris left behind when a star in our galaxy explodes? No. They are far from most of the stars in the Milky Way and there are too many of them.
Could they be the rings of radio emission sometimes seen in galaxies undergoing intense bursts of star formation? Again, no. We don’t see any underlying galaxy that would be hosting the star formation.
Could they be the giant lobes of radio emission we see in radio galaxies, caused by jets of electrons squirting out from the environs of a supermassive black hole? Not likely, because the ORCs are very distinctly circular, unlike the tangled clouds we see in radio galaxies.
Could they be Einstein rings, in which radio waves from a distant galaxy are being bent into a circle by the gravitational field of a cluster of galaxies? Still no. ORCs are too symmetrical, and we don’t see a cluster at their centre.
A genuine mystery
In our paper about ORCs, which is forthcoming in the Publications of the Astronomical Society of Australia, we run through all the possibilities and conclude these enigmatic blobs don’t look like anything we already know about.
So we need to explore things that might exist but haven’t yet been observed, such as a vast shockwave from some explosion in a distant galaxy. Such explosions may have something to do with fast radio bursts, or the neutron star and black hole collisions that generate gravitational waves.
Or perhaps they are something else entirely. Two Russian scientists have even suggested ORCs might be the “throats” of wormholes in spacetime.
From the handful we’ve found so far, we estimate there are about 1,000 ORCs in the sky. My colleague Bärbel Koribalski notes the search is now on, with telescopes around the world, to find more ORCs and understand their cause.
It’s a tricky job, because ORCs are very faint and difficult to find. Our team is brainstorming all these ideas and more, hoping for the eureka moment when one of us, or perhaps someone else, suddenly has the flash of inspiration that solves the puzzle.
It’s an exciting time for us. Most astronomical research is aimed at refining our knowledge of the Universe, or testing theories. Very rarely do we get the challenge of stumbling across a new type of object which nobody has seen before, and trying to figure out what it is.
Is it a completely new phenomenon, or something we already know about but viewed in a weird way? And if it really is completely new, how does that change our understanding of the Universe? Watch this space!
By: Ray Norris Professor, School of Science, Western Sydney University
A new study using observations from NASA’s Fermi Gamma-ray Space Telescope reveals the first clear-cut evidence that the expanding debris of exploded stars produces some of the fastest-moving matter in the universe. This discovery is a major step toward meeting one of Fermi’s primary mission goals. Cosmic rays are subatomic particles that move through space at nearly the speed of light. About 90 percent of them are protons, with the remainder consisting of electrons and atomic nuclei.
In their journey across the galaxy, the electrically charged particles become deflected by magnetic fields. This scrambles their paths and makes it impossible to trace their origins directly. Through a variety of mechanisms, these speedy particles can lead to the emission of gamma rays, the most powerful form of light and a signal that travels to us directly from its sources. Two supernova remnants, known as IC 443 and W44, are expanding into cold, dense clouds of interstellar gas.
This material emits gamma rays when struck by high-speed particles escaping the remnants. Scientists have been unable to ascertain which particle is responsible for this emission because cosmic-ray protons and electrons give rise to gamma rays with similar energies. Now, after analyzing four years of data, Fermi scientists see a gamma-ray feature from both remnants that, like a fingerprint, proves the culprits are protons. When cosmic-ray protons smash into normal protons, they produce a short-lived particle called a neutral pion.
The pion quickly decays into a pair of gamma rays. This emission falls within a specific band of energies associated with the rest mass of the neutral pion, and it declines steeply toward lower energies. Detecting this low-end cutoff is clear proof that the gamma rays arise from decaying pions formed by protons accelerated within the supernova remnants.
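The characteristic energy of this "pion bump" follows directly from the neutral pion's rest mass, about 135 MeV: in the pion's rest frame, the two photons share that energy equally. A back-of-envelope sketch:

```python
PION_REST_ENERGY_MEV = 134.98   # neutral pion rest-mass energy in MeV

# In the pion rest frame each of the two decay photons carries half the rest
# energy; Doppler shifts from the pions' motion then smear this into the
# observed band, which falls off steeply below this characteristic energy.
photon_energy_mev = PION_REST_ENERGY_MEV / 2
print(f"Characteristic gamma-ray energy: ~{photon_energy_mev:.1f} MeV")
```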
Albert Einstein’s work so revolutionized physics that it is difficult to discuss him without slipping into hagiography. Indeed, his brilliance is so storied that his surname has become synonymous with “genius,” and his brain preserved for study.
And yet, while Einstein was undeniably a smart cookie, one cannot look back at the course of history without noticing that the dominoes were all there, set up and waiting for someone like him to start toppling them. Part of Einstein’s brilliance was merely realizing this. Avi Loeb, a professor of physics at Harvard University with a regular column in Scientific American, told me that he thinks Einstein’s physics revelations would have been developed by others even if Einstein hadn’t been born. “It would take maybe a few more decades,” Loeb clarified. “Many of the things that Einstein personally was responsible for — there are at least 10 touchstones in physics where each of them is a major intellectual achievement — you know, they would be discovered by different people, I think,” Loeb continued. “That illustrates his genius.”
Loeb is advising on a public project celebrating Einstein’s life and work at Hebrew University, which hosts an archive of Einstein’s documents. The project, “Einstein: Visualize the Impossible,” is slated to be an interactive online exhibition to engage the public with Einstein’s work. As a fellow physicist, Einstein’s work and his life have weighed on Loeb’s mind for years, which is why he was interested in helping curate.
In considering Einstein’s legacy, though, Loeb says we have to reckon with what has and hasn’t changed about the physics world. In the 1890s, when Einstein was in college, physics knowledge was a shell of what it is today. Quantum mechanics, dark matter, nuclear physics and most fundamental particles were unknown, and astronomers knew little about the nature of the universe — or even that there were other galaxies outside our own. Nowadays, many of the biggest physics discoveries happen by virtue of some of the largest and most expensive scientific instruments ever built: gravitational wave observatories, say, or the Large Hadron Collider at CERN.
Given the landscape of physics today, could an Einstein-like physicist exist again — someone who, say, works in a patent office, quietly pondering the nature of space-time, yet whose revelations cause much of the field to be completely rethought?
Loeb thought so. “There are some dark clouds in physics,” Loeb told me. “People will tell you, ‘we just need to figure out which particle makes the dark matter, it’s just another particle. It has some weak interaction, and that’s pretty much it.’ But I think there is a very good chance that we are missing some very important ingredients that a brilliant person might recognize in the coming years.” Loeb even said the potential for a revolutionary physics breakthrough today “is not smaller — it’s actually bigger right now” than it was in Einstein’s time.
I spoke with Loeb via phone about Einstein’s legacy, and how physics has become “stuck” on certain problems; as always, this interview has been condensed and edited for print.
To start, let’s talk about some of Einstein’s contributions to science. What compelled you to help curate this celebration of Einstein’s legacy?
Well, to start, Einstein’s special theory of relativity revolutionized our notion of space and time. The fact that space and time are entities that are lumped together, that the speed of light is the ultimate speed, and that you can convert mass to energy, which is demonstrated by nuclear energy in particular. Then later on, he made extremely important contributions to quantum mechanics, and of course developed the general theory of relativity that he published in November 1915, 105 years ago.
And amazingly, almost exactly a hundred years later, in September 2015, gravitational waves were detected by the LIGO experiment — and they demonstrated not only that gravitational waves exist, which are ripples in space and time that Einstein’s theory forecasted, but also that the sources of these gravitational waves were black holes, which are also a prediction of Einstein’s theory.
Obviously Einstein was very visionary, but also in a sense, he had peers — people like Karl Schwarzschild and Edwin Hubble — who were doing work that would help him test and corroborate his theories. I’ve wondered, say, if Einstein were born 30 years later, would someone else have figured out relativity, and the photoelectric effect, and so on?
That’s a good question. Physics is about nature, right? So we’re trying to learn about nature. We’re trying to understand nature and you know, so, in that sense, we collect data and eventually someone comes up with the right idea. The question is, how long does that take? What I’m saying is, I believe that the same ideas would have been developed.
I don’t know how close to the time that Einstein thought about them, but eventually... it would take maybe a few more decades or something. But the most important thing is, I think it would have been fragmented. So, you know, many of the things that Einstein personally was responsible for — like there are at least 10 touchstones in physics where each of them is a major intellectual achievement — they would be discovered by different people. So the fact that he came up with all of them illustrates his genius.
But you know, if you look at people that got the Nobel prize, there are many examples of people that got it once for one major discovery, and that’s pretty much what they did for their life. Either they did it early on in their life or late, but it doesn’t matter. And that’s not true about Einstein. So he didn’t only deviate from the beaten path and come up with original ideas, but he did it multiple times. And by that, you know, he contributed a great deal to humanity. For example, his general theory of relativity — this idea that space, time and gravity are connected.
It seems like physics has changed between Einstein’s day and now. Most of the underlying physical principles of our universe appear to have been well-defined and tested by now — say, the standard model of particle physics, or relativity and gravitation. And a lot of advances happen now because of data from huge teams working on government-funded instruments. Given the landscape of physics, is it actually possible that there could be somebody else like Einstein nowadays, someone who revolutionizes the whole field? Or do you think things have sort of fundamentally changed — both in terms of funding of experiments and of our understanding of the universe — so that such a thing is no longer possible?
I mean, we do have much bigger experiments as you said, and much more data in some fields. But we still need people that think about the blueprint of physics, that think about the fundamental assumptions that everyone else is making that might be wrong. We need critical thinking. And there are some dark clouds on the horizon, just as there were 150 years ago. You know, back then it was the blackbody radiation. And people at the time thought, “well, we just need to clarify that dark cloud, and then we finish physics.” [Editor’s note: in the 1890s, the fact that objects glowed different colors as they heated up was one of the great mysteries of physics. It turned out to be related to quantum mechanics, the study of which prompted an ongoing revolution in physics.]
And right now there are some dark clouds, too, you know. Like, there is the nature of dark matter, or the nature of the cosmological constant, or that we don’t know where the vacuum gets its energy from. People will tell you, “oh, these are just minute details. You know, we just need to figure out which particle makes the dark matter, it’s just another particle. It has some weak interaction, and that’s pretty much it. And the dark energy, you know, it’s just the vacuum energy density; for some reason it’s small, maybe because otherwise we wouldn’t exist here.” You know, we can give each other awards and celebrate the end of physics.
I think it’s pretty much similar [to the 19th century situation]. And I think there, there is a very good chance that we are missing some very important ingredients that a brilliant person might recognize in the coming years, in the coming decades.
What are some of the “dark clouds” in physics, as you say?
One of the challenges is unifying quantum mechanics and gravity. So you have this huge contingent of string theorists who agree among themselves that they are leading the frontier, but nevertheless, they haven’t provided any concrete predictions that can be tested by experiments over the past 40 years. [Editor’s note: String theory unifies quantum mechanics and gravity, but it is, as Loeb mentions, not testable as far as anyone knows.]
[String theorists] are still advocating that they’re the smartest physicists — although they’re not doing physics, because in my book, physics is about testing your ideas against reality, with experiments. And, you know, I very much believe that you should put your theory to the guillotine of experimental data, and it may cut its head off. But if you don’t risk your theory by testing it, you can be very proud of yourself. The only way that you maintain your humility is by recognizing that there is something superior to your ideas, which is nature. And it’s a learning experience where you’re not supposed to know everything in advance.
And that’s unfortunately not popular these days. Today, it’s all about impressing each other. And that’s part of social media, you know, trying to impress other people to say things that look smart, that look very intelligent, that completely align with what everyone else is saying so that they will like you, that you would have more likes on Twitter. Okay. So that’s the motivation, so that you can get more awards, more grants so that you can get a tenure appointment and everyone would respect you.
That’s wrong. That was clearly not the motivation of Einstein. He was not trying to be liked, and that’s why he was working in a patent office. But his ideas happened to be right. And in a way he was naive in that sense, but that’s the right approach — you should be always learning.
So I would say there is the same potential — even greater now — because we are at a time when we recognize the success of physics. It has a huge impact on the economy, on politics, and so forth. So we recognize that — but if you look at the frontiers of physics, which is blue sky research, you know, it’s supposed to be open minded — but it’s not open-minded. There are groups of people, entrenched in ideas that will never be tested and they believe that they’re leading the frontier.
Right. So are you saying that the premise of some of the major experiments might even be wrong? Like, all the prominent dark matter experiments are trying to find this weakly-interacting, supersymmetric particle, but even that assumption may be wrong?
So here is an example: Supersymmetry, you know, that was an idea advocated for decades now. [Editor’s note: Supersymmetry is the theory that for every fundamental particle, there is a “partner” particle; so for the electron, there would be a supersymmetric “selectron,” and for the top quark, there would be a supersymmetric “squark,” and so on. Dark matter is theorized to be made of one of these particles. Yet none of the supersymmetric particles have ever been observed.] And people celebrated this idea, and gave each other awards. The Large Hadron Collider in CERN was supposed to detect the lightest supersymmetric particles — and it didn’t. There’s no evidence for supersymmetry.
So obviously what people say is, “oh, maybe it’s around the corner.” But it’s already ruled out — the most natural versions of supersymmetry are ruled out. So here’s an idea that was celebrated as part of the mainstream — not only celebrated, but it was the foundation for string theory. So they put it as a building block: “We know it exists, put it as a brick at the bottom of the tower that we are building called string theory, called superstring theory. And let’s assume that we know it; it’s completely trivial, experimentalists will eventually find it, we don’t even need to think about it — let’s put it as a building block of our tower.”
Doesn’t exist. The LHC [Large Hadron Collider] didn’t find it. So then, people say, “okay, weakly interacting massive particles are dark matter” — but for decades, they haven’t found anything. [Editor’s note: One prominent theory to explain dark matter is that it consists of particles that are heavy but rarely interact with normal matter, though they bounce off of themselves and have a gravitational interaction. Most of the major experiments searching for dark matter are attempting to find this type of weakly interacting massive particle, or WIMP for short.]
And so I asked an experimentalist, “how long will you continue to search for WIMPs, these weakly interacting particles, since the limits are orders of magnitude below the expectation?” And he said, “I will continue to search for WIMPs as long as I get funding.”
So in the mainstream approach, there is this stubbornness — like, we stick to the ideas that we believe in. And then anyone that deviates from that will be sidelined. You know, anyone that considers any theory other than string theory for unifying quantum mechanics and gravity is sidelined, even though there is no reasonable evidence for string theory. So I would say the potential now for a breakthrough that will be really revolutionary is not smaller — it’s actually bigger right now [than it was in Einstein’s time]. It’s just, the social pressure is stronger.
So we do need — we desperately need another Einstein. There is no doubt.
Discovering life on Mars would change everything we know about life in the Solar System and far beyond.
Or would it? What if we accidentally transported life to Mars on a spacecraft? And what if that is how life moves around the Universe?
A new paper published this week in Frontiers in Microbiology explores the possibility that microbes and extremophiles may migrate between planets and distribute life around the Universe—and that includes on spacecraft sent from Earth to Mars.
What is ‘panspermia?’
It’s an untested, unproven and rather wild theory regarding the interplanetary transfer of life. It theorizes that microscopic life-forms, such as bacteria, can be transported through space and land on another planet, sparking life elsewhere.
It could happen by accident—such as on spacecraft—via comets and asteroids in the Solar System, and perhaps even between star systems on interstellar objects like ʻOumuamua.
However, for “panspermia” to have any credence requires proof that bacteria could survive a long journey through the vacuum, temperature fluctuations, and intense UV radiation in outer space.
Cue the “Tanpopo” project.
What is the ‘Tanpopo’ mission?
Tanpopo—dandelion in English—is a scientific experiment to see if bacteria can survive in the extremes of outer space.
The researchers from Tokyo University—in conjunction with Japanese national space agency JAXA—wanted to see if the bacterium Deinococcus, known for its resistance to radiation, could survive in space, so they had it placed in exposure panels on the outside of the International Space Station (ISS). Dried samples of different thicknesses were exposed to the space environment for one, two, or three years and then tested to see if any had survived.
“The results suggest that deinococcus could survive during the travel from Earth to Mars and vice versa, which is several months or years in the shortest orbit,” said Akihiko Yamagishi, a Professor at Tokyo University of Pharmacy and Life Sciences and principal investigator of Tanpopo.
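As a rough illustration of what surviving "months or years" implies, here is a toy exponential-decay extrapolation. The numbers are made-up assumptions for illustration, not the Tanpopo team's actual measurements or analysis:

```python
import math

def surviving_fraction(f_observed: float, t_observed: float, t: float) -> float:
    """Extrapolate the surviving fraction to time t, assuming simple
    exponential decay calibrated so that f_observed survives after
    t_observed years."""
    rate = -math.log(f_observed) / t_observed
    return math.exp(-rate * t)

# Hypothetical: if 10% of cells survive one year of exposure, then after a
# three-year Earth-to-Mars-class journey roughly 0.1% would remain viable.
print(surviving_fraction(0.10, 1.0, 3.0))
```

The point of the thick dried samples in the experiment is precisely that outer cell layers shield inner ones, so real survival can beat this naive single-rate model.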
That means spacecraft visiting Mars could theoretically carry microorganisms and potentially contaminate its surface.
However, this isn’t just about Earth and Mars—the ramifications of panspermia, if proven, are far-reaching.
“The origin of life on Earth is the biggest mystery of human beings (and) scientists can have totally different points of view on the matter,” said Dr. Yamagishi. “Some think that life is very rare and happened only once in the Universe, while others think that life can happen on every suitable planet.”
What is ‘lithopanspermia?’
This is the idea that bacteria could survive in space for long periods when shielded by rock—typically inside an asteroid or a comet—which could travel between planets, potentially spreading bacteria and biologically rich matter around the Solar System.
However, the theory of panspermia goes even further than that.
What is ‘interstellar panspermia’ and ‘galactic panspermia?’
This is the hypothesis—and it’s one with zero evidence—that life exists throughout the galaxy and/or Universe specifically because bacteria and microorganisms are spread around by asteroids, comets, space dust and possibly even interstellar spacecraft from alien civilizations.
In 2018 a paper concluded that the likelihood of Galactic panspermia is strongly dependent upon the survival lifetime of the organisms as well as the velocity of the comet or asteroid—positing that the entire Milky Way could potentially be exchanging biotic components across vast distances.
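A rough way to see why survival lifetime and speed matter is a toy model in which the fraction of viable organisms decays exponentially with travel time. Everything below—the decay model, the distances, the speeds, the lifetimes—is an illustrative assumption of mine, not a result from the 2018 paper:

```python
import math

KM_PER_LY = 9.461e12      # kilometres in one light-year
SECONDS_PER_YEAR = 3.156e7

def surviving_fraction(distance_ly, speed_km_s, lifetime_yr):
    """Toy model: fraction of organisms still viable after travelling
    distance_ly at speed_km_s, given a survival lifetime of lifetime_yr,
    assuming simple exponential decay exp(-t / lifetime)."""
    travel_time_yr = distance_ly * KM_PER_LY / speed_km_s / SECONDS_PER_YEAR
    return math.exp(-travel_time_yr / lifetime_yr)

# A rock ejected toward the nearest star system at an ʻOumuamua-like speed
# (~4.2 light-years at 26 km/s is a journey of roughly 48,000 years):
print(surviving_fraction(4.2, 26.0, 1e5))  # hardy cargo: tens-of-percent survival
print(surviving_fraction(4.2, 26.0, 1e3))  # short-lived cargo: essentially zero
```

Even this crude sketch reproduces the paper’s qualitative point: double the speed or double the hardiness and the odds of seeding a distant system improve dramatically.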
Such theories have gained credence in the past few years with the discovery of two interstellar objects—ʻOumuamua and Borisov—passing through our Solar System.
However, while the ramifications are mind-boggling, panspermia is definitely not a proven scientific process.
There are still many unanswered questions about how the space-surviving microbes could physically transfer from one celestial body to another.
How will Perseverance look for life on Mars?
NASA’s Perseverance rover is due to land on the red planet on February 18, 2021. It will land in a nearly four-billion-year-old river delta in Mars’ 28-mile (45-kilometer) wide Jezero Crater.
It’s thought likely that Jezero Crater was home to a lake as large as Lake Tahoe more than 3.5 billion years ago. Ancient rivers there could have carried organic molecules and possibly even microorganisms.
Perseverance’s mission will be to analyze rock and sediment samples to see if Mars may have had conditions for microorganisms to thrive. It will drill a few centimeters into Mars and take core samples, then put the most promising into containers. It will then leave them on the Martian surface to be later collected by a human mission in the early 2030s.
I’m an experienced science, technology and travel journalist interested in space exploration, moon-gazing, exploring the night sky, solar and lunar eclipses, astro-travel, wildlife conservation and nature. I’m the editor of WhenIsTheNextEclipse.com and the author of “A Stargazing Program for Beginners: A Pocket Field Guide” (Springer, 2015), as well as many eclipse-chasing guides.
In 1981, many of the world’s leading cosmologists gathered at the Pontifical Academy of Sciences, a vestige of the coupled lineages of science and theology located in an elegant villa in the gardens of the Vatican. Stephen Hawking chose the august setting to present what he would later regard as his most important idea: a proposal about how the universe could have arisen from nothing.
Before Hawking’s talk, all cosmological origin stories, scientific or theological, had invited the rejoinder, “What happened before that?” The Big Bang theory, for instance — pioneered 50 years before Hawking’s lecture by the Belgian physicist and Catholic priest Georges Lemaître, who later served as president of the Vatican’s academy of sciences — rewinds the expansion of the universe back to a hot, dense bundle of energy. But where did the initial energy come from?
The Big Bang theory had other problems. Physicists understood that an expanding bundle of energy would grow into a crumpled mess rather than the huge, smooth cosmos that modern astronomers observe. In 1980, the year before Hawking’s talk, the cosmologist Alan Guth realized that the Big Bang’s problems could be fixed with an add-on: an initial, exponential growth spurt known as cosmic inflation, which would have rendered the universe huge, smooth and flat before gravity had a chance to wreck it. Inflation quickly became the leading theory of our cosmic origins. Yet the issue of initial conditions remained: What was the source of the minuscule patch that allegedly ballooned into our cosmos, and of the potential energy that inflated it?
Hawking, in his brilliance, saw a way to end the interminable groping backward in time: He proposed that there’s no end, or beginning, at all. According to the record of the Vatican conference, the Cambridge physicist, then 39 and still able to speak with his own voice, told the crowd, “There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?”
The “no-boundary proposal,” which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock — the so-called “wave function of the universe” that encompasses the entire past, present and future at once — making moot all contemplation of seeds of creation, a creator, or any transition from a time before.
“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death. “It would be like asking what lies south of the South Pole.”
Hartle and Hawking’s proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, “We didn’t have birds in the very early universe; we have birds later on. … We didn’t have time in the early universe, but we have time later on.”
The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking’s. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing, analyzed the theories’ various predictions and ways to test them, and interpreted their philosophical meaning. The no-boundary wave function, according to Hartle, “was in some ways the simplest possible proposal for that.”
But two years ago, a paper by Turok, Job Feldbrugge of the Perimeter Institute, and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question. The proposal is, of course, only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said — “deeply ambiguous.”
In their 2017 paper, published in Physical Review Letters, Turok and his co-authors approached Hartle and Hawking’s no-boundary proposal with new mathematical techniques that, in their view, make its predictions much more concrete than before. “We discovered that it just failed miserably,” Turok said. “It was just not possible quantum mechanically for a universe to start in the way they imagined.” The trio checked their math and queried their underlying assumptions before going public, but “unfortunately,” Turok said, “it just seemed to be inescapable that the Hartle-Hawking proposal was a disaster.”
The paper ignited a controversy. Other experts mounted a vigorous defense of the no-boundary idea and a rebuttal of Turok and colleagues’ reasoning. “We disagree with his technical arguments,” said Thomas Hertog, a physicist at the Catholic University of Leuven in Belgium who closely collaborated with Hawking for the last 20 years of the latter’s life. “But more fundamentally, we disagree also with his definition, his framework, his choice of principles. And that’s the more interesting discussion.”
After two years of sparring, the groups have traced their technical disagreement to differing beliefs about how nature works. The heated — yet friendly — debate has helped firm up the idea that most tickled Hawking’s fancy. Even critics of his and Hartle’s specific formula, including Turok and Lehners, are crafting competing quantum-cosmological models that try to avoid the alleged pitfalls of the original while maintaining its boundless allure.
Garden of Cosmic Delights
Hartle and Hawking saw a lot of each other from the 1970s on, typically when they met in Cambridge for long periods of collaboration. The duo’s theoretical investigations of black holes and the mysterious singularities at their centers had turned them on to the question of our cosmic origin.
In 1915, Albert Einstein discovered that concentrations of matter or energy warp the fabric of space-time, causing gravity. In the 1960s, Hawking and the Oxford University physicist Roger Penrose proved that when space-time bends steeply enough, such as inside a black hole or perhaps during the Big Bang, it inevitably collapses, curving infinitely steeply toward a singularity, where Einstein’s equations break down and a new, quantum theory of gravity is needed. The Penrose-Hawking “singularity theorems” meant there was no way for space-time to begin smoothly, undramatically at a point.
Hawking and Hartle were thus led to ponder the possibility that the universe began as pure space, rather than dynamical space-time. And this led them to the shuttlecock geometry. They defined the no-boundary wave function describing such a universe using an approach invented by Hawking’s hero, the physicist Richard Feynman. In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this “path integral” gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision.
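Feynman’s weighting of paths falls out of the interference of phases: contributions from paths near the classical, “straightforward” trajectory have nearly equal action and add coherently, while convoluted paths have wildly varying actions and cancel. The minimal numerical sketch below illustrates this for a toy free particle in arbitrary units; it is my own illustration of the stationary-phase idea, not any part of Hartle and Hawking’s calculation:

```python
import cmath
import random

random.seed(0)

def action(path, dt=0.1, m=1.0):
    """Discretized free-particle action: S = sum of (m/2) * velocity^2 * dt."""
    return sum(0.5 * m * ((path[k + 1] - path[k]) / dt) ** 2 * dt
               for k in range(len(path) - 1))

def coherence(wiggle, n_paths=2000, n_steps=10):
    """|average of exp(iS)| over random paths: a straight line from x=0 to
    x=1 plus noise of size `wiggle`, with both endpoints held fixed.
    Aligned phases give a value near 1; cancelling phases give nearly 0."""
    straight = [k / n_steps for k in range(n_steps + 1)]
    total = 0j
    for _ in range(n_paths):
        path = [x + random.uniform(-wiggle, wiggle) for x in straight]
        path[0], path[-1] = 0.0, 1.0  # endpoints are pinned
        total += cmath.exp(1j * action(path))
    return abs(total / n_paths)

# Nearly straight paths interfere constructively; convoluted ones wash out.
print(coherence(0.01))  # close to 1
print(coherence(0.5))   # much smaller
```

The same mechanism is what lets a path integral over universes be dominated by a few “classical” expansion histories, as described below.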
Likewise, Hartle and Hawking expressed the wave function of the universe — which describes its likely states — as the sum of all possible ways that it might have smoothly expanded from a point. The hope was that the sum of all possible “expansion histories,” smooth-bottomed universes of all different shapes and sizes, would yield a wave function that gives a high probability to a huge, smooth, flat universe like ours. If the weighted sum of all possible expansion histories yields some other kind of universe as the likeliest outcome, the no-boundary proposal fails.
The problem is that the path integral over all possible expansion histories is far too complicated to calculate exactly. Countless different shapes and sizes of universes are possible, and each can be a messy affair. “Murray Gell-Mann used to ask me,” Hartle said, referring to the late Nobel Prize-winning physicist, “if you know the wave function of the universe, why aren’t you rich?” Of course, to actually solve for the wave function using Feynman’s method, Hartle and Hawking had to drastically simplify the situation, ignoring even the specific particles that populate our world (which meant their formula was nowhere close to being able to predict the stock market). They considered the path integral over all possible toy universes in “minisuperspace,” defined as the set of all universes with a single energy field coursing through them: the energy that powered cosmic inflation. (In Hartle and Hawking’s shuttlecock picture, that initial period of ballooning corresponds to the rapid increase in diameter near the bottom of the cork.)
Even the minisuperspace calculation is hard to solve exactly, but physicists know there are two possible expansion histories that potentially dominate the calculation. These rival universe shapes anchor the two sides of the current debate.
The rival solutions are the two “classical” expansion histories that a universe can have. Following an initial spurt of cosmic inflation from size zero, these universes steadily expand according to Einstein’s theory of gravity and space-time. Weirder expansion histories, like football-shaped universes or caterpillar-like ones, mostly cancel out in the quantum calculation.
One of the two classical solutions resembles our universe. On large scales, it’s smooth and randomly dappled with energy, due to quantum fluctuations during inflation. As in the real universe, density differences between regions form a bell curve around zero. If this possible solution does indeed dominate the wave function for minisuperspace, it becomes plausible to imagine that a far more detailed and exact version of the no-boundary wave function might serve as a viable cosmological model of the real universe.
The other potentially dominant universe shape is nothing like reality. As it widens, the energy infusing it varies more and more extremely, creating enormous density differences from one place to the next that gravity steadily worsens. Density variations form an inverted bell curve, where differences between regions approach not zero, but infinity. If this is the dominant term in the no-boundary wave function for minisuperspace, then the Hartle-Hawking proposal would seem to be wrong.
The two dominant expansion histories present a choice in how the path integral should be done. If the dominant histories are two locations on a map, megacities in the realm of all possible quantum mechanical universes, the question is which path we should take through the terrain. Which dominant expansion history, and there can only be one, should our “contour of integration” pick up? Researchers have forked down different paths.
In their 2017 paper, Turok, Feldbrugge and Lehners took a path through the garden of possible expansion histories that led to the second dominant solution. In their view, the only sensible contour is one that scans through real values (as opposed to imaginary values, which involve the square roots of negative numbers) for a variable called “lapse.” Lapse is essentially the height of each possible shuttlecock universe — the distance it takes to reach a certain diameter. Lacking a causal element, lapse is not quite our usual notion of time. Yet Turok and colleagues argue partly on the grounds of causality that only real values of lapse make physical sense. And summing over universes with real values of lapse leads to the wildly fluctuating, physically nonsensical solution.
“People place huge faith in Stephen’s intuition,” Turok said by phone. “For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”
Jonathan Halliwell, a physicist at Imperial College London, has studied the no-boundary proposal since he was Hawking’s student in the 1980s. He and Hartle analyzed the issue of the contour of integration in 1990. In their view, as well as Hertog’s, and apparently Hawking’s, the contour is not fundamental, but rather a mathematical tool that can be placed to greatest advantage. It’s similar to how the trajectory of a planet around the sun can be expressed mathematically as a series of angles, as a series of times, or in terms of any of several other convenient parameters. “You can do that parameterization in many different ways, but none of them are any more physical than another one,” Halliwell said.
He and his colleagues argue that, in the minisuperspace case, only contours that pick up the good expansion history make sense. Quantum mechanics requires probabilities to add to 1, or be “normalizable,” but the wildly fluctuating universe that Turok’s team landed on is not. That solution is nonsensical, plagued by infinities and disallowed by quantum laws — obvious signs, according to no-boundary’s defenders, to walk the other way.
It’s true that contours passing through the good solution sum up possible universes with imaginary values for their lapse variables. But apart from Turok and company, few people think that’s a problem. Imaginary numbers pervade quantum mechanics. To team Hartle-Hawking, the critics are invoking a false notion of causality in demanding that lapse be real. “That’s a principle which is not written in the stars, and which we profoundly disagree with,” Hertog said.
According to Hertog, Hawking seldom mentioned the path integral formulation of the no-boundary wave function in his later years, partly because of the ambiguity around the choice of contour. He regarded the normalizable expansion history, which the path integral had merely helped uncover, as the solution to a more fundamental equation about the universe posed in the 1960s by the physicists John Wheeler and Bryce DeWitt. Wheeler and DeWitt — after mulling over the issue during a layover at Raleigh-Durham International — argued that the wave function of the universe, whatever it is, cannot depend on time, since there is no external clock by which to measure it. And thus the amount of energy in the universe, when you add up the positive and negative contributions of matter and gravity, must stay at zero forever. The no-boundary wave function satisfies the Wheeler-DeWitt equation for minisuperspace.
In the final years of his life, to better understand the wave function more generally, Hawking and his collaborators started applying holography — a blockbuster new approach that treats space-time as a hologram. Hawking sought a holographic description of a shuttlecock-shaped universe, in which the geometry of the entire past would project off of the present.
That effort is continuing in Hawking’s absence. But Turok sees this shift in emphasis as changing the rules. In backing away from the path integral formulation, he says, proponents of the no-boundary idea have made it ill-defined. What they’re studying is no longer Hartle-Hawking, in his opinion — though Hartle himself disagrees.
For the past year, Turok and his Perimeter Institute colleagues Latham Boyle and Kieran Finn have been developing a new cosmological model that has much in common with the no-boundary proposal. But instead of one shuttlecock, it envisions two, arranged cork to cork in a sort of hourglass figure with time flowing in both directions. While the model is not yet developed enough to make predictions, its charm lies in the way its lobes realize CPT symmetry, a seemingly fundamental mirror in nature that simultaneously reflects matter and antimatter, left and right, and forward and backward in time. One disadvantage is that the universe’s mirror-image lobes meet at a singularity, a pinch in space-time that requires the unknown quantum theory of gravity to understand. Boyle, Finn and Turok take a stab at the singularity, but such an attempt is inherently speculative.
There has also been a revival of interest in the “tunneling proposal,” an alternative way that the universe might have arisen from nothing, conceived in the ’80s independently by the Russian-American cosmologists Alexander Vilenkin and Andrei Linde. The proposal, which differs from the no-boundary wave function primarily by way of a minus sign, casts the birth of the universe as a quantum mechanical “tunneling” event, similar to when a particle pops up beyond a barrier in a quantum mechanical experiment.
Questions abound about how the various proposals intersect with anthropic reasoning and the infamous multiverse idea. The no-boundary wave function, for instance, favors empty universes, whereas significant matter and energy are needed to power hugeness and complexity. Hawking argued that the vast spread of possible universes permitted by the wave function must all be realized in some larger multiverse, within which only complex universes like ours will have inhabitants capable of making observations. (The recent debate concerns whether these complex, habitable universes will be smooth or wildly fluctuating.) An advantage of the tunneling proposal is that it favors matter- and energy-filled universes like ours without resorting to anthropic reasoning — though universes that tunnel into existence may have other problems.
No matter how things go, perhaps we’ll be left with some essence of the picture Hawking first painted at the Pontifical Academy of Sciences 38 years ago. Or perhaps, instead of a South Pole-like non-beginning, the universe emerged from a singularity after all, demanding a different kind of wave function altogether. Either way, the pursuit will continue. “If we are talking about a quantum mechanical theory, what else is there to find other than the wave function?” asked Juan Maldacena, an eminent theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has mostly stayed out of the recent fray. The question of the wave function of the universe “is the right kind of question to ask,” said Maldacena, who, incidentally, is a member of the Pontifical Academy. “Whether we are finding the right wave function, or how we should think about the wave function — it’s less clear.”
The Large Hadron Collider, with its 27-kilometre circumference, is currently the world’s highest-energy particle collider. It’s also the largest machine ever built by human hands. But CERN, the European Organization for Nuclear Research behind the collider, is planning to build a second, even larger collider.
This one could end up being 100 km around—almost four times the size—and may cost up to $23 billion to build. It will be used to further study the Higgs boson, a particle theorized by Peter Higgs and five other scientists back in 1964 and discovered in 2012 using the Large Hadron Collider.
Fabiola Gianotti, Director General of CERN, talks about the past, present and future of the biggest particle accelerator on the planet—the 27 km-circumference Large Hadron Collider. The LHC, which is currently being upgraded, will operate until 2037. In this film, Fabiola also discusses a proposed linear collider that would smash together electrons and positrons in a tunnel up to 50 km long, and the Future Circular Collider, a 100 km ring for electron-positron and proton-proton collisions.
The idea of a parallel Universe that runs alongside our own sure is alluring. What is a parallel Universe? It’s a place of alternate realities, where different choices are made and different outcomes persist. It’s part of the “multiverse” where an infinite number of parallel Universes exist in infinite space-time. That’s the theory.
Or is it just some strange ice formations?
A cacophony of articles on the possibility of the discovery of a parallel Universe were published last month in the wake of a New Scientist article that contained some “out there” claims about some scientific research in Antarctica.
Now a new research paper provides a much more down-to-earth explanation for the two recent strange events that occurred in Antarctica—it was compacted snow and, possibly, underground lakes, that caused some unexpected radio pulses to be misinterpreted.
First, let’s examine the event that caused the “panic” about a parallel Universe in the first place. In both 2016 and 2018, high-energy neutrinos appeared to come up out of the Earth of their own accord and head skyward.
The ANtarctic Impulsive Transient Antenna (ANITA) experiment used radio antennas on high-altitude balloons above the South Pole to search for the radio pulses of ultra-high-energy cosmic rays and neutrinos coming from space.
High-energy neutrinos are minute particles able to pass through virtually everything—including our planet. Some of them are created by exploding stars and gamma-ray bursts. The scientists in Antarctica discovered radio pulses that indicated high-energy neutrinos coming upward out of the ground, which led to various explanations, including that the pulses were:
neutrinos that passed through the Earth’s core and then came out of the ground.
a “fourth neutrino” known as the sterile neutrino—which would be a completely new discovery.
an unknown frontier of particle physics and astrophysics.
The out-there parallel-Universe theory comes from the absence of a good explanation. That’s partly because a check on ANITA’s results was carried out by the IceCube neutrino detector in Antarctica; it found nothing.
Cue the possibility of a parallel Universe because maybe, just maybe, something “exotic” is going on. Since the high-energy neutrinos were detected coming “up” from the Earth instead of “down” from space, they may be traveling back in time and, therefore, could be from a—you guessed it—parallel Universe. The idea goes that when the Big Bang occurred, it formed two Universes: one in which time flows forward, the other in which it runs in reverse.
A theory with zero evidence.
The new paper—published today in the journal Annals of Glaciology—suggests that the pulses were reflections off strange ice formations: specifically, unflipped reflections of ultra-high-energy cosmic rays that arrive from space, miss the top layer of ice, then enter the ground and strike deep, compacted snow.
In short, the culprit could be firn under the surface of the ice. “Firn is something between snow and glacial ice,” said lead author Ian Shoemaker, an assistant professor in the Department of Physics and the Center for Neutrino Physics, both part of the Virginia Tech College of Science. “It’s compacted snow that’s not quite dense enough to be ice.” Classified as crystalline or granular snow, it’s often found on the upper part of a glacier.
“When cosmic rays, or neutrinos, go through ice at very high energies, they scatter on materials inside the ice, on protons and electrons, and they can make a burst of radio, a big nice radio signal that scientists can see,” said Shoemaker. Cosmic rays are high-energy protons and atomic nuclei that move through space at nearly the speed of light. “The problem is that these signals have the radio pulse characteristic of a neutrino, but appear to be traversing vastly more than is possible given known physics.”
“Our idea is that part of the radio pulse from a cosmic ray can get deep into the ice before reflecting, so you can have the reflection without the phase flip. Without flipping the wave, in that case, it really looks like a neutrino.”
Ordinary neutrinos just don’t do that, but cosmic rays at these energies are common.
However, it’s not quite as simple as all that. “You can have density inversions, with ranges where you go from high density back to low density, and those crucial sorts of interfaces where this reflection can happen and could explain these events,” said Shoemaker.
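The role of the phase flip can be made concrete with the normal-incidence Fresnel reflection coefficient from basic optics. This is a generic illustration, not code from Shoemaker’s paper, and the refractive indices are rough assumed values for solid ice and firn:

```python
def reflection_coefficient(n1, n2):
    """Amplitude reflection coefficient at normal incidence, going from a
    medium of refractive index n1 into one of index n2. A negative value
    means the reflected wave's phase is flipped by 180 degrees."""
    return (n1 - n2) / (n1 + n2)

# Assumed, approximate indices: solid ice ~1.78, less-dense firn ~1.4.
r_up = reflection_coefficient(1.4, 1.78)   # firn -> denser ice: r < 0, phase flips
r_down = reflection_coefficient(1.78, 1.4) # ice -> less-dense firn (a density
                                           # inversion): r > 0, no phase flip
print(r_up, r_down)
```

A cosmic-ray pulse reflecting at a density inversion—dense ice sitting above less-dense firn—keeps its phase, which is exactly what makes it mimic the signature of an upward-going neutrino.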
That doesn’t mean that the scientists in Antarctica found nothing of interest. “Whatever ANITA has found, it is very interesting, but it may not be a Nobel Prize-winning particle physics discovery,” said Shoemaker, who thinks the scientists may nevertheless have found something interesting about glaciology. “It could be that ANITA discovered some unusual small glacial lakes,” he added.
It’s not known how many deep underground lakes there are under Antarctica; if it turns out there are lots, this discovery would be a big win for scientists.
So Shoemaker is proposing that instead of looking for high-energy neutrinos, his team will purposefully blast radio signals into the areas where the anomalies occurred to look for lakes.
It’s a plan that itself seems to have come straight from a parallel Universe, but that’s science for you.
If you were to go as far out into space as you can imagine, what would you encounter? Would there be a limit to how far you could go, or could you travel a limitless distance? Would you eventually return to your starting point, or would you continue to traverse space that you had never encountered before? In other words, does the Universe have an edge, and if so, where is it?
Believe it or not, there are actually three different ways to think about this question, and each one has a different answer. If you consider how far you could go if you:
left today in an arbitrarily powerful rocket,
considered everything that could ever contact us or be contacted by us from the start of the hot Big Bang,
or used your imagination alone to access the entire Universe, including beyond what will ever be observable,
you can figure out how far it is to the edge. In each case, the answer is fascinating.
The key concept to keep in mind is that space isn’t how we normally conceive of it. Conventionally, we think about space as being like a coordinate system — a three-dimensional grid — where the shortest distance between two points is a straight line, and where distances don’t change over time.
But both of those assumptions, so thoroughly good in our everyday lives, fail spectacularly when we begin looking at the larger-scale Universe beyond our own planet. For starters, the idea that the shortest distance between two points is a straight line falls apart as soon as you start introducing masses and energetic quanta into your Universe. Because the presence of matter and energy curves spacetime, the shortest distance between two points depends on the shape of the Universe between them.
In addition to that, the fabric of spacetime itself does not remain static over time. In a Universe filled with matter and energy, a static, unchanging Universe (where distances between points remain the same over time) is inherently unstable; the Universe must evolve by either expanding or contracting. If Einstein’s General theory of Relativity is correct, this is mandatory.
Observationally, the evidence that our Universe is expanding is overwhelming: a spectacular validation for Einstein’s predictions. But this carries with it a series of consequences for objects separated by cosmic distances, including that the distance between them expands over time. Today, the most distant objects we can see are more than 30 billion light-years away, despite the fact that only 13.8 billion years have passed since the Big Bang.
[Figure: The farther a galaxy is, the faster it expands away from us and the more its light appears… Credit: Larry McNish of RASC Calgary Center]
When we measure how distant a variety of objects are from their physical and luminous properties — along with the amount that their light has been shifted by the Universe’s expansion — we can come to understand what the Universe is made of. Our cosmic cocktail, at present, consists of:
0.01% radiation in the form of photons,
0.1% neutrinos, an elusive, low-mass particle almost as numerous as photons,
4.9% normal matter, made mostly of the same stuff we are: protons, neutrons, and electrons,
27% dark matter, an unknown substance that gravitates but neither emits nor absorbs light,
and 68% dark energy, which is the energy inherent to space that causes distant objects to accelerate in their recession from us.
When you combine these effects together, you get a unique and unambiguous prediction for how far it is, at all times past and present, to the edge of the observable Universe.
[Figure: A graph of the size/scale of the observable Universe vs. the passage of cosmic time…]
This is a big deal! Most people assume that if the Universe has been around for 13.8 billion years since the Big Bang, then the limit to how far we can see will be 13.8 billion light-years, but that’s not quite right.
Only if the Universe were static and not expanding would this be true. In fact, the farther away we look, the faster distant objects appear to recede from us. The rate of that expansion changes in a way that is predictable based on what’s in the Universe, and in turn, knowing what’s in the Universe and observing how fast objects recede tells us how far away they are. When we take all of the available data together, we arrive at a unique value for the distance to the observable cosmic horizon: 46.1 billion light-years.
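This horizon distance can be reproduced with a short numerical integration of the expansion history, using ingredient fractions like the cosmic cocktail quoted above. The following is a minimal sketch; the specific inputs (H0 ≈ 67.7 km/s/Mpc and density fractions rounded to Ωr ≈ 9.1 × 10⁻⁵, Ωm ≈ 0.31, ΩΛ ≈ 0.69) are illustrative, Planck-like assumptions rather than figures quoted in the text:

```python
# Comoving distance to the cosmic horizon:
#   D = (c / H0) * integral_0^1 da / sqrt(omega_r + omega_m*a + omega_de*a^4)
# Integrated with the substitution a = u^2, which smooths the integrand
# near the Big Bang (a -> 0), using a simple midpoint rule.
import math

H0 = 67.7          # Hubble constant in km/s/Mpc (assumed, Planck-like)
c = 299_792.458    # speed of light in km/s
omega_r, omega_m, omega_de = 9.1e-5, 0.31, 0.69  # radiation, matter, dark energy

def integrand(u):
    a = u * u
    return 2 * u / math.sqrt(omega_r + omega_m * a + omega_de * a ** 4)

N = 200_000
integral = sum(integrand((i + 0.5) / N) for i in range(N)) / N

mpc_to_gly = 3.2616e-3  # 1 megaparsec in billions of light-years
horizon_gly = (c / H0) * integral * mpc_to_gly
print(f"Distance to the cosmic horizon: {horizon_gly:.1f} billion light-years")
# prints a value close to the article's 46.1 billion light-years
```

The integral itself comes out to roughly 3.2 Hubble radii, which is why the horizon sits so far beyond the naive 13.8-billion-light-year guess.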
[Figure: The observable Universe might be 46 billion light years in all directions from our point of view… Credit: Frédéric MICHEL and Andrew Z. Colvin, annotated by E. Siegel]
This boundary, however, is not an “edge” to the Universe in any conventional sense of the word. It is not a boundary in space at all; if we happened to be located at any other point in space, we would still be able to detect and observe everything around us within that 46.1 billion light-year sphere centered on us.
This is because that “edge” is a boundary in time, rather than in space. It represents the limit of what we can see because the speed of light — even in an expanding Universe governed by General Relativity — only allows signals to travel so far over the Universe’s 13.8-billion-year history. This distance is farther than 13.8 billion light-years because of the Universe’s expansion, but it’s still finite. However, we cannot reach all of it.
[Figure: The size of our visible Universe (yellow), along with the amount we can reach (magenta). If we… Credit: E. Siegel, based on work by Wikimedia Commons users Azcolvin 429 and Frédéric MICHEL]
Beyond a certain distance, we can see some of the light that was emitted long ago, but we will never see the light that is being emitted right now, 13.8 billion years after the Big Bang. And beyond a specific distance — calculated (by me) to be approximately 18 billion light-years away at present — even a signal moving at the speed of light will never reach us.
Similarly, that means that if we were in an arbitrarily high-powered rocket ship, all of the objects presently contained within this 18 billion light-year radius would be eventually reachable by us, even as the Universe continued to expand and these distances continued to increase. However, the objects beyond that would never be reachable. Even as we achieved greater and greater distances, they would recede faster than we could ever travel, preventing us from visiting them for all eternity. Already, 94% of all the galaxies in the observable Universe are beyond our eternal reach.
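The 94% figure follows from simple geometry: the reachable volume scales as the cube of the ratio of the two radii (18 vs. 46.1 billion light-years). A quick check, using only numbers from the article:

```python
# Fraction of the observable Universe's volume we could ever reach:
# volume scales as radius cubed, so cube the ratio of the two distances.
reach_radius_gly = 18.0        # farthest ever reachable, per the article
observable_radius_gly = 46.1   # distance to the cosmic horizon

reachable = (reach_radius_gly / observable_radius_gly) ** 3
print(f"Reachable: {reachable:.1%}, forever out of reach: {1 - reachable:.1%}")
# about 6% reachable, about 94% forever out of reach
```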
[Figure: As vast as our observable Universe is and as much as we can see, it’s far more than we can ever… Credit: NASA, ESA, R. Windhorst, S. Cohen, and M. Mechtley (ASU), R. O’Connell (UVa), P. McCarthy (Carnegie Obs), N. Hathi (UC Riverside), R. Ryan (UC Davis), & H. Yan (tOSU)]
And yet, there is a different “edge” that we might want to consider: beyond the limits of what we can observe today, or even what we can potentially observe arbitrarily far into the future, if we run our theoretical clock towards infinity. We can consider how large the entire Universe is — the unobservable Universe — and whether it folds in on itself or not.
The way we can answer this is based on an extrapolation of what we observe when we try to measure the spatial curvature of the Universe: the amount that space is curved on the largest scale we can possibly observe. If the Universe is positively curved, parallel lines will converge and the three angles of a triangle will sum to more than 180 degrees. If the Universe is negatively curved, parallel lines will diverge and the three angles of a triangle will sum to less than 180 degrees. And if the Universe is flat, parallel lines will remain parallel, and all triangles will contain 180 degrees exactly.
[Figure: The angles of a triangle add up to different amounts depending on the spatial curvature present… Credit: NASA / WMAP science team]
The way we do this is to take the most distant signals of all, such as the light that’s left over from the Big Bang, and examine in detail how the fluctuations are patterned. If the Universe is curved in either a positive or a negative direction, the fluctuation patterns that we observe will wind up distorted to appear on either larger or smaller angular scales, as opposed to a flat Universe.
When we take the best data available, which comes from both the cosmic microwave background’s fluctuations and the details of how galaxies cluster together on large scales at a variety of distances, we arrive at an inescapable conclusion: the Universe is indistinguishable from perfect spatial flatness. If it is curved, it’s at a level that’s no more than 0.4%, meaning that if the Universe is curved like a hypersphere, its radius is at least ~250 times larger than the part that’s observable to us.
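That ~250× bound directly sets a floor on the total size of a curved Universe: multiplying it by the observable radius gives the minimum radius of curvature (the same arithmetic behind the "at least 11,500 billion light-years" figure later in the piece).

```python
# Minimum size of a positively curved (hypersphere) Universe, given the
# 0.4% flatness limit: radius of curvature >= ~250 x the observable radius.
observable_radius_gly = 46.1   # billion light-years, from the article
curvature_factor = 250         # lower bound implied by the flatness measurement
minimum_radius_gly = curvature_factor * observable_radius_gly
print(f"Minimum radius of curvature: {minimum_radius_gly:,.0f} billion light-years")
# about 11,525, i.e. at least ~11,500 billion light-years
```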
[Figure: The magnitudes of the hot and cold spots, as well as their scales, indicate the curvature of the… Credit: Smoot Cosmology Group / LBL]
If you define the edge of the Universe as the farthest object we could ever reach if we began our journey immediately, then our present limit is a mere distance of 18 billion light-years, encompassing just 6% of the volume of our observable Universe. If you define it as the limit of what we can observe a signal from — who we can see and who can see us — then the edge goes out to 46.1 billion light-years. But if you define it as the limits of the unobservable Universe, the only limit we have is that it’s at least 11,500 billion light-years in size, and it could be even larger.
This doesn’t necessarily mean that the Universe is infinite, though. It could be flat and still curve back on itself, with a donut-like shape known mathematically as a torus. As large and expansive as the observable Universe is, it’s still finite, with a finite amount of information to teach us. Beyond that, the ultimate cosmic truths still remain unknown to us.
[Figure: In a hypertorus model of the Universe, motion in a straight line will return you to your original…]
When Danielle Wuchenich hatched the idea for measurement startup Liquid Instruments, she was not chasing worldly success but a faster process for discovering the secrets of space. Her solution—a tool which jams 12 different electrical signal and frequency instruments into a single device—ended up being useful on Earth, with Apple, NASA and Texas Instruments employing the tool to ensure that the electronics they’re developing work.
Now Liquid Instruments’ chief strategy officer, Wuchenich was a graduate student at Australian National University, working on creating a tool called a phasemeter to measure gravitational waves in space, something only of use to high-level researchers. But in conducting the routine electrical measurements required for her research, she encountered a problem:
Every time she wanted to measure voltage over time, signal frequency or signal transmission, Wuchenich had to rely on separate devices with separate software and user interfaces, each with hefty price tags. To avoid this headache, Wuchenich programmed the high-tech phasemeter to do multiple kinds of measurements. In so doing, Wuchenich landed on a universally viable application for an otherwise esoteric product.
Over three years, a twelve-person founding team—consisting of Wuchenich, her lab mates and her principal investigator, now CEO, Daniel Shaddock—turned the prototype into a product. Liquid Instruments began selling its device, dubbed Moku:Lab, in 2017: an 8-inch tool the company argues is not only more efficient than the competition, but cheaper. Moku:Lab costs $6,500, whereas the tools the device replaces together cost up to $60,000, the company estimates. Shaddock says the product has the potential to fundamentally change the test and measurement industry.
“In the old days you had a typewriter for writing letters and a calculator for calculating. And they did the job pretty well. Then along came the computer, and it can write letters, it can calculate things, but it can do a whole lot more,” says Shaddock. “We’ve stumbled upon the formula for the computer for the test and measurement industry.”
So far, investors and scientists are buying it. The startup has raised $10.1 million from Anzu Partners, ANU Connect Ventures and Australian Capital Ventures Limited at a valuation of $33.7 million, with its 2018 revenue coming to around $750,000, according to Wuchenich. And Liquid boasts some big-name customers, including NASA, Texas Instruments, Apple and Nvidia.
Despite this early success, Robert W. Baird & Co. analyst Richard Eastman says Liquid Instruments faces a tough challenge breaking into an oligopoly dominated by five major companies—Keysight, Rohde & Schwarz, Tektronix, National Instruments and Anritsu. With several of these large players also selling single pieces of hardware that can make multiple measurements, Eastman is skeptical Liquid Instruments can make a dent. “I’m not sure it looks disruptive,” Eastman says.
Also, Liquid Instruments will need to prove it offers comparable precision to its rivals. J. Max Cortner, president of the Instrument & Measurement Society, says while Liquid Instruments offers a unique product, its specs are in mainstream ranges, which may not be good enough for its customer base of highly trained researchers. “That’s going to be their dividing line, their frontier. How do they expand this easy-to-use concept into the physical extremes?” Cortner says.
Wuchenich is hoping Moku:Lab’s ready-to-use software and a reprogrammable chip called an FPGA (field-programmable gate array) will separate it from the competition. She notes that whatever Liquid Instruments loses on precision, it more than compensates for with its low price point. “Bottom line—customers don’t want/can’t afford to overpay for specs they don’t need,” she wrote in an email.
It’ll be an uphill battle for a small startup like Liquid Instruments to compete with behemoths whose customers have been loyal for decades. But for Colonel Brian Neff, who heads the department of electrical engineering at the U.S. Air Force Academy and uses Moku:Lab to train his students, Liquid Instruments is a formidable challenger.
“There are advantages to this new way of thinking that I’d love to see some of the other players adopt, and if they don’t adopt, then I think that’s just more promising for a company like Liquid Instruments to be able to come in and innovate a solution that hasn’t really been done to this point,” Neff says.
I cover billionaires and venture capital for Forbes. I’ve covered startups and debates in the business world for Inc. and breaking news for The Associated Press, WBUR and Metro Boston. I recently graduated from Tufts University, where I served as editor-in-chief of The Tufts Daily.
Compared to the unsolved mysteries of the universe, far less gets said about one of the most profound facts to have crystallized in physics over the past half-century: To an astonishing degree, nature is the way it is because it couldn’t be any different. “There’s just no freedom in the laws of physics that we have,” said Daniel Baumann, a theoretical physicist at the University of Amsterdam.
Since the 1960s, and increasingly in the past decade, physicists like Baumann have used a technique known as the “bootstrap” to infer what the laws of nature must be. This approach assumes that the laws essentially dictate one another through their mutual consistency — that nature “pulls itself up by its own bootstraps.” The idea turns out to explain a huge amount about the universe.
When bootstrapping, physicists determine how elementary particles with different amounts of “spin,” or intrinsic angular momentum, can consistently behave. In doing this, they rediscover the four fundamental forces that shape the universe. Most striking is the case of a particle with two units of spin: As the Nobel Prize winner Steven Weinberg showed in 1964, the existence of a spin-2 particle leads inevitably to general relativity — Albert Einstein’s theory of gravity. Einstein arrived at general relativity through abstract thoughts about falling elevators and warped space and time, but the theory also follows directly from the mathematically consistent behavior of a fundamental particle.
“I find this inevitability of gravity [and other forces] to be one of the deepest and most inspiring facts about nature,” said Laurentiu Rodina, a theoretical physicist at the Institute of Theoretical Physics at CEA Saclay who helped to modernize and generalize Weinberg’s proof in 2014. “Namely, that nature is above all self-consistent.”
How Bootstrapping Works
A particle’s spin reflects its underlying symmetries, or the ways it can be transformed that leave it unchanged. A spin-1 particle, for instance, returns to the same state after being rotated by one full turn. A spin-1/2 particle must complete two full rotations to come back to the same state, while a spin-2 particle looks identical after just half a turn. Elementary particles can only carry 0, 1/2, 1, 3/2 or 2 units of spin.
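The rotation behavior described above follows one rule: a particle of spin s returns to the same quantum state after a rotation of 360°/s. A tiny illustration (spin-0 is omitted, since a spinless particle is unchanged by any rotation):

```python
# A spin-s particle returns to its original state after rotating 360/s degrees:
# spin-1/2 needs two full turns (720 deg), spin-2 only half a turn (180 deg).
for spin in (0.5, 1, 1.5, 2):
    print(f"spin-{spin}: identical after {360 / spin:.0f} degrees")
```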
To figure out what behavior is possible for particles of a given spin, bootstrappers consider simple particle interactions, such as two particles annihilating and yielding a third. The particles’ spins place constraints on these interactions. An interaction of spin-2 particles, for instance, must stay the same when all participating particles are rotated by 180 degrees, since they’re symmetric under such a half-turn.
Interactions must obey a few other basic rules: Momentum must be conserved; the interactions must respect locality, which dictates that particles scatter by meeting in space and time; and the probabilities of all possible outcomes must add up to 1, a principle known as unitarity. These consistency conditions translate into algebraic equations that the particle interactions must satisfy. If the equation corresponding to a particular interaction has solutions, then these solutions tend to be realized in nature.
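Unitarity, the last of those conditions, can be illustrated in miniature: any unitary matrix acting as a toy "scattering" operator preserves total probability. The sketch below is my own illustration rather than a real S-matrix calculation; it uses a 2×2 rotation matrix as the stand-in and checks that the outgoing probabilities still sum to 1:

```python
# Toy unitarity check: a 2x2 rotation matrix stands in for a scattering
# matrix S. Since S is unitary (here, real orthogonal), it preserves the
# total probability |amplitude|^2 summed over all outcomes.
import math

theta = 0.3  # arbitrary mixing angle for the toy S-matrix
S = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# A normalized two-component input state (complex amplitudes).
state = [1 / math.sqrt(2), 1j / math.sqrt(2)]

# Scatter: apply S to the state vector.
out = [sum(S[i][j] * state[j] for j in range(2)) for i in range(2)]

total_probability = sum(abs(amp) ** 2 for amp in out)
print(f"Total probability after scattering: {total_probability:.6f}")
# prints 1.000000 for any angle theta
```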
For example, consider the case of the photon, the massless spin-1 particle of light and electromagnetism. For such a particle, the equation describing four-particle interactions — where two particles go in and two come out, perhaps after colliding and scattering — has no viable solutions. Thus, photons don’t interact in this way. “This is why light waves don’t scatter off each other and we can see over macroscopic distances,” Baumann explained. The photon can participate in interactions involving other types of particles, however, such as spin-1/2 electrons. These constraints on the photon’s interactions lead to Maxwell’s equations, the 154-year-old theory of electromagnetism.
Or take gluons, particles that convey the strong force that binds atomic nuclei together. Gluons are also massless spin-1 particles, but they represent the case where there are multiple types of the same massless spin-1 particle. Unlike the photon, gluons can satisfy the four-particle interaction equation, meaning that they self-interact. Constraints on these gluon self-interactions match the description given by quantum chromodynamics, the theory of the strong force.
A third scenario involves spin-1 particles that have mass. Mass came about when a symmetry broke during the universe’s birth: A constant — the value of the omnipresent Higgs field — spontaneously shifted from zero to a positive number, imbuing many particles with mass. The breaking of the Higgs symmetry created massive spin-1 particles called W and Z bosons, the carriers of the weak force that’s responsible for radioactive decay.
Then “for spin-2, a miracle happens,” said Adam Falkowski, a theoretical physicist at the Laboratory of Theoretical Physics in Orsay, France. In this case, the solution to the four-particle interaction equation at first appears to be beset with infinities. But physicists find that this interaction can proceed in three different ways, and that mathematical terms related to the three different options perfectly conspire to cancel out the infinities, which permits a solution.
That solution is the graviton: a spin-2 particle that couples to itself and all other particles with equal strength. This evenhandedness leads straight to the central tenet of general relativity: the equivalence principle, Einstein’s postulate that gravity is indistinguishable from acceleration through curved space-time, and that gravitational mass and intrinsic mass are one and the same. Falkowski said of the bootstrap approach, “I find this reasoning much more compelling than the abstract one of Einstein.”
Thus, by thinking through the constraints placed on fundamental particle interactions by basic symmetries, physicists can understand the existence of the strong and weak forces that shape atoms, and the forces of electromagnetism and gravity that sculpt the universe at large.
In addition, bootstrappers find that many different spin-0 particles are possible. The only known example is the Higgs boson, the particle associated with the symmetry-breaking Higgs field that imbues other particles with mass. A hypothetical spin-0 particle called the inflaton may have driven the initial expansion of the universe. These particles’ lack of angular momentum means that fewer symmetries restrict their interactions. Because of this, bootstrappers can infer less about nature’s governing laws, and nature itself has more creative license.
Spin-1/2 matter particles also have more freedom. These make up the family of massive particles we call matter, and they are individually differentiated by their masses and couplings to the various forces. Our universe contains, for example, spin-1/2 quarks that interact with both gluons and photons, and spin-1/2 neutrinos that interact with neither.
The spin spectrum stops at 2 because the infinities in the four-particle interaction equation kill off all massless particles that have higher spin values. Higher-spin states can exist if they’re extremely massive, and such particles do play a role in quantum theories of gravity such as string theory. But higher-spin particles can’t be detected, and they can’t affect the macroscopic world.
Spin-3/2 particles could complete the 0, 1/2, 1, 3/2, 2 pattern, but only if “supersymmetry” is true in the universe — that is, if every force particle with integer spin has a corresponding matter particle with half-integer spin. In recent years, experiments have ruled out many of the simplest versions of supersymmetry. But the gap in the spin spectrum strikes some physicists as a reason to hold out hope that supersymmetry is true and spin-3/2 particles exist.
In his work, Baumann applies the bootstrap to the beginning of the universe. A recent Quanta article described how he and other physicists used symmetries and other principles to constrain the possibilities for those first moments.
It’s “just aesthetically pleasing,” Baumann said, “that the laws are inevitable — that there is some inevitability of the laws of physics that can be summarized by a short handful of principles that then lead to building blocks that then build up the macroscopic world.”