Fusion Power Is a Reason To Be Excited About The Future of Clean Energy

Fusion energy is perhaps the longest of long shots. To build a fusion reactor is essentially to create an artificial star. Scientists have been studying the physics of fusion for a century and working to harness the process for decades. Yet almost every time researchers make an advance, the goal posts seem to recede even farther into the distance.

Still, the enormous potential of fusion makes it hard to ignore. It’s a technology that could safely provide an immense and steady torrent of electricity, harnessing abundant fuel made from seawater to ignite the same reaction that powers the sun. It would produce no greenhouse gases and minimal waste compared to conventional energy sources.

With global average temperatures rising and energy demands growing, the quest for fusion is timelier than ever: It could help solve both these problems at the same time. But despite its promise, fusion is often treated as a scientific curiosity rather than a must-try moonshot — an actual, world-changing solution to a massive problem.

The latest episode of Unexplainable, Vox’s podcast about unsolved mysteries in science, asks scientists about their decades-long pursuit of a star in a bottle. They talk about their recent progress and why fusion energy remains such a challenge. And they make the case for not only continuing fusion research, but aggressively expanding and investing in it — even if it won’t light up the power grid anytime soon.

With some of the most powerful machines ever built, scientists are trying to refine delicate, subatomic mechanics to achieve a pivotal milestone: getting more energy out of a fusion reaction than they put in. Researchers say they are closer than ever.

Fusion is way more powerful than any other energy source we have

Nuclear fission is what happens when big atoms like uranium and plutonium split apart and release energy. These reactions powered the very first atomic bombs, and today they power conventional nuclear reactors.

Fusion is even more potent. It’s what happens when the nuclei of small atoms stick together, fusing to create a new element and releasing energy. The most common approach fuses two hydrogen nuclei, typically the heavy isotopes deuterium and tritium, to create helium.

The reason that fusion generates so much energy is that the new element weighs a smidgen less than the sum of its parts. That tiny bit of lost matter is converted into energy according to Albert Einstein’s famous formula, E = mc². “E” stands for energy and “m” stands for mass.

The last part of the formula, “c,” is the speed of light (about 300,000 kilometers per second), and it is squared. That enormous multiplier on any matter converted into energy is what makes fusion such an extraordinarily powerful reaction.
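To get a feel for the size of that multiplier, here is a rough back-of-the-envelope calculation. The one-gram figure is an illustrative assumption, not a number from the article:

```python
# Energy released if one gram of matter were converted entirely to energy,
# using E = m * c^2.
c = 299_792_458                 # speed of light, meters per second
m = 0.001                       # one gram, expressed in kilograms

energy_joules = m * c**2
print(f"{energy_joules:.3e} J")            # ~8.988e13 joules
print(f"{energy_joules / 3.6e9:.0f} MWh")  # roughly 25,000 megawatt-hours
```

In a real fusion reaction only a tiny fraction of the fuel’s mass is converted, but even that fraction dwarfs what any chemical fuel can release per gram.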

These basics are well understood, and researchers are confident that the reaction can be harnessed in a useful way. But so far, doing so has been elusive.

“It’s a weird thing, because we absolutely know that the fundamental theory works. We’ve seen it demonstrated,” said Carolyn Kuranz, a plasma physicist at the University of Michigan. “But trying to do it in a lab has provided us a lot of challenges.”

For a demonstration, one only has to look up at the sun during the day (but not directly, because you’ll hurt your eyes). Even from 93 million miles away, our nearest star generates enough energy to heat up the Earth through the vacuum of space.

But the sun has an advantage that we don’t have here on Earth: It is very, very big. One of the difficulties with fusion is that atomic nuclei — the positively charged cores of atoms — normally repel each other. To overcome that repulsion and spark fusion, you have to get the atoms moving really fast in a confined space, which makes collisions more likely.

A star like the sun, which is about 333,000 times the mass of Earth, generates gravity that accelerates atoms toward its center — heating them up, confining them, and igniting fusion. The fusion reactions then provide the energy to speed up other atomic nuclei and trigger even more fusion reactions.

What makes fusion energy so tricky

Imitating the sun on Earth is a tall order. Humans have been able to trigger fusion, but in ways that are uncontrolled, like in thermonuclear weapons (sometimes called hydrogen bombs). Fusion has also been demonstrated in laboratories, but under conditions that consume far more energy than the reaction produces. The reaction generally requires creating a high-energy state of matter known as plasma, which has quirks and behaviors that scientists are still trying to understand.

To make fusion useful, scientists need to trigger it in a controlled way that yields far more energy than they put in. That energy can then be used to boil water, spin a turbine, and generate electricity. Teams around the world are studying different ways to accomplish this, but the approaches tend to fall into two broad categories.

One involves using magnets to contain the plasma. This is the approach used by ITER, the world’s largest fusion project, currently under construction in southern France.

The other category involves confining the fusion fuel and compressing it in a tiny space with the aid of lasers. This is the approach used by the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in California.

Replicating a star requires doing this research at massive scales, so fusion experiments often involve the most powerful scientific instruments ever built. ITER’s central solenoid, for example, can generate a magnetic force strong enough to hoist an aircraft carrier 6 feet out of the water.

Building hardware to withstand these extreme conditions is its own scientific and engineering challenge. Managing such massive experiments has also been a struggle. ITER started with an initial cost estimate of 6.6 billion euros, a figure that has since more than tripled. Construction began in 2007, and the project’s first experiments are set to begin in 2025.

An upside to the intricacy of fusion reactions is that it is almost impossible to cause a runaway reaction or meltdown of the sort that has devastated fission power plants like Chernobyl. If a fusion reactor is disrupted, the reaction rapidly fizzles out. In addition, the main “waste” product of hydrogen fusion is helium, an inert gas. The process can induce some reactor materials to become radioactive, but the radioactivity is much lower, and the quantity of hazardous waste far smaller, than at conventional nuclear power plants. So nuclear fusion could become one of the safest sources of electricity.

For policymakers, investing in an expensive research project that may not yield fruit for decades, if at all, is a tough sell. Scientific progress doesn’t always keep up with political timelines: A politician who greenlights a fusion project might not even live to see it become a viable energy source — so they certainly won’t be able to brag about their success by the time the next election rolls around.

In the United States, funding for fusion research has been erratic over the years and far below the levels government analysts say are needed to make the technology a reality. The US Department of Energy currently spends about $500 million on fusion per year, compared to almost $1 billion on fossil fuel energy and $2.7 billion on renewables. Investment in fusion looks even tinier next to other major programs like NASA ($23 billion) or the military ($700 billion).

So from its basic physics to government budgets, fusion energy has a lot working against it.

Fusion energy should be treated as a solution, not just an experiment

Working in fusion’s favor, however, are scientists and engineers who think it’s not just possible, but inevitable.

“I’m a true believer. I do think we can solve this problem,” said Troy Carter, a plasma physicist at the University of California Los Angeles. “It will take time, but the real issue is getting the resources brought to bear on these issues.”

Investors are also getting in the game, placing billion-dollar bets on private startup companies developing their own fusion strategies.

The journey toward fusion has yielded benefits for other fields, particularly in plasma physics, which is used extensively in manufacturing semiconductors for electronics. “Plasma processing is one of the things that make your iPhones possible,” said Kathryn McCarthy, a fusion researcher at Oak Ridge National Laboratory.

And despite the hurdles, there have been some real advances. Researchers at NIF reported last summer that they achieved their best results yet — 1.3 megajoules of output from 1.9 megajoules of input — putting them closer than ever to energy-positive fusion. “We’re on the threshold of ignition,” said Tammy Ma, a plasma physicist at NIF.
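Those numbers translate directly into the field’s key figure of merit, the gain factor Q: fusion energy out divided by energy in, with Q = 1 marking scientific breakeven. A quick sketch of the arithmetic (the helper function here is illustrative, not anything from NIF):

```python
def fusion_gain(output_mj: float, input_mj: float) -> float:
    """Gain factor Q: fusion energy released divided by energy delivered."""
    return output_mj / input_mj

# NIF's reported shot: 1.3 megajoules out from 1.9 megajoules of laser energy in.
q = fusion_gain(1.3, 1.9)
print(f"Q = {q:.2f}")  # Q = 0.68 -- close to, but still short of, breakeven at Q = 1
```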

To break out of its rut, fusion will need to be more than a science experiment. Just as space exploration is more than astronomy, fusion is much more than physics. It should be a leading tool for tackling the world’s most urgent challenges, from curbing climate change to lifting people out of poverty.

Increasing energy access is closely linked to improving health, economic growth, and social stability. Yet close to a billion people still don’t have electricity, and many more have only intermittent power, so there is an urgent humanitarian need for more energy.

At the same time, the window for limiting climate change is slamming shut, and electricity and heat production remain the dominant sources of heat-trapping gases in the atmosphere. To meet one of the goals of the Paris climate agreement — limiting warming to less than 1.5 degrees Celsius this century — the world needs to cut greenhouse gas emissions by half or more by 2030, according to the Intergovernmental Panel on Climate Change.

Many of the world’s largest greenhouse gas emitters are also aiming to zero out their contributions to climate change by the middle of the century. Making such drastic cuts in emissions means phasing out fossil fuels as quickly as possible and rapidly deploying much cleaner sources of energy.

The technologies of today may not be up to the task of resolving the tension between the need for more energy and the need to reduce carbon dioxide emissions. A problem like climate change is an argument for placing bets on all kinds of far-reaching energy solutions, but fusion may be the technology with the highest upside. And on longer time scales, closer to the 2040s and 2050s, it could be a real solution.

With more investment from governments and the private sector, scientists could speed up their pace of progress and experiment with even more approaches to fusion. In the US, where much of the research is conducted at national laboratories, this would mean convincing your representatives in Congress to get excited about fusion and ultimately to spend more money. Lawmakers can also encourage private companies to get into the game by, for example, pricing carbon dioxide emissions to create incentives for clean energy research.

The key, according to Carter, is to ensure support for fusion remains steady. “Given the level of importance here and the amount of money invested in energy, the current investment in fusion is a drop in the bucket,” Carter said. “You could imagine ramping it up orders of magnitude to get the job done.”

He added that funding for fusion doesn’t have to cannibalize resources from other clean energy technologies, like wind, solar, and nuclear power. “We need to invest across the board,” Carter said.

For now, the big fusion experiments at NIF and ITER will continue inching forward. At NIF, scientists will continue refining their process and steadily work their way up toward energy-positive fusion. ITER is scheduled to begin operation in 2025 and start hydrogen fusion experiments in 2035.

Artificial star power might not illuminate the world for decades, but the foundations have to be laid now through research, development, and deployment. It may very well become humanity’s crowning achievement, more than a century in the making.

Umair Irfan covers climate change, energy, and Covid-19 vaccine development for Vox. He is also a contributor to Science Friday. Before joining Vox, Umair was a reporter for ClimateWire at E&E News in Washington, DC, where he covered health and climate change, science, and energy policy.

Source: Fusion power is a reason to be excited about the future of clean energy – Vox


More content:


Forget Everything You Think You Know About Time

In April 2018, in the famous Faraday Theatre at the Royal Institution in London, Carlo Rovelli gave an hour-long lecture on the nature of time. A red thread spanned the stage, a metaphor for the Italian theoretical physicist’s subject. “Time is a long line,” he said. To the left lies the past—the dinosaurs, the big bang—and to the right, the future—the unknown. “We’re sort of here,” he said, hanging a carabiner on it, as a marker for the present.

Then he flipped the script. “I’m going to tell you that time is not like that,” he explained.

Rovelli went on to challenge our common-sense notion of time, starting with the idea that it ticks everywhere at a uniform rate. In fact, clocks tick slower when they are in a stronger gravitational field. When you move nearby clocks showing the same time into different fields—one in space, the other on Earth, say—and then bring them back together again, they will show different times. “It’s a fact,” Rovelli said, and it means “your head is older than your feet.”
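The “your head is older than your feet” claim can be roughed out with the weak-field time-dilation formula Δt/t ≈ gh/c². The specific numbers below (a 1.7-meter height difference, an 80-year span) are illustrative assumptions, not figures from the lecture:

```python
# Fractional rate difference between two clocks separated by height h
# in Earth's gravity, using the weak-field approximation gh / c^2.
g = 9.81                  # surface gravity, m/s^2
h = 1.7                   # head-to-feet height difference, meters
c = 299_792_458           # speed of light, m/s

fraction = g * h / c**2               # ~1.9e-16: the higher clock runs faster by this fraction
lifetime_s = 80 * 365.25 * 24 * 3600  # 80 years in seconds
head_gain_s = fraction * lifetime_s

print(f"{head_gain_s * 1e9:.0f} nanoseconds over 80 years")  # a few hundred nanoseconds
```

The effect is minuscule at human scales, but it is real and measurable: modern optical clocks can detect it over height differences of centimeters.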

Also a non-starter is any shared sense of “now.” We don’t really share the present moment with anyone. “If I look at you, I see you now—well, but not really, because light takes time to come from you to me,” he said. “So I see you sort of a little bit in the past.” As a result, “now” means nothing beyond the temporal bubble “in which we can disregard the time it takes light to go back and forth.”

Rovelli turned next to the idea that time flows in only one direction, from past to future. Unlike general relativity, quantum mechanics, and particle physics, thermodynamics embeds a direction of time. Its second law states that the total entropy, or disorder, in an isolated system never decreases over time. Yet this doesn’t mean that our conventional notion of time is on any firmer grounding, Rovelli said.

Entropy, or disorder, is subjective: “Order is in the eye of the person who looks.” In other words, the distinction between past and future, the growth of entropy over time, depends on a macroscopic description—“the way we have described the system, which in turn depends on how we interact with the system,” he said.

Getting to the last common notion of time, Rovelli became a little more cautious. His scientific argument that time is discrete—that it is not seamless, but has quanta—is less solid. “Why? Because I’m still doing it! It’s not yet in the textbook.” The equations for quantum gravity he’s written down suggest three things, he said, about what “clocks measure.” First, there’s a minimal amount of time—its units are not infinitely small.

Second, since a clock, like every object, is quantum, it can be in a superposition of time readings. “You cannot say between this event and this event is a certain amount of time, because, as always in quantum mechanics, there could be a probability distribution of time passing.”

Which means that, third, in quantum gravity, you can have “a local notion of a sequence of events, which is a minimal notion of time, and that’s the only thing that remains,” Rovelli said. Events aren’t ordered in a line “but are confused and connected” to each other without “a preferred time variable—anything can work as a variable.”

Even the notion that the present is fleeting doesn’t hold up to scrutiny. It is certainly true that the present is “horrendously short” in classical, Newtonian physics. “But that’s not the way the world is designed,” Rovelli explained. Light traces a cone, or consecutively larger circles, in four-dimensional spacetime like ripples on a pond that grow larger as they travel. No information can cross the bounds of the light cone because that would require information to travel faster than the speed of light.

“In spacetime, the past is whatever is inside our past light-cone,” Rovelli said, gesturing with his hands the shape of an upside down cone. “So it’s whatever can affect us. The future is this opposite thing,” he went on, now gesturing an upright cone. “So in between the past and the future, there isn’t just a single line—there’s a huge amount of time.” Rovelli asked an audience member to imagine that he lived in Andromeda, which is two and a half million light years away. “A million years of your life would be neither past nor future for me. So the present is not thin; it’s horrendously thick.”

Listening to Rovelli’s description, I was reminded of a phrase from his book, The Order of Time: Studying time “is like holding a snowflake in your hands: gradually, as you study it, it melts between your fingers and vanishes.”

By: Brian Gallagher

Brian Gallagher is the editor of Facts So Romantic, the Nautilus blog. Follow him on Twitter @BSGallagher.

Source: Forget Everything You Think You Know About Time


Related content:

Process instruments and controls handbook

Compendium of Mathematical Symbols

Farmers have used the sun to mark time for thousands of years, as the most ancient method of telling time

Philosophiae Naturalis Principia Mathematica

A Brief History of Atomic Clocks at NIST

Exploring Black Holes: Introduction to General Relativity

An introduction to electromagnetic theory

New atomic clock can keep time for 200 million years: Super-precise instruments vital to deep space navigation

12 attoseconds is the world record for shortest controllable time

Frequency of cesium in terms of ephemeris time

Subjective Time Versus Proper (Clock) Time

A brief history of time-consciousness: historical precursors to James and Husserl

The concept of time in philosophy: A comparative study between Theravada Buddhist and Henri Bergson’s concept of time from Thai philosophers’ perspectives

Critique of Pure Reason, Lecture notes: Philosophy 175 UC Davis

From Eternity to Here: The Quest for the Ultimate Theory of Time

Getting organized at work: 24 lessons to set goals, establish priorities, and manage your time

Bridge between quantum mechanics and general relativity still possible

Mapping Time: The Calendar and its History

Can Consciousness Be Explained By Quantum Physics?

One of the most important open questions in science is how our consciousness is established. In the 1990s, long before winning the 2020 Nobel Prize in Physics for his prediction of black holes, physicist Roger Penrose teamed up with anaesthesiologist Stuart Hameroff to propose an ambitious answer.

They claimed that the brain’s neuronal system forms an intricate network and that the consciousness this produces should obey the rules of quantum mechanics – the theory that determines how tiny particles like electrons move around. This, they argue, could explain the mysterious complexity of human consciousness.

Penrose and Hameroff were met with incredulity. Quantum mechanical laws usually apply only at very low temperatures. Quantum computers, for example, currently operate at around -272°C. At higher temperatures, classical mechanics takes over. Since our bodies work at room temperature, you would expect them to be governed by the classical laws of physics. For this reason, the quantum consciousness theory has been dismissed outright by many scientists – though others remain convinced supporters.

Instead of entering into this debate, I decided to join forces with colleagues from China, led by Professor Xian-Min Jin at Shanghai Jiaotong University, to test some of the principles underpinning the quantum theory of consciousness.

In our new paper, we’ve investigated how quantum particles could move in a complex structure like the brain – but in a lab setting. If our findings can one day be compared with activity measured in the brain, we may come one step closer to validating or dismissing Penrose and Hameroff’s controversial theory.

Brains and fractals

Our brains are composed of cells called neurons, and their combined activity is believed to generate consciousness. Each neuron contains microtubules, which transport substances to different parts of the cell. The Penrose-Hameroff theory of quantum consciousness argues that microtubules are structured in a fractal pattern which would enable quantum processes to occur.

Fractals are structures that are neither two-dimensional nor three-dimensional, but are instead some fractional value in between. In mathematics, fractals emerge as beautiful patterns that repeat themselves infinitely, generating what is seemingly impossible: a structure that has a finite area, but an infinite perimeter.

This might sound impossible to visualise, but fractals actually occur frequently in nature. If you look closely at the florets of a cauliflower or the branches of a fern, you’ll see that they’re both made up of the same basic shape repeating itself over and over again, but at smaller and smaller scales. That’s a key characteristic of fractals.

The same happens if you look inside your own body: the structure of your lungs, for instance, is fractal, as are the blood vessels in your circulatory system. Fractals also feature in the enchanting repeating artworks of MC Escher and Jackson Pollock, and they’ve been used for decades in technology, such as in the design of antennas. These are all examples of classical fractals – fractals that abide by the laws of classical physics rather than quantum physics.

It’s easy to see why fractals have been used to explain the complexity of human consciousness. Because they’re infinitely intricate, allowing complexity to emerge from simple repeated patterns, they could be the structures that support the mysterious depths of our minds.

But if this is the case, it could only be happening on the quantum level, with tiny particles moving in fractal patterns within the brain’s neurons. That’s why Penrose and Hameroff’s proposal is called a theory of “quantum consciousness”.

Quantum consciousness

We’re not yet able to measure the behaviour of quantum fractals in the brain – if they exist at all. But advanced technology means we can now measure quantum fractals in the lab. In recent research involving a scanning tunnelling microscope (STM), my colleagues at Utrecht and I carefully arranged electrons in a fractal pattern, creating a quantum fractal.

When we then measured the wave function of the electrons, which describes their quantum state, we found that they too lived at the fractal dimension dictated by the physical pattern we’d made. In this case, the pattern we used on the quantum scale was the Sierpiński triangle, which is a shape that’s somewhere between one-dimensional and two-dimensional.
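That “somewhere between one-dimensional and two-dimensional” figure can be made precise with the similarity dimension, log N / log s, for a shape built from N self-similar copies each scaled down by a factor s. The small function below is an illustrative sketch, not code from the study:

```python
import math

def similarity_dimension(copies: int, scale: int) -> float:
    """Fractal (similarity) dimension of a shape made of `copies`
    self-similar pieces, each scaled down by a factor of `scale`."""
    return math.log(copies) / math.log(scale)

# Sierpinski triangle: 3 self-similar copies, each half the size of the whole.
d = similarity_dimension(3, 2)
print(f"{d:.3f}")  # 1.585 -- between a line (dimension 1) and a surface (dimension 2)
```

For comparison, the same formula applied to an ordinary filled square (4 copies at half scale) gives exactly 2, as expected for a two-dimensional object.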

This was an exciting finding, but STM techniques cannot probe how quantum particles move – which would tell us more about how quantum processes might occur in the brain. So in our latest research, my colleagues at Shanghai Jiaotong University and I went one step further. Using state-of-the-art photonics experiments, we were able to reveal the quantum motion that takes place within fractals in unprecedented detail.

We achieved this by injecting photons (particles of light) into an artificial chip that was painstakingly engineered into a tiny Sierpiński triangle. We injected photons at the tip of the triangle and watched how they spread throughout its fractal structure in a process called quantum transport. We then repeated this experiment on two different fractal structures, both shaped as squares rather than triangles. And in each of these structures we conducted hundreds of experiments.

Our observations from these experiments reveal that quantum fractals actually behave in a different way to classical ones. Specifically, we found that the spread of light across a fractal is governed by different laws in the quantum case compared to the classical case.

This new knowledge of quantum fractals could provide the foundations for scientists to experimentally test the theory of quantum consciousness. If quantum measurements are one day taken from the human brain, they could be compared against our results to definitively decide whether consciousness is a classical or a quantum phenomenon.

Our work could also have profound implications across scientific fields. By investigating quantum transport in our artificially designed fractal structures, we may have taken the first tiny steps towards the unification of physics, mathematics and biology, which could greatly enrich our understanding of the world around us as well as the world that exists in our heads.

By: Professor, Theoretical Physics, Utrecht University

Source: Can consciousness be explained by quantum physics? My research takes us a step closer to finding out


Offerings for a New Paradigm Politics – Random Communications from an Evolutionary Edge

Activate Peak States of Resonance, Peace, and Bliss Through Vibrational Medicine

Flattened by death? A universal response captured in brilliant prose

Felicity Wilcox’s need to escape constraints is bizarrely satisfying

Post COVID Memory Loss and Causes of Brain Fog.

Global Probiotics Market: Size & Forecast with Impact Analysis of COVID-19 (2021-2025)

Divine Connection Oracle

The path to recovery and how we build a more resilient | CEO Perspective

The Difference Between Data Science & Artificial Intelligence

When did humans begin experimenting with mind-altering drugs and alcohol?


How human creativity and consciousness works

Empathy, Enchantment and Emergence in the Use of Oral Narratives: Bloomsbury Advances in Ecolinguistics Anthony Nanson Bloomsbury Academic

Experience and Signs of Spiritual Development in the Consciousness by Sri Aurobindo Studies

Understanding Consciousness | Rupert Sheldrake, George Ellis, Amie Thomasson

The Engineering of Conscious Experience

This Biotech Startup Just Raised $255 Million To Make Its AI-Designed Drug A Reality

While many AI biotech companies are on journeys to discover new drug targets, Hong Kong-based Insilico Medicine is a step ahead. The startup not only scouts for new drug sites using its AI and deep learning platforms but also develops novel molecules to target them.

In February, the company announced the discovery of a new drug target for idiopathic pulmonary fibrosis, a disease in which the air sacs of the lungs become scarred, leading to breathing difficulties. Using information about the site, it developed potential drug candidates. The startup recently raised $255 million in Series C funding, taking its total to $310 million. The round was led by private equity firm Warburg Pincus. Insilico will use the funds to start human clinical trials, initiate multiple new programs for novel and difficult targets, and further develop its AI and drug discovery capabilities.

The company faces stiff competition in the business of using AI to discover new drugs. The global AI in drug discovery market was valued at $230 million in 2021 and is projected to reach a market value of over $4 billion by 2031, according to a report from Vision Gain. The area has already minted at least one billionaire, Carl Hansen of AbCellera, and others have also gained attention from investors. Flagship Pioneering-backed Valo Health announced this month it’s going public via SPAC.

Investors said that Insilico’s AI technology and partnerships with leading pharmaceutical companies attracted them to the startup, despite the crowded field. “Insilico fits strongly with our strategy of investing in best-in-class innovators in healthcare,” said Fred Hassan of Warburg Pincus. “Artificial intelligence and machine learning are a powerful tool to revolutionize the drug discovery process and bring life-changing therapies to patients faster than ever before,” he added.

CEO and founder Alex Zhavoronkov got his start in computer science, but his interest in research into slowing aging drew him to the world of biotech. He received his master’s from Johns Hopkins and then a PhD from Moscow State University, where his research focused on using machine learning to study the physics of molecular interactions in biological systems.

The process of finding a preclinical target for idiopathic pulmonary fibrosis highlights Insilico’s approach. The company initially found 20 new target sites to treat fibrosis, then used its machine learning processes to narrow those down to a specific target implicated in idiopathic pulmonary fibrosis. Using its in-house tool, Chemistry42, it then generated novel molecules to act on the new site. The resulting preclinical drug candidate was found efficacious and safe in mouse studies, the company said in a press release.

“Now we have successfully linked both biology and chemistry and nominated the preclinical candidate for a novel target, with the intention of taking it into human clinical trials, which is an orders-of-magnitude more complex and riskier problem to solve,” Zhavoronkov added in a statement.

Treatments for this condition are a dire need. Patients with idiopathic pulmonary fibrosis develop respiratory failure as their blood doesn’t receive adequate oxygen. Most patients die within two to three years of developing the condition. If the company’s drug candidate proves out during clinical trials, it would be a major step forward both for these patients and the industry as a whole.

“To my knowledge this is the first case where AI identified a novel target and designed a preclinical candidate for a very broad disease indication,” Zhavoronkov said in a statement.

I am a New York-based health and science reporter and a graduate of Columbia’s School of Journalism with a master’s in science and health reporting. I write on infectious diseases, global health, gene-editing tools, and the intersection of public health and global warming. Previously, I worked as a health reporter in Mumbai, India, with the Hindustan Times, a daily newspaper, where I extensively reported on drug-resistant infections such as tuberculosis, leprosy, and HIV. I also reported stories on medical malpractice, the latest medical innovations, and public health policies.

I have a master’s in biochemistry and a bachelor’s degree in zoology. My experience working in molecular and cell biology laboratories helped me see science through a researcher’s eyes. In 2018, I won the EurekAlert! Fellowships for International Science Reporters. My Twitter account is @aayushipratap.

Source: This Biotech Startup Just Raised $255 Million To Make Its AI-Designed Drug A Reality



CEO Alex Zhavoronkov founded Insilico Medicine in 2014 as an alternative to animal testing for research and development programs in the pharmaceutical industry. By using artificial intelligence and deep-learning techniques, Insilico analyzes how a compound will affect cells, which drugs can be used to treat them, and possible side effects. Through its Pharma.AI division, the company provides machine learning services to pharmaceutical, biotechnology, and skin care companies. Insilico is known for hiring mainly through hackathons, such as its own MolHack online hackathon.

The company has multiple collaborations on applications of next-generation artificial intelligence technologies, such as generative adversarial networks (GANs) and reinforcement learning, to the generation of novel molecular structures with desired properties. In conjunction with Alan Aspuru-Guzik’s group at Harvard, they have published a journal article about an improved GAN architecture for molecular generation which combines GANs, reinforcement learning, and a differentiable neural computer.

In 2017, NVIDIA named Insilico one of its Top 5 AI companies for potential social impact. Insilico has R&D resources in Belgium, Russia, and the UK, and recruits talent through hackathons and other local competitions. By 2017, Insilico had raised $8.26 million in funding from investors including Deep Knowledge Ventures, JHU A-Level Capital, Jim Mellon, and Juvenescence. In 2019 it raised another $37 million from Fidelity Investments, Eight Roads Ventures, Qiming Venture Partners, WuXi AppTec, Baidu, Sinovation, Lilly Asia Ventures, Pavilion Capital, BOLD Capital, and other investors.

Physicists Debate Hawking’s Idea That the Universe Had No Beginning


In 1981, many of the world’s leading cosmologists gathered at the Pontifical Academy of Sciences, a vestige of the coupled lineages of science and theology located in an elegant villa in the gardens of the Vatican. Stephen Hawking chose the august setting to present what he would later regard as his most important idea: a proposal about how the universe could have arisen from nothing.

Before Hawking’s talk, all cosmological origin stories, scientific or theological, had invited the rejoinder, “What happened before that?” The Big Bang theory, for instance — pioneered 50 years before Hawking’s lecture by the Belgian physicist and Catholic priest Georges Lemaître, who later served as president of the Vatican’s academy of sciences — rewinds the expansion of the universe back to a hot, dense bundle of energy. But where did the initial energy come from?

The Big Bang theory had other problems. Physicists understood that an expanding bundle of energy would grow into a crumpled mess rather than the huge, smooth cosmos that modern astronomers observe. In 1980, the year before Hawking’s talk, the cosmologist Alan Guth realized that the Big Bang’s problems could be fixed with an add-on: an initial, exponential growth spurt known as cosmic inflation, which would have rendered the universe huge, smooth and flat before gravity had a chance to wreck it. Inflation quickly became the leading theory of our cosmic origins. Yet the issue of initial conditions remained: What was the source of the minuscule patch that allegedly ballooned into our cosmos, and of the potential energy that inflated it?

Hawking, in his brilliance, saw a way to end the interminable groping backward in time: He proposed that there’s no end, or beginning, at all. According to the record of the Vatican conference, the Cambridge physicist, then 39 and still able to speak with his own voice, told the crowd, “There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?”

The “no-boundary proposal,” which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock — the so-called “wave function of the universe” that encompasses the entire past, present and future at once — making moot all contemplation of seeds of creation, a creator, or any transition from a time before.

“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death. “It would be like asking what lies south of the South Pole.”

Hartle and Hawking’s proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, “We didn’t have birds in the very early universe; we have birds later on. … We didn’t have time in the early universe, but we have time later on.”

The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking’s. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing, analyzed the theories’ various predictions and ways to test them, and interpreted their philosophical meaning. The no-boundary wave function, according to Hartle, “was in some ways the simplest possible proposal for that.”

But two years ago, a paper by Turok, Job Feldbrugge of the Perimeter Institute, and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question. The proposal is, of course, only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said — “deeply ambiguous.”

In their 2017 paper, published in Physical Review Letters, Turok and his co-authors approached Hartle and Hawking’s no-boundary proposal with new mathematical techniques that, in their view, make its predictions much more concrete than before. “We discovered that it just failed miserably,” Turok said. “It was just not possible quantum mechanically for a universe to start in the way they imagined.” The trio checked their math and queried their underlying assumptions before going public, but “unfortunately,” Turok said, “it just seemed to be inescapable that the Hartle-Hawking proposal was a disaster.”


The paper ignited a controversy. Other experts mounted a vigorous defense of the no-boundary idea and a rebuttal of Turok and colleagues’ reasoning. “We disagree with his technical arguments,” said Thomas Hertog, a physicist at the Catholic University of Leuven in Belgium who closely collaborated with Hawking for the last 20 years of the latter’s life. “But more fundamentally, we disagree also with his definition, his framework, his choice of principles. And that’s the more interesting discussion.”

After two years of sparring, the groups have traced their technical disagreement to differing beliefs about how nature works. The heated — yet friendly — debate has helped firm up the idea that most tickled Hawking’s fancy. Even critics of his and Hartle’s specific formula, including Turok and Lehners, are crafting competing quantum-cosmological models that try to avoid the alleged pitfalls of the original while maintaining its boundless allure.

Garden of Cosmic Delights

Hartle and Hawking saw a lot of each other from the 1970s on, typically when they met in Cambridge for long periods of collaboration. The duo’s theoretical investigations of black holes and the mysterious singularities at their centers had turned them on to the question of our cosmic origin.

In 1915, Albert Einstein discovered that concentrations of matter or energy warp the fabric of space-time, causing gravity. In the 1960s, Hawking and the Oxford University physicist Roger Penrose proved that when space-time bends steeply enough, such as inside a black hole or perhaps during the Big Bang, it inevitably collapses, curving infinitely steeply toward a singularity, where Einstein’s equations break down and a new, quantum theory of gravity is needed. The Penrose-Hawking “singularity theorems” meant there was no way for space-time to begin smoothly, undramatically at a point.

[Graphic: “In the ‘Beginning’”, by 5W Infographics for Quanta Magazine]

Hawking and Hartle were thus led to ponder the possibility that the universe began as pure space, rather than dynamical space-time. And this led them to the shuttlecock geometry. They defined the no-boundary wave function describing such a universe using an approach invented by Hawking’s hero, the physicist Richard Feynman. In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this “path integral” gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision.
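Feynman’s prescription can be written schematically (this notation is a sketch of the standard formulation, not taken from the article): the amplitude is a weighted sum over every possible path, with each path contributing a phase set by its action S.

```latex
% Feynman path integral (schematic): sum over all paths x(t).
% Near-classical, "straightforward" paths dominate because their
% phases reinforce rather than cancel in the sum.
\psi \;\sim\; \int \mathcal{D}[x(t)]\; e^{\,i S[x(t)]/\hbar}
```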

Likewise, Hartle and Hawking expressed the wave function of the universe — which describes its likely states — as the sum of all possible ways that it might have smoothly expanded from a point. The hope was that the sum of all possible “expansion histories,” smooth-bottomed universes of all different shapes and sizes, would yield a wave function that gives a high probability to a huge, smooth, flat universe like ours. If the weighted sum of all possible expansion histories yields some other kind of universe as the likeliest outcome, the no-boundary proposal fails.

The problem is that the path integral over all possible expansion histories is far too complicated to calculate exactly. Countless different shapes and sizes of universes are possible, and each can be a messy affair. “Murray Gell-Mann used to ask me,” Hartle said, referring to the late Nobel Prize-winning physicist, “if you know the wave function of the universe, why aren’t you rich?” Of course, to actually solve for the wave function using Feynman’s method, Hartle and Hawking had to drastically simplify the situation, ignoring even the specific particles that populate our world (which meant their formula was nowhere close to being able to predict the stock market). They considered the path integral over all possible toy universes in “minisuperspace,” defined as the set of all universes with a single energy field coursing through them: the energy that powered cosmic inflation. (In Hartle and Hawking’s shuttlecock picture, that initial period of ballooning corresponds to the rapid increase in diameter near the bottom of the cork.)

Even the minisuperspace calculation is hard to solve exactly, but physicists know there are two possible expansion histories that potentially dominate the calculation. These rival universe shapes anchor the two sides of the current debate.

The rival solutions are the two “classical” expansion histories that a universe can have. Following an initial spurt of cosmic inflation from size zero, these universes steadily expand according to Einstein’s theory of gravity and space-time. Weirder expansion histories, like football-shaped universes or caterpillar-like ones, mostly cancel out in the quantum calculation.

One of the two classical solutions resembles our universe. On large scales, it’s smooth and randomly dappled with energy, due to quantum fluctuations during inflation. As in the real universe, density differences between regions form a bell curve around zero. If this possible solution does indeed dominate the wave function for minisuperspace, it becomes plausible to imagine that a far more detailed and exact version of the no-boundary wave function might serve as a viable cosmological model of the real universe.

The other potentially dominant universe shape is nothing like reality. As it widens, the energy infusing it varies more and more extremely, creating enormous density differences from one place to the next that gravity steadily worsens. Density variations form an inverted bell curve, where differences between regions approach not zero, but infinity. If this is the dominant term in the no-boundary wave function for minisuperspace, then the Hartle-Hawking proposal would seem to be wrong.
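The contrast between the two candidate distributions of density perturbations ζ can be sketched as follows (a schematic comparison; the width parameter σ is a hypothetical placeholder, not from the article):

```latex
% Bell curve: small perturbations dominate, as in the real universe
P(\zeta) \;\propto\; e^{-\zeta^2/2\sigma^2}
% Inverted bell curve: large perturbations dominate, and the total
% probability diverges, so this distribution cannot be normalized
P(\zeta) \;\propto\; e^{+\zeta^2/2\sigma^2}
```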

The two dominant expansion histories present a choice in how the path integral should be done. If the dominant histories are two locations on a map, megacities in the realm of all possible quantum mechanical universes, the question is which path we should take through the terrain. Which dominant expansion history, and there can only be one, should our “contour of integration” pick up? Researchers have forked down different paths.

In their 2017 paper, Turok, Feldbrugge and Lehners took a path through the garden of possible expansion histories that led to the second dominant solution. In their view, the only sensible contour is one that scans through real values (as opposed to imaginary values, which involve the square roots of negative numbers) for a variable called “lapse.” Lapse is essentially the height of each possible shuttlecock universe — the distance it takes to reach a certain diameter. Lacking a causal element, lapse is not quite our usual notion of time. Yet Turok and colleagues argue partly on the grounds of causality that only real values of lapse make physical sense. And summing over universes with real values of lapse leads to the wildly fluctuating, physically nonsensical solution.
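In minisuperspace, the sum over histories reduces to an ordinary integral over the lapse N, and the dispute comes down to which contour C that integral should follow (a schematic of the Lorentzian form Turok’s group works with, not a formula quoted from the paper):

```latex
% Minisuperspace path integral: q(t) tracks the universe's size, N is
% the lapse. Turok et al. insist the contour C runs over real N only;
% no-boundary defenders allow it to pass through complex values.
\Psi \;\sim\; \int_{\mathcal{C}} dN \int \mathcal{D}[q]\; e^{\,i S[q;\,N]/\hbar}
```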

“People place huge faith in Stephen’s intuition,” Turok said by phone. “For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”

Imaginary Universes

Jonathan Halliwell, a physicist at Imperial College London, has studied the no-boundary proposal since he was Hawking’s student in the 1980s. He and Hartle analyzed the issue of the contour of integration in 1990. In their view, as well as Hertog’s, and apparently Hawking’s, the contour is not fundamental, but rather a mathematical tool that can be placed to greatest advantage. It’s similar to how the trajectory of a planet around the sun can be expressed mathematically as a series of angles, as a series of times, or in terms of any of several other convenient parameters. “You can do that parameterization in many different ways, but none of them are any more physical than another one,” Halliwell said.

He and his colleagues argue that, in the minisuperspace case, only contours that pick up the good expansion history make sense. Quantum mechanics requires probabilities to add to 1, or be “normalizable,” but the wildly fluctuating universe that Turok’s team landed on is not. That solution is nonsensical, plagued by infinities and disallowed by quantum laws — obvious signs, according to no-boundary’s defenders, to walk the other way.

It’s true that contours passing through the good solution sum up possible universes with imaginary values for their lapse variables. But apart from Turok and company, few people think that’s a problem. Imaginary numbers pervade quantum mechanics. To team Hartle-Hawking, the critics are invoking a false notion of causality in demanding that lapse be real. “That’s a principle which is not written in the stars, and which we profoundly disagree with,” Hertog said.

According to Hertog, Hawking seldom mentioned the path integral formulation of the no-boundary wave function in his later years, partly because of the ambiguity around the choice of contour. He regarded the normalizable expansion history, which the path integral had merely helped uncover, as the solution to a more fundamental equation about the universe posed in the 1960s by the physicists John Wheeler and Bryce DeWitt. Wheeler and DeWitt — after mulling over the issue during a layover at Raleigh-Durham International — argued that the wave function of the universe, whatever it is, cannot depend on time, since there is no external clock by which to measure it. And thus the amount of energy in the universe, when you add up the positive and negative contributions of matter and gravity, must stay at zero forever. The no-boundary wave function satisfies the Wheeler-DeWitt equation for minisuperspace.  
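The timelessness Wheeler and DeWitt argued for is conventionally expressed as a constraint equation (schematic form):

```latex
% Wheeler-DeWitt equation: the total Hamiltonian annihilates the wave
% function of the universe, so \Psi carries no external time dependence
% and the universe's net energy stays at zero
\hat{H}\,\Psi = 0
```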

In the final years of his life, to better understand the wave function more generally, Hawking and his collaborators started applying holography — a blockbuster new approach that treats space-time as a hologram. Hawking sought a holographic description of a shuttlecock-shaped universe, in which the geometry of the entire past would project off of the present.

That effort is continuing in Hawking’s absence. But Turok sees this shift in emphasis as changing the rules. In backing away from the path integral formulation, he says, proponents of the no-boundary idea have made it ill-defined. What they’re studying is no longer Hartle-Hawking, in his opinion — though Hartle himself disagrees.

For the past year, Turok and his Perimeter Institute colleagues Latham Boyle and Kieran Finn have been developing a new cosmological model that has much in common with the no-boundary proposal. But instead of one shuttlecock, it envisions two, arranged cork to cork in a sort of hourglass figure with time flowing in both directions. While the model is not yet developed enough to make predictions, its charm lies in the way its lobes realize CPT symmetry, a seemingly fundamental mirror in nature that simultaneously reflects matter and antimatter, left and right, and forward and backward in time. One disadvantage is that the universe’s mirror-image lobes meet at a singularity, a pinch in space-time that requires the unknown quantum theory of gravity to understand. Boyle, Finn and Turok take a stab at the singularity, but such an attempt is inherently speculative.

There has also been a revival of interest in the “tunneling proposal,” an alternative way that the universe might have arisen from nothing, conceived in the ’80s independently by the Russian-American cosmologists Alexander Vilenkin and Andrei Linde. The proposal, which differs from the no-boundary wave function primarily by way of a minus sign, casts the birth of the universe as a quantum mechanical “tunneling” event, similar to when a particle pops up beyond a barrier in a quantum mechanical experiment.
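For the simplest case of a universe dominated by vacuum energy Λ, the two proposals’ weightings differ only in the sign of the exponent (a standard schematic result, not quoted from the article):

```latex
% No-boundary weighting favors small \Lambda (nearly empty universes);
% the tunneling weighting favors large \Lambda, i.e. universes with
% enough vacuum energy to inflate
\Psi_{\text{no-boundary}} \sim e^{+3\pi/(G\hbar\Lambda)}, \qquad
\Psi_{\text{tunneling}} \sim e^{-3\pi/(G\hbar\Lambda)}
```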

Questions abound about how the various proposals intersect with anthropic reasoning and the infamous multiverse idea. The no-boundary wave function, for instance, favors empty universes, whereas significant matter and energy are needed to power hugeness and complexity. Hawking argued that the vast spread of possible universes permitted by the wave function must all be realized in some larger multiverse, within which only complex universes like ours will have inhabitants capable of making observations. (The recent debate concerns whether these complex, habitable universes will be smooth or wildly fluctuating.) An advantage of the tunneling proposal is that it favors matter- and energy-filled universes like ours without resorting to anthropic reasoning — though universes that tunnel into existence may have other problems.

No matter how things go, perhaps we’ll be left with some essence of the picture Hawking first painted at the Pontifical Academy of Sciences 38 years ago. Or perhaps, instead of a South Pole-like non-beginning, the universe emerged from a singularity after all, demanding a different kind of wave function altogether. Either way, the pursuit will continue. “If we are talking about a quantum mechanical theory, what else is there to find other than the wave function?” asked Juan Maldacena, an eminent theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has mostly stayed out of the recent fray. The question of the wave function of the universe “is the right kind of question to ask,” said Maldacena, who, incidentally, is a member of the Pontifical Academy. “Whether we are finding the right wave function, or how we should think about the wave function — it’s less clear.”

By Natalie Wolchover


