Scientists Probe Huge Crater On ‘Psyche,’ The Massive Metal Asteroid Worth More Than Our Global Economy

Does a massive crater on a weird-looking asteroid give us a way to deflect incoming asteroids?

A NASA spacecraft will depart this August on a mission to explore a metal-rich asteroid called 16 Psyche—speculated to be a highly valuable object—in an effort to determine exactly what it’s made of.

It will be NASA’s first visit to a metallic asteroid, as opposed to a rocky or icy one, though 16 Psyche has already been studied from afar by the Hubble Space Telescope.

16 Psyche is strange. Shaped like a potato and about 140 miles in diameter, it’s more reflective than anything else in the asteroid belt between Mars and Jupiter. So bright, in fact, that it’s presumed to be composed largely of metal, specifically nickel and iron.

That’s prompted claims that it could be worth about $10,000 quadrillion (the global economy is worth about $84.5 trillion) and that it could be a high priority for asteroid mining in the future.

However, the theory that 16 Psyche could be the remains of a planet that never made it—the leftover core of a protoplanet—makes it priceless to astronomers trying to figure out how the Solar System formed.

Its exact composition will be for the NASA spacecraft to determine from orbit, but a large crater on its surface is already giving scientists clues—and could provide critical intelligence for future attempts to deflect a rogue object.

Asteroid deflection is something NASA is very interested in perfecting well in advance of a large asteroid being spotted on a collision course with Earth. On October 22, 2022, NASA’s Double Asteroid Redirection Test (DART) will smash a 500kg spacecraft into Dimorphos (also called “Didymoon”), the moonlet of the binary asteroid 65803 Didymos.

The idea is that the “kinetic deflection” imparted to Dimorphos will ever so slightly change the trajectory of both objects. However, what happened on 16 Psyche was something altogether more violent.

The theory goes that something smashed into 16 Psyche a few billion years ago, creating a massive crater about four miles deep and 33 miles wide. Running for a few days on up to 3,000 cores of a Los Alamos supercomputer, a new visualization by Los Alamos National Laboratory simulates what happened in the 400 seconds after the impact.

“This is a weirdly shaped crater, shallow and wide,” said Wendy K. Caldwell, applied mathematician/planetary scientist at Los Alamos National Laboratory and the lead author for Los Alamos simulations of Psyche. Caldwell presented the team’s research results at the 2021 AGU Fall Meeting.

The simulations shed some light on what, exactly, 16 Psyche might be made of—rubble. Radar observations indicate the asteroid is metallic, but density measurements indicate it is porous. “In our simulations, hexagonal packing in a rubble pile gave almost perfect matches to the ratio of the depth to the diameter on Psyche,” said Caldwell. “That result was really exciting, because it’s shape, not just size, that you have to understand to determine the feasibility of potential compositions.”

The simulation shows an impactor striking Psyche modeled as a hexagonally packed rubble pile. Square packing of the rubble-pile material failed to reproduce the crater shape actually observed on Psyche, but hexagonal packing was a very close match. The rubble that makes up 16 Psyche is expected to be of varying sizes and shapes.
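As a rough back-of-the-envelope illustration (a minimal sketch, not the Los Alamos model, and assuming idealized equal-sized spheres plus a solid nickel-iron grain density of about 7.9 g/cm³), the two packing geometries mentioned above imply very different porosities and bulk densities for a metal rubble pile:

    import math

    # Textbook packing fractions for equal spheres:
    #   simple cubic ("square") packing:  pi / 6            ~= 0.52
    #   hexagonal close packing:          pi / (3*sqrt(2))  ~= 0.74
    packings = {
        "square (simple cubic)": math.pi / 6,
        "hexagonal close-packed": math.pi / (3 * math.sqrt(2)),
    }

    GRAIN_DENSITY = 7.9  # g/cm^3, assumed solid nickel-iron grain density

    for name, fraction in packings.items():
        porosity = 1.0 - fraction
        bulk_density = fraction * GRAIN_DENSITY
        print(f"{name}: packing fraction {fraction:.2f}, "
              f"porosity {porosity:.0%}, bulk density ~{bulk_density:.1f} g/cm^3")

Real rubble, with pieces of varying sizes and shapes, would fall somewhere between these idealized extremes; the point of the simulations, as Caldwell notes, is that the crater’s shape constrains which packings are actually plausible.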

Operating under NASA’s Discovery program, the Psyche spacecraft will lift off atop a SpaceX Falcon Heavy rocket in August this year. The tennis-court-sized spacecraft will carry seven scientific instruments and two solar arrays to provide power.

The Psyche spacecraft will then conduct a gravity-assist flyby of Mars in May 2023 before finally arriving at 16 Psyche in January 2026. NASA’s spacecraft will go into orbit around 16 Psyche and attempt to determine whether or not it is a planetary core, map its surface and estimate its age.

“The Psyche mission will help us understand more about the early days of the solar system and how the planets formed,” said Caldwell.

Wishing you clear skies and wide eyes.


Source: Scientists Probe Huge Crater On ‘Psyche,’ The Massive Metal Asteroid Worth More Than Our Global Economy


More content:

Are Supermoons Dangerous? Why This Week’s ‘Super Pink Moon’ Might Cause You Problems

Super Pink Moon in London

At dusk on Monday, April 26, 2021, the year’s second of four “supermoons” will rise right across the world. It will be ever-so-slightly bigger than most full Moons because of its closeness to Earth in its egg-shaped orbit, but not so much that you’ll notice.

It will still look spectacular as it appears on your eastern horizon at dusk—as all full Moons do—but while the effect on you will be slight, the effects of Monday’s “supermoon” on the natural world will be dramatic.

Routinely derided by astronomers they may be, but geographers know only too well that “supermoons” are actual physical phenomena with consequences for the natural world.

The most recent published research reveals that, according to a 25-year study, “supermoons” cause bigger tidal ranges, higher water levels and more severe erosion.

A “supermoon” is a full Moon that appears somewhat larger than an average full Moon. Technically, astronomers call them perigee full Moons. The Moon’s orbital path around Earth is a slight ellipse, so each month there’s a near point (perigee) and a far point (apogee). At perigee the Moon appears a little larger than its average apparent size (a “supermoon”), and at apogee, a little smaller (a “micromoon”).

The second of four “supermoons” or “perigee full Moons” of 2021, April’s full Moon will appear about 6% larger than an average full Moon.

The daily rise and fall of sea levels are called tides. They are caused by the Moon’s gravitational pull on the oceans as it orbits Earth, and also by the Sun’s. The two pulls combine at New Moon and at full Moon.

The main physical effect of a supermoon is a king tide, which increases the risk of coastal inundation. A king tide is an unusually high tide that results from a stronger lunar gravitational force than normal.

It’s also known as a perigean spring tide and is an entirely predictable astronomical tide.

Although the effect is magnified the closer the Moon is to the Earth, a supermoon can occur at both New Moon and full Moon. In practice, a supermoon at New Moon is barely mentioned in the media, though its physical effects are just as strong.

That’s because the Moon aligns with the Sun and the Earth roughly every 14 days. At full Moon the Earth is between the Sun and the Moon, while at New Moon the Moon is between the Sun and the Earth. At both times of the month the resulting alignment enhances the tidal force. When the Moon is closer to the Earth than normal during a New Moon or a full Moon—so, during a supermoon—that tidal force is increased further.

The distance to the Moon from Earth’s center changes from 406,000 km at apogee to about 357,000 km at perigee.
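Using just those two distances, a couple of lines of arithmetic show how much bigger the Moon looks at perigee (apparent size scales as 1/distance) and how much stronger its tidal pull is (tidal force scales as 1/distance³). A minimal sketch:

    PERIGEE_KM = 357_000
    APOGEE_KM = 406_000

    size_ratio = APOGEE_KM / PERIGEE_KM           # apparent diameter ~ 1/distance
    tidal_ratio = (APOGEE_KM / PERIGEE_KM) ** 3   # tidal force ~ 1/distance^3

    print(f"Apparent diameter, perigee vs apogee: {size_ratio:.2f}x (~{size_ratio - 1:.0%} larger)")
    print(f"Tidal force, perigee vs apogee:       {tidal_ratio:.2f}x (~{tidal_ratio - 1:.0%} stronger)")

That roughly 14% difference in apparent size is the perigee-versus-apogee comparison; the ~6% figure quoted above compares this supermoon with an average full Moon, which sits between the two extremes.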

The research showed a long-term correlation between erosion across the beach and the Moon’s cycles, and suggested that a supermoon increases the risk of more severe beach erosion near the shoreline. These supermoon-driven king tides are more likely to cause coastal disasters when they occur simultaneously with storm surges and high waves.

So as you gaze at the beautiful “supermoon” appearing in the east on Monday evening, bear in mind that its greater gravitational force is what really makes it an important event for our planet. As rising sea levels kick in, supermoons and the king tides they bring could mean even worse flooding for coastal communities.

Wishing you clear skies and wide eyes. 

I’m an experienced science, technology and travel journalist and stargazer writing about exploring the night sky, solar and lunar eclipses, moon-gazing, astro-travel, astronomy and space exploration. I’m the editor of WhenIsTheNextEclipse.com and the author of “A Stargazing Program for Beginners: A Pocket Field Guide” (Springer, 2015), as well as many eclipse-chasing guides.

Source: Are Supermoons Dangerous? Why This Week’s ‘Super Pink Moon’ Might Cause You Problems


From EarthSky:

Some astronomers complain about the name supermoon. They like to call supermoons hype. But supermoons aren’t hype. They’re special. Many people now know and use the word supermoon. We notice even some diehards are starting to use it now. Such is the power of folklore.

Before we called them supermoons, we in astronomy called these moons perigean full moons, or perigean new moons. Perigee just means near Earth.

The moon is full, or opposite Earth from the sun, once each month. It’s new, or more or less between the Earth and sun, once each month. And, every month, as it orbits Earth, the moon comes closest to Earth, or to perigee. The moon naturally swings farthest away once each month, too; that point is called apogee.

No doubt about it. Supermoon is a catchier term than perigean new moon or perigean full moon. That’s probably why the term supermoon has entered the popular culture. For example, Supermoon is the title track of Sophie Hunger’s 2015 album. It’s a nice song! Check it out.

The hype aspect of supermoons probably stems from an erroneous impression people had when the word supermoon came into popular usage … maybe a few decades ago? Some people mistakenly believed a full supermoon would look much, much bigger to the eye. It doesn’t. Full supermoons don’t look bigger to the eye than ordinary full moons, although experienced observers say they can detect a difference.

But supermoons do look brighter than ordinary full moons! The angular diameter of a supermoon is about 7% greater than that of the average-size full moon and 14% greater than the angular diameter of a micro-moon (year’s farthest and smallest full moon). Yet, a supermoon exceeds the area (disk size) and brightness of an average-size full moon by some 15% – and the micro-moon by some 30%. For a visual reference, the size difference between a supermoon and micro-moon is proportionally similar to that of a U.S. quarter versus a U.S. nickel.
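Those figures hang together because disk area (and, to a first approximation, brightness) scales with the square of the angular diameter. A quick check of the numbers quoted above:

    vs_average_moon = 1.07   # supermoon angular diameter vs an average full moon (+7%)
    vs_micro_moon = 1.14     # supermoon angular diameter vs a micro-moon (+14%)

    print(f"Disk area vs average full moon: +{vs_average_moon**2 - 1:.0%}")
    print(f"Disk area vs micro-moon:        +{vs_micro_moon**2 - 1:.0%}")

The first line comes out at about +14% and the second at about +30%, matching the roughly 15% and 30% figures quoted above to within rounding; brightness tracks disk area only approximately, since the Moon’s surface brightness also varies slightly.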

So go outside on the night of a full supermoon, and – if you’re a regular observer of nature – you’ll surely notice the supermoon is exceptionally bright!



Newly Discovered Ghostly Circles In The Sky Can’t Be Explained By Current Theories, And Astronomers Are Excited

In September 2019, my colleague Anna Kapinska gave a presentation showing interesting objects she’d found while browsing our new radio astronomical data. She had started noticing very weird shapes she couldn’t fit easily to any known type of object.

Among them, labelled by Anna as WTF?, was a picture of a ghostly circle of radio emission, hanging out in space like a cosmic smoke-ring. None of us had ever seen anything like it before, and we had no idea what it was. A few days later, our colleague Emil Lenc found a second one, even more spooky than Anna’s.

Anna and Emil had been examining the new images from our pilot observations for the Evolutionary Map of the Universe (EMU) project, made with CSIRO’s revolutionary new Australian Square Kilometre Array Pathfinder (ASKAP) telescope.

EMU plans to boldly probe parts of the Universe where no telescope has gone before. It can do so because ASKAP can survey large swathes of the sky very quickly, probing to a depth previously only reached in tiny areas of sky, and being especially sensitive to faint, diffuse objects like these.


I predicted a couple of years ago this exploration of the unknown would probably make unexpected discoveries, which I called WTFs. But none of us expected to discover something so unexpected, so quickly. Because of the enormous data volumes, I expected the discoveries would be made using machine learning. But these discoveries were made with good old-fashioned eyeballing.


Read more: Expect the unexpected from the big-data boom in radio astronomy


Hunting ORCs

Our team searched the rest of the data by eye, and we found a few more of the mysterious round blobs. We dubbed them ORCs, which stands for “odd radio circles”. But the big question, of course, is: “what are they?”

At first we suspected an imaging artefact, perhaps generated by a software error. But we soon confirmed they are real, using other radio telescopes. We still have no idea how big or far away they are. They could be objects in our galaxy, perhaps a few light-years across, or they could be far away in the Universe and maybe millions of light years across.

When we look in images taken with optical telescopes at the position of ORCs, we see nothing. The rings of radio emission are probably caused by clouds of electrons, but why don’t we see anything in visible wavelengths of light? We don’t know, but finding a puzzle like this is the dream of every astronomer.


Read more: The Australian Square Kilometre Array Pathfinder finally hits the big-data highway


We know what they’re not

We have ruled out several possibilities for what ORCs might be.

Could they be supernova remnants, the clouds of debris left behind when a star in our galaxy explodes? No. They are far from most of the stars in the Milky Way and there are too many of them.

Could they be the rings of radio emission sometimes seen in galaxies undergoing intense bursts of star formation? Again, no. We don’t see any underlying galaxy that would be hosting the star formation.

Could they be the giant lobes of radio emission we see in radio galaxies, caused by jets of electrons squirting out from the environs of a supermassive black hole? Not likely, because the ORCs are very distinctly circular, unlike the tangled clouds we see in radio galaxies.

Could they be Einstein rings, in which radio waves from a distant galaxy are being bent into a circle by the gravitational field of a cluster of galaxies? Still no. ORCs are too symmetrical, and we don’t see a cluster at their centre.

A genuine mystery

In our paper about ORCs, which is forthcoming in the Publications of the Astronomical Society of Australia, we run through all the possibilities and conclude these enigmatic blobs don’t look like anything we already know about.

So we need to explore things that might exist but haven’t yet been observed, such as a vast shockwave from some explosion in a distant galaxy. Such explosions may have something to do with fast radio bursts, or the neutron star and black hole collisions that generate gravitational waves.


Read more: How we closed in on the location of a fast radio burst in a galaxy far, far away


Or perhaps they are something else entirely. Two Russian scientists have even suggested ORCs might be the “throats” of wormholes in spacetime.

From the handful we’ve found so far, we estimate there are about 1,000 ORCs in the sky. My colleague Bärbel Koribalski notes the search is now on, with telescopes around the world, to find more ORCs and understand their cause.

It’s a tricky job, because ORCs are very faint and difficult to find. Our team is brainstorming all these ideas and more, hoping for the eureka moment when one of us, or perhaps someone else, suddenly has the flash of inspiration that solves the puzzle.

It’s an exciting time for us. Most astronomical research is aimed at refining our knowledge of the Universe, or testing theories. Very rarely do we get the challenge of stumbling across a new type of object which nobody has seen before, and trying to figure out what it is.

Is it a completely new phenomenon, or something we already know about but viewed in a weird way? And if it really is completely new, how does that change our understanding of the Universe? Watch this space!

By: Ray Norris, Professor, School of Science, Western Sydney University

NASA Goddard

A new study using observations from NASA’s Fermi Gamma-ray Space Telescope reveals the first clear-cut evidence that the expanding debris of exploded stars produces some of the fastest-moving matter in the universe. This discovery is a major step toward meeting one of Fermi’s primary mission goals. Cosmic rays are subatomic particles that move through space at nearly the speed of light. About 90 percent of them are protons, with the remainder consisting of electrons and atomic nuclei.

In their journey across the galaxy, the electrically charged particles become deflected by magnetic fields. This scrambles their paths and makes it impossible to trace their origins directly. Through a variety of mechanisms, these speedy particles can lead to the emission of gamma rays, the most powerful form of light and a signal that travels to us directly from its sources. Two supernova remnants, known as IC 443 and W44, are expanding into cold, dense clouds of interstellar gas.

This material emits gamma rays when struck by high-speed particles escaping the remnants. Scientists have been unable to ascertain which particle is responsible for this emission because cosmic-ray protons and electrons give rise to gamma rays with similar energies. Now, after analyzing four years of data, Fermi scientists see a gamma-ray feature from both remnants that, like a fingerprint, proves the culprits are protons. When cosmic-ray protons smash into normal protons, they produce a short-lived particle called a neutral pion.

The pion quickly decays into a pair of gamma rays. This emission falls within a specific band of energies associated with the rest mass of the neutral pion, and it declines steeply toward lower energies. Detecting this low-end cutoff is clear proof that the gamma rays arise from decaying pions formed by protons accelerated within the supernova remnants. This video is public domain and can be downloaded at: http://svs.gsfc.nasa.gov/goto?11209
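The “fingerprint” described here comes from simple decay kinematics (standard particle-physics numbers, not figures from the video): a neutral pion at rest decays into two photons that share its rest-mass energy equally, so the gamma-ray spectrum carries a characteristic scale of about 67.5 MeV and falls off sharply below it. Schematically:

    \pi^{0} \;\longrightarrow\; \gamma + \gamma,
    \qquad
    E_{\gamma}^{\,\text{rest frame}} \;=\; \frac{m_{\pi^{0}} c^{2}}{2}
    \;\approx\; \frac{135\ \text{MeV}}{2} \;\approx\; 67.5\ \text{MeV}

Cosmic-ray electrons produce gamma rays by different mechanisms (such as bremsstrahlung and inverse-Compton scattering) that lack this low-energy cutoff, which is why detecting it points to protons.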


NASA Finally Contacts Voyager 2 After Unprecedented Seven-Month Silence

In the history of spaceflight, only five spacecraft ever launched by humanity possess enough energy to leave the gravitational pull of our Solar System. While thousands upon thousands of objects have been launched into space, overcoming the gravitational pull of planet Earth, the Sun is more than 300,000 times as massive as our home planet, and is far more difficult to escape from. A combination of fast launch speeds and gravitational assists from other planets were required to leave our Solar System, with only Pioneer 10 and 11, Voyager 1 and 2, and New Horizons attaining “escape velocity” from our Sun.

While Pioneer 10 and 11 are now inactive, New Horizons and both Voyager spacecraft remain operational, powered by radioisotope thermoelectric generators. Voyager 1 has overtaken all other spacecraft and is now the most distant, at 22 billion km away, pulling ahead of the slightly slower Voyager 2, which is “only” 18.8 billion km distant. Since mid-March 2020, when the coronavirus pandemic took hold, NASA has been unable to send commands to Voyager 2, but an upgraded deep space network dish made a successful call on October 29. Here’s the fascinating science that keeps us in touch with the most distant objects ever launched from Earth.

Image: A logarithmic chart of distances, showing the Voyagers, our Solar System, and more. At distances of 148 and 125 astronomical units, respectively, both Voyager 1 and 2 have passed the heliopause. Credit: NASA / JPL-Caltech

When it comes to sending and receiving signals across astronomical distances, there are three enemies you have to overcome:

  1. distance,
  2. time,
  3. and power.


The farther away a spacecraft is from you, the farther a signal that you send has to travel before it reaches it, the longer it takes to get there, and the lower in power that signal is when it arrives. If a spacecraft is twice as distant as another, the time it takes a light signal to reach it is twice as great, and the signal power it receives is only one-fourth as great, since light signals spread out in the two dimensions perpendicular to the spacecraft’s line of sight. In short, the farther away a spacecraft is, the harder it is to contact, the longer that contact takes, and the more energy it takes to send or receive the same signal.
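To put numbers on that inverse-square dilution using the distances quoted earlier (22 billion km for Voyager 1, 18.8 billion km for Voyager 2), here is a minimal sketch; the transmitter power is an arbitrary placeholder for illustration, not an actual Deep Space Network figure, and a real dish is highly directional rather than isotropic:

    import math

    def flux(power_watts, distance_m):
        """Flux (W/m^2) from an isotropic transmitter: the power spreads
        over a sphere of surface area 4 * pi * d^2."""
        return power_watts / (4 * math.pi * distance_m ** 2)

    P_TX = 20_000.0                 # placeholder transmitter power, watts
    VOYAGER_1_M = 22e9 * 1_000      # 22 billion km, in metres
    VOYAGER_2_M = 18.8e9 * 1_000    # 18.8 billion km, in metres

    f1, f2 = flux(P_TX, VOYAGER_1_M), flux(P_TX, VOYAGER_2_M)
    print(f"Flux arriving at Voyager 2: {f2:.3e} W/m^2")
    print(f"Flux arriving at Voyager 1: {f1:.3e} W/m^2 "
          f"({(VOYAGER_1_M / VOYAGER_2_M) ** 2:.2f}x weaker, being farther away)")

The same geometry applies in reverse to the spacecraft’s far weaker transmitter, which is why ever larger and more sensitive ground antennae matter so much.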


Image: Illustration of how sunlight, or any form of electromagnetic radiation, spreads out as a function of distance. Credit: Wikimedia Commons user Borb

The way an electromagnetic signal works — whether you’re detecting it with a refracting lens, a reflecting dish, or a linear antenna — is straightforward: it spreads out in a spherical shape from its source. Because there’s a certain amount of inherent background noise to any observation you’d make, from both terrestrial and celestial sources, you need your signal to cross a certain threshold to be detectable, rising above the noise background. On the receiving end, that means larger detectors are better, while on the transmitting end, that means a higher-powered transmitter is better.

Unfortunately, the spacecraft that have already been launched cannot have their hardware upgraded in any way; once they’re launched, they’re simply stuck with the technology they’ve been outfitted with. To make matters worse, the spacecraft themselves are powered by radioactive sources, where specially chosen material, such as plutonium-238, radioactively decays, emitting heat that gets converted into electricity. As time goes on, more and more of the material decays away, decreasing the power available to the spacecraft for both transmitting and receiving signals.

Image: A pellet of plutonium oxide, which is warm to the touch and glows under its own power. Credit: Public Domain / Los Alamos National Laboratory

As the amount of heat energy produced by radioactive material decreases, the conversion from heat energy into electrical energy becomes less successful: the thermocouples degrade over time and lose efficiency at lower powers. As a result, the power available to the spacecraft through radioisotope thermoelectric generators has decreased precipitously. As of 2020, the plutonium-238 onboard is producing just 69% of the initial heat energy, and that translates into only about ~50% of the original output power.
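The ~69% figure is roughly what radioactive decay alone predicts. Plutonium-238 has a half-life of about 87.7 years, so some 43 years after the 1977 launches the heat output should be near 70% of the original; the further drop to ~50% electrical output reflects the degrading thermocouples described above. A minimal sketch of the decay part:

    HALF_LIFE_YEARS = 87.7          # plutonium-238
    ELAPSED_YEARS = 2020 - 1977     # ~43 years since launch

    remaining_heat = 0.5 ** (ELAPSED_YEARS / HALF_LIFE_YEARS)
    print(f"Heat output remaining after {ELAPSED_YEARS} years: {remaining_heat:.0%}")
    # Prints ~71%, in the same ballpark as the ~69% quoted above.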

Even though Voyager 1 and 2 are now 43 years old and farther from Earth than any other operating spacecraft in history, they’re not lost to us yet. The reason is simple: as we improve our transmission and receiving capabilities back here on Earth, we can both send out more powerful signals to these distant spacecraft and do a better job of detecting their responses, even at low powers. The key is NASA’s Deep Space Network: a collection of radio antennae designed to communicate with humanity’s most distant spacecraft.

Image: Crews conduct critical upgrades and repairs to the 70-meter-wide (230-foot-wide) radio antenna Deep Space Station 43, part of NASA’s Deep Space Network. Credit: CSIRO

There are three major radio antenna facilities around the world: one in Canberra, Australia, one in Madrid, Spain, and one in Goldstone, California. These three facilities are spaced roughly equidistant around the globe; for almost any location that you can imagine putting a spacecraft, at least one of the antennae will have a direct line-of-sight to that spacecraft at any given time.

Almost, of course. You might recognize that the facility in Canberra, Australia, is the only one located in Earth’s southern hemisphere. If a spacecraft is very far south — so far south that it’s invisible from locations like California or Spain — then the Australian dish would be the only one capable of communicating with it. While both Pioneers, New Horizons, and the Voyager 1 spacecraft could all be contacted (in theory) by all three of these facilities, Voyager 2 is the exception for one major reason: its 1989 flyby of Neptune and its giant moon, Triton.

Image: The illuminated crescents of Neptune (foreground) and its largest moon Triton (background). Credit: NASA / Jet Propulsion Lab

The trip to Neptune still, even to this day, represents the only close encounter humanity has ever had with our Solar System’s eighth and final (for now) planet, as well as with Triton, the largest known object to originate in our Kuiper belt. The discoveries from that flyby were spectacular, as a number of fantastic features were discovered: Neptune’s ring system, a number of small, inner moons, and a series of features on Triton, including cryovolcanoes and varied terrain similar to what we’d discover some 26 years later when New Horizons flew past Pluto.

In order to have a close encounter with Triton, however, Voyager 2 needed to fly over Neptune’s north pole, deflecting Voyager 2’s trajectory far to the south of the plane in which the planets orbit the Sun. Over the past 31 years, it’s continued to follow that trajectory, rendering it invisible to every member of the Deep Space Network except for the one dish in Australia. And since mid-March, 2020, that dish — which includes the radio transmitter used to talk to Voyager 2 — has been shut down for upgrades.

Image: NASA’s Deep Space Station 43 (DSS43) radio telescope, a massive dish 70 meters across. Credit: NASA/CSIRO

The dish itself is a spectacular piece of technology. It’s 70 meters (230 feet) across: a world-class radio antenna. The instruments attached to it include two radio transmitters, one of which is used to send commands to Voyager 2. That instrument, as of early 2020, was 47 years old, and hadn’t been replaced in all that time. Additionally, it was using antiquated heating and cooling equipment, old and inefficient electronics, and a set of power supply equipment that limited any potential upgrades.

Fortunately, the decision was made to upgrade all of these, which should enable NASA to do what no other facility can do: send commands to Voyager 2. While the spacecraft is still operating — including sending health updates and science data that can be received by a series of smaller dishes also located in Australia — it has been unable to receive commands, ensuring that it will just keep doing whatever it was last doing until those new commands are received.

Image: With its close flyby of Neptune and Triton, Voyager 2’s trajectory was severely altered, plunging it far to the south of the plane of the planets. Credit: Phoenix7777/Wikimedia Commons; Data: HORIZONS System, JPL, NASA

On October 29, 2020, enough of the upgrades had been executed that mission operators for Voyager 2 decided to perform a critical test: to send a series of commands to Voyager 2 for the first time since the upgrades began. According to the project manager of the Deep Space Network for NASA, Brad Arnold:

“What makes this task unique is that we’re doing work at all levels of the antenna, from the pedestal at ground level all the way up to the feedcones at the center of the dish that extend above the rim.”

Although it takes about 35 hours for a signal to make the round trip between Earth and Voyager 2, NASA announced on November 2 that the test was successful. Voyager 2 returned a signal that confirmed the call was received, followed by a successful execution of the commands. According to Arnold, “This test communication with Voyager 2 definitely tells us that things are on track with the work we’re doing.”
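That waiting time follows directly from the distance quoted earlier and the speed of light; a quick check, assuming the ~18.8 billion km figure:

    DISTANCE_KM = 18.8e9              # approximate Earth-to-Voyager 2 distance
    SPEED_OF_LIGHT_KM_S = 299_792.458

    one_way_h = DISTANCE_KM / SPEED_OF_LIGHT_KM_S / 3600
    print(f"One-way light time: {one_way_h:.1f} hours")
    print(f"Round trip:         {2 * one_way_h:.1f} hours")

So mission controllers had to wait roughly a day and a half between sending the commands and learning they had been executed.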

Image: Triton, at left, as imaged by Voyager 2, and Pluto, at right, as imaged by New Horizons. Credit: NASA/JPL/USGS (L), NASA/JHUAPL/SWRI (R)

The upgrades to this member of the Deep Space Network are on track for completion in early 2021, when they will not only be critical for the continued success of the Voyager 2 mission but will also prepare NASA for a series of upcoming missions. The upgraded infrastructure will play a critical role in any upcoming Moon-to-Mars exploration efforts, will support crewed missions such as Artemis, will provide communication and navigation infrastructure, and will assist with communications to NASA’s Mars Perseverance rover, scheduled to land on Mars on February 18, 2021.

This particular dish was constructed in 1972 with an original diameter of 64 meters (210 feet). It was expanded to 70 meters (230 feet) 15 years later, but none of the subsequent repairs or upgrades compare to the work being done today. According to NASA, this is “one of the most significant makeovers the dish has received and the longest it’s been offline in over 30 years.”

Image: Position and trajectory of Voyager 1 and the positions of the planets on 14 February 1990. Credit: Wikimedia Commons / Joe Haythornthwaite and Tom Ruen

As Voyager 2 and the other escaping spacecraft continue to recede from the Sun, their power levels will continue to drop and it will become progressively more difficult to issue commands to them as well as to receive data. However, as long as they remain functional, even at incredibly low and inefficient power levels, we can continue to upgrade and enlarge the antennae that are a part of NASA’s Deep Space Network to continue to conduct science with them. As long as these spacecraft remain operational in some capacity, simply continuing to upgrade our facilities here on Earth will enable us to gather data for years, and likely even decades, to come.

Voyager 1 and 2 are already the most distant operational spacecraft ever launched from Earth, and continue to set new records. They’ve both passed the heliopause and entered interstellar space, probing different celestial hemispheres as they go. Each new piece of data they send back is a first: the first time we’ve directly sampled space outside of our Solar System from so far away. With these new upgrades, we’ll have the capacity to see what we’ve never seen before. In science, that’s where the potential for rich, new discoveries always lies.

Ethan Siegel

I am a Ph.D. astrophysicist, author, and science communicator, who professes physics and astronomy at various colleges. I have won numerous awards for science writing since 2008 for my blog, Starts With A Bang, including the award for best science blog by the Institute of Physics. My two books, Treknology: The Science of Star Trek from Tricorders to Warp Drive and Beyond the Galaxy: How Humanity Looked Beyond Our Milky Way and Discovered the Entire Universe, are available for purchase at Amazon. Follow me on Twitter @startswithabang.


V101 Science

Part 2 – The Voyager 2 space probe was the second human-made object to reach interstellar space. But what did it see during its historic 42-year trip out of the solar system? If you haven’t watched part 1, which follows Voyager 1 on its journey, here is the link – https://www.youtube.com/watch?v=Du5he…


Physicists Debate Hawking’s Idea That the Universe Had No Beginning


In 1981, many of the world’s leading cosmologists gathered at the Pontifical Academy of Sciences, a vestige of the coupled lineages of science and theology located in an elegant villa in the gardens of the Vatican. Stephen Hawking chose the august setting to present what he would later regard as his most important idea: a proposal about how the universe could have arisen from nothing.

Before Hawking’s talk, all cosmological origin stories, scientific or theological, had invited the rejoinder, “What happened before that?” The Big Bang theory, for instance — pioneered 50 years before Hawking’s lecture by the Belgian physicist and Catholic priest Georges Lemaître, who later served as president of the Vatican’s academy of sciences — rewinds the expansion of the universe back to a hot, dense bundle of energy. But where did the initial energy come from?

The Big Bang theory had other problems. Physicists understood that an expanding bundle of energy would grow into a crumpled mess rather than the huge, smooth cosmos that modern astronomers observe. In 1980, the year before Hawking’s talk, the cosmologist Alan Guth realized that the Big Bang’s problems could be fixed with an add-on: an initial, exponential growth spurt known as cosmic inflation, which would have rendered the universe huge, smooth and flat before gravity had a chance to wreck it. Inflation quickly became the leading theory of our cosmic origins. Yet the issue of initial conditions remained: What was the source of the minuscule patch that allegedly ballooned into our cosmos, and of the potential energy that inflated it?

Hawking, in his brilliance, saw a way to end the interminable groping backward in time: He proposed that there’s no end, or beginning, at all. According to the record of the Vatican conference, the Cambridge physicist, then 39 and still able to speak with his own voice, told the crowd, “There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?”

The “no-boundary proposal,” which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock — the so-called “wave function of the universe” that encompasses the entire past, present and future at once — making moot all contemplation of seeds of creation, a creator, or any transition from a time before.

“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death. “It would be like asking what lies south of the South Pole.”

Hartle and Hawking’s proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, “We didn’t have birds in the very early universe; we have birds later on. … We didn’t have time in the early universe, but we have time later on.”

The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking’s. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing, analyzed the theories’ various predictions and ways to test them, and interpreted their philosophical meaning. The no-boundary wave function, according to Hartle, “was in some ways the simplest possible proposal for that.”

But two years ago, a paper by Turok, Job Feldbrugge of the Perimeter Institute, and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question. The proposal is, of course, only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said — “deeply ambiguous.”

In their 2017 paper, published in Physical Review Letters, Turok and his co-authors approached Hartle and Hawking’s no-boundary proposal with new mathematical techniques that, in their view, make its predictions much more concrete than before. “We discovered that it just failed miserably,” Turok said. “It was just not possible quantum mechanically for a universe to start in the way they imagined.” The trio checked their math and queried their underlying assumptions before going public, but “unfortunately,” Turok said, “it just seemed to be inescapable that the Hartle-Hawking proposal was a disaster.”


The paper ignited a controversy. Other experts mounted a vigorous defense of the no-boundary idea and a rebuttal of Turok and colleagues’ reasoning. “We disagree with his technical arguments,” said Thomas Hertog, a physicist at the Catholic University of Leuven in Belgium who closely collaborated with Hawking for the last 20 years of the latter’s life. “But more fundamentally, we disagree also with his definition, his framework, his choice of principles. And that’s the more interesting discussion.”

After two years of sparring, the groups have traced their technical disagreement to differing beliefs about how nature works. The heated — yet friendly — debate has helped firm up the idea that most tickled Hawking’s fancy. Even critics of his and Hartle’s specific formula, including Turok and Lehners, are crafting competing quantum-cosmological models that try to avoid the alleged pitfalls of the original while maintaining its boundless allure.

Garden of Cosmic Delights

Hartle and Hawking saw a lot of each other from the 1970s on, typically when they met in Cambridge for long periods of collaboration. The duo’s theoretical investigations of black holes and the mysterious singularities at their centers had turned them on to the question of our cosmic origin.

In 1915, Albert Einstein discovered that concentrations of matter or energy warp the fabric of space-time, causing gravity. In the 1960s, Hawking and the Oxford University physicist Roger Penrose proved that when space-time bends steeply enough, such as inside a black hole or perhaps during the Big Bang, it inevitably collapses, curving infinitely steeply toward a singularity, where Einstein’s equations break down and a new, quantum theory of gravity is needed. The Penrose-Hawking “singularity theorems” meant there was no way for space-time to begin smoothly, undramatically at a point.

Graphic: “In the ‘Beginning’” (5W Infographics for Quanta Magazine)

Hawking and Hartle were thus led to ponder the possibility that the universe began as pure space, rather than dynamical space-time. And this led them to the shuttlecock geometry. They defined the no-boundary wave function describing such a universe using an approach invented by Hawking’s hero, the physicist Richard Feynman. In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this “path integral” gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision.

Likewise, Hartle and Hawking expressed the wave function of the universe — which describes its likely states — as the sum of all possible ways that it might have smoothly expanded from a point. The hope was that the sum of all possible “expansion histories,” smooth-bottomed universes of all different shapes and sizes, would yield a wave function that gives a high probability to a huge, smooth, flat universe like ours. If the weighted sum of all possible expansion histories yields some other kind of universe as the likeliest outcome, the no-boundary proposal fails.
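In symbols, the no-boundary wave function is conventionally written as a Euclidean path integral over compact four-geometries (and matter fields) whose only boundary is the three-geometry being asked about; this is the standard schematic form of the Hartle-Hawking proposal, condensed here rather than quoted from the article:

    \Psi[h_{ij}, \phi]
    \;\sim\;
    \int \mathcal{D}g \,\mathcal{D}\Phi \;
    e^{-S_{E}[g, \Phi]/\hbar},
    \qquad
    g\big|_{\partial M} = h_{ij}, \quad \Phi\big|_{\partial M} = \phi,

where the integral runs over compact geometries with no boundary other than the surface on which h and φ are specified, and S_E is the Euclidean action. The “contour” dispute described below is about exactly how this integral should be defined, in particular which (possibly complex) values of the lapse are summed over.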

The problem is that the path integral over all possible expansion histories is far too complicated to calculate exactly. Countless different shapes and sizes of universes are possible, and each can be a messy affair. “Murray Gell-Mann used to ask me,” Hartle said, referring to the late Nobel Prize-winning physicist, “if you know the wave function of the universe, why aren’t you rich?” Of course, to actually solve for the wave function using Feynman’s method, Hartle and Hawking had to drastically simplify the situation, ignoring even the specific particles that populate our world (which meant their formula was nowhere close to being able to predict the stock market). They considered the path integral over all possible toy universes in “minisuperspace,” defined as the set of all universes with a single energy field coursing through them: the energy that powered cosmic inflation. (In Hartle and Hawking’s shuttlecock picture, that initial period of ballooning corresponds to the rapid increase in diameter near the bottom of the cork.)

Even the minisuperspace calculation is hard to solve exactly, but physicists know there are two possible expansion histories that potentially dominate the calculation. These rival universe shapes anchor the two sides of the current debate.

The rival solutions are the two “classical” expansion histories that a universe can have. Following an initial spurt of cosmic inflation from size zero, these universes steadily expand according to Einstein’s theory of gravity and space-time. Weirder expansion histories, like football-shaped universes or caterpillar-like ones, mostly cancel out in the quantum calculation.

One of the two classical solutions resembles our universe. On large scales, it’s smooth and randomly dappled with energy, due to quantum fluctuations during inflation. As in the real universe, density differences between regions form a bell curve around zero. If this possible solution does indeed dominate the wave function for minisuperspace, it becomes plausible to imagine that a far more detailed and exact version of the no-boundary wave function might serve as a viable cosmological model of the real universe.

The other potentially dominant universe shape is nothing like reality. As it widens, the energy infusing it varies more and more extremely, creating enormous density differences from one place to the next that gravity steadily worsens. Density variations form an inverted bell curve, where differences between regions approach not zero, but infinity. If this is the dominant term in the no-boundary wave function for minisuperspace, then the Hartle-Hawking proposal would seem to be wrong.

The two dominant expansion histories present a choice in how the path integral should be done. If the dominant histories are two locations on a map, megacities in the realm of all possible quantum mechanical universes, the question is which path we should take through the terrain. Which dominant expansion history, and there can only be one, should our “contour of integration” pick up? Researchers have forked down different paths.

In their 2017 paper, Turok, Feldbrugge and Lehners took a path through the garden of possible expansion histories that led to the second dominant solution. In their view, the only sensible contour is one that scans through real values (as opposed to imaginary values, which involve the square roots of negative numbers) for a variable called “lapse.” Lapse is essentially the height of each possible shuttlecock universe — the distance it takes to reach a certain diameter. Lacking a causal element, lapse is not quite our usual notion of time. Yet Turok and colleagues argue partly on the grounds of causality that only real values of lapse make physical sense. And summing over universes with real values of lapse leads to the wildly fluctuating, physically nonsensical solution.

“People place huge faith in Stephen’s intuition,” Turok said by phone. “For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”

Imaginary Universes

Jonathan Halliwell, a physicist at Imperial College London, has studied the no-boundary proposal since he was Hawking’s student in the 1980s. He and Hartle analyzed the issue of the contour of integration in 1990. In their view, as well as Hertog’s, and apparently Hawking’s, the contour is not fundamental, but rather a mathematical tool that can be placed to greatest advantage. It’s similar to how the trajectory of a planet around the sun can be expressed mathematically as a series of angles, as a series of times, or in terms of any of several other convenient parameters. “You can do that parameterization in many different ways, but none of them are any more physical than another one,” Halliwell said.

He and his colleagues argue that, in the minisuperspace case, only contours that pick up the good expansion history make sense. Quantum mechanics requires probabilities to add to 1, or be “normalizable,” but the wildly fluctuating universe that Turok’s team landed on is not. That solution is nonsensical, plagued by infinities and disallowed by quantum laws — obvious signs, according to no-boundary’s defenders, to walk the other way.

It’s true that contours passing through the good solution sum up possible universes with imaginary values for their lapse variables. But apart from Turok and company, few people think that’s a problem. Imaginary numbers pervade quantum mechanics. To team Hartle-Hawking, the critics are invoking a false notion of causality in demanding that lapse be real. “That’s a principle which is not written in the stars, and which we profoundly disagree with,” Hertog said.

According to Hertog, Hawking seldom mentioned the path integral formulation of the no-boundary wave function in his later years, partly because of the ambiguity around the choice of contour. He regarded the normalizable expansion history, which the path integral had merely helped uncover, as the solution to a more fundamental equation about the universe posed in the 1960s by the physicists John Wheeler and Bryce DeWitt. Wheeler and DeWitt — after mulling over the issue during a layover at Raleigh-Durham International — argued that the wave function of the universe, whatever it is, cannot depend on time, since there is no external clock by which to measure it. And thus the amount of energy in the universe, when you add up the positive and negative contributions of matter and gravity, must stay at zero forever. The no-boundary wave function satisfies the Wheeler-DeWitt equation for minisuperspace.  

In the final years of his life, to better understand the wave function more generally, Hawking and his collaborators started applying holography — a blockbuster new approach that treats space-time as a hologram. Hawking sought a holographic description of a shuttlecock-shaped universe, in which the geometry of the entire past would project off of the present.

That effort is continuing in Hawking’s absence. But Turok sees this shift in emphasis as changing the rules. In backing away from the path integral formulation, he says, proponents of the no-boundary idea have made it ill-defined. What they’re studying is no longer Hartle-Hawking, in his opinion — though Hartle himself disagrees.

For the past year, Turok and his Perimeter Institute colleagues Latham Boyle and Kieran Finn have been developing a new cosmological model that has much in common with the no-boundary proposal. But instead of one shuttlecock, it envisions two, arranged cork to cork in a sort of hourglass figure with time flowing in both directions. While the model is not yet developed enough to make predictions, its charm lies in the way its lobes realize CPT symmetry, a seemingly fundamental mirror in nature that simultaneously reflects matter and antimatter, left and right, and forward and backward in time. One disadvantage is that the universe’s mirror-image lobes meet at a singularity, a pinch in space-time that requires the unknown quantum theory of gravity to understand. Boyle, Finn and Turok take a stab at the singularity, but such an attempt is inherently speculative.

There has also been a revival of interest in the “tunneling proposal,” an alternative way that the universe might have arisen from nothing, conceived in the ’80s independently by the Russian-American cosmologists Alexander Vilenkin and Andrei Linde. The proposal, which differs from the no-boundary wave function primarily by way of a minus sign, casts the birth of the universe as a quantum mechanical “tunneling” event, similar to when a particle pops up beyond a barrier in a quantum mechanical experiment.

Questions abound about how the various proposals intersect with anthropic reasoning and the infamous multiverse idea. The no-boundary wave function, for instance, favors empty universes, whereas significant matter and energy are needed to power hugeness and complexity. Hawking argued that the vast spread of possible universes permitted by the wave function must all be realized in some larger multiverse, within which only complex universes like ours will have inhabitants capable of making observations. (The recent debate concerns whether these complex, habitable universes will be smooth or wildly fluctuating.) An advantage of the tunneling proposal is that it favors matter- and energy-filled universes like ours without resorting to anthropic reasoning — though universes that tunnel into existence may have other problems.

No matter how things go, perhaps we’ll be left with some essence of the picture Hawking first painted at the Pontifical Academy of Sciences 38 years ago. Or perhaps, instead of a South Pole-like non-beginning, the universe emerged from a singularity after all, demanding a different kind of wave function altogether. Either way, the pursuit will continue. “If we are talking about a quantum mechanical theory, what else is there to find other than the wave function?” asked Juan Maldacena, an eminent theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has mostly stayed out of the recent fray. The question of the wave function of the universe “is the right kind of question to ask,” said Maldacena, who, incidentally, is a member of the Pontifical Academy. “Whether we are finding the right wave function, or how we should think about the wave function — it’s less clear.”

By: Natalie Wolchover
