A digital twin is precisely what its name suggests: a digital copy of a physical object or system—even a human being. It may be a simple concept, but the potential applications are anything but. Through the ongoing collection and exchange of data, a digital twin can simulate and even predict the behaviors and reactions of its physical twin in a variety of conditions, providing invaluable insights to industries ranging from manufacturing to healthcare.
Digital twin technology allows businesses and organizations to test products and processes, study and predict how real-world conditions can affect physical objects and beings, and make well-informed, big-impact decisions with minimized financial and human safety risks. Below, 16 members of Forbes Technology Council share some of the fascinating ways industries and organizations are leveraging digital twin technology.
1. Minimizing Manufacturing Waste
We at Cuby use digital twin technology to make sure we produce 1-to-1 kits of the parts needed in our prefab construction process. It’s been estimated that up to 40% of the solid waste in the U.S. is construction and demolition waste. Manufacturing all the parts in advance allows us to reduce waste by up to 90%. – Aleksandr Gampel, Cuby Technologies, Inc.
2. Building Resilient Supply Chains
Businesses are increasingly using digital twin technology to build resilient and responsive supply chains. The digitization of supply chain processes gives businesses the opportunity to increase organizational efficiency by predicting serious problems and slowdowns. In fact, it is estimated that by 2025, 80% of participants in industry ecosystems will rely on digital twin technology. – Radhika Krishnan, Hitachi Vantara
3. Mitigating Disruptions Due To Weather And Climate
Businesses are using digital twin technology to mitigate climate-related disruption. By combining data from public sources, such as weather data, with data from suppliers and partners, leaders can see how an unplanned weather event might impact the flow of goods across their supply chain, then use this insight to quickly pivot orders, routes or suppliers to limit waste and meet demand. – Rohit Shrivastava, Anaplan
4. Studying And Refining Processes
Process mining combined with simulation gives reliable visibility into as-executed processes (versus relying on what somebody thinks is happening) and the ability to do “what-if” analyses. This is effective because it allows one to see what’s really happening and simulate changes before making them. Often, changes do nothing or create a bigger problem elsewhere. Simulation and mining prevent that. – Michael Nyman, iGrafx
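To make the “what-if” idea concrete, here is a minimal, purely illustrative sketch (not iGrafx’s product or method): step timings of the kind process mining would extract from event logs are replayed in a simple Monte Carlo simulation, and a proposed change is compared against the as-executed baseline before anyone touches the real process. All step names and durations are hypothetical.

```python
import random

def simulate(review_minutes, approval_minutes, n_cases=10_000, seed=42):
    """Average end-to-end time for a simple two-step process (hypothetical timings)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_cases):
        review = rng.expovariate(1.0 / review_minutes)      # step 1: document review
        approval = rng.expovariate(1.0 / approval_minutes)  # step 2: approval
        total += review + approval
    return total / n_cases

baseline = simulate(review_minutes=30, approval_minutes=45)          # as-executed
what_if  = simulate(review_minutes=30, approval_minutes=20)          # proposed change

print(f"Baseline cycle time: {baseline:.1f} min")
print(f"What-if cycle time:  {what_if:.1f} min")
```

Running the what-if scenario in the model first shows whether the change actually moves the end-to-end number, rather than merely shifting the bottleneck elsewhere.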
5. Making Data-Driven Manufacturing Decisions
The utilization of digital twins in the manufacturing industry has seen large growth. Digital twins increase productivity and reduce costs by combining the physical and digital worlds to make data-driven decisions, prolong asset life cycles and minimize unexpected maintenance disruption across assets. This modernizes the sector by moving from the “break and fix” approach to proactive maintenance. – Cindy Jaudon, IFS
6. Testing Health Intervention And Engagement Strategies
Health outcomes improve when patients are confident, connected and engaged. Digital twin technologies provide healthcare organizations the option to test drive new interventions and engagement strategies. This lowers the risks of rolling out new programs by testing hypotheses through a simulated pilot while also enabling cost-conscious innovation. – Trisha Swift, PricewaterhouseCoopers
7. Improving Patient Outcomes
A digital twin enables accurate and continuous monitoring. That data flow can inform data-driven decisions. For example, doctors are using an individual’s genetic makeup to model new organs for transplant. Because these processes can leverage a populationwide data set of digital twins, they can replicate an individual human body’s internal system to improve treatment outcomes for all. – Nicholas Domnisch, EES Health
8. Expanding Professional Services Capabilities
Knowledge-rich professional services firms are building digital twins of accountants, advisors and auditors using graph-based intelligent automation. These are distinct from previous technologies, because the decisions these professionals make are complex, contextual and nonlinear. In a world where there are big skills shortages and raging inflation, this form of IA is closing the gap. – James Duez, Rainbird Technologies
9. Onboarding And Knowledge Sharing
A digital twin use case that is an easy entry point and can provide quick ROI is training or onboarding. In areas where experienced employees are preparing for retirement, where there is high turnover or where there are general labor shortages, having a prerecorded “virtual” expert that can walk you through the instructions in real time can be a game changer and is much more effective than a giant paper manual. – Samantha Williams, Sonoco
10. Providing Safe Training
Today, manufacturing organizations are leveraging digital twin technologies to replicate machinery that would typically put employees in harm’s way. A virtualized version of the original piece of machinery can give employees hands-on training experience without putting their health and safety in peril. – Marc Fischer, Dogtown Media LLC
11. Understanding Multidimensional Problems
Digital twin technology shines the brightest when it helps companies better understand a multidimensional problem—one that is too complex to easily solve. Because it is a way to visualize and make better decisions, the technology has become extremely effective for everything from product design to diagnosing medical issues to better understanding variables that affect business expenses. – Josh Dunham, Reveel
12. Improving Manufacturing Efficiencies
A very interesting field of application is manufacturing. Thanks to a digital twin of a production plant, with all its different lines and machines, we can launch simulations to generate greater efficiency or to detect potential bottlenecks. Simulating the manufacture of new products or variants of existing products is also a very useful application. – Miguel Llorca, Torrent Group
13. Developing And Training Self-Driving Vehicles
Without digital twin technology, it would be impossible to develop self-driving vehicles at the scale and with the reliability we are witnessing nowadays. Billions of simulations on a “virtual road” by a “virtual car” allow for training machine learning models to forecast accidents and plan not just the fastest, but also the safest, routes so that drivers can entrust the actual driving to robots. – Aleks Farseev, SoMin.ai
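As a toy illustration of the “virtual road” idea (not how any actual autonomous-driving stack is built), the sketch below generates synthetic driving scenarios, labels the ones that end in a simulated collision, and fits a simple model to predict risk. The features, thresholds and physics are hypothetical stand-ins for what a real physics-based simulator and deep network would provide.

```python
import random
from sklearn.linear_model import LogisticRegression

rng = random.Random(0)
X, y = [], []
for _ in range(5000):
    speed = rng.uniform(0, 40)        # m/s
    gap = rng.uniform(2, 120)         # metres to the lead vehicle
    reaction = rng.uniform(0.3, 1.5)  # seconds

    # Crude surrogate for the simulator's outcome: collision if the gap is
    # smaller than the distance covered during reaction plus braking.
    braking_distance = speed * reaction + speed ** 2 / (2 * 6.0)
    collision = 1 if gap < braking_distance else 0

    X.append([speed, gap, reaction])
    y.append(collision)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("Predicted collision risk:", model.predict_proba([[25.0, 30.0, 1.0]])[0][1])
```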
14. Budgeting And Financial Planning
Financial and operational data is the lifeblood of a company, but it’s difficult to “see all of it” and understand it in real time. Digital twins offer real-time, big-data-enabled simulation modeling that can be particularly useful for budget and financial planning. The technology can streamline tasks such as procurement, case management and capital resiliency and deliver powerful insights for finance leaders. – Nicola Morini Bianzino, EY
15. Managing Traffic
Digital twins are already effectively used by urban planning councils in many U.S. cities for efficient traffic management. They help in simulating real-world congestion at junctions, predicting what may get worse when and where, and they can be used to test multiple mitigation techniques by leveraging the best mix of ML and city know-how. Dashcam-backed digital twins are explored alongside junction twins. – Pramod Konandur Prabhakar, Pelatro PLC
16. Simulating Real-World Conditions
One way businesses are leveraging digital twin technology is by using it to simulate the physical world. For example, a company can use a digital twin to simulate a real-life situation so that they can predict how their product or service will behave in that environment. Another way is by using it to understand how their customers use their services and products. – Leon Gordon, Pomerol Partners
In simple terms, a digital twin is a digital replica or representation of a physical object (e.g., an aircraft engine, a person, a vehicle) or an intangible system (e.g., a marketing funnel, a fulfillment process) that can be examined, altered and tested without interacting with it in the real world and without risking negative consequences.
Think of it as an online platform for testing, creating and altering objects that exist in reality, without engaging with them in the real world itself. Similar technologies were used in various industries long before this concept was named; however, today’s digital twins offer far more potential, power and scalability, and can replicate, monitor and test virtually anything you can think of.
The rise of the Internet of Things (IoT) has complemented the adoption of this technology, as IoT has made its implementation cost-effective. Virtual twins have become imperative to business today and have consistently been named a strategic technology trend in recent years. The complexity of the technology has raised many questions within the industry, one of the most important being how it is changing the way design, planning, manufacturing, operation, simulation and forecasting have traditionally functioned.
A digital twin is a near-real-time digitized copy of a physical object, replicated on a virtual platform; it is a bridge between the digital world and the physical world. Its core use is to optimize business performance through the analysis of data and the monitoring of systems, preventing issues before they occur and avoiding downtime. The simulations that are produced help develop and plan future opportunities and updates to the process or product.
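A minimal sketch of the monitoring half of that idea, assuming a hypothetical pump asset and made-up thresholds: the twin mirrors recent sensor readings from its physical counterpart and flags drift before it turns into downtime.

```python
from collections import deque
from statistics import mean

class PumpTwin:
    """Toy digital twin of a pump: holds a rolling window of synced readings."""

    def __init__(self, expected_temp_c=60.0, warn_margin=10.0, window=20):
        self.expected_temp_c = expected_temp_c
        self.warn_margin = warn_margin
        self.readings = deque(maxlen=window)  # most recent sensor values

    def ingest(self, temp_c):
        """Sync one temperature reading from the physical pump into the twin."""
        self.readings.append(temp_c)

    def health_check(self):
        """Warn if the rolling average drifts out of the expected band."""
        if len(self.readings) < self.readings.maxlen:
            return "collecting data"
        avg = mean(self.readings)
        if abs(avg - self.expected_temp_c) > self.warn_margin:
            return f"WARN: avg temperature {avg:.1f} C, schedule maintenance"
        return "OK"

twin = PumpTwin()
for t in [61, 62, 63, 65, 68, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85]:
    twin.ingest(t)
print(twin.health_check())
```

In practice the readings would stream in from IoT sensors and the “expected” behaviour would come from historical data or a physics model, but the pattern is the same: mirror, compare, act before failure.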
The benefits of virtual twin technology are enormous, with industries such as agriculture, government, transportation and retail already seeing rewards and more to come. Companies must find ways to reduce the risk of defects in their assets and future products, and this technology helps minimize production costs: companies save money when products are right the first time.
There is no need for expensive physical tests or updates to the product or process. Research with manufacturers has found that the approach can cut development costs for the next generation of machines by well over 50%. The technology also provides the added confidence needed to boost product performance and support complex decisions, preventing costly downtime for robotics and machinery.
The core reason most companies started twinning their processes, products and services via simulation is efficiency. Businesses race to bring products to market faster than the competition, and the ability to virtually simulate scenarios in which a product is tested for failure from multiple angles helps immensely. Not to mention that development and testing costs are often reduced many times over.
This technology can anticipate how a product or process will perform through digital simulations and analysis. The accessibility of reliable and consistently updated information provides the assurance needed to make faster decisions and increase the speed of production to overtake competitors. One use case we have presented in workshops on understanding virtual twins illustrates the point.
In it, a virtual twin simulation is used to replicate and optimize machinery that regulates water flow in a factory. Developers can see every moving detail on screen and then make calculated decisions to upgrade, optimize or make other changes accordingly. Offices that adopt this technology early will attract innovative, leading talent and will be able to incorporate interactive features to improve employee satisfaction and productivity using data-driven simulations.
Employees using digital twin technology will be able to engage with online tools such as interactive maps to locate colleagues on the floor, book meetings and complete tasks with greater diligence and accuracy. Managers will also be able to supervise remotely with a tool resembling a 3D map, built on simulation-based virtual platforms.
It is evident that the digital twin concept will benefit many people across the supply chain. Combining this disruptive concept with IoT technology is an incredible opportunity for businesses to improve. Ultimately, it will allow stakeholders to boost overall efficiency, reduce costs and improve many aspects of work for employees.
Children looking over Aberfan, Wales, in the wake of the disaster there in 1966.
When it finally happened, shortly after nine o’clock in the morning on October 21, 1966—when the teetering pile of mining waste known as a coal tip collapsed after days of heavy rain and an avalanche of black industrial sludge swept down the Welsh mountainside into the village of Aberfan, when rocks and mining equipment from the colliery slammed into people’s homes and the schools were buried and 116 young children were asphyxiated by this slurry dark as the river Styx—the anguished public response was that someone should have seen this disaster coming, ought to have predicted it.
Or at least, they claimed they had. Shortly after the tragedy at Aberfan, several women and men recalled having eerily specific premonitions of the event. A piano teacher named Kathleen Middleton awoke in North London, only hours before the tip fell, with a feeling of sheer dread, “choking and gasping and with the sense of the walls caving in.” A woman in Plymouth had a vision the evening before the disaster in which a small, frightened boy watched an “avalanche of coal” slide towards him but was rescued; she later recognized the child’s face on a television news segment about Aberfan.
One of the children who died had first dreamt of “something black” smothering her school. Paul Davies, an 8-year-old victim, drew a picture the night before the catastrophe that showed many people digging in a hillside. Above the scene, he had written two words: The End. Premonitions this dramatic and alarming are likely rare. But most of us have experienced odd coincidences that make us feel, even for an instant, that we have glimpsed the future. A phrase or scene that triggers a jarring sensation of déjà vu.
Thinking of someone right before they text or call. Inexplicably dreaming about a long-lost acquaintance or relative only to wake and find that they have fallen ill or died. It’s mostly accepted that these are not really forms of precognition or time travel but instead fluky accidents or momentary brain glitches, explainable by science. And so we don’t give them a second thought or take them that seriously. But what if we did?
The Premonitions Bureau, an adroit debut from The New Yorker staff writer Sam Knight, draws us into a world not that far gone in which psychic phenomena were yet untamed by science and uncanny sensations still whispered of the supernatural, of cosmic secrets. Knight’s book registers the spectral shockwaves that rippled out from Aberfan through the human instrument of John Barker, a British psychiatrist who began cataloguing and investigating the country’s premonitions and portents in the wake of the accident.
Barker spent his career seeking out the hidden joints between paranormal experience and modern medicine, asking scientific questions about the occult that we have now agreed no longer to ask. In Knight’s skillful hands, the life of this forgotten clinician becomes a meditation on time and a window through which we can perceive the long human history of fate and foresight. It’s also a tale about how we decide what is worthy of science and what it feels like to be left behind. It is a story about a scientific revolution that never happened.
Forty-two years old when the country learned of Aberfan, John Barker was a Cambridge-educated psychiatrist of terrific ambition and rather middling achievement. In his thirties, he had been an unusually young hospital superintendent at a facility in Dorset; a nervous breakdown led to his demotion and reassignment, by the mid-’60s, to Shelton Hospital, where he cared for about 200 of the facility’s thousand patients. Shelton was a Victorian-era asylum in western England, not far from Wales, and a hellish world unto itself.
Local doctors called it the “dumping ground,” this 15-acre gothic facility of red-brick buildings hidden behind red-brick walls, where women and men suffering from mental illness were deposited for the rest of their lives. One-third of Shelton’s population had never received a single visitor. Like other mental health facilities in midcentury Britain, it was a place of absolutely crushing neglect. “Nurses smoked constantly,” Knight writes, “in part to block out Shelton’s all-pervading smell: of a house, locked up for years, in which stray animals had occasionally come to piss.” Every week or two, another suicide. “The primary means of discharge was death.”
As a clinician, Barker was tough and demanding. He was also complicated (like all of us) and tough to caricature. Barker had arrived at Shelton as calls for psychiatric reform were growing louder, and he supported efforts to make conditions “as pleasant as possible” for the hospital’s permanent residents, including removing locks from most of the wards and arranging jazz concerts. But he also favored aversion shock therapies and once performed a lobotomy—which, to his credit, he later regretted.
At any rate, Barker’s true passion lay elsewhere. As a young medical student, he collected ghost stories from nurses and staff at the London hospital where he was training: sudden and unaccountable cold presences late at night, spectral ward sisters who shouldn’t have been there and who vanished when you looked twice. A “modern doctor” committed to rational methods, he was nevertheless drawn to all things paranormal, an interest that led him to join Britain’s Society for Psychical Research, whose members had been studying unexplained occult phenomena since 1882.
Barker had a crystal ball on his desk and spent his weekends at Shelton rambling around haunted houses with his son. He was a man caught between worlds who would eventually fall through the cracks. The day following the disaster, Barker showed up in Aberfan to interview residents for an ongoing project about people who frightened themselves to death. But he realized quickly that his questioning was insensitive—and as he learned more about the uncanny portents and premonitions that were already swirling around the tragedy, he sensed a much greater opportunity.
Barker contacted Peter Fairley, a journalist and science editor at the Evening Standard, with his hunch that some people may have foreseen the disaster through a kind of second sight. Days later, the paper broadcast Barker’s paranormal appeal to its 600,000 subscribers: “Did anyone have a genuine premonition before the coal tip fell on Aberfan? That is what a senior British psychiatrist would like to know.”
A gifted scientific popularizer, Fairley shared with Barker a knack for publicity as well as tremendous ambition. Within weeks, the two men had dramatically expanded the project. From January 1967, readers were told to send general auguries or prophecies to a newly established “Premonitions Bureau” within the newsroom. “We’re asking anyone,” Fairley told a BBC radio interviewer, “who has a dream or a vision or an intensely strong feeling of discomfort” which involves potential danger to themselves or others “to ring us.”
With Fairley’s brilliant assistant Jennifer Preston doing most of the work, the team categorized the predictions and tracked their accuracy. Their hope was to prove that precognition was real and convince Parliament to use this psychic power for good by developing a national early warning system for disasters. “Nobody will be scoffed at,” Fairley insisted. “Let us simply get at the truth.”
Seventy-six people wrote to Barker claiming premonitory visions of the Aberfan disaster. Throughout 1967, another 469 psychic warnings were submitted to the Bureau. Many of these submissions came from women and men who claimed to be seers, who experienced precognition throughout their lives as a sort of sixth sense. Kathleen Middleton, the piano teacher who awoke choking before the coal tip collapse, became a regular Bureau contact who had been sensitive to occult forces since she was a girl.
(During the Blitz, a vision of disaster convinced her to stay home one night instead of going out with friends; the dance hall was bombed.) Another frequent contributor was Alan Hencher, a telephone operator who wrote that he was “able to foretell certain events” but with “no idea how or why.” The premonitions gathered by Barker ran the gamut of believability. Some were instantly disqualified. Others were spookily prescient. In early November 1967, both Hencher and Middleton warned of a train derailment; one occurred days later, near London, killing 49 people.
Hencher suffered a severe headache on the evening of the disaster and suggested the time of the accident nearly to the minute, before the news had been reported. Most of the premonitions appear to have been vague enough to be right if you wanted them to be, if you were willing to cock your head to one side and squint. A woman reported a dream about a fire; on the day she mailed her letter, a department store in Brussels burned. One day in May 1967, Middleton warned about an impending maritime disaster; an oil tanker ran aground.
Visions of airliner crashes inevitably, if one waited long enough, came true somewhere in the world. Barker was determined to believe in them. “Somehow,” he told an interviewer, seers like Hencher and Middleton “can gate-crash the time barrier … see the unleashed wheels of disaster before the rest of us.… They are absolutely genuine. Quite honestly, it staggers me.”
For those of us unable to gate-crash time itself, one wonders what it would be like to have this kind of premonitory sense, to perceive the future so viscerally and so involuntarily. It was like knowing the answer for a test, some explained, with cryptic keywords floating in space in their imaginations. ABERFAN. TRAIN. Others had physiological symptoms. Odd smells, like earth or rotting matter, that nobody else could perceive, or a spasm of tremors and pain at the precise moment when disaster struck far away.
People who sensed premonitions explained to Barker that it was an awful burden, that they grappled with, as one put it, “the torment of knowing” and “the problem of deciding whether we should tell what we have received” in the face of potential ridicule or error. Prone to a certain grandeur, Barker believed that the stakes of the project, which he called “essential material and perhaps the largest study on precognition in existence,” were high. Practically speaking, he thought it would help avert disaster.
(If the Premonitions Bureau had been up and running earlier, he boldly claimed, Aberfan could have been avoided and many children’s lives saved.) More daringly, Barker thought that proving the existence of precognition would overturn the basic human understanding of linear time. He wondered if some people were capable of registering “some sort of telepathic ‘shock wave’ induced by a disaster” before it occurred. It might be akin to the psychic bonds felt between twins, but able to vanquish time as well as space.
Inspired by Foreknowledge, a book by retired shipping agent and amateur psychic researcher Herbert Saltmarsh, Barker thought that our conscious minds could likely only experience time moving forward, and in three distinct categories: past, present, and future. To our unconscious, however, time might be less stable and more permeable. If scientists would “accept the evidence for precognition from the cases” gathered by the Bureau, he said, they would be “driven to the conclusion that the future does exist here and now—at the present moment.” Barker sensed a career-defining discovery just around the corner.
But it was not to be. John Barker died on August 20, 1968, after a sudden brain aneurysm. He was 44 years old. The Bureau, which Jennifer Preston dutifully continued through the 1970s, and which ultimately included more than 3,000 premonitions, represented the last, unfinished chapter of his brief life. He never wrote his book on precognition and fell into obscurity. The morning before he died, Kathleen Middleton woke up choking.
Knight narrates Barker’s story with considerable generosity and evident care. Rather than condescend or deride him as a crank, Knight thinks with Barker: about the strangeness of time and our human ways of moving through it, about how we make meaning from chaos and resist the truly random, about prediction and cognition and our hunger for prophecy. Yet the many disappointments in Barker’s career were not incidental to his significance, and emphasizing them does not diminish him.
In fact, his life can also be framed as a tale told much too rarely in the history of science, about how scientific inquiry relies as much upon failure as success in order to function, on exclusion as much as expansion. Around the time Barker was appointed to his role at Shelton, the American historian and philosopher of science Thomas Kuhn published a book called The Structure of Scientific Revolutions, a landmark work that now structures practically everyone’s thinking without them realizing it. What Kuhn proposed was that scientific research always occurs within a paradigm:
a set of rules and assumptions that reflect not only what we think we know about how the universe works, but also the questions we are permitted to ask about it. At any given moment, “normal science” beavers away within the borders of the current paradigm, working on “legitimate problems” and solving puzzles. For a long while, Kuhn explained, phenomena “that will not fit the box are … not seen at all,” and “fundamental novelties” are suppressed. Eventually, however, there are too many anomalies for which the reigning paradigm cannot account. When a critical mass is reached, the model breaks and a new one is adopted that can better explain things. This is a scientific revolution.
For Barker, precognition constituted what Kuhn would have called a legitimate problem within normal science: It ought to be studied using experimental methods and would, he thought, one day be explained by them. But he admitted the risk that modern psychiatry might not ever be able to accommodate the occult, that his work on premonitions could break the paradigm altogether. Hunches and visions that came true might demand a new way of explaining time and energy. “Existing scientific theories must be transformed or disregarded if they cannot explain all the facts,” he lectured his many critics. “Although unpalatable to many, this attitude is clearly essential to all scientific progress.”
He seems to have seen himself as a contemporary Galileo, insisting upon empirical truth in the face of “frivolous and irresponsible” gatekeepers. “What is now unfamiliar,” he argued in the BMJ, usually tends to be “not accepted, even despite overwhelming supportive evidence. Thus for generations the earth was traditionally regarded as flat, and those who opposed this notion were bitterly attacked.” Barker wanted the ruling scientific paradigm to make room for the paranormal—or give way.
It wasn’t so implausible, in midcentury Britain, that it just might. A craze for spiritualism and the paranormal had swept the country between the two world wars, and a rash of new technologies that seemed magical (the telegraph, radio, television, etc.) led many Britons, not unreasonably, to wonder if “supernatural” phenomena like prophecies or telepathy might turn out to be explainable after all. In Barker’s Britain, one quarter of the population had reported believing in some form of the occult. Even Sigmund Freud, nervously protecting the reputation of psychoanalysis, refused to dismiss paranormal activities “in advance” as being “unscientific, or unworthy, or harmful.”
In physics, too, Knight points out, “the old order of time was collapsing” by midcentury, thanks to developments in relativity as well as quantum mechanics. For experts, time had become less predictable and mechanisms of causation less clear, both subatomically and cosmically. Barker had been formed, in other words, by “a society in which one set of certainties had yet to be eclipsed by another.”
But instead of rearranging itself around Barker’s research into precognition, the paradigm shifted away from him and snapped more firmly into place. The walls sprang up, and the questions that interested Barker came to be seen as illegitimate and unscientific. The Bureau he built with Fairley was not all that successful. Only about 3 percent of submissions ever came true, and in February 1968 a deadly fire at Shelton Hospital itself went unpredicted, to the unabashed glee of critics and satirists.
Barker’s supervisors grew skeptical and then embarrassed. As time went on, and the boundaries of the scientific paradigm in which we still live grew less permeable, occult phenomena were explained not by bending time, but with recourse to cognitive science and neurology. Premonitions became understood not in terms of extrasensory perception but simply misperception: the work of cognitive error or misfiring neurons rather than the supernatural.
The popular understanding of scientific revolutions still revolves around big ruptures and great scientists, the paradigm-defining concepts (like heliocentrism, gravity, or relativity) that transform how human beings think they understand the universe: We shift the frame to move forward. Yet there is just as much to be learned from the times when revolutions don’t occur, when scientific inquiry is defined not by asking thrilling new questions, but by the determination that some old questions will no longer be asked.
What’s so brilliant about Knight’s account, in the end, is the way it portrays a creative workaday researcher rather than a modern-day Newton or Einstein, a man aspiring to do normal science while the rules shifted around him; the way it conveys the rarely captured feeling of a paradigm closing in around you and your ideas, until it all fades to black.
Sam is woken from a dream, believing he has had a premonition of a man being suffocated by his car’s exhaust fumes. He tells Dean it felt the same as when he dreamt of their old house and of Jessica, which prompts Dean to question why he would be dreaming of a random man in Michigan.
Americans support recycling. We do too. But although some materials can be effectively recycled and safely made from recycled content, plastics cannot. Plastic recycling does not work and will never work. The United States in 2021 had a dismal recycling rate of about 5 percent for post-consumer plastic waste, down from a high of 9.5 percent in 2014, when the U.S. exported millions of tons of plastic waste to China and counted it as recycled—even though much of it wasn’t.
Recycling in general can be an effective way to reclaim natural material resources. The U.S.’s high recycling rate of paper, 68 percent, proves this point. The problem with recycling plastic lies not with the concept or process but with the material itself. The first problem is that there are thousands of different plastics, each with its own composition and characteristics. They all include different chemical additives and colorants that cannot be recycled together, making it impossible to sort the trillions of pieces of plastics into separate types for processing.
For example, polyethylene terephthalate (PET#1) bottles cannot be recycled with PET#1 clamshells, which are a different PET#1 material, and green PET#1 bottles cannot be recycled with clear PET#1 bottles (which is why South Korea has outlawed colored PET#1 bottles). High-density polyethylene (HDPE#2), polyvinyl chloride (PVC#3), low-density polyethylene (LDPE#4), polypropylene (PP#5), and polystyrene (PS#6) all must be separated for recycling.
Just one fast-food meal can involve many different types of single-use plastic, including PET#1, HDPE#2, LDPE#4, PP#5, and PS#6 cups, lids, clamshells, trays, bags, and cutlery, which cannot be recycled together. This is one of several reasons why plastic fast-food service items cannot be legitimately claimed as recyclable in the U.S.
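A toy sketch of why that mixed bag defeats sorting: grouping a hypothetical meal’s waste by resin code, product form and color fans a single bag out into many small, mutually incompatible streams. The item list is made up for illustration.

```python
from collections import defaultdict

# Hypothetical contents of one fast-food meal's waste: (item, resin, form, color)
meal_waste = [
    ("drink bottle", "PET#1", "bottle", "clear"),
    ("drink bottle", "PET#1", "bottle", "green"),
    ("salad clamshell", "PET#1", "clamshell", "clear"),
    ("sauce tub", "PP#5", "tub", "white"),
    ("carry bag", "LDPE#4", "film", "clear"),
    ("cutlery", "PS#6", "rigid", "white"),
    ("drink lid", "HDPE#2", "rigid", "white"),
]

# Each (resin, form, color) combination is effectively its own recycling stream.
streams = defaultdict(list)
for item, resin, form, color in meal_waste:
    streams[(resin, form, color)].append(item)

for (resin, form, color), items in sorted(streams.items()):
    print(f"{resin} {form} ({color}): {', '.join(items)}")
print(f"{len(streams)} separate streams from a single meal")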
Another problem is that the reprocessing of plastic waste—when possible at all—is wasteful. Plastic is flammable, and the risk of fires at plastic-recycling facilities affects neighboring communities—many of which are low-income communities or communities of color. Unlike metal and glass, plastics are not inert. Plastic products can include toxic additives and absorb chemicals, and are generally collected in curbside bins filled with possibly dangerous materials such as plastic pesticide containers.
According to a report published by the Canadian government, toxicity risks in recycled plastic prohibit “the vast majority of plastic products and packaging produced” from being recycled into food-grade packaging. Yet another problem is that plastic recycling is simply not economical. Recycled plastic costs more than new plastic because collecting, sorting, transporting, and reprocessing plastic waste is exorbitantly expensive. The petrochemical industry is rapidly expanding, which will further lower the cost of new plastic.
Despite this stark failure, the plastics industry has waged a decades-long campaign to perpetuate the myth that the material is recyclable. This campaign is reminiscent of the tobacco industry’s efforts to convince smokers that filtered cigarettes are healthier than unfiltered cigarettes. Conventional mechanical recycling, in which plastic waste is ground up and melted, has been around for many decades. Now the plastics industry is touting the benefits of so-called chemical recycling—in which plastic waste is broken down using high heat or more chemicals and turned into a low-quality fossil fuel.
In 2018, Dow Chemical claimed that the Renewlogy chemical-recycling plant in Salt Lake City was able to reprocess mixed plastic waste from Boise, Idaho, households through the “Hefty EnergyBag” program and turn it into diesel fuel. As Reuters exposed in a 2021 investigation, however, all the different types of plastic waste contaminated the pyrolysis process. Today, Boise burns its mixed plastic waste in cement kilns, resulting in climate-warming carbon emissions. This well-documented Renewlogy failure has not stopped the plastics industry from continuing to claim that chemical recycling works for “mixed plastics.”
Chemical recycling is not viable. It has failed and will continue to fail for the same down-to-earth, real-world reasons that the conventional mechanical recycling of plastics has consistently failed. Worse yet, its toxic emissions could cause new harm to our environment, climate, and health. We’re not making a case for despair. Just the opposite. We need the facts so that individuals and policy makers can take concrete action. Proven solutions to the U.S.’s plastic-waste and pollution problems exist and can be quickly replicated across the country.
These solutions include enacting bans on single-use plastic bags and unrecyclable single-use plastic food-service products, ensuring widespread access to water-refilling stations, installing dishwashing equipment in schools to allow students to eat food on real dishes rather than single-use plastics, and switching Meals on Wheels and other meal-delivery programs from disposables to reusable dishware. If the plastics industry is following the tobacco industry’s playbook, it may never admit to the failure of plastics recycling. Although we may not be able to stop them from trying to fool us, we can pass effective laws to make real progress.
Single-use-plastic bans reduce waste, save taxpayer money spent on disposal and cleanup, and reduce plastic pollution in the environment. Consumers can put pressure on companies to stop filling store shelves with single-use plastics by not buying them and instead choosing reusables and products in better packaging. And we should all keep recycling our paper, boxes, cans, and glass, because that actually works.
As part of a broader push on the part of the aviation industry to reduce its carbon footprint, Airbus has conducted the first-ever flight of its giant A380 jumbo jet using 100 percent biofuel. This is the third Airbus aircraft to fly using the sustainable fuel, made up primarily of cooking oil, as the company works to certify the technology by the end of the decade.
The aircraft featured in the groundbreaking flight is the Airbus ZEROe Demonstrator, an A380 adapted for use as a flying testbed and one the company plans to also use to test out hydrogen combustion jet engines.
For this particular outing, the aircraft was loaded up with 27 tonnes of Sustainable Aviation Fuel (SAF), made mostly with cooking oil and waste fats. This powered the A380’s Rolls-Royce Trent 900 engine across a three-hour test flight out of Blagnac Airport in Toulouse, France, on March 28, with a second flight then carrying it all the way to Nice Airport on March 29.
This demonstration follows successful flights of the Airbus A350 and the Airbus A319neo single-aisle plane using SAF last year. Using the biofuel to now power the world’s largest passenger jet marks another step forward for the testing program, as Airbus aspires to bring the world’s first zero-emission aircraft to market by 2035.
Airbus isn’t alone in pursuing cleaner aviation with the help of cooking oil. Way back in 2012, Boeing made the first biofuel-powered Pacific crossing in its 787 Dreamliner using a mix of regular jet fuel and fuel derived mainly from cooking oil. In 2014, it even opened a biofuel production plant in China to help ensure a consistent supply.
In emphasizing the potential of SAF, Airbus refers to the Waypoint 2050 report, put together by a collaboration of aviation experts to outline how the industry can achieve decarbonization by midway through the century. That report identifies the deployment of SAF as the single largest opportunity to meet these goals, with the potential to deliver between 53 and 71 percent of the required carbon reductions.
As it stands, all of Airbus’ aircraft are certified to fly with a 50 percent SAF-kerosene blend. Airbus aims to achieve certification for 100 percent SAF use by the end of the decade.
SAF is a biofuel used to power aircraft that has similar properties to conventional jet fuel but with a smaller carbon footprint. Depending on the feedstock and technologies used to produce it, SAF can reduce life cycle GHG emissions dramatically compared to conventional jet fuel. Some emerging SAF pathways even have a net-negative GHG footprint.
SAF’s lower carbon intensity makes it an important solution for reducing aviation GHGs, which make up 9%–12% of U.S. transportation GHG emissions, according to the U.S. Environmental Protection Agency.
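As a rough, purely illustrative calculation (the carbon intensities below are assumptions for the sake of the arithmetic, not figures from this article): if conventional jet fuel has a life cycle carbon intensity of roughly 89 gCO2e per MJ and a given SAF pathway delivers about 22 gCO2e per MJ, the relative reduction is

$$\frac{89 - 22}{89} \approx 0.75,$$

that is, on the order of a 75 percent cut in life cycle GHG emissions for that pathway, consistent with the dramatic reductions described above.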
An estimated 1 billion dry tons of biomass can be collected sustainably each year in the United States, enough to produce 50–60 billion gallons of low-carbon biofuels. These resources include:
Fats, oils, and greases
Wood mill waste
Municipal solid waste streams
Wet wastes (manures, wastewater treatment sludge)
Dedicated energy crops
This vast resource contains enough feedstock to meet the projected fuel demand of the U.S. aviation industry, supply additional volumes of drop-in low-carbon fuels for use in other modes of transportation, and produce high-value bioproducts and renewable chemicals.
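As a back-of-the-envelope check on those figures (an implied yield, not a number stated here), the estimate works out to roughly

$$\frac{50\text{--}60 \times 10^{9}\ \text{gallons}}{1 \times 10^{9}\ \text{dry tons}} \approx 50\text{--}60\ \text{gallons of low-carbon fuel per dry ton of biomass}.$$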
Growing, sourcing, and producing SAF from renewable and waste resources can create new economic opportunities in farming communities, improve the environment, and even boost aircraft performance.
By growing biomass crops for SAF production, American farmers can earn more money during off seasons by providing feedstocks to this new market, while also securing benefits for their farms like reducing nutrient losses and improving soil quality. Biomass crops can control erosion and improve water quality and quantity.
They can also increase biodiversity and store carbon in the soil, which can deliver on-farm benefits and environmental benefits across the country. Producing SAF from wet wastes, like manure and sewage sludge, reduces pollution pressure on watersheds, while also keeping potent methane gas—a key contributor to climate change—out of the atmosphere.
In March 2016, scientists in Japan published an extraordinary finding. After scooping up some sludge from outside a bottle recycling facility in Osaka, they discovered bacteria which had developed the ability to decompose, or “eat,” plastic.
The bacterium, Ideonella sakaiensis, was only able to eat a particular kind of plastic called PET, from which bottles are commonly made, and it could not do so nearly fast enough to mitigate the tens of millions of tons of plastic waste that enter the environment every year.
Still, this and a series of other breakthroughs in recent years mean it could one day be possible to build industrial-scale facilities where enzymes chomp on piles of landfill-bound plastic, or even to spray them on the mountains of plastic that accumulate in the ocean or in rivers.
These advances are timely. By vastly increasing our use of single-use plastics such as masks and takeaway boxes, the Covid-19 pandemic has focused attention on the world’s plastic waste crisis. Earth is on track to have as much plastic in the ocean as fish by weight by 2050, according to one estimate.
However, experts caution that large-scale commercial use of plastic-eating microorganisms is still years away, while their potential release in the environment, even if practical, could create more issues than it solves.
Overcoming an evolutionary barrier
The scientists working to find and develop plastic-eating organisms must contend with a basic reality: evolution. Microbes have had millions of years to learn how to biodegrade organic matter such as fruits and tree bark. They have had barely any time at all to learn to decompose plastics, which did not exist on Earth at any scale before roughly 1950.
“Seaweed has been around for hundreds of millions of years, so there is a variety of microbes and organisms that can break it down,” said Pierre-Yves Paslier, the co-founder of a British company, Notpla, that is using seaweed and other plants to make films and coatings that could replace some types of plastic packaging. By contrast plastic is very new, he said.
Still, recent discoveries of plastic-eating microorganisms show that evolution is already getting to work. A year after the 2016 discovery of Ideonella sakaiensis in Osaka, scientists reported a fungus able to degrade plastic at a waste disposal site in Islamabad, Pakistan. In 2017 a biology student at Reed College in Oregon analyzed samples from an oil site near her home in Houston, Texas, and found they contained plastic-eating bacteria. In March 2020, German scientists discovered strains of bacteria capable of degrading polyurethane plastic after collecting soil from a brittle plastic waste site in Leipzig.
To make any of these naturally occurring bacteria useful, they must be bioengineered to degrade plastic hundreds or thousands of times faster. Scientists have enjoyed some breakthroughs here, too. In 2018 scientists in the U.K. and U.S. modified bacteria so that they could begin breaking down plastic in a matter of days. In October 2020 the process was improved further by combining the two different plastic-eating enzymes that the bacteria produced into one “super enzyme.”
The first large-scale commercial applications are still years away, but within sight. Carbios, a French firm, could break ground in coming months on a demonstration plant that will be able to enzymatically biodegrade PET plastic.
This could help companies such as PepsiCo and Nestle, with whom Carbios is partnering, achieve longstanding goals of incorporating large amounts of recycled material back into their products. They have so far fallen short because there has never been a way to sufficiently break down plastic back into more fundamental materials. (Because of this, most plastic that is recycled is only ever used to make lower-quality items, such as carpets, and likely won’t ever be recycled again.)
“Without new technologies, it’s impossible for them to meet their goals. It’s just impossible,” said Martin Stephan, deputy CEO of Carbios.
Besides plastic-eating bacteria, some scientists have speculated that it may be possible to use nanomaterials to decompose plastic into water and carbon dioxide. One 2019 study in the journal Matter demonstrated the use of “magnetic spring-like carbon nanotubes” to degrade microplastics into carbon dioxide and water.
The challenges ahead
Even if these new technologies are one day deployed at scale, they would still face major limitations and could even be dangerous, experts caution.
Of the seven major commercial types of plastic, the plastic-eating enzyme at the heart of several of the recent breakthroughs has only been shown to digest one, PET. Other plastics, such as HDPE, used to make harder materials such as shampoo bottles or pipes, could prove more difficult to biodegrade using bacteria.
Nor are the bacteria able to degrade the plastic all the way back into its core elemental building blocks, including carbon and hydrogen. Instead, they typically break up the polymers out of which plastics are composed back into monomers, which are often useful only to create more plastics. The Carbios facility, for example, is intended only to convert PET plastic back into a feedstock for the creation of more plastics.
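For PET specifically, the net effect of the enzymatic route is hydrolysis of the polymer’s ester bonds back to its two monomers, terephthalic acid and ethylene glycol, which can then be re-polymerized into new PET. A simplified overall reaction, written per repeat unit of the polymer, is

$$\bigl(\mathrm{C_{10}H_{8}O_{4}}\bigr)_n + 2n\,\mathrm{H_{2}O} \;\longrightarrow\; n\,\mathrm{C_{8}H_{6}O_{4}}\ \text{(terephthalic acid)} + n\,\mathrm{C_{2}H_{6}O_{2}}\ \text{(ethylene glycol)}.$$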
Even if one day it becomes possible to mass produce bacteria that can be sprayed onto piles of plastic waste, such an approach could be dangerous. Biodegrading the polymers that comprise plastic risks releasing chemical additives that are normally stored up safely inside the un-degraded plastic.
Others point out that there are potential unknown side-effects of releasing genetically engineered microorganisms into nature. “Since most likely genetically engineered microorganisms would be needed, they cannot be released uncontrolled into the environment,” said Wolfgang Zimmerman, a scientist at the University of Leipzig who studies biocatalysis.
Similar issues constrain the potential use of nanomaterials. Nicole Grobert, a nanomaterials scientist at Oxford University, said that the tiny scales involved in nanotechnology mean that widespread use of new materials would “add to the problem in ways that could result in yet greater challenges.”
The best way to beat the plastic waste crisis, experts say, is by switching to reusable alternatives, such as Notpla’s seaweed-derived materials, ensuring that non-recyclable plastic waste ends up in a landfill rather than in the environment, and using biodegradable materials where possible.
Judith Enck, a former regional Environmental Protection Agency (EPA) administrator in the Obama administration and the president of Beyond Plastics, a non-profit based in Vermont, pointed to the gradual spread of bans on single-use plastics around the world, from India to China to the EU, U.K. and a number of U.S. states from New York to California.
These are signs of progress, she said, although more and tougher policies are needed. “We can’t wait for a big breakthrough.”