A NASA Scientist Explains Why The Weather is Becoming More Extreme

Across China and Western Europe in July, the amount of rain that might typically fall over several months to a year came down within a matter of days, triggering floods that swept entire homes off their foundations. In June, the usually mild regions of Southwest Canada and the US’s Pacific Northwest saw temperatures that rivaled highs in California’s Death Valley desert. The severe heat was enough to buckle roads and melt power cables.

Yesterday, a landmark United Nations report helped put those kinds of extreme events into context. By burning fossil fuels and releasing planet-heating greenhouse gases into the atmosphere, humans are fueling more dangerous weather. Researchers have been able to connect the dots between greenhouse gas emissions and climate change for decades.

But the new report showcases a big leap forward in climate science: being able to tie the climate crisis directly to extreme weather events like the June heatwave, which would have been “virtually impossible” without climate change according to recent studies.

The Verge spoke with Alex Ruane, one of the authors of the new report and a research physical scientist at the NASA Goddard Institute for Space Studies. He walks us through the phenomena that are supercharging extreme weather events. And he explains why scientists have gotten so much better at seeing the “human footprint” in each weather disaster.

This interview has been lightly edited for length and clarity.

The new United Nations report ties many changes in extreme weather to a more intense water cycle. What is the water cycle and how does it affect the weather?

The water cycle is basically the way that we track moisture moving through the climate system. So it includes everything from the oceans to the atmosphere, the clouds, ice, rivers, lakes, the groundwater, and the way that those things move and transfer moisture and water from place to place.

So when we’re talking about the intensification of the water cycle, we’re basically saying things are moving faster. Air is pulling the moisture out of the oceans and out of the land faster. It’s moving more moisture from place to place on the planet. And when it rains, it can come down hard.

The fundamental difference is that there is more energy in the system. There’s more heat. And as the temperature goes up, there is an overall increase in the amount of moisture that the air is trying to hold. So that means when a storm happens, there’s more moisture in the air to tap into for a big, heavy downpour.

It also means that when air moves over a region, it has the potential to suck more moisture out of the ground more rapidly. So the same phenomenon is leading both to more intense rainfall and flooding, and also to more stark drought conditions when they do occur.
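To put a rough number on the temperature-moisture relationship Ruane describes: the amount of water vapor that air can hold at saturation rises roughly exponentially with temperature, on the order of 6 to 7 percent per degree Celsius. The short Python sketch below is not part of the interview; it illustrates that scaling using the Magnus approximation for saturation vapor pressure, a standard textbook formula chosen here purely for illustration.

```python
import math

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    """Approximate saturation vapor pressure (hPa) via the Magnus formula."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# How much more moisture can saturated air hold per extra degree of warming?
for t in (10.0, 20.0, 30.0):
    e_now = saturation_vapor_pressure_hpa(t)
    e_warmer = saturation_vapor_pressure_hpa(t + 1.0)
    print(f"{t:4.1f} C -> {t + 1.0:4.1f} C: "
          f"{100.0 * (e_warmer / e_now - 1.0):.1f}% more water vapor at saturation")
```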

How are people affected by those changes?

So, I personally live in New York City. We are affected by the water cycle when, for example, a heavy downpour floods subway stations. It can lead to surface flooding in rivers and streets that can affect transportation.

Other parts of the world have different engagements with the water cycle. They may be concerned about snowfall or river floods that affect broad areas. And then of course huge parts of the world are concerned about drought. When we look at something like drought, it doesn’t just affect agriculture. It also affects ecosystems and urban parks. It affects water resources and infrastructure like power plants and roads and buildings.

So in all of these climate factors, we see that more than one sector is affected by these changes. We also see that if you take any specific thing that we care about, like agricultural fields, they are affected by more than just one type of climate change.

A specific set of climate conditions can lead to two extremes at the same time. So for example, heat and drought often go together because as conditions become drier, all of that sunshine, all of that energy, all of that heat goes into warming the air. That is a reinforcing cycle that can make hot and dry conditions even more extreme.

The big picture, as we’re seeing it, is that climate change is affecting all of the regions on Earth, with multiple types of climate changes already observed. And as the climate changes further, these shifts become more pronounced and widespread.

I’ve read that “weather whiplash” is becoming more common because of climate change — what is “weather whiplash”?

This idea that you can go from extreme to extreme very rapidly is giving society this sensation of a whiplash. This is part of the idea of an intensified water cycle. The water is moving faster, so when a wet condition comes it can be extremely wet. And then behind it could be a dry condition that can quickly get extremely dry.

That type of shift from wet to dry conditions is something that we explore and understand in our climate models, but the lived experience of it can be quite jarring — and not just uncomfortable, but a direct challenge for ecosystems and other things that we care about in society. They really are connected in many cases to the same types of phenomena, and this new report connects the dots between those phenomena and our human footprint.

How do scientists study how climate change affects extreme weather events?

There have been big steps forward in the methodologies and the scientific rigor of detection and attribution studies, which is another way of saying: understanding the human influence on these events.

The basic idea behind extreme event attribution is that we need to compare the likelihood that an event would have happened without human influences against the likelihood of that event happening, given that we have influenced the climate.

We are able to use observational records and our models to look at what conditions were like before there was strong human influence. We look at what we call a preindustrial condition, before the Industrial Revolution and land use changes led to greenhouse gas emissions and other climate changes.

If we can understand how likely events would have been before we had our climate influences, and then compare it against the likelihoods today with those climate change influences factored in, that allows us to identify the increased chance of those events because of our influence. It allows us to attribute a human component of those extreme events.
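The comparison Ruane describes is often summarized as a probability ratio: how much more likely the event is in today’s climate than in a counterfactual climate without human influence. The sketch below is not from the report; it only shows the bookkeeping, with made-up ensemble counts standing in for real attribution simulations.

```python
# Toy attribution calculation with made-up numbers (for illustration only).
# p1: probability of exceeding an extreme threshold in simulations of the
#     present-day ("factual") climate.
# p0: the same probability in counterfactual simulations without human
#     greenhouse gas and land-use influences.

factual_exceedances, factual_runs = 48, 1000            # hypothetical counts
counterfactual_exceedances, counterfactual_runs = 4, 1000

p1 = factual_exceedances / factual_runs
p0 = counterfactual_exceedances / counterfactual_runs

probability_ratio = p1 / p0                  # how many times more likely today
fraction_attributable_risk = 1.0 - p0 / p1   # share of the risk attributable to us

print(f"Probability ratio: {probability_ratio:.1f}x more likely")
print(f"Fraction of attributable risk: {fraction_attributable_risk:.0%}")
```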

How have researchers gotten so much better at attributing extreme weather events to climate change?

This is a really exciting, cutting-edge field right now. Methodological advances and several groups that have really taken this on as a major focus of their efforts have, in many ways, increased our ability and the speed at which we can make these types of connections. So that’s a big advantage.

Every year, the computational power is stronger in terms of what our models can do. We also use remote sensing to have a better set of observations in parts of the world where we don’t have weather stations. And we have models that are designed to integrate multiple types of observations into the same kind of physically coherent system, so that we can understand and fill in the gaps between those observations.

The other thing, of course, is that when you look at any single attribution study, you get a piece of the picture. But what the new report does is bring them all into one place, assess them together, and draw out larger messages. When you look at them all together, it is a much stronger and more compelling case than any single event. And this is what the scientific community is showing us: that these things are part of a larger pattern of change that we have influenced.

What should we expect in the future when it comes to extreme weather? And what might we need to do to adapt?

First of all, it’s not like drought is a new phenomenon. There are parts of the world that are dealing with these conditions every day of the year. What we’re seeing, however, is that the overall set of expected conditions is moving into uncharted territory.

I want to emphasize it’s not just the record levels that we care about. We also care about the frequency by which these extremes occur, how long they last, the seasonal timing of when things like the last frost occurs, and also the spatial extent of extreme events — so where are conditions going to happen in the future that are outside of the observed experience of the last several generations.

It is a set of challenges that we have to face in terms of how do we adapt or manage the risk of these changes. Also, how do we prepare knowing that they may come in combination or in overlapping ways, with more than one extreme event happening at the same time, or in the same season in a sequence, or potentially hitting different parts of the same market or commodities trade exchange or something like that.

We are facing a situation where we have more information about these regional risks, but also know that every increment of climate change that occurs makes these changes more prominent. That sounds scary, but it also gives us agency.

It gives us the ability to reduce these changes if we reduce emissions, and if we can eventually limit them to something like net zero — no net carbon emissions into the climate system. And in that sense, I still remain optimistic despite all this information that you’re seeing in the report about the changes that could come. The bottom line is we have the potential to reduce those changes, if we can get emissions under control.

Source: A NASA scientist explains why the weather is becoming more extreme – The Verge


How To Support Kids Who Are Anxious About Returning to School

Back-to-school jitters are normal every fall. But as families prepare for the beginning of the 2021–22 school year, these run-of-the-mill worries are colliding with fresh uncertainties about the ongoing COVID-19 pandemic, leaving kids and parents more anxious than usual.

Parents can use many strategies to help their children handle this challenging situation, according to Elizabeth Reichert, clinical associate professor of psychiatry and behavioral sciences at the Stanford University School of Medicine.

“I often talk to parents about being the lighthouse in their child’s storm, the light that shines steadily in a predictable rhythm and doesn’t waver no matter how big the storm is,” Reichert said. “Their job is to be that lighthouse.”

Reichert spoke with science writer Erin Digitale about how parents can help ensure that budding students of any age—from preschool to high school—are ready to handle anxieties as the school year begins.

Erin Digitale: What are some concerns kids may have?

Elizabeth Reichert: Lots of things come to mind. Many kids are going to a new school for the first time: Maybe they’re starting middle school, preschool, or kindergarten. Those are big transitions in nonpandemic times. With the pandemic, we might see more stress in kids of all ages.

Children may have concerns specific to the pandemic, such as the mandate that California students must wear masks while indoors at school. Kids who are more anxious may ask a lot of questions: “How am I going to keep my mask on all day? What if I want to take it off? What are the rules around it?” They may have increased fear of getting sick, too.

For some children and teens, it will be the first time they’ve been in close proximity to groups of people in a very long time, which brings up concerns about social interactions. For kids in middle and high school, social dynamics are especially important. They’ve just had a year and a half of navigating their social lives in the virtual world, and now they’re re-navigating how to manage social dynamics in person. Social interactions may feel more emotionally draining.

Also, not all kids are the same. With virtual learning, some children really struggled to stay engaged and motivated, grasp the material, and remain connected with friends and teachers. But there were other children, often those who were shyer or had difficulties in large-group settings, who thrived. For those more introverted kiddos, if they’ve been in a comfort zone at home, going back to large groups may be a more difficult transition.

ED: What signs might parents see that children are feeling anxious or otherwise struggling emotionally?

ER: This depends on the age of the child. Among little ones, parents may see increased tearfulness about going to preschool or day care, clingy behavior, or regression in milestones such as potty training. With school-aged children, parents may see resistance to going to school, oppositional behavior, and somatic complaints such as stomachaches or headaches.

That’s going to be really tricky to navigate because schools now have strict guidelines about not coming to school sick. For teens, there may also be school refusal and withdrawn behavior, such as staying isolated in their rooms, or more irritability and moodiness. Risky behavior such as substance abuse may also increase.

Parents can expect some distress and worry during the first few weeks after any transition—especially now, when children are being asked to do many new things all at once. That can affect energy levels and emotional reserves. But if there is a major change from a child’s or teen’s baseline behavior that doesn’t dissipate after a couple of weeks—such as a teenager who is withdrawing more and more and refusing to engage in typical activities, or a child who is progressively more distressed—that is a red flag. Parents may want to consider seeking help at that point.

ED: What proactive steps can parents take before school begins?

ER: Parents can start talking about going back, listening to what’s on their child’s mind, and engaging kids in the fun components of returning to school, such as picking out school supplies or a new T-shirt—something they can get excited about. They can also walk or drive by the school or visit its playground to build excitement. It may also be helpful to start practicing saying goodbye and leaving the house, encouraging independent play, and helping children adjust to being away from their parents.

If bedtimes have drifted later during summer vacation, parents can shift the family schedule during the week or two before school starts to get back in the habit of going to bed and waking up earlier. They can also reestablish other pre-pandemic routines that worked well for the family.

ED: If a child still feels distressed, what should parents do to help?

ER: If a child remains anxious, there are key steps parents can take. When our children are upset, our natural instinct is to remove the distress they’re experiencing. But the first step is not jumping straight to problem solving.

The first step is to listen, to create space to hear the kid’s concerns. Acknowledge what they’re feeling even if you don’t agree with it. The child should feel that they’re being heard, that it is OK to feel what they are feeling, and that they have space to talk to Mom or Dad.

Once parents have a better sense of what’s going on, they should try to work collaboratively with the child to figure out a plan. They can ask: What does the child feel like they’re capable of doing? What can Mom or Dad do to help? Who else could help—a friend, sibling, another family member? If, for example, a child refuses to go to school, parents can say, “How can we make it feel easier?” while also communicating to the child that, ultimately, it’s their job to go to school.

By creating small opportunities for getting through difficult situations and coping with their worries, children will build the confidence and the independence they need to feel more in control and less afraid. It’s important to remember that children are resilient and adaptable, and, for many, after a period of transition, they will find their groove.

Parents can also enlist the help of the school and teacher. Teachers know this is a big transition for kids, and they are gearing up to help.

ED: Parents feel anxiety about this transition, too. What healthy coping strategies can they use to make sure they manage their own stress instead of expressing it in ways that may increase their child’s distress?

ER: Parents are the biggest models for our kids. If our kids see us really anxious about something, they’re going to feed off that. Parents need to be mindful of their own emotions so they can self-regulate and become present for their child.

We want to be steady sources of support for our children. It’s also fine to say we feel worried or we don’t know the answer, because that shows it’s OK to feel those things. The problem is when our worries get too big, when we’re no longer calm, or we are saying and doing things we don’t want to model for our children.

It’s essential to find moments for self-care. Taking even just a couple of deep breaths in the moment, taking a bathroom break, getting a drink of water, or doing other things that create a brief transition for yourself, a moment to regulate your feelings, is helpful. Think back to what worked for you before the pandemic, and try getting even a small inkling of that back, such as five minutes a day of moving your body if exercise helps you. This is not only important for you as a parent, but it also shows your child that you have strategies to take care of yourself.

We can also invite our children into healthy coping activities with us: A parent can say to a school-aged or older child, “I’m feeling pretty stressed about this, and for me, going for a walk helps me clear my head. Do you want to go for a walk with me?” Parents and young kids can blow bubbles together—small kids enjoy it, and you can talk about how big breaths for bubbles help everyone feel better.

If they need more help, parents can seek resources from the teachers and support staff at their child’s school, from their pediatrician, and from online resources at the Stanford Parenting Center at Stanford Children’s Health.

Source: How to Support Kids Who Are Anxious About Returning…


AI Breakthrough Could Spark Medical Revolution

Artificial intelligence has been used to predict the structures of almost every protein made by the human body. The development could help supercharge the discovery of new drugs to treat disease, alongside other applications. Proteins are essential building blocks of living organisms; every cell we have in us is packed with them.

Understanding the shapes of proteins is critical for advancing medicine, but until now, only a fraction of these have been worked out. Researchers used a program called AlphaFold to predict the structures of 350,000 proteins belonging to humans and other organisms. The instructions for making human proteins are contained in our genomes – the DNA contained in the nuclei of human cells.

There are around 20,000 of these proteins expressed by the human genome. Collectively, biologists refer to this full complement as the “proteome”. Commenting on the results from AlphaFold, Dr Demis Hassabis, chief executive and co-founder of artificial intelligence company DeepMind, said: “We believe it’s the most complete and accurate picture of the human proteome to date.”

“We believe this work represents the most significant contribution AI has made to advancing the state of scientific knowledge to date. And I think it’s a great illustration and example of the kind of benefits AI can bring to society.” He added: “We’re just so excited to see what the community is going to do with this.”

Proteins are made up of chains of smaller building blocks called amino acids. These chains fold in myriad different ways, forming a unique 3D shape. A protein’s shape determines its function in the human body. The 350,000 protein structures predicted by AlphaFold include not only the 20,000 contained in the human proteome, but also those of so-called model organisms used in scientific research, such as E. coli, yeast, the fruit fly and the mouse.

This giant leap in capability is described by DeepMind researchers and a team from the European Molecular Biology Laboratory (EMBL) in the prestigious journal Nature.  AlphaFold was able to make a confident prediction of the structural positions for 58% of the amino acids in the human proteome.

The positions of 35.7% were predicted with a very high degree of confidence – double the number confirmed by experiments. Traditional techniques to work out protein structures include X-ray crystallography, cryogenic electron microscopy (Cryo-EM) and others. But none of these is easy to do: “It takes a huge amount of money and resources to do structures,” Prof John McGeehan, a structural biologist at the University of Portsmouth, told BBC News.

Therefore, the 3D shapes are often determined as part of targeted scientific investigations, but no project until now had systematically determined structures for all the proteins made by the body. In fact, just 17% of the proteome is covered by a structure confirmed experimentally. Commenting on the predictions from AlphaFold, Prof McGeehan said: “It’s just the speed – the fact that it was taking us six months per structure and now it takes a couple of minutes. We couldn’t really have predicted that would happen so fast.”

“When we first sent our seven sequences to the DeepMind team, two of those we already had the experimental structures for. So we were able to test those when they came back. It was one of those moments – to be honest – where the hairs stood up on the back of my neck because the structures [AlphaFold] produced were identical.”

Prof Edith Heard, from EMBL, said: “This will be transformative for our understanding of how life works. That’s because proteins represent the fundamental building blocks from which living organisms are made.” “The applications are limited only by our understanding.” Those applications we can envisage now include developing new drugs and treatments for disease, designing future crops that can resist climate change, and enzymes that can break down the plastic that pervades the environment.

Prof McGeehan’s group is already using AlphaFold’s data to help develop faster enzymes for degrading plastic. He said the program had provided predictions for proteins of interest whose structures could not be determined experimentally – helping accelerate their project by “multiple years”.

Dr Ewan Birney, director of EMBL’s European Bioinformatics Institute, said the AlphaFold predicted structures were “one of the most important datasets since the mapping of the human genome”. DeepMind has teamed up with EMBL to make the AlphaFold code and protein structure predictions openly available to the global scientific community.
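As a practical illustration of that open access, the snippet below sketches how one might fetch a single predicted structure from the AlphaFold Protein Structure Database by UniProt accession. The URL pattern and the "_v4" file-version suffix are assumptions about how the database serves its files and may need adjusting; this is not DeepMind’s or EMBL’s own code.

```python
import urllib.request

# Hypothetical example: fetch the predicted structure for human hemoglobin
# subunit alpha (UniProt P69905). The URL pattern and "_v4" version suffix
# are assumptions about how the AlphaFold database serves files and may
# need adjusting.
accession = "P69905"
url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

with open(f"AF-{accession}.pdb", "w") as handle:
    handle.write(pdb_text)

print(f"Downloaded {len(pdb_text.splitlines())} PDB lines for {accession}")
```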

Dr Hassabis said DeepMind planned to vastly expand the coverage in the database to almost every sequenced protein known to science – over 100 million structures.

By: Paul Rincon / Science editor, BBC News website

Source: AI breakthrough could spark medical revolution – BBC News


Dubai Is Using Laser-Beam-Shooting Drones to Shock Rain Out of the Sky

The National Center of Meteorology in Dubai, United Arab Emirates, has found a new way to make it rain. It’s using laser-beam-shooting drones to generate rainfall artificially.

Last week the country’s weather service posted two videos offering proof of the heavy downpours in Dubai’s streets.

Here’s how it works: The drones shoot laser beams into the clouds, charging them with electricity. The charge prompts precipitation by forcing water droplets together to create bigger raindrops, essentially electrifying the air to create rain.

This past March, the BBC reported that the UAE was looking to test the drone technology, which it developed in collaboration with the University of Reading in the UK.

Artificially generated rain is crucial because Dubai only gets an average of 4 inches of rainfall annually. This makes farming difficult and forces the country to import more than 80% of its food.

The efforts are part of the country’s ongoing “quest to ensure water security” since the 1990s through the UAE Research Program for Rain Enhancement, according to the center.

Water security remains one of the UAE’s “main future challenges” as the country relies on groundwater for two-thirds of its water needs, according to the National Center of Meteorology website. The arid nation faces low rainfall levels, high temperatures and high evaporation rates of surface water, the center says. Paired with increased demand due to high population growth, this puts the UAE in a precarious water security situation, according to the center.

But rain enhancement may “offer a viable, cost-effective supplement to existing water supplies,” especially amid diminishing water resources across the globe, the center said. “While most of us take free water for granted, we must remember that it is a precious and finite resource,” according to the center.

Cloud seeding projects may also be improving the UAE’s air quality in recent years, according to a 2021 study led by American University of Sharjah. So far, rain enhancement projects have centered on the country’s mountainous north-east regions, where cumulus clouds gather in the summer, according to the National Center of Meteorology website.

There have been successes in the U.S., as well as in China, India, and Thailand. Long-term cloud seeding in the mountains of Nevada has increased snowpack by 10% or more each year, according to research published by the American Meteorological Society. A 10-year cloud seeding experiment in Wyoming resulted in 5-10% increases in snowpack, according to the State of Wyoming.

The practice is used in at least eight states in the western U.S. and in dozens of countries, Scientific American reported. The UAE is one of the first countries in the Arab Gulf region to use cloud seeding technology, according to the National Center of Meteorology website.

Rain enhancement also doesn’t help with the country’s sweltering temperatures. On June 6, for example, Dubai recorded a high of 125 degrees Fahrenheit.

Dubai’s rainmaking technology is not entirely dissimilar to cloud seeding, which has been used in the US since 1923 to combat prolonged periods of drought. Cloud seeding requires crushed-up silver iodide, a chemical used in photography, to help create water clusters in the air.

Forbes reported that the UAE has invested in nine rain-enhancement projects over the past few years, which cost around $15 million in total. The bulk of those projects have involved traditional cloud-seeding techniques.

Critics of the drone technology worry that it could unintentionally cause massive flooding. And they also worry about such technology being privatized, Forbes reported.

In the US, innovative solutions to the extreme effects of the climate crisis have been explored. Billionaire Bill Gates is backing the development of a sunlight-dimming technology that might help to achieve a global cooling effect by reflecting the sun’s rays from the planet’s atmosphere.

In the meantime, more than 80 wildfires are blazing across the US, devastating communities and destroying homes. On July 13, Death Valley in California recorded a temperature high of 128 degrees Fahrenheit, the hottest temperature recorded on Earth since 2017.


Source: Dubai Is Using Laser-Beam-Shooting Drones to Shock Rain Out of the Sky


Social Psychologist Amy Cuddy on How to Find Power and Confidence in a Crisis

In times of crisis, don’t look to the past or the future for answers. That’s according to social psychologist and behavioral science expert Amy Cuddy. The Harvard University lecturer and author of Presence: Bringing Your Boldest Self to Your Biggest Challenges explained in a virtual keynote to Inc. 5000 honorees this week that productivity-sapping emotions such as anxiety, dread, and distraction come from thinking too much about the past and future.

Staying present, Cuddy explains, can help you approach difficult situations with composure and find solutions with confidence. “It’s the power to bring yourself forward to express your most confident, competent, trustworthy, decent, awesome self in stressful situations,” Cuddy says. “It is the ability to control your own states, your own behaviors, and, to some extent, your own outcomes.”

Here are three of Cuddy’s tips for how to make the most of a bad situation.

View challenges as opportunities.

When presented with a challenge, Cuddy advises reframing the situation. If you feel nervous about approaching someone, for example, think of them as a collaborator or an ally rather than as a competitor. Changing viewpoints can make you feel more in control of coming up with a solution to your problems.

“When we feel powerful, it leads us to act,” Cuddy says. “When we feel powerless, we don’t act.”

Don’t fake it until you make it.

Faking it until you make it works in some situations, but not when it comes to relationships. The best relationships are built on trust and authenticity–not on overstating your abilities.

“Unfortunately, we often make the mistake in work situations of showing off our skills and our strengths before showing that we are trustworthy,” Cuddy says. “When we neglect that piece, this other piece–the strength, the competence, the skills–they just don’t matter, especially for leaders who really need to inspire people to do their best work.”

Avoid panicking at all costs.

When presented with something that makes you panic, Cuddy advises business owners to think of a time when you felt your best, whether it was finishing your first successful fundraising meeting, landing your biggest client, or even a personal event such as a wedding. Contrasting the panic with a good feeling can help you reset your approach to the situation and feel more present.

“When we feel present, we’re not doubting who we are [and] we believe in ourselves,” Cuddy says. “And when we believe in ourselves, we believe in what we’re selling.”

Source: Social Psychologist Amy Cuddy on How to Find Power and Confidence in a Crisis | Inc.com


Climate Crisis: Scientists Spot Warning Signs of Gulf Stream Collapse

Climate scientists have detected warning signs of the collapse of the Gulf Stream, one of the planet’s main potential tipping points.

The research found “an almost complete loss of stability over the last century” of the currents that researchers call the Atlantic meridional overturning circulation (AMOC). The currents are already at their slowest point in at least 1,600 years, but the new analysis shows they may be nearing a shutdown.

Such an event would have catastrophic consequences around the world, severely disrupting the rains that billions of people depend on for food in India, South America and West Africa; increasing storms and lowering temperatures in Europe; and pushing up the sea level off eastern North America. It would also further endanger the Amazon rainforest and Antarctic ice sheets.

The complexity of the AMOC system and uncertainty over levels of future global heating make it impossible to forecast the date of any collapse for now. It could be within a decade or two, or several centuries away. But the colossal impact it would have means it must never be allowed to happen, the scientists said.

“The signs of destabilisation being visible already is something that I wouldn’t have expected and that I find scary,” said Niklas Boers, from the Potsdam Institute for Climate Impact Research in Germany, who did the research. “It’s something you just can’t [allow to] happen.”

It is not known what level of CO2 would trigger an AMOC collapse, he said. “So the only thing to do is keep emissions as low as possible. The likelihood of this extremely high-impact event happening increases with every gram of CO2 that we put into the atmosphere”.

Scientists are increasingly concerned about tipping points – large, fast and irreversible changes to the climate. Boers and his colleagues reported in May that a significant part of the Greenland ice sheet is on the brink, threatening a big rise in global sea level. Others have shown recently that the Amazon rainforest is now emitting more CO2 than it absorbs, and that the 2020 Siberian heatwave led to worrying releases of methane.

The world may already have crossed a series of tipping points, according to a 2019 analysis, resulting in “an existential threat to civilization”. A major report from the Intergovernmental Panel on Climate Change, due on Monday, is expected to set out the worsening state of the climate crisis.

Boers’ research, published in the journal Nature Climate Change, is titled “Observation-based early-warning signals for a collapse of the AMOC”. Ice-core and other data from the last 100,000 years show the AMOC has two states: a fast, strong one, as seen over recent millennia, and a slow, weak one. The data shows rising temperatures can make the AMOC switch abruptly between states over one to five decades.

The AMOC is driven by dense, salty seawater sinking into the Arctic Ocean, but freshwater from the melting of Greenland’s ice sheet is slowing the process down earlier than climate models suggested.

Boers used the analogy of a chair to explain how changes in ocean temperature and salinity can reveal the AMOC’s instability. Pushing a chair alters its position, but does not affect its stability if all four legs remain on the floor. Tilting the chair changes both its position and stability.

Eight independently measured datasets of temperature and salinity going back as far as 150 years enabled Boers to show that global heating is indeed increasing the instability of the currents, not just changing their flow pattern.

The analysis concluded: “This decline [of the AMOC in recent decades] may be associated with an almost complete loss of stability over the course of the last century, and the AMOC could be close to a critical transition to its weak circulation mode.”
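The “loss of stability” in that conclusion is typically diagnosed through early-warning indicators: as a system approaches a critical transition, its fluctuations become larger and more sluggish, which shows up as rising variance and rising lag-1 autocorrelation in a sliding window (so-called critical slowing down). The sketch below is a generic illustration of that calculation on a synthetic time series, not Boers’ actual analysis or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an AMOC "fingerprint" index: noise whose memory
# (autocorrelation) slowly increases over time, mimicking a loss of stability.
n = 1500
memory = np.linspace(0.2, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = memory[t] * x[t - 1] + rng.normal(scale=0.5)

def sliding_indicators(series, window=300):
    """Rolling variance and lag-1 autocorrelation: classic early-warning signals."""
    variances, autocorrs = [], []
    for start in range(len(series) - window):
        w = series[start:start + window]
        variances.append(np.var(w))
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(variances), np.array(autocorrs)

var_series, ac1_series = sliding_indicators(x)
print(f"variance: {var_series[0]:.2f} -> {var_series[-1]:.2f}")
print(f"lag-1 autocorrelation: {ac1_series[0]:.2f} -> {ac1_series[-1]:.2f}")
```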

Levke Caesar, at Maynooth University in Ireland, who was not involved in the research, said: “The study method cannot give us an exact timing of a possible collapse, but the analysis presents evidence that the AMOC has already lost stability, which I take as a warning that we might be closer to an AMOC tipping than we think.”

David Thornalley, at University College London in the UK, whose work showed the AMOC is at its weakest point in 1,600 years, said: “These signs of decreasing stability are concerning. But we still don’t know if a collapse will occur, or how close we might be to it.”

By: Environment editor

Source: Climate crisis: Scientists spot warning signs of Gulf Stream collapse | Climate change | The Guardian


Why Vaccinated People Are Getting ‘Breakthrough’ Infections

A wedding in Oklahoma leads to 15 vaccinated guests becoming infected with the coronavirus. Raucous Fourth of July celebrations disperse the virus from Provincetown, Mass., to dozens of places across the country, sometimes carried by fully vaccinated celebrants.

As the Delta variant surges across the nation, reports of infections in vaccinated people have become increasingly frequent — including, most recently, among at least six Texas Democrats, a White House aide and an aide to Speaker Nancy Pelosi.

The highly contagious variant, combined with a lagging vaccination campaign and the near absence of preventive restrictions, is fueling a rapid rise in cases in all states, and hospitalizations in nearly all of them. It now accounts for about 83 percent of infections diagnosed in the United States.

But as worrying as the trend may seem, breakthrough infections — those occurring in vaccinated people — are still relatively uncommon, experts said, and those that cause serious illness, hospitalization or death even more so. More than 97 percent of people hospitalized for Covid-19 are unvaccinated.

“The takeaway message remains, if you’re vaccinated, you are protected,” said Dr. Celine Gounder, an infectious disease specialist at Bellevue Hospital Center in New York. “You are not going to end up with severe disease, hospitalization or death.”

Reports of breakthrough infections should not be taken to mean that the vaccines do not work, Dr. Anthony S. Fauci, the Biden administration’s top pandemic adviser, said on Thursday at a news briefing.

“By no means does that mean that you’re dealing with an unsuccessful vaccine,” he said. “The success of the vaccine is based on the prevention of illness.”

Still, vaccinated people can come down with infections, overwhelmingly asymptomatic or mild. That may come as a surprise to many vaccinated Americans, who often assume that they are completely shielded from the virus. And breakthrough infections raise the possibility, as yet unresolved, that vaccinated people may spread the virus to others.

Given the upwelling of virus across much of the country, some scientists say it is time for vaccinated people to consider wearing masks indoors and in crowded spaces like shopping malls or concert halls — a recommendation that goes beyond current guidelines from the Centers for Disease Control and Prevention, which recommends masking only for unvaccinated people.

The agency does not plan to change its guidelines unless there is a significant change in the science, said a federal official speaking on condition of anonymity because he was not authorized to speak on the matter.

The agency’s guidance already gives local leaders latitude to adjust their policies based on rates of transmission in their communities, he added. Citing the rise of the Delta variant, health officials in several California jurisdictions are already urging a return to indoor masking; Los Angeles County is requiring it.

“Seatbelts reduce risk, but we still need to drive carefully,” said Dr. Scott Dryden-Peterson, an infectious disease physician and epidemiologist at Brigham & Women’s Hospital in Boston. “We’re still trying to figure out what is ‘drive carefully’ in the Delta era, and what we should be doing.”

The uncertainty about Delta results in part from how it differs from previous versions of the coronavirus. Although its mode of transmission is the same — it is inhaled, usually in indoor spaces — Delta is thought to be about twice as contagious as the original virus.

Significantly, early evidence also suggests that people infected with the Delta variant may carry roughly a thousandfold more virus than those infected with the original virus. While that does not seem to mean that they get sicker, it does probably mean that they are more contagious and for longer.

Dose also matters: A vaccinated person exposed to a low dose of the coronavirus may never become infected, or not noticeably so. A vaccinated person exposed to extremely high viral loads of the Delta variant is more likely to find his or her immune defenses overwhelmed.

The problem grows worse as community transmission rates rise, because exposures in dose and number will increase. Vaccination rates in the country have stalled, with less than half of Americans fully immunized, giving the virus plenty of room to spread.

Unvaccinated people “are not, for the most part, taking precautions, and that’s what’s driving it for everybody,” said Dr. Eric J. Rubin, the editor in chief of the New England Journal of Medicine. “We’re all susceptible to whatever anyone’s behavior is in this epidemic.”

Dr. Gounder likened the amount of protection offered by the vaccines to a golf umbrella that keeps people dry in a rainstorm. “But if you’re out in a hurricane, you’re still going to get wet,” she said. “That’s kind of the situation that the Delta variant has created, where there’s still a lot of community spread.”

For the average vaccinated person, a breakthrough infection is likely to be inconsequential, causing few to no symptoms. But there is concern among scientists that a few vaccinated people who become infected may go on to develop long Covid, a poorly understood constellation of symptoms that persists after the active infection is resolved.

Much has been made of Delta’s ability to sidestep immune defenses. In fact, all of the existing vaccines seem able to prevent serious illness and death from the variant. In laboratory studies, Delta actually has proved to be a milder threat than Beta, the variant first identified in South Africa.

Whether a vaccinated person ever becomes infected may depend on how high antibodies spiked after vaccination, how potent those antibodies are against the variant, and whether the level of antibodies in the person’s blood has waned since immunization.

In any case, immune defenses primed by the vaccines should recognize the virus soon after infection and destroy it before significant damage occurs.

“That is what explains why people do get infected and why people don’t get seriously ill,” said Michel C. Nussenzweig, an immunologist at Rockefeller University in New York. “It’s nearly unavoidable, unless you’re going to give people very frequent boosters.”

There is limited evidence beyond anecdotal reports to indicate whether breakthrough infections with the Delta variant are more common or more likely to fan out to other people. The C.D.C. has recorded about 5,500 hospitalizations and deaths in vaccinated people, but it is not tracking milder breakthrough infections.

Additional data is emerging from the Covid-19 Sports and Society Workgroup, a coalition of professional sports leagues that is working closely with the C.D.C. Sports teams in the group are testing more than 10,000 people at least daily and sequencing all infections, according to Dr. Robby Sikka, a physician who worked with the N.B.A.’s Minnesota Timberwolves.

Breakthrough infections in the leagues seem to be more common with the Delta variant than with Alpha, the variant first identified in Britain, he said. As would be predicted, the vaccines cut down the severity and duration of illness significantly, with players returning less than two weeks after becoming infected, compared with nearly three weeks earlier in the pandemic.

But while they are infected, the players carry very high amounts of virus for seven to 10 days, compared with two or three days in those infected with Alpha, Dr. Sikka said. Infected players are required to quarantine, so the project has not been able to track whether they spread the virus to others — but it’s likely that they would, he added.

“If they’re put just willy-nilly back into society, I think you’re going to have spread from vaccinated individuals,” he added. “They don’t even recognize they have Covid because they think they’re vaccinated.”

Elyse Freitas was shocked to discover that 15 vaccinated people became infected at her wedding. Dr. Freitas, 34, a biologist at the University of Oklahoma, said she had been very cautious throughout the pandemic, and had already postponed her wedding once. But after much deliberation, she celebrated the wedding indoors on July 10.

Based on the symptoms, Dr. Freitas believes that the initial infection was at a bachelorette party two days before the wedding, when a dozen vaccinated people went unmasked to bars in downtown Oklahoma City; seven of them later tested positive. Eventually, 17 guests at the wedding became infected, nearly all with mild symptoms.

“In hindsight, I should have paid more attention to the vaccination rates in Oklahoma and the emergence of the Delta variant and adjusted my plans accordingly,” she said.

An outbreak in Provincetown, Mass., illustrates how quickly a cluster can grow, given the right conditions. During its famed Fourth of July celebrations, the small town hosted more than 60,000 unmasked revelers, dancing and mingling in crowded bars and house parties.

The crowds this year were much larger than usual, said Adam Hunt, 55, an advertising executive who has lived in Provincetown part time for about 20 years. But the bars and clubs didn’t open until they were allowed to, Mr. Hunt noted: “We thought we were doing the right thing. We thought we were OK.”

Mr. Hunt did not become infected with the virus, but several of his vaccinated friends who had flown in from places as far as Hawaii and Alabama tested positive after their return. In all, the cluster has grown to at least 256 cases — including 66 visitors from other states — about two-thirds in vaccinated people.

“I did not expect that people who were vaccinated would be becoming positive at the rate that they were,” said Steve Katsurinis, chair of the Provincetown Board of Health. Provincetown has moved swiftly to contain the outbreak, reinstating a mask advisory and stepping up testing. It is conducting 250 tests a day, compared with about eight a day before July 1, Mr. Katsurinis said.

Health officials should also help the public understand that vaccines are doing what they are supposed to — preventing people from getting seriously ill, said Kristen Panthagani, a geneticist at Baylor College of Medicine who runs a blog explaining complex scientific concepts.

“Vaccine efficacy isn’t 100 percent — it never is,” she said. “We shouldn’t expect Covid vaccines to be perfect, either. That’s too high an expectation.”


Source: Why Vaccinated People Are Getting ‘Breakthrough’ Infections – The New York Times


Scientists Predict Early Covid-19 Symptoms Using AI (And An App)

Combining self-reported symptoms with Artificial Intelligence can predict the early symptoms of Covid-19, according to research led by scientists at King’s College London. Previous studies have predicted whether people will develop Covid using symptoms from the peak of viral infection, which can be less relevant over time — fever is common during later phases, for instance.

The new study reveals which symptoms of infection can be used for early detection of the disease. Published in the journal The Lancet Digital Health, the research used data collected via the ZOE COVID Symptom Study smartphone app. Each app user logged any symptoms that they experienced over the first 3 days, plus the result of a subsequent PCR test for Coronavirus and personal information like age and sex.

Researchers used those self-reported data from the app to assess three models for predicting Covid in advance, which involved using one dataset to train a given model before its performance was tested on another set. The training set included almost 183,000 people who reported symptoms from 16 October to 30 November 2020, while the test dataset consisted of more than 15,000 participants with data between 16 October and 30 November.

The three models were: 1) a statistical method called logistic regression; 2) a National Health Service (NHS) algorithm; and 3) an Artificial Intelligence (AI) approach known as a ‘hierarchical Gaussian process’. Of the three prediction models, the AI approach performed the best, so it was then used to identify patterns in the data. The AI prediction model was sensitive enough to find which symptoms were most relevant in various groups of people.

The subgroups were occupation (healthcare professional versus non-healthcare), age group (16-39, 40-59, 60-79, 80+ years old), sex (male or female), Body-Mass Index (BMI as underweight, normal, overweight/obese) and several well-known health conditions. According to results produced by the AI model, loss of smell was the most relevant early symptom among both healthcare and non-healthcare workers, and the two groups also reported chest pain and a persistent cough.

The symptoms varied among age groups: loss of smell had less relevance to people over 60 years old, for instance, and seemed irrelevant to those over 80 — highlighting age as a key factor in early Covid detection. There was no big difference between sexes for their reported symptoms, but shortness of breath, fatigue and chills/shivers were more relevant signs for men than for women.

No particular patterns were found in BMI subgroups either and, in terms of health conditions, heart disease was most relevant for predicting Covid. As the study’s symptoms were from 2020, its results might only apply to the original strain of the SARS-CoV-2 virus and Alpha variant – the two variants with highest prevalence in the UK that year.
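For readers curious what the simplest of the three modelling approaches described above looks like in practice, here is a minimal sketch of a logistic regression fitted to self-reported symptom indicators. The feature names, data, and labels are invented for illustration and are unrelated to the ZOE dataset or the study’s actual models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Invented example: binary indicators for a few early symptoms plus age,
# with a synthetic rule generating the "tested positive" label.
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),    # loss of smell
    rng.integers(0, 2, n),    # persistent cough
    rng.integers(0, 2, n),    # chest pain
    rng.integers(16, 90, n),  # age in years
])
logits = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] - 2.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out ROC AUC: {roc_auc_score(y_test, probs):.2f}")
```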

The predictions wouldn’t have been possible without the self-reported data from the ZOE COVID Symptom Study project, a non-profit collaboration between scientists and personalized health company ZOE, which was co-founded by genetic epidemiologist Tim Spector of King’s College London.

The project’s website keeps an up-to-date ranking of the top 5 Covid symptoms reported by British people who are now fully vaccinated (with a Pfizer or AstraZeneca vaccine), have so far received one of the two doses, or are still unvaccinated. Those top 5 symptoms provide a useful resource if you want to know which signs are common for the most prevalent variant circulating in a population — currently Delta – as distinct variants can be associated with different symptoms.

When a new variant emerges in future, you could pass some personal information (such as age) to the AI prediction model so it shows the early symptoms most relevant to you — and, if you developed those symptoms, take a Covid test and perhaps self-isolate before you transmit the virus to other people. As the new study concludes, such steps would help alleviate stress on public health services:

“Early detection of SARS-CoV-2-infected individuals is crucial to contain the spread of the COVID-19 pandemic and efficiently allocate medical resources.”

I’m a science communicator and award-winning journalist with a PhD in evolutionary biology. I specialize in explaining scientific concepts that appear in popular culture and mainly write about health, nature and technology. I spent several years at BBC Science Focus magazine, running the features section and writing about everything from gay genes and internet memes to the science of death and origin of life. I’ve also contributed to Scientific American and Men’s Health. My latest book is ’50 Biology Ideas You Really Need to Know’.

Source: Scientists Predict Early Covid-19 Symptoms Using AI (And An App)


Critics:

Healthcare providers and researchers are faced with an exponentially increasing volume of information about COVID-19, which makes it difficult to derive insights that can inform treatment. In response, AWS launched CORD-19 Search, a new search website powered by machine learning, that can help researchers quickly and easily search for research papers and documents and answer questions like “When is the salivary viral load highest for COVID-19?”

Built on the Allen Institute for AI’s CORD-19 open research dataset of more than 128,000 research papers and other materials, this machine learning solution can extract relevant medical information from unstructured text and delivers robust natural-language query capabilities, helping to accelerate the pace of discovery.
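As a generic illustration of that kind of literature search (not the CORD-19 Search implementation itself), the sketch below ranks a few made-up abstracts against a natural-language query using TF-IDF similarity; a production system would use far larger indexes and stronger language models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up stand-ins for paper abstracts; a real system would index thousands.
abstracts = [
    "Salivary viral load of SARS-CoV-2 peaks during the first week of symptoms.",
    "Chest CT findings in patients hospitalized with severe COVID-19 pneumonia.",
    "Effectiveness of mask mandates on community transmission of respiratory viruses.",
]
query = "When is the salivary viral load highest for COVID-19?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(abstracts)
query_vector = vectorizer.transform([query])

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```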

In the field of medical imaging, meanwhile, researchers are using machine learning to help recognize patterns in images, enhancing the ability of radiologists to indicate the probability of disease and diagnose it earlier.

UC San Diego Health has engineered a new method to diagnose pneumonia earlier, a condition associated with severe COVID-19. This early detection helps doctors quickly triage patients to the appropriate level of care even before a COVID-19 diagnosis is confirmed. Trained with 22,000 notations by human radiologists, the machine learning algorithm overlays x-rays with colour-coded maps that indicate pneumonia probability. With credits donated from the AWS Diagnostic Development Initiative, these methods have now been deployed to every chest x-ray and CT scan throughout UC San Diego Health in a clinical research study.
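The colour-coded overlay idea itself is straightforward to illustrate. The sketch below blends a synthetic per-pixel probability map over a synthetic grayscale image with matplotlib; it is a visualization demo only, with invented data, and is not UC San Diego Health’s model or imagery.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Synthetic stand-ins: a noisy grayscale "scan" and a per-pixel probability map
# with one high-probability region (both invented for this demo).
image = rng.normal(0.5, 0.1, size=(256, 256)).clip(0, 1)
yy, xx = np.mgrid[0:256, 0:256]
probability = np.exp(-(((xx - 90) ** 2 + (yy - 160) ** 2) / (2 * 40.0 ** 2)))

fig, ax = plt.subplots(figsize=(4, 4))
ax.imshow(image, cmap="gray")
ax.imshow(probability, cmap="jet", alpha=0.35)  # colour-coded probability overlay
ax.set_title("Synthetic probability overlay (illustration only)")
ax.axis("off")
fig.savefig("overlay_example.png", dpi=150)
```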


What Happens To Our Brains When We Get Depressed

In the ’90s, when he was a doctoral student at the University of Lausanne, in Switzerland, neuroscientist Sean Hill spent five years studying how cat brains respond to noise. At the time, researchers knew that two regions—the cerebral cortex, which is the outer layer of the brain, and the thalamus, a nut-like structure near the centre—did most of the work. But, when an auditory signal entered the brain through the ear, what happened, specifically?

Which parts of the cortex and thalamus did the signal travel to? And in what order? The answers to such questions could help doctors treat hearing loss in humans. So, to learn more, Hill, along with his supervisor and a group of lab techs, anaesthetized cats and inserted electrodes into their brains to monitor what happened when the animals were exposed to sounds, which were piped into their ears via miniature headphones. Hill’s probe then captured the brain signals the noises generated.

The last step was to euthanize the cats and dissect their brains, which was the only way for Hill to verify where he’d put his probes. It was not a part of the study he enjoyed. He’d grown up on a family farm in Maine and had developed a reverence for all sentient life. As an undergraduate student in New Hampshire, he’d experimented on pond snails, but only after ensuring that each was properly anaesthetized. “I particularly loved cats,” he says, “but I also deeply believed in the need for animal data.” (For obvious reasons, neuroscientists cannot euthanize and dissect human subjects.)

Over time, Hill came to wonder if his data was being put to the best possible use. In his cat experiments, he generated reels of magnetic tape—printouts that resembled player piano scrolls. Once he had finished analyzing the tapes, he would pack them up and store them in a basement. “It was just so tangible,” he says. “You’d see all these data coming from the animals, but then what would happen with it? There were boxes and boxes that, in all likelihood, would never be looked at again.” Most researchers wouldn’t even know where to find them.

Hill was coming up against two interrelated problems in neuroscience: data scarcity and data wastage. Over the past five decades, brain research has advanced rapidly—we’ve developed treatments for Parkinson’s and epilepsy and have figured out, if only in the roughest terms, which parts of the brain produce arousal, anger, sadness, and pain—but we’re still at the beginning of the journey.

Scientists are still some way, for instance, from knowing the size and shape of each type of neuron (i.e., brain cell), the RNA sequences that govern their behavior, or the strength and frequency of the electrical signals that pass between them. The human brain has 86 billion neurons. That’s a lot of data to collect and record.

But, while brain data is a precious resource, scientists tend to lock it away, like secretive art collectors. Labs the world over are conducting brain experiments using increasingly sophisticated technology, from hulking magnetic-imaging devices to microscopic probes. These experiments generate results, which then get published in journals. Once each new data set has served this limited purpose, it goes . . . somewhere, typically onto a secure hard drive only a few people can access.

Hill’s graduate work in Lausanne was at times demoralizing. He reasoned that, for his research to be worth the costs to both the lab that conducted it and the cats who were its subjects, the resulting data—perhaps even all brain data—should live in the public domain. But scientists generally prefer not to share. Data, after all, is a kind of currency: it helps generate findings, which lead to jobs, money, and professional recognition. Researchers are loath to simply give away a commodity they worked hard to acquire. “There’s an old joke,” says Hill, “that neuroscientists would rather share toothbrushes than data.”

He believes that, if they don’t get over this aversion—and if they continue to stash data in basements and on encrypted hard drives—many profound questions about the brain will remain unanswered. This is not just a matter of academic curiosity: if we improve our understanding of the brain, we could develop treatments that have long eluded us for major mental illnesses.

In 2019, Hill became director of Toronto’s Krembil Centre for Neuroinformatics (KCNI), an organization working at the intersection of neuroscience, information management, brain modelling, and psychiatry. The basic premise of neuroinformatics is this: the brain is big, and if humans are going to have a shot at understanding it, brain science must become big too. The KCNI’s goal is to aggregate brain data and use it to build computerized models that, over time, become ever more complex—all to aid them in understanding the intricacy of a real brain.

There are about thirty labs worldwide explicitly dedicated to such work, and they’re governed by a central regulatory body, the International Neuroinformatics Coordinating Facility, in Sweden. But the KCNI stands out because it’s embedded in a medical institution: the Centre for Addiction and Mental Health (CAMH), Canada’s largest psychiatric hospital. While many other neuroinformatics labs study genetics or cognitive processing, the KCNI seeks to demystify conditions like schizophrenia, anxiety, and dementia. Its first area of focus is depression.

The disease affects more than 260 million people around the world, but we barely understand it. We know that the balance between the prefrontal cortex (at the front of the brain) and the anterior cingulate cortex (tucked just behind it) plays some role in regulating mood, as does the chemical serotonin. But what actually causes depression? Is there a tiny but important area of the brain that researchers should focus on?

And does there even exist a singular disorder called depression, or is the label a catch-all denoting a bunch of distinct disorders with similar symptoms but different brain mechanisms? “Fundamentally,” says Hill, “we don’t have a biological understanding of depression or any other mental illness.”

The problem, for Hill, requires an ambitious, participatory approach. If neuroscientists are to someday understand the biological mechanisms behind mental illness—that is, if they are to figure out what literally happens in the brain when a person is depressed, manic, or delusional—they will need to pool their resources. “There’s not going to be a single person who figures it all out,” he says. “There’s never going to be an Einstein who solves a set of equations and shouts, ‘I’ve got it!’ The brain is not that kind of beast.”

The KCNI lab has the feeling of a tech firm. It’s an open-concept space with temporary workstations in lieu of offices, and its glassed-in meeting rooms have inspirational names, like “Tranquility” and “Perception.” The KCNI is a “dry centre”: it works with information and software rather than with biological tissue.

To obtain data, researchers forge relationships with other scientists and try to convince them to share what they’ve got. The interior design choices are a tactical part of this effort. “The space has to look nice,” says Dan Felsky, a researcher at the centre. “Colleagues from elsewhere must want to come in and collaborate with us.”

Yet it’s hard to forget about the larger surroundings. During one interview in the “Clarity” room, Hill and I heard a code-blue alarm, broadcast across CAMH, to indicate a medical emergency elsewhere in the hospital. Hill’s job doesn’t involve front-line care, so he doesn’t personally work with patients, but these disruptions reinforce his sense of urgency. “I come from a discipline where scientists focus on theoretical subjects,” he says. “It’s important to be reminded that people are suffering and we have a responsibility to help them.”

Today, the science of mental illness is based primarily on the study of symptoms. Patients receive a diagnosis when they report or exhibit maladaptive behaviours—despair, anxiety, disordered thinking—associated with a given condition. If a significant number of patients respond positively to a treatment, that treatment is deemed effective. But such data reveals nothing about what physically goes on within the brain.

“When it comes to the various diseases of the brain,” says Helena Ledmyr, co-director of the International Neuroinformatics Coordinating Facility, “we know astonishingly little.” Shreejoy Tripathy, a KCNI researcher, gives modern civilization a bit more credit: “The ancient Egyptians would remove the brain when embalming people because they thought it was useless. In theory, we’ve learned a few things since then. In relation to how much we have left to learn, though, we’re not that much further along.”

Joe Herbert, a Cambridge University neuroscientist, offers a revealing comparison between the way mental and physical maladies are diagnosed. If, in the nineteenth century, you walked into a doctor’s office complaining of shortness of breath, the doctor would likely diagnose you with dyspnea, a word that basically means . . . shortness of breath.

Today, of course, the doctor wouldn’t stop there: they would take a blood sample to see if you were anemic, do an X-ray to search for a collapsed lung, or subject you to an echocardiogram to spot signs of heart disease. Instead of applying a Greek label to your symptoms, they’d run tests to figure out what was causing them.

Herbert argues that the way we currently diagnose depression is similar to how we once diagnosed shortness of breath. The term depression is likely as useful now as dyspnea was 150 years ago: it probably denotes a range of wildly different maladies that just happen to have similar effects. “Psychiatrists recognize two types of depression—or three, if you count bipolar—but that’s simply on the basis of symptoms,” says Herbert. “Our history of medicine tells us that defining a disease by its symptoms is highly simplistic and inaccurate.”

The advantage of working with models, as the KCNI researchers do, is that scientists can experiment in ways not possible with human subjects. They can shut off parts of the model brain or alter the electrical circuitry. The disadvantage is that models are not brains. A model is, ultimately, a kind of hypothesis—an illustration, analogy, or computer simulation that attempts to explain or replicate how a certain brain process works.

Over the centuries, researchers have created brain models based on pianos, telephones, and computers. Each has some validity—the brain has multiple components working in concert, like the keys of a piano; it has different nodes that communicate with one another, like a telephone network; and it encodes and stores information, like a computer—but none perfectly describes how a real brain works. Models may be useful abstractions, but they are abstractions nevertheless.

Yet, because the brain is vast and mysterious and hidden beneath the skull, we have no choice but to model it if we are to study it. Debates over how best to model it, and whether such modelling should be done at the micro or macro scale, are hotly contested in neuroscience. But Hill has spent most of his life preparing to answer these questions.

Hill grew up in the ’70s and ’80s, in an environment entirely unlike the one in which he works. His parents were adherents of the back-to-the-land movement, and his father was an occasional artisanal toymaker. On their farm, near the coast of Maine, the family grew vegetables and raised livestock using techniques not too different from those of nineteenth-century homesteaders. They pulled their plough with oxen and, to fuel their wood-burning stove, felled trees with a manual saw.

When Hill and his older brother found out that the local public school had acquired a TRS-80, an early desktop computer, they became obsessed. The math teacher, sensing their passion, decided to loan the machine to the family for Christmas. Over the holidays, the boys became amateur programmers. Their favourite application was Dancing Demon, in which a devilish figure taps its feet to an old swing tune. Pretty soon, the boys had hacked the program and turned the demon into a monster resembling Boris Karloff in Frankenstein. “In the dark winter of Maine,” says Hill, “what else were we going to do?”

The experiments spurred conversation among the brothers, much of it the fevered speculation of young people who’ve read too much science fiction. They fantasized about the spaceships they would someday design. They also discussed the possibility of building a computerized brain. “I was probably ten or eleven years old,” Hill recalls, “saying to my brother, ‘Will we be able to simulate a neuron? Maybe that’s what we need to get artificial intelligence.’”

Roughly a decade later, as an undergraduate at the quirky liberal arts university Hampshire College, Hill was drawn to computational neuroscience, a field whose practitioners were doing what he and his brother had talked about: building mathematical, and sometimes even computerized, brain models.

In 2006, after completing his PhD, along with postgraduate studies in San Diego and Wisconsin, Hill returned to Lausanne to co-direct the Blue Brain Project, a radical brain-modelling lab in the Swiss Alps. The initiative had been founded a year earlier by Henry Markram, a South African Israeli neuroscientist whose outsize ambitions had made him a revered and controversial figure.

In neuroscience today, there are robust debates as to how complex a brain model should be. Some researchers seek to design clean, elegant models. That’s a fitting description of the Nobel Prize–winning work of Alan Hodgkin and Andrew Huxley, who, in 1952, drew handwritten equations and rudimentary illustrations—with lines, symbols, and arrows—describing how electrical signals exit a neuron and travel along a branch-like cable called an axon.
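Written in their now-standard modern form (rather than the original handwritten notation), the Hodgkin–Huxley equations track the membrane voltage V and three gating variables m, h, and n:

```latex
C_m \frac{dV}{dt} = I_{\mathrm{ext}}
  - \bar{g}_{\mathrm{Na}}\, m^{3} h\,(V - E_{\mathrm{Na}})
  - \bar{g}_{\mathrm{K}}\, n^{4}\,(V - E_{\mathrm{K}})
  - \bar{g}_{L}\,(V - E_{L}),
\qquad
\frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V)\,x, \quad x \in \{m, h, n\}
```

Four coupled differential equations that fit on a blackboard: that is what the clean, elegant end of the modelling spectrum looks like.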

Other practitioners seek to make computer-generated maps that incorporate hundreds of neurons and tens of thousands of connections, image fields so complicated that Michelangelo’s Sistine Chapel ceiling looks minimalist by comparison. The clean, simple models demystify brain processes, making them understandable to humans. The complex models are impossible to comprehend: they offer too much information to take in, attempting to approximate the complexity of an actual brain.

Markram’s inclinations are maximalist. In a 2009 TED Talk, he said that he aimed to build a computer model so comprehensive and biologically accurate that it would account for the location and activity of every human neuron. He likened this endeavour to mapping out a rainforest tree by tree. Skeptics wondered whether such a project was feasible. The problem isn’t merely that there are numerous trees in a rainforest: it’s also that each tree has its own configuration of boughs and limbs. The same is true of neurons.

Each is a microscopic, blob-like structure with dense networks of protruding branches called axons and dendrites. Neurons use these branches to communicate. Electrical signals run along the axons of one neuron and then jump, over a space called a synapse, to the dendrites of another. The 86 billion neurons in the human brain each have an average of 10,000 synaptic connections. Surely, skeptics argued, it was impossible, using available technology, to make a realistic model from such a complicated, dynamic system.
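The arithmetic behind that claim is easy to run, and the result makes the scale problem vivid:

```python
# Back-of-envelope scale check using the figures quoted above.
neurons = 86_000_000_000        # ~86 billion neurons
synapses_per_neuron = 10_000    # average synaptic connections per neuron
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:,} synapses")   # 860,000,000,000,000 -- roughly 860 trillion
```

Roughly 860 trillion connections, before anyone records what a single one of them is doing.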

In 2006, Markram and Hill got to work. The initial goal was to build a hyper-detailed, biologically faithful model of a “microcircuit” (i.e., a cluster of 31,000 neurons) found within the brain of a rat. With a glass probe called a patch clamp, technicians at the lab penetrated a slice of rat brain, connected to each individual neuron, and recorded the electrical signals it sent out.

By injecting dye into the neurons, the team could visualize their shape and structure. Step by step, neuron by neuron, they mapped out the entire communication network. They then fed the data into a model so complex that it required Blue Gene, the IBM supercomputer, to run.

In 2015, they completed their rat microcircuit. If they gave their computerized model certain inputs (say, a virtual spark in one part of the circuit), it would predict an output (for instance, an electrical spark elsewhere) that corresponded to biological reality. The model wasn’t doing any actual cognitive processing: it wasn’t a virtual brain, and it certainly wasn’t thinking.

But, the researchers argued, it was predicting how electrical signals would move through a real circuit inside a real rat brain. “The digital brain tissue naturally behaves like the real brain tissue,” reads a statement on the Blue Brain Project’s website. “This means one can now study this digital tissue almost like one would study real brain tissue.”
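To make the input-to-output idea concrete without pretending to reproduce the Blue Brain model, here is a minimal leaky integrate-and-fire neuron, a deliberately crude stand-in for the detailed multi-compartment models the project actually builds. All parameters and the injected current are assumed, textbook-style values:

```python
# Minimal leaky integrate-and-fire neuron: give it an input current (the
# "virtual spark") and it predicts when spikes occur. Purely illustrative.
dt = 0.1           # time step (ms)
duration = 100.0   # total simulated time (ms)
tau = 10.0         # membrane time constant (ms)
v_rest = -65.0     # resting potential (mV)
v_thresh = -50.0   # spike threshold (mV)
v_reset = -65.0    # potential after a spike (mV)
resistance = 10.0  # membrane resistance (MOhm)

v = v_rest
spike_times = []
for i in range(int(duration / dt)):
    t = i * dt
    current = 2.0 if 20.0 <= t <= 80.0 else 0.0   # injected current (nA)
    dv = (-(v - v_rest) + resistance * current) / tau
    v += dv * dt
    if v >= v_thresh:            # threshold crossed: record a spike and reset
        spike_times.append(round(t, 1))
        v = v_reset

print(f"{len(spike_times)} spikes predicted; first at {spike_times[0]} ms")
```

Feed in a current and the model predicts when spikes occur; the Blue Brain simulations do the same thing, simultaneously, for tens of thousands of far more detailed neurons.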

The breakthrough, however, drew fresh criticisms. Some neuroscientists questioned the expense of the undertaking. The team had built a multimillion-dollar computer program to simulate an already existing biological phenomenon, but so what? “The question of ‘What are you trying to explain?’ hadn’t been answered,” says Grace Lindsay, a computational neuroscientist and author of the book Models of the Mind. “A lot of money went into the Blue Brain Project, but without some guiding goal, the whole thing seemed too open ended to be worth the resources.”

Others argued that the experiment was not just profligate but needlessly convoluted. “There are ways to reduce a big system down to a smaller system,” says Adrienne Fairhall, a computational neuroscientist at the University of Washington. “When Boeing was designing airplanes, they didn’t build an entire plane just to figure out how air flows around the wings. They scaled things down because they understood that a small simulation could tell them what they needed to know.” Why seek complexity, she argues, at the expense of clarity and elegance?

The harshest critics questioned whether the model even did what it was supposed to do. When building it, the team had used detailed information about the shape and electrical signals of each neuron. But, when designing the synaptic connections—that is, the specific locations where the branches communicate with one another—they didn’t exactly mimic biological reality, since the technology for such detailed brain mapping didn’t yet exist. (It does now, but it’s a very recent development.)

Instead, the team built an algorithm to predict, based on the structure of the neurons and the configuration of the branches, where the synaptic connections were likely to be. If you know the location and shape of the trees, they reasoned, you don’t need to perfectly replicate how the branches intersect.
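A naive version of that kind of proximity rule might look like the sketch below. The real algorithm is considerably more sophisticated; the point clouds and the two-micrometre threshold here are invented purely for illustration:

```python
import numpy as np

# Illustrative only: a crude proximity rule for guessing where synapses might
# form, in the spirit of (but much simpler than) the approach described above.
rng = np.random.default_rng(0)
axon_points = rng.uniform(0, 100, size=(500, 3))      # sampled points along an axon (um)
dendrite_points = rng.uniform(0, 100, size=(500, 3))  # sampled points along a dendrite (um)
touch_distance = 2.0                                   # assumed contact threshold (um)

# All pairwise distances between axon and dendrite sample points.
diffs = axon_points[:, None, :] - dendrite_points[None, :, :]
distances = np.linalg.norm(diffs, axis=-1)

# Pairs closer than the threshold are treated as candidate synaptic contacts.
candidates = np.argwhere(distances < touch_distance)
print(f"{len(candidates)} candidate synaptic contacts")
```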

But Moritz Helmstaedter—a director at the Max Planck Institute for Brain Research, in Frankfurt, Germany, and an outspoken critic of the project—questions whether this supposition is true. “The Blue Brain model includes all kinds of assumptions about synaptic connectivity, but what if those assumptions are wrong?” he asks. The problem, for Helmstaedter, isn’t just that the model could be inaccurate: it’s that there’s no way to fully assess its accuracy given how little we know about brain biology.

If a living rat encounters a cat, its brain will generate a flight signal. But, if you present a virtual input representing a cat’s fur to the Blue Brain model, will the model generate a virtual flight signal too? We can’t tell, Helmstaedter argues, in part because we don’t know, in sufficient detail, what a flight signal looks like inside a real rat brain.

Hill takes these comments in stride. To criticisms that the project was too open-ended, he responds that the goal wasn’t to demystify a specific brain process but to develop a new kind of brain modelling based in granular biological detail.

The objective, in other words, was to demonstrate—to the world and to funders—that such an undertaking was possible. To criticisms that the model may not work, Hill contends that it has successfully reproduced thousands of experiments on actual rats. Those experiments hardly prove that the simulation is 100 percent accurate—no brain model is—but surely they give it credibility.

And, to criticisms that the model is needlessly complicated, he counters that the brain is complicated too. “We’d been hearing for decades that the brain is too complex to be modelled comprehensively,” says Hill. “Markram put a flag in the ground and said, ‘This is achievable in a finite amount of time.’”

The specific length of time is a matter of some speculation. In his TED Talk, Markram implied that he might build a detailed human brain model by 2019, and he began raising money toward a new initiative, the Human Brain Project, meant to realize this goal. But funding dried up, and Markram’s predictions came nowhere close to panning out.

The Blue Brain Project, however, remains ongoing. (The focus, now, is on modelling a full mouse brain.) For Hill, it offers proof of concept for the broader mission of neuroinformatics. It has demonstrated, he argues, that when you systemize huge amounts of data, you can build platforms that generate reliable insights about the brain. “We showed that you can do incredibly complex data integration,” says Hill, “and the model will give rise to biologically realistic responses.”

When recruiters approached Hill on behalf of CAMH to ask whether he might consider leaving the Blue Brain Project to start a neuroinformatics lab in Toronto, he demurred. “I’d just become a Swiss citizen,” he says, “and I didn’t want to go.” But the hospital offered him a rare opportunity: to practice cutting-edge neuroscience in a clinical setting. CAMH was formed, in 1998, through a merger of four health care and research institutions.

It treats over 34,000 psychiatric patients each year and employs more than 140 scientists, many of whom study the brain. Its mission, therefore, is both psychiatric and neuroscientific—a combination that appealed to Hill. “I’ve spoken to psychiatrists who’ve told me, ‘Neuroscience doesn’t matter,’” he says. “In their work, they don’t think about brain biology. They think about treating the patient in front of them.” Such biases, he argues, reveal a profound gap between brain research and the illnesses that clinicians see daily. At the KCNI, he’d have a chance to bridge that gap.

The business of data-gathering and brain-modelling may seem dauntingly abstract, but the goal, ultimately, is to figure out what makes us human. The brain, after all, is the place where our emotional, sensory, and imaginative selves reside. To better understand how the modelling process works, I decided to shadow a researcher and trace an individual data point from its origins in a brain to its incorporation in a KCNI model.

Last February, I met Homeira Moradi, a neuroscientist at Toronto Western Hospital’s Krembil Research Institute who shares data with the KCNI. Because of where she works, she has access to the rarest and most valuable resource in her field: human brain tissue. I joined her at 9 a.m., in her lab on the seventh floor. Below us, on the ground level, Taufik Valiante, a neurosurgeon, was operating on an epileptic patient. To treat epilepsy and brain cancer, surgeons sometimes cut out small portions of the brain. But, to access the damaged regions, they must also remove healthy tissue in the neocortex, the high-functioning outer layer of the brain.

Moradi gets her tissue samples from Valiante’s operating room, and when I met her, she was hard at work weighing and mixing chemicals. The solution in which her tissue would sit would have to mimic, as closely as possible, the temperature and composition of an actual brain. “We have to trick the neurons into thinking they’re still at home,” she said.

She moved at the frenetic pace of a line cook during a dinner rush. At some point that morning, Valiante’s assistant would text her from the OR to indicate that the tissue was about to be extracted. When the message came through, she had to be ready. Once the brain sample had been removed from the patient’s head, the neurons within it would begin to die. At best, Moradi would have twelve hours to study the sample before it expired.

The text arrived at noon, by which point we’d been sitting idly for an hour. Suddenly, we sprang into action. To comply with hospital policy, which forbids Moradi from using public hallways where a visitor may spot her carrying a beaker of brains, we approached the OR indirectly, via a warren of underground tunnels.

The passages were lined with gurneys and illuminated, like catacombs in an Edgar Allan Poe story, by dim, inconsistent lighting. I hadn’t received permission to witness the operation, so I waited for Moradi outside the OR and was able to see our chunk of brain only once we’d returned to the lab. It didn’t look like much—a marble-size blob, gelatinous and slightly bloody, like gristle on a steak.

Under a microscope, though, the tissue was like nothing I’d ever seen. Moradi chopped the sample into thin pieces, like almond slices, which went into a small chemical bath called a recording chamber. She then brought the chamber into another room, where she kept her “rig”: an infrared microscope attached to a manual arm.

She put the bath beneath the lens and used the controls on either side of the rig to operate the arm, which held her patch clamp—a glass pipette with a microscopic tip. On a TV monitor above us, we watched the pipette as it moved through layers of brain tissue resembling an ancient root system—tangled, fibrous, and impossibly dense.

Moradi needed to bring the clamp right up against the wall of a cell. The glass had to fuse with the neuron without puncturing the membrane. Positioning the clamp was maddeningly difficult, like threading the world’s smallest needle. It took her the better part of an hour to connect to a pyramidal neuron, one of the largest and most common cell types in our brain sample.

Once we’d made the connection, a filament inside the probe transmitted the electrical signals the neuron sent out. They went first into an amplifier and then into a software application that graphed the currents—strong pulses with intermittent weaker spikes between them—on an adjacent computer screen. “Is that coming from the neuron?” I asked, staring at the screen. “Yes,” Moradi replied. “It’s talking to us.”

It had taken us most of the day, but we’d successfully produced a tiny data set—information that may be relevant to the study of mental illness. When neurons receive electrical signals, they often amplify or dampen them before passing them along to adjacent neurons. This function, called gating, enables the brain to select which stimuli to pay attention to. If successive neurons dampen a signal, the signal fades away.

If they amplify it, the brain attends more closely. A popular theory of depression holds that the illness has something to do with gating. In depressive patients, neurons may be failing to dampen specific signals, thereby inducing the brain to ruminate unnecessarily on negative thoughts. A depressive brain, according to this theory, is a noisy one. It is failing to properly distinguish between salient and irrelevant stimuli. But what if scientists could locate and analyze a specific cluster of neurons (i.e., a circuit) that was causing the problem?
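A toy way to see why gating matters is to treat each relay neuron as a gain applied to the signal it passes along; whether a chain of gains sits a little below or a little above 1 decides whether a thought fades or gets amplified. This is a conceptual sketch, not a biophysical model:

```python
# Conceptual sketch of gating: each relay neuron scales the signal it passes on.
def relay(signal, gains):
    for gain in gains:
        signal *= gain
    return signal

print(relay(1.0, [0.8] * 6))   # dampening chain: the signal fades to ~0.26
print(relay(1.0, [1.2] * 6))   # amplifying chain: the signal grows to ~2.99
```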

Etay Hay, an Israeli neuroscientist and one of Hill’s early hires at the KCNI, is attempting to do just that. Using Moradi’s data, he’s building a model of a “canonical” circuit—that is, a circuit that appears thousands of times, with some variations, in the outer layer of the brain. He believes a malfunction in this circuit may underlie some types of treatment-resistant depression.

The circuit contains pyramidal neurons, like the one Moradi recorded from, that communicate with smaller cells, called interneurons. The interneurons dampen the signals the pyramidal neurons send them. It’s as if the interneurons are turning down the volume on unwanted thoughts. In a depressive brain, however, the interneurons may be failing to properly reduce the signals, causing the patient to get stuck in negative-thought loops.

Etienne Sibille, another CAMH neuroscientist, has designed a drug that increases communication between the interneurons and the pyramidal neurons in Hay’s circuit. In theory, this drug should enable the interneurons to better do their job, tamp down on negative thoughts, and improve cognitive function.

This direct intervention, which occurs at the cellular level, could be more effective than the current class of antidepressants, called SSRIs, which are much cruder. “They take a shotgun approach to depression,” says Sibille, “by flooding the entire brain with serotonin.” (That chemical, for reasons we don’t fully understand, can reduce depressive symptoms, albeit only in some people.)

Sibille’s drug, however, is more targeted. When he gives it to mice who seem listless or fearful, they perk up considerably. Before testing it on humans, Sibille hopes to further verify its efficacy. That’s where Hay comes in. He has finished his virtual circuit and is now preparing to simulate Sibille’s treatment. If the simulation reduces the overall amount of noise in the circuit, the drug can likely proceed to human trials, a potentially game-changing breakthrough.

Hill’s other hires at the KCNI have different specialties from Hay’s but similar goals. Shreejoy Tripathy is building computer models to predict how genes affect the shape and behaviour of neurons. Andreea Diaconescu is using video games to collect data that will allow her to better model early stage psychosis.

This can be used to predict symptom severity and provide more effective treatment plans. Joanna Yu is building the BrainHealth Databank, a digital repository for anonymized data—on symptoms, metabolism, medications, and side effects—from over 1,000 CAMH patients with depression. Yu’s team will employ AI to analyze the information and predict which treatment may offer the best outcome for each individual.

Similarly, Dan Felsky is helping to run a five-year study on over 300 youth patients at CAMH, incorporating data from brain scans, cognitive tests, and doctors’ assessments. “The purpose,” he says, “is to identify signs that a young person may go on to develop early adult psychosis, one of the most severe manifestations of mental illness.”

All of these researchers are trained scientists, but their work can feel more like engineering: they’re each helping to build the digital infrastructure necessary to interpret the data they bring in.

Sibille’s work, for instance, wouldn’t have been possible without Hay’s computer model, which in turn depends on Moradi’s brain-tissue lab, in Toronto, and on data from hundreds of neuron recordings conducted in Seattle and Amsterdam. This collaborative approach, which is based on data-sharing agreements and trust-based relationships, is incredibly efficient. With a team of three trainees, Hay built his model in a mere twelve months. “If just one lab was generating my data,” he says, “I’d have kept it busy for twenty years.”

Simon Lewsen, a Toronto-based writer, contributes to Azure, Precedent, enRoute, the Globe and Mail, and The Atlantic. In 2020, he won a National Magazine Award.

Source: What Happens to Our Brains When We Get Depressed? | The Walrus

More Contents:

False Positive: Why Thousands of Patients May Not Have Asthma after All

Same Vaccine, Different Effects: Why Women Are Feeling Worse after the Jab

How Big Tobacco Set the Stage for Fake News

How Much Do I Need To Sleep? It Depends on Your Age

Do you find yourself dozing off at your desk, even after what you thought was a good night’s rest? Then you probably have the same question as so many others: How much do I need to sleep?

The answer to how many hours you need is not so straightforward, said Dr. Raj Dasgupta, an assistant professor of clinical medicine in the division of pulmonary, critical care and sleep medicine at the Keck School of Medicine of the University of Southern California. Sleep needs are very individualized, he said, but the general recommendation — the “sweet spot” — is to get seven to nine hours of sleep a night.

Recommendations really change as people age, however. “Sleep needs vary over the lifespan,” said Christina Chick, a postdoctoral scholar in psychiatry and behavioral sciences at Stanford University.

CDC’s sleep guideline

Adults should get at least seven hours of sleep a night, but 1 in 3 of them don’t, according to the US Centers for Disease Control and Prevention. Poor sleep has been associated with long-term health consequences, such as higher risk of cardiovascular disease, diabetes, obesity and dementia. In the short term, even one day of sleep loss can harm your well-being, according to a recent study. People who get poor sleep might also be predisposed to conditions such as anxiety, depression and bipolar disorder, Dasgupta said.

“There are chronic consequences, and there are acute consequences, which is why sleep is more than just saying, ‘The early bird gets the worm,’” he said. “It’s much more than that.”

Sleep for kids and teens

If it feels like babies are sleeping all day, they pretty much are. In the first year of life, babies can sleep 17 to 20 hours a day, Dasgupta said. Infants 4 months to 12 months need their 12 to 16 hours of sleep, including naps, according to Chick. Toddlers, who are between the ages of 1 and 3, should get 11 to 14 hours of sleep, according to Dr. Bhanu Kolla, associate professor of psychiatry and psychology at the Mayo Clinic with a special interest in sleep. Children ages 3 to 5 should sleep for 10 to 13 hours, he added, and from ages 6 to 12, they should sleep nine to 12 hours. For kids up to age 5, these sleep recommendations include naps, Chick said.

Teenagers should get eight to 10 hours of sleep, Kolla said. This recommendation has sparked a debate in recent years about start times for school.

“As children move toward adolescence, they naturally prefer to go to sleep later and wake up later,” Chick said. “This is why school start times are such an important focus of debate: If you can’t fall asleep until later, but your school start time remains the same, you’re going to get less sleep.” The quantity of sleep is important, but so is the quality of it, Dasgupta added. Getting deeper sleep and hitting the rapid eye movement (REM) stage helps with cognition, memory and productivity throughout the day. REM is the sleep stage where memories are consolidated and stored. It also allows us to dream vividly. People can sometimes get the right quantity of sleep but still feel fatigued, and this might mean they aren’t reaching these sleep stages.

Sleep for college students and adults

The stereotypical image of the college student usually includes messy hair, undereye bags, and a coffee or energy drink in hand. It doesn’t matter if they stay up all night partying or cramming for an exam — both result in sleep deprivation. “It’s unfortunate, but it’s almost like a rite of passage in a college student to pull the perennial all-nighter even though we know that’s not what you’re supposed to do,” Dasgupta said. He and Kolla concur that seven to nine hours of sleep is best for adults, though Kolla added that older adults may be better at coping with some sleep deprivation.

As an exception, young adults may need nine or more hours on a regular basis because their brains are still developing, Chick said, and adults of any age may also need nine or more hours when recovering from an injury, illness or sleep debt. There are also “natural variants,” Kolla said, referring to some people who require more than 10 hours of sleep and others who get less than four and function normally. If you’re wondering whether it matters if you’re an early bird or night owl, Chick said it depends on “whether your lifestyle is compatible” with your preference. “If you are a night owl, but your job requires you to be in the office at 7 am, this misalignment is less than ideal for your physical and mental health,” she wrote in an email. “But it would be equally problematic for a morning person who works the night shift.”

How to improve your sleep

Are you not getting enough sleep? Here are a few ways to solve that:

1. Stick to a bedtime routine. Try to go to bed and wake up at the same time every day. You can even keep a journal to log these sleep times and how often you wake up at night, Dasgupta said, so you can have an idea of what works for you. You should also make sure your room is dark, cool and comfortable when you go to sleep.

2. Turn off the electronic devices. Do this as early as possible before bed, Chick added, as light exposure can affect your body’s sleep-wake cycle. “Particularly if you are aiming to fall asleep earlier, it’s important to expose yourself to bright natural light as early as possible in the day, and to limit exposure to light in the hours before bedtime,” she said. “Electronic devices mimic many of the wavelengths in sunlight that cue your body to stay awake.”

3. Try mindfulness techniques. Breathing exercises, meditation and yoga can also support sleep, Chick added. Her recent study showed that mindfulness training helped children sleep over an hour more per night.

4. Set good food and exercise habits. Finally, eating healthy and keeping a daily fitness regimen can support better sleep at night, Dasgupta said. “Always try to be consistent with exercise during the day,” he said. “Exercise relieves stress, it helps build up your drive to sleep at night, so there’s many good things there.”

Source: How much do I need to sleep? It depends on your age – CNN
