5G Technology Begins To Expand Beyond Smartphones

Proponents of 5G technology have long said it will remake much of day-to-day life. The deployment of superfast 5G networks is expected to herald a new era for much more than smartphones – everything from advanced virtual-reality video games to remote heart surgery. The vision has been slow to materialize, but the first wave of 5G-enabled gadgets is emerging.

Among the first uses of 5G to enter the consumer market is the delivery of home broadband internet service to cord-cutters: those who want not only to drop their cable-TV bills but also to give up wired internet access altogether. For example, Samsung Electronics Co. has partnered with Verizon Communications Inc. to offer a wireless 5G router that promises to provide broadband access at home. The router picks up a 5G signal just like a smartphone does.

Other consumer devices starting to hit the market include 5G-compatible laptops from several manufacturers, which offer faster downloads and higher-quality video streaming than other laptops when connected to a 5G network. (Each laptop requires a 5G chip to make that connection.)

Among the latest: Lenovo Group Ltd., in association with AT&T Inc., released a 5G laptop, the ThinkPad X13 5G, in August. The device, which started shipping last month, has a 13.3-inch screen and retails for around $1,500. Samsung also introduced a 5G-connected laptop in June: the Galaxy Book Go 5G, which has a 14-inch screen and retails for around $800.

OK, but what if you want a 5G connection on your yacht, miles offshore? You’re in luck. Meridian 5G, a Monaco-based provider of internet services for superyachts – the really big ones – advertises 5G Dome Routers, a combination of antennas and modems that lets vessels sailing within about 60 miles of the coast access 5G connectivity. The hardware costs about $17,000 for an average-sized superyacht.

Of course, all of these gadgets are useful only where 5G networks are available, which still excludes a lot of locations, onshore or off. The same holds true for the new drone technology with 5G and artificial-intelligence capabilities that Qualcomm Inc. unveiled in August. The company says the technology, called the Qualcomm Flight RB5 5G Platform, enables high-quality photo and video collection.

Drones equipped with 5G technology can be used in a variety of industries, including filming, mapping and emergency services like firefighting, Qualcomm notes. For example, due to new camera technology enabled by 5G, drones can be used for mapping large areas of land and for rapidly transferring data for analysis and processing.

Proponents of 5G technology have long said it will remake much of day-to-day life, bringing the so-called Internet of Things to a point where any number of devices – home and office appliances, industrial equipment, hospital equipment, vehicles and more – will be connected to the internet and exchange data with the cloud at speeds that allow for new capabilities.

“The goal of 5G, when we have a mature 5G network globally, is to make sure everything is connected to the cloud 100% of the time,” Qualcomm CEO Cristiano Amon said at a conference in Germany last month.

But it will take years for 5G devices to become widespread, analysts say, as network coverage expands and markets develop for all those advanced new products.

By: Meghan Bobrowsky

Meghan Bobrowsky is a reporter with the tech team. She is a graduate of Scripps College. She previously interned for The Wall Street Journal, the San Francisco Chronicle, the Philadelphia Inquirer and the Sacramento Bee. As an intern at the Miami Herald, she spent the summer of 2020 investigating COVID-19 outbreaks in nursing homes and federal Paycheck Protection Program fraud. She previously served as editor in chief of her school newspaper, the Student Life.

Source: 5G technology begins to expand beyond smartphones

Related Contents:

How Many Senses Do You Have? A Lot More Than 5, Says Science

How many senses does the average human have? Assuming you equate senses with their receptors, such as the retinas in your eyes and the cochleas in your ears, then the traditional answer to this question is five – sight, hearing, touch, smell and taste. They’re called the ‘exteroceptive’ senses because they carry information about the external world.

But your body also has receptors for events occurring inside you, such as your beating heart, expanding lungs, gurgling stomach and many other movements that you’re completely unaware of. They’re traditionally grouped together as another sense, called ‘interoception’.

Yet a proper answer to this question is even more complex and interesting. For one thing, your body has receptors to carry other types of information, such as temperature, that we don’t usually consider to be senses.

Also, some of your receptors are used for more than one sense. Your retinas, for example, are portals for the light waves you need for vision, but some retinal cells also inform your brain if it’s daytime or nighttime. This unnamed ‘day/night sense’ is the basis for circadian rhythms that affect your metabolism and your sleep/wake cycle.

Even senses that seem fundamental, such as vision, are intimately entwined with other senses that seem separate. For example, it turns out that what you see, and how you see it, is yoked to your brain’s tracking of your heartbeat, which is part of interoception.

In the moments when your heart contracts and pushes blood out to your arteries, your brain takes in less visual information from the world. Your brain also constructs senses that you don’t have receptors for. Examples are flavour, which the brain constructs from gustatory (taste) and olfactory (smell) data, and wetness, which is created from touch and temperature.

In fact, your brain constructs everything you see, hear, smell, taste and feel using more than just the sense data from your body’s receptors. Light waves, for example, don’t simply enter your eyes, travel to your brain as electrical signals, and then you see.

Your brain actually predicts what you might see before you see it, based on past experience, the state of your body and your current situation. It combines its predictions with the incoming sense data from your retinas to construct your visual experience of the world around you.
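To make that combination concrete, here is a minimal sketch in Python of one common way to think about it: the prediction and the incoming sense data are blended, each weighted by how much the brain trusts it. The function, the weights and the numbers are illustrative assumptions made for this sketch, not a description of actual neural machinery.

```python
# Toy sketch: blending a prior prediction with incoming sense data.
# The weighting rule and all numbers are illustrative, not a neural model.

def combine(prediction, sensory_input, w_prediction, w_sensory):
    """Weighted average of what the brain expects and what the receptors report."""
    total = w_prediction + w_sensory
    return (w_prediction * prediction + w_sensory * sensory_input) / total

# Suppose past experience predicts a brightness of 0.8 but the retina reports 0.4.
# If the sensory signal is treated as noisy (lower weight), the resulting
# experience stays closer to the prediction than to the raw input.
experience = combine(prediction=0.8, sensory_input=0.4,
                     w_prediction=3.0, w_sensory=1.0)
print(round(experience, 2))  # 0.7
```

On this toy account, shifting the weights toward the sensory input (for example, when the signal is strong and unambiguous) pulls the experience back toward what the retinas actually report.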

Similarly, when you place your fingers on your wrist to feel your pulse, you’re actually feeling a construction based on your brain’s predictions and the actual sense data. You don’t experience sensations with your sense organs. You experience them with your brain.

By: Lisa Feldman Barrett

Lisa Feldman Barrett is a professor of psychology at Northeastern University and the author of Seven And A Half Lessons About The Brain (£14.99, Picador).

Source: How many senses do you have? A lot more than 5, says science – BBC Science Focus Magazine

More Contents:

Your motivation is at rock bottom. Here’s how neuroscience can help

Hangover anxiety and depression: The neuroscience behind your alcohol morning blues

7 (and a half) myths about your brain

What is the time resolution of our senses?

Which of our senses evolved first?

Is there really a noise that makes you poop yourself?

The race to stop ageing: 10 breakthroughs that will help us

What Happens To Our Brains When We Get Depressed?

In the ’90s, when he was a doctoral student at the University of Lausanne, in Switzerland, neuroscientist Sean Hill spent five years studying how cat brains respond to noise. At the time, researchers knew that two regions—the cerebral cortex, which is the outer layer of the brain, and the thalamus, a nut-like structure near the centre—did most of the work. But, when an auditory signal entered the brain through the ear, what happened, specifically?

Which parts of the cortex and thalamus did the signal travel to? And in what order? The answers to such questions could help doctors treat hearing loss in humans. So, to learn more, Hill, along with his supervisor and a group of lab techs, anaesthetized cats and inserted electrodes into their brains to monitor what happened when the animals were exposed to sounds, which were piped into their ears via miniature headphones. Hill’s probe then captured the brain signals the noises generated.

The last step was to euthanize the cats and dissect their brains, which was the only way for Hill to verify where he’d put his probes. It was not a part of the study he enjoyed. He’d grown up on a family farm in Maine and had developed a reverence for all sentient life. As an undergraduate student in New Hampshire, he’d experimented on pond snails, but only after ensuring that each was properly anaesthetized. “I particularly loved cats,” he says, “but I also deeply believed in the need for animal data.” (For obvious reasons, neuroscientists cannot euthanize and dissect human subjects.)

Over time, Hill came to wonder if his data was being put to the best possible use. In his cat experiments, he generated reels of magnetic tape—printouts that resembled player piano scrolls. Once he had finished analyzing the tapes, he would pack them up and store them in a basement. “It was just so tangible,” he says. “You’d see all these data coming from the animals, but then what would happen with it? There were boxes and boxes that, in all likelihood, would never be looked at again.” Most researchers wouldn’t even know where to find them.

Hill was coming up against two interrelated problems in neuroscience: data scarcity and data wastage. Over the past five decades, brain research has advanced rapidly—we’ve developed treatments for Parkinson’s and epilepsy and have figured out, if only in the roughest terms, which parts of the brain produce arousal, anger, sadness, and pain—but we’re still at the beginning of the journey.

Scientists are still some way, for instance, from knowing the size and shape of each type of neuron (i.e., brain cell), the RNA sequences that govern their behavior, or the strength and frequency of the electrical signals that pass between them. The human brain has 86 billion neurons. That’s a lot of data to collect and record.

But, while brain data is a precious resource, scientists tend to lock it away, like secretive art collectors. Labs the world over are conducting brain experiments using increasingly sophisticated technology, from hulking magnetic-imaging devices to microscopic probes. These experiments generate results, which then get published in journals. Once each new data set has served this limited purpose, it goes . . . somewhere, typically onto a secure hard drive only a few people can access.

Hill’s graduate work in Lausanne was at times demoralizing. He reasoned that, for his research to be worth the costs to both the lab that conducted it and the cats who were its subjects, the resulting data—perhaps even all brain data—should live in the public domain. But scientists generally prefer not to share. Data, after all, is a kind of currency: it helps generate findings, which lead to jobs, money, and professional recognition. Researchers are loath to simply give away a commodity they worked hard to acquire. “There’s an old joke,” says Hill, “that neuroscientists would rather share toothbrushes than data.”

He believes that, if they don’t get over this aversion—and if they continue to stash data in basements and on encrypted hard drives—many profound questions about the brain will remain unanswered. This is not just a matter of academic curiosity: if we improve our understanding of the brain, we could develop treatments that have long eluded us for major mental illnesses.

In 2019, Hill became director of Toronto’s Krembil Centre for Neuroinformatics (KCNI), an organization working at the intersection of neuroscience, information management, brain modelling, and psychiatry. The basic premise of neuroinformatics is this: the brain is big, and if humans are going to have a shot at understanding it, brain science must become big too. The KCNI’s goal is to aggregate brain data and use it to build computerized models that, over time, become ever more complex—all to aid them in understanding the intricacy of a real brain.

There are about thirty labs worldwide explicitly dedicated to such work, and they’re governed by a central regulatory body, the International Neuroinformatics Coordinating Facility, in Sweden. But the KCNI stands out because it’s embedded in a medical institution: the Centre for Addiction and Mental Health (CAMH), Canada’s largest psychiatric hospital. While many other neuroinformatics labs study genetics or cognitive processing, the KCNI seeks to demystify conditions like schizophrenia, anxiety, and dementia. Its first area of focus is depression.

The disease affects more than 260 million people around the world, but we barely understand it. We know that the balance between the prefrontal cortex (at the front of the brain) and the anterior cingulate cortex (tucked just behind it) plays some role in regulating mood, as does the chemical serotonin. But what actually causes depression? Is there a tiny but important area of the brain that researchers should focus on?

And does there even exist a singular disorder called depression, or is the label a catch-all denoting a bunch of distinct disorders with similar symptoms but different brain mechanisms? “Fundamentally,” says Hill, “we don’t have a biological understanding of depression or any other mental illness.”

The problem, for Hill, requires an ambitious, participatory approach. If neuroscientists are to someday understand the biological mechanisms behind mental illness—that is, if they are to figure out what literally happens in the brain when a person is depressed, manic, or delusional—they will need to pool their resources. “There’s not going to be a single person who figures it all out,” he says. “There’s never going to be an Einstein who solves a set of equations and shouts, ‘I’ve got it!’ The brain is not that kind of beast.”

The KCNI lab has the feeling of a tech firm. It’s an open-concept space with temporary workstations in lieu of offices, and its glassed-in meeting rooms have inspirational names, like “Tranquility” and “Perception.” The KCNI is a “dry centre”: it works with information and software rather than with biological tissue.

To obtain data, researchers forge relationships with other scientists and try to convince them to share what they’ve got. The interior design choices are a tactical part of this effort. “The space has to look nice,” says Dan Felsky, a researcher at the centre. “Colleagues from elsewhere must want to come in and collaborate with us.”

Yet it’s hard to forget about the larger surroundings. During one interview in the “Clarity” room, Hill and I heard a code-blue alarm, broadcast across CAMH, to indicate a medical emergency elsewhere in the hospital. Hill’s job doesn’t involve front line care, so he doesn’t personally work with patients, but these disruptions reinforce his sense of urgency. “I come from a discipline where scientists focus on theoretical subjects,” he says. “It’s important to be reminded that people are suffering and we have a responsibility to help them.”

Today, the science of mental illness is based primarily on the study of symptoms. Patients receive a diagnosis when they report or exhibit maladaptive behaviours—despair, anxiety, disordered thinking—associated with a given condition. If a significant number of patients respond positively to a treatment, that treatment is deemed effective. But such data reveals nothing about what physically goes on within the brain.

“When it comes to the various diseases of the brain,” says Helena Ledmyr, co-director of the International Neuroinformatics Coordinating Facility, “we know astonishingly little.” Shreejoy Tripathy, a KCNI researcher, gives modern civilization a bit more credit: “The ancient Egyptians would remove the brain when embalming people because they thought it was useless. In theory, we’ve learned a few things since then. In relation to how much we have left to learn, though, we’re not that much further along.”

Joe Herbert, a Cambridge University neuroscientist, offers a revealing comparison between the way mental versus physical maladies are diagnosed. If, in the nineteenth century, you walked into a doctor’s office complaining of shortness of breath, the doctor would likely diagnose you with dyspnea, a word that basically means . . . shortness of breath.

Today, of course, the doctor wouldn’t stop there: they would take a blood sample to see if you were anemic, do an X-ray to search for a collapsed lung, or subject you to an echocardiogram to spot signs of heart disease. Instead of applying a Greek label to your symptoms, they’d run tests to figure out what was causing them.

Herbert argues that the way we currently diagnose depression is similar to how we once diagnosed shortness of breath. The term depression is likely as useful now as dyspnea was 150 years ago: it probably denotes a range of wildly different maladies that just happen to have similar effects. “Psychiatrists recognize two types of depression—or three, if you count bipolar—but that’s simply on the basis of symptoms,” says Herbert. “Our history of medicine tells us that defining a disease by its symptoms is highly simplistic and inaccurate.”

The advantage of working with models, as the KCNI researchers do, is that scientists can experiment in ways not possible with human subjects. They can shut off parts of the model brain or alter the electrical circuitry. The disadvantage is that models are not brains. A model is, ultimately, a kind of hypothesis—an illustration, analogy, or computer simulation that attempts to explain or replicate how a certain brain process works.

Over the centuries, researchers have created brain models based on pianos, telephones, and computers. Each has some validity—the brain has multiple components working in concert, like the keys of a piano; it has different nodes that communicate with one another, like a telephone network; and it encodes and stores information, like a computer—but none perfectly describes how a real brain works. Models may be useful abstractions, but they are abstractions nevertheless.

Yet, because the brain is vast and mysterious and hidden beneath the skull, we have no choice but to model it if we are to study it. Debates over how best to model it, and whether such modelling should be done at the micro or macro scale, are hotly contested in neuroscience. But Hill has spent most of his life preparing to answer these questions.

Hill grew up in the ’70s and ’80s, in an environment entirely unlike the one in which he works. His parents were adherents of the back-to-the-land movement, and his father was an occasional artisanal toymaker. On their farm, near the coast of Maine, the family grew vegetables and raised livestock using techniques not too different from those of nineteenth-century homesteaders. They pulled their plough with oxen and, to fuel their wood-burning stove, felled trees with a manual saw.

When Hill and his older brother found out that the local public school had acquired a TRS-80, an early desktop computer, they became obsessed. The math teacher, sensing their passion, decided to loan the machine to the family for Christmas. Over the holidays, the boys became amateur programmers. Their favourite application was Dancing Demon, in which a devilish figure taps its feet to an old swing tune. Pretty soon, the boys had hacked the program and turned the demon into a monster resembling Boris Karloff in Frankenstein. “In the dark winter of Maine,” says Hill, “what else were we going to do?”

The experiments spurred conversation among the brothers, much of it the fevered speculation of young people who’ve read too much science fiction. They fantasized about the spaceships they would someday design. They also discussed the possibility of building a computerized brain. “I was probably ten or eleven years old,” Hill recalls, “saying to my brother, ‘Will we be able to simulate a neuron? Maybe that’s what we need to get artificial intelligence.’”

Roughly a decade later, as an undergraduate at the quirky liberal arts university Hampshire College, Hill was drawn to computational neuroscience, a field whose practitioners were doing what he and his brother had talked about: building mathematical, and sometimes even computerized, brain models.

In 2006, after completing his PhD, along with postgraduate studies in San Diego and Wisconsin, Hill returned to Lausanne to co-direct the Blue Brain Project, a radical brain-modelling lab in the Swiss Alps. The initiative had been founded a year earlier by Henry Markram, a South African Israeli neuroscientist whose outsize ambitions had made him a revered and controversial figure.

In neuroscience today, there are robust debates as to how complex a brain model should be. Some researchers seek to design clean, elegant models. That’s a fitting description of the Nobel Prize–winning work of Alan Hodgkin and Andrew Huxley, who, in 1952, drew handwritten equations and rudimentary illustrations—with lines, symbols, and arrows—describing how electrical signals exit a neuron and travel along a branch-like cable called an axon.
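For orientation, the heart of that Nobel Prize-winning work can be stated in one line: the Hodgkin-Huxley membrane equation, reproduced here in its standard textbook form (general background, not an equation quoted from this article), describes how the voltage across a neuron’s membrane changes as current flows through sodium, potassium and leak channels:

$$C_m \frac{dV}{dt} = I_{\mathrm{ext}} - \bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4\, (V - E_{\mathrm{K}}) - \bar{g}_{L}\, (V - E_{L})$$

Here $V$ is the membrane voltage, $C_m$ the membrane capacitance, the $\bar{g}$ terms are maximal channel conductances, the $E$ terms are reversal potentials, and $m$, $h$ and $n$ are gating variables governed by their own simple differential equations, which Hodgkin and Huxley fitted by hand.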

Other practitioners seek to make computer-generated maps that incorporate hundreds of neurons and tens of thousands of connections, image fields so complicated that Michelangelo’s Sistine Chapel ceiling looks minimalist by comparison. The clean, simple models demystify brain processes, making them understandable to humans. The complex models are impossible to comprehend: they offer too much information to take in, attempting to approximate the complexity of an actual brain.

Markram’s inclinations are maximalist. In a 2009 TED Talk, he said that he aimed to build a computer model so comprehensive and biologically accurate that it would account for the location and activity of every human neuron. He likened this endeavour to mapping out a rainforest tree by tree. Skeptics wondered whether such a project was feasible. The problem isn’t merely that there are numerous trees in a rainforest: it’s also that each tree has its own configuration of boughs and limbs. The same is true of neurons.

Each is a microscopic, blob-like structure with dense networks of protruding branches called axons and dendrites. Neurons use these branches to communicate. Electrical signals run along the axons of one neuron and then jump, over a space called a synapse, to the dendrites of another. The 86 billion neurons in the human brain each have an average of 10,000 synaptic connections. Surely, skeptics argued, it was impossible, using available technology, to make a realistic model from such a complicated, dynamic system.
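The arithmetic behind that skepticism is straightforward; using the figures above,

$$86 \times 10^{9}\ \text{neurons} \times 10^{4}\ \text{synapses per neuron} \approx 8.6 \times 10^{14}\ \text{synaptic connections,}$$

which is getting on for a quadrillion individual connections to locate and characterize.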

In 2006, Markram and Hill got to work. The initial goal was to build a hyper-detailed, biologically faithful model of a “microcircuit” (i.e., a cluster of 31,000 neurons) found within the brain of a rat. With a glass probe called a patch clamp, technicians at the lab penetrated a slice of rat brain, connected to each individual neuron, and recorded the electrical signals it sent out.

By injecting dye into the neurons, the team could visualize their shape and structure. Step by step, neuron by neuron, they mapped out the entire communication network. They then fed the data into a model so complex that it required Blue Gene, the IBM supercomputer, to run.

In 2015, they completed their rat microcircuit. If they gave their computerized model certain inputs (say, a virtual spark in one part of the circuit), it would predict an output (for instance, an electrical spark elsewhere) that corresponded to biological reality. The model wasn’t doing any actual cognitive processing: it wasn’t a virtual brain, and it certainly wasn’t thinking.

But, the researchers argued, it was predicting how electrical signals would move through a real circuit inside a real rat brain. “The digital brain tissue naturally behaves like the real brain tissue,” reads a statement on the Blue Brain Project’s website. “This means one can now study this digital tissue almost like one would study real brain tissue.”

The breakthrough, however, drew fresh criticisms. Some neuroscientists questioned the expense of the undertaking. The team had built a multimillion-dollar computer program to simulate an already existing biological phenomenon, but so what? “The question of ‘What are you trying to explain?’ hadn’t been answered,” says Grace Lindsay, a computational neuroscientist and author of the book Models of the Mind. “A lot of money went into the Blue Brain Project, but without some guiding goal, the whole thing seemed too open ended to be worth the resources.”

Others argued that the experiment was not just profligate but needlessly convoluted. “There are ways to reduce a big system down to a smaller system,” says Adrienne Fairhall, a computational neuroscientist at the University of Washington. “When Boeing was designing airplanes, they didn’t build an entire plane just to figure out how air flows around the wings. They scaled things down because they understood that a small simulation could tell them what they needed to know.” Why seek complexity, she argues, at the expense of clarity and elegance?

The harshest critics questioned whether the model even did what it was supposed to do. When building it, the team had used detailed information about the shape and electrical signals of each neuron. But, when designing the synaptic connections—that is, the specific locations where the branches communicate with one another—they didn’t exactly mimic biological reality, since the technology for such detailed brain mapping didn’t yet exist. (It does now, but it’s a very recent development.)

Instead, the team built an algorithm to predict, based on the structure of the neurons and the configuration of the branches, where the synaptic connections were likely to be. If you know the location and shape of the trees, they reasoned, you don’t need to perfectly replicate how the branches intersect.
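A heavily simplified sketch of that kind of rule, written in Python: treat sampled points along an axon and a dendrite as 3-D coordinates and predict a synapse wherever the two come within some threshold distance of each other. The distance threshold, the coordinates and the function name are illustrative assumptions; this is not the Blue Brain team’s actual algorithm, just the general flavour of a proximity-based prediction.

```python
# Toy proximity rule for predicting likely synapse locations.
# Illustrative only: flags places where an axon branch passes close to a dendrite.
from math import dist  # Euclidean distance (Python 3.8+)

def predict_synapses(axon_points, dendrite_points, touch_distance=2.0):
    """Return (axon_point, dendrite_point) pairs closer than touch_distance."""
    candidates = []
    for a in axon_points:
        for d in dendrite_points:
            if dist(a, d) <= touch_distance:
                candidates.append((a, d))
    return candidates

# Hypothetical sample points (arbitrary units) along one neuron's axon
# and another neuron's dendrite.
axon = [(0.0, 0.0, 0.0), (5.0, 1.0, 0.5), (10.0, 2.0, 1.0)]
dendrite = [(4.5, 1.5, 0.0), (20.0, 5.0, 3.0)]

print(predict_synapses(axon, dendrite))
# [((5.0, 1.0, 0.5), (4.5, 1.5, 0.0))]: only the near-touching pair is predicted
```

In the real project, the equivalent step would run over vast numbers of reconstructed branches rather than a handful of points, which helps explain why a supercomputer was needed.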

But Moritz Helmstaedter—a director at the Max Planck Institute for Brain Research, in Frankfurt, Germany, and an outspoken critic of the project—questions whether this supposition is true. “The Blue Brain model includes all kinds of assumptions about synaptic connectivity, but what if those assumptions are wrong?” he asks. The problem, for Helmstaedter, isn’t just that the model could be inaccurate: it’s that there’s no way to fully assess its accuracy given how little we know about brain biology.

If a living rat encounters a cat, its brain will generate a flight signal. But, if you present a virtual input representing a cat’s fur to the Blue Brain model, will the model generate a virtual flight signal too? We can’t tell, Helmstaedter argues, in part because we don’t know, in sufficient detail, what a flight signal looks like inside a real rat brain.

Hill takes these comments in stride. To criticisms that the project was too open-ended, he responds that the goal wasn’t to demystify a specific brain process but to develop a new kind of brain modelling based in granular biological detail.

The objective, in other words, was to demonstrate—to the world and to funders—that such an undertaking was possible. To criticisms that the model may not work, Hill contends that it has successfully reproduced thousands of experiments on actual rats. Those experiments hardly prove that the simulation is 100 percent accurate—no brain model is—but surely they give it credibility.

And, to criticisms that the model is needlessly complicated, he counters that the brain is complicated too. “We’d been hearing for decades that the brain is too complex to be modelled comprehensively,” says Hill. “Markram put a flag in the ground and said, ‘This is achievable in a finite amount of time.’”

The specific length of time is a matter of some speculation. In his TED Talk, Markram implied that he might build a detailed human brain model by 2019, and he began raising money toward a new initiative, the Human Brain Project, meant to realize this goal. But funding dried up, and Markram’s predictions came nowhere close to panning out.

The Blue Brain Project, however, remains ongoing. (The focus, now, is on modelling a full mouse brain.) For Hill, it offers proof of concept for the broader mission of neuroinformatics. It has demonstrated, he argues, that when you systemize huge amounts of data, you can build platforms that generate reliable insights about the brain. “We showed that you can do incredibly complex data integration,” says Hill, “and the model will give rise to biologically realistic responses.”

When Hill was approached by recruiters on behalf of CAMH to ask if he might consider leaving the Blue Brain Project to start a neuroinformatics lab in Toronto, he demurred. “I’d just become a Swiss citizen,” he says, “and I didn’t want to go.” But the hospital gave him a rare opportunity: to practice cutting-edge neuroscience in a clinical setting. CAMH was formed, in 1998, through a merger of four health care and research institutions.

It treats over 34,000 psychiatric patients each year and employs more than 140 scientists, many of whom study the brain. Its mission, therefore, is both psychiatric and neuroscientific—a combination that appealed to Hill. “I’ve spoken to psychiatrists who’ve told me, ‘Neuroscience doesn’t matter,’” he says. “In their work, they don’t think about brain biology. They think about treating the patient in front of them.” Such biases, he argues, reveal a profound gap between brain research and the illnesses that clinicians see daily. At the KCNI, he’d have a chance to bridge that gap.

The business of data-gathering and brain-modelling may seem dauntingly abstract, but the goal, ultimately, is to figure out what makes us human. The brain, after all, is the place where our emotional, sensory, and imaginative selves reside. To better understand how the modelling process works, I decided to shadow a researcher and trace an individual data point from its origins in a brain to its incorporation in a KCNI model.

Last February, I met Homeira Moradi, a neuroscientist at Toronto Western Hospital’s Krembil Research Institute who shares data with the KCNI. Because of where she works, she has access to the rarest and most valuable resource in her field: human brain tissue. I joined her at 9 a.m., in her lab on the seventh floor. Below us, on the ground level, Taufik Valiante, a neurosurgeon, was operating on an epileptic patient. To treat epilepsy and brain cancer, surgeons sometimes cut out small portions of the brain. But, to access the damaged regions, they must also remove healthy tissue in the neocortex, the high-functioning outer layer of the brain.

Moradi gets her tissue samples from Valiante’s operating room, and when I met her, she was hard at work weighing and mixing chemicals. The solution in which her tissue would sit would have to mimic, as closely as possible, the temperature and composition of an actual brain. “We have to trick the neurons into thinking they’re still at home,” she said.

She moved at the frenetic pace of a line cook during a dinner rush. At some point that morning, Valiante’s assistant would text her from the OR to indicate that the tissue was about to be extracted. When the message came through, she had to be ready. Once the brain sample had been removed from the patient’s head, the neurons within it would begin to die. At best, Moradi would have twelve hours to study the sample before it expired.

The text arrived at noon, by which point we’d been sitting idly for an hour. Suddenly, we sprang into action. To comply with hospital policy, which forbids Moradi from using public hallways where a visitor may spot her carrying a beaker of brains, we approached the OR indirectly, via a warren of underground tunnels.

The passages were lined with gurneys and illuminated, like catacombs in an Edgar Allan Poe story, by dim, inconsistent lighting. I hadn’t received permission to witness the operation, so I waited for Moradi outside the OR and was able to see our chunk of brain only once we’d returned to the lab. It didn’t look like much—a marble-size blob, gelatinous and slightly bloody, like gristle on a steak.

Under a microscope, though, the tissue was like nothing I’d ever seen. Moradi chopped the sample into thin pieces, like almond slices, which went into a small chemical bath called a recording chamber. She then brought the chamber into another room, where she kept her “rig”: an infrared microscope attached to a manual arm.

She put the bath beneath the lens and used the controls on either side of the rig to operate the arm, which held her patch clamp—a glass pipette with a microscopic tip. On a TV monitor above us, we watched the pipette as it moved through layers of brain tissue resembling an ancient root system—tangled, fibrous, and impossibly dense.

Moradi needed to bring the clamp right up against the wall of a cell. The glass had to fuse with the neuron without puncturing the membrane. Positioning the clamp was maddeningly difficult, like threading the world’s smallest needle. It took her the better part of an hour to connect to a pyramidal neuron, one of the largest and most common cell types in our brain sample.

Once we’d made the connection, a filament inside the probe transmitted the electrical signals the neuron sent out. They went first into an amplifier and then into a software application that graphed the currents—strong pulses with intermittent weaker spikes between them—on an adjacent computer screen. “Is that coming from the neuron?” I asked, staring at the screen. “Yes,” Moradi replied. “It’s talking to us.”

It had taken us most of the day, but we’d successfully produced a tiny data set—information that may be relevant to the study of mental illness. When neurons receive electrical signals, they often amplify or dampen them before passing them along to adjacent neurons. This function, called gating, enables the brain to select which stimuli to pay attention to. If successive neurons dampen a signal, the signal fades away.

If they amplify it, the brain attends more closely. A popular theory of depression holds that the illness has something to do with gating. In depressive patients, neurons may be failing to dampen specific signals, thereby inducing the brain to ruminate unnecessarily on negative thoughts. A depressive brain, according to this theory, is a noisy one. It is failing to properly distinguish between salient and irrelevant stimuli. But what if scientists could locate and analyze a specific cluster of neurons (i.e., a circuit) that was causing the problem?
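As a very rough illustration of that gating idea, here is a toy calculation in Python, with made-up gain values rather than anything measured from real neurons: a signal is passed through a chain of neurons, each of which scales it by a gain. Gains below 1 dampen the signal until it fades; gains at or above 1 let it keep circulating, which is the “noisy”, ruminative scenario the theory describes.

```python
# Toy illustration of gating across a chain of neurons.
# Each neuron scales the incoming signal by a gain; gains < 1 dampen it,
# gains >= 1 keep it going. All gain values below are made up.

def propagate(signal, gains):
    """Pass a signal through successive neurons, each applying its gain."""
    trace = [signal]
    for g in gains:
        signal *= g
        trace.append(round(signal, 3))
    return trace

healthy_gains = [0.6, 0.5, 0.4, 0.3]       # dampening: the signal fades away
ruminative_gains = [1.0, 1.1, 0.95, 1.05]  # failing to dampen: the signal persists

print("healthy:   ", propagate(1.0, healthy_gains))     # [1.0, 0.6, 0.3, 0.12, 0.036]
print("ruminative:", propagate(1.0, ruminative_gains))  # [1.0, 1.0, 1.1, 1.045, 1.097]
```

In the dampening case the signal quickly drops toward zero; in the second case it never fades, a crude stand-in for a negative thought the brain cannot stop attending to.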

Etay Hay, an Israeli neuroscientist and one of Hill’s early hires at the KCNI, is attempting to do just that. Using Moradi’s data, he’s building a model of a “canonical” circuit—that is, a circuit that appears thousands of times, with some variations, in the outer layer of the brain. He believes a malfunction in this circuit may underlie some types of treatment-resistant depression.

The circuit contains pyramidal neurons, like the one Moradi recorded from, that communicate with smaller cells, called interneurons. The interneurons dampen the signals the pyramidal neurons send them. It’s as if the interneurons are turning down the volume on unwanted thoughts. In a depressive brain, however, the interneurons may be failing to properly reduce the signals, causing the patient to get stuck in negative-thought loops.

Etienne Sibille, another CAMH neuroscientist, has designed a drug that increases communication between the interneurons and the pyramidal neurons in Hay’s circuit. In theory, this drug should enable the interneurons to better do their job, tamp down on negative thoughts, and improve cognitive function.

This direct intervention, which occurs at the cellular level, could be more effective than the current class of antidepressants, called SSRIs, which are much cruder. “They take a shotgun approach to depression,” says Sibille, “by flooding the entire brain with serotonin.” (That chemical, for reasons we don’t fully understand, can reduce depressive symptoms, albeit only in some people.)

Sibille’s drug, however, is more targeted. When he gives it to mice who seem listless or fearful, they perk up considerably. Before testing it on humans, Sibille hopes to further verify its efficacy. That’s where Hay comes in. He has finished his virtual circuit and is now preparing to simulate Sibille’s treatment. If the simulation reduces the overall amount of noise in the circuit, the drug can likely proceed to human trials, a potentially game-changing breakthrough.

Hill’s other hires at the KCNI have different specialties from Hay’s but similar goals. Shreejoy Tripathy is building computer models to predict how genes affect the shape and behaviour of neurons. Andreea Diaconescu is using video games to collect data that will allow her to better model early stage psychosis.

This can be used to predict symptom severity and provide more effective treatment plans. Joanna Yu is building the BrainHealth Databank, a digital repository for anonymized data—on symptoms, metabolism, medications, and side effects—from over 1,000 CAMH patients with depression. Yu’s team will employ AI to analyze the information and predict which treatment may offer the best outcome for each individual.

Similarly, Dan Felsky is helping to run a five-year study on over 300 youth patients at CAMH, incorporating data from brain scans, cognitive tests, and doctors’ assessments. “The purpose,” he says, “is to identify signs that a young person may go on to develop early adult psychosis, one of the most severe manifestations of mental illness.”

All of these researchers are trained scientists, but their work can feel more like engineering: they’re each helping to build the digital infrastructure necessary to interpret the data they bring in.

Sibille’s work, for instance, wouldn’t have been possible without Hay’s computer model, which in turn depends on Moradi’s brain-tissue lab, in Toronto, and on data from hundreds of neuron recordings conducted in Seattle and Amsterdam. This collaborative approach, which is based in data-sharing agreements and trust-based relationships, is incredibly efficient. With a team of three trainees, Hay built his model in a mere twelve months. “If just one lab was generating my data,” he says, “I’d have kept it busy for twenty years.”

Simon Lewsen, a Toronto-based writer, contributes to Azure, Precedent, enRoute, the Globe and Mail, and The Atlantic. In 2020, he won a National Magazine Award.

Source: What Happens to Our Brains When We Get Depressed? | The Walrus

More Contents:

False Positive: Why Thousands of Patients May Not Have Asthma after All

Same Vaccine, Different Effects: Why Women Are Feeling Worse after the Jab

How Big Tobacco Set the Stage for Fake News

Tanzania Considers Crypto and Boosts Bitcoin as Nations Line Up Behind El Salvador To Embrace Decentralized Finance

Tanzania became the latest country to signal its support for digital assets this weekend as its president instructed financial authorities to prepare for widespread use of cryptocurrencies, elevating bitcoin prices further after El Salvador became the first country to make bitcoin legal tender last week and Elon Musk outlined plans for Tesla to resume accepting bitcoin as a form of payment.

Key Facts

President Samia Suluhu Hassan called on the Tanzanian Central Bank Sunday to begin “working on” facilitating widespread use of cryptocurrencies in the East African nation. While many in Tanzania have not yet embraced decentralized finance, Hassan said the Central Bank should “be ready for the changes and not be caught unprepared.”

Hassan is one of the most senior politicians to signal support for digital assets since El Salvador voted to adopt bitcoin as legal tender last week and helped give the flagging market a boost. The announcement helped bitcoin gain nearly 10% in 24 hours, nearly reaching $40,000 a token Monday morning.

The token was also buoyed by news that Tesla would resume its use of bitcoin when there is proof the asset is obtained using around 50% clean energy.

What To Watch For

Popular support for bitcoin adoption in Nigeria also gained momentum over the weekend. Russell Okung, an NFL player of Nigerian descent, penned an open letter to the Nigerian president imploring the country to adopt a Bitcoin standard so as to avoid “falling behind.” Twitter and Square CEO Jack Dorsey, one of the most high-profile crypto enthusiasts, tweeted his support of the idea a number of times over the weekend.

Key Background

While reaching its highest point in several weeks, bitcoin, along with the wider crypto market, is still recovering from a tailspin that rapidly wiped over $700 billion from the market’s value. This slump was primarily induced by Tesla announcing it would no longer accept bitcoin due to environmental concerns and China cracking down on the assets.

With support from the likes of El Salvador, alongside other countries and banks that may begin to adopt bitcoin or other cryptocurrency tokens, the market has slowly started to recover, though it remains volatile. Beyond Tanzania, lawmakers in a number of Latin American countries, including Brazil and Panama, have expressed at least a casual interest in following in El Salvador’s footsteps.

Further Reading

El Salvador Makes History As World’s First Country To Make Bitcoin Legal Tender (Forbes)

Tanzanian president urges central bank to prepare for crypto (Coin Telegraph)

I am a London-based reporter for Forbes covering breaking news. Previously, I have worked as a reporter for a specialist legal publication covering big data and as a freelance journalist and policy analyst covering science, tech and health. I have a master’s degree in Biological Natural Sciences and a master’s degree in the History and Philosophy of Science from the University of Cambridge. Follow me on Twitter @theroberthart or email me at rhart@forbes.com

Source: Tanzania Considers Crypto—And Boosts Bitcoin—As Nations Line Up Behind El Salvador To Embrace Decentralized Finance

Critics:

The European Union has passed no specific legislation relative to the status of bitcoin as a currency, but has stated that VAT/GST is not applicable to the conversion between traditional (fiat) currency and bitcoin. VAT/GST and other taxes (such as income tax) still apply to transactions made using bitcoins for goods and services. 

In October 2015, the Court of Justice of the European Union ruled that “The exchange of traditional currencies for units of the ‘bitcoin’ virtual currency is exempt from VAT” and that “Member States must exempt, inter alia, transactions relating to ‘currency, bank notes and coins used as legal tender’”, making bitcoin a currency as opposed to being a commodity. According to judges, the tax should not be charged because bitcoins should be treated as a means of payment.

According to the European Central Bank, traditional financial sector regulation is not applicable to bitcoin because it does not involve traditional financial actors. Others in the EU have stated, however, that existing rules can be extended to include bitcoin and bitcoin companies.

The European Central Bank classifies bitcoin as a convertible decentralized virtual currency. In July 2014 the European Banking Authority advised European banks not to deal in virtual currencies such as bitcoin until a regulatory regime was in place.

In 2016, the European Parliament’s proposal to set up a task force to monitor virtual currencies to combat money laundering and terrorism, which passed by 542 votes to 51 with 11 abstentions, was sent to the European Commission for consideration.
