Why Is It So Hard To Control Our Appetites? A Doctor’s Struggles With Giving Up Sugar

We’ve become convinced that if we can eat more healthily, we will be morally better people. But where does this idea come from? Near the end of the hellish first year of the coronavirus pandemic, I was possessed by the desire to eliminate sugar – all refined sugar – from my diet. In retrospect, it probably wasn’t the best time to add a new challenge to my life. My wife and I had been struggling to remote-school three young kids with no childcare. My elderly parents lived out of state and seemed to need a surprising number of reminders that pandemic restrictions were not lifted for Diwali parties or new Bollywood movie releases.

Like many people in those early days, we were looking around for masks and trying to make sense of shifting government guidelines about when to wear them. In addition, as a doctor, I was seeing patients in clinic at a time dominated by medical uncertainty, when personal protective equipment was scarce, and my hospital, facing staff shortages, was providing training videos and “how-to” tip sheets to specialists like me who hadn’t practised in an emergency room for years, in case we were needed as backup.

It would have been enough to focus on avoiding the virus and managing all this without putting more on my plate. But cutting processed sugar seemed like an opportunity to reassert some measure of order to the daily scrum, or at least to the body that entered the fray each day.

My former physique was behind me and the stress of clinical practice during the pandemic was taking its toll. Maybe it was all the pandemic death in the air, but I started feeling like I was what the narrator in Arundhati Roy’s novel The God of Small Things calls “Not old. Not young. But a viable die-able age.” Maybe doing away with sugar could slow things down? More tantalisingly, maybe it could even take me back to a fresher time, the days in college when I had actually gone sugar-free for a while.

My friends offered condolences on what they called my soon-to-be joyless lifestyle. But I was set, compelled by literature about the deleterious, even toxin-like effects of added sugar. I had my doubts about being able to pull something like this off again, though, so I decided – as doctors often do – to tackle the problem by studying it.

That year, in what was arguably an act of masochism, I began the coursework required to sit for a medical-board exam on dietetics, metabolism and appetite. By earning another qualification, I thought, I would credential my way to realising my goal. After shifts at work, during breaks or once the kids were asleep, I would attend virtual lectures and pore over board-review books in a quest to understand the body’s metabolism.

I immersed myself in the physiology of exercise, the thermodynamics of nutrition, and the neuroendocrine regulation of appetite. But this knowledge didn’t break my pandemic eating habits. Cupcakes and ice cream and cookies didn’t call to me any less. And big food corporations were winning the bet that Lay’s potato chips first made back in the 1960s with its “Betcha can’t eat just one” ad campaign. So, I found myself reaching for Double Stuf Oreos while flipping through my medical textbooks and scarfing chocolate bars even as I correctly answered my practice-exam questions.

My body refused to be disciplined by my intellectual mastery of its operations. I passed the board examination, but my appetite for sugar didn’t change. I was left with more questions than I had when I started. Was sugar really a problem? Or had I internalised hangups about desire from the culture at large? Why did my soul feel so inexplicably sick – so unsatisfied – with the outcome of my first effort to quit that I tried it all again? And what does my “success” – I’ve been sugar-free for a year now – even mean?

I turned to Plato – a man occupied by appetite – for some answers. In his body map of the soul, the stomach was the dwelling place of desire. Reason, of course, resided in the head, while courage rested in the chest. In this tripartite architecture, it was up to reason – with the help of courage – to subjugate appetite and elevate the individual. The thinking went that if we could just rule our stomachs, we might be able to hold our heads up high and our chests out wide. For the Greeks, the right moral posture was key to the good life, or eudaimonia.

Early medical science in the west borrowed heavily from Plato, beginning with Aristotle, who practised and taught medicine throughout his life. Aristotle agreed that eudaimonia could be realised by moderating the visceral and sensual appetites. He saw the heart as the vessel of intelligence, and arguably the most virtuous of organs. In his hypothesis, the heart occupied – physically and figuratively – a central place in the body, controlling other organs. The brain and lungs played supporting roles, merely cooling and cushioning the heart. The heart was, for Aristotle, where reason flowed.

Five hundred years later, the Greek anatomist and surgeon Galen challenged the centrality of the heart but still adhered closely to Plato’s triadic notion of the soul. Galen’s treatises, foundational to the development of modern medicine, are suffused with Platonic assumptions, and he painstakingly tried to stitch the divided parts of the soul – the rational, the spirited and the appetitive – on to specific organs in the human body.

In a striking display of topographical certitude, Galen writes in On the Doctrines of Hippocrates and Plato: “I do claim to have proofs that the forms of the soul are more than one, that they are located in three different places … and further, that one of these parts [rational] is situated in the brain, one [spirited] in the heart, and one [appetitive] in the liver. These facts can be demonstrated scientifically.”

The Harvard classicist Mark Schiefsky writes that, in Galenic physiology, equilibrium is understood “as a balance of strength between the three parts; the best state is when reason is in charge, the spirited part is strong and obedient, and the appetitive part is weak”.

Should we be sceptical of this aspiration to tame appetite? Sigmund Freud doubted whether desire could ever be so readily controlled. In tossing Plato’s map aside, Freud erased the “soul” and instead sketched a three-part atlas of the “self” and its ratio of desires and repressions – endlessly fractured, negotiating between order (superego), consciousness (ego) and appetite (id). For Freud, appetites could not be overcome but only better managed. Perfect harmony and permanent equilibrium were nowhere in sight. Rather, in Freud’s idea of the self, anxiety for order loomed above the ego, with desire buried beneath it. Appetite was the subterranean tether that consciousness could never escape, but only sublimate.

There was something talismanic about my focus on sugar. So often, liberty is conceived of as the ability to say yes to things. To make affirmative choices: to open this door or that window. But there is also a flipside to that freedom: the power to say no. To refuse. Increasingly during the pandemic, I felt powerless in the face of my cravings. If there was a knock at the door of appetite, a tap on the window of impulse, I had to answer it. And this felt shameful. Why couldn’t I say no? And why was realising this so painful?

I don’t pretend to anything approaching total understanding of my motivations. But there were a few loosely detected currents worth illuminating here. For one thing, not being able to say no to sugar sometimes felt like a form of bondage to the demands of the body, the very body that I was eager to assert power over, particularly during a global health crisis that was damaging bodies everywhere.

If I couldn’t control this plague, could I not at the very least control myself? I wonder now if this insistence on regulating appetite was my sublimated response to the coronavirus’s immense death toll – a way of denying mortality in the midst of its excess. In this respect, perhaps there was not as much separating me from other kinds of pandemic deniers as I would like to believe. Were we all just coping with the inexorability of our decay – laid painfully bare by Covid-19 – in different ways?

Maybe. But there was something beyond the exigencies of the pandemic on my mind as well. The inability to resist sugar cravings – to break the habit – seemed like a victory of the past over the present. It felt like the triumph of the mere memory of pleasure over real satisfaction in the moment. Saying no to that memory – the neurological underpinning of craving – became important, because it felt like the only way to say yes to imagination. “I am free only to the extent that I can disengage myself,” the philosopher Simone Weil wrote.

Detachment from an indulgence, however small, felt like a way to stop being beholden to an old storehouse of desires (and aversions and beliefs). Developing the ability to refuse to reach for the cookie was also a way to break free from the impulse to reach for patterns of the past, from the compulsion to replicate yesterday at the expense of tomorrow. It’s the trick of habit to convince us that we are reaching forward, even as we are stepping back. Or, as the British scholar of asceticism Gavin Flood elegantly summarises: “The less we are able to refuse, the more automated we become.”

If Freud dismantled the soul, modern medicine mechanised what he left of the self. But where Freud’s psychoanalytic theory allowed for a pinch of poetry, materialist models hold comparatively dry sway today. A look at the biomedical literature on appetite reveals a tortuous mix of neural circuits and endocrine pathways. What’s clear is that if there was a moral aspect of appetite for ancient philosophers and physicians, it’s not readily discernible in the language of contemporary scientific literature.

There are upsides to this development. In the modern era, medicine’s tradition-bound framing of appetite as a moral problem has been demoralising for patients, who often felt – and still feel – objectified, policed and discriminated against by institutions that sermonise about it. The stigmatisation of appetite remains pervasive in the culture, in and out of medicine. The loss of at least an explicit moral charge in the scientific literature is a welcome shift.

In the century or so since Freud’s conjectures, appetite has been atomised by medicine into a problem of eating, or more specifically, of fighting the body’s tendency toward “disordered” eating. In the pursuit of better and longer lives, maladies of appetite – of eating too much, too little, or not the right kinds of food – have been studied and treated with varying degrees of success. The empirical study of digestion and appetite in the laboratory moved hunger from the moral arena into a biochemical one. Still, in both experimental physiology and clinical medicine, the ancient impulse to locate the appetite persisted: was it in the body or in the mind? Lines were drawn – and defended – between diseases of the stomach and diseases of the psyche.

What was at stake in the difference? Pinning down the appetite – claiming it belonged to the gut or the brain – was arguably the first in a series of steps leading to its regulation. Understood this way, medicine’s mission to uncover the mechanisms of appetite, despite the erasure of the soul from scientific databases, cannot escape Plato’s legacy. Whether we’re trying to improve or curtail appetite, we seem unable to resist the desire to control it.

It would have been different – I wouldn’t have felt the need to go all-or-nothing with sugar – if I could have simply walked away after a few bites. But increasingly during the pandemic, I wouldn’t stop even after I was full. What started off as pleasure would morph into painful excess. Sure, there’s pleasure in abundance, in overdoing a thing. But I found myself barrelling past that threshold.

While studying for the board exam in my first, failed attempt at going sugar-free, I was also using various apps and devices to keep track of my body. I had long used a smart watch to log my steps and workouts. I was also using a calorie-tracking app, studiously punching in numbers for every meal and scheming how much I could eat and still remain under the calorie limit. But all that logging and calculating felt joyless and anxiety-ridden. Sometimes, at a meal, in the middle of tallying up numbers like an accountant, I’d explain to impatient friends and family that “I’m just entering my data”. It was a lot of data.

I grew weary of all the inputting, and so I switched to an app with more of a behavioural focus. This app still had me tracking calories, but also came with recipes, a personal coach and “psychology-based” courses, as part of what the company calls your “journey”. The courses were a welcome shift from the myopic focus of calorie counting, and chatting with a coach added an opportunity to get some clarity about my goals.

The coach would share chipper motivational advice and provide tips to overcome obstacles. I diligently went through the app’s courses, answered its behavioural questions and followed its nudges. There were a few weeks where I was able to go sugar-free, but after a couple of months, the coaching advice seemed more and more generic, and the courses too simplistic when I was already spending so much time studying for my upcoming exam. I lost interest and reverted to simply recording calories.

I eventually passed that exam without much to show for it in terms of changes to my nutritional habits. I needed something different, a way to hold myself accountable and mean it. I stumbled upon another app that described itself as being “on a mission to disrupt diet culture and make our relationship with food, nutrition – and ourselves – healthier for good”. It promised live coaching calls with a certified nutritionist, shared recipes, and even offered to tailor my coaching with a vegetarian dietician. It did not ask you to track calories or enter food items from a database. All it wanted was for you to send pictures … of your food. It felt radically different than tapping numbers into a screen: someone else would see this.

The app’s slogan was “100% accountability and 0% judgment”. But, to be clear, it was the judgment that I came for. The simple fact that my nutritionist wouldn’t just know but also actually see what I was eating was the killer feature. I answered a questionnaire about my dietary habits and goals. I made it clear that I wanted to go sugar-free, and repeated as much to my nutritionist during a preliminary call.

She didn’t exactly endorse this goal, but rather acknowledged it as something that was important to me and gently marked it as a topic we would come back to, adding that she hoped I would get to the point where a more balanced approach would suffice. I told her we’d see. I made a promise to take a photo of every meal, good or bad. She kindly reminded me there are not “good” and “bad” foods, and we were on our way.

It’s been a year since I downloaded the app. Every day since then, I have taken a photo of every morsel of food I’ve eaten, whether it’s a handful of pistachios, a salad or a veggie burger. In every one of those pics, every day, I have been sugar-free. I’ve eaten more vegetables and greens and fruits than I’ve probably ever eaten in my life. My plates look balanced (I make sure of it). I take care to snap pictures that look nice for my nutritionist. Though she never judges me negatively, I look forward to the raising-hands emoji and approving words she sends if she sees a salad with asparagus and garlic balsamic drizzle and avocado up front.

Like an influencer on Instagram, I’ll take another shot if the lighting isn’t quite right, or if the framing is off. It’s been satisfying to upload a cache of sugar-free images, all beautifully arranged on the app’s user interface. Even more satisfying has been avoiding the feeling of being the guy who said he’d go sugar-free only to end up sending in pictures of donuts and cookies. Compared with calorie logs and food diaries, the prospect of someone else seeing photos of what I’m eating has made the potential pain of falling short feel more proximate than the pleasure of eating sweets. So I just stopped eating sugar. And it’s still working. Was this all it took?

Perhaps the persistent effort to control appetite, replicated across many cultures and times, reveals just how vigorously it resists that very control. The seemingly endless proliferation of constraints on appetite – from the disciplinary to the pharmacological – underscores its untamable quality. And yet the training of appetite – both as physiological fact and, more abstractly, as desire – can function as an ascetic practice. In this paradigm, as religion scholars such as Flood argue, the negation of desire amplifies the subjectivity of the individual.

Depriving the body paradoxically accentuates the conscious subject, because hunger unsatiated allows the pangs of the self to be felt more acutely, and renders being more vivid. In other words, appetite unfulfilled creates the conditions for expanding self-awareness. This is seen in the Bhagavad Gita in the figure of the ascetic, one who has renounced the pull of appetite and “attains extinction in the absolute” – in seeming contradiction, gaining infinity through loss.

If philosophy is after theoretical victories, science aims more concretely to hack, or at least short-circuit, a physiological truth. Take, for example, gastric bypass surgery, an operation that cuts the stomach into two parts (leaving one functional thumb-size pouch alongside a larger remnant) and radically reconstructs separate intestinal systems for each segment to restrict the amount of food that can be eaten. By shrinking the stomach to fool the mind into feeling satisfied with less, this surgery builds on growing recognition that the long-embraced brain-gut divide is far more porous than previously thought.

Recipients of the surgery generally do well in the short term, with reduced appetite, marked weight loss, better control of diabetes and improved health markers. But the percentage of patients who “fail” in the long-term after bariatric surgery (ie achieve less than half of excess weight loss) is reportedly as high as 35%. During that first post-op year, studies suggest, an influx of appetite-reducing intestinal hormones decreases patients’ urge to eat. Crucially, however, there are questions about the duration of those salutary hormonal changes and their effectiveness in controlling appetite as post-surgical days add up.

For a significant proportion of patients, even surgically shrinking the stomach – the historical seat of hunger – doesn’t offer complete freedom from unchecked appetite. This fact is not entirely surprising, given what is now known about the multiple neuroendocrine nodes that govern appetite, but it poses a conundrum for medical science: can appetite, as Freud asked in his own way, ever be fully controlled? And if not, is it a wonder that patients turn back to more personal strategies to pursue the work that prescriptions and sutures leave undone?

I can’t say I fully understand why teaming up with a nutritionist on an app worked so well, so fast. Would sharing pics of my food with friends and family in a group chat or a Facebook page have been as effective? Probably not. The issue seemed to be one of epistemology. My friends and family wouldn’t have been as suitable an audience, since they don’t just know me as I am, but also as I was. That knowledge of what’s bygone necessarily shapes the stories we can tell and believe about one another.

But with my nutritionist reviewing pictures of my meals from god knows what timezone, the app created an epistemological gap into which both of us could step. It was within this gap that my future self – the self I aspired to be, still unrealised and therefore unknown – could intercede in the present with slightly less inertia from the past. The app provided an illusion that daily life could not, offering a space for the dormant commitments of the future to come to fruition in the present. A space for imagination to overcome memory.

As my sugar-free streak extended, I began to wonder about the future of this illusion. Was it a rare example of tech living up to its glitteringly naive promise of liberation? Or was this an instance of the digital panopticon yet again determining our ability to imagine ourselves, revealing just how far-reaching its gaze is? And, more practically, I began thinking about how long I needed to keep eating this way. The cravings that had knocked so loudly at my door at the start of the pandemic now softly shuffled from leg to leg right outside it. I could still hear their shoes creaking at the threshold, but they couldn’t force their way in anymore. Things seemed quiet, maybe a little too quiet.

Whereas the Greeks sought to regulate appetite in pursuit of the good life, perhaps what is sought after today is a facsimile of it: a corporatised eudaimonia-lite, where the goal isn’t virtue but efficiency; not equanimity, but productivity. In this view, it’s not a better way to live we’re seeking, just a less painful way to work and die – all while “looking good”. A more charitable and poetic possibility is that the constraint of appetite continues to appeal because it provides the same sense of structure to selfhood that metre does to a poem: a limit against which to construct narrative unity of the psyche.

As fascinating as it is to think about this question, even more essential ones – about the links between appetite, scarcity and loss – loom in the writings of Toni Morrison, a writer who provides a necessary counterbalance to the obsession with appetite restriction in societies glutted with luxury. In particular, I’m thinking of Beloved, which tells the story of human beings struggling for survival and wholeness in the face of slavery’s horrors. In portraying this struggle, Morrison uses the language of food and appetite to unfurl narratives saturated with the metaphysics of hunger: the difficulty of sating the self; the confusion between hunger, history and hurt.

I was struck by this unexpected resonance while rereading the book in the middle of my bid to quit sugar. Morrison’s characters think about what it would mean to satisfy what the narrator calls their “original hunger” – and whether doing so is even possible. They imagine getting to a place “beyond appetite”, but are also compelled by history to contemplate the price of doing so.

In my reading of the book, the denial of hunger risks becoming a costly exercise in self-abnegation – a severing of self from history, of self from self – whose consequences Plato doesn’t seem to fully consider, but of which Morrison is deeply wary. I think Morrison is, like Freud, sceptical of the metaphysicians who would have us render hunger subordinate. But where Freud is an anti-idealist, Morrison appears willing to reach for hunger, perilous though it may be. Straddling both the risk of self-destruction posed by contact with the original hunger, and the anguish of self-denial created by leaving it unrecognised, Morrison casts her faith in the human ability to embrace the beautiful, blood-hued predicament of incarnation.

About 10 months into my sugar-free life, a scent from the pantry hit me like it hadn’t for a while. My wife had just baked chocolate-chip cookies for our kids as a treat. By then, I was unfazed by sweets around the house. They might as well have been made of stone. But, at the end of a long day, I found myself unexpectedly at the pantry door. Minutes passed. After a while, I opened the plastic container and inhaled. My mouth began to water. I could almost taste the cookies.

I remembered the delightful way the chocolate melted at the back of the tongue. I remembered the satisfaction of soaking a warm cookie in milk. A part of my brain was humming, eager to replicate the memory of sugar, butter and dough on the cortex. Another part was already dreading the pain of not being able to stop. I picked up the cookie and, having built nearly a year’s worth of muscle memory, simultaneously opened the app on my phone. I centred the cookie in the glowing frame and was about to press send when, looking at the screen, it hit me: what would my nutritionist think?

As of this writing, my streak remains unbroken, despite a few close calls. In many ways the story seems to be going the way I intended: I am eating well balanced, sugar-free meals and haven’t counted a calorie in more than a year. The cravings that were troubling me aren’t gone, but the future version of me – the unsweetened aspirant – grows closer with each picture I snap. I feel the spiritual and physical acuity that comes with ascetic practice.

But I also feel some qualms about neglecting Morrison’s original hunger, with all its attendant risks and possibilities. I think about how I have sacrificed memory at the altar of imagination, recognising the chance that imagination ends up being overrated and memory proves to be the last storehouse of joy. But then I remind myself that visions like Morrison’s may be too large, too untimely for us to inhabit. They come from a place we haven’t arrived at. At least not yet.

Source: Why is it so hard to control our appetites? A doctor’s struggles with giving up sugar | Health & wellbeing | The Guardian


Life After Death: How The Pandemic Has Transformed Our Psychic Landscape

Modern society has largely exiled death to the outskirts of existence, but Covid-19 has forced us all to confront it. Our relationship to the planet, each other and time itself can never be the same again

We have been asked to write about the future, the afterlife of the pandemic, but the future can never be told. This at least was the view of the economist John Maynard Keynes, who was commissioned to edit a series of essays for the Guardian in 1921, as the world was rebuilding after the first world war.

The future is “fluctuating, vague and uncertain”, he wrote later, at a time when the mass unemployment of the 1930s had upended all confidence, the first stage on a road to international disaster that could, and could not, be foreseen. “The sense in which I am using the term [uncertain],” he said, “is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest 20 years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970.

About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.” This may always be the case, but the pandemic has brought this truth so brutally into our lives that it threatens to crush the best hopes of the heart, which always look beyond the present. We are being robbed of the illusion that we can predict what will happen in the space of a second, a minute, an hour or a day.

From one moment to the next, the pandemic seems to turn and point its finger at anyone, even at those who believed they were safely immune. The distribution of the virus and vaccination programme in different countries has been cruelly unequal, but as long as Covid remains a global presence, waves of increasing severity will be possible anywhere and at any moment in time.

The most deadly pandemic of the 20th century, the Spanish flu at the end of the first world war, went through wave after wave and lasted for nearly four years. Across the world, people are desperate to feel they have turned a corner, that an end is in sight, only to be faced with a future that seems to be retreating like a vanishing horizon, a shadow, a blur. Nobody knows, with any degree of confidence, what will happen next. Anyone claiming to do so is a fraud.

At such a time, only so much faith can be placed even in the governments who have shown the surest touch in dealing with the pandemic. Anyone living under regimes whose acts have felt measured and thoughtful has watched with dismay the death-dealing denials of national leaders from India to Brazil. No country is exempt, which is just one reason why the monopoly of vaccinations by the privileged countries is so manifestly self-defeating.

If the wretched of the earth are not protected, then no one is. An ethical principle – one that, in an ideal world, should always apply – is pushing to the fore, taking on an unmistakable if ghostly shape. Nobody can save themselves, certainly not for ever, at the cost of anybody else.

In the UK, we legitimately rail against an incompetent government whose repeated refusal to take measures called for by its scientific advisers has given us one of the highest Covid death tolls of the western world. It is guilty of negligence, but also of violating the unspoken contract between government and governed, by leaving the people alone with their fear. Though officially denied, the policy at the outset of the pandemic, and reviewed this summer, seems to have been “herd immunity”.

If the idea has been so disturbing, it is not just because it runs the risk of a virus run rampant and mutating into vaccine-resistant variants, or because of the sinister undercover calculations of the acceptable level of the dead that it entails. Perhaps even more distressing, the avalanche of deaths that it appeared to sanction reminds us of the reality that death can happen at any time and eventually comes for us all.

“Let the bodies pile high,” words allegedly spoken (though officially denied) by Boris Johnson, have lingered in the atmosphere and leave any vestiges of safety in shreds. A stalled economy, whose serious consequences must indeed be recognised, is – or so we were officially told – more alarming than mass deaths.

Sigmund Freud once stated that no one believes in their own death. In the unconscious, there is a blank space where knowledge of this one sure thing about our futures should be. If the pandemic has changed life for ever, it might therefore be because that inability to countenance death – which may seem to be the condition of daily sanity – has been revealed for the delusion it always is.

To be human, in modern western cultures at least, is to push the knowledge of death away for as long as we can. “There used to be no house,” the Marxist critic Walter Benjamin wrote in his 1936 essay The Storyteller, “hardly a room, in which someone had not once died.” In modern life, on the other hand, he argued, dying had been pushed beyond the perceptual realm of the living, although his diagnosis did not of course include the destitute nations or anticipate the impending war.

In the midst of a pandemic, death cannot be exiled to the outskirts of existence. Instead, it is an unremitting presence that seems to trail from room to room. One of the as yet unanswered questions of the present moment is how soon the vaccine rollout, among the privileged nations, will allow hospitals to return once and for all to curing and caring for life rather than preparing for death, so that doctors and nurses will no longer be faced with the inhuman choice between cancer and Covid.

“Not today,” one palliative care nurse found herself saying in the midst of the first wave, to patients cut off from their loved ones, the terror visible in their eyes, when they asked her if they were going to die. “Not today” – she did not even pretend to know more. What on earth, we might then ask, does the future consist of once the awareness of death passes a certain threshold and breaks into our waking dreams?

What, then, is the psychic time we are living? How can we prepare – can we prepare – for what is to come? If the uncertainty strikes at the core of inner life, it also has a political dimension. Every claim for justice relies on belief in a possible future, even when – or rather especially when – we feel the planet might be facing its demise. This is all the more visibly the case since the pandemic has allowed the bruises of racial, sexual and economic inequality in the modern world to rise mercilessly to the surface of our social arrangements for everyone, unavoidably, to see.

The misery of impoverished peoples, black men gunned down by police on the streets, women trapped in their homes during lockdown, assaulted and murdered by their partners – all these realities, each with its history of racial and sexual violence, are pressing harder on public consciousness, as they move from the sidelines on to the front page. The psychological terrain is starting to shift.

Alongside the terror, and at least partly in response, a renewed form of boldness, itself relying on longstanding traditions of protest, has entered the stage – a new claim on the future, we might say. One by one, people are calling out the systemic forms of discrimination that are so often passed off as the norm.

People will no longer accept denials that the problem exists, such as the UK government-commissioned Sewell report, published in March, which rejected the fact of institutional racism; or tolerate the more deeply entrenched hatreds, as expressed in the visceral rage and threats against the marchers of Black Lives Matter; or leave unchallenged the studied indifference towards injustice that makes people turn aside and casually assume that this is just how the world is and always will be.

Meanwhile, it becomes more and more obvious that endless growth and accumulation of wealth involves an exploitation of humans and resources that is destroying the planet. First, in the pandemic itself, triggered by the virus crossing the barrier between humans and other animals – a crossing that many scientists believe was caused by interference in the food chain.

This in itself is a consequence of large-scale industrial farming and the wildlife trade, which are boosting the production of deadly pathogens. Second, in the bodies of people in flight from war zones, washing up on the shores of the so-called “developed” nations. Then, in the droughts, floods, wildfires, superstorms, heatwaves, earthquakes and hurricanes, under pressure of climate disaster, as if life on the planet had already reached the end of days.

“We were terrified of this new disease,” said Maheshi Ramasami, senior clinical researcher on the Oxford AstraZeneca team, who recently described their slow realisation of what they were facing. “There was one moment when somebody said to me, ‘Is this what the end of the world feels like?’”

Today, everything is telling us that we cannot go on making all the bad decisions that have been made in the name of progress. Being driven – working harder and harder, making more and more money – is not a virtue or some kind of ethical principle to adhere to, but a sure sign of greed, panic and decay.

Shooting yourself into outer space, as Richard Branson and Jeff Bezos have raced against each other to do, is a narcissistic sideshow put on by obscenely wealthy men. That they are men is surely key (the last gasp of the phallus and all that). The sky is no limit. “Expansion is everything,” wrote Cecil Rhodes, mining magnate and prime minister of the Cape Colony from 1890 to 1896, “I would annex the planets if I could.”

Rhodes advised the British government that the export of instruments of violence to Africa in order to secure their investments was a holy duty. He also passed laws to drive black people off their land, limiting the areas where they could then settle. The laws he put in place are considered by many historians to have provided a foundation for what later became apartheid.

Rhodes’ statue at the University of Cape Town was brought down by student protests in one of the most resonant political actions of the times, but the one outside Oriel College in Oxford is still standing. Either way, the organising principle and fantasy – colonising the universe to infinity – endures. “We know there is life on Mars,” the associate administrator of Nasa’s Science Mission stated in 2015, “because we put it there.”

The process is known as “forward contamination” – you destroy at exactly the same moment that you make something grow. Last March, one of Elon Musk’s SpaceX ships crashed back to earth in Texas. “We’ve got a lot of land with nobody around, so if it blows up, it’s cool,” he is reported to have observed a few years previously. The explosion scattered debris over the fragile ecosystem of state and federally protected lands in the Lower Rio Grande valley, a national wildlife refuge that is home to vulnerable species.

It is surely no coincidence that such Faustian pacts are being struck when the fragility of life on earth has never been more glaring. These intrepid space explorers remind me of the stinking rich individuals who try to barter with the boatman on their way to the island of the dead in Philip Pullman’s The Amber Spyglass.

Bezos is said to be pouring millions into Altos Labs, a Silicon Valley gene reprogramming company searching for the secrets of eternal life. They think their money will save them, while the bodies of the less privileged crumble and fail (the sums already spent on these space extravaganzas would pay for vaccines across the world for everyone). This much seems clear. If we want to prepare for a better, fairer, life – if we want to prepare for any kind of future at all – we must slow the pace and change our relationship to time.

So what happens if we enter the realm of psychic time, the inner world of the unconscious where the mind, which we can never fully know or master, constantly flickers back and forth between the different moments of a partly remembered, partly repressed life?

This is a vision of human subjectivity that completely scuppers any idea of progress as a forward march through time. The British psychoanalyst DW Winnicott, writing in 1949, described a patient who had to go looking for a piece of their past in the future, something they could barely envisage in the present, and which, when it first happened, had been too painful for them to fully live or even contemplate.

Seen in this light, the relentless drive to push ourselves on and on, as if our lives depended on it – killing us, more likely – reveals itself as a doomed effort to bypass inner pain. The first hysterical patient in the history of psychoanalysis – analysed by Freud’s colleague Josef Breuer – fell ill as she sat nursing her dying father, overwhelmed by an inadmissible combination of resentment and sorrow.

Her anger at the suffocating restrictions of her life was a feeling that, as a young Viennese daughter, she could not allow herself, at least consciously, to entertain. Even without a pandemic, it is rare for such agonising ambivalence towards those we love and lose to be spoken. There is a limit to how much we can psychically tolerate. This remains the fundamental insight of psychoanalysis, never more needed than today.

When Boris Johnson slipped under cover of night to visit the memorial wall along the Thames, avoiding daytime mourners, an act generally seen as an insult to all those whom the wall is designed to commemorate; or when he blustered and refused for 18 months to meet the bereaved families of people who have died of Covid-19, he was refusing public accountability, while at the same time making a statement, no doubt unintentionally, about what he could not bear to countenance.

(He has now met and assured them that there will be a public inquiry, for which, somewhat unpromisingly, he will take personal charge himself.) He was also revealing the gulf between official life and the inward life of the mind. Grief brings time shuddering to a halt. As beautifully rendered by poet Denise Riley after the death of her son, it is time lived without its flow. When you are grieving, there is nothing else to do but grieve, as the mind battles against a knowledge that no one ever wishes to own.

Even the term “the bereaved” is misleading, as it suggests a group apart, and something over and done with, as if you can neatly place to one side and sign off on something that feels, for the one afflicted, like an interminable process (which must feel interminable, at least to begin with, if it is ever to be processed at all).

Seen in this light, Johnson’s “boosterism”, his boyish insouciance, appears as a psychological project in itself. What must be avoided at all cost is any glimmer of anguish. Anything can and must be managed. Everything is going to be all right – a mantra whose irreality has never been more glaring.

All that matters is the endlessly deferred promise of good times ahead. Hence too, I would suggest, the evasions and obfuscations on everything from climate breakdown, to “levelling up”, to social care – for none of which there appears to be anything sufficiently ambitious or well-resourced to be dignified with the word “plan”.

The same goes for the fiasco of “freedom day” on 19 July this year, when most remaining pandemic restrictions were lifted in the UK, a day people in England were exhorted to celebrate. For many in the UK and across a tensely watching world, it felt instead like an occasion for dread. “Needless suffering”, “disastrous myopia” is how observers from New York to the capitals of Europe have described UK government recklessness as case numbers have steadily risen close to their highest levels since then.

Each time, the same pattern. The political reality of the moment is ignored by subduing the difficult forms of mental life that would be needed in order to face it. In one of his most famous statements, Freud described the hysterical patient as suffering “mainly from reminiscences”. From that moment on, psychoanalytic thought has committed itself to understanding how flight from the past freezes people in endlessly repeating time, robbing them of any chance for a life that might be lived with a modicum of freedom.

You have to look back, however agonising, even if it goes against all your deepest impulses, if you are to have the slightest hope of getting to a new stage. This, too, has become more obviously true as people are crossing over from the space of intimacy and privately stored memories to tell their stories in the public domain. When women step up – and it is mainly if not exclusively women – to recount harrowing tales of sexual abuse from bygone years, it is part of a bid to claim the past as the only way of allowing a future to emerge no longer blocked by violent memory.

During lockdown, psychoanalysts reported a flood of untold memories from their patients, as if the physical distance and reduced intimacy of the virtual session, combined with the sheer urgency of the moment, were finally giving them the courage to speak. One glance at today’s culture wars will confirm how central this type of reckoning is to our ability to understand the political urgencies of the present.

What is causing the most trouble, and provoking the strongest rebuttals and hatred, is the fearlessness with which the damaged, disadvantaged and dispossessed are calling up the legacy of the past as their passage to a viable future. Their resolve to combat historic and entrenched injustice is surely exemplary. Most vocal of all has been the anger unleashed by the project to bring down the statues of imperial magnates – beginning with Rhodes – or to acknowledge that colonial Britain was involved in the slave trade at all.

At the time of abolition, British slavers were bought off by the government with compensation worth $17bn. Those funds have massively increased over hundreds of years, leaving the next generations to enjoy levels of prosperity that – not surprisingly and even in the face of incontrovertible scholarship and evidence – they have been reluctant to accept were sourced in ill-gotten gains.

When The Legacies of British Slavery, the University College London database charting this history, first opened in 2016, within days it was flooded in almost equal measure by those wishing to know the truth of the past and those wishing no less fervently to deny it.

One place to begin would be to make room for the complex legacies of the human mind, without the need to push reckoning aside. Past wrongs would not be subject to denial, as if our personal or national identities depended on a pseudo-innocence that absolves us of all crimes. Let the insights of the analytic couch percolate into our public and political lives and, no less crucially, the other way round (we need to acknowledge the weight of historical affliction on our dreams).

In my ideal scenario, social trauma and injustice would not be seen as belonging to another universe from our most wayward fears and desires. Instead, they would both find their place at the negotiating table, as we tentatively begin to draw the outlines of a better world. Meanwhile, it would help to take responsibility for failure in relation to the pandemic – to answer the cry for redress, for official investigations, or simply for public acknowledgment of the avoidable disaster that millions have been living, from the UK to India to Brazil. None of this, though, will bring back the thousands who should not have died.

In psychoanalytic thought, failure and fragility are a crucial part of who we are (only by knowing this can we make the best of our lives). Failure, too, has the strongest political resonance today, as many of us now anxiously wait to see if the idea will be allowed, in any meaningful or lasting way, to enter the collective political mind.

Will the collapse of the western powers in Afghanistan be a gamechanger? Or, despite widespread agreement that we have been witnessing a catastrophic failure of policy, will any such recognition turn out to be a fleeting gesture, no more than a pause in the preparations for endless war? Squabbling over whether the US is a “big” or “super” power – according to the UK defence minister, only a country willing to exert global force has a right to the second epithet – is hardly reassuring.

So, how will the pandemic be lived when it is no longer – as we can only hope – at the forefront of people’s consciously lived lives? How will it be remembered? Will it be a tale of vaccine triumph, with no mention of the murderous injustice of unequal global distribution; a story of government negligence and accountability; or an acceptance of the ongoing grief for the dead?

Responding to a suggestion to make the memorial wall permanent, the artist Rachel Whiteread suggested it should be “left just to be and then gradually disappear. To have its quietness.” You cannot, she stated, memorialise something that is still going on; a more permanent memorial will need distance and time.

When we reach that point, the challenge will be to resist the temptation to brush everything under the carpet, as if the best hope for the future were to go back to normal and blithely continue with matters as they were before: push death aside, treat swaths of the Earth’s inhabitants as dispensable, drive the planet to its end. On the other hand, a world that makes room for memory and justice would be something else. There is still everything to play for.

Source: Life after death: how the pandemic has transformed our psychic landscape | Death and dying | The Guardian

More content:

Can Lucid Dreaming Help Us Understand Consciousness?

The ability to control our dreams is a skill that more of us are seeking to acquire for sheer pleasure. But if taken seriously, scientists believe it could unlock new secrets of the mind

Michelle Carr is frequently plagued by tidal waves in her dreams. What should be a terrifying nightmare, however, can quickly turn into a whimsical adventure – thanks to her ability to control her dreams. She can transform herself into a dolphin and swim into the water. Once, she transformed the wave itself, turning it into a giant snail with a huge shell. “It came right up to me – it was a really beautiful moment.”

There’s a thriving online community of people who are now trying to learn how to lucid dream. (A single subreddit devoted to the phenomenon has more than 400,000 members.) Many are simply looking for entertainment. “It’s just so exciting and unbelievable to be in a lucid dream and to witness your mind creating this completely vivid simulation,” says Carr, who is a sleep researcher at the University of Rochester in New York state. Others hope that exercising skills in their dreams will increase their real-life abilities. “A lot of elite athletes use lucid dreams to practise their sport.”

And there are more profound reasons to exploit this sleep state, besides personal improvement. By identifying the brain activity that gives rise to the heightened awareness and sense of agency in lucid dreams, neuroscientists and psychologists hope to answer fundamental questions about the nature of human consciousness, including our apparently unique capacity for self-awareness. “More and more researchers, from many different fields, have started to incorporate lucid dreams in their research,” says Carr.

This interest in lucid dreaming has been growing in fits and starts for more than a century. Despite his fascination with the interaction between the conscious and subconscious minds, Sigmund Freud barely mentioned lucid dreams in his writings. Instead, it was an English aristocrat and writer, Mary Arnold-Forster, who provided one of the earliest and most detailed descriptions in the English language in her book Studies in Dreams.

Published in 1921, the book offered countless colourful escapades in the dreamscape, including charming descriptions of her attempts to fly. “A slight paddling motion by my hands increases the pace of the flight and is used either to enable me to reach a greater height, or else for the purpose of steering, especially through any narrow place, such as through a doorway or window,” she wrote.

Based on her experiences, Arnold-Forster proposed that humans have a “dual consciousness”. One of these, the “primary self”, allows us to analyse our circumstances and to apply logic to what we are experiencing – but it is typically inactive during sleep, leaving us with a dream consciousness that cannot reflect on its own state. In lucid dreams, however, the primary self “wakes up”, bringing with it “memories, knowledge of facts, and trains of reasoning”, as well as the awareness that one is asleep.

She may have been on to something. Neuroscientists and psychologists today may balk at the term “dual consciousness”, but most would agree that lucid dreams involve an increased self-awareness and reflection, a greater sense of agency and volition, and an ability to think about the more distant past and future. These together mark a substantially different mental experience from the typically passive state of non-lucid dreams.

“There’s a grouping of higher-level features, which seem to be very closely associated with what we think of as human consciousness, which come back in that shift from a non-lucid to a lucid dream,” says Dr Benjamin Baird, a research scientist at the Center for Sleep and Consciousness at the University of Wisconsin-Madison. “And there’s something to be learned in looking at that contrast.”

You may wonder why we can’t just scan the brains of fully awake subjects to identify the neural processes underlying this sophisticated mental state. But waking consciousness also involves many other phenomena, including sensory inputs from the outside world, that can make it hard to separate the different elements of the experience. When a sleeper enters a lucid dream, nothing has changed apart from the person’s conscious state. As a result, studies of lucid dreams may provide an important point of comparison that could help to isolate the specific regions involved in heightened self-awareness and agency.

Unfortunately, it has been very hard to get someone to lucid dream inside the noisy and constrained environment of an fMRI scanner. Nevertheless, a case study published in 2012 confirmed that it can be done. The participant, a frequent lucid dreamer, was asked to shift his gaze from left to right whenever he “awoke” in his dream – a dream motion that is also known to translate to real eye movements. This allowed the researchers to identify the moment at which he had achieved lucidity.

The brain scans revealed heightened activity in a group of regions, including the anterior prefrontal cortex, that are together known as the frontoparietal network. These areas are markedly less active during normal REM sleep, but they became much busier whenever the participant entered his lucid dream – suggesting that they are somehow involved in the heightened reflection and self-awareness that characterise the state.

Several other strands of research all point in the same direction. Working with the famed consciousness researcher Giulio Tononi, Baird has recently examined the overall brain connectivity of people who experience more than three lucid dreams a week. In line with the findings of the case study, he found evidence of greater communication between the regions in the frontoparietal network – a difference that may have made it easier to gain the heightened self-awareness during sleep.

Further evidence comes from the alkaloid galantamine, which can be used to induce lucid dreams. In a recent study, Baird and colleagues asked people to sleep for a few hours before waking. The participants then took a small dose of the drug, or a placebo, before practising a few visualisation exercises that are also thought to modestly increase the chances of lucid dreaming. After about half an hour, they went back to sleep.

The results were striking. Just 14% of those taking a placebo managed to gain awareness of their dream state, compared with 27% taking a 4mg dose of galantamine, and 42% taking an 8mg dose. “The effect is humongous,” says Baird.

Galantamine has been approved by Nice to treat moderate Alzheimer’s disease. It is thought to work by boosting concentrations of the neurotransmitter acetylcholine at our brain cells’ synapses. Intriguingly, previous research had shown that this can raise signalling in the frontoparietal regions from a low baseline. This may have helped the dreaming participants to pass the threshold of neural activity that is necessary for heightened self-awareness. “It’s yet another source of evidence for the involvement of these regions in lucid dreaming,” says Baird, who now hopes to conduct more detailed fMRI studies to test the hypothesis.

Prof Daniel Erlacher, who researches lucid dreams at the University of Berne in Switzerland, welcomes the increased interest in the field. “There is more research funding now,” he says, though he points out that some scientists are still sceptical of its worth.

That cynicism is a shame, since there could be important clinical applications of these findings. When people are unresponsive after brain injuries, it can be very difficult to establish their level of consciousness. If work on lucid dreams helps scientists to establish a neural signature of self-awareness, it might allow doctors to make more accurate diagnoses and prognoses for these patients and to determine how they might be experiencing the effects of their illness.

At the very least, Baird’s research is sure to attract attention from the vast online community of wannabe lucid dreamers, who are seeking more reliable ways to experience the phenomenon. Galantamine, which can be extracted from snowdrops, is already available as an over-the-counter dietary supplement in the US, and its short-term side-effects are mild – so there are currently no legal barriers for Americans who wish to self-experiment. But Baird points out that there may be as-yet-unknown long-term consequences if it is used repeatedly to induce lucid dreams. “My advice would be to use your own discretion and to seek the guidance of a physician,” he says.

For the time being, we may be safest using psychological strategies (see below). Even then, we should proceed with caution. Dr Nirit Soffer-Dudek, a psychologist at Ben-Gurion University of the Negev in Israel, points out that most attempts to induce lucid dreaming involve some kind of sleep disturbance – such as waking in the middle of the night to practise certain visualisations. “We know how important sleep is for your mental and physical health,” she says. “It can even influence how quickly your wounds heal.” Anything that regularly disrupts our normal sleep cycle could therefore have undesired results.

Many techniques for lucid dream induction also involve “reality testing”, in which you regularly question whether you are awake, in the hope that those thoughts will come to mind when you are actually dreaming. If it is done too often, this could be “a bit disorienting”, Soffer-Dudek suggests – leading you to feel “unreal” rather than fully present in the moment.

Along these lines, she has found that people who regularly try to induce lucid dreams are more likely to suffer from dissociation – the sense of being disconnected from one’s thoughts, feelings and sense of identity. They were also more likely to show signs of schizotypy – a tendency for paranoid and magical thinking.

Soffer-Dudek doubts that infrequent experiments will cause lasting harm, though. “I don’t think it’s such a big deal if someone who is neurologically and psychologically healthy tries it out over a limited period,” she says.

Perhaps the consideration of these concerns is an inevitable consequence of the field’s maturation. As for my own experiments, I am happy to watch the research progress from the sidelines. One hundred years after Mary Arnold-Forster’s early investigations, the science of lucid dreaming may be finally coming of age.

How to lucid dream

There is little doubt that lucid dreaming can be learned. One of the best-known techniques is “reality testing”, which involves asking yourself regularly during the day whether you are dreaming – with the hope that this will spill into your actual dreams.

Another is Mnemonic Induction of Lucid Dreaming (Mild). Every time you wake from a normal dream, you spend a bit of time identifying the so-called “dream signs” – anything that was bizarre or improbable and differed from normal life. As you then try to return to sleep, you visualise entering that dream and repeat to yourself the intention: “Next time I’m dreaming, I will remember to recognise that I’m dreaming.” Some studies suggest that it may be particularly effective if you set an alarm to wake up after a few hours of sleep and spend a whole hour practising Mild, before drifting off again. This is known as WBTB – Wake Back to Bed.

There is nothing particularly esoteric about these methods. “It’s all about building a ‘prospective’ memory for the future – like remembering what you have to buy when you go shopping,” says Prof Daniel Erlacher.

Technology may ease this process. Dr Michelle Carr recently asked participants to undergo a 20-minute training programme before they fell asleep. Each time they heard a certain tone or saw the flash of a red light, they were asked to turn their attention to their physical and mental state and to question whether anything was amiss that might suggest they were dreaming. Afterwards, they were given the chance to nap, as a headset measured their brain’s activity.

When it sensed that they had entered REM sleep, it produced the same cues as the training, which – Carr hoped – would be incorporated into their dreams and act as reminders to check their state of consciousness. It worked, with about 50% experiencing a lucid dream.

Some commercial devices already purport to offer this kind of stimulation – though most have not been adequately tested for their efficacy. As the technology advances, however, easy dream control may come within anyone’s reach.

By:

David Robson is a writer based in London. His next book, The Expectation Effect: How Your Mindset Can Transform Your Life (Canongate), is available to preorder now

Source: Can lucid dreaming help us understand consciousness? | Consciousness | The Guardian

.

More Contents:

Can Consciousness Be Explained By Quantum Physics?

One of the most important open questions in science is how our consciousness is established. In the 1990s, long before winning the 2020 Nobel Prize in Physics for showing that black hole formation is a robust prediction of general relativity, physicist Roger Penrose teamed up with anaesthesiologist Stuart Hameroff to propose an ambitious answer.

They claimed that the brain’s neuronal system forms an intricate network and that the consciousness this produces should obey the rules of quantum mechanics – the theory that determines how tiny particles like electrons move around. This, they argue, could explain the mysterious complexity of human consciousness.

Penrose and Hameroff were met with incredulity. Quantum mechanical laws are usually only found to apply at very low temperatures. Quantum computers, for example, currently operate at around -272°C. At higher temperatures, classical mechanics takes over. Since our body works at room temperature, you would expect it to be governed by the classical laws of physics. For this reason, the quantum consciousness theory has been dismissed outright by many scientists – though others remain convinced supporters.

Instead of entering into this debate, I decided to join forces with colleagues from China, led by Professor Xian-Min Jin at Shanghai Jiaotong University, to test some of the principles underpinning the quantum theory of consciousness.

In our new paper, we’ve investigated how quantum particles could move in a complex structure like the brain – but in a lab setting. If our findings can one day be compared with activity measured in the brain, we may come one step closer to validating or dismissing Penrose and Hameroff’s controversial theory.

Brains and fractals

Our brains are composed of cells called neurons, and their combined activity is believed to generate consciousness. Each neuron contains microtubules, which transport substances to different parts of the cell. The Penrose-Hameroff theory of quantum consciousness argues that microtubules are structured in a fractal pattern which would enable quantum processes to occur.

Fractals are structures that are neither two-dimensional nor three-dimensional, but are instead some fractional value in between. In mathematics, fractals emerge as beautiful patterns that repeat themselves infinitely, generating what is seemingly impossible: a structure that has a finite area, but an infinite perimeter.

This might sound impossible to visualise, but fractals actually occur frequently in nature. If you look closely at the florets of a cauliflower or the branches of a fern, you’ll see that they’re both made up of the same basic shape repeating itself over and over again, but at smaller and smaller scales. That’s a key characteristic of fractals.

The same happens if you look inside your own body: the structure of your lungs, for instance, is fractal, as are the blood vessels in your circulatory system. Fractals also feature in the enchanting repeating artworks of MC Escher and Jackson Pollock, and they’ve been used for decades in technology, such as in the design of antennas. These are all examples of classical fractals – fractals that abide by the laws of classical physics rather than quantum physics.

It’s easy to see why fractals have been used to explain the complexity of human consciousness. Because they’re infinitely intricate, allowing complexity to emerge from simple repeated patterns, they could be the structures that support the mysterious depths of our minds.

But if this is the case, it could only be happening on the quantum level, with tiny particles moving in fractal patterns within the brain’s neurons. That’s why Penrose and Hameroff’s proposal is called a theory of “quantum consciousness”.

Quantum consciousness

We’re not yet able to measure the behaviour of quantum fractals in the brain – if they exist at all. But advanced technology means we can now measure quantum fractals in the lab. In recent research involving a scanning tunnelling microscope (STM), my colleagues at Utrecht and I carefully arranged electrons in a fractal pattern, creating a quantum fractal.

When we then measured the wave function of the electrons, which describes their quantum state, we found that they too lived at the fractal dimension dictated by the physical pattern we’d made. In this case, the pattern we used on the quantum scale was the Sierpiński triangle, which is a shape that’s somewhere between one-dimensional and two-dimensional.
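That “somewhere between one-dimensional and two-dimensional” claim can be made precise. For a strictly self-similar fractal, the similarity dimension is D = log N / log s, where each iteration produces N copies of the shape scaled down by a factor s. The Sierpiński triangle yields three half-size copies per step, giving D = log 3 / log 2 ≈ 1.585. A minimal sketch (the function name is illustrative, not from the paper):

```python
import math

def similarity_dimension(copies: int, scale_factor: int) -> float:
    """Similarity dimension of a self-similar fractal: D = log(N) / log(s),
    where each iteration yields N copies, each scaled down by factor s."""
    return math.log(copies) / math.log(scale_factor)

# Sierpinski triangle: 3 self-similar copies, each half the linear size.
d = similarity_dimension(3, 2)
print(round(d, 3))  # → 1.585, between a line (1) and a plane (2)
```

The same formula gives 2.0 for a filled square (4 quarter-size copies), which is why the square is not a fractal while the Sierpiński triangle is.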

This was an exciting finding, but STM techniques cannot probe how quantum particles move – which would tell us more about how quantum processes might occur in the brain. So in our latest research, my colleagues at Shanghai Jiaotong University and I went one step further. Using state-of-the-art photonics experiments, we were able to reveal the quantum motion that takes place within fractals in unprecedented detail.

We achieved this by injecting photons (particles of light) into an artificial chip that was painstakingly engineered into a tiny Sierpiński triangle. We injected photons at the tip of the triangle and watched how they spread throughout its fractal structure in a process called quantum transport. We then repeated this experiment on two different fractal structures, both shaped as squares rather than triangles. And in each of these structures we conducted hundreds of experiments.

Our observations from these experiments reveal that quantum fractals actually behave in a different way to classical ones. Specifically, we found that the spread of light across a fractal is governed by different laws in the quantum case compared to the classical case.

This new knowledge of quantum fractals could provide the foundations for scientists to experimentally test the theory of quantum consciousness. If quantum measurements are one day taken from the human brain, they could be compared against our results to definitively decide whether consciousness is a classical or a quantum phenomenon.

Our work could also have profound implications across scientific fields. By investigating quantum transport in our artificially designed fractal structures, we may have taken the first tiny steps towards the unification of physics, mathematics and biology, which could greatly enrich our understanding of the world around us as well as the world that exists in our heads.

By: / Professor, Theoretical Physics, Utrecht University 

Source: Can consciousness be explained by quantum physics? My research takes us a step closer to finding out

.


The Invisible Addiction: Is It Time To Give Up Caffeine?

After years of starting the day with a tall morning coffee, followed by several glasses of green tea at intervals, and the occasional cappuccino after lunch, I quit caffeine, cold turkey. It was not something that I particularly wanted to do, but I had come to the reluctant conclusion that the story I was writing demanded it. Several of the experts I was interviewing had suggested that I really couldn’t understand the role of caffeine in my life – its invisible yet pervasive power – without getting off it and then, presumably, getting back on.

Roland Griffiths, one of the world’s leading researchers of mood-altering drugs, and the man most responsible for getting the diagnosis of “caffeine withdrawal” included in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the bible of psychiatric diagnoses, told me he hadn’t begun to understand his own relationship with caffeine until he stopped using it and conducted a series of self-experiments. He urged me to do the same.

For most of us, to be caffeinated to one degree or another has simply become baseline human consciousness. Something like 90% of humans ingest caffeine regularly, making it the most widely used psychoactive drug in the world, and the only one we routinely give to children (commonly in the form of fizzy drinks). Few of us even think of it as a drug, much less our daily use of it as an addiction. It’s so pervasive that it’s easy to overlook the fact that to be caffeinated is not baseline consciousness but, in fact, an altered state. It just happens to be a state that virtually all of us share, rendering it invisible.

The scientists have spelled out, and I had duly noted, the predictable symptoms of caffeine withdrawal: headache, fatigue, lethargy, difficulty concentrating, decreased motivation, irritability, intense distress, loss of confidence and dysphoria. But beneath that deceptively mild rubric of “difficulty concentrating” hides nothing short of an existential threat to the work of the writer. How can you possibly expect to write anything when you can’t concentrate?

I postponed it as long as I could, but finally the dark day arrived. According to the researchers I’d interviewed, the process of withdrawal had actually begun overnight, while I was sleeping, during the “trough” in the graph of caffeine’s diurnal effects. The day’s first cup of tea or coffee acquires most of its power – its joy! – not so much from its euphoric and stimulating properties as from the fact that it is suppressing the emerging symptoms of withdrawal.

This is part of the insidiousness of caffeine. Its mode of action, or “pharmacodynamics”, meshes so perfectly with the rhythms of the human body that the morning cup of coffee arrives just in time to head off the looming mental distress set in motion by yesterday’s cup of coffee. Daily, caffeine proposes itself as the optimal solution to the problem caffeine creates.

At the coffee shop, instead of my usual “half caff”, I ordered a cup of mint tea. And on this morning, that lovely dispersal of the mental fog that the first hit of caffeine ushers into consciousness never arrived. The fog settled over me and would not budge. It’s not that I felt terrible – I never got a serious headache – but all day long I felt a certain muzziness, as if a veil had descended in the space between me and reality, a kind of filter that absorbed certain wavelengths of light and sound.

I was able to do some work, but distractedly. “I feel like an unsharpened pencil,” I wrote in my notebook. “Things on the periphery intrude, and won’t be ignored. I can’t focus for more than a minute.”

Over the course of the next few days, I began to feel better, the veil lifted, yet I was still not quite myself, and neither, quite, was the world. In this new normal, the world seemed duller to me. I seemed duller, too. Mornings were the worst. I came to see how integral caffeine is to the daily work of knitting ourselves back together after the fraying of consciousness during sleep. That reconsolidation of self took much longer than usual, and never quite felt complete.


Humanity’s acquaintance with caffeine is surprisingly recent. But it is hardly an exaggeration to say that this molecule remade the world. The changes wrought by coffee and tea occurred at a fundamental level – the level of the human mind. Coffee and tea ushered in a shift in the mental weather, sharpening minds that had been fogged by alcohol, freeing people from the natural rhythms of the body and the sun, thus making possible whole new kinds of work and, arguably, new kinds of thought, too.

By the 15th century, coffee was being cultivated in east Africa and traded across the Arabian peninsula. Initially, the new drink was regarded as an aide to concentration and used by Sufis in Yemen to keep them from dozing off during their religious observances. (Tea, too, started out as a little helper for Buddhist monks striving to stay awake through long stretches of meditation.) Within a century, coffeehouses had sprung up in cities across the Arab world. In 1570 there were more than 600 of them in Constantinople alone, and they spread north and west with the Ottoman empire.

The Islamic world at this time was in many respects more advanced than Europe, in science and technology, and in learning. Whether this mental flourishing had anything to do with the prevalence of coffee (and prohibition of alcohol) is difficult to prove, but as the German historian Wolfgang Schivelbusch has argued, the beverage “seemed to be tailor-made for a culture that forbade alcohol consumption and gave birth to modern mathematics”.

In 1629 the first coffeehouses in Europe, styled on Arab and Turkish models, popped up in Venice, and the first such establishment in England was opened in Oxford in 1650 by a Jewish immigrant. They arrived in London shortly thereafter, and proliferated: within a few decades there were thousands of coffeehouses in London; at their peak, one for every 200 Londoners.

To call the English coffeehouse a new kind of public space doesn’t quite do it justice. You paid a penny for the coffee, but the information – in the form of newspapers, books, magazines and conversation – was free. (Coffeehouses were often referred to as “penny universities”.) After visiting London coffeehouses, a French writer named Maximilien Misson wrote, “You have all Manner of News there; You have a good fire, which you may sit by as long as you please: You have a Dish of Coffee; you meet your Friends for the Transaction of Business, and all for a Penny, if you don’t care to spend more.”

London’s coffeehouses were distinguished one from another by the professional or intellectual interests of their patrons, which eventually gave them specific institutional identities. So, for example, merchants and men with interests in shipping gathered at Lloyd’s Coffee House. Here you could learn what ships were arriving and departing, and buy an insurance policy on your cargo. Lloyd’s Coffee House eventually became the insurance brokerage Lloyd’s of London. Learned types and scientists – known then as “natural philosophers” – gathered at the Grecian, which became closely associated with the Royal Society; Isaac Newton and Edmond Halley debated physics and mathematics here, and supposedly once dissected a dolphin on the premises.

The conversation in London’s coffee houses frequently turned to politics, in vigorous exercises of free speech that drew the ire of the government, especially after the monarchy was restored in 1660. Charles II, worried that plots were being hatched in coffeehouses, decided that the places were dangerous fomenters of rebellion that the crown needed to suppress. In 1675 the king moved to close down the coffeehouses, on the grounds that the “false, malicious and scandalous Reports” emanating therefrom were a “Disturbance of the Quiet and Peace of the Realm”. Like so many other compounds that change the qualities of consciousness in individuals, caffeine was regarded as a threat to institutional power, which moved to suppress it, in a foreshadowing of the wars against drugs to come.

But the king’s war against coffee lasted only 11 days. Charles discovered that it was too late to turn back the tide of caffeine. By then the coffeehouse was such a fixture of English culture and daily life – and so many eminent Londoners had become addicted to caffeine – that everyone simply ignored the king’s order and blithely went on drinking coffee. Afraid to test his authority and find it lacking, the king quietly backed down, issuing a second proclamation rolling back the first “out of princely consideration and royal compassion”.

It’s hard to imagine that the sort of political, cultural and intellectual ferment that bubbled up in the coffeehouses of both France and England in the 17th century would ever have developed in a tavern. The kind of magical thinking that alcohol sponsored in the medieval mind began to yield to a new spirit of rationalism and, a bit later, Enlightenment thinking.

French historian Jules Michelet wrote: “Coffee, the sober drink, the mighty nourishment of the brain, which unlike other spirits, heightens purity and lucidity; coffee, which clears the clouds of the imagination and their gloomy weight; which illumines the reality of things suddenly with the flash of truth.”

To see, lucidly, “the reality of things”: this was, in a nutshell, the rationalist project. Coffee became, along with the microscope, telescope and the pen, one of its indispensable tools.


After a few weeks, the mental impairments of withdrawal had subsided, and I could once again think in a straight line, hold an abstraction in my head for more than two minutes, and shut peripheral thoughts out of my field of attention. Yet I continued to feel as though I was mentally just slightly behind the curve, especially when in the company of drinkers of coffee and tea, which, of course, was all the time and everywhere.

Here’s what I missed: the way caffeine and its rituals used to order my day, especially in the morning. Herbal teas – which are barely, if at all, psychoactive – lack the power of coffee and tea to organise the day into a rhythm of energetic peaks and valleys, as the mental tide of caffeine ebbs and flows. The morning surge is a blessing, obviously, but there is also something comforting in the ebb tide of afternoon, which a cup of tea can gently reverse.

At some point I began to wonder if perhaps it was all in my head, this sense that I had lost a mental step since getting off coffee and tea. So I decided to look at the science, to learn what, if any, cognitive enhancement can actually be attributed to caffeine. I found numerous studies conducted over the years reporting that caffeine improves performance on a range of cognitive measures – of memory, focus, alertness, vigilance, attention and learning.

An experiment done in the 1930s found that chess players on caffeine performed significantly better than players who abstained. In another study, caffeine users completed a variety of mental tasks more quickly, though they made more errors; as one paper put it in its title, people on caffeine are “faster, but not smarter”. In a 2014 experiment, subjects given caffeine immediately after learning new material remembered it better than subjects who received a placebo. Tests of psychomotor abilities also suggest that caffeine gives us an edge: in simulated driving exercises, caffeine improves performance, especially when the subject is tired. It also enhances physical performance on such metrics as time trials, muscle strength and endurance.

True, there is reason to take these findings with a pinch of salt, if only because this kind of research is difficult to do well. The problem is finding a good control group in a society in which virtually everyone is addicted to caffeine. But the consensus seems to be that caffeine does improve mental (and physical) performance to some degree.

Whether caffeine also enhances creativity is a different question, however, and there’s some reason to doubt that it does. Caffeine improves our focus and ability to concentrate, which surely enhances linear and abstract thinking, but creativity works very differently. It may depend on the loss of a certain kind of focus, and the freedom to let the mind off the leash of linear thought.

Cognitive psychologists sometimes talk in terms of two distinct types of consciousness: spotlight consciousness, which illuminates a single focal point of attention, making it very good for reasoning, and lantern consciousness, in which attention is less focused yet illuminates a broader field of attention. Young children tend to exhibit lantern consciousness; so do many people on psychedelics.

This more diffuse form of attention lends itself to mind wandering, free association, and the making of novel connections – all of which can nourish creativity. By comparison, caffeine’s big contribution to human progress has been to intensify spotlight consciousness – the focused, linear, abstract and efficient cognitive processing more closely associated with mental work than play. This, more than anything else, is what made caffeine the perfect drug not only for the age of reason and the Enlightenment, but for the rise of capitalism, too.

The power of caffeine to keep us awake and alert, to stem the natural tide of exhaustion, freed us from the circadian rhythms of our biology and so, along with the advent of artificial light, opened the frontier of night to the possibilities of work.

What coffee did for clerks and intellectuals, tea would soon do for the English working class. Indeed, it was tea from the East Indies – heavily sweetened with sugar from the West Indies – that fuelled the Industrial Revolution. We think of England as a tea culture, but coffee, initially the cheaper beverage by far, dominated at first.

Soon after the British East India Company began trading with China, cheap tea flooded England. A beverage that only the well-to-do could afford to drink in 1700 was by 1800 consumed by virtually everyone, from the society matron to the factory worker.

To supply this demand required an imperialist enterprise of enormous scale and brutality, especially after the British decided it would be more profitable to turn India, its colony, into a tea producer, than to buy tea from the Chinese. This required first stealing the secrets of tea production from the Chinese (a mission accomplished by the renowned Scots botanist and plant explorer Robert Fortune, disguised as a mandarin); seizing land from peasant farmers in Assam (where tea grew wild), and then forcing the farmers into servitude, picking tea leaves from dawn to dusk.

The introduction of tea to the west was all about exploitation – the extraction of surplus value from labour, not only in its production in India, but in its consumption by the British as well. Tea allowed the British working class to endure long shifts, brutal working conditions and more or less constant hunger; the caffeine helped quiet the hunger pangs, and the sugar in it became a crucial source of calories. (From a strictly nutritional standpoint, workers would have been better off sticking with beer.) The caffeine in tea helped create a new kind of worker, one better adapted to the rule of the machine. It is difficult to imagine an Industrial Revolution without it.


So how exactly does coffee, and caffeine more generally, make us more energetic, efficient and faster? How could this little molecule possibly supply the human body with energy without calories? Could caffeine be the proverbial free lunch, or do we pay a price for the mental and physical energy – the alertness, focus and stamina – that caffeine gives us?

Alas, there is no free lunch. It turns out that caffeine only appears to give us energy. Caffeine works by blocking the action of adenosine, a molecule that gradually accumulates in the brain over the course of the day, preparing the body to rest. Caffeine molecules interfere with this process, keeping adenosine from doing its job – and keeping us feeling alert. But adenosine levels continue to rise, so that when the caffeine is eventually metabolized, the adenosine floods the body’s receptors and tiredness returns. So the energy that caffeine gives us is borrowed, in effect, and eventually the debt must be paid back.

For as long as people have been drinking coffee and tea, medical authorities have warned about the dangers of caffeine. But until now, caffeine has been cleared of the most serious charges against it. The current scientific consensus is more than reassuring – in fact, the research suggests that coffee and tea, far from being deleterious to our health, may offer some important benefits, as long as they aren’t consumed to excess.

Regular coffee consumption is associated with a decreased risk of several cancers (including breast, prostate, colorectal and endometrial), cardiovascular disease, type 2 diabetes, Parkinson’s disease, dementia and possibly depression and suicide. (Though high doses can produce nervousness and anxiety, and rates of suicide climb among those who drink eight or more cups a day.)

My review of the medical literature on coffee and tea made me wonder if my abstention might be compromising not only my mental function but my physical health, as well. However, that was before I spoke to Matt Walker.

Walker, an English neuroscientist on the faculty at the University of California, Berkeley, and the author of Why We Sleep, is single-minded in his mission: to alert the world to an invisible public-health crisis, which is that we are not getting nearly enough sleep, the sleep we are getting is of poor quality, and a principal culprit in this crime against body and mind is caffeine. Caffeine itself might not be bad for you, but the sleep it’s stealing from you may have a price.

According to Walker, research suggests that insufficient sleep may be a key factor in the development of Alzheimer’s disease, arteriosclerosis, stroke, heart failure, depression, anxiety, suicide and obesity. “The shorter you sleep,” he bluntly concludes, “the shorter your lifespan.”

Walker grew up in England drinking copious amounts of black tea, morning, noon and night. He no longer consumes caffeine, save for the small amounts in his occasional cup of decaf. In fact, none of the sleep researchers or experts on circadian rhythms I interviewed for this story use caffeine.

Walker explained that, for most people, the “quarter life” of caffeine is usually about 12 hours, meaning that 25% of the caffeine in a cup of coffee consumed at noon is still circulating in your brain when you go to bed at midnight. That could well be enough to completely wreck your deep sleep.
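Walker’s “quarter life” arithmetic follows from first-order (exponential) decay: if a quarter of the dose remains after 12 hours, the half-life is six hours, which sits within the commonly cited range for caffeine. A quick sketch of that calculation (the six-hour half-life and 100 mg dose are illustrative assumptions, not figures from Walker):

```python
def caffeine_remaining_mg(dose_mg: float, hours_elapsed: float,
                          half_life_h: float = 6.0) -> float:
    """First-order decay: the circulating amount halves every half-life."""
    return dose_mg * 0.5 ** (hours_elapsed / half_life_h)

# A ~100 mg cup of coffee at noon, checked at midnight (12 hours later):
print(caffeine_remaining_mg(100, 12))  # → 25.0, i.e. 25% still circulating
```

Note that half-life varies widely between individuals (pregnancy, oral contraceptives and some medications slow caffeine metabolism considerably), so the 12-hour quarter life is a rule of thumb rather than a constant.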

I thought of myself as a pretty good sleeper before I met Walker. At lunch he probed me about my sleep habits. I told him I usually get a solid seven hours, fall asleep easily, dream most nights. “How many times a night do you wake up?” he asked. I’m up three or four times a night (usually to pee), but I almost always fall right back to sleep.

He nodded gravely. “That’s really not good, all those interruptions. Sleep quality is just as important as sleep quantity.” The interruptions were undermining the amount of “deep” or “slow wave” sleep I was getting, something above and beyond the REM sleep I had always thought was the measure of a good night’s rest. But it seems that deep sleep is just as important to our health, and the amount we get tends to decline with age.

Caffeine is not the sole cause of our sleep crisis; screens, alcohol (which is as hard on REM sleep as caffeine is on deep sleep), pharmaceuticals, work schedules, noise and light pollution, and anxiety can all play a role in undermining both the duration and quality of our sleep. But here’s what’s uniquely insidious about caffeine: the drug is not only a leading cause of our sleep deprivation; it is also the principal tool we rely on to remedy the problem. Most of the caffeine consumed today is being used to compensate for the lousy sleep that caffeine causes – which means that caffeine is helping to hide from our awareness the very problem that caffeine creates.


The time came to wrap up my experiment in caffeine deprivation. I was eager to see what a body that had been innocent of caffeine for three months would experience when subjected to a couple of shots of espresso. I had thought long and hard about what kind of coffee I would get, and where. I opted for a “special”, my local coffee shop’s term for a double-shot espresso made with less steamed milk than a typical cappuccino; it’s more commonly known as a flat white.

My special was unbelievably good, a ringing reminder of what a poor counterfeit decaf is; here were whole dimensions and depths of flavour that I had completely forgotten about. Everything in my visual field seemed pleasantly italicised, filmic, and I wondered if all these people with their cardboard-sleeve-swaddled cups had any idea what a powerful drug they were sipping. But how could they?

They had long ago become habituated to caffeine, and were now using it for another purpose entirely. Baseline maintenance, that is, plus a welcome little lift. I felt lucky that this more powerful experience was available to me. This – along with the stellar sleeps – was the wonderful dividend of my investment in abstention.

And yet in a few days’ time I would be them, caffeine-tolerant and addicted all over again. I wondered: was there any way to preserve the power of this drug? Could I devise a new relationship with caffeine? Maybe treat it more like a psychedelic – say, something to be taken only on occasion, and with a greater degree of ceremony and intention. Maybe just drink coffee on Saturdays? Just the one.

When I got home I tackled my to-do list with unaccustomed fervour, harnessing the surge of energy – of focus! – coursing through me, and put it to good use. I compulsively cleared and decluttered – on the computer, in my closet, in the garden and the shed. I raked, I weeded, I put things in order, as if I were possessed. Whatever I focused on, I focused on zealously and single-mindedly.

Around noon, my compulsiveness began to subside, and I felt ready for a change of scene. I had yanked a few plants out of the vegetable garden that were not pulling their weight, and decided to go to the garden centre to buy some replacements. It was during the drive that I realised the true reason I was heading to this particular garden centre: it had this Airstream trailer parked out front that served really good espresso.

This article was amended on 8 July 2021 to include mention of the Turkish influence on early European coffeehouses.

This is an edited extract from This Is Your Mind on Plants: Opium-Caffeine-Mescaline by Michael Pollan, published by Allen Lane on 8 July and available at guardianbookshop.co.uk

Source: The invisible addiction: is it time to give up caffeine? | Coffee | The Guardian

Critics:

Caffeine is a central nervous system (CNS) stimulant of the methylxanthine class. It is the world’s most widely consumed psychoactive drug. Unlike many other psychoactive substances, it is legal and unregulated in nearly all parts of the world. There are several known mechanisms of action to explain the effects of caffeine. The most prominent is that it reversibly blocks the action of adenosine on its receptors and consequently prevents the onset of drowsiness induced by adenosine. Caffeine also stimulates certain portions of the autonomic nervous system.

Caffeine is a bitter, white crystalline purine, a methylxanthine alkaloid, and is chemically related to the adenine and guanine bases of deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). It is found in the seeds, fruits, nuts, or leaves of a number of plants native to Africa, East Asia and South America, and helps to protect them against herbivores and from competition by preventing the germination of nearby seeds, as well as encouraging consumption by select animals such as honey bees. The best-known source of caffeine is the coffee bean, the seed of the Coffea plant.

Caffeine is used in:

  • Bronchopulmonary dysplasia in premature infants for both prevention and treatment. It may improve weight gain during therapy and reduce the incidence of cerebral palsy as well as reduce language and cognitive delay. On the other hand, subtle long-term side effects are possible.
  • Apnea of prematurity as a primary treatment, but not prevention.
  • Orthostatic hypotension treatment.
  • Some people use caffeine-containing beverages such as coffee or tea to try to treat their asthma. Evidence to support this practice, however, is poor. It appears that caffeine in low doses improves airway function in people with asthma, increasing forced expiratory volume (FEV1) by 5% to 18%, with this effect lasting for up to four hours.
  • The addition of caffeine (100–130 mg) to commonly prescribed pain relievers such as paracetamol or ibuprofen modestly improves the proportion of people who achieve pain relief.