The Science of Mind Reading

One night in October, 2009, a young man lay in an fMRI scanner in Liège, Belgium. Five years earlier, he’d suffered a head trauma in a motorcycle accident, and since then he hadn’t spoken. He was said to be in a “vegetative state.” A neuroscientist named Martin Monti sat in the next room, along with a few other researchers. For years, Monti and his postdoctoral adviser, Adrian Owen, had been studying vegetative patients, and they had developed two controversial hypotheses.

First, they believed that someone could lose the ability to move or even blink while still being conscious; second, they thought that they had devised a method for communicating with such “locked-in” people by detecting their unspoken thoughts.

In a sense, their strategy was simple. Neurons use oxygen, which is carried through the bloodstream inside molecules of hemoglobin. Hemoglobin contains iron, and, by tracking the iron, the magnets in fMRI machines can build maps of brain activity. Picking out signs of consciousness amid the swirl seemed nearly impossible. But, through trial and error, Owen’s group had devised a clever protocol.

They’d discovered that if a person imagined walking around her house there was a spike of activity in her parahippocampal gyrus—a finger-shaped area buried deep in the temporal lobe. Imagining playing tennis, by contrast, activated the premotor cortex, which sits on a ridge near the skull. The activity was clear enough to be seen in real time with an fMRI machine. In a 2006 study published in the journal Science, the researchers reported that they had asked a locked-in person to think about tennis, and seen, on her brain scan, that she had done so.

With the young man, known as Patient 23, Monti and Owen were taking a further step: attempting to have a conversation. They would pose a question and tell him that he could signal “yes” by imagining playing tennis, or “no” by thinking about walking around his house. In the scanner control room, a monitor displayed a cross-section of Patient 23’s brain. As different areas consumed blood oxygen, they shimmered red, then bright orange. Monti knew where to look to spot the yes and the no signals.

He switched on the intercom and explained the system to Patient 23. Then he asked the first question: “Is your father’s name Alexander?” The man’s premotor cortex lit up. He was thinking about tennis—yes.

“Is your father’s name Thomas?”

Activity in the parahippocampal gyrus. He was imagining walking around his house—no.

“Do you have any brothers?”

Tennis—yes.

“Do you have any sisters?”

House—no.

“Before your injury, was your last vacation in the United States?”

Tennis—yes.

The answers were correct. Astonished, Monti called Owen, who was away at a conference. Owen thought that they should ask more questions. The group ran through some possibilities. “Do you like pizza?” was dismissed as being too imprecise. They decided to probe more deeply. Monti turned the intercom back on.

That winter, the results of the study were published in The New England Journal of Medicine. The paper caused a sensation. The Los Angeles Times wrote a story about it, with the headline “Brains of Vegetative Patients Show Life.” Owen eventually estimated that twenty per cent of patients who were presumed to be vegetative were actually awake. This was a discovery of enormous practical consequence: in subsequent years, through painstaking fMRI sessions, Owen’s group found many patients who could interact with loved ones and answer questions about their own care.

The conversations improved their odds of recovery. Still, from a purely scientific perspective, there was something unsatisfying about the method that Monti and Owen had developed with Patient 23. Although they had used the words “tennis” and “house” in communicating with him, they’d had no way of knowing for sure that he was thinking about those specific things. They had been able to say only that, in response to those prompts, thinking was happening in the associated brain areas. “Whether the person was imagining playing tennis, football, hockey, swimming—we don’t know,” Monti told me recently.

During the past few decades, the state of neuroscientific mind reading has advanced substantially. Cognitive psychologists armed with an fMRI machine can tell whether a person is having depressive thoughts; they can see which concepts a student has mastered by comparing his brain patterns with those of his teacher. By analyzing brain scans, a computer system can edit together crude reconstructions of movie clips you’ve watched. One research group has used similar technology to accurately describe the dreams of sleeping subjects.

In another lab, scientists have scanned the brains of people who are reading the J. D. Salinger short story “Pretty Mouth and Green My Eyes,” in which it is unclear until the end whether or not a character is having an affair. From brain scans alone, the researchers can tell which interpretation readers are leaning toward, and watch as they change their minds.

I first heard about these studies from Ken Norman, the fifty-year-old chair of the psychology department at Princeton University and an expert on thought decoding. Norman works at the Princeton Neuroscience Institute, which is housed in a glass structure, constructed in 2013, that spills over a low hill on the south side of campus. P.N.I. was conceived as a center where psychologists, neuroscientists, and computer scientists could blend their approaches to studying the mind; M.I.T. and Stanford have invested in similar cross-disciplinary institutes.

At P.N.I., undergraduates still participate in old-school psych experiments involving surveys and flash cards. But upstairs, in a lab that studies child development, toddlers wear tiny hats outfitted with infrared brain scanners, and in the basement the skulls of genetically engineered mice are sliced open, allowing individual neurons to be controlled with lasers. A server room with its own high-performance computing cluster analyzes the data generated from these experiments.

Norman, whose jovial intelligence and unruly beard give him the air of a high-school science teacher, occupies an office on the ground floor, with a view of a grassy field. The bookshelves behind his desk contain the intellectual DNA of the institute, with William James next to texts on machine learning. Norman explained that fMRI machines hadn’t advanced that much; instead, artificial intelligence had transformed how scientists read neural data.

This had helped shed light on an ancient philosophical mystery. For centuries, scientists had dreamed of locating thought inside the head but had run up against the vexing question of what it means for thoughts to exist in physical space. When Erasistratus, an ancient Greek anatomist, dissected the brain, he suspected that its many folds were the key to intelligence, but he could not say how thoughts were packed into the convoluted mass.

In the seventeenth century, Descartes suggested that mental life arose in the pineal gland, but he didn’t have a good theory of what might be found there. Our mental worlds contain everything from the taste of bad wine to the idea of bad taste. How can so many thoughts nestle within a few pounds of tissue?

Now, Norman explained, researchers had developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense “meaning space.” They could see how these points were interrelated and encoded by neurons. By cracking the code, they were beginning to produce an inventory of the mind. “The space of possible thoughts that people can think is big—but it’s not infinitely big,” Norman said. A detailed map of the concepts in our minds might soon be within reach.

Norman invited me to watch an experiment in thought decoding. A postdoctoral student named Manoj Kumar led us into a locked basement lab at P.N.I., where a young woman was lying in the tube of an fMRI scanner. A screen mounted a few inches above her face played a slide show of stock images: an empty beach, a cave, a forest.

“We want to get the brain patterns that are associated with different subclasses of scenes,” Norman said.

As the woman watched the slide show, the scanner tracked patterns of activation among her neurons. These patterns would be analyzed in terms of “voxels”—areas of activation that are roughly a cubic millimetre in size. In some ways, the fMRI data was extremely coarse: each voxel represented the oxygen consumption of about a million neurons, and could be updated only every few seconds, significantly more slowly than neurons fire.

But, Norman said, “it turned out that that information was in the data we were collecting—we just weren’t being as smart as we possibly could about how we’d churn through that data.” The breakthrough came when researchers figured out how to track patterns playing out across tens of thousands of voxels at a time, as though each were a key on a piano, and thoughts were chords.
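To make the piano-chord idea concrete, here is a minimal decoding sketch in Python. Everything in it is synthetic and illustrative: the trial counts, voxel counts, condition labels, and signal strengths are invented stand-ins, not real fMRI data or any lab's actual pipeline.

```python
# A minimal sketch of multivoxel pattern decoding on synthetic data.
# All numbers here (trials, voxels, signal strength) are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500
labels = rng.integers(0, 2, n_trials)  # 0 = "house", 1 = "tennis"

# Each imagined condition nudges a different subset of voxels: the "chord."
signal = np.zeros((n_trials, n_voxels))
signal[labels == 1, :50] += 0.8      # tennis pattern
signal[labels == 0, 50:100] += 0.8   # house pattern
voxels = signal + rng.normal(size=(n_trials, n_voxels))  # scanner noise

# Decode the condition from the distributed pattern, not any single voxel.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, voxels, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")  # well above 0.5 chance
```

The point of the sketch is that no single voxel is reliable on its own; the classifier succeeds only by reading the pattern across many voxels at once.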

The origins of this approach, I learned, dated back nearly seventy years, to the work of a psychologist named Charles Osgood. When he was a kid, Osgood received a copy of Roget’s Thesaurus as a gift. Poring over the book, Osgood recalled, he formed a “vivid image of words as clusters of starlike points in an immense space.” In his postgraduate days, when his colleagues were debating how cognition could be shaped by culture, Osgood thought back on this image. He wondered if, using the idea of “semantic space,” it might be possible to map the differences among various styles of thinking.

Osgood conducted an experiment. He asked people to rate twenty concepts on fifty different scales. The concepts ranged widely: BOULDER, ME, TORNADO, MOTHER. So did the scales, which were defined by opposites: fair-unfair, hot-cold, fragrant-foul. Some ratings were difficult: is a TORNADO fragrant or foul? But the idea was that the method would reveal fine and even elusive shades of similarity and difference among concepts.

“Most English-speaking Americans feel that there is a difference, somehow, between ‘good’ and ‘nice’ but find it difficult to explain,” Osgood wrote. His surveys found that, at least for nineteen-fifties college students, the two concepts overlapped much of the time. They diverged for nouns that had a male or female slant. MOTHER might be rated nice but not good, and COP vice versa. Osgood concluded that “good” was “somewhat stronger, rougher, more angular, and larger” than “nice.”

Osgood became known not for the results of his surveys but for the method he invented to analyze them. He began by arranging his data in an imaginary space with fifty dimensions—one for fair-unfair, a second for hot-cold, a third for fragrant-foul, and so on. Any given concept, like TORNADO, had a rating on each dimension—and, therefore, was situated in what was known as high-dimensional space. Many concepts had similar locations on multiple axes: kind-cruel and honest-dishonest, for instance. Osgood combined these dimensions. Then he looked for new similarities, and combined dimensions again, in a process called “factor analysis.”

When you reduce a sauce, you meld and deepen the essential flavors. Osgood did something similar with factor analysis. Eventually, he was able to map all the concepts onto a space with just three dimensions. The first dimension was “evaluative”—a blend of scales like good-bad, beautiful-ugly, and kind-cruel. The second had to do with “potency”: it consolidated scales like large-small and strong-weak. The third measured how “active” or “passive” a concept was. Osgood could use these three key factors to locate any concept in an abstract space. Ideas with similar coördinates, he argued, were neighbors in meaning.
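The arithmetic is easy to reproduce. Below is a toy version in Python, using principal-component analysis as a modern stand-in for the factor analysis Osgood performed by hand; the concepts, scales, and ratings are all invented for illustration.

```python
# A toy illustration of Osgood-style dimension reduction. PCA is used here
# as a stand-in for factor analysis; the ratings below are made up.
import numpy as np
from sklearn.decomposition import PCA

concepts = ["BOULDER", "ME", "TORNADO", "MOTHER"]
# Columns: good-bad, kind-cruel, large-small, strong-weak,
# active-passive, fast-slow (each rated from -3 to +3)
ratings = np.array([
    [ 0.0,  0.0,  2.5,  2.8, -2.5, -2.8],  # BOULDER
    [ 1.5,  1.2,  0.0,  0.5,  1.0,  0.8],  # ME
    [-2.5, -2.8,  2.8,  3.0,  3.0,  2.9],  # TORNADO
    [ 2.8,  2.9,  0.5,  1.0,  0.5,  0.2],  # MOTHER
])  # a real survey rated twenty concepts on fifty scales

# Correlated scales (good-bad ~ kind-cruel, large-small ~ strong-weak, ...)
# collapse into a few combined dimensions, analogous to Osgood's
# evaluation / potency / activity factors.
pca = PCA(n_components=3)
coords = pca.fit_transform(ratings)
for concept, point in zip(concepts, coords):
    print(f"{concept:8s} -> {np.round(point, 2)}")
# Concepts with nearby coordinates are neighbors in meaning.
```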

For decades, Osgood’s technique found modest use in a kind of personality test. Its true potential didn’t emerge until the nineteen-eighties, when researchers at Bell Labs were trying to solve what they called the “vocabulary problem.” People tend to employ lots of names for the same thing. This was an obstacle for computer users, who accessed programs by typing words on a command line. George Furnas, who worked in the organization’s human-computer-interaction group, described using the company’s internal phone book.

“You’re in your office, at Bell Labs, and someone has stolen your calculator,” he said. “You start putting in ‘police,’ or ‘support,’ or ‘theft,’ and it doesn’t give you what you want. Finally, you put in ‘security,’ and it gives you that. But it actually gives you two things: something about the Bell Savings and Security Plan, and also the thing you’re looking for.” Furnas’s group wanted to automate the finding of synonyms for commands and search terms.

They updated Osgood’s approach. Instead of surveying undergraduates, they used computers to analyze the words in about two thousand technical reports. The reports themselves—on topics ranging from graph theory to user-interface design—suggested the dimensions of the space; when multiple reports used similar groups of words, their dimensions could be combined.

In the end, the Bell Labs researchers made a space that was more complex than Osgood’s. It had a few hundred dimensions. Many of these dimensions described abstract or “latent” qualities that the words had in common—connections that wouldn’t be apparent to most English speakers. The researchers called their technique “latent semantic analysis,” or L.S.A.
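In modern terms, L.S.A. amounts to a low-rank decomposition of a term-document matrix. Here is a minimal sketch in Python; the four "reports" and the query are invented, and the two latent dimensions stand in for the few hundred a real system would use.

```python
# A minimal latent-semantic-analysis sketch with invented "reports".
# Real L.S.A. at Bell Labs processed about two thousand technical reports.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "the security office handles theft and stolen property",
    "police respond to reports of stolen calculators",
    "user interface design for command line programs",
    "graph theory applied to network interface layout",
]

# Build a term-document matrix, then compress it into latent dimensions.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(reports)
svd = TruncatedSVD(n_components=2, random_state=0)  # real systems: hundreds
docs_latent = svd.fit_transform(X)

# A query about "theft" tends to land near the security and police
# documents, even when they share few literal words with it.
query = svd.transform(tfidf.transform(["who do I call about a theft"]))
print(cosine_similarity(query, docs_latent).round(2))
```

This is the vocabulary problem in miniature: the query and the relevant documents are neighbors in the latent space even when their surface wording differs.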

At first, Bell Labs used L.S.A. to create a better internal search engine. Then, in 1997, Susan Dumais, one of Furnas’s colleagues, collaborated with a Bell Labs cognitive scientist, Thomas Landauer, to develop an A.I. system based on it. After processing Grolier’s American Academic Encyclopedia, a work intended for young students, the A.I. scored respectably on the multiple-choice Test of English as a Foreign Language. That year, the two researchers co-wrote a paper that addressed the question “How do people know as much as they do with as little information as they get?”

They suggested that our minds might use something like L.S.A., making sense of the world by reducing it to its most important differences and similarities, and employing this distilled knowledge to understand new things. Watching a Disney movie, for instance, I immediately identify a character as “the bad guy”: Scar, from “The Lion King,” and Jafar, from “Aladdin,” just seem close together. Perhaps my brain uses factor analysis to distill thousands of attributes—height, fashion sense, tone of voice—into a single point in an abstract space. The perception of bad-guy-ness becomes a matter of proximity.

In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.”
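The analogy arithmetic is easy to check against the publicly released vectors. A minimal sketch using the gensim library follows; note that the pretrained model is a large one-time download.

```python
# Reproducing the king - man + woman ~= queen arithmetic with pretrained
# word2vec vectors, via gensim's downloader (a download of over a gigabyte).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # the 2013 Google News model

# most_similar computes vector("king") - vector("man") + vector("woman")
# and returns the nearest vocabulary word to the result.
result = vectors.most_similar(positive=["king", "woman"],
                              negative=["man"], topn=1)
print(result)  # [('queen', 0.71...)] on this model
```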

Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail. Other companies, including Apple and Amazon, built similar systems. Eventually, researchers realized that the “vectorization” made popular by L.S.A. and word2vec could be used to map all sorts of things. Today’s facial-recognition systems have dimensions that represent the length of the nose and the curl of the lips, and faces are described using a string of coördinates in “face space.” Chess A.I.s use a similar trick to “vectorize” positions on the board.

The technique has become so central to the field of artificial intelligence that, in 2017, a new, hundred-and-thirty-five-million-dollar A.I. research center in Toronto was named the Vector Institute. Matthew Botvinick, a professor at Princeton whose lab was across the hall from Norman’s, and who is now the head of neuroscience at DeepMind, Alphabet’s A.I. subsidiary, told me that distilling relevant similarities and differences into vectors was “the secret sauce underlying all of these A.I. advances.”

In 2001, a scientist named Jim Haxby brought machine learning to brain imaging: he realized that voxels of neural activity could serve as dimensions in a kind of thought space. Haxby went on to work at Princeton, where he collaborated with Norman. The two scientists, together with other researchers, concluded that just a few hundred dimensions were sufficient to capture the shades of similarity and difference in most fMRI data. At the Princeton lab, the young woman watched the slide show in the scanner.

With each new image—beach, cave, forest—her neurons fired in a new pattern. These patterns would be recorded as voxels, then processed by software and transformed into vectors. The images had been chosen because their vectors would end up far apart from one another: they were good landmarks for making a map. Watching the images, my mind was taking a trip through thought space, too.

The larger goal of thought decoding is to understand how our brains mirror the world. To this end, researchers have sought to watch as the same experiences affect many people’s minds simultaneously. Norman told me that his Princeton colleague Uri Hasson has found movies especially useful in this regard. They “pull people’s brains through thought space in synch,” Norman said. “What makes Alfred Hitchcock the master of suspense is that all the people who are watching the movie are having their brains yanked in unison. It’s like mind control in the literal sense.”

One afternoon, I sat in on Norman’s undergraduate class “fMRI Decoding: Reading Minds Using Brain Scans.” As students filed into the auditorium, setting their laptops and water bottles on tables, Norman entered wearing tortoiseshell glasses and earphones, his hair dishevelled.

He had the class watch a clip from “Seinfeld” in which George, Susan (an N.B.C. executive he is courting), and Kramer are hanging out with Jerry in his apartment. The phone rings, and Jerry answers: it’s a telemarketer. Jerry hangs up, to cheers from the studio audience.

“Where was the event boundary in the clip?” Norman asked. The students yelled out in chorus, “When the phone rang!” Psychologists have long known that our minds divide experiences into segments; in this case, it was the phone call that caused the division.

Norman showed the class a series of slides. One described a 2017 study by Christopher Baldassano, one of his postdocs, in which people watched an episode of the BBC show “Sherlock” while in an fMRI scanner. Baldassano’s guess going into the study was that some voxel patterns would be in constant flux as the video streamed—for instance, the ones involved in color processing. Others would be more stable, such as those representing a character in the show.

The study confirmed these predictions. But Baldassano also found groups of voxels that held a stable pattern throughout each scene, then switched when it was over. He concluded that these constituted the scenes’ voxel “signatures.” Norman described another study, by Asieh Zadbood, in which subjects were asked to narrate “Sherlock” scenes—which they had watched earlier—aloud.

The audio was played to a second group, who’d never seen the show. It turned out that no matter whether someone watched a scene, described it, or heard about it, the same voxel patterns recurred. The scenes existed independently of the show, as concepts in people’s minds.

Through decades of experimental work, Norman told me later, psychologists have established the importance of scripts and scenes to our intelligence. Walking into a room, you might forget why you came in; this happens, researchers say, because passing through the doorway brings one mental scene to a close and opens another.

Conversely, while navigating a new airport, a “getting to the plane” script knits different scenes together: first the ticket counter, then the security line, then the gate, then the aisle, then your seat. And yet, until recently, it wasn’t clear what you’d find if you went looking for “scripts” and “scenes” in the brain.

In a recent P.N.I. study, Norman said, people in an fMRI scanner watched various movie clips of characters in airports. No matter the particulars of each clip, the subjects’ brains all shimmered through the same series of events, in keeping with boundary-defining moments that any of us would recognize. The scripts and the scenes were real—it was possible to detect them with a machine. What most interests Norman now is how they are learned in the first place.

How do we identify the scenes in a story? When we enter a strange airport, how do we know intuitively where to look for the security line? The extraordinary difficulty of such feats is obscured by how easy they feel—it’s rare to be confused about how to make sense of the world. But at some point everything was new. When I was a toddler, my parents must have taken me to the supermarket for the first time; the fact that, today, all supermarkets are somehow familiar dims the strangeness of that experience.

When I was learning to drive, it was overwhelming: each intersection and lane change seemed chaotic in its own way. Now I hardly have to think about them. My mind instantly factors out all but the important differences.

Norman clicked through the last of his slides. Afterward, a few students wandered over to the lectern, hoping for an audience with him. For the rest of us, the scene was over. We packed up, climbed the stairs, and walked into the afternoon sun.

Like Monti and Owen with Patient 23, today’s thought-decoding researchers mostly look for specific thoughts that have been defined in advance. But a “general-purpose thought decoder,” Norman told me, is the next logical step for the research. Such a device could speak aloud a person’s thoughts, even if those thoughts have never been observed in an fMRI machine. In 2018, Botvinick, Norman’s hall mate, co-wrote a paper in the journal Nature Communications titled “Toward a Universal Decoder of Linguistic Meaning from Brain Activation.”

Botvinick’s team had built a primitive form of what Norman described: a system that could decode novel sentences that subjects read silently to themselves. The system learned which brain patterns were evoked by certain words, and used that knowledge to guess which words were implied by the new patterns it encountered.
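The decoding logic can be caricatured in a few lines: learn a linear map from brain patterns to word vectors, then identify a new pattern by finding the nearest known vector. The sketch below is synthetic throughout; the dimensions, the noise model, and the resulting accuracy are illustrative assumptions, not the paper's actual data or method.

```python
# A synthetic sketch of vector-based decoding: map brain patterns to word
# vectors with ridge regression, then decode by nearest-neighbor lookup.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(1)
n_words, n_voxels, n_dims = 200, 1000, 50

word_vecs = rng.normal(size=(n_words, n_dims))        # known semantic vectors
encoding = rng.normal(size=(n_dims, n_voxels)) * 0.1  # hidden brain "code"
brain = word_vecs @ encoding + rng.normal(size=(n_words, n_voxels))

# Train on 180 words; decode the 20 held-out ones.
decoder = Ridge(alpha=1.0).fit(brain[:180], word_vecs[:180])
predicted = decoder.predict(brain[180:])

# Match each predicted vector to the nearest of the 200 known word vectors.
nearest = cosine_similarity(predicted, word_vecs).argmax(axis=1)
accuracy = (nearest == np.arange(180, 200)).mean()
print(f"held-out identification: {accuracy:.2f}")  # far above 1/200 chance
```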

The work at Princeton was funded by IARPA, an R. & D. organization that’s run by the Office of the Director of National Intelligence. Brandon Minnery, the IARPA project manager for the Knowledge Representation in Neural Systems program at the time, told me that he had some applications in mind. If you knew how knowledge was represented in the brain, you might be able to distinguish between novice and expert intelligence agents. You might learn how to teach languages more effectively by seeing how closely a student’s mental representation of a word matches that of a native speaker.

Minnery’s most fanciful idea—“Never an official focus of the program,” he said—was to change how databases are indexed. Instead of labelling items by hand, you could show an item to someone sitting in an fMRI scanner—the person’s brain state could be the label. Later, to query the database, someone else could sit in the scanner and simply think of whatever she wanted. The software could compare the searcher’s brain state with the indexer’s. It would be the ultimate solution to the vocabulary problem.

Jack Gallant, a professor at Berkeley who has used thought decoding to reconstruct video montages from brain scans—as you watch a video in the scanner, the system pulls up frames from similar YouTube clips, based only on your voxel patterns—suggested that one group of people interested in decoding were Silicon Valley investors. “A future technology would be a portable hat—like a thinking hat,” he said.

He imagined a company paying people thirty thousand dollars a year to wear the thinking hat, along with video-recording eyeglasses and other sensors, allowing the system to record everything they see, hear, and think, ultimately creating an exhaustive inventory of the mind. Wearing the thinking hat, you could ask your computer a question just by imagining the words. Instantaneous translation might be possible. In theory, a pair of wearers could skip language altogether, conversing directly, mind to mind. Perhaps we could even communicate across species.

Among the challenges the designers of such a system would face, of course, is the fact that today’s fMRI machines can weigh more than twenty thousand pounds. There are efforts under way to make powerful miniature imaging devices, using lasers, ultrasound, or even microwaves. “It’s going to require some sort of punctuated-equilibrium technology revolution,” Gallant said. Still, the conceptual foundation, which goes back to the nineteen-fifties, has been laid.

Recently, I asked Owen what the new thought-decoding technology meant for locked-in patients. Were they close to having fluent conversations using something like the general-purpose thought decoder? “Most of that stuff is group studies in healthy participants,” Owen told me. “The really tricky problem is doing it in a single person. Can you get robust enough data?” Their bare-bones protocol—thinking about tennis equals yes; thinking about walking around the house equals no—relied on straightforward signals that were statistically robust.

It turns out that the same protocol, combined with a series of yes-or-no questions (“Is the pain in the lower half of your body? On the left side?”), still works best. “Even if you could do it, it would take longer to decode them saying ‘it is in my right foot’ than to go through a simple series of yes-or-no questions,” Owen said. “For the most part, I’m quietly sitting and waiting. I have no doubt that, some point down the line, we will be able to read minds. People will be able to articulate, ‘My name is Adrian, and I’m British,’ and we’ll be able to decode that from their brain. I don’t think it’s going to happen in probably less than twenty years.”

In some ways, the story of thought decoding is reminiscent of the history of our understanding of the gene. For about a hundred years after the publication of Charles Darwin’s “On the Origin of Species,” in 1859, the gene was an abstraction, understood only as something through which traits passed from parent to child. As late as the nineteen-fifties, biologists were still asking what, exactly, a gene was made of. When James Watson and Francis Crick finally found the double helix, in 1953, it became clear how genes took physical form. Fifty years later, we could sequence the human genome; today, we can edit it.

Thoughts have been an abstraction for far longer. But now we know what they really are: patterns of neural activation that correspond to points in meaning space. The mind—the only truly private place—has become inspectable from the outside. In the future, a therapist, wanting to understand how your relationships run awry, might examine the dimensions of the patterns your brain falls into.

Some epileptic patients about to undergo surgery have intracranial probes put into their brains; researchers can now use these probes to help steer the patients’ neural patterns away from those associated with depression. With more fine-grained control, a mind could be driven wherever one liked. (The imagination reels at the possibilities, for both good and ill.) Of course, we already do this by thinking, reading, watching, talking—actions that, after I’d learned about thought decoding, struck me as oddly concrete. I could picture the patterns of my thoughts flickering inside my mind. Versions of them are now flickering in yours.

On one of my last visits to Princeton, Norman and I had lunch at a Japanese restaurant called Ajiten. We sat at a counter and went through the familiar script. The menus arrived; we looked them over. Norman noticed a dish he hadn’t seen before—“a new point in ramen space,” he said. Any minute now, a waiter was going to interrupt politely to ask if we were ready to order.

“You have to carve the world at its joints, and figure out: what are the situations that exist, and how do these situations work?” Norman said, while jazz played in the background. “And that’s a very complicated problem. It’s not like you’re instructed that the world has fifteen different ways of being, and here they are!” He laughed. “When you’re out in the world, you have to try to infer what situation you’re in.” We were in the lunch-at-a-Japanese-restaurant situation. I had never been to this particular restaurant, but nothing about it surprised me. This, it turns out, might be one of the highest accomplishments in nature.

Norman told me that a former student of his, Sam Gershman, likes using the terms “lumping” and “splitting” to describe how the mind’s meaning space evolves. When you encounter a new stimulus, do you lump it with a concept that’s familiar, or do you split off a new concept? When navigating a new airport, we lump its metal detector with those we’ve seen before, even if this one is a different model, color, and size. By contrast, the first time we raised our hands inside a millimetre-wave scanner—the device that has replaced the walk-through metal detector—we split off a new category.

Norman turned to how thought decoding fit into the larger story of the study of the mind. “I think we’re at a point in cognitive neuroscience where we understand a lot of the pieces of the puzzle,” he said. The cerebral cortex—a crumply sheet laid atop the rest of the brain—warps and compresses experience, emphasizing what’s important. It’s in constant communication with other brain areas, including the hippocampus, a seahorse-shaped structure in the inner part of the temporal lobe.

For years, the hippocampus was known only as the seat of memory; patients who’d had theirs removed lived in a perpetual present. Now we were seeing that the hippocampus stores summaries provided to it by the cortex: the sauce after it’s been reduced. We cope with reality by building a vast library of experience—but experience that has been distilled along the dimensions that matter. Norman’s research group has used fMRI technology to find voxel patterns in the cortex that are reflected in the hippocampus. Perhaps the brain is like a hiker comparing the map with the territory.

In the past few years, Norman told me, artificial neural networks that included basic models of both brain regions had proved surprisingly powerful. There was a feedback loop between the study of A.I. and the study of the real human mind, and it was getting faster. Theories about human memory were informing new designs for A.I. systems, and those systems, in turn, were suggesting ideas about what to look for in real human brains. “It’s kind of amazing to have gotten to this point,” he said.

On the walk back to campus, Norman pointed out the Princeton University Art Museum. It was a treasure, he told me.

“What’s in there?” I asked.

“Great art!” he said.

After we parted ways, I returned to the museum. I went to the downstairs gallery, which contains artifacts from the ancient world. Nothing in particular grabbed me until I saw a West African hunter’s tunic. It was made of cotton dyed the color of dark leather. There were teeth hanging from it, and claws, and a turtle shell—talismans from past kills. It struck me, and I lingered for a moment before moving on.

Six months later, I went with some friends to a small house in upstate New York. On the wall, out of the corner of my eye, I noticed what looked like a blanket—a kind of fringed, hanging decoration made of wool and feathers. It had an odd shape; it seemed to pull toward something I’d seen before. I stared at it blankly. Then came a moment of recognition, along dimensions I couldn’t articulate—more active than passive, partway between alive and dead. There, the chest. There, the shoulders. The blanket and the tunic were distinct in every way, but somehow still neighbors. My mind had split, then lumped. Some voxels had shimmered. In the vast meaning space inside my head, a tiny piece of the world was finding its proper place. ♦

Source: The Science of Mind Reading | The New Yorker


Can Lucid Dreaming Help Us Understand Consciousness?

The ability to control our dreams is a skill that more of us are seeking to acquire for sheer pleasure. But if taken seriously, scientists believe it could unlock new secrets of the mind

Michelle Carr is frequently plagued by tidal waves in her dreams. What should be a terrifying nightmare, however, can quickly turn into a whimsical adventure – thanks to her ability to control her dreams. She can transform herself into a dolphin and swim into the water. Once, she transformed the wave itself, turning it into a giant snail with a huge shell. “It came right up to me – it was a really beautiful moment.”

There’s a thriving online community of people who are now trying to learn how to lucid dream. (A single subreddit devoted to the phenomenon has more than 400,000 members.) Many are simply looking for entertainment. “It’s just so exciting and unbelievable to be in a lucid dream and to witness your mind creating this completely vivid simulation,” says Carr, who is a sleep researcher at the University of Rochester in New York state. Others hope that exercising skills in their dreams will increase their real-life abilities. “A lot of elite athletes use lucid dreams to practice their sport.”

And there are more profound reasons to exploit this sleep state, besides personal improvement. By identifying the brain activity that gives rise to the heightened awareness and sense of agency in lucid dreams, neuroscientists and psychologists hope to answer fundamental questions about the nature of human consciousness, including our apparently unique capacity for self-awareness. “More and more researchers, from many different fields, have started to incorporate lucid dreams in their research,” says Carr.

This interest in lucid dreaming has been growing in fits and starts for more than a century. Despite his fascination with the interaction between the conscious and subconscious minds, Sigmund Freud barely mentioned lucid dreams in his writings. Instead, it was an English aristocrat and writer, Mary Arnold-Forster, who provided one of the earliest and most detailed descriptions in the English language in her book Studies in Dreams.

Published in 1921, the book offered countless colourful escapades in the dreamscape, including charming descriptions of her attempts to fly. “A slight paddling motion by my hands increases the pace of the flight and is used either to enable me to reach a greater height, or else for the purpose of steering, especially through any narrow place, such as through a doorway or window,” she wrote.

Based on her experiences, Arnold-Forster proposed that humans have a “dual consciousness”. One of these, the “primary self”, allows us to analyze our circumstances and to apply logic to what we are experiencing – but it is typically inactive during sleep, leaving us with a dream consciousness that cannot reflect on its own state. In lucid dreams, however, the primary self “wakes up”, bringing with it “memories, knowledge of facts, and trains of reasoning”, as well as the awareness that one is asleep.

She may have been on to something. Neuroscientists and psychologists today may balk at the term “dual consciousness”, but most would agree that lucid dreams involve an increased self-awareness and reflection, a greater sense of agency and volition, and an ability to think about the more distant past and future. These together mark a substantially different mental experience from the typically passive state of non-lucid dreams.

“There’s a grouping of higher-level features, which seem to be very closely associated with what we think of as human consciousness, which come back in that shift from a non-lucid to a lucid dream,” says Dr Benjamin Baird, a research scientist at the Center for Sleep and Consciousness at the University of Wisconsin-Madison. “And there’s something to be learned in looking at that contrast.”

You may wonder why we can’t just scan the brains of fully awake subjects to identify the neural processes underlying this sophisticated mental state. But waking consciousness also involves many other phenomena, including sensory inputs from the outside world, that can make it hard to separate the different elements of the experience. When a sleeper enters a lucid dream, nothing has changed apart from the person’s conscious state. As a result, studies of lucid dreams may provide an important point of comparison that could help to isolate the specific regions involved in heightened self-awareness and agency.

Unfortunately, it has been very hard to get someone to lucid dream inside the noisy and constrained environment of an fMRI scanner. Nevertheless, a case study published in 2012 confirmed that it can be done. The participant, a frequent lucid dreamer, was asked to shift his gaze from left to right whenever he “awoke” in his dream – a dream motion that is also known to translate to real eye movements. This allowed the researchers to identify the moment at which he had achieved lucidity.

The brain scans revealed heightened activity in a group of regions, including the anterior prefrontal cortex, that are together known as the frontoparietal network. These areas are markedly less active during normal REM sleep, but they became much busier whenever the participant entered his lucid dream – suggesting that they are somehow involved in the heightened reflection and self-awareness that characterize the state.

Several other strands of research all point in the same direction. Working with the famed consciousness researcher Giulio Tononi, Baird has recently examined the overall brain connectivity of people who experience more than three lucid dreams a week. In line with the findings of the case study, he found evidence of greater communication between the regions in the frontoparietal network – a difference that may have made it easier to gain the heightened self-awareness during sleep.

Further evidence comes from the alkaloid galantamine, which can be used to induce lucid dreams. In a recent study, Baird and colleagues asked people to sleep for a few hours before waking. The participants then took a small dose of the drug, or a placebo, before practising a few visualisation exercises that are also thought to modestly increase the chances of lucid dreaming. After about half an hour, they went back to sleep.

The results were striking. Just 14% of those taking a placebo managed to gain awareness of their dream state, compared with 27% taking a 4mg dose of galantamine, and 42% taking an 8mg dose. “The effect is humongous,” says Baird.

Galantamine has been approved by Nice, the National Institute for Health and Care Excellence, to treat moderate Alzheimer’s disease. It is thought to work by boosting concentrations of the neurotransmitter acetylcholine at our brain cells’ synapses. Intriguingly, previous research had shown that this can raise signalling in the frontoparietal regions from a low baseline. This may have helped the dreaming participants to pass the threshold of neural activity that is necessary for heightened self-awareness. “It’s yet another source of evidence for the involvement of these regions in lucid dreaming,” says Baird, who now hopes to conduct more detailed fMRI studies to test the hypothesis.

Prof Daniel Erlacher, who researches lucid dreams at the University of Berne in Switzerland, welcomes the increased interest in the field. “There is more research funding now,” he says, though he points out that some scientists are still sceptical of its worth.

That cynicism is a shame, since there could be important clinical applications of these findings. When people are unresponsive after brain injuries, it can be very difficult to establish their level of consciousness. If work on lucid dreams helps scientists to establish a neural signature of self-awareness, it might allow doctors to make more accurate diagnoses and prognoses for these patients and to determine how they might be experiencing the effects of their illness.

At the very least, Baird’s research is sure to attract attention from the vast online community of wannabe lucid dreamers, who are seeking more reliable ways to experience the phenomenon. Galantamine, which can be extracted from snowdrops, is already available as an over-the-counter dietary supplement in the US, and its short-term side-effects are mild – so there are currently no legal barriers for Americans who wish to self-experiment. But Baird points out that there may be as-yet-unknown long-term consequences if it is used repeatedly to induce lucid dreams. “My advice would be to use your own discretion and to seek the guidance of a physician,” he says.

For the time being, we may be safest using psychological strategies (see below). Even then, we should proceed with caution. Dr Nirit Soffer-Dudek, a psychologist at Ben-Gurion University of the Negev in Israel, points out that most attempts to induce lucid dreaming involve some kind of sleep disturbance – such as waking in the middle of the night to practise certain visualisations. “We know how important sleep is for your mental and physical health,” she says. “It can even influence how quickly your wounds heal.” Anything that regularly disrupts our normal sleep cycle could therefore have undesired results.

Many techniques for lucid dream induction also involve “reality testing”, in which you regularly question whether you are awake, in the hope that those thoughts will come to mind when you are actually dreaming. If it is done too often, this could be “a bit disorienting”, Soffer-Dudek suggests – leading you to feel “unreal” rather than fully present in the moment.

Along these lines, she has found that people who regularly try to induce lucid dreams are more likely to suffer from dissociation – the sense of being disconnected from one’s thoughts, feelings and sense of identity. They were also more likely to show signs of schizotypy – a tendency for paranoid and magical thinking.

Soffer-Dudek doubts that infrequent experiments will cause lasting harm, though. “I don’t think it’s such a big deal if someone who is neurologically and psychologically healthy tries it out over a limited period,” she says.

Perhaps the consideration of these concerns is an inevitable consequence of the field’s maturation. As for my own experiments, I am happy to watch the research progress from the sidelines. One hundred years after Mary Arnold-Forster’s early investigations, the science of lucid dreaming may be finally coming of age.

How to lucid dream

There is little doubt that lucid dreaming can be learned. One of the best-known techniques is “reality testing”, which involves asking yourself regularly during the day whether you are dreaming – with the hope that this will spill into your actual dreams.

Another is Mnemonic Induction of Lucid Dreaming (Mild). Every time you wake from a normal dream, you spend a bit of time identifying the so-called “dream signs” – anything that was bizarre or improbable and differed from normal life. As you then try to return to sleep, you visualise entering that dream and repeat to yourself the intention: “Next time I’m dreaming, I will remember to recognise that I’m dreaming.” Some studies suggest that it may be particularly effective if you set an alarm to wake up after a few hours of sleep and spend a whole hour practising Mild, before drifting off again. This is known as WBTB – Wake Back to Bed.

There is nothing particularly esoteric about these methods. “It’s all about building a ‘prospective’ memory for the future – like remembering what you have to buy when you go shopping,” says Prof Daniel Erlacher.

Technology may ease this process. Dr Michelle Carr recently asked participants to undergo a 20-minute training programme before they fell asleep. Each time they heard a certain tone or saw the flash of a red light, they were asked to turn their attention to their physical and mental state and to question whether anything was amiss that might suggest they were dreaming. Afterwards, they were given the chance to nap, as a headset measured their brain’s activity.

When it sensed that they had entered REM sleep, it produced the same cues as the training, which – Carr hoped – would be incorporated into their dreams and act as reminders to check their state of consciousness. It worked, with about 50% experiencing a lucid dream.

Some commercial devices already purport to offer this kind of stimulation – though most have not been adequately tested for their efficacy. As the technology advances, however, easy dream control may come within anyone’s reach.

By: David Robson, a writer based in London. His next book, The Expectation Effect: How Your Mindset Can Transform Your Life (Canongate), is available to preorder now.

Source: Can lucid dreaming help us understand consciousness? | Consciousness | The Guardian


What Causes Weird Phobias & What Can We Do About Them?

I’m ashamed to say that when my husband told me he was terrified of cooked eggs, I mocked him and made jokes, from pretending that there was an egg in something he had just bitten into to waving my egg-based dishes under his nose.

I thought that his reactions of horror were a little exaggerated. There are plenty of foods I don’t like but I’m certainly not terrified at the thought of a kidney bean. It turns out that my reaction was wrong – and I still feel pangs of guilt for it. The fact is, my husband has a phobia. He doesn’t just hate eggs, they cause him trauma. He probably won’t read this as even the word egg is vile to him.

He won’t go to cafes because of the risk that the pan his breakfast is cooked in might previously have held an egg. He has been physically sick at the smell of cooking eggs. If food he had ordered contained even a sliver of egg, he would not touch the entire dish, even the parts that weren’t touching it.

Many people will be able to relate to his experience – or mine. It’s possible to have a phobia of anything, despite many believing that only the obviously scary things – think spiders, flying, snakes – constitute a real, genuine fear. My sister has a fear of patterns – particularly dots, but any kind of repetitive pattern. Anything with hectic shapes, lines, dots or colours – whether a piece of art, wallpaper or packaging – terrifies her.

Other ‘weird’ phobias can include arachibutyrophobia, the fear of peanut butter sticking to the roof of your mouth. Octophobia is the fear of the number eight and hippopotomonstrosesquippedaliophobia is, ironically, the fear of long words. Celebrity phobias include Billy Bob Thornton’s ‘crippling’ fear of antique furniture, Kylie Minogue’s phobia of clothes hangers, Matthew McConaughey’s fear of revolving doors, and Khloe Kardashian’s horror at belly buttons.

My husband felt vindicated when he found out that his own phobia has a name: ovophobia. But where do these phobias originate? Are they just innate? Or are they linked to childhood experiences that may have been forgotten, but which triggered a connection to the item of fear?

When does a fear become a phobia?

Fear is a normal part of human life. But it becomes a phobia when this fear is overwhelming and debilitating. Someone with a phobia will have an extreme or unrealistic sense of danger about a particular situation, sensation, animal, or object. It might not make sense to other people, because the focus of the phobia isn’t obviously dangerous.

Phobias come under the umbrella of anxiety disorders, and can cause physical symptoms such as:

  • unsteadiness, dizziness and lightheadedness
  • nausea
  • sweating
  • increased heart rate or palpitations
  • shortness of breath
  • trembling or shaking
  • an upset stomach

My husband recently recalled, after years of trying to figure his egg fear out, that he was always terrified of visiting a relative’s house as a toddler. This relative had a booming voice, slammed his fist on the table without warning and threatened to lock him in the coal shed, as well as saying that there was a monster living inside the sink.

His mum recalls being able to feel both him and his brother physically sweating with fear while on her knee, and the one constant in that kitchen was fried eggs being cooked. It’s clear that he associates the smell and sight of eggs with frightening times as a child. It makes perfect sense why the phobia has manifested itself in this way.

According to Clinical Partners, who specialize in the treatment of phobias, around 5% of children and 16% of teenagers in the UK suffer from a phobia, with most phobias developing before the age of 10.

Children and teenagers with phobias often feel ashamed about their fears and keep them secret from their friends in case they are teased. The same goes for adults in a workplace or social setting: “I’m frightened of patterns, bananas, beards or the colour yellow” is hardly a comfortable ice-breaker.

And yet, working alongside a new colleague with a beard, or receiving all memos on yellow paper, would be triggering for those suffering from said phobias, making for a very uncomfortable environment, both for the sufferer and for colleagues who have no idea they’re causing alarm.

Clinical Partners explains: ‘Phobias arise for different reasons but a bad experience in early years can trigger a pattern of thoughts that result in a powerful fear of a situation – for instance if your child falls ill after having an injection, they may develop an ongoing fear to injections, which can get worse over time.

‘Children may also “learn” to have a phobia – for instance if a close family member is afraid of spiders and the child witnesses them screaming when they see one, they may also develop that fear.’ There are a lot of environmental factors at play here but for the less common phobias, we have to dig deeper to try and work out the source.

There is no guarantee that discovering that source will erase your phobia but if the phobia is seriously impacting your life to the point where you can’t work, go out, become ill and even fear dying, it’s a valid starting point to understand the root of it.

CBT and talking therapies are available for this. Start by talking to your GP; phobias are a recognized condition and for many, a gradual but very carefully carried out exposure to the item of fear by a professional can be an important first step.

For my husband, his knowledge of what caused his phobia is enough. He isn’t desperate to get over his fear of eggs and doesn’t want to spend weeks and months of treatment just to potentially be ok with an egg yolk dribbling onto his bacon.

But for others, treatment is vital in order to get to a place where the phobia is not ruling their life. What can the rest of us do? Showing compassion and understanding – and never poking fun – is key. It’s a hard and embarrassing thing to confess, so don’t break a person’s confidence by waving a peeled banana under the nose of someone who is scared of them.

At the same time, you don’t have to wrap a person with a phobia in cotton wool and treat them any differently; simply be conscious of their fear and check your own actions to ensure that you are not inadvertently causing them discomfort.

Phobias are very real and sometimes we don’t know where they originate or why they affect us so much. It’s a condition we have been programmed to underestimate, but given the mental-health impacts phobias can lead to, we all need to be more accepting that people can be and are terrified of things we don’t understand.


Source: What causes ‘weird phobias – and what can we do about them? | Metro News


Don’t Wish for Happiness. Work for It

In his 1851 work American Notebooks, Nathaniel Hawthorne wrote, “Happiness in this world, when it comes, comes incidentally. Make it the object of pursuit, and it leads us a wild-goose chase, and is never attained.” This is basically a restatement of the Stoic philosophers’ “paradox of happiness”: To attain happiness, we must not try to attain it.

A number of scholars have set out to test this claim. For example, researchers writing in the journal Emotion in 2011 found that valuing happiness was associated with lower moods, less well-being, and more depressive symptoms under conditions of low life stress. At first, this would seem to support the happiness paradox—that thinking about it makes it harder to get. But there are alternative explanations. For example, unhappy people might say they “value happiness” more than those who already possess it, just as hungry people value food more than those who are full.

More to the point, wishing you were happier does not mean that you are working to improve your happiness. Think of your friend who complains about her job every day but never tries to find a new one. No doubt she wishes she were happier—but for whatever reason, she doesn’t do the work to improve her circumstances. This is not evidence that she can’t become happier, or that her wishes are bringing her down.

In truth, happiness requires effort, not just desire. Focusing on your dissatisfaction and wishing things were different in your life is a recipe for unhappiness if you don’t take action to put yourself on a better path. But if you make an effort to understand human happiness, formulate a plan to apply what you learn to your life, execute on it, and share what you learn with others, happiness will almost surely follow.

Wishing without acting tends to curdle into rumination. In contrast, self-awareness—being attentive to our own thinking processes—leads to new knowledge and breakthroughs. One recent study in the Proceedings of the National Academy of Sciences concluded that self-awareness allows us to recognize emotional cues and distractions and to redirect our minds in productive ways. In essence, studying your own mind and pondering ways to improve your happiness takes inchoate anxieties and mental meandering and transforms them into real plans for life improvement.

Rumination is to be stuck; self-reflection is to seek to be unstuck. The trick, of course, is telling the difference. Say you have just experienced a breakup. If you go over the painful circumstances again and again, like watching a looped video for hours and days, this is rumination. To break out of the cycle and begin the process of self-reflection, you’d have to follow the painful memory with insightful questions. For example: “Is this a recurring pattern in my life? If so, why?” “If I could do it over again, what would I do differently?” “What can I read to help inform me more about what I have just experienced and use it constructively?”

Self-reflection moves feelings of unhappiness from our reactive brains to our executive brains, where we can manage them through concrete action. The action itself is crucial. There is an old joke about a man who asks God every day to let him win the lottery. After many years of this prayer, he finally gets an answer from heaven: “Do me a favor,” says God. “Buy a ticket.” If you want happiness, reflecting on why you don’t have it and seeking information on how to attain it is a good start. But if you don’t use that information, you’re not buying a ticket.

Easier said than done, I realize. When we are happy, we are primed for action; unhappiness often makes us want to cocoon. The way to fight this is to do the opposite of what you want to do: When you’re unhappy, don’t curl up and watch a sad movie. Exercise, call a friend in need, and read up on happiness instead. You will be reprogrammed for action.

Once you’ve reflected (not ruminated), learned, taken action, and reaped the happy rewards, it’s time to make sure the benefits are not temporary—that you don’t fall back into simply wishing. The key is sharing your new knowledge with other people.

Teaching arithmetic problems to others has been shown to improve people’s ability to solve them, and in my experience, the same is true for the study of happiness: Sharing knowledge cements it in your own mind. One of the most important assignments I give my graduate students is for them to talk about the science and art of happiness at every party they go to. This ensures that they have the ideas clear enough in their heads to explain them to others. (It also makes them more popular.)

Further, when we share knowledge about how to become happier, we persuade ourselves every bit as much as we do others. It is a well-known phenomenon in psychology that asking people to argue in favor of something can be a great way to get them to believe it. Sharing the secrets to happiness will also make you happier, because doing so is an act of love. And as we have all learned, love is generative: The more you give it, the more of it you get.

I tremble at the thought of contradicting Hawthorne and the Stoics. But it is not true that pursuing happiness must lead to a “wild-goose chase,” or that thinking about happiness makes it more elusive. Like everything else in life that is worthwhile, pursuing happiness requires intellectual energy and real effort. You simply have to do the work. The good news is that the work will be joyful, and the results quite wonderful.

By Arthur C. Brooks
Arthur C. Brooks is a contributing writer at The Atlantic, the William Henry Bloomberg professor of the practice of public leadership at the Harvard Kennedy School, a professor of management practice at the Harvard Business School, and host of the podcast The Art of Happiness With Arthur Brooks.

Source: How to Have a Happiness Breakthrough – The Atlantic


Related Links:

How To Use Psychology To Stop Your Impulsive Online Shopping

Combine a pandemic that’s kept us cooped up indoors with an unusually cold winter, and what do you get? A perfect recipe for some highly questionable online impulse purchases. Maybe you can’t stop hunting for a cocktail dress to wear at those summer weddings you may or may not attend.

Or maybe you suddenly find your AmazonBasics kitchenware lacking compared with the gear of the celebrity chefs you’ve taken recipe inspiration from. Either way, if you feel like your online shopping has been more out of control than usual, you’re not alone: consumer spending on e-commerce platforms shot up 44% over the past year, according to data from the U.S. Commerce Department.

Financial experts will tell you that if you want to curb unnecessary spending, you need to unsubscribe from marketing emails, block websites, and delete your credit card information from your browser. It’s sound advice that does the trick for many — but sometimes these tips can backfire or simply not go far enough. (Not to point any fingers, but this author may or may not have accidentally memorized her own credit card number from manually typing it in too many times.)

So if you’re a fellow member of the credit card memorization club who’s still spending more online than you’d like, you may need to replace easy hacks with longer-lasting habits rooted in behavioral psychology.

“I don’t think [easy hacks] are nearly as helpful as understanding why you’re doing it in the first place,” says Brad Klontz, a financial psychologist and certified financial planner. Here’s what to know about the psychology behind impulsive shopping and how to use that knowledge to create better habits.

Be conscious of your decision-making process

Most people would like to consider themselves rational beings, making decisions without letting their emotions get in the way. But behavioral economists have some harsh news: that simply isn’t the case. And when it comes to shopping, external players are actively encouraging you to act irrationally.

“Marketers are experts at triggering you emotionally to get you to spend your money,” Klontz says. In the digital age, where every click seemingly opens onto a never-ending maze of email alerts and carousel ads, it can be downright impossible to avoid getting wound up over the fear of missing out on a great deal.

“When we become emotionally charged, we become rationally challenged,” Klontz says. “Our prefrontal cortex becomes impaired.”

The prefrontal cortex is the area of your brain responsible for decision-making, and engaging it to get ahead of your spending triggers requires vigilance. Luckily, while the prevalence of online shopping can hinder people’s ability to think rationally, it also offers benefits you can’t take advantage of in-store. Tricks like letting your cart sit for 24 hours or disabling alerts from stores can force you to pause and consider whether a purchase is actually a good decision.

But managing your decision-making works best when you individualize the experience. One way to do this is to take stock of the categories that tend to attract your impulsive spending and create specific parameters for what makes a purchase justifiable. For example, if shoes are your vice, you might ask yourself: Can I wear them with X number of outfits? Do I already own a pair that serves the same function? Will they last for more than one season? And so on.

If you can honestly answer the questions you’ve decided matter, and your answers justify spending the money, then you’ll be less likely to cave when presented with the opportunity to make an impulsive purchase.
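
To make the idea concrete, here is a minimal sketch of such a rule set in Python. The specific questions and thresholds are illustrative assumptions, not rules from the article; the point is only that your parameters should be explicit enough to check honestly.

```python
# A sketch of explicit purchase rules, using the shoe example above.
# The thresholds are illustrative assumptions, not recommendations.

def purchase_is_justified(outfits_matched: int,
                          similar_pairs_owned: int,
                          seasons_of_use: int) -> bool:
    """Return True only if the purchase clears every rule you set."""
    return all([
        outfits_matched >= 3,      # versatile enough to wear regularly
        similar_pairs_owned == 0,  # not redundant with what you own
        seasons_of_use > 1,        # will outlast a single season
    ])

# Shoes that match four outfits, duplicate nothing, and last two seasons
print(purchase_is_justified(4, 0, 2))  # True: the purchase clears your bar
```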

Train your brain to prioritize long-term gains…

What does buying a brand new KitchenAid mixer have to do with your ancestors foraging for berries to keep from starving? A lot, actually.

“So much of what we do around money and life relates back to what I call our ‘cave person’ brain,” Klontz says.

No, we don’t need to stockpile months’ worth of resources to protect our clan from outside threats, but the biological drive that motivates these survival behaviors appears to have a hand in the way people make shopping decisions.

Animals — including humans — have reward centers in their brains that respond to the “feel good” neurotransmitter dopamine when they acquire something they want or achieve a goal. Using that heightened sense of reward to your advantage by reorienting your priorities from buying something new to meeting more essential long-term financial goals could be the key to curbing unnecessary spending.

Klontz suggests those who find themselves overspending take stock of their overall financial health first and set goals from there: “Most people aren’t paying themselves first. That’s where the problem arises.”

Many financial advisors encourage people to follow the 50-30-20 breakdown: put 50% of your net income toward living expenses, 30% toward discretionary spending (aka fun money), and 20% into savings. If that last category isn’t up to par or you aren’t contributing a substantial amount to a retirement plan, Klontz says it should be your top priority before any unnecessary lifestyle upgrades.
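
The rule itself is simple arithmetic. As a minimal sketch, here is the 50-30-20 split in Python, using a hypothetical $4,000 monthly net income (the figure is illustrative, not from the article):

```python
# A minimal sketch of the 50-30-20 budget rule described above.

def fifty_thirty_twenty(net_income: float) -> dict[str, float]:
    """Split net income into the three standard buckets."""
    return {
        "living expenses (50%)": net_income * 0.50,
        "discretionary (30%)": net_income * 0.30,
        "savings (20%)": net_income * 0.20,
    }

# Hypothetical example: $4,000 monthly net income
for bucket, amount in fifty_thirty_twenty(4000).items():
    print(f"{bucket}: ${amount:,.2f}")
# living expenses (50%): $2,000.00
# discretionary (30%): $1,200.00
# savings (20%): $800.00
```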

But working to build a healthy savings cushion can still satisfy our natural inclination to gather and protect — it just requires training. According to research from Santa Clara University, while a small portion of people have a genetic predisposition to save more, thanks to a stronger link between their short-term and long-term thinking processes, the majority of us can get there by gradually rewiring our brains to prioritize long-term outcomes over short-term gains. The researchers found, for example, that when people were given tools to help them pre-commit to putting more money into their savings accounts months in advance, they were more likely to follow through and to feel more positive about saving rather than spending.

Financial goal-setting tools that track your saving progress (apps like YNAB and Mint, or a good old-fashioned spreadsheet) can help you change the way you think about saving, from a chore-like must-do into a goal you can continually look forward to.

… And earn your present-day rewards

If your financial house is in order, you’re meeting that 20% savings threshold, and you still have money left over, then “frankly, I don’t care what you do with the rest,” Klontz says.

But if you want to avoid accumulating a bunch of junk you won’t actually use — even if you have the money for it — then connecting the goal of saving for a big purchase to meeting goals in your personal or work life can deliver a powerful dopamine response more satisfying than making daily “trips” to Amazon.

Here’s how it works: Say you want to buy a $250 memory foam mattress topper, an upgrade to your current set-up that will get plenty of use. At the same time, you have to give a major presentation at work in two weeks, one that requires extra attention each day to prepare. If you set aside $25 every day you work on the project, you can time an exciting purchase to the completion of the presentation. The delayed gratification, and the association of higher effort with a higher reward, can train you to prioritize long-term satisfaction over a short-term thrill.
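
For the numerically inclined, the pairing works out exactly. Here is a minimal sketch of the arithmetic, assuming “two weeks” of prep means ten working days (the article doesn’t spell that out):

```python
# Sketch of the effort-reward pairing described above. The ten-workday
# figure is an assumption; the article says only "two weeks."

GOAL_PRICE = 250   # cost of the mattress topper, from the example
PER_DAY = 25       # set aside each day you work on the presentation
WORKDAYS = 10      # two weeks of weekday preparation (assumed)

saved = PER_DAY * WORKDAYS
print(f"Saved ${saved} toward a ${GOAL_PRICE} goal")  # Saved $250 toward a $250 goal
assert saved >= GOAL_PRICE  # the purchase "unlocks" when the work ships
```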

Another option is to keep a list of spending ideas that come to you throughout the day — but don’t go browsing for them yet. When you browse, or even let something sit in your cart for a few days, Klontz says you’re more likely to be blasted with advertisements and price-change alerts specifically designed to trigger feelings of scarcity, which can push you toward choices you usually wouldn’t make.

Instead, jot down every potential purchase that comes up throughout the week and pick a dedicated day to comb through them to decide if you want to fork over the cash. Putting some distance between when the idea strikes you and when you actually hit ‘buy’ allows you the time to think through spending decisions and compare which items on your list will be most valuable to you.

If your finances are secure, there’s no need to deprive yourself of a fun splurge every now and then. It’s just about knowing how to keep yourself in check when faced with tempting offers.

By Kenadi Silcox

Source: How to Use Psychology to Stop Your Impulsive Online Shopping | Money
