Posts Tagged ‘Stanford University’

Free Exchange | Replacebook

March 27, 2019

The title of this post is identical to the title of a piece in the Finance & Economics section of the 16 February 2019 issue of “The Economist.” The article notes, “There has never been such an agglomeration of humanity as Facebook. Some 2.3bn people, 30% of the world’s population, engage with the network each month.” It describes an experiment in which researchers kicked a sample of people off Facebook and observed the results.

In January, Hunt Allcott, of New York University, and Luca Braghieri, Sarah Eichmeyer, and Matthew Gentzkow, of Stanford University, published results of the largest such experiment yet. They recruited several thousand Facebookers and sorted them into control and treatment groups. Members of the treatment group were asked to deactivate their Facebook profiles for four weeks in late 2018. The researchers checked up on their volunteers to make sure they stayed off the social network, and then studied the results.

On average, those booted off enjoyed an additional hour of free time. They tended not to redistribute their liberated minutes to other websites and social networks, but instead watched more television and spent time with friends and family. They consumed much less news, and were consequently less aware of events but also less polarized in their views about them than those still on the network. Leaving Facebook boosted self-reported happiness and reduced feelings of depression and anxiety.

Several weeks after the deactivation period, those who had been off Facebook spent 23% less time on it than those who never left, and 5% of the forced leavers had yet to turn their accounts back on. And the amount of money subjects were willing to accept to shut off their accounts for another four weeks was 13% lower after the month off than it had been before.

In previous posts HM has made the point that our attentional resources are limited and that they should not be wasted. HM has also recommended quitting Facebook and similar accounts. Of course, this is a personal question regarding how each of us uses our attentional resources. The key point is to be cognizant that our precious attentional resources are limited, and to spend them wisely and not waste them.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The Raising of Children in a Digital Age

October 23, 2018

The title of this post is identical to the title of a letter in “READER, COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf. Wolf refers to her chapters as letters. Wolf writes: “The tough questions raised in the previous letters come to roost like chickens on a fence in the raising of our children. They require of us a developmental version of the issues summarized to this point: Will time-consuming, cognitively demanding deep-reading processes atrophy or be gradually lost within a culture whose principal mediums advantage speed, immediacy, high levels of stimulation, multitasking, and large amounts of information?”

She continues, “Loss, however, in this question implies the existence of a well-formed, fully elaborated circuitry. The reality is that each new reader—that is, each child—must build a wholly new reading circuit. Our children can form a very simple circuit for learning to read and acquire a basic level of decoding, or they can go on to develop highly elaborated reading circuits that add more and more sophisticated intellectual processes over time.”

These not-yet-formed reading circuits present unique challenges and a complex set of questions. First, will the early-developing cognitive components of the reading circuit be altered by digital media before, while, and after children learn to read? What will happen to the development of their attention, memory, and background knowledge—processes known to be affected in adults by multitasking, rapidity, and distraction? Second, if they are affected, will such changes alter the makeup of the resulting expert reading circuit and/or the motivation to form and sustain deep-reading capacities? Finally, what can we do to address the potential negative effects of varied digital media on reading without losing their immensely positive contributions to children and to society?

The digital world grabs children. A 2015 RAND study reported that the average amount of time spent by three-to-five-year-old children on digital devices was four hours a day, with 75% of children from zero to eight years old having access to digital devices. This figure is up from 52% only two years earlier. The use of digital devices increased by 117% in just one year. Our evolutionary reflex, the novelty bias, pulls our attention immediately toward anything new. The neuroscientist Daniel Levitin says, “Humans will work just as hard to obtain a novel experience as we will to get a meal or a mate…In multitasking, we unknowingly enter an addiction loop as the brain’s novelty centers become rewarded for processing shiny new stimuli, to the detriment of our prefrontal cortex, which wants to stay on task and gain the rewards of sustained effort and attention. We need to train ourselves to go for the long reward and forgo the short one.”

Levitin claims that children can become so accustomed to a continuous stream of competitors for their attention that their brains are, for all practical purposes, being bathed in hormones such as cortisol and adrenaline, the hormones more commonly associated with fight, flight, and stress. Children of three or four, and sometimes even two and younger, at first passively receive and then, ever so gradually, come to require the levels of stimulation of much older children on a regular basis.

The Stanford University neuroscientist Russell Poldrack and his team have found that some digitally raised youth can multitask if they have been trained sufficiently on one of the tasks. Unfortunately, not enough information is reported to evaluate this claim, other than to leave the question open and look to further research to see how these skills can develop.

Wolf raises legitimate concerns. Much research is needed. But the hope is that damaging effects can be eliminated or minimized. Perhaps certain types of training with certain types of individuals can even minimize the costs of multitasking.

The Upside of Stress

June 10, 2018

HM is surprised that he is writing this title. As a person and as a psychologist, he has thought that stress is harmful and something to be avoided. The author of the book “The Upside of Stress,” Kelly McGonigal, is also a psychologist, a health psychologist at Stanford University to be specific, who also thought that stress was harmful and something that should be avoided. But we psychologists change our minds when the data indicate that we should. The data so indicated, and we changed our minds. The subtitle of Dr. McGonigal’s book is “Why Stress is Good for You, and How to Get Good at It.”

In 1998, thirty thousand adults in the United States were asked how much stress they had experienced in the past year. They were also asked, “Do you believe stress is harmful to your health?” Eight years later, the researchers examined public records to find out who among the thirty thousand participants had died. High levels of stress increased the risk of dying by 43%, but this increased risk applied only to people who also believed that stress was harmful to their health. People who reported high levels of stress but who did not view their stress as harmful were not more likely to die. Moreover, they had the lowest risk of death of anyone in the study, even lower than those who reported experiencing very little stress.

So it doesn’t appear that stress alone is harmful. Rather, it is the combination of stress and the belief that stress is harmful. The researchers estimated that over the eight years of the study, 182,000 Americans may have died prematurely because they believed that stress was harming their health. According to statistics from the Centers for Disease Control and Prevention, that makes “believing stress is bad for you” the fifteenth-leading cause of death in the United States, killing more people than skin cancer, HIV/AIDS, and homicide. The researchers had looked at a wide range of factors that might explain the finding, including gender, race, ethnicity, age, education, income, work status, marital status, smoking, physical activity, chronic health conditions, and health insurance. None of these factors explained why stress beliefs interacted with stress levels to predict mortality.
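
For readers curious what it means statistically for stress beliefs to “interact” with stress levels, here is a minimal, purely illustrative sketch in Python. It is not the researchers’ actual analysis: the data are synthetic, and the variable names and coefficients are invented for illustration (the 0.36 log-odds coefficient is chosen only because exp(0.36) ≈ 1.43, loosely echoing the 43% figure above). The point is the interaction term: mortality is modeled as depending on the combination of stress level and stress belief, not on either alone.

```python
# Illustrative only: synthetic data, not the actual study.
# Shows how an interaction between stress level and stress belief
# can be tested with logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 30_000  # roughly the size of the survey sample described above

high_stress = rng.integers(0, 2, n)       # 1 = reported high stress
believes_harmful = rng.integers(0, 2, n)  # 1 = believes stress is harmful

# Hypothetical risk: elevated mortality only when high stress is
# combined with the belief that stress is harmful.
log_odds = -3.0 + 0.36 * high_stress * believes_harmful
p_death = 1 / (1 + np.exp(-log_odds))
died = rng.binomial(1, p_death)

df = pd.DataFrame({"died": died,
                   "high_stress": high_stress,
                   "believes_harmful": believes_harmful})

# In the formula, '*' expands to both main effects plus the interaction.
model = smf.logit("died ~ high_stress * believes_harmful", data=df).fit()
print(model.summary())
# A positive, significant coefficient on high_stress:believes_harmful,
# with near-zero main effects, is the statistical signature of the
# finding described above: neither factor alone raises mortality.
```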

It is known that beliefs and attitudes are important. One example is that people with a positive attitude about aging live longer than those who hold negative stereotypes about getting older. A classic study by researchers at Yale University followed middle-aged adults for 20 years. Those who had a positive view of aging in midlife lived an average of 7.6 years longer than those who had a negative view. To put this finding in perspective, many factors we regard as obvious and important, such as exercising regularly, not smoking, and maintaining healthy blood pressure and cholesterol levels, have been shown, on average, to add less than four years to one’s life span.

After learning of these findings, Dr. McGonigal had to face the fact that by teaching the dangers of stress, she was actually damaging the health of her students, not helping them. Dr. McGonigal learned from these studies and from talking to scientists who are part of a new generation of stress researchers, whose work is redefining our understanding of stress by illuminating its upside. “The latest science reveals that stress can make you smarter, stronger, and more successful. It helps us learn and grow, and it can even inspire courage and compassion. The best way to manage stress is not to reduce or avoid it, but rather to rethink and even embrace it.”

The next thirteen healthy memory blog posts will be based on Dr. McGonigal’s book, to help you rethink and even embrace stress.

Key to developing an effective response to stress is the concept of mindsets. The healthy memory blog has been a strong advocate of growth mindsets, in which we embrace continual new learning. Learning about stress is just another topic to add to our growth mindsets.

Why We Hate

May 28, 2018

Why we hate is the topic of the first chapter in The OPPOSITE of HATE: A Field Guide to Repairing Our Humanity by Sally Kohn. The chapter begins with a quote from Booker T. Washington: “I would permit no man…to narrow and degrade my soul by making me hate him.”

In 1977, Lee Ross and some colleagues conducted a study in which Stanford University students were randomly assigned to participate in a fake quiz show, either as questioners, contestants, or audience members. The questioners were asked to come up with ten questions based on their own knowledge, and the contestants had to try to answer those questions. Everyone, including the audience, was well aware that this was the setup—in other words, they knew that by design the people who came up with the questions knew the answers far better than those supposed to answer them. Yet afterwards, the students participating as audience members said they thought the questioners were inherently smarter than the contestants. They discounted the very obvious staged context. Even more surprising, the contestants themselves rated the questioners as more knowledgeable. These results are truly mind-boggling, and these were Stanford University students. When he wrote up this experiment, Lee Ross coined the phrase “fundamental attribution error.”

Two years later, psychologist Thomas Pettigrew took matters one step further by introducing what he called the “ultimate attribution error.” Pettigrew reasoned that if we assume that the negative behaviors of other individuals are attributable to their inherent, internal disposition, the same effect would be magnified in our prejudices against other groups. We are all members of in-groups and out-groups. Our family is an in-group and it is likely that our neighborhood is an in-group also. But the family in the neighborhood on the other side of town is an out-group. Membership in these groups is relative. If you’re primed to think about your entire town versus another town—for example, during a sporting match—suddenly the other neighborhood in your town becomes part of your out-group.

Some demarcations between in-groups and out-groups have become cemented in our society’s collective psyche. In the United States today, race, gender, immigration status, and economic class are categories of identity we’re accustomed to defining ourselves in relation to, and thinking of the people in “our group” as somewhat distinct from “others.” Ms. Kohn continues, “On top of this, like a giant living being, society has its own historical and collective perceptions about which of these groups usually fall in the in-group and which fall out. This is where the very meaningful, albeit complicated and sometimes even annoying, concept of ‘privilege’ comes in—the idea that certain identities and thus certain groups are inherently favored and advantaged in the broader norms and systems of our society. That’s how you end up with a dynamic where, in spite of the fact that women make up more than half the US population and more than half of US voters, more than 80% of those elected to Congress are men. We all ingest and imitate society’s in-group and out-group biases.”

The ultimate attribution error gets a powerful assist from another of the fundamental psychological habits of hate: essentialism, the tendency to generalize wildly about people, especially those we lump into out-groups. Essentialism is the belief that everyone within a group shares the same characteristics or qualities, generalizations we’re especially likely to make—and assume are fixed—about out-groups. David Livingstone Smith, in his book “Less Than Human,” writes, “Essences are imagined to be shared by members of natural kinds, kinds that are discovered rather than invented, real rather than merely imagined and rooted in nature.” To which Ms. Kohn responds, “But that’s a myth. The distinctions between us are largely not ‘natural’ but created. We define and demean ‘others’ in large part because of society’s biases, all of which harden into negative and unyielding judgments about others that shape the rest of our perceptions. And this, I learned, is the core of prejudice and discrimination.”

The big question is how to converse with people of differing beliefs or political persuasions. Ms. Kohn has a handy tool, taught to her by Matt Kohut and John Neffinger, authors of the book “Compelling People.” The problem that many of us have, HM included, is that when someone says something we think is wrong, we are tempted to argue, “No, you’re wrong, and let me explain the three reasons why!” Ms. Kohn uses neuroscience to explain why this is not going to be productive. We know from neuroscience that while we need to use our frontal lobes to engage in a reasoned discussion—and to be open to persuasion—when we perceive an argument coming, our frontal lobes shut down and the fight-or-flight part of our brain turns on (the part of the brain that also holds our biases and stereotypes). To keep the possibility of persuasion open, we have to stay conversational.

We need to remember the acronym ABC, which stands for:

Affirm. First you find a feeling that you can genuinely affirm. So if the person says they are afraid of “x,” and you genuinely share that concern, say so. You have to mean this; you must authentically agree on this point.

Bridge. This does not stand for “but” or “however.” A bridge is a way of saying “and.” We can just say “and,” or “that’s why,” or “actually,” or “the thing is,” or even “the good news is.” You are trying to build a means of getting to …

Convince. This is where you say whatever you were inclined to say in the first place.

It is clear that in many, if not most, situations it will be impossible to do this. In that case, just let the point go. Arguing your point is highly unlikely to be successful, and the risk of a heated argument developing that increases enmity is high. If prevailed upon to give our opinions, it is important to be polite and respectful. In other words, to be the antithesis of Donald Trump.

We’ve Finally Seen How the Sleeping Brain Stores Memories

December 29, 2017

The title of this post is identical to the title of a post by Jessica Hamzelou in the 7 October 2017 issue of the New Scientist. To do this research, the team needed to find volunteers who were able to sleep in an fMRI scanner. They needed to scan 50 people to find the 13 who were able to do so. These volunteers were taught to press a set of keys in a specific sequence. It took each person between 10 and 20 minutes to master the sequence.

Once they learned this sequence, they each put on a cap of EEG electrodes to monitor the electrical activity of their brains and entered an fMRI scanner, which detects which regions of the brain are active.

There was a specific pattern of brain activity when the volunteers performed the key-pressing task. Once they stopped, this pattern kept replaying in their brains as if each person was subconsciously reviewing what they had learned.

The volunteers were then asked to go to sleep, and they were monitored for two and a half hours. At first, the pattern of brain activity continued to replay in the outer region of the brain called the cortex, which is involved in higher thought.

When the volunteers entered non-REM sleep, which is known as the stage when we have relatively mundane dreams, the pattern started to fade in the cortex, but a similar pattern of activity started in the putamen, a region deep within the brain (eLife, doi.org/cdsz). Shahabeddin Vahdat, the team leader at Stanford University, said that the memory trace evolved during sleep.

The researchers think that movement-related memories are transferred to deeper brain regions for long-term storage. Christoph Nissen at University Psychiatric Services in Bern, Switzerland, says, “This chimes with the hypothesis that the brain’s cortex must free up space so that it can continue to learn new information.”
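
For readers curious what it means for a pattern of brain activity to “keep replaying,” here is a minimal, hypothetical sketch of the general idea: correlate a task-evoked activity template with each timepoint of post-task data and flag high-similarity moments. Everything here is invented for illustration (the data, the dimensions, the 0.5 threshold); the actual eLife study used far more sophisticated multivariate fMRI and EEG methods.

```python
# Illustrative sketch only: detecting "replay" of a task-evoked
# activity pattern in post-task data via template correlation.
# Synthetic data; not the methods of the actual study.
import numpy as np

rng = np.random.default_rng(1)
n_features = 200                            # hypothetical recorded features
template = rng.standard_normal(n_features)  # task-evoked activity pattern

# Simulated post-task recording: mostly noise, with the template
# re-emerging ("replaying") at a few known time points.
n_timepoints = 500
rest = rng.standard_normal((n_timepoints, n_features))
replay_times = [100, 250, 400]
for t in replay_times:
    rest[t] += 2.0 * template

def pattern_similarity(activity, template):
    """Pearson correlation between each timepoint's pattern and the template."""
    a = activity - activity.mean(axis=1, keepdims=True)
    b = template - template.mean()
    return (a @ b) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b))

sim = pattern_similarity(rest, template)
detected = np.where(sim > 0.5)[0]  # arbitrary threshold for this sketch
print("Replay-like events near timepoints:", detected)
# With 200 features, chance-level correlations hover near zero, so the
# injected replay moments stand out well above the threshold.
```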

Progress Making Higher Education More Affordable

July 22, 2012

I was heartened by a short piece in Newsweek1 that addressed some concerns I raised in the Healthymemory Blog Post, “A Solution to the Excessive Cost of a Higher Education.” According to the National Center for Public Policy and Higher Education, the costs of a higher education have skyrocketed 450 percent in the past 25 years. As I argued in my blog post, the proper use of technology should have decreased, not increased, the costs of a higher education.

Apparently, two professors of computer science at Stanford University, Daphne Koller and Andrew Ng, agree. They believe that the Internet should allow millions of people to receive first-class educations at little or no cost. They have launched Coursera, www.coursera.org, to make courses from first-rate universities available online at no charge to anyone. They offer full courses, including homework assignments, examinations, and grades. Go to the website to view the wide range of course offerings. It is worth noting that the professors are not paid. So kudos to these professors, who place education first and realize the potential of the Internet.

Ng and Koller made a class available at no cost online. The class, in machine learning, drew more than 100,000 enrolled students, 13,000 of whom completed the course. This result impressed not only Ng and Koller, but also such venture-capital firms as Kleiner Perkins Caufield & Byers and New Enterprise Associates, which together have invested $16 million in Coursera.

Providing free education is one matter, but as was pointed out in the healthymemory blog post, the money comes from the granting of degrees. The following is taken from the Coursera Website.

“…This Letter of Completion, if provided to you, would be from Coursera and/or from the instructors. You acknowledge that the Letter of Completion, if provided to you, may not be affiliated with Coursera or any college or university. Further, Coursera reserves the right to offer or not offer any such Letter of Completion for a class. You acknowledge that the Letter of Completion, and Coursera’s Online Courses, will not stand in the place of a course taken at an accredited institution, and do not convey academic credit. You acknowledge that neither the instructors of any Online Course nor the associated Participating Institutions will be involved in any attempts to get the course recognized by any educational or accredited institution. The format of the Letter of Completion will be determined at the discretion of Coursera and the instructors, and may vary by class in terms of formatting, e.g., whether or not it reports your detailed scores or grades in the class, and in other ways.”

In my view they are not addressing this issue in a satisfactory manner. Some ideas regarding how to do so are offered in the healthymemory blog post.

1Lyons, D. (2012). Cheaper Than Harvard: An Ivy League Education Online—For Free. Newsweek, 14 May, p. 13.

© Douglas Griffith and healthymemory.wordpress.com, 2012. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.