Posts Tagged ‘New Scientist’

Can Science Survive in a Democracy?

April 22, 2017

This post is motivated by an article in the Comments section of the 22 April 2017 edition of the New Scientist by Dave Levit titled “Marchers, raise your banners for the tortoise pace of progress.”  The referenced march is the March for Science taking place today, April 22.  His article begins, “The March for Science reflects the growing gap between slow, steady, vital scientific gains and quick-fire opportunist US politics.  A week is a long time in politics.  Science, however, is in it for the long haul.  Whether studying rising sea levels or isolating proteins in fruit fly nerve cells so that many years down the line we might have a new drug for Parkinson’s, science does not fit with the day-to-day fixed-term imperatives of government.”

Politicians back fracking ventures that quickly create jobs, but talk down the risks of long-term pollution.  They take credit for the progress made in renewable energy, ignoring the decades of work underlying this progress.  Levit continues, “The slow march of scientific progress does not match well with politics even on a good day.  And today is not a good day.”

The science community has been shocked by the preliminary budget outlines from Donald Trump.  From the National Institutes of Health (NIH) to NASA’s earth science mission, science would get a buzz cut.  This makes perfect sense for Donald Trump.  Levit writes, “the impulsivity and lack of long-term thinking that places science at odds with politics seems less a feature and more a tenet of Trump’s view.   Why fund the NIH properly, helping to produce the medical advances of 2030, when he can’t see past his next tweet? If politics couldn’t handle science’s tortoise pace years ago, it should be no surprise to see this disdain reach a new peak in a faster moving age.”

This March is one day aimed at making people understand how unimportant one day actually is.  March participants are simply trying to drum up greater appreciation for evidence, scientific rigor, methodology, and expertise.  The march of science is one of slow, steady, incremental progress.

Trump’s proposed cuts would have an immediate effect—less government spending.  But their long-term outcomes, such as delayed development of life-saving drugs or failure to prevent seas from rising to swallow Miami, apparently carry little weight for many elected officials.

Levit notes that there is a chance cuts will accelerate the pace of impacts until it becomes impossible to ignore them, even though some of the damage would be irreversible.

It remains to be seen whether the March can wake us up before that happens.

Let us hope that it does wake up Congress.

We Dream Much More Than we Know

April 21, 2017

This is the conclusion from a News piece written by Chelsea Whyte in the 15 April 2017 issue of the New Scientist.  A new way to detect dreaming has confirmed that it doesn’t occur only during rapid eye movement (REM) sleep, and has shown why we don’t often remember our dreams.

Tore Nielsen at the University of Montreal says, “There is much more dreaming going on than we remember.  It’s hours and hours of mental experience, and we remember a few minutes.”  Low-frequency brainwaves are detectable across the brain.  Francesca Siclari and her colleagues at the University of Wisconsin-Madison have discovered that a decrease in these waves in an area at the back of the brain is a sign that someone is dreaming.  She says, “This zone was a little bit more awake, showing high-frequency brainwaves more common during wakefulness.  This one region seems to be all that’s necessary for dreaming.”

Siclari and her team used EEG caps to map the brain activity of 32 people while they slept.  They woke the sleepers when their brains showed various patterns of brainwave activity, and asked them if they had been dreaming.  Some participants reported having dreams with a narrative structure, while others were more impressionistic.  One had a dream about reporting a dream.

There was such a strong correlation between dreaming and fewer low-frequency waves in the “hot zone” that the researchers could successfully predict whether a person was dreaming 91% of the time.
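The basic idea behind that prediction can be sketched as a simple threshold rule on low-frequency power.  The following is an entirely synthetic illustration, not the team’s actual analysis; the function name, threshold, and sample values are invented for the sketch:

```python
# Toy illustration of predicting "dreaming" from low-frequency EEG power.
# Threshold and data are hypothetical; the study's real method is statistical.
def predict_dreaming(low_freq_power, threshold=0.5):
    """Less low-frequency power in the posterior 'hot zone' -> predict dreaming."""
    return low_freq_power < threshold

# Hypothetical normalized power readings from four awakenings:
samples = [0.2, 0.9, 0.4, 0.7]
predictions = [predict_dreaming(p) for p in samples]
print(predictions)  # [True, False, True, False]
```

The point of the sketch is only that a single scalar signal, compared against a cutoff, can yield a yes/no dream prediction; the 91% figure comes from how often such a rule agreed with the sleepers’ own reports.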

The team found that dreams during REM sleep were linked to a rise in high-frequency brainwaves in areas that are active in waking hours.  The activity matched the brain areas that would have been active if the dreamers had been living out their dreams in real life.  The team found that the participants dreamed during 71% of their non-REM sleep in addition to 95% of their REM sleep.

Many dreams are forgotten.  Sometimes participants had a foggy idea that they had been dreaming, but couldn’t remember what about.  In a further experiment the team found that being able to later remember a dream was linked to higher activity during dreaming in the prefrontal cortex, which is associated with memory.  Siclari says, “The region for remembering the dream was different from the region having a dream.”

So dreaming is very important for our brains.  The previous posts on willpower have shown the importance of adequate sleep for effective mental functioning.  Yet both education and employment typically impose schedules that hinder sufficient sleep.  This issue needs serious public attention.

Journal reference:  Nature Neuroscience, DOI:  10.1038/nn.4545

© Douglas Griffith and, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Inside Knowledge: What’s Really Going On in the Minds of Animals

April 11, 2017

The title of this post is identical to the title of a Feature article by Michael Brooks in the Features section of the 1 Apr 2017 New Scientist.  The article begins, “Bright animals from chimps to crows know what they know and what others are thinking.  But when it comes to abstract knowledge, the picture is more mixed.”  Some qualifications need to be placed on “what others are thinking.”  There are definite limits, as we humans often have difficulty trying to know what our fellow humans are thinking.

The article also fails to note “The Cambridge Declaration on Consciousness.”  It begins as follows:
“On this day of July 7, 2012, a prominent international group of cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists and computational neuroscientists gathered at the University of Cambridge to reassess the neurobiological substrates of conscious experience and related behaviors in human and non-human animals. While comparative research on this topic is naturally hampered by the inability of non-human animals, and often humans, to clearly and readily communicate about their internal states, the following observation can be stated unequivocally:”

and concludes:
“The absence of neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

The full statement can be found at

Fortunately, the scientists here are neuroscientists, which gives the statement more gravitas than had it been made by psychologists.  But psychologists are involved in designing experiments to assess how much and what kinds of abstract knowledge can be achieved by different species.  And there is a long road of research ahead of them.  HM was much encouraged by this declaration, as he has long thought that dogs are man’s best friend, rather than men being man’s best friend, because dogs have the neurological substrates for love and loyalty, but lack the neocortex that allows for rationalization and deviousness.

There is a tendency to evaluate what animals know with respect to what humans know.  Sometimes this research seems to reflect an inferiority complex, aiming to show that there are things we can do that nonhuman species cannot.  Animals also need to be evaluated with respect to the capabilities of their own species and the environments in which they operate.

We need to consider species with respect to their sensory capacities.  Consider our best friend, the dog, for example.  The vision of most dogs is not that good, but their hearing is outstanding, and their sense of smell is extraordinary.  When we think of someone, we tend to see them in our mind’s eye.  However, when a dog thinks of a person, it is likely in terms of how that person smells.

Recent research has indicated that non-human species are more like humans than has traditionally been thought.  This research is to be applauded.  We look forward to what we’ll learn from future research, but it should go beyond what they can do compared with what we can do.

Inside Knowledge: Why Knowing Thyself is the Hardest Thing

April 10, 2017

The title of this post is identical to a Feature article by Anil Ananthaswamy in the 1 April 2017 issue of the New Scientist.  As Anil writes, we are within ourselves, so any attempt to build a full picture is fraught with our own cognitive biases and problems of self-reference.  Moreover, a big part of our self-perception is tied up with how others see us, yet we can never know the biases that cloud their perception.

Philosophical investigation and scientific observation of human behavior allow us to delineate the question of what the self is a little more sharply.  There are several ways of doing this.

There is the phenomenal self.  This corresponds to our sense of existing, and of there being a distinct entity in our mind that experiences this existence.  The self is very real to each of us: it’s a sense of being a body situated in the here and now, and also of being a person existing over time.  Unfortunately, this is not always a reliable source of true knowledge.  There is a rare neurological disorder, Cotard’s syndrome, in which the individual has the distinct and disturbing experience of non-existence—a subjective self-knowledge clearly at odds with the truth.  There are also people who feel that parts of their bodies, say particular limbs, are not theirs.  And when we dream we have a robust sense of self while being completely deluded about who and where we are.

The epistemic self is a more sophisticated type of self-knowledge.  The epistemic self is a sense of self that knows it knows.  The epistemic self is aware of the working of the phenomenal self, and can make us more aware of our motivations.  It is a new way of relating to oneself.

Imagine you are sitting in a mind-numbing meeting and start fantasizing about an exotic vacation.  Your phenomenal self wanders with you into this dream world, but as you snap back to the reality of your meeting and become aware you’ve been daydreaming, your epistemic self flashes into action, only to disappear again as your mind focuses (or wanders) once more.

The aim of mindfulness and meditation is to enhance the epistemic self.  Doing so gives us greater mental autonomy, “the capacity to stop or better control what we’re thinking, feeling, and doing.”  There are many healthy memory posts on mindfulness and meditation.

Inside Knowledge: The Maximum Any One Person Can Ever Know

April 9, 2017

The title of this post is identical to the title of a Feature article by Sean O’Neill in the 1 April 2017 issue of the New Scientist.  The preceding post explained why Homo sapiens will never know everything.  This post estimates the maximum any one person can know.

A human brain has approximately 100 billion neurons connected in labyrinthine ways by 100 trillion synapses.  According to a 2015 estimate from the Salk Institute, this amounts to an information storage capacity measured in petabytes, which are millions of gigabytes.  In comparison, the Large Hadron Collider, the particle smasher at CERN, pumps out some 30 petabytes of data in just one year.  A recent paper published jointly by researchers using the collider credited 5000 people with producing and analyzing the data.
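To see how such a petabyte figure arises, and how sensitive it is to assumptions, here is a small back-of-envelope unit conversion.  The per-synapse bit counts below are illustrative guesses for the sketch, not the Salk Institute’s actual methodology:

```python
# Convert a per-synapse information estimate into whole-brain storage.
SYNAPSES = 100e12   # ~100 trillion synapses, as quoted above
PB = 1e15           # bytes per petabyte

def capacity_pb(bits_per_synapse):
    """Total storage in petabytes for a given bits-per-synapse assumption."""
    return SYNAPSES * bits_per_synapse / 8 / PB

# The estimate swings by orders of magnitude with the assumption:
for bits in (1, 10, 100):
    print(f"{bits:>3} bits/synapse -> {capacity_pb(bits):.4f} PB")
```

Only at around 100 bits per synapse does the total reach the petabyte range quoted above, roughly a couple of weeks of the LHC’s 30-petabyte annual output, which shows how much the headline number depends on what one assumes each synapse stores.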

Of course, creating knowledge is about a lot more than assimilating data.  Our brains are not empty petabyte memory sticks.  As O’Neill notes, if yours were, you would send it back to the shop, disappointed in its slow upload rate.

How much an individual brain can hold is moot, as we have never filled one up.  We reach a time limit before we reach a processing limit.  Hyperpolyglot Alexander Arguelles is already competent in over 50 languages.  He says, “Give me total freedom of time…and I could conceivably do 100 languages.”  O’Neill notes that this would be at the expense of everything else.

Cesar Hidalgo of the Massachusetts Institute of Technology has dubbed the amount a person can realistically learn in a lifetime a personbyte.  He notes that the knowledge you would need to throw a beautiful clay pot is less than 1 personbyte.  But if you want to build an F-22 Raptor fighter jet complete with on-board missile-guidance systems, you’re going to need many thousands of personbytes.

O’Neill optimistically concludes that “we should not let our brain’s meagre bandwidth get us down.  And if the amount and complexity of human knowledge has increased over time, so the means of acquiring it have steadily improved too, with spoken knowledge, written language, the printing press and now the internet.  In that profusion of information, the barrier to progress lies not in the quantity of knowledge our brains can hold, but in its quality.”

Although what O’Neill writes is true, especially in putting the emphasis on the quality of knowledge instead of the quantity, we still need to be humble about how much we think we know.  HM writes “think we know” because we can never be sure of what we know.  Regardless of the technology, there are biological limits to the rate of knowledge acquisition and to the capacity of short-term memory.  So we need to walk and talk humbly.

Inside Knowledge: Why We’ll Never Know Everything

April 8, 2017

The title of this post is identical to the title of an article by Richard Webb in the Features section of the 1 Apr 2017 New Scientist.  There are many limitations, but let us just consider computational power.  One can argue that computational power is only a temporary limitation.  Webb notes, however, that however powerful we make them, computers rely on human input to program them.

Webb goes on to comment, “…human thought is a glorious, uproarious, complex mess.  Statements like ‘this statement is false’, hating someone yet loving them, and yes, that small-yet-large jumbo shrimp, both compute and do not compute.”  Panofsky, an information scientist at the City University of New York, says, “language is an expression of the mind, and my mind is full of contradictions.”

This flexibility allows us to think creatively, while remaining firmly grounded.  Webb says that “because we are predicated on contradiction, we see contradiction everywhere.”  But “the defining feature of reality is that it admits no contradiction.  Quantum objects apparently act as waves or as particles depending on how we choose to measure them.”  Physicist Richard Feynman called this confusing duality “the only mystery” of the quantum world.  Webb conjectures, “In all probability, the basic building blocks of reality are neither wave nor particle, but something else entirely.  It’s just something that we lack the experience or cognitive ability to express.”

When HM was a naive undergraduate he did not want to waste time on philosophy courses where questions were raised, solutions were presented and argued about, but resolution, or general agreement, was never achieved.  So he took courses in symbolic logic where, he thought, definitive conclusions could be reached.  Logic and mathematics are supposedly a cleaner, neutral language for a trained brain to describe in abstract terms what it cannot visualize.  What HM learned in symbolic logic was that there were logical limitations on both logic and mathematics.

For example, there is the well-known injunction that you should never divide a number by zero.  If you do, you can begin to do things like prove that 1 = 2.  This can’t be allowed if mathematics is the language of a flawless universe.  Panofsky says, “if you want mathematics to continue without contradiction then you have to restrict yourself.”
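The classic version of that fallacy, given here as a standard illustration rather than something taken from the article, hides the division by zero in the step from the fourth line to the fifth, since if a = b then a − b = 0:

```latex
\begin{align*}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b \qquad \text{(invalid: both sides were divided by } a-b=0\text{)} \\
2b &= b \\
2 &= 1
\end{align*}
```

Every step except the fifth is legitimate algebra, which is exactly why the restriction Panofsky describes is needed: permitting that one forbidden operation lets contradiction infect the whole system.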

Kurt Gödel showed in the 1930s that any system of logic containing the rules of arithmetic is bound to contain statements that can be neither proved nor disproved.  It will remain “incomplete”, trapped in the same inconsistency as we are.  Gödel’s incompleteness is a mathematical expression of the logical-illogical statement “this statement is false.”  So there is no way for anything, be it a simple sentence, a system of logic, or a human being, to express the full truth about itself.

Webb continues, “This problem of self-reference is endemic.  Gödel’s contemporary Alan Turing showed that you cannot ask a computer program in advance whether it will run successfully.  Quantum mechanics sprouts paradoxes because we are part of the universe we are trying to measure.”

And Webb concludes, “So the sobering truth is that we can build the most powerful telescopes, microscopes and computers we want, but we will never overcome the limitations of our minds.  Our perspective on reality will always be skewed because we—and the jumbo shrimp—are part of it.”

Inside Knowledge: How to Tell Truth from Lies

April 7, 2017

The title of this post is identical to the title of an article by Tiffany O’Callaghan in the Features section of the 1 Apr 2017 New Scientist.  The article notes that there are hardcore relativist philosophers who argue that there’s no such thing as objective truth that exists outside our minds.  This is absurd.  What they might intend, and what HM thinks, is that we shall never fully grasp objective truth (see the immediately preceding post, which follows this one in blog order).  Science is a systematic method for achieving an increasingly better understanding of objective truth.  The risk in believing that one has objective truth is the same as the risk of holding beliefs with certainty: they blind us to other, better options.

Very often, in both science and math, simplifying assumptions are made to make a research problem tractable.  These simplifying assumptions are necessary and bring us closer to objective truth.  However, it must always be remembered that the results were obtained using simplifying assumptions.

Unfortunately, we live in a world in which there are businesses devoted to making lies (see the healthy memory blog post, “Lies, Inc.”).

Steve Sloman says that as individuals, we hardly know anything.  “But most of us do very well, and as a society we create incredible things.  We sent a person to the moon.  How is this possible?  Because of the knowledge of other people.”

The article presents the following advice for treading the fine line between healthy skepticism and destructive cynicism.  First, think critically and assess the credentials, track record and potential bias of the sources we rely on.  As Peter van Inwagen puts it, ask, “If someone is telling me this, what motives could that person have for wanting me to believe that, other than that it’s true?”

We should ask: how do we know?  How do they know?  We need to ask ourselves whether our reaction to new knowledge is rooted in something trustworthy or something else, like wishful thinking.  For those who do not believe in global warming in spite of scientific evidence, believing might require them to do a certain amount of rather inconvenient stuff, stuff that would have financial costs, so they would rather not believe than start to make the sacrifices we would all have to make.

Inside Knowledge: What Makes Scientific Knowledge Special

April 6, 2017

The title of this post is identical to an article by Michael Brooks in the Features section of the 1 Apr 2017 New Scientist.  The article begins, “NULLIUS in verba: ‘take nobody’s word for it’.”  This is the motto of the Royal Society, the UK’s national academy of science, and it encapsulates the spirit of scientific inquiry.  The article continues, “Thanks to what science tells us about human physiology, the universe’s history, nature’s forces and Earth’s geology, flora and fauna, we know the earth isn’t flat, the universe is nearly 14 billion years old, and that there are no dragons or unicorns.  We live longer and in more comfort, and can send space probes to the edge of the universe.”

But there are people who still contend the earth is flat.  Other people say the universe is 6000 years old.  Still others doubt the theory of evolution by natural selection.  And there are people who question the reality of human-made climate change.  Unfortunately, some of these people are in positions of power, like Donald Trump and his appointees.  For this, HM apologizes to the rest of the world.  However, a majority of American voters did not vote for Trump.  Trump did not win the popular vote; he was elected by an electoral college, an institution developed to deny the principle of one citizen, one vote.

What is worse is that there is an industry devoted to publishing and promoting scientific lies (see the healthy memory blog post, “Lies, Inc.”).  It needs to be understood that the scientific facts cited above could change.  Science is always an approximation of the truth.  Absolute truth is a destination we will likely never reach.  But to change science, experiments that produce data are required.  And there must be a means of disproving scientific theories.  There must be a way of disproving creationism, or it is not a scientific theory.  And there are arguments that question human-made climate change.  Unfortunately, some of these arguments come from Lies, Inc.  However, to be fair, there are scientists who question not the effects of humans on climate change, but the rate at which these effects are taking place.  In this case, deference goes to the majority of research, which argues that climate change is real and is progressing at an alarming rate.

Philosopher Edward Hall of Harvard says, “Authority in science is earned—at least, when a scientific community is functioning well—by predicting and more generally at analyzing empirical phenomena.”

Inside Knowledge: Why We Like to Know Useless Stuff

April 5, 2017

This blog post has the same title as a Feature article by Daniel Cossins in the 1 Apr 2017 New Scientist.  The author notes that knowledge is more than just information.  “Even the nematode worm Caenorhabditis elegans, the owner of one of the smallest brains we know, forages to maximize information about its environment, and so its chances of staying alive and reproducing.”  This is the typical analysis offered by scientists.  It can seem as if the entire point is for species to evolve and reproduce.  However, evolution offers the hope that we can evolve into something better.  Homo sapiens as a whole is quite depressing.  We seem to be preoccupied with warfare and have developed weapons that could lead to our own extinction.  However, there are some members of our species who espouse transcendental values, which leads to the hope that we might become something better.

Cossins writes, “The precise details of how we first came to love knowledge may always elude us.  But it is easy to see how it would have spurred our success as individuals and as a species, furnishing us with the tools—often literally, if you think of cutting blades or fire—to survive and prosper.”

So the argument is that we are addicted to knowledge because it has served us so well in the past.  It still does today, in everyday life as well as at the frontiers of technological progress.  The term “infovores” has been used to describe this propensity (enter “infovores” into the search block of the healthy memory blog).

Abraham Flexner, the founder of the Institute for Advanced Study (IAS) at Princeton, pointed out in a 1939 essay, “The Usefulness of Useless Knowledge,” that radio communications and all that came with them weren’t ultimately down to the inventions of Guglielmo Marconi.  They were down to James Clerk Maxwell and Heinrich Hertz, scientists who worked out the basics of electromagnetic waves with no practical objectives.

The current director of the IAS, Robbert Dijkgraaf, has written a companion essay to a reissue of Flexner’s original.  He wrote, “The theory of general relativity is used every day in our GPS systems, but it was not the reason Einstein solved it.”
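Dijkgraaf’s GPS point can be made concrete with a rough back-of-envelope calculation, sketched below with standard textbook constants (this calculation is not from the article): a GPS satellite’s clock runs fast relative to the ground because it sits in weaker gravity, and slightly slow because of its orbital speed, and the net drift is large enough to wreck positioning within hours if uncorrected.

```python
import math

# Approximate daily relativistic clock drift for a GPS satellite.
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
C = 2.99792458e8         # speed of light, m/s
R_EARTH = 6.371e6        # mean Earth radius, m
R_ORBIT = 2.6571e7       # GPS orbital radius (~20,200 km altitude), m
DAY = 86400.0            # seconds per day

# General relativity: weaker gravity at altitude makes the clock run fast.
grav = GM / C**2 * (1.0 / R_EARTH - 1.0 / R_ORBIT)

# Special relativity: orbital speed makes the clock run slow.
v = math.sqrt(GM / R_ORBIT)          # circular-orbit speed, ~3.9 km/s
vel = -v**2 / (2.0 * C**2)

net_us_per_day = (grav + vel) * DAY * 1e6
print(f"Net satellite clock drift: {net_us_per_day:.1f} microseconds/day")
# ≈ +38 microseconds/day; at the speed of light that is kilometres of
# position error accumulating every day without the relativistic correction.
```

The gravitational term (about +46 μs/day) dominates the velocity term (about −7 μs/day), which is why GPS satellite clocks are deliberately tuned slow before launch.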

The problem is that too many people reject science.  Some reject science on the basis of religious texts.  Others are fundamentally ignorant.  What is most depressing is the leader of the United States rejecting scientific research.  For this HM apologizes.  Trump did not win the popular vote; he was chosen by the electoral college, an anachronism that denies the sacred principle of one citizen, one vote.

Inside Knowledge: What Separates Fact from Belief

April 3, 2017

The title of this post is identical to an article by Richard Webb in the Features section of the 1 Apr 2017 New Scientist.  HM answers this question by saying that it is the degree of belief.  Research has indicated that, absent any indications to the contrary, when we hear or see a fact, the default is to believe it.  When supposed facts are heard or read that do not correspond to the individual’s belief system, a noticeable signal is recorded in the brain.  This indicates that System 2 thinking has been invoked, and the fact will either be rejected or postponed until further information and thinking can be brought to bear.  Kahneman terms System 1 intuition and System 2 reasoning.  System 1 is fast; that’s why it’s the default processing system.  System 2 is slow and requires further thinking.  System 2 is supposed to protect us from false beliefs.

At the turn of the 20th century there were many physical scientists who believed that practically everything that needed to be known about the physical sciences was known.  All that was needed was to add some more decimal places of precision.  Just five years later Albert Einstein published his Special Theory of Relativity.  Beliefs should always be subject to change and should never reach certainty.

Technology has developed at such a rate that there is an enormous number of facts to evaluate.  All of science, both physical and social, is producing facts that lay people do not have the knowledge to evaluate.  Moreover, there is a business of deliberately publishing false facts (see the healthy memory blog post, “Lies, Incorporated”).

The remainder of this post is motivated by the box titled “Where Knowledge Comes From”  at the bottom of the article.  One way of classifying knowledge is by how we acquire it.

Perceptual knowledge comes from our senses but involves significant processing by our brains.  Basically the brain builds models of the world using this information, but it must be appreciated that we do not have direct knowledge of the world.  The truth is that we infer it, and this knowledge changes as information grows.  Everyone should be familiar with perceptual illusions, in which the psychological interpretation does not agree with the physical representation.

Testimonial knowledge comes from other people and media.  Here belief should largely hinge on the source of the information.  Different sources have different biases, and these biases must be taken into consideration.  The credentials of the sources are of primary importance.  Whether there is scientific evidence for the facts is especially important.  Sources that contradict scientific data must be evaluated with skepticism.

Our inner sense, the awareness of our own feelings and states, such as pain and hunger, would appear to be highly credible, but sometimes we are out of touch with our senses.  Beliefs can actually greatly deaden pain in many cases (enter “placebos” into the search block of the healthy memory blog to learn more about their effectiveness).

Inferential knowledge goes beyond actual facts in assessing the credibility of facts, and in making inferences about facts.  Critical thinking is key here.

Beliefs can blind us to facts.  A good example of this is the problem of health care in the United States.  Health care in the United States is the most expensive in the world, yet health statistics in the United States approach those of the third world.  Every advanced country in the world has a national system of health care except the United States.  The reason for this is that the Republican party sees government as the problem and not the solution to health care.  But all other advanced countries have successful health care systems in which the governments play a central part.  The Affordable Care Act, frequently referred to as Obamacare, used the government to increase access to health care.  It was a small effort that fell far short of Obama’s goals.  Trump promised that Trumpcare would be much better than Obamacare.  Had he formulated an improvement over the Affordable Care Act, it would have been welcome.  However, the plan that was formulated fell woefully short of the Affordable Care Act, and was defeated.

Republicans trumpet the value of market forces in health care.  But back in 1963 Nobel Prize-winning economist Kenneth Arrow offered an explanation as to why markets do not work well in health care.  There is a huge mismatch of power and information between the buyer and seller.  For example, if a salesman tells us to buy a particular television, we can easily choose another or just walk away.  However, if a doctor insists we need a medication or procedure, we are far less likely to reject the advice.  Arrow also noted that people don’t think they need health care until they get sick, and then they need lots of it.

Beliefs are frequently compartmentalized, and this has adverse effects on inferential knowledge.  Here again the Republican Party and health care provide a good example.  It should be understood that both parties have religious beliefs, but Republicans are especially strong in their beliefs, which center on loving our neighbors and caring for the needy and sick.  Yet compartmentalization of Republican beliefs about the role of government blocked acting on their religious beliefs about caring for the sick, with the result of increased unnecessary suffering among their fellow human beings.

Beliefs are necessary, but they should never be absolute.  They are dangerous in that they can foreclose meaningful solutions to critical problems.  And they can hinder effective inferential knowledge.  A useful exercise is occasionally to try to ignore one’s beliefs and explore the ramifications of ignoring those beliefs.

© Douglas Griffith, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith with appropriate and specific direction to the original content.

Tim Berners-Lee Speaks Out on Fake News

March 30, 2017

The title of this post is identical to the title of an article in the Upfront section of the 18 March 2017 issue of the New Scientist.  Tim Berners-Lee is the creator of the World Wide Web.  He gave it to the world for free to fulfill its true potential as a tool that serves all of humanity.  So it is interesting to learn what he thinks as the web reaches its 28th birthday, which it did on 12 March.

Berners-Lee wrote an open letter to mark the web’s 28th birthday.  He wrote that it is too easy for misinformation to spread, because most people get their news from a few social media sites, and search engines prioritize content based on what people are likely to click on.

He also questioned the ethics of online political campaigning, which exploits vast amounts of data to target various audiences.  He wrote, “Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups.”  Is that democratic?

He also said that we are losing control of our personal data, which we divulge to sign up for free services.

Berners-Lee founded the Web Foundation, which plans to work on these issues.

Bill Gates’ Robot Tax Alone Won’t Save Jobs: Here’s What Will

March 10, 2017

The title of this post is identical to the title of an article by Sumit Paul-Choudhury in the 4 March  2017 issue of the New Scientist.   Bill Gates argued that we should raise the same amount of money by taxing robots as we would lose in payroll taxes from the humans they supplant.  Then this money could be directed towards more human-dependent jobs, such as caring for the young, old and sick.  EU legislators rejected just such a proposal due to lobbying efforts by the robotics industry.

The article makes the valid assertion that automation is the biggest challenge to employment in the 21st century.  Research has shown that far more jobs are lost to automation than to outsourcing.  Moreover, this will get worse as machines become ever more capable of doing human jobs—not just those involving physical labor, but ones involving thinking also.

The common argument dismissing fears of a robot revolution is that previous upheavals have always created new kinds of jobs to replace the ones that went extinct.  But previously, when automation hit one sector, employees could decamp to other industries.  However, the sweep of machine learning means that many sectors are automating simultaneously.  So perhaps it’s not about how many jobs are left after the machines are done taking their pick, but which ones.

The article suggests that the evidence so far is not very satisfying.  Consider the rise of the “gig economy,” in which algorithms direct low-skilled human workers.  Although this might be an employer’s dream, it is frequently an insecure, unfulfilling, and sometimes exploitative grind for workers.

The article argues that to stop this, it is employers who need to be convinced, not the people making the technology.  But it will be difficult to convince employers who have huge incentives to replace all-too-human workers with machines that never stop working and never get paid.

Although the article fails to mention this, there is the danger of extremely high unemployment, particularly among the well-educated and formerly  well-off.  There have been several previous healthy memory blog posts by HM in which he discusses the future he was offered in the 1950s.  In elementary school we were told that by today technological advances would vastly increase leisure time.  Bear in mind that in the 50s very few mothers worked.  Moreover, technology has advanced far more than anticipated.  So, why is everyone working so hard?  Where is this promised leisure?

Unfortunately, modern economies are predicated on growth.  They must grow, which requires people to purchase junk and to keep working.  These economies are running towards disaster.  People need to demand the leisure promised in the 50s.  Paul-Choudhury’s article does suggest that a business-friendly middle ground might be for governments to subsidize reductions in working hours, an approach that has fended off labour crises before.  HM thinks that Paul-Choudhury has vastly underestimated the dangers of job losses.  HM thinks that these losses are of a magnitude that will threaten the stability of society.  So the working week will need to be drastically shortened to 20 hours (see the Healthymemory blog post “More on Rest”).

There have been previous healthy memory blog posts on having a basic minimum income, which also will need to be passed.

The primary forces arguing for these changes are the risks of societal collapse.

However, people need to have a purpose (ikigai) in their lives.  They need to have eudaemonic, not hedonic, pursuits.  Eudaemonic pursuits build societies; hedonic pursuits destroy them.


US Scientists Can Look to Canada for Ways to Fight a Stupidity Pandemic

February 10, 2017

This post is based on an Insight Piece in the 4 February 2017 issue of the New Scientist titled “US scientists can look to Canada for ways to fight a crackdown.”  “Stupidity Pandemic” is a term used in prior healthy memory blog posts, and it has been substituted for “crackdown” as it accurately characterizes what is happening in the United States.

The article notes that George Orwell, the author of “1984” said that  “Freedom is the right to tell people what they do not want to hear.”  Empirical facts are especially unwelcome to a political establishment that wants to provide their own “alternative” facts.

Already during just his first week in office, Trump issued orders to gag scientists in federal agencies, and raised the possibility that political officials may now need to clear empirical findings before they can be published.  The Environmental Protection Agency was hit with a freeze on all contracts and grants.  All existing information published by the EPA would also be examined, and the release of new work put on hold pending possible case-by-case scrutiny.  Agency staff have also been barred from updating its social media accounts or talking to the press without clearance from the top.  Does this not have some of the flavor of 1984?

The Department of Health and Human Services was ordered not to communicate with external officials.  This proscription included members of Congress.  The Department of Agriculture reminded staff to get clearance before talking to the press, and its research division was told not to issue public statements.

The New Scientist article notes that this pattern of gagging and censoring scientists will have a familiar ring in Canada.  The conservative government of Stephen Harper, in power between 2006 and 2015, sacked more than 2000 fisheries and environmental scientists, and cut climate, Arctic, and air pollution research.

During this “war on science,” library journal collections were trashed and researchers reported being leaned on to soften politically sensitive conclusions.  Federally employed scientists were banned from speaking in public or to the press without permission, and this permission was often denied or delayed.  Government chaperones sat in on press interviews.  Some scientists learned not to speak up at all.  Climate stories all but vanished from the press.

Michael Oman-Reagan of Memorial University in St John’s, Canada, says, “The lesson from the Canadian war on science for US scientists is: speak out now, organize, stand in solidarity, be an activist, and resist.”

Some US scientists are doing this.  US scientists have started making additional precautionary backups of publicly funded environmental data sets.  A scientists’ march on Washington is in the works, and an action group is trying to get more scientists to run for public office.

George Orwell said keep restating the empirically obvious, because “the quickest way of ending a war is to lose it.”

The good news is that Canada managed to recover.  Let us hope that US citizens have the intelligence to rid the country of an anti-science, anti-truth government.

A Painful Reminder for Donald Trump of Why Torture is Pointless

February 9, 2017

The title of this post is identical to the title of a Comment piece in the 4 February 2017 issue of the New Scientist.  This article begins, “PRESIDENT Donald Trump says his nation should ‘fight fire with fire’ by using torture on terror suspects, insisting it works.”  The article ends, “The lesson for Trump is simple: fighting fire with fire burns down the neighborhood.”

The purpose of torture is similar to the purpose of much of science: to get reliable, replicable, and verifiable information.  Professional interrogators say torture is the worst possible method for this.  Torture fails utterly as a means of getting at the truth, especially compared with non-coercive investigative methods.  To be sure, torture gets the victim to respond, but why should the response be related to the truth?  In fact, the victim might not have the desired information, but if tortured enough, there will be a response.

Neuroscience agrees with the professional interrogators.  Imposing extremes of pain, anxiety, hunger, sleep deprivation and the threat of drowning does not enhance interrogation.  It degrades it.  This should not be surprising.  Behind the wheel of a car, even mild states of sleep deprivation are as risky as being drunk.  Reactions are slowed, judgement is impaired, and recollection is damaged.  The torturer hopes that enough residual function is unaffected so that intelligence can be gathered. However, the result is that people say whatever is needed to make the torture stop.

The article asks, what’s the alternative?  It is to talk because humans like to talk.  It is estimated that 40% of what we say to other people consists of self-disclosure.  Brain imaging shows that during self-disclosure, the brain’s reward system is activated.  We like talking about ourselves.

The legendary German interrogator Hanns-Joachim Scharff debriefed more than 500 Allied airmen during the Second World War.  He never used coercion, but cross-checked information carefully.  He never asked a direct question and never indicated any interest in any answer he received.  He was adept at taking the pilots’ perspective and actively listening.  The article notes, “these skills can be learned and are not so different from the skills of a highly trained doctor.”

Ikigai Cuts the Risk of Alzheimer’s by Half

February 7, 2017

This finding comes from an article in the 28 January 2017 issue of the New Scientist by Teal Burrell titled “A meaning to life: How a sense of purpose can keep you healthy.”  Ikigai is the Japanese word for having a purpose in life.  Ikigai also helps prevent heart attack (27%) and stroke (22%), enables people to sleep better, have better sex, and live longer, and cuts the risk of Alzheimer’s by more than half, according to a study by Patricia Boyle and her colleagues at the Rush Alzheimer’s Disease Center.

Burrell quotes Nietzsche, “He who has a why to live can bear almost any how.”  Burrell gives the Austrian psychiatrist Viktor Frankl, who survived four Nazi concentration camps, credit for pioneering the study of how purpose influences our health.  We encountered Viktor before in a healthy memory blog post titled “Another Quote Worth Pondering.”  That quote was “Everything can be taken from a man but one thing: the last of the human freedoms—to choose one’s attitude in any given circumstance, to choose one’s own way.”

The critical reader might well ask, how do we know about the benefits of having a purpose in life?  A more parsimonious explanation might be that purposeful people exercise more or eat better.  However, over the past ten years the findings about the health benefits have been remarkably consistent.  Studies have revealed that alcoholics whose sense of purpose increased during treatment were less likely to resume heavy drinking six months later, that people with higher purpose were less likely to develop sleep disturbances with age, and that women with more purpose rated their sex lives as more enjoyable.  Victor Strecher, a public health researcher at the University of Michigan, found that these findings persist “even after statistically controlling for age, race, gender, education, income, health status and health behaviors.”  Strecher is the author of the book “Life on Purpose.”

A study of 7000 middle-aged people in the US found that even small increases in sense of purpose were associated with big drops in the chances of dying over a period of 14 years.  An analysis of more than 9000 English people over 50 years old found that, after adjusting for things like education, depression, smoking, and exercise, those in the highest quartile of purpose had a 30% lower risk of death over nearly a decade compared with those in the lowest quartile.

Some might argue that this sense of purpose is confounded with wealth.  However, a 2007 Gallup poll of 141,000 people in 132 countries found that even though people from wealthier countries rate themselves higher on measures of happiness, people from poorer countries tend to view their lives as more meaningful.  Shigehiro Oishi of the University of Virginia suspects this is in part because people in developing countries have more concrete things to focus on.  He says, “Their goals are clearer perhaps: to survive and believe.  In rich countries, there are so many potential choices that it could be hard to see clearly.”

Another explanation could be in terms of religious faith.  Oishi’s study found that nations with the highest ratings of meaningful life were also the most religious.  And religious people do tend to report having more purpose.  However, efforts to disentangle the two have revealed differences.  For example, religiosity does not predict a lower risk of heart attack or stroke.

Steven Cole of the University of California at Los Angeles says, “If people are living longer, there’s got to be some biology underpinning it.”  Cole has spent years studying how negative experiences such as loneliness and stress can increase the expression of genes promoting inflammation, which can cause cardiovascular disease, Alzheimer’s, or cancer.

Cole has examined the influence of well-being.  He has focused on two types of well-being: hedonic, from pleasure and rewards, and eudaemonic, from having a purpose beyond self-gratification.  Participants were measured by having them note down their well-being over the previous week: how often they felt happy (hedonic), or that their life had a sense of direction (eudaemonic).  Scoring highly on one often meant scoring highly on the other, and both correlated with lower levels of depression, but they had opposite effects on gene expression.

People with higher measures of hedonic well-being had higher expression of inflammatory genes and lower expression of genes for disease-fighting antibodies.  It was just the opposite for people scoring highest on eudaemonia, who had lower expression of inflammatory genes and higher expression of genes for disease-fighting antibodies.  Cole suspects that eudaemonia, with its focus on purpose, decreases the nervous system’s reaction to sudden danger, which increases heart rate and breathing and produces surges of adrenaline.  Over-activation of this stress-response system causes harmful inflammation.  Cole says there may be something saying “be less frightened, or less worried, anxious or uncertain.”

An alternative, but not mutually exclusive, theory for how purpose could affect biology is by preserving the telomeres, the caps on the chromosomes that protect DNA from damage but shorten with age and stress.  Research on stress reduction through meditation has found that it could defend telomeres.  Close analysis showed that the benefit was down to a change in sense of purpose, not the meditation directly: the greater a person’s purpose became, the more of the protein telomerase they had to protect their telomeres.

Of course, a key question is how people can boost their sense of purpose if it is lacking.  The article suggests several strategies.  Meditation can have an effect.  Eudaemonic well-being is strengthened by carrying out random acts of kindness.  Cole has found that having a purpose that benefits others may be particularly helpful.

Strecher recommends setting a different purpose for each of four domains in life—family, work, community and personal—and acknowledging that your focus will shift among them over time, and that the goals themselves can shift too.

Dolores Gallagher-Thompson has found that cognitive behavioral therapy can promote meaningfulness.  She encourages patients to consider their legacy and how they might provide a good example for children and grandchildren.


Donald Trump and Climate Change

January 25, 2017

It is not surprising that the “New Scientist” is alarmed by the presidency of Donald Trump as a threat to science and critical thinking.  The 21 January 2017 issue of the New Scientist offers 4 articles on the potential threats of a Trump presidency.  It could have offered many more articles, and perhaps it will.  Two of the four published articles will be shared in healthy memory blog posts.  The preceding post was the first.  This post is the second.

This article is titled “Resisting Trump: How scientists can fight a climate witch-hunt.”  Donald Trump has argued that global warming is a hoax created by China to damage US manufacturing.  As president-elect, he chose a climate change denier to head the Environmental Protection Agency, and his pick for the helm of the energy department (DOE) is Rick Perry, who once suggested dismantling it.  If carbon dioxide emissions rise faster as a result, the consequences for the global climate will be dire.  “We can’t take a four-year break,” says Marcia DeLonge at the Union of Concerned Scientists (UCS) in Washington, D.C.

Moreover, a Trump presidency won’t just be a problem for climate change.  It could also spell trouble for the scientists trying to stave it off.  The Trump transition team asked for a list of DOE employees and contractors who worked on climate change or had attended climate change meetings.  Correctly, the agency refused, but the incident sent a chill through the scientific community, particularly in light of the Republicans’ revival of the Holman rule, which allows specific federal employees to have their pay slashed to $1.

These fears of being targeted are legitimate.  Already there has been an uptick in Freedom of Information Act requests for scientists’ private emails, said Peter Fontaine, the lawyer who defended climate scientist Michael Mann in a case against the State of Virginia.  If such tactics also come from within their own agencies, federal scientists might leave en masse.

The director of science and policy at the Union of Concerned Scientists in Cambridge, Massachusetts, Peter Frumhoff, says this would permanently erode federal agencies’ ability to use science to inform public decisions.  He begs scientists not to leave, because if they leave, we will lose our ability to know what’s going on.

Even if they do stay, they may be forced to stop pursuing certain lines of research.  The Trump transition team suggested as much when it said NASA should shift its focus away from “politically correct environmental monitoring.”  Apparently, we are entering a new era of political management, “Management by Thuggery!”

Fears that data will be erased or altered have prompted crowd-sourced efforts to back up federal climate and environmental data.  Climate Mirror is a distributed volunteer effort supported by the Internet Archive and the Universities of Pennsylvania and Toronto.


Donald Trump and Nuclear Weapons

January 24, 2017

It is not surprising that the “New Scientist” is alarmed by the presidency of Donald Trump as a threat to science and critical thinking.   The 21 January 2017 issue of the New Scientist offers 4 articles on the potential threats of a Trump Presidency.   It could have offered many more articles, and perhaps it will.  Two of the four published articles will be shared in healthy memory blog posts.

One of these articles is titled “Resisting Trump: How his chaotic nuclear policy might play out.”  Trump has said that the US nuclear capability is broken.  That this nuclear capability can destroy the world many times over betrays his woeful ignorance of the topic.  Moreover, the United States is already modernizing its nuclear force, along with Russia.  Former defense secretary William Perry warns, “We seem to be sleepwalking into this new nuclear arms race.”  As planned, this modernization would deal the final blow to the tottering Nuclear Non-Proliferation Treaty.  Any testing of new weapons would kill the 1992 nuclear testing moratorium and the 1996 Comprehensive Test Ban Treaty.

This nuclear arms race could induce smaller nuclear powers to expand as well.  Moreover, Trump has encouraged additional countries to develop their own nuclear weapons.  And by abrogating the agreement with Iran, a new nuclear threat would soon emerge.  It is likely that Saudi Arabia, Turkey, and Egypt would then develop nuclear weapons.

The New Scientist does its best to give Trump the benefit of any doubts.  Trump says that he will stop Kim Jong-un’s nuclear threat.  Trump has said that he will talk with Kim.  The New Scientist article incorrectly states that talks worked before, halting North Korean weapons development in 1994 until their cessation let it resume.  The truth is that the North Koreans’ effort never ceased.  They continued their work in secret.

The article also mentions that Trump could take US missiles off their alert status.  This idea is especially relevant during the Trump presidency.  Trump does not control his emotions well.  He is childish in his responses to anything remotely resembling criticism.  What is worse, these responses are made quickly, without any time for reflection.  In any case, he should not be given the nuclear football until some safeguards are installed.  To think that the world could end because Trump felt his honor was impugned.


Keep Track of Your Body in Space

January 15, 2017

HM works from his iPad.  This is the print title of an article by Anil Ananthaswamy in the October 1 issue of the New Scientist.  The healthy memory blog has stressed the importance of the unconscious mind and provided suggestions as to how to make use of your unconscious mind.  This and the previous blog posts taken from this issue of the New Scientist elaborate on these ideas.

Proprioception is a much under-rated ability.  It is the result of unconscious processing and results from a constant conversation between the body and the brain, allowing us to know where our limbs are and what they are doing, and adds up to an unerring sense of a unified, physical “me.”

The brain predicts the causes of the various sensory inputs it receives — from nerves and muscles inside the body, and from the senses detecting what’s going on outside the body.  What we are aware of is the brain’s best guess of where the body ends and where the external environment begins.

In the famous rubber-hand illusion, a volunteer puts one hand on the table in front of him, and a rubber hand is put in front of him.  A second person then strokes the real and rubber hands simultaneously with a paintbrush.  Within minutes many people start to feel the touches on the rubber hand and even claim it as part of their body.  The brain makes its best guess as to where the sensation is coming from, and the most obvious option is the rubber hand.

Newer research suggests that this sixth sense extends to the space immediately surrounding the body.  Arvid Guterstam of the Karolinska Institute in Stockholm and his colleagues repeated the rubber-hand experiment, stroking the real hand but keeping the brush 30 centimeters above the rubber hand.  Participants still sensed the brush strokes above the rubber hand, implying that as well as unconsciously monitoring our body we keep track of an invisible “force field” around us.  Guterstam suggests this might have evolved to help us pick up objects and move through the environment without injury.

Make Decisions

January 14, 2017

HM works from his iPad.  This is the print title of an article by Caroline Williams in the October 1 issue of the New Scientist.  The healthy memory blog has stressed the importance of the unconscious mind and provided suggestions as to how to make use of your unconscious mind.  This and the following blog posts taken from this issue of the New Scientist elaborate on these ideas.

Ap Dijksterhuis of Radboud University Nijmegen in the Netherlands proposed this counter-intuitive idea 12 years ago.  He had found that volunteers asked to make a complex decision—such as choosing between different apartments based on a baffling array of specifications—made better choices after being distracted from the problem before deciding.  He reasoned that this is because unconscious thought can move beyond the limited capacity of working memory, so it can process more information at once.

Although his reasoning as to why unconscious thought might be superior is correct, the conclusion that important decisions should be based on unconscious thought alone is not only wrong, but dangerous.  Important decisions need to be reviewed by conscious thought before they are implemented.  In fact, there have been many healthy memory posts recommending saying “let me sleep on it” before any important decision is made.  This provides ample time for both conscious and unconscious processing.

Many think that unconscious processing is important for creativity, including HM.  As Dijksterhuis suggested, unconscious processing circumvents the constraints of working memory, primarily because there are no time constraints on unconscious processing, which can also occur while we’re sleeping.  Just taking a break from work can be quite helpful.


How to Make the Unconscious Conscious

January 13, 2017

HM works from his iPad.  This is the print title of an article by Caroline Williams in the October 1 issue of the New Scientist.  The healthymemory blog has stressed the importance of the unconscious mind and provided suggestions as to how to make use of your unconscious mind.  This and other blog posts taken from this issue of the New Scientist elaborate on these ideas.

Russell Hurlburt, a psychologist at the University of Nevada, Las Vegas, uses the following technique to make the unconscious conscious.  He asks volunteers to wear an earpiece linked to a beeper, which goes off at random intervals six times a day, prompting them to note their thoughts.  At the end of the day, Hurlburt conducts an hour-long interview to tease out what people were thinking and how.  After four decades of research, Hurlburt has concluded that most people have no idea of what is running through their minds, but that they can be taught to tune into it in just a few days.
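
For readers curious about the mechanics, the random-interval sampling behind the beeper can be sketched in a few lines of Python.  This is only an illustration of the idea, not Hurlburt’s actual method: the waking window (8:00 to 22:00), the six beeps, and the 30-minute minimum gap are assumptions chosen to make the sketch concrete.

```python
import random

def schedule_beeps(wake=8 * 60, sleep=22 * 60, n=6, min_gap=30):
    """Draw n random beep times (minutes after midnight) within waking
    hours, at least min_gap minutes apart.  The window and gap are
    illustrative assumptions, not parameters from Hurlburt's studies."""
    while True:
        # Sample n distinct minutes, then keep the draw only if all
        # consecutive beeps are far enough apart.
        times = sorted(random.sample(range(wake, sleep), n))
        if all(later - earlier >= min_gap
               for earlier, later in zip(times, times[1:])):
            return times

if __name__ == "__main__":
    for t in schedule_beeps():
        print(f"beep at {t // 60:02d}:{t % 60:02d}")
```

The minimum gap keeps beeps from clustering, so samples are spread across the day rather than bunched into one hour.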

Hurlburt believes that we’re conscious of such thoughts while having them, but then they vanish “like a dream upon waking.”  The beeper is similar to mindfulness meditation.  Zen monks have a similar system—they sound a gong, and you pay attention to what’s going on right now.

Research has shown that regular meditators were quicker than others to consciously register a decision made by the unconscious mind.  There are many healthy memory blog posts on mindfulness and meditation.  And this is one of the many reasons for mindfulness and meditation, to get in touch with our unconscious minds.

Anyone with a cellphone can download Dr. Hurlburt’s app, IPromptU.

Predict the Future

January 12, 2017

HM works from his iPad.  This is the print title of an article by Diana Kwon in the October 1 issue of the New Scientist.  The healthy memory blog has stressed the importance of the unconscious mind and provided suggestions as to how to make use of your unconscious mind.  This blog post taken from this issue of the New Scientist elaborates on this idea.

Every moment the brain takes in an enormous amount of information, more than it can process on the fly.  To cope effectively with this enormous amount of information, the brain constantly makes predictions that it tests by comparing incoming data against stored information.  And most of this is done via unconscious processing.

Just imagining the future is enough to set the brain in motion.  Imaging studies have shown that when a sound or image is about to appear, the brain generates an anticipatory signal in the sensory cortex.

The brain is continuously predicting the sounds, words, and meanings that we are trying to produce or communicate.

Moreover, the senses are used to inform each other.  When a recording of speech is degraded so that it is nearly unintelligible, the words sound clearer if you have previously read the same words in subtitles.  Matt Davis at the MRC Cognition and Brain Sciences Unit in Cambridge, says that “the sensory parts of the brain are comparing the speech we’ve heard to the speech we’ve predicted.”

Our brains also make predictions on the basis of emotional signals coming from our bodies.  Moshe Bar, a neuroscientist at Bar-Ilan University in Israel, suggests that we only consciously recognize an object once our unconscious mind has calculated its importance based on what our senses and emotional reactions are saying.  For example, the conscious fear of a snake on a hiking trail comes after the brain has processed the shape and initiated jumping out of the way.

There are downsides to making predictions.  Incorrect inferences reinforced by repetition can be hard to reverse.  Stereotyping is an even more troublesome example of the same thing.  When it comes to human interactions it can lead to negative biases and discrimination.  Bar says that “stereotypes and prejudices are predictions working as they do with everything else, but in a way that is not desirable.”  Some neuroscientists also believe that the hallucinations experienced in psychosis are the result of expectations gone awry.  Despite their flaws, predictions are necessary.  Otherwise our species never would have survived.

Run Your Life on Autopilot

January 11, 2017

HM works from his iPad.  This is the print title of an article by Anil Ananthaswamy in the October 1 issue of the New Scientist.  The healthy memory blog has stressed the importance of the unconscious mind and provided suggestions as to how to make use of your unconscious mind.  This and the following blog posts taken from this issue of the New Scientist elaborate on these ideas.

An enormous part of our day-to-day lives (driving, making coffee, touch typing) happens without conscious thought.  Unlike many of the brain’s other unconscious habits, these skills had to be learned before the brain could automate them.  How it does this could potentially provide a method for us to think our way out of bad habits.

Ann Graybiel of the Massachusetts Institute of Technology and her colleagues have shown that a region deep inside the brain called the striatum is key to habit formation.  When we undertake an action, the prefrontal cortex, which is involved in planning complex tasks, communicates with the striatum, which sends the necessary signals to enact the movement.  Over time, input from the prefrontal cortex fades, to be replaced by loops linking the striatum to the sensorimotor cortex.  The loops, together with the memory circuits, allow us to carry out the behavior without having to think about it.  Practice makes perfect and no thinking is required.  The obvious upside is that we no longer need to focus our attention on a frequent task, so the spare processing power can be used for other things.  Unfortunately, similar circuitry is involved in turning all kinds of behavior into habits, including thought patterns, and once any kind of behavior becomes a habit, it becomes less flexible and harder to interrupt.  This is fine for good habits, but once bad habits are ingrained, they are equally hard to get rid of.  We lose the moment of choice when we could decide not to do something.

Fortunately, even with the most ingrained habits, a small area of the prefrontal cortex is kept online in case we need to take alternative action.  This offers hope to any of us looking to break a bad habit and to those suffering from habit-related problems such as obsessive-compulsive disorder and Tourette’s syndrome, both of which are associated with abnormal activity in the striatum and its connections to other parts of the brain.  These circuits are potential targets for future drug treatments.  However, for now the best way to get a handle on bad habits is to become aware of them.  Then focus all your attention on them and hope that it’s enough to help the frontal regions resist the call of the autopilot.  An alternative approach is to teach ourselves a new habit that counters the bad one.

Think While You Sleep

January 10, 2017

HM works from his iPad.  This is the print title of an article by Simon Makin in the October 1 issue of the New Scientist.  The healthy memory blog has stressed the importance of the unconscious mind and provided suggestions as to how to make use of your unconscious mind.  This and the following blog posts taken from this issue of the New Scientist elaborate on these ideas.  This first post reviews a study done in 1999.  A team at the University of Lübeck in Germany put 15 volunteers to bed at midnight.  The team either told the participants they would wake them at 9 am and did, or told them they would wake them at 9 am, but actually woke them at 6 am, or said they would wake them at 6 am and did.

The last group had a measurable rise in the stress hormone adrenocorticotropin from 4:30 am, peaking around 6 am, when these participants had been told they would be awakened.  The participants woken unexpectedly at 6 am had no such peak.  The researchers concluded that the unconscious mind can not only keep track of time while we sleep, but also set a biological alarm to jump-start the waking process.

A 2014 study by Sid Kouider of the École Normale Supérieure in Paris and his colleagues found that the sleeping brain can also process language.  They trained participants to push a button with their left or right hand to indicate whether they heard the name of an animal or object as they fell asleep.  They monitored the brain’s electrical activity during training and when the participants heard the same words while asleep.  Activity continued in the brain’s motor regions even during sleep, indicating that the sleepers were preparing to push the correct button.  The participants could also correctly categorize new words first heard after they had dropped off to sleep, indicating that they were genuinely analyzing the meaning of the words while asleep.

A more recent study found that while language processing continues in REM sleep for words heard just before bed, once in deep sleep all responses disappear as the brain goes “offline” to allow the day’s memories to be processed.  Kouider says that “your cognition about things in the environment declines progressively towards deep sleep.  Sleep is not all-or-none in terms of cognition, it’s all-or-none in terms of consciousness.”

Regarding that New Year’s Eve Hangover

December 30, 2016

This post is based largely on an article by Richard Webb titled “Hung over:  What science says about why you feel so rough” in the 7 December 2016 issue of the New Scientist.

First of all, alcohol itself is not the reason you feel so bad.  By the time a hangover sets in, the blood’s concentration of ethanol is zero.  Moreover, there probably isn’t just one cause for all the hangover symptoms.  It could be the impact on sleep quality of forcing your body to break down a large amount of a toxic substance.

Dehydration is a side effect.  The pounding head and dry mouth probably result from alcohol’s suppression of the antidiuretic hormone vasopressin, which increases the desire to urinate.  During the hangover, vasopressin snaps back to a higher level than normal, but there does not appear to be a correlation between that or any other drink-induced hormonal imbalance and the severity of the hangover.

The delayed onset of a hangover means that the metabolic products of ethanol are prime suspects.  A study in Japan found that people with inactive genes for making enzymes that break down acetaldehyde, a highly reactive by-product of ethanol, experienced a hangover after fewer drinks.  However, an earlier Scandinavian study showed that acetaldehyde concentrations were generally low when a hangover was most severe, suggesting that its effects are indirect or delayed.  Perhaps it is acetate, which occurs further down the line as a product of acetaldehyde breakdown.

A recent study of the urine of a group of hung-over Dutch students found that ethanol concentration was correlated with the severity of symptoms that include sleepiness, sweating, concentration problems, nausea, thirst, and to a lesser extent, confusion, headache, weakness and regret.  However, the same correlations were not present in a self-described hangover-immune group that had drunk a similar amount.  These people also had less alcohol in their urine.

So it seems that the ability to rapidly metabolize alcohol is more important than the amount consumed in determining hangover severity.  But this does not explain why we can have a severe hangover after hardly drinking anything, and at other times experience only a mild hangover after drinking heavily.

Some think that congeners, chemicals other than ethanol produced during fermentation that give each drink its distinctive aroma and taste, play a role.  According to a study that compared hangover severity in drinkers of bourbon and vodka, dark spirits are worse than clear ones at inducing severe hangovers.  On the other hand, research in Japan found that the higher levels of congeners in whiskies might inhibit the breakdown of ethanol and at least delay the onset of hangovers.

It should be clear that in all these studies there are uncontrolled confounding variables.

The advice offered by HM is not to drink so much.  There is no cure, and it is unlikely that one will be developed.  Excessive alcohol consumption does not foster a healthy memory.

Read the list below, from a 2011 survey of 1410 hung-over Dutch students, giving hangover symptoms in order of frequency; it should serve as a reminder to be cautious:  Fatigue, thirst, drowsiness, sleepiness, headache, dry mouth, nausea, weakness, reduced alertness, concentration problems, apathy, increased reaction time, reduced appetite, clumsiness, agitation, vertigo, memory problems, gastrointestinal complaints, dizziness, stomach pain, tremor, problems with balance, restlessness, shivering, sweating, disorientation, sensitivity to sound, photosensitivity, blunted emotions, muscle pain, loss of taste, regret, confusion, guilt, gastritis, impulsivity, hot/cold flashes, vomiting, pounding heart, depression, palpitations, tinnitus, nystagmus (uncontrolled eye movement), anger, respiratory problems, anxiety, suicidal thoughts.

And whatever you do, do not drink and drive.

© Douglas Griffith, 2016.  Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited.  Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith with appropriate and specific direction to the original content.

The Game Theory Guide to a Happy Family Holiday

December 21, 2016

This post is taken from a piece by Paul Raeburn and Kevin Zollman titled “No more drama:  The game theory guide to a happy family holiday” in the 17 December 2016 issue of the New Scientist.  The piece asks how we can encourage our families to behave themselves in a way that reflects how, deep down, they truly love each other.  The answer:  turn to game theory, the science of strategic thinking.

If a decision needs to be made about who will host, you can draw straws.  If there are three choices you can use a Borda count:  each person ranks their preferences, the numbers are added, and the host with the lowest score wins.
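As a sketch, the Borda count described above can be put into a few lines of Python (the function and the family members’ names here are illustrative, not from the article):

```python
def borda_count(rankings):
    """Each person lists the candidates from most to least preferred.
    A candidate scores points equal to its position in each ranking
    (0 for a first choice, 1 for a second, and so on); the candidate
    with the lowest total wins."""
    scores = {}
    for ranking in rankings:
        for position, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + position
    return min(scores, key=scores.get)

# Three family members rank three potential hosts.
rankings = [
    ["Mom", "Dad", "Aunt Sue"],
    ["Dad", "Mom", "Aunt Sue"],
    ["Dad", "Aunt Sue", "Mom"],
]
print(borda_count(rankings))  # Dad (total 1, vs. 3 for Mom and 5 for Aunt Sue)
```

Because every ranking counts, a candidate who is everyone’s second choice can beat one who is loved by some and loathed by the rest, which is exactly the peace-keeping property you want at the holidays.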

What about the question of who is going to bring which dish?  As the authors note, many people have delicious alternatives they would love to provide, but no one wants to hurt someone’s feelings, so those underappreciated dishes are left lightly touched.  So switch the frame around the dishes.  For example, seize on a change of time or venue.  Then say, for instance, “Since we’re at Dad’s house this year, let’s change dishes.”

It is recommended that political discussion be off limits, so the squabble will probably be over who gets the last roast potatoes or the final sliver of something else.  To keep bickering to a minimum, game theorists recommend “I cut, you pick.”  If two people are hankering after the last of the yule log, one slices and the other chooses.

A common problem is that there is too much food and people are begged to take home leftovers. Everyone brings way too much food because of the incentives.  There’s no real penalty for bringing an excessive amount, but somebody might be offended if they bring too little.  So change the incentives.  Give a prize to the cook whose dish is totally gone, or make the guest with the most leftovers host next time.  Now it’s not a measure of love, it’s a game.

What about misbehaving children?  To stop their misbehavior, there needs to be a credible threat.  A simple warning might not be credible.  However, a threat to make the children do the dishes will likely be credible and effective.

What to do about a guest who does not pitch in and help out?  You can use empirical expectations to put him to work.  Make a point of having someone clean up around him so he can see them doing it.  He just might feel obliged to pitch in.

What about arguments over which game to play?  Propose an auction by having the opposing camps bargain by offering to do chores.  Whoever makes the best offer—finish washing the dishes and tidy up the kitchen—gets to pick.

Don’t forget the ultimatum game.  How should two kids divvy up a small box of chocolates?  Ask one to keep some for herself and offer the rest to the other.  If the second child regards the offer as unfair and rejects it, neither one gets any of the chocolates.
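The logic of the ultimatum game is simple enough to sketch in Python (the fairness threshold below is illustrative; real children, like adults, vary in what they consider fair):

```python
def ultimatum(total, offer, min_acceptable):
    """The proposer offers `offer` chocolates out of `total`.
    The responder accepts only if the offer meets their fairness
    threshold; a rejection leaves both players with nothing."""
    if offer >= min_acceptable:
        return total - offer, offer  # (proposer's share, responder's share)
    return 0, 0

print(ultimatum(10, 4, 3))  # (6, 4): a fair-enough offer is accepted
print(ultimatum(10, 1, 3))  # (0, 0): a stingy offer costs both children everything
```

The threat of mutual loss is what disciplines the first child into making a reasonable offer in the first place.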

The authors, Paul Raeburn and Kevin Zollman, have co-written the book “The Game Theorist’s Guide to Parenting.”  Perhaps this might be a Christmas gift for someone.

Super-you: Fine-tune Your Life by Making Goals Into Habits

December 20, 2016

In the 10 Dec 2016 issue of the New Scientist there was a series of articles whose titles began super-you.  HM is reviewing a select sample of these pieces.  The title of this post is identical to the title of a piece by Julia Brown.  At the beginning of the article she writes, “Your environment controls you, as do habits that you don’t even know you have.  But realize what’s really pulling your strings, and you can work out how to manipulate yourself for the better.”

Wendy Wood of the University of Southern California and her colleagues have shown that almost half of the behaviors we adopt in any given situation are habitual, that is, automated actions learned by repetition until we perform them without thinking.  In Kahneman’s terms these are System 1 processes; they include eating, napping, watching TV, and exercising.

We work this way because we have to.  If every act we perform required us to be deeply involved in thought, our species never would have evolved.

Brown writes that identifying how your unconscious is working provides you with ways to fine-tune your behavior.  If you want to change habits, look at where and how you enact them.  If you want to stop smoking, avoid places where you are likely to light up, or move your cigarettes out of sight.  If you want to start eating more healthily, stop meeting friends for lunch at a burger restaurant.

Val Curtis, who studies behavior change at the London School of Hygiene and Tropical Medicine, has used these insights to develop ways to encourage hand washing with soap in India and to modify the tendency for mothers in Indonesia to feed their children unhealthy snacks.  She says we can all prime ourselves in similar ways.  So if you think you ought to do some exercise but don’t feel like it, put your running gear on anyway, and wait and see what happens.  You let your running gear control your behavior, and it takes you for a run.

Super-you: Train Your Brain to Beat the Inbuilt Fear Factory

December 18, 2016

HM has written in previous posts about how annoyed he is by people’s fears of terrorist attacks.  HM lived through the Cuban Missile Crisis, when the threat of nuclear annihilation was very real.  The threat of terrorism pales in comparison.  The probability of an individual suffering a terrorist attack is extremely small.  And even the number of lives lost during 9/11 was minuscule compared to the loss if a nuclear warhead had exploded over Manhattan.  As a result of 9/11 many people stopped flying and got into their cars.  The annual death toll on the road was on average 1100 higher than in the five preceding years.

The New Scientist piece that inspired this post has the same title as this post and was written by Sally Adee.    She begins the article noting that evolution has given us an inbuilt fear factory.  But by engaging a different way of thinking we can stop panicking and assess the real risks.

Adee draws upon Kahneman’s Two Process concept of cognition.  System 1 is fast and the product of evolved biases shaped over many thousands of years.  This worked well.  If you saw a shadow in the grass, and it was a lion, and you lived to tell the tale, you’d make sure to run the next time you saw a shadow in the grass.  This inbuilt fear factory is highly susceptible to immediate experience, vivid images and personal stories.  Security companies, political campaigns, tabloid newspapers and ad agencies prey on it.  Adee notes that System 1 is good at catastrophic risk, but less good at risks that build up slowly over time—thus our lassitude in the face of climate change or our expanding waistlines.

She advises that when your risk judgment is motivated by fear, stop and think:  what other, less obvious risks might I be missing?  This amounts to engaging the more rigorous, analytical System 2 outlined by Kahneman.  People who deal with probability and risk professionally, and excel at it, use System 2 quite heavily.  Successful bookies, professional card players and weather forecasters are heavy users of their System 2 processes.  Risk consultant Dan Gardner notes that even though meteorologists get a bad rap, they tend to be highly calibrated, unlike most of us.  One can never be right all the time, but one should be attempting to calibrate one’s risk assessments against the objective world.
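The article doesn’t say how calibration is measured, but a standard metric for probabilistic forecasts is the Brier score, sketched here purely as an illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and
    actual outcomes (1 if the event occurred, 0 if it did not).
    0 is perfect; for yes/no events, always guessing 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 90% chance of rain on a rainy day
# and 20% on a dry one comes out close to 0.025.
print(brier_score([0.9, 0.2], [1, 0]))
```

A well-calibrated forecaster is one whose 70%-confidence predictions come true about 70% of the time; tracking a score like this against your own predictions is one concrete way to practice the humility Gardner describes.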

Andy Spicer, who studies organizational behavior at City University of London, notes that part of the problem in the run-up to the financial collapse of 2008 was that individuals were no longer accountable for their own actions.  “At banks, there was no direct relationship between what you did and the outcome.  That produced irrational decisions.”

Gardner says, “there’s one feature you see over and over in people with good risk intelligence.  I think it wouldn’t be too grandiose to call it the universal trait of risk intelligence—humility.”  The world is complex, so be humble about what you know and you’ll come out better.

HM would note that there is such a thing as risk intelligence and that it can be increased.  See the healthy memory blog post “Risk Intelligence.”


Super-you: How to Harness Your Inner Braggart

December 16, 2016

In the 10 Dec 2016 issue of the New Scientist there was a series of articles whose titles began super-you.  HM is reviewing a select sample of these pieces.  This inner braggart piece was written by Tiffany O’Callaghan.  She begins the article, “Think you’re saner, smarter and better-looking than average?  Well, so does everyone else.  Recognizing our delusions is the first step to doing better.”  She asks, “Ever had the sense that everyone else is an idiot?  Have you seen how those jerks drive?”  As Garrison Keillor tells it, all the children in Lake Wobegon are above average.  This is not unique to Lake Wobegon.  When it comes to smarts, looks, charisma, and general psychological adjustment, there’s no denying that we’re a cut above the average person in the street.

O’Callaghan notes that viewing ourselves as above average applies across human ages, professions, and cultures, and to capabilities from driving to playing chess.  She notes that this does have advantages:  “People who are more impressed with themselves tend to make better first impressions, be generally happier and may even be more resilient in the face of trauma.”  Anthropologist Robert Trivers of Rutgers University says that high self-estimation might also let us get ahead by deceiving others.  He argues that once we’ve tricked ourselves, we don’t have to work so hard to trick others.

Confidence also helps in finding a romantic partner, which in turn increases the probability of reproduction.  Men appear to be worse offenders than women at overestimating their looks.  A study by Marcel Yoder and his team at the University of Illinois found that some men seem to suffer from a “frog prince” delusion:  they accurately assess other people’s lesser perception of them, while persisting in a more positive perception of themselves.

Problems come when we’re less aware of how others perceive us.  Those who are self-confident without being self-aware are likely to be seen as jerks.  Yoder says, “It’s hard to come off as humble or modest when you’re clueless about how other people see you.”  Moreover, we can make bad decisions on the basis of an inflated sense of expertise or understanding.

This is particularly dangerous in the political arena.  A “bias blind spot”, the belief that our world view is based on objective truth while everyone else is a deluded fool, can become problematic, especially as the echo chamber of social media exposes us to fewer contrary views (“Bias Blind Spot:  Structure, Measurement, and Consequences”, Scopelliti et al.).  It can make opposing parties feel that the other side is too irrational to be reasoned with, says Wenje Yan.

So the question is how we can preserve the good while avoiding the downsides.  There are different strategies and training programs for overcoming our inbuilt biases.  Most begin by making people aware of them and of how they can affect our decision making.  Psychologists have an exercise called perspective taking.  Irene Scopelliti says that this amounts to trying to see a dispute from the other person’s point of view.  She points out that acting when you’re all riled up, in a state of high emotion, exacerbates the problem by entrenching our bias.  She says, “We know how to make unbiased decisions, but often emotion biases us, or we aren’t willing to put in the effort.”  Nevertheless she maintains that practice can make us better.

Super-you: Use Your Better Instincts to Crush Your Inner Bigot

December 14, 2016

In the 10 Dec 2016 issue of the New Scientist there was a series of articles whose titles began super-you.  HM is reviewing a select sample of these pieces.  This instincts piece was written by Caroline Williams.  HM does not like this use of the word “instincts.”  “Predisposing biases” would have been a more fortunate choice.  However, this article accounts for much of the ugliness prevalent throughout the world.  The quick explanation is that these people are in their default mode of feeling and thinking.  But this is a very low level of thinking.  It is System 1 processing in Kahneman’s terms.

The unpalatable truth is that we are biased, prejudiced and racist.  We put people into mental boxes marked “us” and “them”.  Implicitly we like, respect and trust people who are similar to us and feel uncomfortable around everyone else.  This tendency towards in-group favoritism is so ingrained that we often don’t realize we are doing it.  It is an evolutionary hangover affecting how the human brain responds to people it perceives as different.

A study from 2000 found that just showing participants brief flashes of faces of people of a different race was enough to activate the amygdala (Neuroreport, 11(11):2351-5, September 2000).  HM readers should know that the amygdala is a key component of the brain’s fear circuitry.  But the amygdala doesn’t just control fear; it responds to many things and calls on other brain areas to pay attention.  Although we’re not automatically scared of people who are not like us, we are hardwired to flag them.  As Williams notes, “evolutionarily, that makes sense:  It paid to notice when someone from another tribe dropped by.”

When Susan Fiske of Princeton University scanned volunteers’ brains as they looked at pictures of homeless people, she found that the prefrontal cortex, which is activated when we think about other people, stayed quiet.  Apparently these volunteers seemed to process the homeless people as subhuman (Social Cognitive and Affective Neuroscience, March 2007, 2(1), 45-51).

Fiske says, “The good news is that this hard-wired response can be overcome depending on context.”  In both the homeless study and a rerun of the amygdala study, Fiske found that fear or indifference quickly disappeared when participants were asked questions about what kind of food the other person might enjoy.  Fiske continues, “As soon as you have a basis for dealing with a person as an individual, the effect is not there.”

What we put in the “them” and “us” boxes is flexible.  When Jay Van Bavel of New York University created in-groups including people from various races, participants still preferred people in their own group, regardless of race.  It seems that all you have to do to head off prejudice is to convince people that they are on the same team (Pers Soc Psychol Bull, December 2012, 38(12), 1566-1578).

It appears that we are instinctively cooperative when we don’t have time to think about it.  Psychologist David Rand of Yale University asked volunteers to play gambling games in which they could choose to be selfish, or cooperate with other players for a slightly lower, but shared, payoff.  When pressed to make a decision, people were much more likely to cooperate than when given time to mull it over.

Williams concludes her article thusly:  “So perhaps you’re not an asshole after all—if you know when to stop to think about it and when to go with your gut.  Maybe, just maybe, there is hope for the world.”


Super-you: We’re All Reading Each Other’s Minds, All the Time

December 13, 2016

In the 10 Dec 2016 issue of the New Scientist there was a series of articles whose titles began super-you.  HM is reviewing a select sample of these pieces.  This mind reading piece was written by Gilead Amit.  As Amit notes, being able to predict what other people think is the secret sauce of culture and social connections.

According to psychologist Joseph Call we all possess a “theory of mind” that informs us every waking moment.  “When we get dressed in the morning, we’re constantly thinking about what other people think about us.”  He says that no other animal can match our ability to think about the minds of others, and that this is the essential lubricant for social interactions that sets humans apart.  We humans are not unique in having this ability, but ours is far superior to that of other species.

Artists need to imagine what their audiences will think of their characters.  A theory of mind is critical to compelling TV soaps, sculptures or books.  Some think William Shakespeare had a particularly well-developed theory of mind to create such rich, complex characters (See “Shakespeare:  Unleashing a tempest in the brain” by David Robson in the 15 April 2014 issue of the New Scientist).

Mind reading establishes societal norms.  People respond not only to what we do, but to what we intend to do.  For example, if you hit someone with your car, the difference between a verdict of murder and one of manslaughter depends on your intent.

Psychologist Rory Devine notes that we can’t all read minds equally well.  Most of us have difficulty when attempting nested levels of mind reading.  For example, think of Sally hunting for her cake, but imagine where she might look if we take into account what she thinks about how Andy’s mind works.  The more recursive steps we add, the more difficult it becomes.  Call says, “When you go beyond five levels, people get really bad.”  HM does not believe that he can get even close to four levels, much less five.

Obviously being a good mind reader is an important skill.  Children who are relatively proficient later report being less lonely and their teachers rate them as more sociable.

Devine says that “the ability to read minds is something we might learn gradually from the guidance of others.”  This mind reading apparatus mostly develops before the age of 5, and the principal factor that determines its development is whether our families and friends talk much about the emotions and motivations of others.

Perhaps the first step is to think about what it’s like to be in other people’s shoes.  Devine and his colleagues showed that this learning can continue far beyond early childhood.  When they asked 9- and 10-year-old children to read and discuss short vignettes about social situations, the children developed better mind reading skills than children in a control group.  It appears that we’re never too old to become better mind readers.  Similar improvements have also been seen in people over the age of 60.

Super-You: You Have a Superstitious Mind—to Protect You

December 12, 2016

In the 10 Dec 2016 issue of the New Scientist there was a series of articles whose titles began super-you.  HM is reviewing a select sample of these pieces.  This superstitious mind piece was written by Graham Lawton.  Lawton writes, “The vast majority of people are religious, which generally entails belief in a supernatural entity or three.”  Nevertheless, among the oceans of religiosity are archipelagos of non-belief.  Conservative estimates are that half a billion people around the world are non-religious.

However, among the scientists who study the cognitive foundations of religious belief, there is a widespread consensus that atheism is only skin-deep.  Scratch the surface of a non-believer and you’ll likely find a writhing nest of superstition and quasi-religion.

Lawton writes that this is because evolution has endowed us with cognitive tendencies that, while useful for survival, make us very receptive to religious concepts.  Psychologist Ara Norenzayan of the University of British Columbia says, “there are core intuitions that make supernatural beliefs easy for our brains.”

One of our cognitive abilities, known as theory of mind, enables us to think about and intuit other people’s thoughts.  That’s certainly useful for a social species like us, but it also tricks us into believing in disembodied minds with mental states of their own.  The idea that mind and body are distinct entities seems to come instinctively to us.  Add teleology, the tendency to seek cause and effect everywhere and to see purpose where there is none, and it is obvious why the human brain is superstitious (see the healthymemory blog post “Thinking 2.0”).

Presumably these same thought processes underlie beliefs in supernatural phenomena such as ghosts, spiritual healing, reincarnation, telepathy, astrology, lucky numbers and Ouija boards.  Three-quarters of Americans admit to holding at least one of ten common supernatural beliefs.

Lawton writes, “With all this supernatural equipment filling our heads, atheism and scientific materialism are hard work.  Overriding inbuilt thought patterns requires deliberate and constant effort, plus a learned reference guide to what is factually correct and what is right and wrong.  Just like a dieter tempted by a doughnut, will power often fails us.”

Experiments have shown that supernatural thoughts are easy to invoke even in people who consider themselves skeptics.  Asked if a man who dies instantly in a car crash is aware of his own death, large numbers answer “yes”.  People who experience setbacks in their lives routinely invoke fate, and uncanny experiences are frequently attributed to paranormal phenomena.

Of course, it is impossible to prove that everyone falls prey to supernatural instincts.  But the supernatural exerts a pull on us that is hard to resist.  It is likely that the belief that we are rational creatures is wishful thinking.

One can argue that Pascal’s Wager does provide a rational justification for a belief in God.   See the healthymemory blog post  “God.”

© Douglas Griffith and, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Why Do We Sleep?

December 10, 2016

The question raised by the title of this post is highly relevant given that about one-third of our lives is spent sleeping.  A brief piece titled "A bad night's sleep messes with your brain's memory connections" in the In Brief section of the August 27, 2016 edition of the New Scientist provides a compelling answer.  The piece begins with the following sentence:  "This is why you feel so awful after a bad night's sleep—your brain is jammed with yesterday's news."

The research was done by Christoph Nissen and his team at the University Medical Center in Freiburg, Germany.  They examined the brains of 20 people after they'd slept well, and after a night of disruption.  They found that after a bad night's sleep, people had higher levels of theta brainwaves, and it was easier to stimulate their brains using magnetic pulses (Nature Communications, DOI: 10.1038/ncomms12455).

The findings support the theory that sleep serves to weaken memory connections, making way for new ones.  Nissen says that without this synaptic downscaling, the brain loses the capacity to form novel connections, impairing the encoding of novel memories.  The theory is that sleep evolved so that connections in the brain can be pruned down during slumber, making room for fresh memories to form the next day.

The idea that sleep is important to memory is not new.  And memory is certainly important enough that we need to devote about one-third of our lives to supporting it.  Of course, memory is probably not the only capacity to benefit, but the other capacities that benefit are likely closely related to memory.


A High Risk But Viable Alternative to Kurzweil’s Singularity

December 7, 2016

Kurzweil's Singularity consists of uploading the information in human biological brains into the silicon of computer hardware.  Kurzweil regards this as possibly occurring during his own lifetime, so he is doing everything possible to extend his life.  HM has previously indicated that this is highly unlikely because Kurzweil is ignoring the reality that silicon and biology differ.

The 29 October 2016 issue of the “New Scientist” has an article titled, “$100 million project to make intelligence-boosting implant.”  Entrepreneur Bryan Johnson launched the company Kernel earlier this year.  Johnson is working with Theodore Berger at the University of Southern California, who is looking at the hippocampus.  Healthy memory blog readers should know that the hippocampus is a key brain region involved in the retrieval of memories.  There is a hippocampus in each hemisphere, so we have two hippocampi.

Berger is working with people who already have electrical implants in their brains to treat epileptic seizures.  Instead of using these implants to stimulate the brain, his team has been harnessing them to record brain activity, to learn more about how our memory works.   Johnson says that once we know how a healthy brain functions, we should be able to mimic it.  The goal is to restore function in people with memory disorders by stimulating the same pattern of activity.  Berger has had enough success with animals that he has begun experiments with people.

Johnson says, “The idea is that if you have a loss of memory function, then you could build a prosthetic for the hippocampus that would help restore the circuitry and restore memory.”

It is both appropriate and fair that people with memory disorders will be the first to try the device.  Johnson says, “The first potential superhumans are those who have deficits to start with.”  He then plans to develop the prosthesis to enhance memory and potentially other functions in healthy people.  His vision of the future is one in which it is normal for people to walk around with chips in their brains, providing them with a cognitive boost.

Johnson has put up $100 million of his own money to go on developing such a device.  It will be as tiny and easy to implant as possible, while still being able to record or stimulate multiple neurons.  The research also involves working out the rules underlying the patterns of activity that dictate normal brain function for an individual.

Johnson says, "If we can mimic the natural function of the brain, then I posit the question, what can't we do?  Could we learn a thousand times faster?  Could we choose which memories to keep and which to get rid of?  Could we have a connection with our computers?"

Note that Johnson is asking questions and not making promises.  To be sure, this is high-risk research, but it is based on a viable approach, unlike Kurzweil's proposal.  Johnson has identified necessary first steps on the way to a marriage between the brain and silicon.


Motivated Reasoning, Cognitive Dualism, and Scientific Curiosity

December 4, 2016

This post is based on a Feature Article by Dan Jones titled "Seeing reason:  How to change minds in a 'post-fact' world" in the December 3, 2016 issue of the New Scientist.  The article notes that politicians spin and politicians lie, that this has always been the case, and that to an extent it is a natural product of a free democratic culture.  But Jones goes on to note, "Even so we do appear to have entered a new era of 'post-truth politics', where the strongest currency is what satirist Stephen Colbert has dubbed 'truthiness':  claims that feel right, even if they have no basis in fact, and which people want to believe because they fit their pre-existing attitudes."

However, facts are important, as Brendan Nyhan of Dartmouth College notes, “We need to have discussions that are based on a common set of accepted facts, and when we don’t, it’s hard to have a useful democratic debate.”  As Jones writes, “In the real world of flesh-and-blood humans, reasoning often starts with established conclusions and works back to find “facts” that support what we already believe.  And if we’re presented with facts that contradict our beliefs, we find clever ways to dismiss them.”  Psychologists call this lawyerly tendency motivated reasoning.

A Pew Research Center survey released before the US election showed that compared with Democrats, Republicans are less likely to believe that scientists know that climate change is occurring, that they understand its causes, or that they fully and accurately report their findings.  They are also more likely to believe that scientists' research is driven by careerism and political views.  Many liberals think this is a product of scientific illiteracy, which if addressed would bring everyone around to the same position.  Unfortunately, research by Dan Kahan at Yale University has shown that, in contrast to liberals, among conservatives it is the most scientifically literate who are least likely to accept climate change.  Kahan says, "Polarisation over climate change isn't due to a lack of capacity to understand the issues.  Those who are most proficient at making sense of scientific information are the most polarized."

Kahan attributes this apparent paradox to motivated reasoning:  the better one is at handling scientific information, the better one is at confirming one's own biases and writing off inconvenient truths.  For climate-change deniers, studies suggest that the motivation is often the endorsement of free-market ideology, which includes objections to the government regulation of business that would be required to address climate change.  Psychologist Stephan Lewandowsky of the University of Bristol says, "If I ask people four questions about the free market, I can predict attitudes towards climate science with 60% accuracy."

Jones writes, “But liberal smugness has no place here.  Consider gun control.  Liberals tend to want tighter gun laws, because, they argue, fewer guns would translate into fewer gun crimes.  Conservatives typically respond that with fewer guns in hand, criminals can attack the innocent with impunity.”

Despite the best efforts of criminologists, the evidence on this issue is mixed.  Kahan has found that both liberals and conservatives react to statistical information about the effects of gun control in the same way:  they accept what fits in with the broad beliefs of their political group, and discount what doesn't.  Kahan writes, "The more numerate you are, the more distorted your perception of the data."  Motivated reasoning is found on other contentious issues, from the death penalty and drug legalization to fracking and immigration.

The UK's Brexit vote provides another compelling case study of the distorting power of motivated reasoning.  Researchers at the Online Privacy Foundation found that although both Remainers and Brexiteers could accurately interpret statistical information when it came to assessing whether a new skin cream caused a rash, their numeracy skills abandoned them when looking at stats that undermined the rationales for their views, such as figures on whether immigration is linked to an increase or a decrease in crime.

It is not just a matter of political ideology.  Although the bogus link between autism and the vaccine for measles, mumps, and rubella is often portrayed as a liberal obsession, it cuts across politics.  Nyhan says, “There’s no demographic factor that predicts who is most vulnerable to anti-vaccine claims.”

It should not be concluded that myth-busting is a waste of time.  Nyhan and Reifler found that during the 2014 midterm elections in the US, fact-checking improved the accuracy of people's beliefs even when it went against ingrained biases.  Both Democrats and Republicans updated their beliefs after having a claim debunked.

Emily Thorson of George Washington University found that misconceptions about issues like how much of the US debt China owns, whether there's a federal time limit for receiving welfare benefits, and who pays for Social Security could be fixed by a single corrective statement.

Unfortunately, the bad news is that myth-busting loses its power on salient and controversial issues.  Nyhan says, "It's most effective for topics that we're least concerned about as a democracy.  Even the release of President Obama's birth certificate had only a limited effect on people's belief that he wasn't born in this country."  Thorson has found that even when corrections work (for example, getting people to accept that a congressman accused of taking campaign money from criminals did no such thing), the taint of the earlier claim often sticks to the innocent target.  This phenomenon is termed "belief echoes."

Graphical presentation of information can be more effective than verbal presentations, but this benefit requires that people be able to read graphs.  Many people have difficulty understanding graphs, so simple graphs have a higher likelihood of success.

Kahan calls the ability to hold two seemingly contradictory beliefs at the same time "cognitive dualism."  Cognitive dualism was found in a recent Pew survey on climate change:  just 15% of conservative Republicans agreed that human activity was causing climate change, but 27% agreed that if we change our ways to limit carbon emissions, it would make a big difference in tackling climate change.  The same dualism was found among US farmers.  A 2013 survey found that only a minority accepted climate change as a fact.  Yet a majority believed that some farmers would be driven out of business by climate change, and that the rest would have to change current practices and buy more insurance against climate-induced crop failures.  Many already have, buying crops genetically engineered to cope with climate change and purchasing specialist insurance policies.

Kahan has discovered something interesting about people who seek out and consume scientific information for personal pleasure.  He calls this trait scientific curiosity, and he has devised a scale for measuring it.  He and his colleagues have found that, unlike scientific literacy, scientific curiosity is linked to greater acceptance of human-caused climate change, regardless of political orientation.  On many issues, from attitudes towards porn and the legalization of marijuana to immigration and fracking, scientific curiosity makes both liberals and conservatives converge on views closer to the facts.

So exploiting cognitive dualism and fostering scientific curiosity appear to be the most promising avenues to pursue.  It is important to remember that it is scientific curiosity rather than scientific literacy that is important here.


To Treat Chronic Pain, Look to the Brain Not Body

December 3, 2016

This post is taken from a Feature Article by Jessica Hamzelou, "Hurt Blocker:  To treat chronic pain, look to the brain not body," in the 26 November 2016 issue of the New Scientist.  It is becoming increasingly clear that addressing the root causes of chronic pain will require more than drugs to break the cycle.  The answer lies in how the brain processes pain.

As has been mentioned in previous posts, there are two pathways for pain.  One is from the actual physical injury, whereas there is a second pathway for emotion linked pain.  Recent research indicates that signals from psychological pain networks may take over when the problem becomes chronic.

People can be trained to more directly influence their own brain activity and, potentially, turn down the pain signal.  Neurofeedback can be provided by placing electrodes on participants' scalps to give a real-time display of the brain's electrical activity.  People can then learn to alter their brain activity to dial down their pain.  Initial research suggests that neurofeedback might be useful for people with fibromyalgia, as well as those with chronic pain resulting from spinal cord injuries and cancer.

Mindfulness meditation can achieve something similar.  The goal is to achieve a state of ‘detached observation,’ which can help cope with pain.  Studies have suggested that it improves  various types of chronic pain, including fibromyalgia and lower back pain.  A study of 17 people who practiced mindfulness-based stress reduction found that, over time, meditators experienced increases in grey matter in regions of their brains involved in learning, memory, and emotion.  All of these influence pain perception.

The following is taken from a previous healthy memory blog post “Pain and the Second Dart:”
“A great way to return your mind to its “ground state,” neither overexcited nor torpid, simply alert and open, is to become aware of the natural rhythm of the breath as you inhale and exhale.  This is focused attention, prerequisite for the second state of mindfulness meditation:  insight.

Start by focusing on the sensation of the breath entering and leaving you body at the nostrils.  Remember, you are observing your breathing rather than controlling it.  Follow each inhalation and exhalation from the start to the finish.  Notice any slight gap between the in-breath and out-breath.

Don’t be hard on yourself if your mind wanders or you get distracted by a noise.  This is all perfectly normal.  Just remind yourself:  “That’s how the mind works,” and return to the breath.  With repetition, you will get better at noticing when you have lost focus and develop greater mindfulness of the present moment.

Now that you have quieted your mind, allow your attention to broaden.  Whenever a positive or negative feeling arises, make it the focus of your meditation, noticing the bodily sensations associated with it:  perhaps a tightness, the heart beating faster or slower, butterflies in the stomach, relaxed or tensed muscles.  Whatever it is, address the feeling with friendly, objective curiosity.  You could silently label whatever arises in the mind, for example:  "There is anxiety," "There is calm," "There is joy," "There is boredom."  Remember, everything is on the table, nothing is beneath your attention.

If you experience an ache or a pain, a stitch or any other kind of discomfort, treat it in exactly the same way.  Turn the spotlight of your attention on the sensation but don't allow yourself to get caught up in it.  Imagine that on the in-breath you are gently breathing air into the location where the sensation is strongest, then expelling it on the out-breath.  You may notice that when you explore the sensation with friendly curiosity—not trying to change it in any way, neither clinging to it nor repressing it—the feeling will start to fade of its own accord.  When it has gone, return your full attention to your breath.

Mindfulness instructors will sometimes talk about “surfing” the wave of an unpleasant sensation such as pain, anxiety, or craving.  Instead of allowing yourself to be overwhelmed by the wave of feeling, you get up on your mental surfboard and ride it.  You experience it fully, but your mind remains detached, dignified, and balanced.  Knowing that the power of even the most fearsome wave eventually dissipates, you ride it out.

If a thought, emotion, or feeling becomes too strong or intrusive, you can always use the breath as a calm refuge, returning your whole attention to the breathing sensations at your nostrils.  Similarly, if you feel you can't cope with a pain such as stiffness in your legs, neck, or back, shift your posture accordingly.  But make the shift a mindful choice rather than a reflex, and make the movement itself slow and deliberate."

A previous healthy memory blog post, "Controlling Pain in Our Minds," explores this topic further and discusses the possibility of there being two different neural pathways processing the "two darts."


Superagers with Amazing Memories Have Alzheimer’s Brain Plaques

November 30, 2016

The title of this post is identical to that of an UpFront news article in the 19 November 2016 issue of the New Scientist.  HM is hoping that healthymemory blog readers are asking, "Is this news?  I thought this was well known!"  Although this is not news, it remains little known among the general public, even though it is the most substantive fact we have about Alzheimer's.

The article briefly summarizes work done by Aras Rezvanian and his colleagues at Northwestern University on brain samples donated by superagers to try to understand their exceptional memories.  Of the eight donated samples, two contained so many plaques and tangles that they looked like severe cases of Alzheimer’s.

But to repeat, this finding is not new.  Many such people have died.  Moreover, these two individuals were not known to have Alzheimer's.  After all, they were superagers.  And they died not knowing that their brains carried the defining pathology for a diagnosis.

It would be good to go back and read the healthymemory blog post on "The Myth of Alzheimer's."  The senior author of this book is Peter J. Whitehouse, M.D., Ph.D., who was once a researcher earning a lucrative income looking for drugs to mitigate or eradicate Alzheimer's.  He came to the conclusion that such work is fruitless and is now working as a clinician treating and mitigating dementia cases.  Here is his advice:  "It is unlikely that there will ever be a panacea for brain aging and baby boomers should not rely on extraordinary advancements being made in their lifetimes in spite of the promises of the Alzheimer's Disease (AD) empire that make their way into our headlines.  Our attention must begin shifting from mythical cure to hard-earned prevention, from expecting a symptomatic treatment for AD to choosing behaviors that may delay the effects of cognitive decline over the course of our lives."  Many, if not most, of the behaviors he discusses have been mentioned and advocated in the Healthymemory Blog.

The explanation for people living with the physical signs of Alzheimer's but none of its behavioral and clinical symptoms is that they have built up a cognitive reserve.  Cognitive activity, learning new things, is what builds up this cognitive reserve.  There are healthy memory blog posts on theoretical mechanisms for building cognitive reserves, but these posts are hypothetical conjectures.

That cognitive decline can be avoided by staying active has been known at least since the time of the Romans.   The Roman statesman Cicero held a view much more in line with modern-day medical wisdom that loss of mental function was not inevitable in the elderly and “affected only those old men who were weak-willed.”  HM would substitute  “not cognitively active” in the place of “weak-willed.”

When HM taught at a university he was amazed how so many students were able to get their degrees while spending a minimum of cognitive effort.  Other HM blog posts have argued that choices of News shows and political candidates might well be indications of the desire to spend the minimum in the way of cognitive effort.

In closing this post, it should be noted that Alzheimer's is not an inevitable consequence of aging, no matter how great an age is attained.  There are numerous documented supercentenarians (people living to 110+) who experienced no serious cognitive impairment.


Can the US Heal Its Political Rift?

November 16, 2016

This blog post is motivated by an article in the November 5, 2016 New Scientist's Analysis section titled "Make America whole again:  how the US can heal its political rift."  This article reviews proven approaches for getting groups that differ, sometimes radically, in their beliefs or political positions to work together productively and achieve useful objectives.  At one time these approaches would have worked in the United States.  But they require that the different parties want to work together.  They also require people to have open minds and be willing to think.

Unfortunately, in the United States only one party is willing to clap.  The second party is Trump's Party, called that here because this person is no Republican, although he did win the Republican primary.  Trump not only has no desire to work with the Democratic Party; he has little interest in working within his own party.  He spoke using fear, bigotry, and misogyny, and used the first person "I," not "we."  It is the talk of a potential dictator.  It is extremely depressing to see so many people attracted to him.  Apparently, these people are long on fear and bigotry, and short on thinking.  Correction:  they do not think.  Consequently, there is no basis for reasoned deliberation.

The New Scientist article notes that there is evidence that genetics may play a role in determining which party we side with.  Unfortunately, as John Hibbing of the University of Nebraska-Lincoln notes, this makes it difficult to change people's opinions.  Hibbing argues that conservatives are more "threat-sensitive":  threatening images or sounds elicit a stronger physiological response from them than from liberals.

Another researcher, neuroscientist Read Montague, has also found a link between a person's politics and the character of their emotional responses.  He put research participants into a brain scanner and measured their responses to a series of images chosen to evoke disgust, from feces to dead bodies to insect-covered food.  After they emerged from the scanner, they were asked if they would like to take part in another experiment.  If they said yes, they took ten minutes to answer a political ideology survey, with questions about their feelings on gun control, abortion, premarital sex, and so on.  Montague found that the more disgusted a participant was by the images, the more politically conservative they were likely to be.  The less disgusted, the more liberal.  The correlation is so strong that a person's neural response to a single disgusting image predicts their score on the political ideology test with 95% accuracy.  This accuracy is remarkably high.

HM would like to see this experiment replicated with the following change:  anonymity would be assured, numbers would be assigned, but the survey would be administered before the brain scanning.  Ideally, the experiment would also be replicated across a representative sample of US voters.  But if this result could be replicated and found to be extremely robust, could anything be done?  Brain scanning at polls, with medication administered where indicated?  This question is raised to illustrate how intractable this problem really is.


How Donald Trump Manages to Do It

November 1, 2016

This post is inspired by an article in the October 29, 2016 edition of the New Scientist titled "Lying feels bad at first but our brains soon adapt to deceiving."  The article reported an experiment run by Tali Sharot of University College London and her team.  The experiment encouraged volunteers to lie.  They were shown jars of pennies filled to varying degrees and asked to send estimates of how many there were to partners in another room.  The partners were shown blurrier images of the jars, so they relied on the volunteers' estimates to guess the number of pennies in order to win a reward for each of them.

The volunteers were told that they would get a higher personal reward if their partner's answer was wrong, and that the more inaccurate the answer, the greater the reward would be.  They started telling lies, which were small at first but then escalated.  For example, a person who started with a lie that earned them one pound sterling might have ended up telling fibs worth eight pounds.

Brain scans showed that the first lie was associated with a burst of activity in the amygdalae, which are involved in emotional responding.  But this activity lessened as the lies progressed (Nature Neuroscience, DOI: 10.1038/nn.4426).

Donald Trump has had a long career of lying, and his lies have rewarded him well.  HM doubts that there is any activity in his amygdalae when he lies.  Trump's lies frequently contradict each other, so it is clear that he fails to remember his lies.  The question is whether he is even aware that he is lying.  When confronted with the truth, including unequivocal evidence of the truth, he still denies it.  He invents conspiracies, which he apparently believes.  At first he complained that the Republican primary was rigged.  If so, it was rigged in his favor.  Now he threatens to disavow the results of the presidential election should he not be elected.  One concludes from this that Trump lives in an alternative reality, one largely divorced from fact.  A president who is divorced from reality would be disastrous.

Unfortunately, political polls have indicated that many have chosen to join Trump in his alternative reality.  This is frightening for democracy, and the size of the Trump vote will provide a good index of how frightened we should be.

One of the many ironies of this presidential election is that Hillary Clinton is accused of lying and voters say that they do not believe her.  First of all, she is a politician.  Although the term politician has negative connotations, politicians are essential to a working democracy.  Saying that Hillary Clinton has lied is as enlightening as saying the Pope is a Catholic.  Even Honest Abe Lincoln lied.  Fact checkers have been monitoring both candidates.  Comparing Hillary Clinton to Donald Trump regarding lies is like comparing the Chicago fire (Trump) to someone burning leaves in his back yard (Clinton).


Anything But a Healthy Memory

October 31, 2016

Paul McDevit, who edits the Feedback column of the New Scientist, noted in the 13 August 2016 edition that Donald Trump is a man who doesn't lie so much as see the truth as a bad investment.  The following quotes are taken verbatim from his Twitter feed.

"I predicted the 9/11 attack on America in my book "The America We Deserve""  (29 December 2011).

“Not only are wind farms disgusting looking, but even worse they are bad for people’s health” (23 April 2012).

“An ‘extremely credible source’ has called my office and told me that @BarackObama’s birth certificate is a fraud” (6 August 2012).

"Remember, new "Environment friendly" lightbulbs can cause cancer.  Be careful—the idiots who came up with this stuff don't care."  (17 October 2012).

“If we didn’t remove incredibly powerful fire retardant asbestos & replace it with junk that doesn’t work, the World Trade Center would never have burned down.” (17 October 2012).

“The concept of global warming was created by and for the Chinese in order to make U.S. manufacturing non-competitive.”  (6 November 2012).

“How amazing, the State Health Director who verified copies of Obama’s ‘birth certificate’ died in a plane crash today.  All others lived”  (12 December 2013)

"Snowing in Texas and Louisiana, record setting freezing temperatures throughout the country and beyond.  Global warming is an expensive hoax!"  (29 January 2014)

“Healthy young child goes to doctor, gets pumped with massive shot of many vaccines, doesn’t feel good and changes—AUTISM.  Many such cases.” (28 March 2014).

“The U.S. cannot allow EBOLA infected people back.  People that go to far away places to help out are great—but must suffer the consequences!”  (2 August 2014).

“I am being proven right about massive vaccinations—the doctors lied.  Save our children and their future.”  (3 September 2014).

“Ebola is much easier to transmit than the CDC government representatives are admitting.  Spreading all over Africa—and fast.  Stop flights”  (2 October 2014).

“We must suspend immigration from regions linked with terrorism until a proven vetting method is in place.”  (26 June 2016)
McDevit concludes that Feedback can at least agree with one bombastic pronouncement from the ornery demagogue:  “The global warming we should be worried about is the global warming caused by NUCLEAR WEAPONS in the hands of crazy or incompetent leaders!”

Understanding Anxiety and How to Control It

October 27, 2016

This post is based largely on an article by Linda Geddes in the Feature section of the 8 October 2016 issue of the “New Scientist” titled “Why we worry:  Understanding anxiety and how to control it”.  We worry because we have brains that evolved to protect us from danger.  The prefrontal and anterior cingulate cortices amplify negative information and make us pay attention to it.  Emotional memories and our learned reactions to them are stored in the amygdala.  When active, the amygdala triggers the release of hormones responsible for the fight-or-flight response.

In 1980 the American Psychological Association estimated that between 2% and 4% of people in the US had an anxiety disorder.  Of course, that was before wired technology and smartphones.  Today, some studies suggest the figure is more like 18% in the US and 14% in Europe.

It is normal to be anxious when confronted by threats.  It is the frequency and severity of anxiety that make it maladaptive.  Moreover, people can be anxious about specific events.  The most common type of anxiety disorder is social anxiety disorder, where you might believe that blushing will lead to people laughing at or shunning you.  This disorder is a persistent and overwhelming fear before, during, and after social events.

If you have panic disorder you might think you are having a heart attack when your heart starts to race.  Then the physical symptoms of anxiety (a pounding heart, difficulty breathing, feeling dizzy or flushed) come on in a rush.  From time to time everyone can experience such panic attacks, but in panic disorder the attacks are regular and become a source of anxiety themselves.

Generalized anxiety disorder is characterized by worrying about a range of different events or activities for at least six months.  Should you have this condition, the belief driving your anxiety may be that worrying itself somehow keeps you safe, or that you have responsibilities that you must meet at all costs.

According to the article, cognitive behavioral therapy (CBT) is regarded as the gold standard treatment; it addresses the maladaptive beliefs that drive your anxiety.  Once those beliefs have been identified, CBT helps you address them.  Although there is a shortage of therapists, this shortage has spurred the development of online delivery of CBT.  Try searching for online CBT.

Frankly, HM would recommend Mindfulness-Based Cognitive Therapy.  HM would also say you should consider just trying meditation and mindfulness.  This should not be surprising given all the HM posts on mindfulness and meditation.  Mindfulness meditation should serve as a preventive in the first place.  And it is never too late to try to regain control of your mind and emotions via mindfulness and meditation.

Physical exercise is another remedy for anxiety.  It triggers the release of mood-boosting endorphins, and forces you to concentrate on something other than your own thoughts.

Try medications only as a last resort and only under the supervision of a physician.


Tired All the Time?

October 25, 2016

“Tired All the Time” is the title of a feature piece by Emma Young in the 15 October 2015 New Scientist.  The subtitle of the piece is “Why fatigue isn’t just about sleep.”  Perhaps the most obvious answer is that life is more exhausting than it has ever been.  There are the many competing demands of work and family, together with the ever-present smartphone notifications.  Today’s omnipresent technology is a likely reason that we feel as if we’re running on empty.  There is a book titled “Exhaustion:  A History” written by Anna Katharina Schaffner, a historian at the University of Kent in Canterbury, UK.  She has documented that “people through the ages have consistently complained of being worn out, and harked back to the relative calm of simpler times.”  Throughout the centuries fatigue has been blamed on the alignment of the planets, a lack of godliness, and even an unconscious desire to die.  Schaffner says that “Freud argued that a very strong part of ourselves longs for a state of permanent physical and mental rest.”

In the 19th century the American physician George M. Beard claimed that neurasthenia was caused by exhaustion of the nervous system and was responsible for physical and mental fatigue as well as irritability, hopelessness, bad teeth, cold feet, and dry hair.  Beard blamed neurasthenia on the advent of steam power and newfangled inventions such as the telegraph.  Schaffner says that “Beard feared that the modern subject was unable to cope with such chronic sensory overload.”

A lack of sleep is another apparent cause of fatigue.  Researchers are able to distinguish between the need for sleep and fatigue, considering them closely related but subtly different.  The sleep latency test allows this subtle distinction to be made.  It is widely used in sleep clinics and is based on the idea that if you lie down somewhere quiet during the day and fall asleep within a few minutes, you are either lacking sleep or potentially suffering from a sleep disorder.  If you don’t drop off within 15 minutes or so, yet still feel tired, fatigue might be the problem.
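The sleep-latency idea above is essentially a small decision rule, which can be sketched in a few lines of code.  The thresholds here (5 and 15 minutes) are illustrative values taken from the article’s description, not clinical cutoffs, and the function name is HM’s own:

```python
def interpret_sleep_latency(minutes_to_fall_asleep, feels_tired):
    """Illustrative decision rule from the sleep-latency test described above.

    Thresholds are rough values from the article, not clinical cutoffs.
    """
    if minutes_to_fall_asleep <= 5:
        # Falling asleep almost immediately during the day
        return "lacking sleep or possible sleep disorder"
    if minutes_to_fall_asleep >= 15 and feels_tired:
        # Lying awake yet still feeling tired
        return "fatigue might be the problem"
    return "no clear indication from this test alone"

print(interpret_sleep_latency(3, True))
print(interpret_sleep_latency(20, True))
```

Of course, a real sleep clinic uses repeated nap trials and polysomnography; this sketch only captures the logic of the distinction.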

Mary Harrington is one of the researchers looking for a telltale biological signal.  One possibility is that daytime fatigue stems from a problem with the circadian clock, which regulates periods of mental alertness through the day and night.  This regulation is done by the brain’s suprachiasmatic nucleus (SCN), which coordinates hormones and brain activity to ensure that we feel generally alert during the day.  Normally, the SCN orchestrates a peak in alertness at the start of the day, a dip in the early afternoon, and a shift to sleepiness in the evening.  The amount of sleep you get at night has little impact on this cycle.  How alert you feel depends on the quality of the hormonal and electrical output signals from the SCN.  The SCN uses the amount of light hitting the retina to set its clock, so that it keeps in line with the solar day.  Too little light in the mornings, or too much at night, can disrupt SCN signals, and either can lead to a lethargic day.  Harrington says, “I think circadian rhythm disruption is quite common in our society and is getting worse with increased use of light at night.”  She says that if you spend the day feeling as if you have never quite woken up properly, but are not sleepy at bedtime, a poorly calibrated SCN might be to blame.  She recommends spending at least 20 minutes outside every morning and turning off screens by 10 pm to avoid tricking the SCN into staying in daytime mode.

Another way to reset the SCN is through exercise.  Studies have linked exercise to reduced fatigue, and Harrington says that exercise can make a big difference.  People who start exercising regularly often report sleeping better, even though some studies show that they don’t actually sleep any longer.  Quality of sleep appears to be more important than quantity.

Reducing fat levels can also be helpful.  Body fat not only takes more energy to carry around, but also releases leptin, a hormone that signals to the brain that the body has adequate energy stores.  People who carry excess fat also show higher levels of inflammation.  Body fat stores large amounts of cytokines, which are released into the bloodstream.  In addition to stimulating the immune system, cytokines also make you feel drained of energy.

Even if you are not overweight, inflammation could still be running you down.  A sedentary lifestyle, regular stress, and poor diet have all been linked to chronic low-level inflammation.  There is also preliminary evidence that disruption of circadian rhythms can increase inflammation.

Low dopamine is also implicated in depression, as it reduces the availability of serotonin.  Since the vast majority of people with major depression report severe fatigue, it is not surprising that depression is also a common factor in fatigue.

Harrington’s advice is not to let fatigue stop you from doing something you enjoy.  Force yourself to keep at it, because a potent reward could trigger the release of dopamine in brain areas linked to motivation and alertness.  Or do something stressful:  the release of adrenaline could help you overcome lethargy.  Ideally, combine stress and enjoyment.

Evolution Evolves: Beyond the Selfish Gene

October 23, 2016

The title of this post is identical to the title of a short piece written by Kevin Laland and published in the 24 September 2016 issue of the “New Scientist.”  The cover of the issue notes that the theory of life needs an update.  The changes in the theory of evolution have been monumental.  In HM’s humble opinion, they are comparable to the changes between Newton and Einstein in physics.  Laland has provided a precise summary.

Gone is the radical notion of the selfish gene, which argues that the goal of genes is to propagate themselves, and that we are merely vehicles for that propagation.  Gone also is the nature vs. nurture issue.  Genes interact with the world.  They provide inputs but, except perhaps on some exceptionally rare occasions, they are not deterministic.

Natural selection is not solely in charge as the way that an organism develops can influence the direction and rate of its own evolution and its fit to the environment.

Inheritance goes beyond genes and includes epigenetic, ecological, behavioral, and cultural inheritance.  Similar to, but different from, Lamarckian transmission, acquired characteristics can be passed to offspring and play diverse roles in evolution.

Phenotypic variation is not random.  Individuals develop in response to local conditions such that novel features they possess are typically well suited to their environment.

Evolution is much more rapid than previously thought.  Developmental processes allow individuals to respond to environmental changes and mutations with coordinated changes in suites of traits.  The new view is organism-centered, with broader conceptions of evolutionary processes.  Individuals adjust to their environment as they develop and modify selection processes.  Additional phenomena explain macroevolutionary changes by increasing evolvability, the ability to generate adaptive diversity.  They include plasticity and niche construction.


Wistful Thinking: Why We Are Wired to Dwell on the Past

October 2, 2016

The title of this post is identical to the title of a piece by Teal Burrell in the 24 September 2016 issue of the “New Scientist.”  The article is about nostalgia.  Most of us experience it at least once a week, according to research by Tim Wildschut and his colleagues at the University of Southampton, UK.  Nostalgia is not the cause of loneliness.  Rather it is the antidote to loneliness.  It springs up when we are feeling low and, in general, boosts well-being.  Reflecting on nostalgic events we have experienced forges bonds with other people, and enhances positive feelings and self-esteem, according to Wildschut and his colleagues.

When Clay Routledge of North Dakota State University evoked “personal nostalgia” in volunteers by having them listen to songs that had particular meaning to them, the emotion increased perceptions of purpose in life.  When volunteers were asked questions about the point of it all, nostalgia ramped up.  Routledge says, “When people feel uncertain or uncomfortable or unsure, they might use their memories as a stabilizing force.”

One notion is that nostalgia gives us a sense of continuity in life.  Although many things in our lives can change—jobs, where we live, relationships—nostalgia reminds us that we are the same person at our seventh birthday party as on our wedding day and at our retirement celebration.  Krystine Batcho of Le Moyne College says, “It is the glue that keeps us together, gives us continuity, and we need that, ever more so, in times of change.”

Sociologist Fred Davis compared being nostalgic to applying for a bank loan.  Looking back at our past is like checking our credit history.  Other researchers have found that reflecting on nostalgic memories boosts optimism and makes people more inspired to pursue their goals.

Julia Shaw who studies the fallibility of memory at London South Bank University says that nostalgia is a by-product of how we remember.  Memories are inaccurate:  we filter them to focus on the positive.  Each time we reactivate the memory, we make it susceptible to alteration.  Whenever we summon a memory, we might lose some nuances and add misinformation.

Nostalgic memory is about the emotion, not what really happened.  Specific details are either not accurate at all or we confabulate them.  We might not remember  the precise details, but we remember the emotions surrounding the event.

Shaw says that this bias towards positive emotion is at the heart of theories about why we feel nostalgia.  Nostalgic memories tend to be of the best days.  If we fixated on the negative instead, as depressed people are prone to do, it would leave us, from an evolutionary perspective, in a worse state in terms of adapting and surviving.

When a group shares a vision of the past (collective nostalgia), it promotes a sense of belonging and strengthens group bonds, which may have had survival benefits in early tribal societies.  But that cohesion comes at the cost of driving discrimination towards outsiders.

Nostalgia can lead to belief in a carefree past that “never really existed.”  Nativist political campaigns in the UK, France, and the US have all harkened back to a fabled golden time, as epitomized by Donald Trump’s “Make America Great Again” slogan, but those “good old days” had worse standards of living, higher infant mortality rates, lower life expectancies, and plenty of other troubles.  Holding up the ideal of a more homogeneous past also makes it easy to scapegoat those who weren’t part of it.  So nostalgia can be used to promote disinformation.

More on the Universal Basic Income (UBI)

September 3, 2016

A previous post dealt with the topic of a Universal Basic Income (enter “Universal Basic Income” into the healthy memory search block).  Articles in the June 20, 2016 New Yorker by James Surowiecki and in the Features section of the June 25, 2016 New Scientist by Hal Hodson, titled “What happens if we pay everyone just to live?”, provide the motivation for this post.  Surowiecki is the regular New Yorker correspondent for economics, business, and finance.  He has also written a book that Healthymemory would highly recommend, “The Wisdom of Crowds.”  His article in the New Yorker is titled “Free Money.”

Both articles describe an unusual experiment in the Canadian province of Manitoba in the mid-nineteen-seventies.  The town of Dauphin sent checks to thousands of residents every month to guarantee that all residents received a basic income.  The title of the project was Mincome.  The goal of the project was to see what happened.  Did people stop working?  Did poor people spend foolishly and stay in poverty?  A Conservative government ended the project in 1979 and buried Mincome.

Many years later an economist at the University of Manitoba, Evelyn Forget, dug up the numbers on the project.  She found that life in Dauphin improved markedly.  More teenagers stayed in school.  Hospitalization rates fell.  Work rates had barely dropped at all.  The program worked about as well as anyone could have hoped.  The earlier healthy memory blog post on this topic found that similar results were found for 20 villages in India.

The Hodson article notes that the UBI has a long history.  Thomas Paine, a US Founding Father, believed that natural resources were a common heritage and that landowners sitting on them should be taxed and the revenue redistributed.  The idea of a UBI returned to the fore in the sixties and is now popular again among economists and policy folks.  According to Hodson, the idea has been gaining adherents across the political spectrum.  In the UK, proponents include the left-wing Green party and a right-wing think tank, the Adam Smith Institute.  In Canada, testing the approach forms part of the policy platform of the Liberal Party, which was elected to power last year.  There are many versions of the idea, but one would provide every adult citizen in the US a stipend, say $10K, with children receiving smaller amounts.  This would increase the willingness to take risks in jobs and to invest in education.  There were small-scale experiments with basic income guarantees in the seventies, and they showed that young people with a basic income were more likely to stay in school.  In New Jersey the chances of students graduating from high school increased 25%.  The fear that a UBI produces lazy, unmotivated workers does not appear to be warranted.  The examples of the many direct-cash-grant programs in the developing world suggest that, as Columbia economist Chris Blattman puts it, “the poor do not waste grants.”

In Alaska an annual dividend from state oil revenues is paid to citizens each year.  This amounted to $2012 per person in 2015.  Economist Scott Goldsmith of the University of Alaska points out that the state is the only one in the US in which the income of the poorest 20% grew faster than that of the top 20% between the 1980s and 2000.

Now experiments are afoot to test such effects more exactingly.  As many as 10,000 Finns will get a no-strings-attached monthly income for two years.  The sum is designed to guarantee subsistence, covering housing, food, and services like water and electricity.  The point is to test whether a basic income gets more people working.  The government is interested in removing disincentives to joining the labor force.  The ideal is to encourage people to enter the labor market on their own terms.

A study of 1,000 children by Kimberly Noble of Columbia University found a strong positive correlation between family income and brain development.  One theory is that families with a secure income can focus extra resources on their children.  “But with purely correlational data we can’t say which way the arrow is pointing,” says Noble.  To find out, she is running an experiment in which 1,000 low-income mothers across the US will receive a basic income for three years.  One group will receive a nominal $20 a month, the other $333.  Noble’s focus is on brain development, not economics.  But a pilot study in New York, in which money was handed out on trackable prepaid debit cards, found that of 1,100 transactions, most of the money went on groceries.  Just three happened at a liquor store.

A basic income would be costly.  Depending on how the program was structured, it would likely cost at least twelve to thirteen percent of GDP.  Of course, GDP is another problem.  There have been many previous healthy memory blog posts, particularly around Labor Day, arguing that the GDP is the wrong measure of economic success.  An economy that requires constant growth in GDP will eventually destroy itself.  There are better metrics of the health of the economy.
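The twelve-to-thirteen-percent figure is easy to sanity-check with back-of-envelope arithmetic.  The inputs below are rough assumptions for illustration (roughly 250 million adult citizens, a $10K stipend as mentioned above, and a 2016-era US GDP of about $18.6 trillion), not official data:

```python
# Back-of-envelope check of the "twelve to thirteen percent of GDP" figure.
# All inputs are rough, illustrative assumptions (circa 2016), not official data.
us_adults = 250e6            # assumed number of adult US citizens
stipend_per_adult = 10_000   # the $10K stipend mentioned above
us_gdp = 18.6e12             # assumed US GDP in dollars, ~2016

annual_cost = us_adults * stipend_per_adult   # total yearly outlay
share_of_gdp = annual_cost / us_gdp           # fraction of GDP

print(f"Annual cost: ${annual_cost / 1e12:.1f} trillion")
print(f"Share of GDP: {share_of_gdp:.1%}")
```

On these assumed figures the cost comes out around 13% of GDP, consistent with the estimate in the articles; note that the figure ignores children’s smaller stipends and any offsetting savings from programs a UBI might replace.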

Surowiecki concludes that at the moment the prospects of a UBI do not seem favorable, but the most popular social-welfare programs in the US also seemed utopian at first.  Healthy Memory would argue that, given increasing job insecurity along with the need for increased education throughout the lifespan, a UBI is all but guaranteed sometime in the future.


Why When Matters are Objectively Good Do We Feel So Bad? Part One

August 19, 2016

By any objective standard, matters are quite good in the United States.  Just eight years ago, the world was on the verge of an economic collapse.  That collapse did not materialize, and today unemployment is low and the economy in the United States is among the best in the world.  So why are people saying that this country is on the wrong track?  Why are some people willing to vote for an emotionally unstable individual with none of the skills for the job of President of the United States?  There are a number of reasons for this, but the current post will focus on an article in the Insight section of the 6 August 2016 issue of the New Scientist titled “July was bad news but I’m fine—so why do I feel so terrible?”  The author notes that July brought an unusual dump of bad headlines, including the televised deaths of Philando Castile and Alton Sterling, police being killed in Dallas and Baton Rouge, terror attacks in Istanbul, Baghdad, Nice, and Saint-Étienne-du-Rouvray, plus other acts of violence in Germany and Japan.

Peter Ayton, who studies decision-making at City University in London, says that we should be wary of the idea that there’s something in the water.  “This is an attempt at induction: grouping events on the idea that some force or influence may be engineering the shape of the days.”  Even if news stories are random, statistically we should still expect to see runs of upsetting headlines.

Elaine Fox of the University of Oxford notes that we are predisposed to focus on bad stuff.  “Threat information activates the fear system, while positive news activates the reward system.  The fear system is stronger, and works to shut down the rational part of our brain.”  Once we are in a fearful state, we are conditioned to seek out more bad news.

Fox continues, “The sense of immediacy provided by 24-hour rolling news means the brain is saying, ‘this is a real threat to me.’”  This explains why we feel so personally affected, even though the chances of being caught up in a shooting or terrorist attack are vanishingly small.  The vividness of images may also skew our sense of risk.  In October 2014, after several months of disturbing TV reports from West Africa, a Gallup poll found that 22% of people in the US were worried about contracting Ebola, despite only six people in the country being infected and none picking it up on home soil.

Ayton notes that we underestimate our ability to adapt to huge changes.  A 1978 study showed that after two years, people paralyzed in accidents and lottery winners showed little change in overall happiness, instead habituating to their new state.  This finding has been replicated many times.

Donald Trump is Bending Reality to Get Into the American Psyche

July 18, 2016

The title of this post is identical to the title of an article in the Comment section of the July 16-22, 2016 issue of the New Scientist.  The article asks, “How is it possible that a self-absorbed, egoistical billionaire who criticizes Muslims, Mexicans and women could win more primary votes than any Republican candidate in history?”

The answer is that reality does not matter to Trump, who sees himself as more powerful than the facts, nor does it matter to those attracted to his claims.  Yale philosopher Jason Stanley  says that figures such as Trump ruthlessly prey on public fears to reconstruct reality to pander to them.

Psychologist Bryant Welch notes that many people feel beleaguered:  trying to keep pace with change places ever greater demands on the brain, and this combines with worries about immigration, the economy, unemployment, terrorism, climate change, and security.  Anxiety makes crowds turn to a powerful commander.  Unfortunately, the more this happens, the weaker and less capable people become.  Welch makes the comparison to a heroin addict craving larger and larger doses to get the same high.  Welch says, “People are mainlining the Trump drug, a cocktail of absolute certainty, strong opinion, and talk of control.”  Trump demonizes his opponents, saying that they are not just wrong, but idiots.  This demonization triggers a primal response, both calming fears and awakening tribal instincts.

Being unhampered by facts and expert evidence, Trump promises:  “don’t worry about climate change, it’s not happening; don’t worry about terrorism, we can stop it with force; don’t worry about jobs, we can build a wall to protect yours; don’t fret about the economy, we can just rip up free-trade deals.”  These versions of reality are mentally more comfortable than dealing with uncertainty and anxiety.  Trump does not bother with persuading; rather, he manipulates fear.

The article concludes as follows:  “After the fireworks, the big question will be:  will fear, insults, and hate win the White House?”

Previous healthymemory blog posts have used Kahneman’s two-process theory of cognition, in which System 1 is fast and emotional, while System 2 is slow, methodical, and requires mental effort.  The vernacular term for System 2 is thinking.  For democracies to survive, thinking is essential.


How Journalism Shapes Public Discourse

June 26, 2016

This post is motivated by an article by Lisa Grossman in the Features section of the June 18, 2016 issue of the New Scientist.  The topic is the concern among whites that in just a few decades most people in the US won’t be white.  The article reports research done by Jennifer Richeson.  She is addressing the increasingly prevalent media narrative in the US that, because of rapidly changing racial demographics, the country will become a so-called majority-minority country.  If all members of self-identified racial and ethnic groups (Asian Americans, black Americans, Latinos, Native Americans, multi-ethnic individuals, and so on) are added together, somewhere around 2045 those groups will make up 50.1% of the population, with white people in the “minority.”  Richeson wanted to know how people are responding to this information.

So she asked white Americans to read about the changing demographics that point to this so-called majority-minority distinction.  Control groups of white Americans read information about other aspects of demography.  Afterwards the first group expressed more negative attitudes toward a variety of racial groups:  blacks, Latinos, Asian Americans.  She asked questions like “How much do you like members of these groups?” and also found the effect on measures of unconscious racial attitudes.  It is a robust effect.  Moreover, when whites read about these racial shifts, they were also more likely to endorse politically conservative policies that were not race related, such as drilling for fossil fuels in the Alaska wildlife refuge.

It is important to understand that this response is not unique to whites.  The same type of experiment was done with black Americans, but this time it was tailored to highlight the growth and perceived threat of the Latino population.  The same basic result was obtained, including a general shift to conservatism.  So Richeson argues that the issue is not racism, but rather the threat of losing status.  This threat is psychologically unsettling, and one way to cope with it is by becoming more conservative.

In follow-on research, Richeson did studies reminding whites that even if they were in a numerical minority, they would still have greater wealth, better jobs, and better education, and so would still be doing well in the status hierarchy, regardless of changes in the US racial distribution.  This reduced white people’s perceived threat about what is going to happen to them, and they then showed no difference in their expression of racial bias or conservatism from participants in the control condition.

At this point Healthy Memory (HM)  will ask the question as to why this issue was raised in the first place.  Is this some conspiracy by the conservative press to elicit racial disharmony and enhance conservative attitudes?  HM does not think so.  HM thinks that the motivation of the press is to increase readers, and contentious issues such as this increases readers.

Currently in the US there is the phenomenon of Donald Trump.  Trump has earned many millions of dollars in free press coverage because of his outlandish statements and insults.  Moreover, many of his statements are contradictory, yet he thrives.

There is an explanation for this phenomenon, but first a quick overview of Kahneman’s two-process theory is needed.  The fast processing that we normally do, and that allows us to respond so quickly, is called System 1.  System 1 is named Intuition.  It is very fast, employs parallel processing, and appears to be automatic and effortless.  It is so fast that operations are executed, for the most part, outside conscious awareness.  Emotions and feelings are also part of System 1.  System 2 is named Reasoning.  It is controlled processing that is slow, serial, and effortful.  It is also flexible.  This is what we commonly think of as conscious thought.  One of the roles of System 2 is to monitor System 1 for processing errors, but System 2 is slow and System 1 is fast, so errors do slip through.  (To learn more, enter “Kahneman” into the healthy memory blog search block.)

Our default mode is System 1.  System 2 requires thinking and mental effort.  Trump supporters do not do much System 2 processing, thinking, so little, if any, of what Trump says is evaluated.  His statements resonate with their biases so they become strong supporters.

Unfortunately for democracies to thrive, System 2 processing, thinking, is required.  The upcoming election will indicate whether there is sufficient System 2 processing for our democracy to survive and thrive.


Physics Killed Free Will and Time’s Flow. We Need them Back

June 19, 2016

The title of this post is identical to the title of an important article by physicist Nicolas Gisin in the May 20, 2016 edition of the “New Scientist.”  Descartes stated, “I think, therefore I am.”  Most humans would agree with this statement.  After all, are we active agents free to influence our thoughts and decisions, or are we just passive laundry machines through which thoughts happen to pass?

Gisin notes that the ability to ask the question seems to require the first interpretation, yet modern science, in particular modern physics, almost unanimously plumps for the second.  According to modern physics, in a deterministic universe, where one thing leads inevitably to the next, any conception we have of free will is an illusion.

Gisin does not buy this.  He thinks that we are missing something fundamental in our formulation of science.  “And the solution of the problem of free will is linked to another glaring deficiency of today’s physics—its insistence that time as we know it does not exist.”

Jules Lequyer, a French philosopher of the 19th century wrote, “Without free will, the certainty  of scientific truths would become illusory.”  We need free will to decide which arguments we find convincing, and which we dismiss, which is the essence of doing science.

Gisin wrote, “What irony, then, that the search for scientific truth seemed to kill free will.  That started with Newton and his universal law of gravitation.  Derived from observations of the solar system bodies, it speaks of a cosmos that operates like clockwork and can be described by deterministic theories.  Everything that happens today was set in motion yesterday, and indeed was determined in the initial conditions of the big bang; nothing truly new ever happens.”

Gisin further writes, “Things became even more inscrutable with Einstein’s relativity, which showed that there was no unique definition of simultaneous events.  To square that with a deterministic universe, a picture known as the “block universe” emerged.  Here we dispense not just with free will, but also with a flowing time.  Past, present, and future are all frozen in one big icy block.  The present in which we are free to think and be—in which we exercise free will—is just as illusory as free will itself.”

And, believe it or not, philosophers of science bend over backwards to explain why we think we have free will.  They argue that we are programmed to always make choices that correspond to a predetermined necessary future.  So the feeling that our choices are free is illusory.

This presumed reasoning is obviously nonsense and it is depressing to realize that so many intelligent people buy it.

Gisin is a quantum physicist, and he argues that real numbers are not real at all.  He notes that most real numbers are never-ending strings of digits that can contain an infinite amount of information.  They could encode the answers to all possible questions that can be formulated in any human language, but a finite volume of space-time can only hold a finite amount of information.  So the position of a particle, or the value of any field or quantum state in a finite volume, cannot be a real number.  Real numbers are non-physical monsters.
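
Gisin’s argument can be sketched in symbols (a paraphrase of the general idea, not his exact formulation).  A real number in the unit interval is an infinite string of independent bits, while physical bounds such as the Bekenstein bound cap the information a finite region can hold:

```latex
% A generic real number carries an unbounded number of independent bits:
x \;=\; \sum_{k=1}^{\infty} b_k \, 2^{-k}, \qquad b_k \in \{0, 1\},
% whereas a region of finite radius R containing energy E can store at
% most a finite number of bits (the Bekenstein bound):
I \;\le\; \frac{2 \pi R E}{\hbar c \ln 2}.
% So no finite physical system can hold the full information content of
% a generic real number.
```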

Gisin notes that free will chimes with the dominant “Copenhagen” interpretation of quantum theory, made popular by Werner Heisenberg.  Making a measurement “collapses” the wave function describing a quantum system into one of a number of pre-ordained states.  Quantum theory is a random, non-deterministic theory, but it creates a determined world—and seems in no way incompatible with a common sense conception of free will.  So, God can play dice with the universe and win.  Quantum theory is frequently used in practical scientific and engineering problems.

Gisin also frees up the flow of time.  He notes that there is a time before a non-necessary event happens and there is a time after it happens, and these times are different.  This happening of a non-necessary event, like the result of a quantum measurement, is a true creation that can’t be captured by a mere evolution parameter.  He calls the sort of time this requires “creative time.”

Gisin concludes by stating that “creative time” is extraordinarily poorly understood by today’s science, but that could change with future physics, such as quantum theories of gravity that might replace Einstein’s theories that spawned the block universe.  Time passes, and free will exists—any other way, science makes no sense.

Gisin is not the only physicist who advocates free will.  Roger Penrose (who healthymemory (HM) believes was on the dissertation committee of Stephen Hawking) is a distinguished physicist, mathematician, and philosopher who extols consciousness and the role of quantum effects in consciousness and free will.  Penrose’s book, “The Emperor’s New Mind,” goes into considerable detail on these topics.  He formulates the notion of Correct Quantum Gravity (CQG).  Although this book was written for the general public, parts involve heavy sledding.  Nevertheless, it is good to know that an extremely intelligent and knowledgeable scholar is on the same or similar train of thought.  Unfortunately, Roger is much further down these tracks than HM is.  Should HM ever manage to make it further down these tracks, he will get back to you.

Outside of physics there have been many HM blog posts on consciousness and free will.  Even though we all have intimate experience with our own consciousness, there are still many who contend that it is epiphenomenal.  HM argues that consciousness is an emergent phenomenon with adaptive value.  It is essential to effective interactions with the environment and for choosing courses of action.  Neuroscientists have stated that all mammals, some invertebrates such as the octopus, and many birds are conscious, and that their consciousness has adaptive value.

© Douglas Griffith and, 2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Empathy vs. Compassion

May 25, 2016

This post is based on an article by Emma Young titled “How sharing other people’s feelings can make you sick,” in the May 14, 2016 issue of the New Scientist.  As this article notes, empathy is undeniably a good thing.  The primatologist Frans de Waal has suggested that being affected by another’s emotional state was the earliest step in our evolution as a collaborative species.

The distinction between what we and others feel isn’t terribly clear to our brains.  Tania Singer and her colleagues demonstrated this in 2004 when they put 16 romantic couples into an MRI scanner.  When they gave these volunteers a painful electric shock, this elicited activity in brain regions known to respond to physical pain and also in regions tuned to emotional pain.  However, when volunteers saw their loved partners get a shock, no activity registered in their physical pain centers, but their emotion regions lit up like fireworks.  Subsequently many other studies have confirmed that this “empathy for pain” network exists, and that it does not distinguish whether the pain we’re observing is physical or psychological.

Moreover, we don’t just catch pain from those we are intimate with.  People in the caregiving professions, such as hospice staff, nurses, psychotherapists, and pediatricians, often see and feel the stress and pain of others, which leads to a kind of empathy burnout.  This empathy burnout has been given names such as “secondary traumatic stress” and “vicarious traumatization.”  Symptoms include a lowered ability to feel empathy and sympathy, increased anger and anxiety, and more absenteeism.  Studies have linked these symptoms with an indifferent attitude to patients, depersonalization, and poorer care.  Apparently anyone can catch stress any time they understand someone else’s pain and share in it.  This activates the empathy-for-pain network.  Singer’s research has shown that for some people the physical effects of emotional contagion apply even when they observe a person they don’t know suffering distress.  Experiments have shown that people who watched a 15-minute newscast reported increased anxiety afterwards, with their anxiety decreasing only after an extended relaxation exercise.

Other research has shown that empathy can be regulated, just as emotions can be regulated.  Christian Keysers and his colleagues have looked at how people diagnosed with psychopathy, who are commonly thought to lack all capacity for empathy, react when they see images of people in pain.  Initially the team presented images without any instructions as to what to feel.  Predictably, the psychopaths’ brains showed less activity in areas associated with empathy for sensations, and in the insula, than the brains of healthy people.  When Keysers asked these psychopaths to consciously empathize, something very different happened: their brain responses were identical to those of healthy people.

Research has shown that the training Buddhist monks undergo gives them a heightened ability to manipulate their neural circuitry for empathy.  Richard Davidson asked these monks to engage in a form of compassion meditation known as loving kindness meditation, in which one is encouraged to gradually extend warmth and care from oneself to others.  Davidson found that this process changed the firing of the monks’ neural circuitry.  It suppressed activity in the anterior insula and in the amygdala, regions involved in threat detection but recruited during empathic responses.  But when one monk was asked to empathize with suffering instead of engaging in compassion, his empathy-for-pain network lit up, and almost immediately he begged the proctor to stop the experiment, calling the feeling unbearable.  The subtle distinction is that compassion is feeling for, and not with, the other.

Research is being done on training people in this distinction between compassion and empathy.  The initial results are promising.  Let us hope that such training will be readily available to caretakers and others in need of it.

Web of Lies

May 1, 2016

“Web of lies:  Is the internet making a world without truth?” is an article by Chris Baraniuk in the Feb 20-26, 2016 edition of the New Scientist.  The World Economic Forum ranks massive digital misinformation as a geopolitical risk alongside terrorism.  This problem is especially pernicious because misinformation is very difficult to correct (enter “misinformation” into the healthy memory search block to see relevant posts).  Bruce Schneier of the Electronic Frontier Foundation says that we’re entering an era of unprecedented psychological manipulation.

Walter Quattrociocchi at the IMT Institute for Advanced Studies in Lucca, Italy, along with his colleagues, looked at how different types of information are spread on Facebook by different communities.  They analyzed two groups: those who shared conspiracy theories and those who shared science news articles.  They found that science stories received an initial spike of interest and were shared or “liked” frequently.  Conspiracy theories started with a low level of interest, but sometimes grew to be even more popular than the science stories overall.  Both groups tended to ignore information that challenged their views.  Confirmation bias leads to an echo chamber.  Information that does not fit with an individual’s world view does not get passed on.  On social networks, people trust their peers and use them as their primary information sources.  Quattrociocchi says, “The role of the expert is going to disappear.”
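
The echo-chamber dynamic described here can be sketched as a toy simulation.  This is my own illustration, not the study’s model: the homophily level, sharing rule, and all parameters are assumptions made for the sketch.  Agents ignore items that challenge their worldview and mostly contact like-minded agents, so an item ends up circulating almost entirely inside the community it flatters.

```python
import random

def simulate_spread(n_agents=1000, n_rounds=6, homophily=0.9, seed=1):
    """Toy model of confirmation-biased sharing on a homophilous network:
    agents hold worldview 0 or 1, ignore items that challenge their view,
    and mostly have contacts who think as they do."""
    rng = random.Random(seed)
    worldview = [rng.randint(0, 1) for _ in range(n_agents)]
    by_view = {v: [i for i in range(n_agents) if worldview[i] == v]
               for v in (0, 1)}
    item_slant = 0                      # the item fits worldview 0
    # seed the item with five agents from each camp
    exposed = set(rng.sample(by_view[0], 5) + rng.sample(by_view[1], 5))
    for _ in range(n_rounds):
        newly = set()
        for agent in exposed:
            if worldview[agent] != item_slant:
                continue                # challenges their view: not passed on
            for _ in range(3):          # show the item to three contacts
                same_side = rng.random() < homophily
                pool = by_view[item_slant if same_side else 1 - item_slant]
                newly.add(rng.choice(pool))
        exposed |= newly
    matching = sum(1 for a in exposed if worldview[a] == item_slant)
    return matching, len(exposed) - matching

matching, opposing = simulate_spread()
# far more of the like-minded community is reached than the other camp
```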

DARPA, a research agency of the U.S. military, is funding a Social Media in Strategic Communication program, which funds dozens of studies looking at everything from subtle linguistic cues in specific posts to how information flows across large networks.
DARPA has also sponsored a challenge to design bots that can sniff out misinformation deliberately planted on Twitter.

Ultimately the aim of this research is to find ways to identify misinformation and effectively counter it, reducing the ability of groups like ISIS to manipulate events.  Jonathan Russell, head of policy at the counter-terrorism think tank Quilliam in London, says, “They have managed to digitize propaganda in a way that is completely understanding of social media and how it’s used.”  Russell says that a lack of other voices also gives the impression that they are winning.  There is no other effective media coming out of Iraq and Syria.  Quilliam has attempted to counter such narratives with videos like “Not Another Brother,” which depicts a jihadist recruit in desperate circumstances.  It aims to show how easily people can be seduced by exposure to a narrow view of the world.

This research is key.  Information warfare will account for an increasingly large share of warfare relative to kinetic effects.

Panagiotis Metaxas of Wellesley College believes that we have entered a new era in which the definition of literacy needs to be updated.  “In the past to be literate you needed to know reading and writing.  Today, these two are not enough.  Information reaches us from a vast number of sources.  We need to learn what to read, as well as how.”

Wise Psychological Interventions

April 29, 2016

Wise Psychological Interventions (WPIs) were the subject of an article titled “A mind trick that can break down your brain’s barrier to success” by Dan Jones in the March 12, 2016 edition of the New Scientist.  “Mental unblocking” is at the heart of WPIs.  Entering “Wilson” into the healthy memory blog search block will take you to many examples of WPIs.  Entering “REDIRECT” into the search block will take you to many more.

Wilson is a professor of psychology at the University of Virginia, and here is an example of one of his WPIs.  His goal was to help new college students cope better with worries about their academic performance.  His solution was inspired by attribution theory, which describes how people account for events, for example, whether they blame failures and setbacks on enduring facts about themselves or on external factors.  His goal was to get students to think about the external situation, rather than about facts about themselves.  He presented the students with statistics showing that the majority of new students start with disappointing grades but do better over time.  He also showed them videos of older students talking about their improving academic performance.  The grades of students who received these presentations improved more quickly than those of students who did not receive these messages.  They were also less likely to have dropped out by the end of the second year.

The concept of growth mindsets is central to the message of the healthy memory blog.  The distinctions between fixed and growth mindsets were well articulated by Carol Dweck in her best-selling book “Mindset.”  People with fixed mindsets believe that their intelligence is fixed: they are smart, stupid, or average.  People with growth mindsets believe that it is up to them to grow and improve.  We need to continue to grow our mindsets throughout our entire lifespan.  Enter “Dweck” and “growth mindsets” to read more posts on this topic.  Dweck has found that when students were told about how the brain changes and learns, and that intelligence can be boosted, they showed increased motivation in class and better test scores as compared to a control group.

Stanford psychologist Geoffrey Cohen has developed WPIs aimed at reducing the achievement gap between white and black university students.  An effective strategy against stereotype threat is to get people to write about values that are important to them, a process called self-affirmation.  He found that even a short session improved the grades of black students relative to controls.  It closed the achievement gap by 40%.  Two years later, after a few top-up sessions, the intervention was still having a clear effect.  Cohen has applied this same approach to the achievement gap between men and women in university science courses.

New students frequently feel alienated and out of place when they arrive in a university setting.  Cohen and a colleague got first-year students to read a report summarizing a survey of older students’ experiences at a university.  The report described how they felt out of place at first, but that these feelings passed as they settled in and made new friends.  Reading this report not only improved the grades of black students, but also increased their self-reported happiness and health.  These effects persisted three years on, and they have been replicated by much larger studies.

WPIs go way beyond academic performance.  Eran Halperin of the Interdisciplinary Center in Herzliya, Israel, has been developing WPIs to reduce tensions in the Israeli-Palestinian conflict.  He has demonstrated that nurturing a growth mindset makes people on both sides more open to listening, more willing to compromise for peace, and more likely to forgive.

The Behavioural Insights Team (BIT) in the United Kingdom is a partly government-owned firm exploring the potential of WPIs.  President Obama launched the US Social and Behavioral Sciences Team to develop WPIs in the U.S.  Similar units have been established in Germany, Australia, Singapore, Finland, and the Netherlands.

So it appears that WPIs are catching on.


March 30, 2016

This post is based on an interview Shannon Fischer conducted with Laurence Sugarman  that was titled “I can tell you how to heal yourself with hypnosis,” and published in the March 12, 2016 edition of the New Scientist.  Laurence  Sugarman directs the Center for Applied Psychophysiology  and Self-regulation at the Rochester Institute of Technology.  He is a former president of the American Board of Medical Hypnosis and is on the faculty of the National Pediatric Hypnosis Training Institute.

Dr. Sugarman worked for 20 years as a solo primary care pediatrician but found that his training was inadequate for the behavioral and psychophysiological issues he encountered.  He now believes that hypnosis can take healthcare to a new level.  As readers should know, employing the mind in healthcare is a continuing theme in the healthymemory blog (enter “Cure” and “The Relaxation Revolution” in the search block of the healthy memory blog for examples).

When asked why hypnosis is not widely used, he responded, “In part, because nobody knows what it is.  We first need to be able to say, this is what hypnosis is, and this is all that it is.  Then we can say how we think it works.”  Dr. Sugarman and his colleagues “propose that hypnosis is simply a skill set for influencing people.  It involves facial expression, language, body movement, tone of voice, intensity, metaphor, understanding how people interpret and represent things.  It isn’t something that you’re in, or that you do:  hypnosis is something you use.  That means that it is not a therapy; it’s a means to a therapy.”

When asked “Where does the hypnotic trance fit?  he responded “Trance is a process of intense learning.  It happens when we change our minds in significant ways, when we become neuroplastic; we are thoughtful, we pause, change our breathing.  There is a shift in the parasympathetic  part of the autonomic  nervous system—an intensified focus of attention and narrow peripheral awareness.  Trance happens when we are traumatized and when we are in love.  There’s no such thing as “hypnotic trance” as distinct from the trance of yoga or prayer, for example.  But part of the skill set of hypnosis is recognizing and facilitating trance, because it makes whatever you’re learning more effective.”

He states that the ultimate power to change lies within each of us.  An earlier healthymemory blog post on hypnotism was titled “Self Hypnotism” because ultimately it is the individual who either is letting herself be hypnotized or doing the hypnosis.  Dr. Sugarman responded, “People can be influenced into cults and violent religious movements, be depersonalized and become the victims of abuse.  If I have poor self-esteem and self-efficacy, I may let people use hypnosis to ‘overpower’ me.  But ultimately the power to change lies with the person who, as we say, owns the trance.”

Dr. Sugarman says that hypnosis is a medium for delivering placebos.  He also says that mindfulness meditation is an example of hypnosis, in other words, one of many ways of using the same skill set.  Hypnotism provides a means for directing change.

He notes that we unknowingly use hypnosis on ourselves, and that most of our self-hypnosis is not very nice.  Most of it is: “I suck at that, I’m not a very nice person, I’m lazy, I deserve this abuse, every time I do that I am going to get a headache.”  If trance is this intense learning process, we use a lot of that plasticity to reinforce our ruts.

He goes on to say, “Clinical hypnosis is a way of helping somebody change their self hypnosis, to understand what trance-formation looks and feels like, and use both the novelty and intensity of conversation to teach them to do their own trance.”

Thinking 2.0

March 9, 2016

This post was inspired by an article in the February 26, 2016 edition of the “New Scientist” written by Michael Brooks.  The title of the article is “A new kind of logic:  How to upgrade the way we think.”  There are many healthymemory blog posts about the limitations of our cognitive processes.  First of all, our attentional capacity is quite limited and requires selection.  Our working memory capacity is around five or fewer items.  There are healthy memory blog posts on cognitive misers and cognitive spendthrifts.  Thought requires cognitive effort that we are often reluctant to spend, making us cognitive misers.  And there are limits to the amount of cognitive effort we can expend.  Cognitive effort spent unwisely can be costly.

Let me elaborate on the last statement with some personal anecdotes.  Ohio State was on the quarter system when I attended, and my initial goal was to begin college right after graduation in the summer quarter and to attend quarters consecutively so that I would graduate within three years.  Matters went fairly well until my second quarter, when I earned the only “D” in my life.  Although I did get one “A,” it was in a course for which I had already read the textbook in high school.  I recovered and continued to attend consecutive quarters, but only part time during the summer.  I was in the honors program and managed to graduate in 3.5 years with a Bachelor of Arts with Distinction in Psychology.  I tried going directly into graduate studies, but found that I had already expended my remaining cognitive capital.  So I entered the Army to give my mind a rest.

When I returned and began graduate school, I was a cognitive spendthrift who wanted to learn as much as I could in my field.  However, I found that I could not work long hours.  If I did, my brain turned to mush and I was on the verge of drooling.  So I found it profitable to end my cognitive spendthrift days and marshal my cognitive resources.  It worked, and I earned my doctorate in psychology from the University of Utah.

Michael Brooks argues that we are stuck in Thinking 1.0.   He mentions that our conventional economic models bear no resemblance to the real world.  We’ve had unpredicted financial crises because of incorrect rational economic models.  This point has been  made many times in the healthy memory blog.  Behavioral economics should address these shortcomings, but it is still in an early stage of development.

An article by John Ioannidis has convinced statisticians and epidemiologists that more than half of scientific papers reach flawed conclusions, especially in medical science, neuroscience, and psychology.
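
Ioannidis’s argument reduces to a small formula.  If R is the pre-study odds that a tested relationship is real, then a field running studies with a given statistical power and significance level α produces significant findings that are true with probability PPV = R·power / (R·power + α).  The numbers below are illustrative choices, not figures from the article:

```python
def positive_predictive_value(prior_odds, power, alpha):
    """Probability that a statistically significant finding is true,
    following Ioannidis (2005): PPV = R*power / (R*power + alpha),
    where R is the pre-study odds that the tested relationship is real."""
    return (prior_odds * power) / (prior_odds * power + alpha)

# A well-powered study of a fairly plausible hypothesis:
print(round(positive_predictive_value(0.5, 0.8, 0.05), 2))   # 0.89
# An underpowered study of a long-shot hypothesis:
print(round(positive_predictive_value(0.1, 0.2, 0.05), 2))   # 0.29
```

The second case shows how underpowered studies of unlikely hypotheses can make most published positive findings false, even with everyone using the conventional α of 0.05.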

Currently we do have big data, machine learning, neural nets, and, of course, the Jeopardy champion Watson.  Although these systems provide answers, they do not provide explanations as to how they arrived at the answers.  And there are statistical relations in which it is difficult to determine causality, that is, what causes what.

Michael Brooks argues that Thinking 2.0 is needed.  Quantum logic makes the distinction between cause and effect (one thing influencing another) and common cause (two things responding to the same cause).  The University of Pittsburgh opened the Center for Causal Discovery in 2014.

Judea Pearl, a computer scientist and philosopher at UCLA (and the father of the tragically slain journalist Daniel Pearl), says, “You simply cannot grasp causal relationships with statistical language.”  Pearl has done some outstanding mathematics and has developed software that has made intractable AI problems tractable and has provided means for distinguishing cause and effect.  Unlike neural nets, machine learning, and Watson, it provides the logic, 2.0 logic I believe, behind its conclusions or actions.
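
Pearl’s point about common causes can be illustrated with a toy simulation.  This is my own sketch, not Pearl’s software: a hidden variable Z drives both X and Y, so X and Y are strongly correlated in observational data, yet intervening to force X (what Pearl writes as do(X = x)) leaves Y untouched, because correlation came from Z, not from X causing Y.

```python
import random

rng = random.Random(0)

def observe(n=10000):
    """Observational data where a hidden Z is a common cause of X and Y."""
    data = []
    for _ in range(n):
        z = rng.gauss(0, 1)
        x = z + rng.gauss(0, 0.3)   # Z -> X
        y = z + rng.gauss(0, 0.3)   # Z -> Y (X never influences Y)
        data.append((x, y))
    return data

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

def mean_y_under_do_x(x_value, n=10000):
    """do(X = x): forcing X severs the Z -> X arrow; Y still depends
    only on Z, so its mean is unaffected by the chosen x_value."""
    return sum(rng.gauss(0, 1) + rng.gauss(0, 0.3) for _ in range(n)) / n

obs_corr = corr(observe())         # strong correlation, roughly 0.9
shifted = mean_y_under_do_x(5.0)   # mean of Y stays near 0 despite X = 5
```

Statistics alone (the correlation) cannot distinguish this situation from one where X genuinely causes Y; only the interventional question does.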

It is clear that Thinking 2.0 will require computers.  But let us hope that humans will understand and be able to develop narratives from their output.  If we just get answers from machine oracles, will we still be thinking in 2.0?

© Douglas Griffith and, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Marathon Mind: How Brain Training Could Smash World Records

February 15, 2016

The title of this post is identical to the title of a feature article by James Witts in the January 30, 2016 “New Scientist” on which this blog post is based.  There are three brain regions that are involved in regulating our tolerance to endurance exercise.  The motor cortex plans and controls movement.  Its communication with the insular cortex intensifies as the runners reach exhaustion.  The insular cortex processes signals from around the body, from muscle pain to emotions, and gives instructions to the motor cortex.  The anterior cingulate cortex is thought to govern perception of effort.  According to one theory, training this region with cognitively challenging but dull tasks can increase endurance.

The exercise physiologist Samuele Marcora asked 35 British soldiers to take a 60-minute cycling trial during which he measured physiological limiters.  Then he split the soldiers into two groups.  Both trained on an indoor bicycle three times a week for 12 weeks, but one group performed a mentally fatiguing task, picking out combinations of letters on a screen, as they pedaled.  Although the control group improved their time to exhaustion by 42%, the brain-training group improved by 126%.  The brain-training group also reported finding the test less painful.  Marcora says, “The results showed that the subjects could tolerate a harder perceived effort, so when the cognitive task was removed, the effort felt easier.”

Clearly this astounding result bears replication.  How might this brain-training method work?  Marcora thinks that the answer lies in the anterior cingulate cortex (ACC), which has been implicated in a variety of cognitive and emotional functions.  His belief is that if you systematically stress this brain region with cognitive tasks, you build up resistance.  He proposes that monotonous mental tasks lead to a build-up of adenosine, a brain chemical produced by neurons during prolonged activity (see the healthy memory blog post “Hunger, Caffeine, Cognition”).  Adenosine also accumulates when you are deprived of sleep, binding to adenosine receptors on cells in the brain and elsewhere.  Slowing down the activity of these cells makes us feel mentally fatigued.

Marcora thinks that consistently flooding the brain with adenosine by doing mundane mental tasks forces brain cells to adapt, building resistance to this fatigue-inducing chemical.  Marcora says that “Given that the ACC is likely to be intensely activated during prolonged exercise, the hypothesis is that adenosine builds up in this area, causing changes in perception of effort and self-control.  The result is that your sense of exertion goes down for the same level of actual effort.”

Marcora is not the only endurance researcher convinced that manipulating the brain can bring performance gains.  Kevin Thompson had a group of cyclists undertake a 4-kilometer time trial at personal-best pace.  Then he had them race against an on-screen avatar that they thought was going at their best pace when it was really going 2% faster.  The riders kept up, cycling faster than they ever had before.  But when the avatar was set to go 5% faster, the riders could not handle it.  Thompson interpreted this result as showing that the body has an energy reserve of 2 to 5%, and suggests that this reserve can be tapped by tricking the brain.

It is amazing what competition supplemented with research in physical and mental training has accomplished.  We don’t know what the limits are with respect to endurance.  I hope that we do not find these limits from the deaths of competitors.  Different individuals have different limits.  This research needs to proceed with care.

Technology and Poverty

January 28, 2016

The October 2, 2015 edition of the New Scientist had two interesting articles in the Comments section.  The first, by Federico Pistono, is titled “As tech threatens jobs, we must test a universal basic income.”  An earlier healthy memory blog post, “The Second Machine Age,” reviewed a book by Erik Brynjolfsson and Andrew McAfee titled “The Second Machine Age:  Work, Progress, and Prosperity in a Time of Brilliant Technologies,” which predicted that many jobs, including jobs that would be regarded as advanced, will disappear during this second machine age.  Other healthy memory blog posts reviewed books whose authors argued that humanity’s “unique” capacity for empathy would still keep people employed.  I wrote that there would not be enough jobs requiring this “unique” capacity to keep everyone employed, even if these skills could not be implemented with technology.

The comment piece by Pistono stated that it is possible that within 20 years almost half of all jobs will be lost to machines, and nobody really knows how we are going to cope with that.  Pistono writes, “One of the most interesting proposals, that doesn’t rely on the fanciful idea that the market will figure it out, is an unconditional basic income (UBI).”

A UBI would provide a monthly stipend to every citizen, regardless of income or employment status.  A key criticism of the UBI is that it would kill the incentive to work.  However, research cited by Pistono, involving a whole town in Canada and 20 villages in India, found that not only did people continue working, but they were more likely to start businesses or perform socially beneficial activities compared with controls.  Moreover, there was an increase in general well-being, and no increase in alcohol use, drug use, or gambling.

Of course, this research needs to be replicated, but it is good to know that this problem is being researched.  The poverty resulting from large scale unemployment would be devastating.

A second article in the same Comment section, by Laura Smith, is titled “Pay people a living wage and watch them get healthier.”  Paying the lowest earners less than a living wage, which occurs in both the US and the UK, leaves full-time workers unable to lift their families out of poverty.  The problem goes far beyond unpaid bills.

Poverty keeps people from resources such as healthcare and safe housing.  People in poverty experience more wear and tear from stress than the rest of us, they are sicker, and they die earlier.  Children living in poverty are more likely to be depressed and to have trouble in school.  Newborns are more likely to die in infancy.  Poor people are marginalized.  They often live outside the scope of therapeutic, vocational, social, civic, and cultural resources.  This experience of “outsiderness” reduces cognitive and emotional function.  Brain activity associated with social exclusion has been shown to parallel that of bodily pain.

Research addressing the question of whether raising people’s incomes would improve their health looked at the impact of a community-wide income rise when a casino was built on a Cherokee reservation in North Carolina.  The research compared psychiatric assessments of children before and after this event.  Children’s symptom rates began to decline.  By the fourth year out of poverty, the symptom rates could not be distinguished from those of children who had never been poor.

© Douglas Griffith, 2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith, with appropriate and specific direction to the original content.

Syndrome E

November 27, 2015

In the recent healthymemory blog post “A Single Shifting Mega-Organism,” Syndrome E (the E stands for evil) was briefly discussed.  Syndrome E was developed to describe atrocities and mass killings, such as the genocides of the Holocaust and the killings by ISIS.  The neurosurgeon Itzhak Fried describes these atrocities as examples of Syndrome E.  He defined the following seven symptoms of Syndrome E:

Compulsive repetitive violence
Obsessive beliefs
Rapid desensitisation to violence
Flat emotional state
Separation of violence from everyday activities
Obedience to an authority
Perceiving group members as virtuous

Having decided that neuroscience has come a long way since his original paper in 1997 (“Syndrome E,” The Lancet, Volume 350, No. 9094, p1845-1847), Fried organized a conference in Paris earlier this year to revisit the concept.  Highlights of this conference were published in the New Scientist, November 14-20, 2015, in a feature by Laura Spinney.

Fried’s theory starts with the assumption that people normally have an aversion to harming others.  If this is correct, the higher brain overrides this instinct in people with Syndrome E.  So how might this occur?

The lateral regions of the prefrontal cortex (PFC), among the newer parts of the brain, are sensitive to rules.  The medial region of the PFC receives information from the limbic system, a primitive part of the brain that processes emotional states and is sensitive to our innate preferences.  An experiment using brain scanning was designed to put these two parts of the brain in conflict, and both were observed to light up.  People followed the rule but still registered their personal preference, showing that activity in the lateral PFC overrode the personal preference.  The idea is that in the normal brain the higher brain overrides signals coming from the primitive brain, whereas in the pathological brain with Syndrome E, the primitive brain prevails.

Fried suggests that people experience a visceral reaction when they kill for the first time, but some become rapidly desensitized.  And the primary instinct not to harm may become more easily overcome when people are “just following orders.”  Unpublished research using brain scans has shown that coercion makes us feel less responsible for our actions.  Although coercion can cause people to take extraordinary actions (see the healthymemory blog post “Good vs. Evil”), there are individuals who are predisposed to violence who are just awaiting an opportunity.

Unfortunately, the question remains as to how to prevent people from joining such radicalized groups.  Research in this area is just beginning, and much more needs to be done (see the healthymemory blog post “Why DARPA Is Studying Stories”).  Fried being a neuroscientist, it is not surprising that he thinks we should use our growing neuroscientific knowledge to identify radicalization early, isolate those affected, and help them change.  We wish him, and hopefully many others, well in this effort.

What is not mentioned in this article is that it can be advantageous for one group to adopt Syndrome E to take from or take advantage of another group.  Consider North America.  Syndrome E was involved in vacating Native American lands for Europeans.  Moreover, up until the Civil War, blacks were enslaved, and slavery was a key component of the economy of the United States.  I sometimes ponder how North America would have been settled by Europeans had we had the moral and ethical standards of today.


We Can’t Rely On Science Alone to Make Us Better People

October 8, 2015

The title from another article in the September 26, 2015 New Scientist was chosen as the title to this blog post.  The conclusion to this article can be found in its first two sentences.  “Our sense of right and wrong is often inadequate for modern challenges.  But the combination of rationality and humanity can lead us to more effective morality.”

The immediately preceding healthymemory blog post made the point that computer technology could be used to compensate for the narrow focus of empathy.  Of course, this technology would be drawing upon both science and mathematics.

I was encouraged to learn of an organization whose aim is to optimize the good we can do by quantifying the outcomes of our actions.  The name of this organization is the Centre for Effective Altruism in Oxford, UK.  Rather than my continuing this post, it might be better for you to go to its website and explore its activities.

The Shortcomings of Empathy

October 7, 2015

Previous blog posts have included many good comments on empathy.  Perhaps one of the primary ones is that humans excel at empathy and computers are short on it.  Paul Bloom, a psychologist at Yale University, says that people who think empathic concern is an unalloyed force for good are wrong.  The problem is that empathy is a spotlight, and a very narrow one.  It illuminates the suffering of a single person rather than the fate of millions.  It is more concerned with the here and now than with the future.  Bloom goes on to say, “It’s because of empathy that we care more about, say, the plight of a little girl trapped in a well than we do about potentially billions of people suffering or dying from climate change.”  According to the article “Morality 2.0” by Dan Jones in the September 26, 2015 New Scientist, empathy’s shortcomings are compounded by the fact that we end up pointing its beam on causes that come into our field of view.  These are typically the most newsworthy moral issues rather than those where we can do the most good.

There is also a general belief that our brains are wired to be empathic, and that this accounts for our success as a species.  But, again, the problem is the narrowness of our empathy beam.  Conflict among groups, be they tribes, nations, religions, or even professional organizations, is the rule rather than the exception.  Our record is one of abuse, and even enslavement, of others who we believe “do not belong.”

The New Scientist article discusses a variety of means of prodding humans to make more meaningful moral choices.  It concludes with the following statement:  “Moral issues are complicated and hard, and they involve serious trade-offs and deliberation.  It would be better if people thought more about them.”

It strikes me that non-empathic computer technology might be of considerable assistance.  The problem of addressing the wide variety of moral needs in an efficient manner is an enormous computational task, one that is certainly beyond an individual human’s intellect, and perhaps beyond the capacity of the collective intellect of humanity.  Humans could program their empathic concerns into computers.  Computers could then compute enormous cost/benefit analyses.  Humans could then discuss and debate how resources could best be used to address these human and planetary needs.
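As a sketch of the kind of computation I have in mind, consider ranking causes by benefit per unit cost instead of by emotional salience, then funding them greedily until a budget runs out.  All of the cause names and figures below are invented purely for illustration:

```python
# Hypothetical sketch: allocate a fixed budget across causes by
# benefit-per-cost ratio rather than by newsworthiness.
# Every name and number here is invented for illustration.

def allocate(causes, budget):
    """Fund causes greedily, in order of benefit per unit cost."""
    funded = []
    for name, cost, benefit in sorted(
            causes, key=lambda c: c[2] / c[1], reverse=True):
        if cost <= budget:
            funded.append(name)
            budget -= cost
    return funded

causes = [
    # (name, cost in dollars, benefit in lives improved)
    ("trapped-girl rescue", 900_000, 1),     # vivid, but low total benefit
    ("malaria bed nets",    500_000, 120),   # unglamorous, high benefit
    ("flood early warning", 400_000, 80),
]
print(allocate(causes, 1_000_000))  # → ['malaria bed nets', 'flood early warning']
```

Real moral trade-offs are of course vastly harder to quantify than this toy greedy pass suggests; the point is only that a computer applies the same criterion to every cause, newsworthy or not.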


Controlling Pain in Our Minds

May 16, 2015

This blog post is based on an article in the New Scientist (17 Jan 2015, p. 10) by Jessica Hamzelou titled “Pain Really Can Be All in Your Mind.”  She reported research by Tor Wager at the University of Colorado Boulder that was published in PLoS Biology.  The researchers used fMRI to examine the brain activity of 33 healthy adults.  They first watched the changing activity as they applied increasing heat to the participants’ arms.  A range of brain structures lit up as the heat became painful, a familiar pattern of activity called the neurologic pain signature.

The researchers wanted to know if the participants could control the pain by thought alone.  They asked the participants to rethink their pain either as blistering heat or as a warm blanket on a cool day.  Although the participants couldn’t change the level of activity in the neurologic pain signature, they could alter the amount of pain they felt.  When they did this, a distinct set of brain structures linking the nucleus accumbens and the ventromedial prefrontal cortex became active.

Vania Apkarian of Northwestern University noted, “It’s a major finding.  For the first time, we’ve established the possibility of modulating pain through two different pathways.”  Brain scans can compare the strengths of activation of these two brain networks to work out how much of a person’s pain has a physical cause, and how much is due to their thoughts and emotions.

These findings build on prior work by Apkarian’s team, who discovered that chronic back pain seems to be associated with a pattern of brain activity not usually seen with physical pain.  The brain regions active in Apkarian’s patients are the same as those active in the participants controlling pain in Wager’s study.

It is possible that in chronic pain conditions, psychological pain might overtake physical pain as the main contributor to the overall sensation.  This might be the reason that traditional pain relievers such as opioids don’t offer much relief.

Hamzelou notes, “Wager’s study suggests that cognitive therapies and techniques such as neurofeedback—where people learn to control their brain activity by watching how it changes in real time—might offer a better approach.”

Ben Seymour, a neuroscientist at the University of Cambridge, notes, “In the next five to 10 years, we’ll see a huge change in the way clinicians deal with pain.  Rather than being based on what the patient says, we’ll be building a richer picture of the connections in the person’s brain to identify what type of pain they have.”


April 18, 2015

Our beliefs direct our lives and how we think.  The initial part of this post comes from a New Scientist (4 April 2015, 28-33) article by Graham Lawton.

Initially our beliefs are determined by default.  Children believe what they are told.  This is fortunate; otherwise a child’s development would be retarded.  So our brains are credulous.  A brain imaging study by Sam Harris illustrated how our brains respond to belief.  People were put in a brain scanner and asked whether they believed various written statements.  Statements that people believed produced little characteristic brain activity, just a few flickers in regions associated with reasoning and emotional reward.  However, disbelief produced longer and stronger activation in regions associated with deliberation and decision making.  Apparently it takes the brain longer to reach a state of disbelief.  Statements that were not believed also activated regions associated with emotion, such as pain and disgust.  These responses make sense when regarded from an evolutionary perspective.

There is also a feeling of rightness that accompanies our beliefs.  This makes evolutionary sense except in the case of delusional beliefs.  People suffering from mental illness can feel quite strongly about delusional beliefs.  And when we hear a belief from a friend or acquaintance that we find incredible, we might ask, “Are you out of your mind?”

So a reasonable question is where this feeling of rightness originates.  One source is our evolved biology, which has already been discussed.  Another is personal biology.  The case of mental illness has already been mentioned, but there are less extreme examples that researchers have found.  For example, conservatives generally react more fearfully than liberals to frightening images, as reflected in measures of arousal such as skin conductance and eye-blink rate.

Of course, the society we keep influences both what we believe and the feeling of rightness.  We tend to associate with like minded people and this has a reinforcing effect on our beliefs.

The problem with beliefs is that progress depends on questioning them.  The development and advancement of science depended on questioning not only religious beliefs, but also the adequacy of prevailing beliefs.  Progress in the political arena depended on questioning the validity of the concepts of royalty and privileged positions.

Beliefs are a good default position.  Absent beliefs, it would be both difficult and uncomfortable to live.  Nevertheless, beliefs should be challenged when they are clearly incorrect or when they are having undesired consequences.

My personal belief about beliefs is that we manage to live on the basis of internal models we develop about the world.  But I don’t believe that any of my beliefs are certain.  They are weighted with probabilities that can change as the result of new information (data) or as the result of new thinking and reasoning.  Even my most strongly held beliefs are still hedged with some small degree of uncertainty.
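This view of belief, weighted by probabilities that shift as evidence arrives, is what Bayes’ rule formalizes.  A minimal sketch, with made-up numbers:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a belief after seeing evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# A strongly held belief (prior 0.95) meets evidence that is five times
# more likely if the belief is false (0.5) than if it is true (0.1).
posterior = bayes_update(0.95, 0.1, 0.5)
print(round(posterior, 3))  # → 0.792: the belief weakens but is not abandoned
```

Even a strongly held belief budges under contrary evidence, though it takes repeated disconfirmations to overturn it, which fits the small residual uncertainty that hedges even the strongest beliefs.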

A good example of this is Pascal’s argument for believing in God.  His argument was that the payoff for not believing in God, should God exist, could be extremely painful.  So even if one’s estimate of the probability that God exists is infinitesimally small, one should believe.  I have always found this to be one of the few compelling philosophical arguments.  So I believe in God.  Anyone who believes in God has the comfort of this belief while living.  And if there is no God, one will be dead and have no means of knowing that one was wrong.
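In the standard decision-theoretic rendering of the wager (my notation, not Pascal’s), with probability $p$ that God exists and finite worldly payoffs $c$ and $c'$ attached to each choice:

```latex
\begin{aligned}
E[\text{believe}]     &= p \cdot (+\infty) + (1 - p)\, c, \\
E[\text{not believe}] &= p \cdot (-\infty) + (1 - p)\, c'.
\end{aligned}
```

For any $p > 0$, however tiny, the infinite payoff dominates the finite terms, which is why the wager recommends belief even under near-total doubt.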

Richard Dawkins is a brilliant scientist who has made significant contributions to science.  However, he is also one of the most outspoken atheists.  Recently he admitted that he does have some uncertainty and that he is more accurately an agnostic.  However, he argues that he is far enough down on the agnosticism scale to call himself an atheist.  Here we have a stupid argument from a brilliant man.

I find that  many of the problems people have regarding the existence of God stem from religion.  It is important to keep in mind that religions are human institutions and are flawed.  Religions have done much good, but they have also done harm.  Apart from Pascal’s wager, I have a philosophical need for God.  Of course, I realize that my philosophical needs are not necessarily supported by reality.


Cognitive Potential Hiding in Plain Sight

March 1, 2015

This phrase is taken from the cover article of the New Scientist, 21 Feb 2015, “Meet Your Other Brain,” 30-33, by Ted Burrell.  White matter in the brain is white due to myelin.  At one time it was thought that the main purpose of myelin was to speed up reflexes so we could react faster.  However, William Richardson, who studies neural plasticity at University College London, said that “ultimately it allows us to have clever brains.”  A small amount of myelin is made while we are still in the womb, but after birth myelination takes off.  It surges as infants learn to crawl, walk, and talk.  At around age 4 the rate of myelination slows, and teenagers still have the prefrontal cortex left to myelinate.  The prefrontal cortex is crucial for planning and consideration of consequences; consequently, processing there remains slow and inefficient, and teens remain impulsive.  By the time we reach our forties, during which there have been many opportunities to ruin our lives, the final circuitry is completed.  But from our 60s onwards the coverings start to fray and degenerate, which fits the common experience of cognitive decline as we age.  As myelin degenerates, the signals get fuzzier.

Neural plasticity has traditionally been located at the neurons and the synapses between them.  The number of neurotransmitter receptors in a synapse increases the more a pathway is used, which enables the brain to adapt according to learning or experience.  Consequently, our quest to understand cognitive decline, and the potential for activities that boost brain power, has focused on grey matter, the part of the brain and spinal cord packed with the neurons’ cell bodies and synapses.

It wasn’t until 2009 that a new neuroimaging method, diffusion MRI, became available that allows measurement of human white matter in the living brain.  Heidi Johansen-Berg of the University of Oxford examined a 2004 study, which found that learning a new skill such as juggling changed the density of grey matter, an example of classic synaptic plasticity.  She replicated this juggling study and found that after six weeks brain scans showed that myelin had increased more than that of a control group who had no training (Nature Neuroscience, 12, p. 1370).  She found the change not only in the grey matter but also in the underlying white matter pathways, which suggested that these pathways strengthen in some way as the result of experience.  These changes in white and grey matter took place over different timescales, which suggested two different processes.  Johansen-Berg thinks that the increase in white matter would have enabled faster conduction along the circuits coordinating juggling.  This effect was seen in everyone who learned to juggle, regardless of how well they learned, implying that it is the learning process itself that is responsible.

Myelin is formed by oligodendrocytes, octopus-shaped cells with long arms that wrap thin layers of fat 50 to 100 times around an axon, preventing electrical signals from leaking out and expediting the conversation between brain regions.  These cells are made throughout life by oligodendrocyte precursor cells (OPCs), which tile the brain, ready to morph at a moment’s notice.

Myelin plasticity is a second type of plasticity, distinct from the well-known synaptic plasticity.  More studies are needed with human subjects, but the animal studies have important implications for learning and memory.  Well-used pathways get more myelin, speeding up the signals and making the brain more efficient.  Gabriel Corfas of the University of Michigan says, “It’s not only that the information is stored in the plasticity of the synapses but actually in the myelin as well.”  For instance, if you are learning Mandarin, myelination would help you remember the right character faster and more intuitively.  This gives a new dimension to the amount of information, and the types of information, the nervous system can store.  The importance of these and other non-neuronal cells has led to the term our “other brain.”

Myelin information can also be lost.  The brain is a use-it-or-lose-it organ.  “If electricity isn’t flowing, the myelin can degrade, and this can lead to psychological and social problems.  If the brain were a city, and myelin the insulation, some parts would end up in the dark.  A lack of myelin is implicated in conditions like autism, in mental illnesses such as schizophrenia, and in spinal cord and traumatic brain injuries.”

So the bottom line is: keep learning, keep your mind active.  Learning new things is recommended, like a new piano piece (assuming that you do play the piano), as is keeping up with ordinary activities like taking a walk.  If it’s an unfamiliar route, with changing scenery and the requirement to learn the way home, all the better.  Take up a new hobby, then another.  The goal is to keep the electricity flowing a little better, a little longer.

How the Illusion of the Present is Created

January 20, 2015

This blog post is based in large part on an article in New Scientist (10 January 2015, 28-31) by Laura Spinney. Although we feel like we are living in the present, we need to construct the present from what has happened in the recent past. First of all, we need to work with data processed by our senses. Different senses process information at different rates. For example, the auditory system can distinguish two sounds just 2 milliseconds apart, whereas the visual system requires tens of milliseconds. It takes even longer to detect the order of stimuli. There is evidence that even at the subconscious millisecond level, our brains make predictions. For example, when we watch a badly dubbed movie, our brains predict that the audio and visual streams should occur simultaneously, but if the lag between them does not exceed 200 milliseconds we stop noticing that the lip movements and voices of the actors are out of synch. Our brains need to blend these different sources of information coming in at different rates into a coherent present, so we can deal with what is happening in what appears to be now, but is actually the future.

Marc Wittmann of the Institute for Frontier Areas of Psychology and Mental Health in Freiburg, Germany, has developed a model of how this process occurs by drawing on a very large mass of psychophysical and neuroscientific data (Frontiers in Integrative Neuroscience, vol. 5, article 66). He believes that there is a hierarchy of nows, each of which forms the building blocks of the next, until the property of flow emerges as the illusion of the present.

Virginie van Wassenhove and her colleagues at the French Medical Research Agency’s Cognitive Neuroimaging Unit in Gif-sur-Yvette have been investigating how the brain might bind incoming information into a unified functional moment. They exposed people to sequences of bleeps and flashes. Both occurred once per second, but 200 milliseconds out of synch. Brain imaging was used to reveal the electrical activity produced by these two stimuli. This consisted of two distinct brain waves, one in the auditory cortex and the other in the visual cortex, both oscillating at the rate of once per second. At first the two oscillations were out of phase and the research participants experienced the light and sound as being out of synch. But later they reported starting to perceive the bleeps and flashes as simultaneous, as the auditory oscillation became aligned with the visual one. So our brain seems to physically adjust signals to synchronize events if it thinks they belong together (Neuroimage, vol 92, 274). It appears that even at the subconscious level our brain is choosing what it allows into a moment. But according to Wittmann this functional moment is not the now of which we are conscious. That comes at the next level of his hierarchy, the “experienced moment.”

This experienced moment seems to last between 2 and 3 seconds. David Melcher and his colleagues at the University of Trento, Italy, provided a good demonstration of this moment. They presented research participants with short movie clips in which segments lasting from milliseconds to several seconds had been subdivided into small chunks that were shuffled randomly before presentation. If the shuffling occurred within a segment of 2.5 seconds, people could still follow the story as if they hadn’t noticed the switches. But the participants became confused if the shuffled window was longer than 2.5 seconds (PLoS ONE, vol 9, e102248). So our brains seem able to integrate events into a cohesive, comprehensible whole within a time frame of up to 2.5 seconds. The researchers suggest that this window is the “subjective present,” and exists to allow us to consciously perceive complex sequences of events. Melcher thinks that this window provides a bridging mechanism to compensate for the fact that our brains are always working on outdated information. Our brains process stimuli that impinged on our senses hundreds of milliseconds ago, but if we were to react with that lag we would not function effectively in the world. Melcher goes on: “Our sense of now can be viewed as a psychological illusion based on the past and a prediction of the near future, and this illusion is calibrated so that it allows us to do amazing things like run, jump, play sports or drive a car.”

Wittmann acknowledges that it is not clear how all this works. The biological basis of the experienced moment has yet to be found; however, neuroscientist Georg Northoff of the University of Ottawa has proposed one possibility in his 2013 book, Unlocking the Brain. He speculated that implicit timing could be related to slow cortical potentials, a kind of background electrical activity measurable across the brain’s cortex. Wittmann notes that it’s telling that these waves of electrical activity can last several seconds. He also notes that consciousness itself is a kind of filter because it focuses our attention on some things to the exclusion of others. Influenced by factors such as emotion or memory, it might tag or label a subset of functional moments as belonging together, to create an experienced moment.

What about meditators who say they are “in the now”? It is clear that it is impossible to be literally in the now. But is it possible that although they appear to be fooling themselves, they are actually accomplishing something good? Data indicate that the answer is yes. Wittmann did research in which meditators were able to maintain one interpretation of an ambiguous figure longer than non-meditators. Meditators also tend to score higher on tests of attention and working memory capacity. Wittmann notes, “If you are more aware of what is happening around you, you not only experience more in the present moment, you also have more memory content.” He also notes, “Meditators perceive time to pass more slowly than non-meditators, both in the present and retrospectively.”

The final paragraph of the New Scientist article merits direct quotation: “This suggests that with a bit of effort we are all capable of manipulating our perception of now. If meditation expands your now, then as well as expanding your mind, it could also expand your life. So, grab hold of your consciousness and revel in the moment for longer. There’s no time like the present.”

10 Innovations That Changed History and 10 Innovations That Will

November 5, 2014

This is the title of a special report in the New Scientist (October 25-31, 2014). Articles like this are fun, but should not be taken too seriously. However, they do provide food for thought.

10 innovations that have changed history

Cooking. Clearly learning how to start and control fires was a prerequisite, but cooking enabled early humans to enjoy a better diet that advanced physical and cognitive health.

Weapons. Weapons enabled hunting, which provided for a better diet that advanced physical and cognitive health. They also brought about warfare. The article argues that weapons enabled the weakest group member to take down the strongest member of an opposing group, so weapons encouraged early human groups to embrace an egalitarian existence unique among primates. There appears to have been a link between warfare and technological advancement, the most recent example being the Cold War between the United States and the Soviet Union. The launch of Sputnik encouraged a giant increase in US funding for technology development and the training and education of scientists and engineers. I was one of the many beneficiaries of this funding. Would man have reached the moon without this funding? What about progress in computers? The internet is a product of defense spending.

Jewelry and Cosmetics. This is certainly an item I would have left off my list, but the authors argue that they hint at dramatic revolutions in the nature of human beliefs and communication. They are indications of symbolic thought and behavior, because wearing a particular necklace or form of body paint has meaning beyond the apparent. As well as status, it can signify things like group identity or shared outlook. That generation after generation adorned themselves in this way indicates that these people had language complex enough to establish traditions.

Sewing. It is obvious that without sewing there would be little in the way of clothes to protect our bodies and allow us to live outside of highly temperate zones.

Containers. This is an obvious advancement that is easily overlooked. Groups of humans would have needed to remain small absent containers.

Law. Obviously codified rules are needed for societies to survive. Then, there is the concept of justice. The law and justice need to be better aligned. One might be tempted to argue that currently they are orthogonal dimensions.

Timekeeping. Contemporary society could not exist without a system of timekeeping. For much of history, timekeeping systems were local. It was not until the development of the railroads that the different timekeeping systems were brought into alignment, to keep trains from crashing into each other.

Ploughing. Obviously without agriculture, societies would not have developed, and advanced agriculture requires ploughing.

Sewerage. Absent sewerage, not only would the stench be unbearable, but diseases would be widely spread. I will not step into a time-travel machine and go back to a time before sewerage.

Writing. But of course. If there were not writing, there could be no healthymemory blog.

10 innovations that will change history

End of aging. This might come to pass, but what will be its ramifications? Will warfare break out between the ages? Will people eventually grow tired of who they have been for so long and opt out?

Aging might end, but can the quality of life be maintained or improved?

Decision Making Machines. Not only do we not like making too many decisions, we are not good at making decisions. Perhaps decision making machines could replace our non-functioning legislative systems.

Customisable Bodies. Well, this has already started with plastic surgery. Will the results be improved? How will people choose to customize their bodies? How might this affect athletic competitions?

Cryptocurrencies. Bitcoin is provided as an example here. Would these currencies constitute an improvement or just another means of speculation?

Virtual Reality. Might virtual reality come to be seen as better than real reality, so that virtual living replaces real living?

Brain Uploads. Here we have the singularity with silicon. This is Kurzweil’s future, one of which I am quite skeptical.

Genetic engineering. I expect great things here to include the end of disease and genetic defects. There might even be substantive genetic improvements to mitigate our many shortcomings.

Space Colonization. Yes, in our solar system. But we need to learn how to break the laws of physics to colonize outside our solar system.

End of Privacy. This might already have occurred. What is needed are laws to prevent abuses of this end of privacy.

An Abundance of Everything. Something to be hoped for. Today only a minority of our species enjoys the benefits of technology while not suffering from war and atrocities at the hands of fanatics. So why was the end of war and terrorism not on the list of ten advances for the future?

© Douglas Griffith and, 2014. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

You Can Teach an Old Dog New Tricks

October 8, 2014

This post is based on “Old dog, new tricks” in The Scientific Guide to a Better You: New Scientist: The Collection. The saying “you can’t teach an old dog new tricks” has been around for a long, long time. Too long, in fact, to hold up under the new findings in science. Neurogenesis continues as long as we live, as does the ability to learn new things.

I had long believed that there was a critical age for language acquisition. The idea was that we were designed to pick up languages naturally at an early age; after the onset of puberty, the task became more difficult. A study by Ellen Bialystok at York University in Toronto, Canada, disabused me of this notion. She studied US census records that detailed the linguistic skills of more than 2 million Hispanic and Chinese immigrants. A “critical period” for learning a second language should have created a sharp difference between those who changed country in early childhood and those who were uprooted in adolescence. There was no sharp difference. Rather, there was a very gradual decline with age among immigrants. This could reflect differences in environment as well as adults’ rusty brain circuits. It is not that old dogs can’t learn, but rather a matter of old dogs not expending the effort to learn.

Gary Marcus, a psychologist, devoted himself to learning how to play the guitar when he was 38. He wrote a book on his experience titled Guitar Zero. Initially his family laughed at him, but eventually they saw that he was making progress. Typically adults are impatient when learning to play a new instrument. They do not want to put up with the frustration associated with this learning, something to which most young students adapt.

Another study by Uang Zang at the University of Minnesota in Minneapolis focused on the acquisition of foreign accents in adults. When the adults were given recordings that mimicked the exaggerated baby talk of cooing mothers, the adults progressed quite rapidly.

When volunteers visiting Virginia Penhune’s lab at Concordia University in Montreal learned to press keys in a certain sequence, the adult volunteers outperformed the younger volunteers.

Juggling is a challenging task of hand-eye coordination. Nearly 1,000 volunteers from all age groups learned to juggle over six training sessions. Although the 60- to 80-year-olds started slowly, they soon caught up with the 30-year-olds. At the end of the six sessions all adults were juggling more confidently than the 5- to 10-year-olds.

Adults also tend to hamper progress with their own perfectionism: children jump right into tasks while adults agonize over the mechanics of movement, trying to conceptualize exactly what is required. Gabriele Wulf of the University of Nevada at Las Vegas says, “Adults think so much more about what they are doing. Children just copy what they see.” Wulf’s work shows that we should focus on the outcome of our actions rather than on the intricacies of movement. Similarly, overly rigid practice regimes can stifle long-term learning. For example, it is better to shoot around the court than to try to perfect a shot from a particular position. Even those who feel compelled to perfect a particular shot should intersperse it with shots from different positions on the court.

We also may have a tendency to lose confidence as we get older, and this can have a big impact on performance. In one study half the students were given sham feedback on pitching a ball, being told that their performance was above average. They performed better on a subsequent test than a group that had practiced but had not been given the sham feedback.

One of the big problems we adults have is finding time to learn. We work, and we have errands and commitments to others, including our families. Babies, however, have all the time in the world to learn. Food, drink, even their personal hygiene is taken care of for them. Gradually obligations develop, but some of those obligations involve learning, and children still have gobs of time to learn. When we adults are freed of our obligations, we should not forget to take advantage of the additional time to learn new things and to engage in new pursuits.

To address the short amount of time that working adults have, the cognitive scientist Ed Cooke has developed a website that works to integrate learning into the adult day and to take some of the pain out of testing.

It is also important to remember that exercise is important, and the amount of exercise can be fairly modest. (See the healthymemory blog post, “To improve your memory, build your hippocampus.”)

Why Have We Stopped Getting Smarter?

September 10, 2014

“Why Have We Stopped Getting Smarter” is the subtitle of an article in the New Scientist, August 23, 2014, titled “Dumbing Down.” I feel compelled to post about this article because it is a likely sign of the ending of, or perhaps even a reversal of, the Flynn Effect. I have written several posts on the Flynn Effect (type “Flynn Effect” into the healthymemory blog search box to find them). The Flynn Effect is the increase in IQ scores that has occurred over the past several decades. This has required the repeated re-norming of IQ tests so that the average remains at 100. Well, that increase has now stopped and might even be reversing.
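Re-norming is just a linear rescaling of raw test scores so that the reference population lands at a mean of 100 (conventionally with a standard deviation of 15). A minimal sketch of the idea, with invented numbers purely for illustration (these are not figures from any actual IQ norming study):

```python
# Illustrative sketch of IQ re-norming: raw scores are rescaled by a linear
# transformation so the norming sample has mean 100 and standard deviation 15.
from statistics import mean, stdev

def renorm(raw_scores, target_mean=100, target_sd=15):
    """Map raw scores onto an IQ scale with the given mean and SD."""
    m, s = mean(raw_scores), stdev(raw_scores)
    return [target_mean + target_sd * (x - m) / s for x in raw_scores]

# Hypothetical norming samples: as raw performance drifts upward over the
# decades (the Flynn Effect), re-norming keeps the average pegged at 100.
sample_1980 = [38, 42, 45, 47, 50, 53, 55, 58, 62]
sample_2010 = [x + 6 for x in sample_1980]  # same spread, higher raw mean

print(round(mean(renorm(sample_1980))))  # 100
print(round(mean(renorm(sample_2010))))  # still 100 after re-norming
```

This is why a rising (or falling) average never shows up in the reported IQ scale itself: the re-norming absorbs it, and the trend is only visible when old and new norms are compared directly.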

The New Scientist article goes into several explanations as to why this has happened. One of them is that smarter people are having fewer children, so that dumber people are contributing more to the average, with the result that the average IQ has stopped increasing and might even have begun to decrease. There seems to be a belief among some that we have stopped getting smarter and might even be dumbing down, hence the title and subtitle of the article.

This is ironic because Flynn himself used the effect to argue that IQ tests were not accurately measuring intelligence. He argued that had there been true increases in intelligence, society would have advanced much more than it has, and would be in much less trouble than it is in. So I think he would also argue that the end and possible reversal of the Flynn Effect does not mean that we have stopped getting smarter or that we are dumbing down.

Knowing and believing one’s IQ score can be a problem. Those with high scores might reason that they do not need to learn or apply themselves because they are blessed with so much brain power. On the other hand, those who know and believe their low IQ scores might think that they lack sufficient brain power and concede defeat.

Of course readers of the healthymemory blog should believe that they should use whatever brain power they have to best advantage. Moreover, their goal should be to continue to learn and grow their cognitive capacity as long as they live. They should also know that neurogenesis provides for this growth as long as they maintain their physical health and grow the health of their memories by following some of the activities (there are way too many to follow them all) they find in the healthymemory blog.

© Douglas Griffith and, 2014. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Fooling Ourselves Beneficially

May 15, 2013

The following bits of wisdom are taken from an article in the April 6, 2013 edition of the New Scientist, “Lost In Translation” by Caroline Williams. The article reports a study1 showing how faking calmness and confidence can not only change the way others see us, but can help us change ourselves. The participants in this experiment were asked to hold either a “high power” or a “low power” pose for two minutes. The high power pose was expansive: sitting with legs on a desk and hands behind the head, or standing with legs apart and hands on the hips. The low power pose involved hunching and taking up little space. Then they played a gambling game where the odds of winning were 50:50. The researchers took saliva samples to test for the levels of testosterone and cortisol. Testosterone is a “power” hormone, whereas cortisol is a “stress” hormone. Those in the high power pose group were significantly more likely to gamble than those in the low power pose group (86% to 60%). Participants in the high power pose group had a 20% increase in testosterone and a 25% decrease in cortisol, whereas those in the low power pose group had a 10% decrease in testosterone and a 15% increase in cortisol. Increased testosterone has also been linked to increased pain tolerance.

Research has also shown that sitting up straight leads to positive emotions, whereas sitting with hunched shoulders leads to feeling down. There is plenty of research that has also shown that faking a smile makes you feel happier, whereas faking a frown has the opposite effect. There is even evidence that people with Botox injections, which prevent them from frowning, feel generally happier. Of course, there are other interpretations for the Botox results.

The New Scientist article ends with an excerpt from one of Madonna’s hits: “Don’t just stand there, let’s get to it, strike a pose. There’s something to it.”

1Carney, D. Psychological Science, 21, p.1463.

New Approaches to Alzheimer’s Disease

October 17, 2012

Between 1998 and 2011, 101 experimental treatments for Alzheimer’s were scrapped. Only three drugs made it to market, and they do not cure Alzheimer’s; they merely slow it down. Treatments have targeted the obvious hallmarks of Alzheimer’s disease: the sticky plaques that clog up people’s brains. Two of the largest trials of treatments to attack these plaques failed. So it appears that other approaches are needed, approaches that focus on other, earlier events. The immediately preceding post outlined one of these new approaches. Another article1 described new trials that are focusing on protecting synapses, the gaps across which neurons communicate.

Bryostatin 1 is a cancer drug that has been shown to boost an enzyme, PKC epsilon. This enzyme both helps form synapses and protects them against plaque. A trial that will test this drug in people with Alzheimer’s is about to begin.

Patricia Salinas and her colleagues at University College London have shown that soluble beta-amyloid raises the concentration of a synapse-destroying enzyme called Dkk1. When the enzyme was blocked in cultures of brain cells, synapses remained intact. Potentially this could provide a way to protect the aging brain.

Gary Landreth and his team at Case Western Reserve University have found that another cancer drug, bexarotene, got rid of half the plaques within three days in an experiment using mice. The drug also reduced levels of beta-amyloid, and the animals rapidly recovered their cognitive abilities.

The Healthymemory blog always takes pains to note that although these amyloid plaques appear to be a necessary condition, they do not appear to be a sufficient condition for Alzheimer’s. There have been autopsies of individuals whose brains all showed conspicuous signs of Alzheimer’s, yet these individuals never evidenced any of its symptoms when they were alive. The explanation offered for this finding is that these people had built up a cognitive reserve during their lifetimes. The healthymemory blog is a strong advocate of building this cognitive reserve through cognitive exercise (e.g., mnemonic techniques), and by remaining cognitively active and engaging in cognitive growth throughout one’s entire life.

1Hamzelou, J., (2012). A New Direction. New Scientist, 29 September, p. 7.

© Douglas Griffith and, 2012. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

VENs: the Key to Consciousness?

July 28, 2012

VENs stands for Von Economo Neurons. Constantin von Economo was the neuroscientist who discovered these neurons.1 VENs are quite distinctive in appearance. They are at least 50 per cent and sometimes up to 200 per cent larger than typical neurons. They have a long spindly cell body with a single projection at each end and very few branches. They are quite rare, making up just about one per cent of the neurons in two small areas of the brain: the anterior cingulate cortex (ACC) and the fronto-insular (FI) cortex. The ACC and FI are heavily involved in many of the more advanced aspects of cognition and feeling. They make up a social monitoring network that keeps track of social cues so that we can react appropriately.

The ACC and FI keep a subconscious tally of what is going on around us and direct attention to the most important events, as well as monitoring sensations from the body to detect any changes. Both these brain regions are active when we recognize our reflection in a mirror. This suggests that these parts of the brain underlie our sense of self. They are a key component of consciousness, providing a sense of self-identity and a sense of the identity of others. They provide the sense of how we feel.

The notion is that VENs provide a fast relay system, a kind of social superhighway that allows the gist of a situation to move quickly through the brain, enabling us to react intuitively. This is a crucial survival skill in social species such as our own. VENs are also found in social mammals.

People with fronto-temporal dementia lose large numbers of VENs in the ACC and FI early in the disease. The main symptom of the disease is a complete loss of social awareness, empathy, and self-control.

According to one study2 people with autism fall into two groups. One group consists of those who have too few VENs, so they might not have the necessary wiring to process social cues. The other group consists of those who have far too many VENs. Having too many VENS might make emotional systems fire intensely, causing people with autism to feel overwhelmed.

Another study3 found that people with schizophrenia who committed suicide had significantly more VENs in their ACC than schizophrenics who died of other causes. The notion is that the over-abundance of VENs might create an overactive emotional system that leaves them prone to negative self-assessment and feelings of guilt and hopelessness.

Bud Craig, a neuroanatomist at Barrow Neurological Institute has pointed out that the bigger the brain, the more energy it takes to run, so it is crucial that it operates as efficiently as possible. He said, “Evolution produced an energy calculation system that incorporated not just the sensory inputs from the brain. And the fact that we are constantly updating this picture of “how I feel now” has an interesting and very useful by-product: we have a concept that there is an “I” to do the feeling. Evolution produced a very efficient moment-by-moment calculation of energy utilization that had an epiphenomenon, a by-product that provided a subjective representation of my feelings.”4

The author of the New Scientist article concludes, “If he’s right—and there is a long way to go before we can be sure—it raises a very humbling possibility: that far from being the pinnacle of brain evolution, consciousness might have been a big, and very successful, accident.”5

Although I am excited by the possibility that the neurological basis of consciousness has been found, I am disturbed by these reductionist conclusions. Most of us assume that there is a neural basis for consciousness. But finding this neural basis does not prove that consciousness is an epiphenomenon (not real). The next post will provide evidence regarding the reality and purpose of consciousness.

1Williams, C. (2012). The Conscious Connection. New Scientist, 21 July, 33-35.

3PLoS One, vol 6, e20936.

4Op. cit., p. 35.


© Douglas Griffith and, 2012. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

The Importance of Ikigai

November 2, 2011

Ikigai is a Japanese word roughly translated as “the reason for which we wake up in the morning.” In other words, having a purpose in life. Knowing your purpose in life is important to your well-being.1 Many studies have purported to show a link between some aspect of religion and better health. For example, religion has been associated with lower rates of cardiovascular disease and stroke, lower blood pressure, fewer metabolic disorders, better immune functioning, improved outcomes for infections such as HIV and meningitis, and lower risk of developing cancer. Of course, it was not possible for any of these studies to be Randomized Controlled Trials (RCTs), where participants are randomly assigned to religious and non-religious groups. So it is possible that there is a strong element of self-selection here.

However, there are other possible reasons for these results. Religious people tend to pursue lower risk lifestyles. Churchgoers typically enjoy strong social support. And, of course, seriously ill people are less likely to attend church. However, a recent study tried to statistically control for these factors and concluded that “religiosity/spirituality” does have a protective effect, but only for healthy people.2 Some researchers attribute this to the placebo effect (See the Healthymemory Blog Post, “Placebo and Nocebo Effects”). Others believe that positive emotions (See the Healthymemory Blog Post, “Optimism”) associated with “spirituality” promote beneficial physiological responses.

Still others think that what really matters is having a sense of purpose in life, whatever it might be. Presumably knowing why we are here and what is important increases our sense of control over events, making them less stressful. Remember the study by Saron that was reported in the Healthymemory Blog Post, “The Benefits of Meditation.” The increase in the levels of the enzyme that repairs telomeres correlated with an increased sense of control and an increased sense of purpose in life. The meditators were doing something they loved, and that provided a purpose in life.

So, it is important to have a purpose in life when you awaken in the morning. This is important throughout one’s life and is something that needs to be considered before retiring (See the Healthymemory Blog Posts, “The Second Half of Life,” and “Could the AARP Be Telling Us Not to Retire?”).

1Much of this post is based on an article, Know your purpose, by Jo Marchant in the New Scientist, 27 August 2011, p. 35.

2Psychotherapy and Psychosomatics, 78, p.81.

© Douglas Griffith and, 2011. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

To Remember It, Sleep on It

March 2, 2011

A recent article1 reports an interesting experiment2 illustrating the role of sleep in memory. The researchers had 191 adults perform different memory tasks, for example, learning word pairs. Approximately half of the adults were told to expect a memory retest 9 hours later. The remainder were misled and told that they would be performing a different kind of task. Both groups were re-tested, and those who expected the retest recalled 12 percent more word pairs than those who slept with no expectation of a test. The participants’ brain waves were monitored during their sleep, and those who were anticipating a test exhibited more slow-wave sleep. Slow-wave sleep is known to be linked to memory consolidation.

Sleep alone did not significantly improve memory. Those participants who were not expecting a retest performed just as badly regardless of whether or not they had slept before the exam.

The principal author of the report, Jan Born of the University of Tubingen, noted that “There is an active memory process during sleep that selects certain memories and puts them in long-term storage.” Another memory researcher, Penny Lewis of the University of Manchester, who also studies sleep, said that the study is “very convincing.” She also noted, “It looks like if you tell someone something is important, it gets enhanced more.”

Historically, sleep has presented a mystery. We spend about one-third of our lives sleeping and the question has been why. Sleep is both necessary and beneficial. It has been theorized that memory consolidation is one of the benefits of sleep. This study indicates that an intention to learn improves memory consolidation during sleep.

I have read that Leonardo da Vinci would go over his notes before going to sleep. Apparently, he had some insight that doing so would cause his mind to keep working on this information during sleep. This would appear to be a good general process.

Students should realize that one of the worst ways to prepare for a test is to pull an all-nighter. Sleep is critical to test performance. So get the studying out of the way before going to sleep and let the enhanced memory consolidation proceed.

1(2011). Sleep Sorts the Memory Wheat from the Chaff, New Scientist, 5 February, 8.

2Born, J. (2011). The Journal of Neuroscience, DOI:10.1523/jneurosci.3575-10.2011. 

© Douglas Griffith and, 2011. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

How Do We See?

October 3, 2010

The study of visual perception is difficult because it happens so fast. Somehow light comes into our eyes, makes contact with our memory and, lo and behold, we see a meaningful scene. A recent article1 in the New Scientist provides an overview of how this occurs, or, at least, how we think it occurs given our current state of knowledge.

Since perception happens so quickly, agnosias, specific disorders of perception, can be quite informative. The previous blog post explored prosopagnosia. Other types of agnosias include:

Simultanagnosia – Seeing only one object at a time when viewing a scene comprised of many items.

Integrative agnosia – Inability to recognize whole objects, tending instead to focus on individual features of an object.

Visual form agnosia – Inability to describe the shape, size, or orientation of objects, but still being able to manipulate them.

Optic ataxia – Ability to report the shape and size of an object, though manipulating the object clumsily.

Pure alexia (aka agnosia for words) – Inability to identify individual characters or even text, although sometimes being able to write.

Topographical agnosia – Inability to recognize known landmarks or scenes.

Color agnosia – Ability to perceive colors without being able to identify, name, or group them according to similarity.

Research using brain scans can be quite useful in identifying the specific areas in the brain that accomplish these functions. Brain scans have revealed that people with visual form agnosia tend to have damage to the ventral (lower) part of the brain’s visual area. However, people with optic ataxia tend to have damage to the dorsal (upper part) of the brain’s visual area. So it appears that we have two streams of visual processing. The ventral pathway recognizes the object, while the dorsal pathway determines where that object is located in the visual field.

Some neuroscientists think that the brain binds all the different features of the ventral stream to a “master map of location”, which is held in the dorsal stream. They believe that this binding process is so fundamental that this link needs to be formed before an image can pop into consciousness.

So our perceptual system seems to be highly modular with many different modules contributing to conscious experience. All this activity occurs below the level of consciousness to yield the conscious world we do experience.

1Robson, D. (2010). Seeing Isn’t Believing. New Scientist, 28 August, 30-33.

© Douglas Griffith and, 2010. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.