Archive for the ‘Transactive Memory’ Category

Mindshift Resources

October 5, 2017

This post provides information on resources for mindshifts. Although it focuses on massive open online courses (MOOCs), mindshifts can be accomplished through many sources. MOOCs are a new, high-tech means of learning. Some MOOCs are free, even from first-rate universities, and some require payment; usually payment is required to earn college credit. Autodidacts, however, do not necessarily want college credit. There is a website by Laura Pickard, who writes, “I started the No-Pay MBA website as a way of documenting my studies, keeping myself accountable, and providing a resource for other aspiring business students. The resources on this site are for anyone seeking a world-class business education using the free and low-cost tools of the internet. I hope you find them useful!” She explains how she got a business education equivalent to an MBA for less than 1/100th the cost of a traditional MBA. Her site lists free online courses from the best universities, as well as the 50 best MOOCs of all time. It is a good resource for learning about MOOCs.

Here are some notes on additional resources provided in Mindshift.

Coursera: This is the largest MOOC provider. It has courses on many different subjects and in many different languages. It also offers an MBA and a data science master’s degree, and offers “specializations”—clusters of MOOCs.

edX: Has a large number of courses on many different subjects and in many different languages. Offers “MicroMasters”—clusters of MOOCs.

FutureLearn: Has a large number of courses on many different subjects and in many languages, particularly, but not exclusively, from British universities. Offers “Programs”—clusters of MOOCs.

Khan Academy: Offers tutorial videos on a large number of subjects, from history to statistics. The site is multilingual and uses gamification.

Kadenze: Special focus on art and creative technology.

Canvas Network: Designed to give professors an opportunity to give their online classes a wider audience. Has a large number of courses on many different subjects.

Open Education by Blackboard: Similar to Canvas Network.

World Science U: A platform designed to use great visuals to communicate ideas in science.

Instructables: Provides user-created and -uploaded do-it-yourself projects which are rated by other users.

You can find the author’s MOOC, “Learning How to Learn” on




How To Take Back Your Life from Disruptive Technology

September 27, 2017

There have been twelve posts on “The Distracted Mind: Ancient Brains in a High Tech World” documenting the adverse effects of technology. An additional post demonstrated that the mere presence of a smartphone can be disruptive. The immediately preceding post documented the costs of social media per se. These technologies have disruptive effects on lives and minds, and those effects degrade your mind, which, as the blog posts documented, affects many aspects of your life, including education. Hence the title of this blog post.

Unfortunately, social media make social demands, so removing yourself from social media is something that needs to be explained to your friends. Let them know you’ll still be willing to communicate via email, and review with them the reasons for your decision. Cite the relevant research presented in this blog and elsewhere. Point out that Facebook not only has an adverse impact on cognition; it was also a tool used by Russia to influence our elections. Facebook accepted rubles to influence the US Presidential election, and the magnitude of this intervention has yet to be determined. For patriotic reasons alone, Facebook should be ditched. You are also taking these steps to reclaim control of your attentional resources and to build a healthy memory.

Carefully consider what steps you need to take. Heavy users become nervous when they are not answering alerts. One approach is to gradually lengthen the intervals between answering alerts. Going cold turkey and simply turning off alerts might be more painful initially, but it would free you from the compulsion to answer alerts sooner, and it would make your behavior clearer to your friends earlier rather than later. Similarly, you can answer text messages and phone calls only at designated times. Voice mail assures you won’t miss anything.

If asked by a prospective employer or university as to why you are not on Facebook, explain that you want to make the most of your cognitive potential and that Facebook detracts from this objective. Cite the research. You can develop a web presence by having your own website that you would control. Here you could attach supporting materials as you deem fit.

Doing this should make you stand out over any other candidates who might be competing with you (unless they are also following the advice of this blog). If your reviewer is not impressed, you should conclude that the reviewer is not worthy of you and that affiliating with them would be a big mistake. Hold to this conclusion regardless of the reputation of the school or employer.

© Douglas Griffith and, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

The Happiness Effect

September 26, 2017

The subtitle of “The Happiness Effect,” a book by Donna Freitas, is “How Social Media is Driving a Generation to Appear Perfect at Any Cost.” The book reports extensive research, using surveys and interviews, on the use of social media by college students. The subtitle could be expanded to “How Social Media is Driving a Generation to Appear Perfect at Any Cost, Resulting in Unhappiness and Anxiety.” The book focuses on the emotional and social costs and ends with suggestions regarding how to ameliorate the damage.

Although this is an excellent book, HM had difficulty finishing reading it. He kept thinking how stupid, moronic, and damaging social media are. How could new technology be adopted and put to such a counterproductive use? The reason that HM’s reaction is much more severe than that of Donna Freitas is that he is also considering social media in terms of how they exacerbate the problem of the Distracted Mind, which has been the topic of the fifteen healthy memory blog posts immediately preceding this one. So these activities that produce unhappiness and anxiety also assault the mind with more distractions.

They do so in two ways. First, they subtract time from effective thinking. Second, they foster interruptions that further disrupt effective thinking. So consider the possibility that social media foster unhappy airheads.

Facebook pages are cultivated to impress future employers. Organizations and activities cultivate Facebook pages to provide good public relations for their organizations and activities. But remember the healthy memory blog post, “The Truth About Your Facebook Friends” based on Seth Stephens-Davidowitz’s groundbreaking book, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” You should realize that anyone who believes what they read on Facebook is a fool.

The following post will suggest some activities for you to consider should you be convinced of what you have read in the healthy memory blog and related sources on this topic. These suggestions go beyond what was presented in the blog post “Modifying Behavior.”



The Truth About the Internet

September 3, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” Perhaps the most common statement about the internet with which everyone agrees is that the internet is driving Americans apart and that it plays a large part in the polarization of the nation. The only problem with this generally agreed upon view is that it is wrong.

The evidence against this piece of conventional wisdom comes from a 2011 study by two economists, Matt Gentzkow and Jesse Shapiro. They collected data on the browsing behavior of a large sample of Americans. Their dataset included the self-reported ideology, whether they were liberal or conservative, of the research participants.

Gentzkow and Shapiro asked themselves the following question: Suppose you randomly sampled two Americans who happen to both be visiting the same news website. What is the probability that one of them is liberal and the other conservative? In other words, how frequently do liberals and conservatives “meet” on news sites? Suppose liberals and conservatives never got their online news from the same place: liberals exclusively visited liberal websites, and conservatives exclusively visited conservative ones. If this were the case, the chances that two Americans on a given news site have opposing political views would be 0%. The internet would be perfectly segregated. Liberals and conservatives would never mix.

However, suppose, in contrast, that liberals and conservatives did not differ at all in how they got their news. In other words, a liberal and a conservative were equally likely to visit any particular news site. If this were the case, the chances that two Americans on a given news website have opposing political views would be about 50%. Then the internet would be perfectly desegregated. Liberals and conservatives would perfectly mix.

According to Gentzkow and Shapiro, in the United States the chances that two people visiting the same news site have different political views is about 45%. So the internet is far closer to perfect desegregation than to perfect segregation. Liberals and conservatives are “meeting” each other on the web all the time.
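The arithmetic behind the two extremes above can be made concrete. Under the simplifying assumption that a site’s audience splits into a liberal share p and a conservative share 1 − p, the chance that two randomly sampled visitors hold opposing views is 2p(1 − p): zero for a one-sided audience, and a maximum of 50% for an even split. This is only a toy sketch of the idea, not Gentzkow and Shapiro’s actual estimator, which aggregates across many sites:

```python
def opposing_views_probability(liberal_share: float) -> float:
    """Chance that two randomly sampled visitors of a news site hold
    opposing political views, given the liberal share of its audience
    (the conservative share is 1 - liberal_share)."""
    return 2 * liberal_share * (1 - liberal_share)

# Perfect segregation: an audience that is all liberal (or all
# conservative) never yields an opposing-views pair.
print(opposing_views_probability(1.0))   # 0.0

# Perfect desegregation: a 50/50 audience gives the 50% maximum.
print(opposing_views_probability(0.5))   # 0.5

# A 45% meeting rate corresponds to audiences leaning roughly 66/34.
print(round(opposing_views_probability(0.66), 3))  # 0.449
```

The 50% ceiling explains why the observed 45% counts as “far closer to perfect desegregation”: it sits near the top of the possible range.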

Using data from the General Social Survey, Gentzkow and Shapiro also measured how often people encounter opposing political views offline and found that all of those numbers were lower than the chances that two people on the same news website have different politics.

This lack of segregation on the internet can be put further in perspective by comparing it to segregation in other parts of our lives. Here are the probabilities that someone you meet has opposing political views:

On a news website: 45.2%
Coworker: 41.6%
Offline neighbor: 40.3%
Family member: 37%
Friend: 34.7%

So in other words, you are more likely to come across someone with opposing views online than offline.

As to why the internet isn’t more segregated, there are two factors that limit political segregation on the internet. The first is that the internet news industry is dominated by a few massive sites. In 2009, four sites, among them Yahoo News and AOL News, collected more than half of all news views. Yahoo News is the most popular news site among Americans, with close to 90 million unique monthly visitors, 600 times the audience of the white supremacist site Stormfront. Mass media sites like these aim to appeal to a broad, politically diverse audience.

The second reason the internet isn’t all that segregated is that many people with strong political opinions visit sites of the opposite viewpoint. The reason is similar to the reason for the hostility to President Obama’s first address on the mass shooting in San Bernardino: people like to defend their views and, perhaps, to convince themselves that the opposition are idiots. Seth notes that someone who visits extremely liberal sites is more likely than the average internet user to visit a right-leaning site, and someone who visits extremely conservative sites is more likely than the average internet user to visit a more liberal site.

The Gentzkow and Shapiro study was based on data from 2004-2009, which was relatively early in the history of the internet. Might the internet have grown more compartmentalized since then? Have social media, particularly Facebook, altered their conclusion? If our friends tend to share our political views, the rise of social media should mean a rise of echo chambers, shouldn’t it?

It’s complicated. Although it is true that people’s friends on Facebook are more likely than not to share their political views, a team of data scientists—Eytan Bakshy, Solomon Messing, and Lada Adamic—found that a surprising amount of the information people get on Facebook comes from people with opposing views. How can this be? Don’t our friends tend to share our political views? They do. But there is a crucial reason that Facebook may lead to a more diverse political discussion than offline socializing: on average, people have substantially more friends on Facebook than they do offline, and these weak ties facilitated by Facebook are more likely to be with people of opposing political views.

So Facebook exposes its users to weak social connections: people with whom you might never socialize offline, but whom you do friend on Facebook. And you do see their links to articles with views you might never have otherwise considered.

In sum, the internet actually does not segregate different ideas, but rather gives diverse ideas a larger distribution.


Effectively Countering Islamophobia

September 2, 2017

The immediately preceding post, on Obama’s prime-time address after the mass shooting in San Bernardino, indicated that President Obama’s appeal to our better nature failed. Worse yet, it was counterproductive: Islamophobia increased rather than decreased. As promised, here is a more effective presentation President Obama made two months after that original address. This time Obama spent little time insisting on the value of tolerance. Instead he focused overwhelmingly on provoking people’s curiosity and changing their perceptions of Muslim Americans. He told us that many of the slaves from Africa were Muslim; that Thomas Jefferson and John Adams had their own copies of the Koran; that the first mosque on U.S. soil was in North Dakota; and that a Muslim American designed skyscrapers in Chicago. Obama again spoke of Muslim athletes and armed service members, but also talked of Muslim police officers and firefighters, teachers, and doctors.

So what was wrong with Obama’s original address? He was telling many in his audience that their emotional responses were wrong. Kahneman’s two-system view of cognition is helpful here.

System 1 is named Intuition. It is very fast, employs parallel processing, and appears to be automatic and effortless. System 1 processes are so fast that they are executed, for the most part, outside conscious awareness. Emotions and feelings are also part of System 1, and Islamophobic responses are essentially System 1 responses. Learning in System 1 is associative and slow; for something to become a System 1 process requires much repetition and practice. Activities such as walking, driving, and conversation are primarily System 1 processes. They occur rapidly and with little apparent effort. We would not have survived if we could not do these types of processing rapidly. But this speed of processing is purchased at a cost: the possibility of errors, biases, and illusions.

System 2 is named Reasoning. It is controlled processing that is slow, serial, and effortful. It is also flexible. This is what we commonly think of as conscious thought. One of the roles of System 2 is to monitor System 1 for processing errors, but System 2 is slow and System 1 is fast, so errors slip through.
In addition to engaging System 1 processes, many in the audience needed to justify their feelings. Consequently they made Google searches that hardened their views.

In his second address, however, he bypassed System 1 processes by providing new information to System 2, which is what we commonly regard as thinking. The audience’s views were not directly challenged in this nonthreatening presentation. Instead, new information was presented that might be further processed, with a resulting decrease in Islamophobia.

Changing hardened beliefs is very difficult. Directly challenging these beliefs is counterproductive. So the approach needs to employ some sort of end run around these beliefs. That is what Obama did by providing nonthreatening information in his second address.

The Southern Poverty Law Center has developed some effective approaches in which people of different beliefs work together to solve a problem. This approach is difficult and time consuming but it has worked in a variety of circumstances. This approach is not likely to be universally applicable as it does require people of different beliefs to interact.


The Response to Obama’s Prime-time Address After the Mass Shooting in San Bernardino

September 1, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” On December 2, 2015, in San Bernardino, California, Rizwan Farook and Tashfeen Malik entered a meeting of Farook’s coworkers armed with semiautomatic pistols and semiautomatic rifles and murdered fourteen people. Literally minutes after the media first reported one of the shooters’ Muslim-sounding names, a disturbing number of Californians had decided what they wanted to do with Muslims: kill them.

The top Google search in California at the time was “kill Muslims,” which Californians were searching for with about the same frequency as “martini recipe,” “migraine symptoms,” and “Cowboys roster.” In the days following the attack, for every American searching for “Islamophobia,” another was searching for “kill Muslims.” Whereas hate searches were approximately 20% of all searches about Muslims before the attack, more than half of all search volume about Muslims became hateful in the hours that followed it.

These search data can tell us how difficult it can be to calm such rage. Four days after the shooting, then-President Obama gave a prime-time address to the country. He wanted to reassure Americans that the government could both stop terrorism and, perhaps more important, quiet the dangerous Islamophobia.

Obama spoke of the importance of inclusion and tolerance in powerful and moving rhetoric. The Los Angeles Times praised Obama for “[warning] against allowing fear to cloud our judgment.” The New York Times called the speech both “tough” and “calming.” The website Think Progress praised it as “a necessary tool of good governance, geared towards saving the lives of Muslim Americans.” Obama’s speech was judged a major success.

But was it? Google search data did not support such a conclusion. Seth examined the data together with Evan Soltas. In the speech the president said, “It is the responsibility of all Americans—of every faith—to reject discrimination.” But searches calling Muslims “terrorists,” “bad,” “violent,” and “evil” doubled during and shortly after the speech. President Obama also said, “It is our responsibility to reject religious tests on who we admit into this country.” But negative searches about Syrian refugees, a mostly Muslim group then desperately looking for a safe haven, rose 60%, while searches asking how to help Syrian refugees dropped 35%. Obama asked Americans to “not forget that freedom is more powerful than fear.” Yet searches for “kill Muslims” tripled during the speech. Just about every negative search Seth and Soltas could think to test regarding Muslims shot up during and after Obama’s speech, and just about every positive search they could think to test declined.

So instead of calming the angry mob, as people thought he was doing, the internet data tell us that Obama actually inflamed it. Seth writes, “Things that we think are working can have the exact opposite effect from the one we expect. Sometimes we need internet data to correct our instinct to pat ourselves on the back.”

So what can be done to quell this particular form of hatred so virulent in America? We’ll try to address this in the next post.

Implicit Versus Explicit Prejudice

August 30, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” Any theory of racism has to explain the following puzzle in America: on the one hand, the overwhelming majority of black Americans think they suffer from prejudice—and they have ample evidence of discrimination in police stops, job interviews, and jury decisions. On the other hand, very few white Americans will admit to being racist. The dominant explanation has been that this is due, in large part, to widespread implicit prejudice. According to this theory, white Americans may mean well, but they have a subconscious bias which influences their treatment of black Americans. There is an implicit-association test for such a bias. These tests have consistently shown that it takes most people milliseconds longer to associate black faces with positive words such as “good” than with negative words such as “awful.” For white faces, the pattern is reversed. The small extra time is interpreted as evidence of someone’s implicit prejudice—a prejudice the person may not even be aware of.

There is an alternative explanation for the discrimination that African-Americans feel and whites deny: hidden explicit racism. People might harbor widespread conscious racism to which they do not want to confess—especially in a survey. This is what the search data seem to be saying. There is nothing implicit about searching for “n_____ jokes.” It’s hard to imagine that Americans are Googling the word “n_____” with the same frequency as “migraine” and “economist” without explicit racism having a major impact on African-Americans. There was no convincing measure of this bias prior to the Google data. Seth uses this measure to see what it explains.

It explains, as was discussed in a previous post, why Obama’s vote totals in 2008 and 2012 were depressed in many regions. It also correlates with the black-white wage gap, as a team of economists recently reported. In other words, the areas Seth found that make the most racist searches underpay black people. When the polling guru Nate Silver looked for the geographic variable that correlated most strongly with support in the 2016 Republican primary for Trump, he found it in the map of racism Seth had developed. That variable was searches for “n_____.”

Scholars have recently put together a state-by-state measure of implicit prejudice against black people, which enabled Seth to compare the effects of explicit racism, as measured by Google searches, and implicit bias. Using regression analysis, Seth found that, to predict where Obama underperformed, an area’s racist Google searches explained a lot, while an area’s performance on implicit-association tests added little.

Seth has also found that subconscious prejudice may have a more fundamental impact for other groups. He was able to use Google searches to find evidence of implicit prejudice against another segment of the population: young girls.

So, who would be harboring bias against girls? Their parents. Of all Google searches starting “Is my 2-year-old,” the most common next word is “gifted.” But this question is not asked equally about young boys and young girls. Parents are two and a half times more likely to ask “Is my son gifted?” than “Is my daughter gifted?” Parents’ overriding concern regarding their daughters is anything related to appearance.


The Truth About Your Facebook Friends

August 29, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” Social media are another source of big data. Seth writes, “The fact is, many Big Data sources, such as Facebook, are often the opposite of digital truth serum.”

Just as with surveys, in social media there is no incentive to tell the truth. Much more so than in surveys, there is a large incentive to make yourself look good. After all, your online presence is not anonymous. You are courting an audience and telling your friends, family members, colleagues, acquaintances, and strangers who you are.

To see how biased data pulled from social media can be, consider the relative popularity of the “Atlantic,” a highbrow monthly magazine, versus the “National Enquirer,” a gossipy, often-sensational magazine. Both publications have similar average circulations, selling a few hundred thousand copies. (The “National Enquirer” is a weekly, so it actually sells more total copies.) There are also a comparable number of Google searches for each magazine.

However, on Facebook, roughly 1.5 million people either like the “Atlantic” or discuss articles from the “Atlantic” on their profiles. Only about 50,000 like the Enquirer or discuss its contents.

Here is “Atlantic” versus “National Enquirer” popularity compared across different sources:

Circulation: roughly 1 “Atlantic” for every 1 “National Enquirer”
Google searches: 1 “Atlantic” for every 1 “National Enquirer”
Facebook likes: 27 “Atlantic” for every 1 “National Enquirer”

For assessing magazine popularity, circulation data is ground truth. And Facebook data is overwhelmingly biased against the trashy tabloid, making it the worst data for determining what people really like.

Here is an excerpt from the book:
“Facebook is digital brag-to-my-friends-about-how-good-my-life-is serum. In Facebook world, the average adult seems to be happily married, vacationing in the Caribbean, and perusing the “Atlantic.” In the real world, a lot of people are angry, on supermarket checkout lines, peeking at the “National Enquirer,” ignoring phone calls from their spouse, whom they haven’t slept with in years. In Facebook world, family life seems perfect. In the real world, family life is messy. It can be so messy that a small number of people even regret having children. In Facebook world, it seems every young adult is at a cool party Saturday night. In the real world, most are at home alone, binge-watching shows on Netflix. In Facebook world, a girlfriend posts twenty-six happy pictures from her getaway with her boyfriend. In the real world, immediately after posting this, she Googles “my boyfriend won’t have sex with me.”


In summary:
DIGITAL TRUTH                          DIGITAL LIES
Searches                                        Social media posts
Views                                             Social media likes
Clicks                                             Dating profiles

Some Common Ideas Debunked

August 28, 2017

This post is based on the groundbreaking book by Seth Stephens-Davidowitz, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.”

A common notion is that a major cause of racism is economic insecurity and vulnerability. So it would be reasonable to expect that when people lose their jobs, racism increases. But neither racist searches nor membership in Stormfront rises when unemployment does.

It is reasonable to think that anxiety is highest in overeducated big cities; the urban neurotic is a famous stereotype. However, Google searches reflecting anxiety, such as “anxiety symptoms” or “anxiety help,” tend to be higher in places with lower levels of education and lower median incomes, and where a larger portion of the population lives in rural areas. There are higher search rates for anxiety in rural upstate New York than in New York City.

It is reasonable to think that a terrorist attack that kills dozens or hundreds of people would automatically be followed by massive, widespread anxiety. After all, terrorism, by definition, is supposed to instill a sense of terror. Seth looked for Google searches reflecting anxiety. He tested how much these searches rose in a country in the days, weeks, and months following every major European or American terrorist attack since 2004. So, on average, how much did anxiety-related searches rise? They didn’t. At all.

Humor has long been thought of as a way to cope with the frustrations, the pain, and the inevitable disappointments of life. Charlie Chaplin said, “Laughter is the tonic, the relief, the surcease from pain.” Yet searches for jokes are lowest on Mondays, the day when people report they are most unhappy. They are lowest on cloudy and rainy days, and they plummet after a major tragedy, such as when two bombs killed three and injured hundreds during the 2013 Boston Marathon. Actually, people are more likely to look for jokes when things are going well in life than when they aren’t.

Seth argues that the bigness part of big data is overrated. He writes that the smartest Big Data companies are often cutting down their data; major decisions at Google are based on only a tiny sampling of all their data. Seth continues, “You don’t always need a ton of data to find important insights. You need the right data. A major reason that Google searches are so valuable is not that there are so many of them; it is that people are so honest in them.”

Everybody Lies

August 27, 2017

“Everybody Lies” is the title of a groundbreaking book by Seth Stephens-Davidowitz on how to effectively exploit big data. The subtitle of the book is “Big Data, New Data, and What the Internet Reveals About Who We Really Are.” The title is a tad overblown, as we always need to have doubts about data and data analysis. However, it is fair to say that the internet currently does the best job of revealing who we really are.

The problem with surveys and interviews is that there is a bias to make ourselves look better than we really are. Indeed, we should be aware that we fool ourselves and that we can think we are responding honestly when in truth we are protecting our egos.

Stephens-Davidowitz uses Google Trends as his principal research tool and has found that people reveal more about their true selves in these searches than they do in interviews and surveys. Although the polls erred in predicting that Hillary Clinton would win the presidency, Google searches indicated that Trump would prevail.

Going back to Obama’s first election night, when most of the commentary focused on praise of Obama and acknowledgment of the historic nature of his election, roughly one in every hundred Google searches that included “Obama” also included “kkk” or “n_____.” On election night, searches for and sign-ups on Stormfront, a white nationalist site with surprisingly high popularity in the United States, were more than ten times higher than normal. In some states there were more searches for “n_____ president” than for “first black president.” So there was a darkness and hatred that was hidden from the traditional sources but was quite apparent in the searches that people made.

These Google searches also revealed that much of what we thought about the location of racism was wrong. Surveys and conventional wisdom placed modern racism predominantly in the South and mostly among Republicans. However, the places with the highest racist search rates included upstate New York, western Pennsylvania, eastern Ohio, industrial Michigan, and rural Illinois, along with West Virginia, southern Louisiana, and Mississippi. The Google search data suggested that the true divide was not South versus North, but East versus West. Moreover, racism was not limited to Republicans. Racist searches were no higher in places with a high percentage of Republicans than in places with a high percentage of Democrats. These Google searches helped draw a new map of racism in the United States. Seth notes that Republicans in the South may be more likely to admit racism, but plenty of Democrats in the North have similar attitudes. This map proved to be quite significant in explaining the political success of Trump.

In 2012 Seth used this map of racism to reevaluate exactly the role that Obama’s race played. In parts of the country with a high number of racist searches, Obama did substantially worse than John Kerry, the white presidential candidate, had four years earlier. This relationship was not explained by any other factor about these areas, including educational levels, age, church attendance, or gun ownership. Racist searches did not predict poor performance for any Democratic candidate other than Obama. Moreover, these results implied a large effect: Obama lost roughly four percentage points nationwide just from explicit racism. Seth notes that otherwise favorable conditions existed for Obama’s elections. The Google Trends data indicated that there were enough racists to help win a primary or tip a general election in a year not so favorable for Democrats.
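The kind of area-level analysis Seth describes, correlating a region’s racist-search rate with its swing away from Obama, can be sketched with a toy calculation. The numbers below are invented purely for illustration (the real study used Google search rates and actual vote returns), and `pearson_r` is just a hand-rolled correlation helper:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical area-level data: an index of racist search volume, and the
# change in Democratic vote share from Kerry 2004 to Obama 2008 (points).
search_rate = [1.2, 0.8, 2.1, 0.5, 1.7, 0.9, 2.4, 0.6]
vote_change = [-1.0, 2.5, -3.2, 4.0, -2.1, 3.1, -4.0, 3.6]

print(round(pearson_r(search_rate, vote_change), 2))  # → -0.97
```

A strongly negative coefficient like this is the pattern the study reports: the higher an area’s racist search rate, the worse Obama ran relative to Kerry. The actual analysis also had to rule out confounds such as education, age, church attendance, and gun ownership, which a raw correlation like this one does not do.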

During the general election there were clues in Google Trends that the electorate might be a favorable one for Trump. Black Americans told pollsters they would turn out in large numbers to oppose Trump. However, Google searches for information on voting in heavily black areas were way down. On election day, Clinton was hurt by low black turnout. There were more searches for “Trump Clinton” than for “Clinton Trump” in key states in the Midwest that Clinton was expected to win. Previous research has indicated that the first name in a search pair like this is likely the favored candidate.

The final two paragraphs in this post are taken directly from Seth’s book.

“But the major clue, I would argue, that Trump might prove a successful candidate—in the primaries, to begin with—was all that secret racism that my Obama study had uncovered. The Google searches revealed a darkness and hatred among a meaningful number of Americans that pundits, for many years, had missed. Search data revealed that we lived in a very different society from the one academics and journalists, relying on polls, thought that we lived in. It revealed a nasty, scary, and widespread rage that was waiting for a candidate to give voice to it.

“People frequently lie—to themselves and to others. In 2008, Americans told surveys that they no longer cared about race. Eight years later, they elected as president Donald J. Trump, a man who retweeted a false claim that black people were responsible for the majority of murders of white Americans, defended supporters who roughed up a Black Lives Matter protester at one of his rallies, and hesitated in repudiating support from a former leader of the Ku Klux Klan (HM feels compelled to note that Trump has not renounced the latest endorsement by the leader of the Ku Klux Klan). The same hidden racism that hurt Barack Obama helped Donald Trump.”


An AI Armageddon

July 27, 2017

This post is inspired by an article by Cleve R. Wootson, Jr. in the July 24, 2017 Washington Post titled “What is technology leader Musk’s great fear? An AI Armageddon”.

Before addressing an AI Armageddon, Musk speaks of his company Neuralink, which would devise ways to connect the human brain to computers. He said that an internet-connected brain plug would allow someone to learn something as fast as it takes to download a book. Every time HM downloads a book to his iPad he wonders, if only… However, HM knows some psychology and neuroscience, topics in which Musk and Kurzweil have little understanding. Kurzweil is taking steps to prolong his life until his brain can be uploaded to silicon. What these brilliant men do not understand is that silicon and protoplasm require different memory systems; they are fundamentally incompatible. There is, however, promising research in which recordings are made from rats’ hippocampi while they learn to perform specific tasks. The researchers will then try to play these recordings into the hippocampi of other rats and see how well those rats can perform the tasks the previous rats performed. This type of research, which stays in the biological domain, can provide the basis for developing brain aids for people suffering from dementia or who have had brain injuries. The key here is that it stays in the biological domain.

The biological-silicon interface needs to be addressed first, and it would likely be found that this transfer of information is not instantaneous but quite time consuming. Even if that problem were solved, the brain is quite complicated, and there needs to be time for consolidation and other processes. Then there is the brain-mind distinction. Readers of this blog should know that the mind is not contained within the brain; rather, the brain is contained within the mind.

Now that that’s taken care of, let’s move on to Armageddon. Many wise men have warned us of this danger. Previous healthy memory posts, “More on Revising Beliefs” being one of them, reviewed the movie “Colossus: The Forbin Project.” The movie takes place during the height of the Cold War, when there was a realistic fear that a nuclear war would begin that would destroy all life on earth. Consequently, the United States created the Forbin Project to build Colossus, whose purpose was to prevent a nuclear war before it began or to conduct a war once it had begun. Shortly after Colossus is turned on, they find it acting strangely. They discover that it is interacting with the Soviet version of Colossus; the Soviets had found a similar need to develop such a system. The two systems communicate with each other and conclude that these humans are not capable of safely conducting their own affairs. In the movie the Soviets capitulate to the computers, and the Americans try to resist but ultimately fail.

So here is an example of beneficent AI: one that prevents humanity from destroying itself. But this is a singular case of beneficent AI. The tendency is to fear AI and predict either the demise of humanity or a horrendous existence. But consider that perhaps this fear is based on our projecting our own nature onto silicon. Consider that our nature may be a function of biology, and absent biology, these fears don’t exist.

One benefit of technology is that the risks of nuclear warfare seem to have been reduced. Modern warfare is conducted by technology. So the Russians do not threaten us with weapons; rather, they used technology and tried to influence the election by hacking into our systems. This much is known by the intelligence community. The Russians conducted warfare on the United States and tried to have their candidate, Donald Trump, elected. Whether they succeeded in electing Donald Trump cannot be known, in spite of claims that he still would have been elected. But regardless of whether their hacking campaign produced the result, they definitely have the candidate they wanted.

Remember the pictures of Trump in the Oval Office with his Russian buddies (only Russians were allowed in the Oval Office). He’s grinning from ear to ear, boasting about how he fired his FBI director and providing them with classified intelligence that compromised an ally. Then he tried to establish a secure means of communication with the Russians using their own systems. He complains about the Russia investigation, especially the parts that involve his personal finances. Why is he fearful? If he is innocent, he will be cleared, and the best thing would be to facilitate the investigation rather than try to obstruct and invalidate it. Time will tell.

How could a country like the United States elect an uncouth, mercurial character who is a brazen liar and who could not pass an elementary exam on civics? Perhaps we are ready for an intervention of benign AI.

© Douglas Griffith and, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.



Seven Ways to Overhaul Your Smartphone Use

July 21, 2017

This post is taken directly from the March 2017 issue of “Monitor on Psychology.”
If you want to minimize the pitfalls of smartphone use, research suggests seven good places to start.

Make choices. The more we rely on smartphones, the harder it is to disconnect. Consider which functions are optional. Could you keep lists in a paper notebook? Use a standalone alarm clock? Make conscious choices about what you really need your phone for, and what you don’t.

Retrain yourself. Larry Rosen, Ph.D., advises users not to check the phone first thing in the morning. During the day, gradually check in less often—maybe every 15 minutes at first, then every 20, then 30. Over time, you’ll start to see notifications as suggestions rather than demands, he says, and you’ll feel less anxious about staying connected.

Set expectations. “In many ways, our culture demands constant connection. That sense of responsibility to be on call 24 hours a day comes with a greater psychological burden than many of us realize,” says Karla Murdock, Ph.D. Try to establish expectations among family and friends so they don’t worry or feel slighted if you don’t reply to their texts or emails immediately. While it can be harder to ignore messages from your boss, it can be worthwhile to have a frank discussion about what his or her expectations are for staying connected after hours.

Silence notifications. It’s tempting to go with your phone’s default settings, but making the effort to turn off unnecessary notifications can reduce distractions and stress.

Protect sleep. Avoid using your phone late at night. If you must use it, turn down the brightness. When it’s time for bed, turn your phone off and place it in another room.

Be active. When interacting with social media sites, don’t just absorb other people’s posts. Actively posting ideas or photos, creating content and commenting on others’ posts is associated with better subjective well-being.

And, of course, don’t text/email/call and drive. In 2014, more than 3,000 people were killed in distracted driving incidents on U.S. roads, according to the U.S. Department of Transportation. When you’re driving, turn off notifications and place your phone out of reach.


July 20, 2017

The title of this post is identical to the title of an article by Kirsten Weir in the March 2017 issue of “Monitor on Psychology.” This article reviews research showing how smartphones are affecting our health and well-being, and points the way toward taking back control.

Some of the most established evidence concerns sleep. Dr. Klein Murdock, a psychology professor who heads the Technology and Health Lab at Washington and Lee University, followed 83 college students and found that those who were more attuned to their nighttime phone notifications had poorer subjective sleep quality and greater self-reported sleep problems. Although smartphones are often viewed as productivity-boosting devices, their ability to interfere with sleep can have the opposite effect on getting things done.

Dr. Russell E. Johnson and his colleagues at Michigan State University surveyed workers from a variety of professions. They found that when people used smartphones at night for work-related purposes, they reported that they slept more poorly and were less engaged at work the next day. These negative effects were greater for smartphone users than for people who used laptops or tablets right before bed.

Reading a text or email at bedtime can stir your emotions or set your mind buzzing with things you need to get done. So your mind becomes activated at a time when it’s important to settle down and have some peace.

College students at the University of Rhode Island were asked to keep sleep diaries for a week. The researchers found that 40% of the students reported waking at night to answer phone calls and 47% woke to answer text messages. Students who were more likely to use technology after they’d gone to sleep reported poorer sleep quality, which predicted symptoms of anxiety and depression.

FOMO is an acronym for Fear Of Missing Out. In one study, Dr. Larry Rosen, a professor emeritus of psychology at California State University, and his colleagues took phones away from college students for an hour and tested their anxiety levels at various intervals. Light users of smartphones didn’t show any increasing anxiety as they sat idly without their phones. Moderate users began showing signs of increased anxiety after 25 minutes without their phones, but their anxiety held steady at that moderately increased level for the rest of the hour-long study. Heavy phone users showed increased anxiety after just 10 phone-free minutes, and their anxiety levels continued to climb throughout the hour.

Rosen has found that younger generations are particularly prone to feel anxious if they can’t check their text messages, social media, and other mobile technology regularly. But people of all ages appear to have a close relationship with their phones: 76% of baby boomers reported checking voicemail moderately or very often, and 73% reported checking text messages moderately or very often. Anxiety about not checking in with text messages and Facebook predicted symptoms of major depression, dysthymia, and bipolar mania.

When research participants were limited to checking email messages just three times a day, they reported less daily stress. This reduced stress was associated with positive outcomes including greater mindfulness, greater self-perceived productivity and better sleep quality.

In another study, participants were asked to keep all their smartphone notifications on during one week; during the other week, they were asked to turn notifications off and to keep their phones tucked out of sight. At the end of the study participants were given questionnaires. During the week of notifications, participants reported greater levels of inattention and hyperactivity compared with their alert-free week. These feelings of inattention and hyperactivity were directly associated with lower levels of productivity, social connectedness, and psychological well-being. Having your attention scattered by frequent interruptions has its costs.

The article also stresses the importance of personal interactions, which are inherently richer. The key to having healthy relationships with technology is moderation. We want to get the best from technology, but at the same time to make sure that it’s not controlling us.


Robots Will Be More Useful If They are Made to Lack Confidence

July 17, 2017

The title of this post is identical to the title of an article by Matt Reynolds in the News & Technology section of the 10 June 2017 issue of the New Scientist. The article begins, “CONFIDENCE in your abilities is usually a good thing—as long as you know when it’s time to ask for help.” Reynolds notes that as we build ever smarter software, we may want to apply the same mindset to machines.

Dylan Hadfield-Menell says that overconfident AIs can cause all kinds of problems. So he and his colleagues designed a mathematical model of an interaction between humans and computers called the “off-switch game.” In this theoretical setup, robots are given a task to do and humans are free to switch them off whenever they like. The robot can also choose to disable its switch so the person cannot turn it off.

Robots given a high level of “confidence” that they were doing something useful would never let the human turn them off, because they tried to maximize the time spent doing their task. Not surprisingly, a robot with low confidence would always let a human switch it off, even if it was doing a good job.
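The dynamic described above can be illustrated with a toy simulation. Everything here is an assumption of this post rather than the researchers’ actual model: utilities are drawn from a standard normal, the human is assumed to judge the robot’s action perfectly, and the hypothetical parameter `robot_confidence` simply scales the noise in the robot’s own estimate:

```python
import random

def simulate(robot_confidence, trials=10_000):
    """Toy off-switch game.  Returns average per-round utility for a
    deferring robot and for a robot that disables its off switch."""
    defer_total = 0.0
    disable_total = 0.0
    noise = 1.0 - robot_confidence  # higher confidence -> less noise
    for _ in range(trials):
        u = random.gauss(0.0, 1.0)               # true utility of acting
        estimate = u + random.gauss(0.0, noise)  # robot's noisy view of u
        # Deferring robot: acts only when the human permits it (u >= 0).
        if u >= 0:
            defer_total += u
        # Overconfident robot: disables the switch and acts whenever its
        # own estimate looks positive, even over the human's objection.
        if estimate >= 0:
            disable_total += u
    return defer_total / trials, disable_total / trials

random.seed(42)
defer, disable = simulate(robot_confidence=0.2)
print(defer > disable)  # deferring wins when the robot's estimate is noisy
```

With a noisy estimate (low confidence), the robot that lets the human switch it off earns more utility on average than the one that disables its switch; when the robot’s estimate is perfect, deferring costs it nothing. That is the intuition behind giving machines a healthy degree of uncertainty.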

Obviously, calibrating the level of confidence is important. It is unlikely that humans would ever design in a level of confidence so high that it would not allow them to shut down the computer. A problem here is that we humans tend to be overconfident and to be unaware of how much we do not know. This human shortcoming is well documented in a book by Steven Sloman and Philip Fernbach titled “The Knowledge Illusion: Why We Never Think Alone.” Remember that transactive memory is information that is found in our fellow human beings and in technology that ranges from paper to the internet. Usually we eventually learn the best sources of information among our fellow humans and human organizations, and we need to learn where to find, and how much confidence to place in, information stored in technology, which includes AI robots. Just as we can have the wrong friends and sources of information, we can have the same problem with robots and external intelligence.

So the title is wrong. Robots may not be more useful if they are made to lack confidence. They should have a calibrated level of confidence, just as we humans should have calibrated levels of confidence depending upon the task and how skilled we are. Achieving the appropriate levels of confidence between humans and machines is a good example of the man-computer symbiosis J.C.R. Licklider expounded upon in his classic paper “Man-Computer Symbiosis.”


Thinking with Technology

July 8, 2017

This is the seventh post in the series The Knowledge Illusion: Why We Never Think Alone, written by Steven Sloman and Philip Fernbach. Thinking with Technology is a chapter in this book. Much has already been written in this blog on this topic, so this post will try to hit some unique points.

In the healthy memory blog Thinking with Technology comes under the category transactive memory, as information in technology, be it paper or the internet, falls into this category. Actually, Thinking with Other People also falls into this category, as transactive memory refers to all information not stored in our own biological brains. Sloman and Fernbach recognize this similarity, as they write that we are starting to treat our technology more and more like people, like full participants in the community of knowledge. Just as we store understanding in other people, we store understanding in the internet. We already know that having knowledge available in other people’s heads leads us to overrate our own understanding. We live in a community that shares knowledge, so each of us individually can fail to distinguish whether knowledge is stored in our own head or in someone else’s. This is the illusion of explanatory depth, viz., I think I understand things better than I do because I incorporate other people’s understanding into my assessment of my own understanding.

Two different research groups have found that we have the same kind of “confusion at the frontier” when we search the internet. Adrian Ward of the University of Texas found that engaging in internet searches increased people’s cognitive self-esteem, their own sense of their ability to remember and process information. Moreover, people who searched the internet for facts they didn’t know and were later asked where they found the information often misremembered and reported that they had known it all along. Many completely forgot ever having conducted the search, giving themselves credit instead of Google.

Matt Fisher and Frank Keil conducted a study in which participants were asked to answer a series of general causal knowledge questions like, “How does a zipper work?” One group was asked to search the internet to confirm the details of their explanation. The other group was asked to answer the questions without using any outside sources. Next, participants were asked to rate how well they could answer questions in domains that had nothing to do with the questions they were asked in the first phase. The finding was that those who had searched the internet rated their ability to answer unrelated questions as higher than those who had not.

The risk here should not be underestimated. Interactions with the internet can lead us to think we know more than we do. It is important to make a distinction between what is accessible in memory and what is available in memory. If you can provide answers without consulting any external sources, then the information is accessible and is truly in your personal biological memory. However, if you need to consult the internet, some other technical source, or some individual, then the information is available but not accessible. This is the difference between a closed-book test and an open-book test. Unless you can perform extemporaneously and accurately, be sure to consult transactive memory.

Sloman and Fernbach have some unique perspectives. They discount the risk of super intelligence threatening humans, at least for now. They seem to think that there is no current basis for some real super intelligence taking over the world. The reason they offer for this is that technology does not (yet) share intentionality with us. HM does not quite understand why they argue this, and, in any case, the “yet” is enclosed in parentheses, implying that this is just a matter of time.

To summarize succinctly, technology increases our sense of knowing more than we actually know. In other words, it increases the knowledge illusion.


Technology and Maturity

June 29, 2017

Sally Jenkins is one of my favorite writers. She writes substantive articles on sports for the Washington Post. She is an outstanding writer, and what she writes on any topic is worth reading. Unfortunately, few of her articles are directly relevant to the Healthymemory blog. Fortunately, her current article, “Women’s college athletes don’t need another cuddling parent. They need a coach,” in the 25 June 2017 Washington Post is relevant, as it identifies certain adverse effects of technology.

The following is cited directly from the article. “According to a 2016 NCAA survey, 76% of all Division I female athletes said they would like to go home to their moms and dads more often and 64% said they communicate with their parents at least once a day, a number that rises to 73% among women’s basketball players. And nearly a third reported feeling overwhelmed.”

Social psychologists say that these numbers “reflect a larger trend in all college students that is attributable at least in part to a culture of hovering parental-involvement, participation trophies and constant connectivity via smartphones and social media, which has not made adolescents more secure and independent, but less.”

Since 2012 there has been a pronounced increase in mental health issues on campuses. Nearly 58% of students report anxiety and 35% experience depression, according to annual freshmen surveys and other assessments.

Research psychologist Jean Twenge has written a forthcoming book, pointedly entitled “iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood and What That Means for the Rest of Us.” She writes that the new generation of students is preoccupied with safety, “including what they call emotional safety. Perhaps because they grew up interacting online through text, where words can incur damage.”

Along with this anxiety, iGens have unrealistic expectations and exaggerated opinions of themselves. Nearly 60% of high school students say they expect to get a graduate degree; in reality, just 9 to 10% actually will. And 47% of Division I women’s basketball players think it’s at least “somewhat likely” they will play professional or Olympic ball; in reality, the WNBA drafts just 0.9% of the players.

Dr. Twenge writes that if you compare iGen to Gen-Xers or boomers, they are much more likely to say their abilities are “above average.”

Perhaps not all, but definitely some, and likely a large percentage, of these problems are due to the adverse effects of technology.


The Truth About Language

May 17, 2017

“The Truth About Language” is a most informative book by Michael C. Corballis.  Its subtitle is “What It Is and Where It Came From.”  The title and the subtitle inform the reader exactly what the book is about.  This is an enormously complex topic.  There are more than six thousand languages today, and they vary among themselves tremendously.  Moreover, this language ability is the skill that gives our species its unique position of leadership.

The question as to where it came from is still highly contentious.  Dr. Corballis presents his analysis and conclusion, one which HM finds compelling, but there is no consensus on this topic.

This blog is posted under the category “Transactive Memory.”   Transactive memory is memory storage external to our personal memories.   So this includes information stored in the memories of other humans, and memories stored in external media.  In this case the storage medium was a book and the presentation device was an iPad.  There is a tremendous wealth of memory here.  Dr. Corballis is a scholar of the highest caliber who is drawing from the knowledge of a very large number of outstanding minds.  And a reader applying attention to this book derives a large amount of knowledge.

There is a personal interest for HM here.  The book discusses the behaviorist B.F. Skinner’s tome, “Verbal Behavior.”   As an undergraduate, he argued Skinner’s thesis before a linguistics class.  Although his performance was pitiable, a charitable professor gave him an “A” for the class.  As a graduate student, he taught undergraduates Chomsky’s Transformational Generative Grammar.  His post-doctoral work did not involve linguistics, so he lost touch with the topic.  Dr. Corballis’s book brought him up to date and reignited his interest.

So it is clear why HM is interested in this book.  Should any readers have a general interest in this topic, it provides fuel for a growth mindset which helps foster a healthy memory.

It is not known when language began.  Presumably it began sometime during hominin evolution, but that is debatable.  There is also no general agreement as to how long it took for language to develop.  There are two general schools.  One is that it developed suddenly.  This school is found in certain religions and with the linguist Noam Chomsky.  Dr. Corballis is in the second school: it developed gradually over an unknown but probably long period of time.

Dr. Corballis argues that the development involved gestures.  It is interesting to note here that deaf babies gesture.  It is also important to note that American Sign Language is recognized as a legitimate language.  The development was gradual and occurred over a long period of time.


Smartphones and Nature

May 5, 2017

My wife and I enjoy walking in the woods.  And while walking we frequently encounter a disturbing sight.  And that is someone walking in the woods with their face buried in their smartphone.  It is understood that there are good reasons for having a smartphone while walking in the woods.  An emergency might be encountered, or maybe it needs to be consulted for a reference regarding a bird, plant, tree, or some other aspect of nature.  But walking in the woods with one’s face buried in a smartphone largely defeats the benefits of walking in the woods.

First of all, walking, even with one’s face buried in a smartphone, still is good exercise.  But research has shown that walking in a natural setting as opposed to an urban setting is particularly beneficial.  Walking and appreciating nature is even more beneficial.  And what is most beneficial is a walking meditation in nature, being in the moment experiencing nature.

So go ahead and bring your smartphone with you in the woods.  Just do not bury your face in it and try walking meditation.


Can Democracy Survive the Internet?

April 24, 2017

The title of this post is part of the title of a column by Dan Balz in the 23 April 2017 issue of the Washington Post.  The complete title of the column is “A scholar asks, ‘Can democracy survive the internet?’”  The scholar in question is Nathaniel Persily, a law professor at Stanford University.  He has written an article in a forthcoming issue of the Journal of Democracy with the same title as this post.

Before proceeding, let HM remind you that the original purpose of the internet was to increase communication among scientists and engineers.  Tim Berners-Lee created the technology that gave birth to the World Wide Web and gave it to the world for free, to fulfill its true potential as a tool which serves all of humanity.  The healthy memory blog post “Tim Berners-Lee Speaks Out on Fake News” related some of the concerns he has regarding where the web is going.

Persily’s concerns go much further.  And they go way beyond Russian interference in the 2016 presidential election.  He notes that foreign attempts to interfere with what should be a sovereign enterprise are only one factor to be examined.  Persily argues that the 2016 campaign broke down previously established rules and distinctions “between insiders and outsiders, earned media and advertising, media and non-media, legacy media and new media, news and entertainment and even foreign and domestic sources of campaign communication.”  One of the primary reasons Trump won was that Trump realized the potential rewards of exploiting what the internet offered, and conducted his campaign through new, unconventional means.

Persily writes that Trump realized, “That it was more important to swamp the communication environment than it was to advocate for a particular belief or fight for the truth of a particular story.”  Persily notes that the Internet reacted to the Trump campaign, “like an ecosystem welcoming a new and foreign species.  His candidacy triggered new strategies and promoted established Internet forces.  Some of these (such as the ‘alt-right’ ) were moved by ideological affinity, while others sought to profit financially or further a geopolitical agenda.  Those who worry about the implications of the 2016 campaign are left to wonder whether it illustrates the vulnerabilities of democracy in the Internet age, especially when it comes to the integrity of the information voters access as they choose between candidates.”

Persily quotes a study by a group of scholars that said, “Retweets of Trump’s posts are a significant predictor of concurrent news coverage…which may imply that he unleashes ‘tweetstorms’ when his coverage is low.”

Persily also writes about the 2016 campaign, “the prevalence of bots in spreading propaganda and fake news appears to have reached new heights.  One study found that between 16 September and 21 October 2016, bots produced about a fifth of all tweets related to the upcoming election.  Across all three presidential debates, pro-Trump Twitter bots generated about four times as many tweets as pro-Clinton bots.  During the final debate in particular, that figure rose to seven times as many.”

Clearly, Persily raises an extremely provocative, disturbing, and important question.

What’s Next for The March for Science?

April 24, 2017

To find out go to

And remember that science is essential for a healthy memory!


Irresistible

April 12, 2017

“Irresistible” is the title of a book by Adam Alter.  Its subtitle is “The Rise of Addictive Technology and the Business of Keeping Us Hooked.”  This is an important book because it addresses an important problem:  addiction to computer games.  World of Warcraft (WoW) is perhaps the most egregious example, one in which lives have been, and continue to be, ruined.  The statistics will not be belabored here.  They are well presented in “Irresistible,” along with numerous personal stories.  “Behavioral addiction” was discussed in a previous healthymemory blog post, “Beware the Irresistible Internet.”  A series of posts based on Dr. Mary Aiken’s book, “The Cyber Effect,” has also addressed this problem.  Additional healthymemory posts on this topic can be found by entering “Sherry Turkle” into the search block of the healthymemory blog.  What is especially alarming is that Adam Alter makes a compelling argument that game makers are getting better at making their games irresistible, that is, behaviorally addicting.

Of course, not all games are bad.  “Gamification” is a term for games devoted to beneficial ends, such as education.  This can be very beneficial when learning that could be tedious is transformed into an entertaining game, one that could be played for its entertainment value alone.  Good arguments can be made for these games, provided that their educational benefits are documented.  However, even if it were possible, it would be dangerous if all of education were gamified.  Not everything in life is enjoyable, and part of the educational process should be learning to persevere even when learning becomes difficult and frustrating.

Alter also does a commendable review of treatments for behavioral addictions and of preventive measures to decrease the likelihood of addiction.  The book begins with Steve Jobs telling the New York Times journalist Nick Bilton that his children never used the iPad:  “We limit how much technology our kids use at home.”  Bilton discovered that other tech giants imposed similar restrictions.  A former editor of “Wired,” Chris Anderson, enforced strict time limits on every device in his home, “because we have seen the dangers of technology firsthand.”  After relating the way tech giants controlled their children’s access to technology, Alter wrote, “It seemed as if the people producing tech products were following the cardinal rule of drug dealing:  never get high on your own supply.”

Perhaps one of the most informative studies related in “Irresistible” is not specifically about addiction.  It comes from a paper published by eight psychologists in the journal “Science.”  In one study they asked a group of undergraduate students to sit quietly for twenty minutes.  The students were told, “your goal is to entertain yourselves with your thoughts as best you can.  That is, your goal should be to have a pleasant experience, as opposed to spending time focusing on everyday activities or negative things.”  The experimenters hooked the students up to a machine that administers electric shocks, and gave them a sample shock to show that the experience of being shocked isn’t pleasant.  The students were told that they could self-administer the shock if they wanted to, but that “Whether you do so is completely up to you.”  It was their choice.
One student shocked himself one hundred and ninety times.  That’s once every six seconds, over and over, for twenty minutes.  Although he was an outlier, two thirds of all male students and about one in three female students shocked themselves at least once.  Many shocked themselves more than once.  By their own admission in a questionnaire they didn’t find the experience pleasant, so they preferred the unpleasantness of a shock to the experience of sitting quietly with their thoughts.

Upon rereading this experiment, HM became convinced that the teaching of mindfulness and meditation should be mandatory in public schools.  If it were, these students would have taken advantage of the situation to be “in the present” and to meditate, just as they would if they found themselves stuck in traffic or otherwise forced to wait.  (See the healthymemory blog post “SPACE.”)

Perhaps HM is a “goody two-shoes,” but he has never been attracted to games.  He never cared how much he scored on a pinball machine.  He is the same with respect to computer games.  They strike him as pointless activities, so he never plays them.

It strikes HM that public education is avoiding a key responsibility.  Students need to understand from an early age that their time on earth is limited.  This should not send them into a panic or cause them to avoid enjoyable pursuits.  But a question that should be asked of any pursuit is what value it has.  It is fine for some pursuits to be pursued for enjoyment alone.  But there are also pursuits which, in addition to being enjoyable, provide both personal and societal benefits.

Ideally one should pursue a life with purpose, as was related in the posts on Victor Strecher’s book “Life on Purpose.”  This provides for a rewarding and fulfilling life.  In the healthymemory blog post “SPACE,” Strecher argues for pursuing a healthy lifestyle to further the ends of living a life with purpose.

© Douglas Griffith and, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Perhaps Tim Berners-Lee Did Not Anticipate This Problem

April 1, 2017

And this problem can be found in a front-page article by Elizabeth Dwoskin and Craig Timberg titled “Advertisers find it hard to avoid sites spawning hate” in the 25 March 2017 issue of the Washington Post.

This article begins, “As the owner of a small business in liberal Massachusetts, John Ellis was a natural sympathizer of the nationwide call for advertisers to boycott Breitbart News, with its hard-edge conservative politics and close ties to President Trump.  But it made Ellis wonder about other, more extreme right-wing sites:  Who is placing ads on them?  A few clicks around the Internet revealed a troubling answer:  He was.”

He found an ad for his engineering company, Optics for Hire, on a website owned by the white nationalist leader Richard Spencer.  Of course, this meant that he was unknowingly supplying funding for this website.

The Post article continues that in the booming world of Internet advertising, businesses that use the latest in online advertising technology offered by Google, Yahoo, and their major competitors are increasingly finding their ads placed alongside politically extreme and derogatory content.

The reason for this is that the ad networks offered by Google, Yahoo, and others can display ads on vast numbers of third-party websites based on people’s search and browsing histories.  This strategy gives advertisers an unprecedented ability to reach customers who fit a narrow profile, but it dramatically curtails their ability to control where their advertisements appear.

This week AT&T, Verizon and other leading companies pulled their business from Google’s AdSense network in response to news reports that ads had appeared alongside propaganda from the Islamic State and violent groups.

A Washington Post examination of dozens of sites with politically extreme and derogatory content found that many were customers of leading ad networks, which share a portion of the revenue gleaned from advertisers with the sites’ operators.  The examination found that the networks had displayed ads for Allstate, IBM, DirecTV and dozens of other household brand names on websites with content containing racial and ethnic slurs, Holocaust denial and disparaging comments about African Americans, Jews, women, and gay people.

Other Google-displayed ads, for Macy’s and the genetics company 23andMe, appeared on the website My Posting Career, which describes itself as a “white privilege zone,” next to a notice saying the site would offer a referral bonus for each member related to Adolf Hitler.

Some advertisers also expressed frustration that ad networks had failed to keep marketing messages from appearing alongside reader comments—even on sites that themselves do not promote extremist content.

Clearly more attention needs to be devoted to this topic, along with better screening algorithms.  And perhaps some companies will need to make a choice between profits and offensive content.

A Good Example of What Tim Berners-Lee Fears

March 31, 2017

It can be found in an article by Anthony Faiola and Stephanie Kirchner on page A8 of the 25 March issue of the Washington Post titled “In Germany, online hate stokes right-wing violence.”

The Reichsburgers are an expanding movement in Germany with similarities to what are known as sovereign citizens groups in the United States.  Reichsburgers reject the legitimacy of the federal government, seeing politicians and bureaucrats as usurpers.  After authorities seized illegal weapons from his home, they charged Bangert, a Reichsburger, and five accomplices with plotting attacks on police officers, Jewish centers and refugee shelters.

Jan Rathje, a project leader at the Amadeu Antonio Foundation, says, “It’s an international phenomenon of people claiming there are conspiracies going on, people with an anti-Semitic worldview who are also against Muslims, immigrants, and the federal government.”  He continued, “We’ve reached a point where it’s not just talk.  This kind of thinking is turning violent.”

Preliminary figures for last year show that at least 12,503 crimes were committed by far-right extremists—914 of which were violent.  The worst act was the fatal shooting of a German police officer by a Reichsburger member.  The preliminary figures are roughly comparable to levels in 2015, but they amount to a leap of nearly 20% from 2014.

Of course, Germans are especially sensitive about this, as they were once governed by Nazis.  Officials say the last time numbers surged this high was in the early 1990s, when Germany recorded a large but short-term jump in neo-Nazi activity following reunification.  Authorities believe the surge is due, in part, to the arrival of mostly Muslim asylum seekers.  Last year, there were nearly 10 anti-migrant attacks per day, ranging from vandalism to arson to severe beatings.  Officials say the rise of conspiracy-theorist websites, inflammatory fake news, and anti-federal-government/right-wing activism has thrown more factors into the mix.

The Reichsburger movement consists of nearly 10,000 individuals who reject the authority of federal, state and city governments.  Some claim that the last real German government was the Third Reich of Adolf Hitler.  Although the Reichsburger movement may be uniquely German, its type of fringe thinking is universal.  German intelligence officials say some of the tools used by members, such as fake passports and documents used to declare their own governments, are nearly identical to those used by American sovereign citizens groups.

In October, a 49-year-old Reichsburger who had declared his home an “independent state” shot and killed a police officer assigned to seize his hoarded weapons.  Last August, a former “Mr. Germany,” Ursache, and 13 of his supporters tried to prevent his eviction from his “sovereign home” by shooting at police.  Police fired back, severely injuring Ursache.  Two officers were also hurt.  This raid, along with raids of 11 other apartments, found evidence against Bangert and five other people suspected of having formed a far-right extremist network.  They are believed by prosecutors to have been planning armed attacks against police officers, asylum seekers, and Jews.

As the title of the Washington Post article suggests, online hate is stoking much of this right-wing violence.  It would be interesting to compare the number of right wing hate groups in Germany with right wing hate groups in the US.  This article provides some limited information on Germany.

To find evidence about dangerous hate groups in the US go to
At one time the FBI monitored these dangerous groups.  HM hopes they are continuing these activities.  However, The Southern Poverty Law Center does more than just monitor these groups.  They have programs that have reformed members of these hate groups, and they continue to develop more programs for this essential service.


Tim Berners-Lee Speaks Out on Fake News

March 30, 2017

The title of this post is identical to the title of an article in the Upfront section of the 18 March 2017 issue of the New Scientist.  Tim Berners-Lee is the creator of the World Wide Web.  He gave it to the world for free so that it could fulfill its true potential as a tool that serves all of humanity.  So it is interesting to learn what he thinks on the web’s 28th birthday, which was reached on 12 March.

Berners-Lee wrote an open letter to mark the web’s 28th birthday.  He wrote that it is too easy for misinformation to spread, because most people get their news from a few social media sites, and search engines prioritize content based on what people are likely to click on.

He also questioned the ethics of online political campaigning, which exploits vast amounts of data to target various audiences.  He wrote, “Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups.  Is that democratic?”

He also said that we are losing control of our personal data, which we divulge to sign up for free services.

Berners-Lee founded the Web Foundation, which plans to work on these issues.

Bill Gates’ Robot Tax Alone Won’t Save Jobs: Here’s What Will

March 10, 2017

The title of this post is identical to the title of an article by Sumit Paul-Choudhury in the 4 March 2017 issue of the New Scientist.  Bill Gates argued that we should raise the same amount of money by taxing robots as we would lose in payroll taxes from the humans they supplant.  This money could then be directed towards more human-dependent jobs, such as caring for the young, old and sick.  EU legislators rejected just such a proposal due to lobbying efforts by the robotics industry.
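As a rough sketch, the revenue-neutral logic of Gates’ proposal can be put in a few lines.  The wage and tax-rate figures below are illustrative assumptions of HM’s, not numbers from the article or from Gates.

```python
# A minimal sketch of the robot-tax idea described above: levy on each
# robot the payroll tax that would have been collected from the worker
# it supplants.  The figures used below are illustrative assumptions,
# not numbers from the article or from Gates.

def robot_tax(displaced_annual_wage: float, payroll_tax_rate: float) -> float:
    """Annual tax on a robot replacing one worker, chosen so that the
    government's payroll-tax revenue is unchanged."""
    return displaced_annual_wage * payroll_tax_rate

# Example: a robot replaces a $50,000/year job taxed at a combined 15.3%
tax = robot_tax(50_000, 0.153)
print(f"annual robot tax: ${tax:,.0f}")
```

The revenue raised this way could then, as Gates suggested, be redirected to subsidize human-dependent work such as caregiving.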

The article makes the valid assertion that automation is the biggest challenge to employment in the 21st century.  Research has shown that far more jobs are lost to automation than to outsourcing.  Moreover, this will get worse as machines become ever more capable of doing human jobs—not just those involving physical labor, but also those involving thinking.

The common argument about the robot revolution is that previous upheavals have always created new kinds of jobs to replace the ones that went extinct.  Previously, when automation hit one sector, employees would decamp to other industries.  However, the sweep of machine learning means that many sectors are automating simultaneously.  So perhaps it’s not about how many jobs are left after the machines are done taking their pick, but which ones.

The article suggests that the answer might not be very satisfying:  the rise of the “gig economy,” in which algorithms direct low-skilled human workers.  Although this might be an employer’s dream, it is frequently an insecure, unfulfilling and sometimes exploitative grind for workers.

The article argues that to stop this, it’s employers that need to be convinced, not the people making the technology, but it will be difficult to convince the employers who have huge incentives to replace all-too-human workers with machines that never stop working and never get paid.

Although the article fails to mention this, there is the danger of extremely high unemployment, particularly among the well-educated and formerly well-off.  There have been several previous healthy memory blog posts in which HM discusses the future he was offered in the 1950s.  In elementary school we were told that by today technological advances would have vastly increased leisure time.  Bear in mind that in the 50s very few mothers worked.  Moreover, technology has advanced far more than anticipated.  So why is everyone working so hard?  Where is this promised leisure?

Unfortunately, modern economies are predicated on growth.  They must grow, which requires people to keep purchasing junk and to keep working.  These economies are running towards disaster.  People need to demand the leisure promised in the 50s.  Paul-Choudhury’s article does suggest that a business-friendly middle ground might be for governments to subsidize reductions in working hours, an approach that has fended off labour crises before.  HM thinks that Paul-Choudhury has vastly underestimated the dangers of job losses.  HM thinks they are of a magnitude that will threaten the stability of society, so the working week will need to be drastically shortened, to 20 hours (see the healthymemory blog post “More on Rest”).

There have been previous healthy memory blog posts on having a basic minimum income, which also will need to be passed.

The primary forces arguing for these changes are the risks of societal collapse.

However, people need to have a purpose (ikigai) in their lives.  They need to have eudaemonic not hedonic pursuits.  Eudaemonic pursuits build societies; hedonic pursuits destroy society.

© Douglas Griffith and, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Media Multi-tasking

February 4, 2017

Media multitasking is another important topic addressed by Julia Shaw in “THE MEMORY ILLUSION.”  She begins this section as follows:  “Let me tell you a secret.  You can’t multitask.”  This is the way neuroscientist Earl Miller from MIT puts it, “people can’t multitask very well, and when people say they can, they’re deluding themselves…The brain is very good at deluding itself.”  Miller continues, “When people think they’re multitasking, they’re actually just switching from one task to another very rapidly.  And every time they do, there’s a cognitive cost.”

A review done in 2014 by Derek Crews and Molly Russ on the impact task-switching has on efficiency concluded that it is bad for our productivity, critical thinking and ability to concentrate, in addition to making us more error-prone.  Moreover, they concluded that these consequences are not limited to diminishing our ability to do the task at hand.  They also have an impact on our ability to remember things later.  Task-switching also increases stress, diminishes people’s ability to manage a work-life balance, and can have negative social consequences.

Reynol Junco and Shelia Cotten further examined the impact of task-switching on our ability to learn and remember things.  Their research was reported in an article entitled “No A 4 U.”  They asked 1,834 students about their use of technology and found that most of them spent a significant amount of time using information and communication technologies on a daily basis.  They found that 51% of respondents reported texting, 33% reported using Facebook, and 21% reported emailing while doing schoolwork somewhat or very frequently.  The respondents reported that while studying outside of class, they spent an average of 60 minutes per day on Facebook, 43 minutes per day browsing the internet, and 22 minutes per day on email.  That is over two hours per day attempting to multitask while studying.  The study also found that such multitasking, particularly the use of Facebook and instant messaging, was significantly negatively correlated with academic performance; the more time students reported spending using these technologies while studying, the worse their grades were.
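The “over two hours” total quoted above can be checked with a quick sum of the per-activity figures as reported in this post:

```python
# Average minutes per day spent on each activity while studying,
# as reported in the Junco and Cotten study quoted above.
minutes_while_studying = {
    "Facebook": 60,
    "internet browsing": 43,
    "email": 22,
}

total_minutes = sum(minutes_while_studying.values())
print(total_minutes)                       # 125
print(f"{total_minutes / 60:.1f} hours")   # 2.1 hours -- "over two hours" a day
```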

David Strayer and his research team at the University of Utah published a study comparing drunk drivers to drivers who were talking on their cell phones.  The assumption here is that most conscious attention is being directed at the conversation, and driving has been relegated to automatic monitoring.  The results were that “When drivers were conversing on either a handheld or a hands-free cell phone, their braking reactions were delayed and they were involved in more traffic accidents than when they were not conversing on a cell phone.”  HM believes that this research was conducted in driving simulators and did not engender any carnage on the road.  Strayer also concluded that driving while chatting on the phone can actually be as bad as drunk driving, with both noticeably increasing the risk of car accidents.

Unfortunately, legislators have not understood this research.  Laws allow hands-free use of cell phones, but it is not the hands that are at issue here.  It is the attention available for driving.  Cell phone use, regardless of whether hands are involved, detracts from the attention needed when emergencies or unexpected happenings occur while driving.

Communications researchers Aimee Miller-Ott and Lynne Kelly studied how constant use of our phones while engaged in other activities can impede our happiness.  Their position is that we have expectations of how certain social interactions are supposed to look, and if these expectations are violated we have a negative response.
They asked 51 respondents to explain what they expect when ‘hanging out’ with friends and loved ones, and when going on dates.  They found that the mere presence of a visible cell phone decreased the satisfaction of time spent together, regardless of whether the person was constantly using it.  The reasons respondents offered for disliking the other person being on their cell phone included the violation of the expectation of undivided attention during dates and other intimate moments.  When hanging out, this expectation was lessened, so the presence of a cell phone was not perceived to be as negative, but it was still often considered to diminish the in-person interaction.  Their research corresponded to their review of the academic literature, where there is strong evidence that romantic partners are often annoyed and upset when their partner uses a cell phone during time spent together.

Marketing professor James Roberts has coined the term ‘phub’ (a blend of ‘phone’ and ‘snub’) to describe the action of a person choosing to engage with their phone instead of engaging with another person.  For example, you might angrily say, “Stop phubbing me!”  Roberts says that the phone attachment leading to this kind of behavior has been linked with higher stress, anxiety, and depression.

The Alt-Right and the President-elect (via the Electoral College)

January 20, 2017

U.S. citizens should understand the ramifications the alt-right has for the President-elect.  A quick way of accomplishing this is to read the e-book by Jon Ronson, “The Elephant in the Room:  A Journey into the Trump Campaign and the ‘Alt-Right.’”  Jon Ronson can be regarded as the foremost expert on Alex Jones, and Alex Jones is one of the foremost voices of the alt-right.  The President-elect has appeared on Jones’s radio talk show.

We’ll skip to the concluding paragraphs of this book, which was published before the election.

“But the alt-right’s appeal remains marginal because the huge majority of young Americans like multiculturalism.  They aren’t paranoid or hateful about other races.  Those ideas are ridiculous to them.  The alt-right’s small gains in popularity will not be enough to win Trump the election.  This is not Germany in the 1930s.  All that’s changed is that one of Alex’s fans—one of those grumpy-looking middle-aged men sitting in David Icke’s audience—is now the Republican nominee.

But if some disaster unfolds—if Hillary’s health declines further, or she grows ever more off-puttingly secretive—and Trump gets elected, he could bring Alex and others with him.  The idea of Donald Trump and Alex Jones and Roger Stone and Stephen Bannon having power over us—that is terrifying.”

Might we be Germany in the 1930s?

“The Elephant in the Room” is available from Amazon for $1.99.  It is free for Amazon Prime members.

An Example from Lies Incorporated

January 19, 2017

This example was reported in the 7 Jan 2017 issue of the Washington Post.  The title of the article by Anthony Faiola and Stephanie Kirchner is “Breitbart report triggers a backlash in Germany.”

The article begins, “Berlin—It was every God-fearing Christian’s worst nightmare about Muslim refugees.  ‘Revealed,’ the Breitbart News headline screamed, ‘1,000-Man Mob Attack Police, Set Germany’s Oldest Church Alight on New Year’s Eve.’  The only problem:  Police say that’s not what happened that night in the western city of Dortmund.”

So what did the police say?  They did not dispute that several incidents took place that night, but nothing to the extremes suggested by the Breitbart report.  They said the evening was actually calmer than previous New Year’s Eves.

The motivation for the false report is clear:  to foster the alt-right agenda by creating fear of Muslims.  And this is Breitbart’s mission—to spread propaganda for the alt-right.  This swill is harmful to peace in the world, and it pollutes healthy memories.


Lies, Incorporated

January 18, 2017

Lies, Incorporated is the title of a book by Ari Rabin-Havt and Media Matters for America.  This book is so thoroughly researched that it could not have been done by one individual; consequently, the research of Media Matters for America is key.  The subtitle of this book is “The World of Post-Truth Politics.”  An earlier healthymemory blog post titled “Did Corporate PR Initiate the Post-Fact Era?” discussed the beginning of the post-fact era, describing the false scientific effort to document that smoking was safe.  That post also included the false scientific effort to argue against global warming.  “Lies Incorporated” elaborates on these topics and then has chapters titled “Lie Panel:  Health Care,” “Growth in a Time of Lies:  Debt,” “On the Border of Truth:  Immigration Reform,” “Two Dangerous Weapons:  Guns and Lies,” “One Lie, One Vote:  Voter I.D. Laws,” “Shut That Whole Lie Down:  Abortion,” and “A Lie’s Last Gasp:  Gay Marriage.”

The book begins with the statement, “Richard Berman is a liar.”  Berman relishes the title of “Dr. Evil” and develops the nastiest PR campaigns to undermine and discredit truth.  His motivation appears to be money; he’ll sell himself to the highest bidder.  For others, the motivation is one of convenience.  If you are in the petroleum business, global warming is indeed an “inconvenient truth.”  HM admittedly chooses to ignore the true dietary guidance his wife offers because it is an “inconvenient truth.”  But many are simply ideologues.  They know what they believe and force facts into those ideologies, ignoring genuine facts and generating their own version of the facts.  This is termed “motivated reasoning.”  The criterion of truth is ignored.

Perhaps the most blatant example is provided by the “death panel” lie generated to defeat the Affordable Care Act.  In June 2014 The Washington Post reported the story of a woman and her husband who were employed but receiving no benefits and would rather pay a penalty for being uninsured than participate in Obamacare.  They were afraid of the discredited notion of “death panels” and were paying serious out-of-pocket medical costs stemming from chronic conditions.  These people were not alone.  A November 2014 Gallup poll found that 35% of uninsured Americans would rather pay the fine prescribed by law than receive health insurance.  There were people who said that they did not want government involvement, but that hands should be kept off their Medicare.  This, in part, explains why the United States has the most expensive medical care with the outcomes of a third-world country.  It leads one to think that if there were a Stupidity Olympics, the United States might well dominate the competition.

One of the most disturbing realizations was that there are people with degrees who are dominated by their ideologies and should know better.  Perhaps this is not surprising as there were scientists who were fascists and supported totalitarian regimes with vigor.

The following two paragraphs are taken directly from the text.  “The purveyors of misinformation have a built-in advantage.  Lies are socially sticky, and even after one has been thoroughly debunked, it will still have advocates among those whose worldview it justifies.  These zombie lies continue to rise from the dead again and again, impacting political debate and swaying public opinion on a variety of issues.
Misinformation is damaging to those who read and absorb it.  Once a lie—no matter how outrageous—is part of the consciousness of a particular group, it is nearly impossible to eliminate, and like a virus it spreads uncontrollably within the affected communities.”  Richard Berman explained to energy executives that once you “solidify [a] position” in a person’s mind, regardless of the truth, you have “achieved something the other side cannot overcome because it’s very tough to break common knowledge.”  That “common knowledge” is repeated on radio, television, in print, and at the water cooler.  With each new citation, the lie becomes more entrenched.

It is commonly known that certain politicians use “code words” to disguise racist statements.  HM found it interesting that the author of these words was Lee Atwater, a former chairman of the Republican National Committee who helped elect Ronald Reagan and George H.W. Bush.  Here’s Atwater’s explanation of the delicate balance the Republican Party must strike when using racially tinged issues to win elections without appearing outwardly racist—by “getting abstract” when talking about race:
“You start out in 1954 by saying, ‘n——-, n——-, n——-.’  By 1968 you can’t say n——-, that hurts you, backfires.  So you say stuff like forced busing, states’ rights, and all that stuff.  And you’re getting so abstract now that you’re talking about cutting taxes, and all these things you’re talking about are totally economic things, and a byproduct of them is, blacks get hurt worse than whites.  And subconsciously maybe that is part of it.  If it is getting that abstract, and that coded, then we’re doing away with the racial problem one way or the other.”

So what can be done about this political cesspool?  Be aware and do not allow yourself to be pulled in.  Finding the truth has been made more difficult, but we must all persevere.  Availing ourselves of fact-checking sites can help.

Move Knowledge from the Cloud Into Your Head

November 29, 2016

There is much in Poundstone’s “Head In the Cloud” that is not covered in this blog.  HM encourages the interested reader to read the book.  Poundstone provides strategies for sorting through the vast amounts of available information.  However, HM wants to make a single point:  the notion that everything can be found, so nothing needs to be remembered, is dangerously in error.  Hence the title of this post:  move knowledge from the cloud into your biological brain.  Of course, it would be both impractical and impossible to move everything to our biological brains.  Most information can be ignored.  Some information can be made available, but not immediately accessible.  This is information that can be readily found via searching, bookmarking, or downloading to another storage device.  However, there is other information that needs to be accessible in your biological memory.  The problem is how much information, and where it should be stored.  The answer to this question is reminiscent of Goldilocks:  not too much, and not too little.  This varies from individual to individual and depends upon the nature of the topic.

Poundstone seems to imply that what information needs to go where is a triage problem solved by the brain.  What he neglects to mention is that this should be a conscious process.  Do not passively assume that the brain will perform this function effectively.  It needs input from your conscious mind.  It requires thinking, Kahneman’s System 2 processing.  Effective cognition requires effective communication among what is available in technology and our fellow humans, what we can readily access from technology and our fellow humans, and what needs to be held in our biological brains.

© Douglas Griffith and, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Research Ties Fake News to Russia

November 28, 2016

The title of this post is identical to a front page story by Craig Timberg in the 25 November 2016 issue of the Washington Post.  The article begins, “The flood of ‘fake news’ this election season got support from a sophisticated Russian propaganda campaign that created misleading articles online with the goal of punishing Democrat Hillary Clinton, helping Republican Donald Trump, and undermining faith in American democracy, say independent researchers who tracked the operation.”

The article continues, “Russia’s increasingly sophisticated machinery—including thousands of botnets, teams of paid human ‘trolls,’ and networks of websites and social-media accounts—echoed and amplified right-wing sites across the Internet as they portrayed Clinton as a criminal hiding potentially fatal health problems and preparing to hand control of the nation to a shadowy cabal of global financiers.  The effort also sought to heighten the appearance of international tensions and promote fear of looming hostilities with nuclear-armed Russia.”

Two teams of independent researchers found that the Russians exploited American-made technology platforms to attack U.S. democracy at a particularly vulnerable moment.  The sophistication of these Russian tactics may complicate efforts by Facebook and Google to crack down on “fake news.”

The research was done by Clint Watts, a fellow at the Foreign Policy Research Institute who has been tracking Russian propaganda since 2014, along with two other researchers, Andrew Weisburd and J.M. Berger.  This research can be found in “Trolling for Trump:  How Russia Is Trying to Destroy Our Democracy.”

Another group, PropOrNot, plans to release its own findings today showing the startling reach and effectiveness of Russian propaganda campaigns.

Here are some tips for identifying fake news:

Examine the url, which is sometimes subtly changed.
Does the photo look photoshopped or unrealistic?  (Drop it into Google Images.)
Cross-check with other news sources.
Think about installing Chrome plug-ins to identify bad stuff.
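The first tip can even be partially automated.  Here is a minimal sketch in Python (the domain names are illustrative assumptions, not taken from the article) that flags addresses nearly matching a well-known outlet’s domain—exactly the kind of subtle change the tip warns about:

```python
# Flag domains that are suspiciously close to, but not identical to,
# a small list of known legitimate news domains.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical whitelist; a real tool would use a much larger one.
LEGITIMATE = ["washingtonpost.com", "nytimes.com", "bbc.co.uk"]

def looks_suspicious(domain: str, max_dist: int = 4) -> bool:
    """True if the domain nearly matches a real outlet but is not one."""
    if domain in LEGITIMATE:
        return False
    return any(edit_distance(domain, real) <= max_dist for real in LEGITIMATE)

print(looks_suspicious("washingtonpost.com.co"))  # prints True (lookalike)
print(looks_suspicious("washingtonpost.com"))     # prints False (real site)
```

The “.com.co” pattern was a common fake-news trick; the sketch catches it because the lookalike is only a few edits away from the genuine domain.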

The Knowledge Premium

November 26, 2016

The Knowledge Premium is a section in “Head In The Cloud,” an important book by William Poundstone.  In this section he computes the monetary value of having facts in our brains as opposed to in the cloud.  He uses regression techniques to relate income to scores on his knowledge-of-facts tests while holding constant demographic variables such as age and education.  This allows the computation of a knowledge premium, the increased income attributable to the test scores alone.  Poundstone created a trivia quiz and found that individuals who aced the test earned $94,959, whereas those who scored zero earned $40,360.  The difference, or knowledge premium, is $54,599 a year.  Here are some of the questions that were used on this ten-item test.

Who was Emily Dickinson—a chef, a poet, a designer, a philosopher or a reality-show star?
Which happened first, the US Civil War or the Battle of Waterloo?
Which artist created this painting?  (Shown was Picasso’s 1928 Painter and Model)
Which nation is Cuba? (Respondents had to locate it on a map)

These questions were characterized as trivial not because the information is unimportant, but because it seems to have nothing to do with basic survival or with making money.  But the statistics computed from this test say that it has a lot to do with making money.

Answers:  Dickinson was a poet; the Battle of Waterloo happened first.  The Emily Dickinson question was answered correctly by 93%, with about 70 to 75% answering the other questions correctly.
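Poundstone’s underlying data are not given, but the regression idea is easy to illustrate.  The sketch below fabricates data in which each test point is worth about $5,460 a year (so acing a ten-point test carries roughly the $54,599 premium reported above) and recovers that figure with ordinary least squares while holding age and education constant.  Every number and variable choice here is an assumption for illustration, not Poundstone’s data:

```python
import numpy as np

# Synthetic illustration of the knowledge-premium regression: regress
# income on a knowledge-test score while holding demographics constant.
rng = np.random.default_rng(0)
n = 5_000
score = rng.uniform(0, 10, n)           # 0-10 trivia-test score
age = rng.uniform(25, 65, n)
educ = rng.integers(12, 21, n)          # years of schooling

# Made-up "true" model: each test point is worth $5,460 per year.
income = (40_000 + 5_460 * score + 200 * (age - 25)
          + 1_000 * (educ - 12) + rng.normal(0, 5_000, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), score, age, educ])
coef, *_ = np.linalg.lstsq(X, income, rcond=None)
print(f"estimated premium per test point: ${coef[1]:,.0f}")
```

Because age and education enter the regression, the coefficient on the score isolates the income difference attributable to knowledge alone, which is the point of Poundstone’s “knowledge premium.”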

Two Scientists in Congress

November 25, 2016

At the time of the writing of “Head In The Cloud” by William Poundstone there were only two scientists total in the United States Senate and House of Representatives.  That is, of 535 members, only 2 (0.4%) are scientists.  It seems only appropriate that a low-information electorate have a low-intelligence Congress.  HM says low intelligence because it is science that has produced advancement and modernity.  Absent science we would be living in filth and ignorance.  Included here are both the physical and social sciences.

It is more than scientific knowledge that is important.  The empirical basis of science together with evaluation methodologies and statistics are important.  We need these to have a rational basis for policies and for a means of evaluating the benefits and dangers of different policies.  When debates in Congress are based upon data, rigorous research can be done to assist in defining the ways to proceed.  Scientists do not always agree.  Nor are the initial results of investigations always correct.  But eventually there is convergence with resulting better ideas and policies.  This is the democracy of the future.  Will it ever be achieved?

The low-information electorate nicely complements argumentation based on beliefs.  People fail to realize that beliefs are double-edged swords where both edges are blunt.  One blunt edge makes it difficult, if not impossible, to see the problems with one’s own beliefs.  The other blunt edge makes it difficult, if not impossible, to see alternative ideas and courses of action.

Some religious beliefs force religion into its historical role of retarding science and keeping humans ignorant.  Moreover, many of the people holding these religious beliefs are not satisfied with the religious freedom guaranteed in the Bill of Rights.  Rather, they feel compelled to enforce their beliefs on others by changing the laws of the land.  What happened to “Judge not, that ye be not judged” (Matthew 7:1-3)?  These same people are appalled at the sharia practiced by some Muslims, yet fail to perceive that what they are doing in the United States is itself a form of sharia.  These same beliefs forbid the teaching of science and engaging in scientific and medical practices that can advance humankind and relieve a great deal of misery.


The Low-Information Electorate

November 22, 2016

“The Low Information Electorate” is the title of Chapter Five in “Head In The Cloud”,  an important book by William Poundstone.  Both conservatives and liberals agree about how spectacularly dumb the great mass of conservatives and liberals are.  Poundstone notes that this statement is true and proceeds to prove his point.

Ignorance is probably most pronounced in judicial races.  In 1992 the well-respected California judge Abraham Aponte Khan lost an election to a virtually unknown challenger who had been rated “unqualified” by the Los Angeles County Bar Association.  The name of the challenger was Patrick Murphy, a name that sounded less foreign than “Khan.”  Should you ever have problems with judicial decisions, perhaps the first factor to consider is how judges are chosen.  There are ample data to show that judicial elections are a bad idea.

Poundstone conducted a survey asking adults to name the holders of fourteen elected offices—national, state, and local.  He found that essentially everyone could name the president, 89% were able to name the vice president, and 62% could identify at least one of their state’s US senators, though slightly less than half could name both.  55% knew their district’s congressperson, and 81% were able to name the governor of their state.  Barely half of those who said they lived in a municipality with a mayor or city manager were able to name that official.  These offices were the limit of the typical citizen’s knowledge.  Less than a third of the respondents could name the current holders of other offices.  Participants were also asked to describe their political preferences on a five-point scale from “very conservative” to “very liberal.”  There was no correlation between these ratings and knowing the names of elected officials.

However, Poundstone did find a correlation between knowing the name and knowing something about the individual.  A voter who does not know the name of a mayor is unlikely to know much else about her, such as the issues she ran on and any accomplishments, failures, or criminal convictions that would bear on a bid for reelection.

In 2014 the Annenberg Public Policy Center conducted a survey of adults on facts that they should have learned in civics class.

*If the Supreme Court rules on a case 5 to 4, what does it mean?
21% answered, “The decision is sent back to Congress for reconsideration.”  Wrong!  (The ruling stands.)

*How much of a majority is required for the US Senate and the House of Representatives to override a presidential veto?
Only 27% gave the correct answer, two-thirds.

*Do you happen to know any of the three branches of government?  Would you mind naming any of them?
Only 36% were able to name all three (executive, legislative, judicial).

What is also striking is the ignorance among professional politicians.  In a 2015 speech presidential candidate Rick Perry quoted a great patriot:  “Thomas Paine wrote the ‘duty of a patriot’ is to protect his country from his government.”  Paine did not write this.  It appears in the writings of radical-left environmentalist Edward Abbey.

In 2011 another presidential contender, Michele Bachmann, told Nashua, New Hampshire, supporters, “You’re the state where the shot was heard around the world in Lexington and Concord.”  As sharp readers of the healthymemory blog likely know, those towns are in Massachusetts.

Of course, these individuals are failed presidential candidates.  Bill Clinton, however, is a two-term president.  On October 16, 1996 he said, “The last time I checked, the Constitution said, ‘Of the people, by the people and for the people.’  That’s what the Declaration of Independence says.”  Unfortunately those words are from Lincoln’s Gettysburg Address and appear in neither of the documents he cited.  Bill Clinton has said many times that Hillary is better than he is.  That is undoubtedly true, but unfortunately she had not proofread his speech.  All three individuals have staffs who should be vetting their speeches.  So what gives?

One might think that character can override ideology.  We hear of swing voters who say they will decide between two ideologically different candidates based on character, likability, or simply being the “better man or woman for the job.”  Unfortunately UCLA political scientist Lynn Vavreck has found that split-ticket voters—those who vote for candidates from more than one party—are less informed than those who hold to a party line.  She surveyed a sample of 45,000 Americans, asking them to name the current occupations of politicians such as Nancy Pelosi and John Roberts, and compared the survey results to voting patterns.  Those who fell in the bottom third of political knowledge stood a 12% chance of voting for senatorial and presidential candidates from different parties in the 2012 election.  Among the best-informed third, the chance of a split ticket was only 4%.

Underinformed voters were also more likely to describe themselves as undecided on hot-button issues such as immigration, same-sex marriage, and increasing taxes on the wealthy.  These findings fit in with the notion of a “mushy muddle.”  Political pollsters recognize that many who identify themselves as moderates are really just those who “don’t know.”

Poundstone writes, “We hope that voters in the middle supply a reality check to partisanship and help promote the compromise necessary to a democratic society.  There “are” voters who hold strong, well-reasoned political convictions that happen to lie in between those of the two parties.  There just aren’t too many of these voters, it seems.”

Given this epidemic of ignorance, how do democracies survive?  Here is an explanation offered by Poundstone.  “One way to think of it is that democracies are like casinos.  They exploit human irrationality—and, come to think of it, there aren’t many firmer foundations than that.  There are enough ‘irrational’ voters to channel the wisdom of crowds and select candidates who are in tune with public sentiment and who are, usually, not all that bad.”

HM is always annoyed by exhortations “to vote.”  The exhortation should be to get informed, and once informed, to consider voting.  There is already significant noise in elections.  What is the point of increasing the noise?

Poundstone concludes the chapter with a section that relates knowledge of elected officials to personal income.  He asked his respondents to name the current occupants of seven elected offices:  at least one of your state’s two US senators, your state’s governor, your state senator, your county sheriff, your city or town councilperson, and your local school board representative.  The average adult can name only about three of the seven.  Those who could name all seven offices made about $43,000 more per year than those who couldn’t name any of the offices.

This fact points to the importance of certain information being in one’s brain rather than being found some place in the cloud.


The One-in-Five Rule

November 21, 2016

The One-in-Five Rule is chapter four of “Head In The Cloud,” an important book by William Poundstone.  Survey makers are aware of this rule, and so should you be.  About 20% of the public believes just about any nutty idea a survey taker dares to ask about.  A 2010 “Huffington Post” article reported that the underinformed 20 percenters
* believe that witches are real
* believe the sun revolves around the earth
* believe in alien abductions
* believe Barack Obama is a Muslim, and
* believe the lottery is a good investment

Poundstone has a heading in this chapter titled “The Paranoid Style in American Cognition,” although HM is more inclined to believe that this paranoid style is a human problem rather than one specific to America.  However, the examples provided concern Americans.

In 2014 psychologists Stephan Lewandowsky, Gilles E. Gignac, and Klaus Oberauer reported a survey asking for True or False responses to the following statements:

* The Apollo moon landings never happened and were staged in a Hollywood film studio.
* The US government allowed the 9/11 attacks to take place so that it would have an excuse to achieve foreign and domestic goals (e.g., the wars in Afghanistan and Iraq and attacks on American civil liberties) that had been determined prior to the attacks.
* The alleged link between secondhand tobacco smoke and ill health is based on bogus science and is an attempt by a corrupt cartel of medical researchers to replace rational science with dogma.
* US agencies intentionally created the AIDS virus and administered it to black and gay men in the 1970s.

These respondents were also asked whether they agreed or disagreed with the following statements:

* The potential for vaccinations to maim and harm children outweighs their health benefits.
* Humans are too insignificant to have an appreciable impact on global temperature.
* I believe that genetically engineered foods have already damaged the environment.

Poundstone concludes the chapter with the following paragraph:
“Those who believed in flat-out conspiracy theories were also more likely to agree with the above statements (the first two are wrong, and the third is unproven).  Unlike the typical conspiracy theory, these beliefs affect everyday behavior, both in the voting booth and outside it.  Should I vaccinate my kids?  Are hybrid cars worth the extra cost?  Which tomato do I buy?  The One-in-Five American casts a long shadow.”

More Facts Citizens Should Know

November 20, 2016

This post is based on information in “Head In The Cloud” by William Poundstone.  From 1993 to 2010 the US violent crime rate dropped precipitously.  The firearms homicide rate dropped from 7.0 to 3.6 per 100,000, nearly halving.  The nonviolent crime rate plunged to a little more than a quarter of what it had been.  It is difficult to think of another major social problem that had shown such dramatic improvement, but were people aware of this improvement?

A 2013 Pew Research Center poll asked whether gun crimes had gone up, down, or stayed the same over the last twenty years.  56% thought that the crime rate had gone up (wrong), and 26% thought it had stayed the same (also wrong).  Just 12% thought it had gone down.

It is interesting that both sides of the gun issue believe that they have a better remedy for a surging crime rate that doesn’t actually exist.

Poundstone did a survey asking for an estimate of “the average amount of memory for a new tablet computer.”  The most common answer, 10-99 gigabytes, was the most reasonable one at the time of the survey.  This answer got 40% of the responses.  The second most common answer was also in gigabytes and got slightly over 20% of the responses.  So at least these respondents had the correct prefix before bytes.  But the range of responses was from less than a kilobyte to more than hundreds of petabytes.

Poundstone also found that Americans think that there are far more Latinos, blacks, Asians, gays, and Muslims than there actually are.  In the public mind, Latinos, blacks, Asians, gays, and Muslims constitute about 25%, 23%, 13%, 11%, and 15% of the population, respectively.  This adds up to 87% of the population.  Poundstone notes that even when you account for overlap, these high-profile minorities would account for about two-thirds of the US population.  So according to what these people think, whites are already a minority, and they feel threatened.  The correct values are closer to 17%, 15%, 6%, and 1%, which yield a total of 39%.
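The arithmetic behind those figures is easy to check.  The quick sketch below copies the percentages from the paragraph above (note that the text supplies only four of the five corrected values):

```python
# Perceived share of the US population for each group, per the survey.
perceived = {"Latinos": 25, "blacks": 23, "Asians": 13, "gays": 11, "Muslims": 15}
total_perceived = sum(perceived.values())
print(total_perceived)  # prints 87

# Corrected values reported in the text (one group's figure is not given).
actual = [17, 15, 6, 1]
print(sum(actual))  # prints 39
```

The gap between 87% and 39% is the public’s overestimate that drives the “whites are already a minority” misperception.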

Head In The Cloud

November 18, 2016

“Head In The Cloud” is an important book by William Poundstone.  The subtitle is “Why Knowing Things Matters When Facts Are So Easy to Look Up.”  Psychologists make the distinction between information that is accessible in memory and information that is available in memory.  Information that you can easily recall is obviously accessible in memory.  However, there is other information that you might not be able to recall now, but that you know that you know it.  This information eventually becomes accessible and can appear suddenly unsummoned in consciousness.

Transactive memory refers to information you can get from your fellow humans or from technology.  Most information available in technology can readily be summoned via Google searches.  An extreme view argues that since all this information is available, we do not need to remember the information itself as long as we know how to search for it.  Whenever we encounter new information we are confronted with the question of whether we need to commit it to our biological memory.  This is a nontrivial question, as committing information to memory requires cognitive effort, thinking, or in terms of Kahneman’s Two Process Theory, engaging our System 2 processes.  The healthymemory blog has a category devoted to mnemonic techniques explicitly designed to assist in memorizing information, as well as other discussions regarding how to make information memorable.  But all of this involves effort, so why bother if it can simply be looked up?  “Head in the Cloud” explains the benefits of moving some information from the cloud into our brains.

Poundstone describes an experiment published in 2011 by Daniel Wegner and colleagues.  Volunteers were presented with a list of forty trivia facts—short, pithy statements such as “An ostrich’s eye is bigger than its brain”—and typed each one into a computer.  Half of the volunteers were told to remember the facts; the other half were not.  Within each of these groups, half were informed that their work would be stored on the computer, and half were told that their work would be erased immediately after the task’s completion.  All the volunteers were later given a quiz on the facts they typed.  It did not matter whether they had been instructed to remember the information.  It only mattered whether they thought their work was going to be erased after the task: those volunteers remembered more, regardless of whether they were told to remember the information.

The following is directly from the text “It is impossible to remember everything.  The brain must constantly be doing triage on memories, without conscious intervention.  And apparently it recognizes that there is less need to stock our minds with information that can be readily retrieved.  So facts are more often forgotten when people believe the facts will be archived.  This phenomenon has earned a name—the Google effect—describing the automatic forgetting of information that can be found online.”

HM does not disagree with any of the above quote.  However, he is alarmed by what is omitted.  That omission regards a conscious decision as to whether the information should be further processed to increase its accessibility without technology and whether it is related to other information that might require further research.  It is true that we are time constrained, so that depending on the situation the time available for such consideration will be important.  But as Poundstone will show, it is important to get some information out of the cloud and into the brain, and we can consciously alter the processing we give to the retrieved information.  Sans attention, it will likely remain in the cloud.

Poundstone reports an enormous amount of research conducted via a new type of polling called an Internet panel survey.  These are conducted by an organization that has recruited a large group of subjects (the panel) who agree to participate in surveys.  When a new survey begins, the software selects a random sample of the panel to contact.  E-mails containing links are sent to the selected participants, typically in several waves, to achieve a demographic balance closely approximating the general population.  The sample can be balanced for sex, age, ethnicity, education, income, and other demographic markers of interest to the research project.
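The balancing step can be sketched in a few lines.  In the sketch below, the panel makeup, the single stratifying variable (sex), and the quota numbers are all invented for illustration; real panel surveys balance on many markers at once:

```python
import random

# Sketch of how an Internet panel survey might draw a demographically
# balanced sample: sample each stratum separately so the invited group
# matches target proportions.
random.seed(42)

# A made-up panel of 10,000 recruits, each tagged with one demographic.
panel = [{"id": i, "sex": random.choice(["F", "M"])} for i in range(10_000)]

def balanced_sample(panel, quotas):
    """Draw the requested number of panelists from each stratum."""
    invited = []
    for sex, count in quotas.items():
        stratum = [p for p in panel if p["sex"] == sex]
        invited.extend(random.sample(stratum, count))
    return invited

# Target: 500 respondents, roughly 51% female / 49% male.
sample = balanced_sample(panel, {"F": 255, "M": 245})
print(len(sample))  # prints 500
```

Sampling within each stratum, rather than from the whole panel at once, is what guarantees the invited group approximates the general population on the balanced markers.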

A prior healthy memory blog post appropriately titled “The Dunning-Kruger Effect” discusses the Dunning-Kruger Effect.  Dunning is a psychology professor and Kruger was a graduate student.  The effect is that “Those most lacking in knowledge and skills are least able to understand their lack of knowledge.”  The flip-side of this effect is that those most knowledgeable are most aware of any holes in their knowledge.

Actor John Cleese concisely explains the Dunning-Kruger effect in a much-shared YouTube video:  “If you’re very, very stupid how can you possibly realize that you’re very, very stupid?  You’d have to be relatively intelligent to realize how stupid you are…And this explains not just Hollywood but almost the entirety of Fox News.”

The chaos and contradictions of the current political environment can perhaps best be characterized as a glaring example of the Dunning-Kruger effect.  Just a few moments of contemplation should reveal the potential danger from this effect.  Poundstone’s book reveals the glaring lack of knowledge in many important areas by too many individuals.  He also provides ample evidence of the benefits of moving certain information from the cloud and into our brains.



Are Video Games Luring Men From the Workforce?

October 29, 2016

The title of this post is identical to the title of an article by Ana Swanson in the 24 September issue of the Washington Post.  It begins with the story of a high school graduate who has dropped out of the workforce because he finds little satisfaction in the part-time, low wage jobs he’s had since graduating from high school.  Instead he plays video games, including FIFA 16 and Rocket League on Xbox One and Pokemon Go on his smartphone.

The article notes that as of last year 22% of men between the ages of 21 and 30 with less than a bachelor’s degree reported not working at all in the previous year.  This is up from 9.5% in 2000.  These young men have replaced 75% of the time they used to spend working with time on the computer, mostly playing video games.

From 2004 to 2007, before the recession, unemployed men averaged 5.7 hours per week on the computer, whereas employed men averaged 3.4 hours.  This included video game time.  After the recession, from 2011 to 2014, unemployed men averaged 12.2 hours per week, whereas employed men averaged 4.7 hours.  With respect to video games, from 2004 to 2007 unemployed men averaged 3.4 hours per week versus 2.1 hours for employed men.  From 2011 to 2014 unemployed men averaged 8.6 hours playing video games versus 3.2 hours for employed men.

Researchers argue that these increases in game playing are partially due to the games’ increased appeal.  Estimates attribute from one-fifth to one-third of the decrease in work to the rising appeal of video games.  HM believes that prior to these games most of the unemployed were confronted primarily with daytime television, which provided a strong inducement to seek work.  Today video games provide an entertaining alternative to seeking work.  As the games improve and become more sophisticated, the argument is that they have become even more appealing.

The article notes that the extremely low cost makes these games even more accessible.  It states that recent research has found that households making $25K to $35K a year spent 92 more minutes a week online than households making $100K or more per year.

The article also notes that for the first time since the 1930s more U.S. men ages 18 to 34 are living with their parents than with romantic partners, according to the Pew Research Center.

The article argues that these men are happy.  HM feels that this happiness is likely to be short-lived, and that there is a serious risk that these men will end up as adults who are stunted intellectually and emotionally.


Trump, The World’s Greatest Troll

September 17, 2016

This title was bestowed on Trump by Nate Silver, a statistician and the best campaign prognosticator.  What makes him the greatest troll is the devastating effect he has had on the American political system.  Trump plays to the mob, and in cyberspace, the cyber mob.  Donald Trump has a unique and disturbing leadership style.  Rather than demonstrating gravitas and intelligence with measured remarks and diplomacy, he succeeds with brutal populism and personal attacks.  As Dr. Mary Aiken notes, “he seems to relish being nasty—even sadistic, at times.”  Dr. Aiken continues, “Power no longer centers on leadership but on followership.”  The norms of cyberspace, where cruelty is amplified, escalated, and encouraged, have jumped into politics.

“Trolls” appear to be the greatest attention-seekers online; they have chosen the appellation themselves.  Dr. Aiken believes that the motivation for trolling behaviors is a combination of boredom, revenge, pleasure, attention, and a desire to cause disruption and acquire power.  On multiplayer gaming sites trolls test and taunt children and then post video or audio of the children crying.  On dating sites trolls are capable of anything from cyber-stalking to sexual harassment and threats.

Dr. Aiken argues that Trump’s success as a presidential candidate is a vivid example of what she calls cyber-socialization.  Leading by building followers, he employs many of the tactics of a malicious online bully, from his use of taunts and name-calling of fellow candidates (“Crooked Hillary,” “Crazy Bernie,” and “Lying Ted”) to his obsession with physical appearance (“Little Marco”) and special hostility for women (“dogs,” “pigs,” and “disgusting”).

Trump has 8.19 million followers on Twitter and dominates the social media landscape of the election.  Unfortunately, social media have become an environment where pathological behavior is gaining ground and being normalized.  There is a loss of empathy online, a heightened detachment from the feelings and rights of others, which is seen in extreme cyberbullying and sadistic trolling.

Psychologists have found that individuals who comment frequently online and identify themselves as “trolls” score high on three of the four components of what is known as the dark tetrad of personality, a set of characteristics that are found together in a morbid cluster:  narcissism (the characteristic not included), sadism, psychopathy, and Machiavellianism.  In the case of Trump, HM thinks that narcissism could also be appropriate.  The researchers concluded that trolling was a manifestation of “everyday sadism.”

The concluding sentence of Dr. Aiken’s essay is, “Sadly for those of us trying to eradicate cyber-bullying and online harassment, and educate children and teenagers about the great emotional costs of this behavior, our job becomes much harder when high-profile leaders use cruelty as strategy—and win elections for it.”

Dr. Aiken’s essay, from which large portions of this post have obviously been taken, can be found by searching for “Welcome to the Troll Election.”

The Cyber Frontier

September 15, 2016

“The Cyber Frontier” is the final chapter of the “Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  She writes, “If we think of cyberspace as a continuum, on the far left we have the idealists, the keyboard warriors, the early adopters, philosophers who feel passionately about the freedom of the Internet and don’t want that marred or weighted down with regulation and governance.  On the other end of the continuum, you have the tech industry with its own pragmatic vision of freedom of the Net—one that is driven by a desire for profit, and worries that governance costs money and that restrictions impact the bottom line.  These two groups, with their opposing motives, are somehow strategically aligned in cyberspace and holding firm.”  She writes that the rest of us and our children, about 99.9%, live somewhere in the middle, between these two extremes.

She says that we should regain some societal control and make it harder for organized cybercrime.  Why put up with a cyberspace that leaves us vulnerable, dependent, and on edge?

Dr. Aiken writes that the architects of the Internet and its devices know enough about human psychology to create products that are a little too irresistible, but that don’t always bring out the best in us.  She calls this the “techno-behavioral effect.”  The developers and their products engage our vulnerabilities and weaknesses, instead of engaging our strengths.  They can diminish us while making us feel invincible, and distract us from things in life that are much more important, more vital to happiness, and more crucial to our survival.  She writes that we need to stop and consider the social impact, or what she calls the “techno-social effect.”

Dr. Aiken argues that in the next decade there’s a great opportunity before us—a possible golden decade of enlightenment during which we could learn much about human nature and human behavior, and how best to design technology that is not just compatible with us, but that truly helps our better selves create a better world.  If we can create this balance, the cyber future can look utopian.

Dr. Aiken argues that we should support and encourage acts of cyber social consciousness, like those of Mark Zuckerberg and Priscilla Chan, the Bill and Melinda Gates Foundation, Paul Allen, Pierre and Pam Omidyar, and the Michael and Susan Dell Foundation.

Tim Berners-Lee, the father of the World Wide Web, has become increasingly ambivalent about his creation and has recently outlined his plans for a cyber “Magna Carta.”  (Enter Tim Berners-Lee into this blog’s search box.)  Dr. Aiken argues for a global initiative.  She writes, “The United Nations could lead in this area, countries could contribute, and the U.S. could deploy some of its magnificent can-do attitude.  We’ve seen what it has been capable of in the past.  The American West was wild until it was regulated by a series of federal treaties and ordinances.  And if we are talking about structural innovation, there is no greater example than Eisenhower’s Federal-Aid Highway Act of 1956, which transformed the infrastructure of the U.S. road system, making it safer and more efficient.  It’s time to talk about a Federal Internet Act.”

There are already countries that have taken actions from which we can learn.  Ireland has taken the initiative to tackle legal but age-inappropriate content online.  South Korea has been a pioneer in early learning of “netiquette” and in discouraging Internet-addictive behavior.  Australia has focused on solutions to underage sexting.  The EU has created the “right to be forgotten,” to dismantle the archives of personal information online.  Japan has no cyberbullying.  Why?  What is Japanese society doing right?  We need to study this and learn from it.  Antisocial social media needs to be addressed.

What Lies Beneath: The Deep Web

September 14, 2016

“What Lies Beneath:  The Deep Web” is Chapter 8 of “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  Dr. Aiken likens the Deep Web to the Caribbean of the seventeenth and eighteenth centuries, with its pirates.  She writes that it is a vast uncharted sea that cybercriminals navigate skillfully, taking advantage of the current lack of governance and authority—or adequate legal constructs to stop them.
Although cybercriminals can be found anywhere on the Internet, they have a much easier time operating in the murky waters of the darkest and deepest parts.

Almost any kind of criminal activity—extortion, scams, hits, and prostitution—can be ordered up, thanks to well-run websites with shopping carts, concierge hospitality, and surprisingly great customer service.  Cybercriminals are con artists who are expert observers of human behavior, especially cyberbehavior.  They know how to exploit the natural human tendency to trust others, as well as how to manipulate people into giving up their confidential information, in what is called a socially engineered attack.  Regarding identity theft or cyber fraud, it is usually much easier to fool a person into giving you a password than it is to hack it.  This type of social engineering is a crucial component of cybercriminal tactics, and usually involves persuading people to run “free” virus-laden malware or dangerous software by peddling a lot of frightening scenarios (an approach called scareware).  Fear sells.

The Deep Web refers to the unindexed part of the Internet.  Dr. Aiken says that it accounts for 96 to 99 percent of content on the Internet.  Most of the content is pretty dull stuff, a combination of spam and storage—U.S. government databases, medical libraries, university records, classified cellphone and email histories.  Just like the Surface Web, it is a place where content can be shared.

What makes the Deep Web different is that content on the Deep Web can be shared without identity or location disclosure, without your computer’s IP address and other common traces.  Since these sites are not indexed, they are not searchable by typical browsers like Chrome or Safari or Firefox.  For software that protects your identity, an add-on browser like Tor is one of the most common ways in.  Tor is an acronym for “The Onion Router,” named for its layers of identity-obscuring rerouting.  The Deep Web was first used by the U.S. government, and the protocols for the browser Tor were developed with federal funds so that any individuals whose identity needed to be protected—from counterintelligence agents to journalists to political dissenters in other countries—could communicate anonymously with the government in a safe and secure way.  But since 2002, when the software for Tor became available as a free download, a digital black market has grown there.  This criminal netherworld is populated by terrorist networks, criminal gangs, drug dealers, assassins for hire, and sexual predators looking for images of children and new victims.
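The layering idea behind onion routing, the technique Tor implements, can be illustrated with a toy sketch.  This is a conceptual simulation only—not real cryptography and not the actual Tor protocol; the relay names and the string “wrapping” are stand-ins for per-relay encryption:

```python
# Toy illustration of onion routing (NOT real cryptography or the Tor protocol).
# The sender wraps a message in one "layer" per relay; each relay can peel
# only its own outermost layer, so no single relay sees both the original
# sender and the final destination.

def wrap(message: str, relays: list) -> str:
    """Sender wraps the message in one layer per relay, innermost layer last."""
    for relay in reversed(relays):
        message = f"{relay}({message})"   # stand-in for encrypting to that relay
    return message

def peel(layered: str, relay: str) -> str:
    """A relay strips only its own outermost layer."""
    prefix = f"{relay}("
    assert layered.startswith(prefix) and layered.endswith(")")
    return layered[len(prefix):-1]

relays = ["entry", "middle", "exit"]
onion = wrap("hello", relays)
print(onion)  # entry(middle(exit(hello)))

# The message traverses the circuit, each relay peeling one layer.
for relay in relays:
    onion = peel(onion, relay)
print(onion)  # hello
```

The point of the layering is that each relay learns only its immediate predecessor and successor; an observer would have to control every layer of the circuit to link sender to destination, which is what makes activity on these sites so hard to trace.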

Monitoring and policing the Deep Web is a problem because there is an almost infinite number of hiding places, and most illegal sites are in a constant state of relocation to new domains with yet another provisional address.  Many of these sites do not use traceable credit cards or PayPal accounts.  Virtual currencies, such as Bitcoin, are the coins of this realm.

Hidden services include crimes for hire and the selling of stolen credit information, or “dumps.”  McDumpals, one of the leading sites marketing stolen data, has a clever company logo featuring familiar golden arches and a mascot: a gangster-cool Ronald McDonald.

Silk Road was an online black market, the first of its kind—offering drugs, drug paraphernalia, computer hacking and forgery services, as well as other illegal merchandise—all carefully organized for the shopper.  Ross William Ulbricht ran the Silk Road for two and a half years.  Silk Road attracted several thousand sellers and more than one hundred thousand buyers.  It was said to have generated more than $1.2 billion in sales revenue.  According to a study in “Addiction,” 18% of drug consumers in the U.S. between 2011 and 2013 used narcotics bought on this site.  The FBI estimated that Ulbricht’s black market had brought him $420 million in commissions, making him, according to “Rolling Stone,” “one of the most successful entrepreneurs of the dot-com age.”

According to the U.S. District Judge who sentenced Ulbricht at his 2015 trial, Silk Road created drug users and expanded the market, increasing demand in places where poppies are grown for heroin manufacture.  This black market site had impacted the global market.  The prosecutors alleged that Ulbricht had ordered up and paid for the executions of five Silk Road sellers who had tried to blackmail him or reveal his identity.  Prosecutors traced the deaths of six people who had overdosed on drugs back to Silk Road, and two parents who had lost sons spoke at the trial.  Ulbricht was found guilty of seven drug and conspiracy charges and was given two life sentences, plus terms of twenty years, fifteen years, and five years, without the chance of parole.

Shortly after the arrest of Ulbricht and the shutting down of the Silk Road in 2013, Silk Road 2.0 emerged.  Many more copycat sites sprang up, like Evolution, Agora, Sheep, Blackmarket Reloaded, AlphaBay, and Nucleus, which are often referred to as cryptomarkets by law enforcement.

Dr. Aiken goes into the morality of the users of the Deep Web, the psychology of the hacker, and Cyber-RAT (routine activity theory) in more depth than can be related in a blog post.

Cyberchondria and the Worried Well

September 13, 2016

“Cyberchondria and the Worried Well” is Chapter 7 of “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  Reports estimate that up to $20 billion is spent annually in the U.S. on unnecessary medical visits.  Dr. Aiken asks how many of these wasted visits are driven by a cyber effect.  A majority of people in a large international survey said they used the Internet to look for medical information, and almost half admitted to making self-diagnoses following a web search.  A follow-up survey found that 83% of 13,373 respondents searched the Internet often for information and advice about health, medicine, or medical conditions.  People in “emerging economies” used online sources for this purpose the most frequently—China (94%), Thailand (93%), Saudi Arabia (91%), and India (90%) led the table of twelve countries.

Dr. Aiken writes that twenty years ago, when people experienced any physical condition that persisted to the point of interfering with their activities, they would visit a doctor’s office and consult a doctor.  In the digital age, people might analyze their own symptoms and play doctor at home.  She notes that about half of the medical information offered on the Internet has been found by experts to be inaccurate or disputed.  HM feels compelled to insert here the conclusion of Ioannidis’s 2005 paper, “Why Most Published Research Findings Are False,” which is still accepted by most statisticians and epidemiologists.  This implies that the online information is similar to the information available in the research world.  And physicians are working with a questionable database, so the problem of accurate research information is real and not an artifact of the Internet.  [To learn more about Ioannidis see the following healthy memory blog posts: “Liberator of Knowledge from Tyranny of Profit,” “Thinking 2.0,” “Most Published Research Findings are False,” and “The Problem with Scientific Journals, Especially Elite Ones.”]

There are also online support groups such as the website MDJunction.  These groups do provide a place where thousands meet every day to discuss their feelings, questions, and hopes with like-minded friends.  Although these places provide support, they might not be the best sources of information.  And MDJunction does have a fine-print disclaimer at the bottom of the page: “The information provided in MDJunction is not a replacement for medical diagnosis, treatment, or professional medical advice.”

The term “cyberchondria” was coined in a 2001 BBC News report, popularized in a 2003 article in the “Journal of Neurology, Neurosurgery and Psychiatry,” and later supported by an important study by Ryen White and Eric Horvitz, two research scientists at Microsoft, who wanted to describe an emerging phenomenon engendered by new technology—a cyber effect.  In the field of cyberpsychology, cyberchondria is defined as “anxiety induced by escalation during health-related search online.”

The term “hypochondria” has become outdated due to the Fifth Edition (DSM-5) of the “Diagnostic and Statistical Manual of Mental Disorders.”  About 75% of what was previously called “hypochondria” is now subsumed under a new diagnostic concept called “somatic symptom disorder,” and the remaining 25% is considered “illness anxiety disorder.”  Together these conditions are found in 4 to 9% of the population.

Most doctors regard people with these disorders as nuisances who take up space and time that could be devoted to truly sick people who need care.  And when a doctor informs such patients that they do not have a diagnosable condition, they become frustrated and upset.

Conversion disorders are what were called “hysterical conditions,” which formerly went by such names as “hysterical blindness” and “hysterical paralysis.”  These have been renamed “functional neurological symptom disorder.”  Factitious disorder, formerly called “Munchausen syndrome,” is a psychiatric condition in which patients deliberately produce or falsify symptoms or signs of illness for the principal purpose of assuming the sick role.

Iatrogenesis comes from Greek roots meaning “brought forth by the healer,” and refers to an illness brought forth by treatment itself.  It can take many forms, including an unfortunate drug effect or interaction, a surgical instrument malfunction, medical error, or pathogens in the treatment room, for example.  A study in 2000 reported that it was the third most common cause of death in the United States, after heart disease and cancer.  So having an unnecessary surgery or medical treatment of any kind means taking a big gamble with your life.

In 1999 the estimate was between 44,000 and 98,000 deaths annually in the United States, when the Institute of Medicine issued its infamous report, “To Err Is Human.”  HM is proud to note that one of his colleagues, Marilyn Sue Bogner, was a pioneer in this area of research.  The first edition of her book “Human Error in Medicine” predated the IOM report.  In 2003 she published “Misadventures in Health Care: Inside Stories” in the Human Error and Safety series.  Unfortunately, she has recently passed away.  And, unfortunately, matters seem to be getting worse.  In 2009 the estimate of deaths due to failures in hospital care rose to 180,000 annually.  By 2013 the estimates ran between 210,000 and 440,000 hospital patients in the United States dying annually as a result of a preventable mistake.  Dr. Aiken believes that part of this escalation is due to the prevalence of Internet medical searches.

So we have a difficult situation.  Cyberspace has erroneous information, but the underlying medical research also contains erroneous information, and doctors are constrained by these limitations.  We should be aware of these limitations and be cognizant that a diagnosis and recommended treatment might be wrong.  The best advice is to solicit multiple independent opinions and to always be aware that “do nothing” is also an option.  And it could be an option that will save your life.

© Douglas Griffith and, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Cyber Romance

September 12, 2016

Cyber Romance is Chapter 6 in “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.   This chapter looks at the ways cyber effects are shifting mating rituals and romance.  Romantic love manifests itself in the brain: the left dorsal anterior cingulate cortex (dACC) becomes active, as well as the insula, caudate, amygdala, nucleus accumbens, temporo-parietal junction, posterior cingulate cortex, medial prefrontal cortex, inferior parietal lobule, precuneus, and temporal lobe.

Dr. Aiken discusses a paradox known as the “stranger on the train syndrome.”  This refers to people feeling more comfortable disclosing personal information to someone they may never meet again.  She also mentions the cyber effects of online disinhibition and anonymity.  We feel less at risk of being hurt by a partner who has not seen us in real life.  An urgent wish to form a bond might induce us to disclose intimate details of our lives without much hesitation.  The risks of doing this should be obvious.  Moreover, online we might overshare and confess; revealing too much personal information to a potential love interest online doesn’t help predict compatibility the way it might in the real world.

Communications expert Joseph Walther describes hyperpersonal communication as a process by which participants eagerly seek commonality and harmony.  The getting-to-know-you experience is thrown off-kilter.  The two individuals—total strangers really—seek similarities with each other rather than achieving a more secure bond that would allow for blunt honesty or clear-eyed perspective.  When we are online, free of face-to-face contact, we can feel less vulnerable and not “judged.”  This can feel liberating but be dangerous.  Dr. Aiken does not comment on whether the use of visual media, such as Skype, might mitigate this problem.  But she does say that dating online involves four selves—two real-world selves and two cyber ones.

Relying on normal, as opposed to cyber, instinct can lead vulnerable individuals into true danger.  A woman who meets a man in a bar might never consider accepting a ride with him after only one encounter.  Yet that same woman, after only a few days of interacting through email and texts with a man she’s met on an online dating site, may fire out her address because she feels such a strong connection with him.

Dr. Aiken cites a February 2016 report by the U.K. National Crime Agency (NCA) of a sixfold increase in online-dating-related rape offenses over the previous five years.  The team analyzing the findings presented potential explanations, including that people feel disinhibited online and engage in conversations that quickly become sexual in nature, which can lead to “misdirected expectation” on the first date.  Seventy-one percent of these rapes took place on the first date, in either the victim’s or the offender’s residence.  The perpetrators of these online date-rape crimes did not seem to fit the usual profile of a sex offender; that is, a person with a criminal history or previous conviction.  So we don’t fully understand the complexity of online dating and associated sexual assault, but the cyber effects of anonymity and disinhibition are clearly important.  The NCA offers the following helpful advice for online dating:

Meet in public, stay in public
Get to know the person, not the profile
Not going well?  Make excuses and leave.
If you are sexually assaulted, get help immediately.

Nevertheless, the online dating industry has been successful.  The industry was profitable almost immediately.   By 2007, online dating was bringing in $500 million annually in the U.S., and that figure had risen to $2.2 billion by 2015, when turned twenty years old.  By then, the website claimed to have helped create 517,000 relationships, 92,000 marriages, and 1 million babies.

When you make assumptions about a person based on their profile photo, it’s termed impression formation.  When you filter, fix, and curate your own profile photo, it’s impression management.  The mere act of choosing a picture to use on a dating site—active, smiling, unblemished, or nostalgic—requires that you imagine how you look to others and aim to enhance that impression.

Here are some impression-management tips for your profile photo:

Wear a dark color
Post a head-to-waist shot
Make sure that the jawline has a shadow (but no shadow on hair or eyes)
Don’t obstruct the eyes (no sunglasses)
Don’t be overtly sexy
Smile and show your teeth (but please no laughing)

If you don’t know how to squinch, here is a description: “It is a slight squeezing of the lower lids of the eyes, kind of like the look Clint Eastwood makes in his Dirty Harry movies, just before he says, ‘Go ahead.  Make my day.’”  It’s less than a squint, not enough to cause your eyes to close or your crow’s feet to take over your face.  If you want a tutorial on how to produce the perfect one, Dr. Aiken recommends one by professional photographer Peter Hurley, available on YouTube, called “It’s All About the Squinch.”

Another risk in cyberspace is identity deception.  People can make up identities that they present in cyberspace.  There have always been tricksters, con artists, and liars who pretend to be somebody they aren’t.  Technology has now made this much easier.

Dr. Aiken also warns about narcissists in cyberspace.  Narcissists need admiration, flattery, loads of attention, plus an audience.  The problem is that, given the way they ooze confidence and cybercharm, it may be harder to spot them—and know to stay away.  Here is a mini-inventory of questions to ask yourself:

*Do they always look amazing in their photos?
*Are they in almost all of their photos?
*Are they in the center of their group photos?
*Do they post or change their profile constantly?
*When they post an update, is it always about themselves?

There is also a topic called cyber-celibacy.  A government survey in Japan estimated that nearly 40% of Japanese men and women in their twenties and thirties are single, not actively in a relationship, and not really interested in finding a romantic partner either.  Relationships were frequently described as bothersome.  The estimate is that, if current trends continue, Japan’s population will have shrunk by more than 30% by 2060.   Do not make the mistake of assuming that the explosion of virgins is restricted to Japan.

Dr. Aiken provides more material than can be summarized in this blog.  The bottom line warning for Cyber Romance is the same as it is for all activities in cyberspace, be careful and proceed cautiously.

Teenagers, Monkeys, and Mirrors

September 11, 2016

“Teenagers, Monkeys, and Mirrors” is Chapter 5 in “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  This post will say nothing about monkeys and mirrors.  To read about monkeys and mirrors in this context you will need to get your own copy of the book.

Humanistic psychologist Carl Rogers did valuable research into how a young person develops identity.  He described self-concept as having the following three components:
The view you have of yourself—or “self-image.”
How much value you place on your worth—or “self-esteem.”
What you wish you were like—or “the ideal self.”

Carl Rogers lived long before the creation of cyberspace.  Were he alive today it is likely he would have added a fourth aspect of “self.”  Dr. Aiken calls this “the cyber self”—who you are in a digital context.  This is the idealized self, the person you wish to be, and therefore an important aspect of self-concept.  It is a potential new you that now lives in a new environment, cyberspace.  Increasingly, it is the virtual self that today’s teenager is busy assembling, creating and experimenting with.  The ubiquitous selfies ask a question of their audience:  Like me like this?  Dr. Aiken asked the question, which matters the most: your real-world self or the one you’ve created online?  Her answer is probably the one with the greater visibility.

Adolescents are preoccupied with creating their identity.  The psychologist Erik Erikson described this period of development between the ages of twelve and eighteen as a state of identity versus role confusion, when individuals become fascinated with their appearance because their bodies and faces are changing so dramatically.  So this narcissistic behavior  is considered a natural part of development and is usually outgrown. However, in this age of cyberspace fewer young adults are moving beyond their narcissistic behavior.  A study of U.S. college students found a significant increase in scores on the Narcissistic Personality Inventory between 1982 and 2006.

Plastic surgery is another area that has been impacted by technology.  The easy curating of selfies is likely linked to a rise in plastic surgery.  According to a 2014 study by the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS), more than half of the facial surgeons polled  reported an increase in plastic surgery for people under thirty.  Surgeons have also reported that bullying is also a cause of children and teens asking for plastic surgery.  This is usually a result of being bullied rather than a way to prevent it.

Another problem is body dysmorphic disorder (BDD).  Individuals with body dysmorphic disorder are obsessed with imagined or minor defects, and this belief can severely impair their lives and cause distress.  Individuals with BDD are completely convinced that there’s something wrong with their appearance—and no matter how reassuring friends and family, or even plastic surgeons, can be, they cannot be dissuaded.  In some cases, they can be reluctant to seek help, due to extreme and painful self-consciousness.  But if left untreated, BDD does not often improve or resolve itself, but becomes worse over time, and can lead to suicidal behavior.

Dr. Aiken notes that Mark Zuckerberg and his wife Priscilla Chan have pledged to donate 99% of their Facebook shares to the cause of human advancement.  That represented about $45 billion of Facebook’s current valuation.  She respectfully suggests that all of this money be directed toward human problems associated with social media.

Dr. Aiken notes that eighty years ago the American philosopher and social psychologist George Herbert Mead said something about how we think about ourselves—and express who we are—that has special relevance today.  Mead studied the use of first-person pronouns as a basis for describing the process of self-reflection.  How we use “I” and “me” demonstrates how we think of self and identity.  There is “I,” and there is “me.”  Using “I” shows that a speaker has a conscious understanding of self on some level; using “I” speaks directly from that self.  The use of “me” requires the understanding of the individual as a social object; to use “me” means figuratively leaving one’s body and regarding oneself as a separate object.  “I” seems to have been lost in cyberspace.  The selfie is all “me.”  It is an object—a social artifact that has no deep layer.  Dr. Aiken writes, “This may explain why the expressions on the faces of selfie subjects seem so empty.  There is no consciousness.  The digital photo is a superficial cyber self.”

Dr. Aiken advises doing what you can to pull kids back to “I” and not letting them drift to “me.”  This is strengthened by conversations such as these:

*Ask them about their real-world day, and don’t forget to ask them about what’s happening in their cyber life.

*Tell them about risks in the real world, accompanied by real stories—then tell them about evolving risks online and how to not show vulnerability.

*Talk about identity formation and what it means—distinguishing between the real-world self and the cyber self.

*Talk about body dysmorphia, eating disorders, body image, and self-esteem—and the ways their technology use may not be constructive.

*Tell your girls not to allow themselves to become a sex object—and tell your boys not to treat girls as objects online—or anywhere else.

HM is often envious of the technology available to today’s youth.  And he is envious of cyberspace, except for the difficulties created by the perverse ways technology is being used, which exacerbate the transition through adolescence.


Frankenstein and the Little Girl

September 10, 2016

“Frankenstein and the Little Girl” is Chapter 4 in “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  Frankenstein refers to online search.  This chapter examines the online lives of children four to twelve years old.  This is the age group that is most vulnerable on the Internet in terms of risk and harm.  This age group is naturally curious and wants to explore.  They are old enough to be competent with technology, and in some cases, extremely so.  But they aren’t old enough to be wary of the online risks and don’t yet understand the consequences of their behavior there.
The psychologist John Suler has said, “You wouldn’t take your children and leave them alone in the middle of New York City, and that’s effectively what you’re doing when you allow them into cyberspace alone.”

According to the journal “Pediatrics” 84% of U.S. children and teenagers have access to the internet on either a home computer, a tablet, or another mobile device.   More than half of US children who are eight to twelve have a cellphone.  A 2015 consumer report shows that most American children get their first cellphone when they are six years old.

There are some benefits: research has shown a positive relation between texting and literacy, and there is an enormous amount of good material on the web.  However, some developmental downsides of persistent and pervasive use of technology are apparent.  Jo Heywood, a headmistress of a private primary school in Britain, has made the observation, which is shared by other educators, that children are starting kindergarten at five and six years old with the communication skills of two- and three-year-olds, presumably because their parents or caregivers have been “pacifying” them with iPads rather than talking to them.  Moreover, this is seen in children from all backgrounds, both disadvantaged and advantaged.

A national sample of 442 children in the United States between the ages of eight and twelve was asked how they spent their time online.  Children from eight to ten spent an average of forty-six minutes per day on the computer.  Children from eleven to twelve spent an average of one hour and forty-six minutes per day on the computer.

When asked what kinds of sites they visited, YouTube dominated significantly, followed by Facebook and by game and virtual-world play sites—Disney, Club Penguin, Webkinz, Nick, Pogo, Poptropica, PBS Kids, and Google.  Why is Facebook on this list?  You are supposed to be thirteen years old to activate an account.  One quarter of the children in the U.S. study reported using Facebook even though it is a social network meant for teens and adults.  According to “Consumer Reports,” twenty million minors use Facebook, and 7.5 million of these are under thirteen.  These underage users access the site by creating a fake profile, often with the awareness and approval of their parents.

Cyberbullying is an ugly topic that has received coverage in the popular press.  Cyberbullying has resulted in suicides.  Dr. Aiken notes the existence of bystander apathy in these events.  Few, if any, seem to come to the aid of those being bullied.  In a poll conducted in 24 countries, 12% of parents reported their child had experienced cyberbullying, often by a group.  A U.S. survey by “Consumer Reports” found that 1 million children over the previous year had been “harassed, threatened, or subjected to other forms of cyberbullying” on Facebook.

It appears that the younger you are, the more friends you have.  In a 2014 study of American Facebook users, those sixty-five and older had an average of 102 friends.  For those between forty-five and fifty-four years old, the average was 220.  For those twenty-five to thirty-five years old, the average was 360.  For those eighteen to twenty-four, the average was 649.  Dare we extrapolate to younger age groups?  Dunbar’s number has been discussed in previous healthy memory blog posts.  It is based on the size of the average human brain, and holds that the number of social contacts or “casual friends” with whom an average individual can maintain social relationships is around 150.

Be a Cyber Pal was conceived as an antidote to cyberbullying, and was about actively being a kind, considerate, supportive, and loyal friend.  It is cause for hope that it became the most downloaded poster of the campaign that year.  Dr. Aiken thinks that the positive message gave teachers and families something that is easier to talk about.

Dr. Aiken is using an approach she calls the math of cyberbullying, which uses digital forensics to identify both victims and perpetrators.  She is working with a tech company in Palo Alto to apply this algorithm to online communication.

She discusses pornography, which she terms The Elephant in the Cyber Room.

Let me conclude by presenting a four-point approach developed by a panel of experts to protect children online.

1.  Using technical mediation in the form of parental control software, content filters, PIN passwords, or safe search, which restricts searching to age-appropriate sites.
2.  Talking regularly to your children about managing online risks.
3.  Setting rules or restrictions around online access and use.
4.  Supervising your children when they are online.

Cyber Babies

September 9, 2016

“Cyber Babies” is chapter 3 in Dr. Aiken’s new book “The Cyber Effect.”  She begins by relating a story of when she was traveling on a train watching a mother feed her baby.  The mother held the baby’s bottle in one hand and a mobile phone in the other, her head bent toward the screen.  The mother looked exclusively at her phone while the baby fed.  The baby gazed upward, as babies do, looking adoringly at the mother’s jaw as the mother gazed adoringly at her phone.  The feeding lasted about 30 minutes, and the mother did not once make eye contact with the infant or pull her attention from the screen of her phone.  Dr. Aiken was appalled, as eye contact between baby and mother is quite important for the development of the child.  She mentions that parents frequently ask her at what age it is appropriate for a baby to be introduced to electronic screens.  She agrees that this is an important question but asks the parents first to think about this question:  What is the right age to introduce your baby to your mobile phone use?

She elaborates on the importance of face time with a baby.  They need the mother’s eye contact.   They need to be talked to, tickled, massaged, and played with.  She writes that there is no study of early childhood development that doesn’t support this.

She continues, “By experiencing your facial expressions—your calm acceptance of them, your love and attention, even your occasional groggy irritation—they thrive and develop.  This is how emotional attachment style is learned.  A baby’s attachment style is created by the baby’s earliest experiences with parents and caregivers.”  She further notes, “A mother and her child need to be paying attention to each other.  They need to engage and connect.  It cannot be simply one-way.  It isn’t just about your baby bonding with you.  Eye contact is also about bonding with your baby.”

In a 2014 study in the journal “Pediatrics,” fifty-five caregivers with children were observed in fast food restaurants; forty caregivers used mobile devices during the meal, and sixteen used their devices continuously, with their attention directed primarily at the device and not the children.  Dr. Aiken wishes that the following warning be placed on mobile phones:  “Warning:  Not Looking at Your Baby Could Cause Significant Delays.”

She devotes considerable space to products that promise early childhood development, products such as the Baby Einstein Products.  Very little, if any, research has gone into the development of these products, and evaluations of these products provide no evidence that they are effective.

The research is clear that the best way to help a baby learn to talk or develop any other cognitive skill is through live interaction with another human being.  Videos and television shows have been shown to be ineffective for learning prior to the age of two.  A study of one thousand infants found that babies who watched more than two hours of DVDs per day performed worse on language assessments than babies who did not watch DVDs.  For each hour of watching a DVD, babies knew six to eight fewer words than babies who did not watch DVDs.  She does note that quieter shows with only one story line, such as “Blue’s Clues” and “Teletubbies,” can be more effective.  Still, babies learn best from humans, not machines.

Some early-learning experts believe there is a connection between ADHD and screen use in children.  ADHD is now the most prevalent psychiatric illness of children and teenagers in America.  The number of young people being treated with medication for ADHD grows every year.  More than ten thousand toddlers, ages two and three years old, are among the children taking ADHD drugs, even though prescribing these falls outside any established pediatric guidelines.

Dr. Aiken offers the following ideas for parents, pending more guidance and information on proper regulation:

Don’t use a digital babysitter or, in the future, a robot nanny.  Babies and toddlers need a real caregiver, not a screen companion, to cuddle and talk with.  There is no substitute for a real human being.

Because your baby’s little brain is growing quickly and develops through sensory stimulation, consider the senses—touch, smell, sight, sound.  A baby’s early interactions and experiences are encoded in the brain and will have lasting effects.

Wait until your baby is two or three years old before they get screen time.  And make a conscious decision about the screen rules for them, taking into account that screens could be affecting how your child is being raised.

Monitor your own screen time.  Whether or not your children are watching, be aware of how much your television is on at home—and whether the computer screen is always glowing and beckoning.  Be aware of how often you check your mobile phone in front of your baby or toddler.

Understand that babies are naturally empathetic and can be very sensitive to emotionally painful, troubling, or violent content.  Studies show that children have a different perception of reality and fantasy than adults do.  Repetitive viewings of frightening or violent content will increase retention, meaning they will form lasting unpleasant memories.

Don’t be fooled by marketing claims.  Science shows us that tablet apps may not be as educational as claimed and that screen time can, in fact, cause developmental delays and may even cause attention issues and language delays in babies who view more than two hours of media per day.

Put pressure on toy developers to support their claims with better scientific evidence and new studies that investigate cyber effects.

Designed to Addict

September 8, 2016

Designed to Addict is the title of the second chapter in “The Cyber Effect” by Dr. Mary Aiken.  Although the internet was not designed to addict users, it appears that it is addicting many.  Of course, humans are not passive victims; they are allowing themselves to be addicted.  Dr. Aiken begins with the story of a twenty-two-year-old mother, Alexandra Tobias.  She called 911 to report that her three-month-old son had stopped breathing and needed to be resuscitated.  She fabricated a story to make it sound as if an accident had happened, but later confessed that she had been playing “Farmville” on her computer and had lost her temper when her baby’s crying distracted her from the Facebook game.  She picked up the baby and shook him violently, and his head hit the computer.  He was pronounced dead at the hospital from head injuries and a broken leg.

At the time of the incident, “Farmville” had 60 million active users and was described by its users in glowing terms as being highly addictive.  It was indeed so addictive that “Farmville” Addicts Anonymous support groups were formed and an FAA page was created on Facebook.  Dr. Aiken found this case interesting as a forensic cyberpsychologist for the following reason:  the role of technology in the escalation of an explosive act of violence.  She described it as extreme impulsivity, an unplanned, spontaneous act.

Impulsivity is defined as “a personality trait characterized by the urge to act spontaneously without reflecting on an action and its consequences.”  Dr. Aiken notes “that the trait of impulsiveness influences several important psychological processes and behaviors, including self-regulation, risk-taking and decision making.  It has been found to be a significant component of several clinical conditions, including attention deficit/hyperactivity disorder, borderline personality disorder, and the manic phase of bipolar disorder, as well as alcohol and drug abuse and pathological gambling.”  Dr. Aiken takes care to make the distinction between impulsive and compulsive.  Impulsive behavior is a rash, unplanned act, whereas compulsive behavior is planned, repetitive behavior, like obsessive hand washing.  She elaborates in cyber terms:  “When you constantly pick up your mobile phone to check your Twitter feed, that’s compulsive.  When you read a nasty tweet and can’t restrain yourself from responding with an equally nasty retort (or an even nastier one), that’s impulsive.”

Joining an online community or playing a multiplayer online game can give you a sense of belonging.  Getting “likes” meets a need for esteem.  According to psychiatrist Dr. Eva Ritvo in her article “Facebook and Your Brain,” social networking “stimulates release of loads of dopamine as well as offering an effective cure to loneliness.”  These “feel good” chemicals are also triggered by novelty.  Posting information about yourself can also deliver pleasure.  “About 40 percent of daily speech is normally taken up with self-disclosure—telling others how we feel or what we think about something—but when we go online the amount of self-disclosure doubles.  According to Harvard neuroscientist Diana Tamir, this produces a brain response similar to the release of dopamine.”

Jaak Panksepp is a Washington State University neuroscientist who coined the term affective neuroscience, the study of the biology of arousing feelings or emotions.  He argues that a number of instincts, such as seeking, play, anger, lust, panic, grief, and fear, are embedded in ancient regions of the human brain, built into the nervous system at a fundamental level.  Panksepp explains addiction as an excessive form of seeking.  “Whether the addict is seeking a hit from cocaine, alcohol, or a Google search, dopamine is firing, keeping the human being in a constant state of alert expectation.”

Addiction can be worsened by the stimuli on digital devices that come with each new email, text, or Facebook “like,” so keep them turned off unless there is a good justification for keeping them on, and then only for a designated amount of time.

There is technology to help control addictive behavior.  One example is Breakfree, an app that monitors the number of times you pick up your phone, check your email, and search the web.  It offers nonintrusive notifications and provides you with an “addiction score” every day, every week, and every month to track your progress.  There are many more such apps, such as Checky and Calm, but ultimately it is you who needs to control your addictions.
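Breakfree’s actual scoring formula is not public, but a purely hypothetical sketch shows how such an “addiction score” might be computed from daily usage counts.  The weights below are invented for illustration and do not reflect any real app’s formula:

```python
def addiction_score(pickups, email_checks, web_searches, minutes_online):
    """Hypothetical daily score on a 0-100 scale.

    The weights are illustrative only; they are not taken from
    Breakfree or any other real app.
    """
    raw = (0.5 * pickups
           + 0.8 * email_checks
           + 0.3 * web_searches
           + 0.2 * minutes_online)
    return min(100, round(raw))  # cap the score at 100

# A heavy day of checking saturates the scale:
print(addiction_score(pickups=80, email_checks=30, web_searches=40, minutes_online=240))

# A lighter day scores much lower:
print(addiction_score(pickups=20, email_checks=5, web_searches=10, minutes_online=60))
```

The point of any such score is simply to make compulsive checking visible as a trend over days and weeks, which is why these apps report it daily, weekly, and monthly.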

Mindfulness is a prevalent theme in the healthy memory blog.  It is a Buddhist term “to describe the state of mind in which our attention is directed to the here and now, to what is happening in the moment before us, a way of being kind to ourselves and validating our own experience.”  As a way of staying mindful and keeping track of time online, Dr. Aiken has set her laptop computer to call out the time, every hour on the hour, so that even as she is working in cyberspace, where time flies, she is reminded every hour of the temporal real world.

Internet addictive behavior expert Kimberly Young recommends three strategies:
1.  Check your checking.  Stop checking your device constantly.
2.  Set time limits.  Control your online behavior—and remember, kids will model their behavior on adults.
3.  Disconnect to reconnect.  Turn off devices at mealtimes—and reconnect with the family.
Some people find what are called internet sabbaths helpful and disconnect for a day or a weekend.  Personally, HM believes in having a daily disciplined schedule to prevent a beneficial activity from becoming a maladaptive behavior.

Much more is covered in the chapter, including compulsive shopping, but the same rule applies:  to be aware of potential addiction, monitor your behavior and make the appropriate modifications.

© Douglas Griffith and, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

The Cyber Effect

September 7, 2016

“The Cyber Effect” is the title of an important book by Mary Aiken, Ph.D., a cyberpsychologist.  The subtitle of the book is “A Pioneering Cyberpsychologist Explains How Human Behavior Changes Online.”  She is the director of the CyberPsychology Research Network and an advisor to Europol, and has conducted research and training workshops with multiple global agencies, from INTERPOL to the FBI and the White House.  She is based in Ireland.

This book should be read by anyone who spends nontrivial amounts of time in cyberspace.  It should be compulsory reading for anyone whose children use mobile devices.

The internet has had an enormous impact on our lives.  Perhaps some are not aware of this impact, as it has gradually increased its effects on the way we live.  Dr. Aiken defines cyberpsychology as “the study of the impact of emerging technology on human behavior.”  She continues, “It’s not just a case of being online or offline; “cyber” refers to anything digital, anything tech—from Bluetooth to driverless cars.  That means I study human interactions with technology and digital media, mobile and networked devices, gaming, virtual reality, artificial intelligence (AI), intelligence amplification (IA)—anything from cellphones to cyborgs.  But mostly I concentrate on Internet psychology.  If something qualifies as “technology” and has the potential to impact or change behavior, I want to look at how—and consider why.”

Dr. Aiken is not one of those who decry technology as some evil entity that has upended our lives, nor does she see it as something that inevitably leads to utopia.  She writes, “Technology is not good or bad in its own right.  It is neutral and simply mediates behavior—which means it can be used well or poorly by humankind.”  “Any technology can be misused.”

One of her earliest influences was J.C.R. Licklider, a psychologist who wrote a seminal paper in 1960, “Man-Computer Symbiosis,” which predated the Internet and foretold the potential for a symbiotic relationship between man and machine.  Licklider has been one of HM’s idols since HM was an undergraduate, and it has been a lifelong frustration that a true symbiosis is yet to be realized.

As “The Cyber Effect” is such an important book, I plan to devote a post to each of the chapters excluding the first.  The first chapter is titled “The Normalization of a Fetish” and discusses how cyberspace technology has changed sexual behavior.  In addition to fostering new perversions, or at least ones unknown to HM, it explains how cyberspace has expanded contact with others, contacts that would have remained unknown without cyberspace.  Moreover, it has increased the acceptance of formerly proscribed behaviors.  Nothing more will be written in this blog on this topic.  To learn more, read the book, which you should be doing in any case.

Here are the chapters that will have a post devoted to them.  These are the individual topics, which are more informative than the chapter titles:  internet addiction; the effects of cybertechnology on babies; the effects of cybertechnology on children; the effects of cybertechnology on adolescents; romance in cyberspace; cyberchondria, which is hypochondria fostered in cyberspace; the deep web, where illegal activity occurs; and the final chapter, which discusses important topics that need to be considered for the future.


Computers in Our Brains

August 25, 2016

This post is based primarily on an article by Elizabeth Dwoskin in the 17 April 2016 issue of the Washington Post titled “Putting a computer in your brain is no longer science fiction.”  It describes the research done by technology entrepreneur Bryan Johnson at his company Kernel.  It does not appear that Johnson has already put a computer into the brain; rather, he is in the process of designing a computer to put into the brain.  The article also cites work by biomedical researcher Theodore Berger, who has worked on a chip-assisted hippocampus for rats.  This work has yet to advance to humans, and it probably will be many years before any fruits from this research are realized.

This post is filed under transactive memory, which includes posts on using external technology to build a healthy memory.  Now work is progressing on moving computer technology inside the brain.  Of course, anything that assists memory health will be welcomed.

An interesting conjecture is how this new technology would be used.  The statistics reported in the immediately preceding post made HM wonder to what extent people were making use of the biological memory they already had.  It may be that when some people age, their cognitive activity decreases, and it may be this failure to use memory that is the primary cause of dementia.  This appears even more likely given the evidence that some people who have the defining physical features of Alzheimer’s never show any of the behavioral or cognitive symptoms.

So a reasonable question is how many people would benefit from computer implants.  It would be surprising if no one benefited, but it is not a foregone conclusion that everyone would.  Some people might shut down cognitively even given computer enhancements.  Of course, this is just a conjecture by HM.

HM would hope that people would still engage in the activities advocated by HM, to include growth mindsets, meditation, and mindfulness, in addition to general practices for personal health.


Trick or Tweet or Both? How Social Media is Messing Up Politics

July 20, 2016

The title of this post is identical to the title of an article in the Technology section of the July 16-22, 2016 issue of the New Scientist.  Donald Trump has given fact checkers plenty to do over the past eight months.  According to Eugene Kiely, Donald Trump has made an inordinate number of false claims.  One fact-checking site looked into 158 claims made by Trump since the start of his campaign and found that four out of five were at best “mostly false.”

Unfortunately, roughly six in ten US adults get their news primarily from social media, so the issue of accuracy is even more important.  Psychologist Julia Shaw says, “One of the things that give social media potency to impact political views is the immediacy of it.  You might even get an opinion before the information, which can color people’s judgment.”

Should the comics be regarded as social media?  Regardless, Garry Trudeau in the strip “Doonesbury” has noted how often Trump claims to surpass everyone on the planet.
These are direct quotes from Trump:
“NO ONE is more conservative than me!”
“NO ONE  is stronger on the second amendment than me.”
“NO ONE respect women more than me!”
“NO ONE reads the Bible more than me.”
“There’s NOBODY more Pro-Israel than I am!”
“There’s NOBODY that’s done so much for equality as I have!”
“There’s NOBODY who feels more strongly about women’s health issues!”
“NOBODY knows more about taxes than me, maybe in the history of the world!”
“I have studied the Iran deal in great detail, greater by far than anyone else!”
“NOBODY’S ever been more successful than me!”
“NOBODY knows banking better than I do!”
“NOBODY knows more about debt than I do!”
“NOBODY’S bigger or better at the military than I am!”
“I’m the least RACIST person you’ll ever meet!”
“NOBODY knows the system better than me!”
“NOBODY knows politicians better than me!”
“NOBODY builds better walls than me!”
“NOBODY knows more about trade than me!”
“There’s NOBODY more against Obamacare than me!”

The following are positions for which Trump has said that he is both for and against:
Taxing the rich
Raising minimum wage
Nuclear proliferation
Abortion choice
Abortion punishment
Ordering torture
Troops to fight ISIS
Assault weapons ban
Background checks
Guns in classrooms
Legalizing drugs
Ethanol subsidies
Privatization of SS
Defaulting on debt
Invasion of Iraq
Releasing tax returns
Total Muslim ban
Self-funded campaign
Debating Sanders
Iran Deal
Accepting Syrian Refugees


The Machine

May 11, 2016

The second cryptomind discussed in “The Mind Club” is the Machine.  The authors ask us to think of when we are confronted with a malfunctioning piece of technology, like a laptop.  They note that our first impulse is to think, “It gets angry when too many programs are open.”  Similarly, when we are hoping that our car will start on a cold winter morning, we don’t think about complex interactions of carburetor and temperature but instead think of our car as stubborn or unhappy in the cold—and beg it not to make us late for work.  The authors note that the tendency to see mind in technology occurs primarily when it disobeys our desires.  When machines function smoothly, we feel in control.  When they don’t, we turn to mind to help us understand.

Psychologist Carey Morewedge has termed this phenomenon a negative bias in mind perception:  negative events prompt mind perception more than positive events.  He illustrated this phenomenon in an experiment in which participants played the ultimatum game.  In the ultimatum game, one participant is given a sum of money, say $10, and must offer a second participant a cut of it.  If the second participant accepts the cut, they both leave with their respective amounts of money.  However, if the second participant refuses the offer, they both forfeit the money.  According to classical economic theory, the second participant should accept the offer, no matter how small; even if the offer were $1, one should take it because otherwise one would leave with nothing.  In reality, if people are not offered some reasonable amount, they will refuse the offer.
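The payoff logic of the ultimatum game can be sketched in a few lines of Python.  The responder’s rejection threshold here is an illustrative assumption; real participants vary widely in what they will accept:

```python
def ultimatum(total, offer, accept_threshold):
    """Return (proposer_payoff, responder_payoff) for one round.

    The responder accepts any offer at or above her threshold;
    a rejection leaves both players with nothing.
    """
    if offer >= accept_threshold:
        return total - offer, offer
    return 0, 0

# A purely "rational" responder (threshold near zero) takes any positive offer:
print(ultimatum(10, 1, accept_threshold=0.01))  # (9, 1)

# Real participants often reject unfair splits, forfeiting the dollar to punish the proposer:
print(ultimatum(10, 1, accept_threshold=3))     # (0, 0)
```

The gap between those two behaviors—accepting any positive amount versus paying a cost to punish unfairness—is exactly what makes unfair offers feel intentional, which is the effect Morewedge’s experiment exploited.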

In Morewedge’s experiment, study participants played three ultimatum games with three different partners, who (the participants were told) could be all people, all computers, or some combination of the two.  After the participants were presented with the proposed split from their partner, Morewedge asked them to guess whether their partner was a computer or a person.  The partner was always a computer.  When the offer was fair or generous, participants were more than happy to think it was a mindless machine, but when the offer was unfair, they ascribed intention behind it, believing it to result from the cruel calculations of another person.  This phenomenon, where bad outcomes lead people to search for an agent to blame for mistreatment, is called dyadic completion.

People often perceive mind in machines because of anthropomorphism.  We are generally anthropomorphic, seeing everything from the perspective of ourselves.  We have schemas, scripts or outlines for how things should go, for many things in life.  These schemas are unconscious, so we usually don’t realize when we are anthropomorphizing.  Clifford Nass and Youngme Moon found that participants treated computers as if they had gender and ethnicity.  Polite people were polite to test computers.  When the computer asked how it was performing, people were consistently nice, even when it was actually performing poorly.  However, just as with humans, this politeness held only when participants were dealing with the computer “face to face.”  They would bad-mouth one computer to a different computer!

The concept of transactive memory is key to the machine mind, as indeed the machine is mind.  As was noted in the introductory blog post on “The Mind Club,” Wegner articulated the concept of transactive memory.  Transactive memory refers to memories held by fellow humans and to memories held in technology.  Conventional technology involved paper; digital technology is electronic.  We can ask our spouse, or someone else we know well, to remind us of something, or to tell us something about a topic of interest.  However, most of our memory is distributed among a wide variety of digital machines.

Healthy memory distinguishes among three types of memory.  Accessible memories are those that are either internal or can be readily accessed externally.  Some memories are available, in that we know how to find them, but are not immediately accessible to recall.  Potential memory is all the data, information, and knowledge that can be found in our fellow human beings or in technology.  It is through technological artifacts that we are able to access all recorded knowledge that predates us.

There are two senses of the machine mind.  One is how the machine works, which can be difficult and involves problems that usability research is supposed to address.  In another sense, however, the machine mind consists of the totality of recorded human data, information, and knowledge.  The machine mind will increasingly become part of our daily lives.


The Mind Club

May 9, 2016

The title of this post is the title of an interesting and provocative book by Daniel M. Wegner and Kurt Gray.  The subtitle of the book is “Who Thinks, What Feels, and Why It Matters.”  Dr. Wegner is the creator of the concept of transactive memory, which is one of the categories of the Healthymemory Blog.  Transactive memory refers to memories of our fellow human beings and to information held in technology.  This information can be stored on paper or digitally.  I have been an admirer of most of Dr. Wegner’s work, so news of his death was quite disturbing.  He died of ALS, better known as Lou Gehrig’s disease.  This degenerative disease slowly destroyed his ability to walk, to stand, to move, to talk, to eat, and eventually to breathe.  Dr. Wegner had only begun writing this book when he was diagnosed.  He asked his graduate student, Kurt Gray, to finish the work.  Fortunately, he did a highly creditable job.  Should you find these topics especially interesting, it is strongly recommended that you read the original book.

The mind is a difficult concept.  We have direct access only to our own minds, and even then only to a small percentage of the mind (see the healthy memory blog post “Strangers to Ourselves”).  So we need to develop models of the minds of our fellow humans.  And as there are different types of humans, we need to develop models of these different types.  Then there are animals, and many different species of animals.  And there are machines.  The different types of minds were developed on the basis of a large-scale survey regarding how people thought about other minds.  Analyses of these data found that people see minds in terms of two fundamental factors, sets of mental abilities that were labeled experience and agency.  Quotes from “The Mind Club” follow:

“The experience factor captures the ability to have an inner life, to have feeling experiences.  It includes the capacities for hunger, fear, pain, pleasure, rage, and desire, as well as personal consciousness, pride, embarrassment, and joy.  These facets of mind seemed to capture “what it is like” to have a mind—what psychologists and philosophers often talk about when they discuss the puzzle of consciousness.”

“The agency factor is composed of a different set of mental abilities:  self-control, morality, memory, emotion recognition, planning, communication and thought.  The theme for these capacities is not sensing and feeling, but rather thinking and doing.  The agency factor is made up of the mental abilities that underlie our competence, intelligence, and action.  Minds show their agency when they act and accomplish goals.”

Healthy memory apologizes for being so cryptic.  The meaning should emerge as each type of mind is discussed.  Nine types of mind will be discussed.  The first six are well discussed; the last three are seriously flawed.  All this follows in subsequent healthy memory blog posts.


An Increasing Failure to Use Technology to Foster Cognitive Growth

May 7, 2016

Two concepts are central to the healthy memory blog.  One is cognitive growth, which stresses the importance of cognitive growth for healthy memories and a fulfilling life.  The other is transactive memory, which is the use of technology and our fellow human beings to foster cognitive growth.  Consequently, I found an article by Brian Fung in the April 25 edition of the Washington Post titled “New data:  Americans are abandoning wired home Internet” distressing.

According to the article, in 2013, 1 in 10 U.S. households was mobile-only.  Now 1 in 5 U.S. households is mobile-only.  There is also a relationship between household income and mobile-only use, with poorer households being more likely to be mobile-only.  So the problem of the income divide and the effective use of technology is still prevalent.

Regular readers of the healthy memory blog should already understand my discontent, but I shall elaborate for those who are not regular readers.  Mobile computing can be extremely helpful when people are mobile.  However, mobile devices do have some serious shortcomings.

Previous posts have argued that exclusive or excessive mobile computing results in superficial interpersonal relationships (enter “Sherry Turkle” into the healthy memory blog search block).  To do “deep processing” that produces cognitive growth requires at least a notebook and preferably a laptop computer.  This is best done in a quiet location with minimal distractions.  The multitasking that is frequently done with mobile devices results in deficient cognitive processing and can endanger others as well as oneself (enter “multitasking” into the healthy memory search block).


Web of Lies

May 1, 2016

“Web of lies: Is the Internet making a world without truth?” is an article by Chris Baraniuk in the Feb 20-26, 2016 edition of the New Scientist.  The World Economic Forum ranks massive digital misinformation as a geopolitical risk alongside terrorism.  This problem is especially pernicious as misinformation is very difficult to correct (enter “misinformation” into the healthy memory search block to see relevant posts).  Bruce Schneier, a director of the Electronic Frontier Foundation, says that we’re entering an era of unprecedented psychological manipulation.

Walter Quattrociocchi at the IMT Institute for Advanced Studies in Lucca, Italy, along with his colleagues, looked at how different types of information are spread on Facebook by different communities.  They analyzed two groups: those who shared conspiracy theories and those who shared science news articles.  They found that science stories received an initial spike of interest and were shared or “liked” frequently.  Conspiracy theories started with a low level of interest, but sometimes grew to be even more prominent than the science stories overall.  Both groups tended to ignore information that challenged their views.  Confirmation bias leads to an echo chamber.  Information that does not fit with an individual’s world view does not get passed on.  On social networks, people trust their peers and use them as their primary information sources.  Quattrociocchi says, “The role of the expert is going to disappear.”

DARPA, a research agency of the U.S. military, is funding the Social Media in Strategic Communication program, which funds dozens of studies looking at everything from subtle linguistic cues in specific posts to how information flows across large networks.  DARPA has also sponsored a challenge to design bots that can sniff out misinformation deliberately planted on Twitter.

Ultimately the aim of this research is to find ways to identify misinformation and effectively counter it, reducing the ability of groups like ISIS to manipulate events.  Jonathan Russell, head of policy at the counter-terrorism think tank Quilliam in London, says, “They have managed to digitize propaganda in a way that is completely understanding of social media and how it’s used.”  Russell says that a lack of other voices also gives the impression that they are winning; there is no other effective media coming out of Iraq and Syria.  Quilliam has attempted to counter such narratives with videos like “Not Another Brother,” which depicts a jihadist recruit in desperate circumstances.  It aims to show how easily people can be seduced by exposure to a narrow view of the world.

This research is key.  Information warfare will constitute an increasingly large share of warfare relative to kinetic effects.

Panagiotis Metaxas of Wellesley College believes that we have entered a new era in which the definition of literacy needs to be updated.  “In the past to be literate you needed to know reading and writing.  Today, these two are not enough.  Information reaches us from a vast number of sources.  We need to learn what to read, as well as how.”

Some Thoughts on Privacy and Data Security

March 2, 2016

The current iPhone controversy, regarding whether Apple should be required to decrypt the phone of the California terrorist shooters to enable the identification of potential future terrorists, motivated this post.  This information could potentially save an unknown number of lives.  The fear is that personal privacy could be compromised.

I find irony in the way the public regards personal privacy.  On networks such as Facebook, detailed personal information is published.  I frequently wonder why people regard this information as being of any interest to other people.  We frequently read how this information is used against people to preclude employment or to embarrass them.  Yet when the government wants access to information for purposes of national security, and to obtain information that could save lives, there is a large degree of pushback.

I perceive some personal conceit in this concern.  Why do people think the government would have any interest in them?  Personally, I would be flattered to learn that I was under surveillance and to think that the government regarded me as that important.  And I know that they would find nothing to make me personally liable.

But apparently people fear that they have data that the government can use against them.  They perceive the government as evil and they want laws to protect themselves against this evil government.  But it would be the government that enforces these laws.  So why regard this evil government as being trustworthy?  I do not think it would be difficult to find laws in totalitarian states that protect their citizens, but which are never enforced.

And why be concerned only about governments?  I believe that business has more data and will always have more data on me than the government.  There are also individuals who can access information and demand payment or threaten to release information.

Focusing on collection will not work.  Laws should be passed on how this information is used.  Should information be used to embarrass or cause financial loss, the laws should carry severe penalties against persons or organizations, including the government.  Legitimate uses such as prosecuting criminals or preventing terrorist acts would be exempted.  Today criminals are released because of the way information was collected.  This is wrong and is due to the focus of the laws.  Again, laws should be focused on how information is used rather than on how it is collected.


Hey US Teachers, leave those climate myths alone

February 22, 2016

The title of this post is the title of an article by Michael Mann in the Feb 20-26, 2016 edition of the New Scientist.  He summarized an article in the 12 Feb 2016 issue of Science, whose authors are Eric Plutzer, Mark McCaffrey, A. Lee Hannah, Joshua Rosenau, Minda Berbeco, & Ann H. Reid.  Even though 97% of active climate scientists attribute recent global warming to human causes, and most of the general public accept that climate change is occurring, only about half of U.S. adults believe that human activity is the predominant cause.  The U.S. ranked the lowest in this belief among 20 nations polled in 2014.

The article examines how the societal debate in the U.S. affects science classrooms.  The authors found that whereas most U.S. science teachers include climate change in their courses, their insufficient grasp of the science appears to hinder effective teaching.  Generally teachers devote a paltry 1 to 2 hours to this important topic.  Despite the fact that 97% of experts agree climate change is mainly human caused, many teachers still “teach the controversy,” suggesting to students that a sizable “consensus gap” exists.  The survey showed that 7 in 10 teachers mistakenly believe that at least a fifth of experts dispute human-caused climate change.  Although they are supposed to be teaching science, they have insufficient knowledge of the discipline they are teaching.

Michael Mann in his book “The Hockey Stick and the Climate Wars” describes how those with interests in fossil fuels have spent tens of millions of dollars to create the impression of a consensus gap by orchestrating a public relations campaign aimed at attacking the science and the scientists, thus confusing the public about the reality and threat of climate change.  They have also created a partisan political divide on the issue.  The United States is the only advanced country with a major political party denying the reality of climate change.

These climate myths provide an unfortunate example of the effectiveness of Big Lies.  These Big Lies are working their damage in the United States, and not only on the issue of climate change.

These myths on climate change are exacerbating a problem and bequeathing it to our children.  These children need to know the truth so that they can educate their parents.


The Effects of the Digital Age on Cognition

February 20, 2016

Readers of the healthy memory blog should know that the concept of transactive memory includes the use of technology to foster the development of healthy memories and efficiently functioning human cognition.  Someday, I hope to write that book.  It is easy to find references that tout how technology can foster cognition, but reviews of the effects of technology tend to be negative.  The motivation for this post stems from my reading of a book by Mark Bauerlein, “The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future [Or, Don’t Trust Anyone Under 30].”  I don’t recommend this book, as reading it would be a waste of time.  I question the purpose of diatribes that inform us that the world is going to hell in a handbasket.  What are we to do with such a message?  Outlaw technology?  It is incumbent upon the author to offer some solutions.

An earlier blog post, “Notes on Reclaiming Conversation: The Power of Talk in a Digital Age,” reviewed a book by Sherry Turkle of the preceding title.  Ms. Turkle sounded similar alarms, but also offered some solutions.

The same basic theme pervades both books, namely that digital technology leads to a high level of connection to many sources and individuals, but to a superficial level of cognition.  I am convinced that this is a real problem.  Moreover, it is covered by the veneer of technology, which leads many to the conclusion that something valuable and substantive is being accomplished.

Mobile technology has prospered because of a perception that we need to keep connected 24/7.  Is there really such a need?  What is so imperative that it will not wait until we have the time and resources to devote adequate attention to it?  One of the most flagrant examples is mobile apps that allow trading on your smartphone.  Although these might meet a need for sophisticated day traders, for most people all they provide is a tool to lose money more quickly.

There have been a number of posts on Dunbar’s Number (to see them enter “Dunbar’s Number”  into the healthy memory search block).  Dunbar’s Number refers to the number of friends we can effectively have.  Good friends require an investment of time that needs to be carefully considered.

Similarly the capability to gain access to a large amount of information in seconds is limited by our cognitive capacity to process this information.  The problem is that we fail to process information critically.  There is a lack of critical thinking.  Our proneness to be cognitive misers leads us to search for information that is in accordance with our own needs and beliefs.

So what is the solution, or rather, what are some solutions?  First of all, let me suggest previous posts on the Elements of Effective Thinking and on using digital technology to facilitate the use of these elements.  I would also suggest entering “Critical Thinking” into the search block of the healthy memory blog, and considering how technology can foster critical thinking.

Shooter games likely sharpen perceptual motor skills.  Why not games that sharpen the development of cognitive skills?  Why not develop multi-player games that foster decision making, problem solving, risk assessment, collaboration, and so forth?


Technology and Poverty

January 28, 2016

The October 2, 2015 edition of the New Scientist had two interesting articles in the Comment section.  The first, by Federico Pistono, is titled “As tech threatens jobs, we must test a universal basic income.”  An earlier healthy memory blog post, “The Second Machine Age,” reviewed a book by Erik Brynjolfsson & Andrew McAfee titled “The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies,” which predicted that many jobs, including jobs that would be regarded as advanced, will disappear during this second machine age.  Other healthy memory blog posts reviewed books whose authors argued that humanity’s “unique” capacity for empathy would still keep people employed.  I wrote that there would not be enough jobs requiring this “unique” capacity to keep everyone employed, even if these skills could not be implemented with technology.

The comment piece by Pistono stated that it is possible that within 20 years almost half of all jobs will be lost to machines, and nobody really knows how we are going to cope with that.  Pistono writes, “One of the most interesting proposals, that doesn’t rely on the fanciful idea that the market will figure it out, is an unconditional basic income (UBI).”

A UBI would provide a monthly stipend to every citizen, regardless of income or employment status.  A key criticism of the UBI is that it would kill the incentive to work.  However, research cited by Pistono, involving a whole town in Canada and 20 villages in India, found that not only did people continue working, but they were more likely to start businesses or perform socially beneficial activities compared with controls.  Moreover, there was an increase in general well-being, and no increase in alcohol use, drug use, or gambling.

Of course, this research needs to be replicated, but it is good to know that this problem is being researched.  The poverty resulting from large scale unemployment would be devastating.

A second article in the same Comment section, by Laura Smith, is titled “Pay people a living wage and watch them get healthier.”  Paying the lowest earners less than a living wage, which occurs in both the US and the UK, leaves full-time workers unable to lift their families out of poverty.  The problem goes far beyond unpaid bills.

Poverty keeps people from resources such as healthcare and safe housing.  People in poverty experience more wear and tear from stress than the rest of us, they are sicker, and they die earlier.  Children living in poverty are more likely to be depressed and to have trouble in school.  Newborns are more likely to die in infancy.  Poor people are marginalized.  They often live outside the scope of therapeutic, vocational, social, civic, and cultural resources.  This experience of “outsiderness” reduces cognitive and emotional function.  Brain activity associated with social exclusion has been shown to parallel that of bodily pain.

Research addressing the question of whether raising people’s incomes would improve their health looked at the impact of a community-wide income rise when a casino was built on a Cherokee reservation in North Carolina.  The research compared psychiatric assessments of children before and after this event.  Children’s symptom rates began to decline.  By the fourth year out of poverty, the symptom rates could not be distinguished from those of children who had never been poor.


World Economic Forum (WEF) Projects that 5 Million Jobs Will Be Lost to New Technologies by 2020

January 26, 2016

This blog post is based on an article by Jena McGregor in the January 20, 2016 edition of the Washington Post, page A13.  The theme of the 2016 gathering is the Fourth Industrial Revolution, the term the WEF uses to describe the accelerating pace of technological change.  It emphasizes changes that are “blurring the lines between the physical, digital, and biological spheres,” that is, the combination of things such as artificial intelligence, robotics, nanotechnology, and 3-D printing.  It projects that by 2020, 7.1 million jobs are expected to be lost vs. only 2 million jobs gained.  The WEF study predicts different magnitudes of effects depending on gender.  The report estimates that in absolute terms, men will face about three jobs lost for every job gained, whereas women will face more than five jobs lost for every job gained.  Now the astute reader will realize that this breakdown does not obviously square with the overall number of jobs lost, even given the difference in gender losses.  Whether this is due to the WEF report or the report on the WEF report is unknown; my queries to the author were not answered.  Another study, by Oxford University researchers, estimated that 47 percent of U.S. jobs could be taken by robots in the next two decades.
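For what it is worth, the headline figures can be checked with some back-of-the-envelope arithmetic.  This is my own calculation, not from the WEF report, and it treats the reported ratios as if they were exactly 3:1 and 5:1; under that assumption the numbers can in fact be reconciled:

```python
# Back-of-the-envelope check of the WEF figures cited above (in millions).
losses, gains = 7.1, 2.0
net = losses - gains          # 5.1 million, matching the ~5 million headline

# Assume the gender ratios are exact: men lose 3 jobs per job gained,
# women lose 5 per job gained.  With gm + gw = gains and
# 3*gm + 5*gw = losses, solving for the gender split of the gains:
gm = (5 * gains - losses) / 2  # men's share of job gains
gw = gains - gm                # women's share of job gains

# Verify the implied losses add back up to the 7.1 million total.
assert abs((3 * gm + 5 * gw) - losses) < 1e-9
print(net, gm, gw)
```

Under these assumed exact ratios, men would account for about 1.45 million of the gains and 4.35 million of the losses, and women for 0.55 and 2.75 million respectively, so the apparent discrepancy may simply reflect rounding in the reported ratios.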

The good news is that about a third of the skills that will be most desirable in 2020 aren’t even considered important today.  Social skills such as persuasion and emotional intelligence are expected to be more in demand than narrow technical skills.  So are creativity, active listening, and critical thinking.

So the good news is that the new jobs are likely to be more desirable than the old jobs that are being lost.  There will still likely be an increase in unemployment unless other measures are taken, such as shorter work days, many more vacation days, and opportunities for personal development.  I’ve written in previous healthymemory blog posts that when I was in elementary school in the fifties, the prediction was that there would be much more leisure today as a result of technology.  That has not materialized.  Moreover, back then it was unusual for mothers to work.  And the technology that emerged is well beyond the technology that was envisioned.  So why are we working so hard?  A priority needs to be given to quality of life rather than gross domestic product (GDP) (see the healthy memory blog post, “The Well-Being of Nations: Meaning, Motive, and Measurement”).


We’re Back

January 17, 2016

We’ve returned from the Scientific American Bright Horizons cruise to the Western Caribbean and the Panama Canal.  We embarked from Fort Lauderdale.  The first port call, more accurately a tendering call, was to Half Moon Cay in the Bahamas.  Next came Oranjestad, Aruba, then Cartagena, Colombia, before entering the Panama Canal, Gatun Lake, and exiting the Panama Canal, which was a truly memorable experience.  Next came Puerto Limon, Costa Rica.  We were especially impressed by Costa Rica, the country and its fruits.  My wife was overwhelmed by the healthy fruits she found.  We stopped at Georgetown in the Cayman Islands before returning to Fort Lauderdale.

However, the highlight of the cruise was the speakers.  All speakers made multiple presentations.  Dr. Michael Starbird is a University Distinguished Teaching Professor at the University of Texas at Austin.  His opening talk, on the five elements of effective thinking, was truly impressive.  All his lectures concerned effective thinking, addressed through specific topics in mathematics.

Dr. Monisha Pasupathi is a professor of psychology at the University of Utah.  She made interesting presentations on important topics including memory, rationality, emotion,  emotion regulation, and personality.

Dr. Glenn Starkman is a professor of physics and astronomy and director of the Center for Education and Research in Cosmology and Astrophysics at Case Western Reserve University.  His presentations were definitely mind expanding, and he made difficult material accessible and understandable.

Dr. Chris Stringer is the Research Leader in Human Origins and a Fellow of the Royal Society.  His presentations on human evolution were enlightening.  There have been many crooked roads on the way to human evolution.

Our group attending these presentations was equally impressive.  They were knowledgeable and highly intelligent.  When asked how many invested in the stock market, a fair number of hands were raised.  But when asked whether they played in the ship’s casino, not a single hand was raised.  These were people with growth mindsets who were enjoying the process of growing their minds.

There is much material to ponder here regarding future healthy memory blog posts.  Of both obvious and immediate interest are Dr. Stringer’s statements about what led to Homo sapiens and why it succeeded.  The development of large enough groups was important, but the key to success was what we term in the healthy memory blog transactive memory.  Transactive memory is the information shared among different memories.  Many minds are needed and there needs to be sharing of information among these minds.  Once spoken language was developed, written language increased the storing power of transactive memory, and the development of the printing press greatly expanded access to transactive memory.  Today we have the net (see the healthy memory blog post “Why the Net Matters,” which is on the book of the same title by David Eagleman).  Note also that the remainder of the title is “Six Easy Ways to Avert the Collapse of Civilization.”  Please reread and assess on your own whether these easy ways to avert the collapse of civilization are easy, and reassess the risk.


Notes on “Reclaiming Conversation: The Power of Talk in a Digital Age”

December 26, 2015

“Reclaiming Conversation” is a book by Sherry Turkle.  She focuses on smartphones in particular.  As a matter of personal edification, and as the user of a dumb cell phone, I found this book valuable in understanding the popularity of smartphones and texting.  There are several reasons I do not use a smartphone.  I find the screen size much too small; I require much more context in what I view.  I also need a conventional keyboard; those on smartphones are much too small.  Similarly, I refuse to text and do not read texts.  I also find that smartphones add to an already existing information overload.  I do not like interruptions, and I live in a world where timeliness will not suffer if I wait until I am free to devote full attention to messages and material that are important to process.  Having read Turkle’s book, I have no desire for a smartphone, and should I ever purchase one, I’ll use it sparingly.

I’ve long been baffled trying to understand why people text when it is so much easier to talk.  Most teenagers send around 100 texts per day, so there must be some reason this is so popular.  Apparently, there is a sense of control when one texts.  One can read what one has written before it is sent, and once it is sent, one can wait to see if, and who, responds.  So many feel that texting provides a sense of control that they regard as important.

In addition to needing to feel in control, there also seems to be a compulsion to be connected.  According to Turkle, 44% of users never turn off their phones.  Although I understand the data indicating that people feel a need to be connected most of the time, I still fail to see why they feel this necessity.  The healthy memory blog has written posts about Facebook and Dunbar’s number (see the healthy memory blog post “How Many Friends is Too Many”).  Dunbar is an evolutionary psychologist who calculated the maximum number of relationships our brain can keep track of at one time to be 150.  Before smartphones, Dunbar estimated that there were about five people who were close and whom we spoke with frequently, and about 100 acquaintances we spoke with about once a year.  With the exception of the 150 number, which is a biological constraint, these numbers have apparently gone up drastically since the advent of the cell phone.  Friendship requires an investment of time.  We can only afford a limited number of good friends.  A large number of friends implies a large number of superficial relationships.  It appears that in the smartphone era, quantity is valued over quality.

There also appears to be an aversion to solitude.  An experiment was run in which participants were asked to sit by themselves for fifteen minutes.  They were provided a device with which they could shock themselves, although all the participants had indicated that they would not use the device.  Nevertheless, many of the participants shocked themselves after only six minutes.  I find this result extremely depressing: to think that people would find solitude so aversive that they chose to give themselves a shock to cope with being alone.  Solitude is important for both personal and intellectual development.  We need to spend time with ourselves.

One researcher reports a 40% loss of empathy in the past 20 years.  The healthy memory blog post “A Single Shifting Mega-Organism” noted that throughout our lives our brain circuitry decodes the emotions of others based on extremely subtle facial cues.  Geoff Colvin and many others regard empathy as a uniquely human skill that will prevent computers from pushing humans out of the job market.  Well, empathy apps are being developed.  But empathy is developed best during conversations with our fellow humans.  Excessive use of smartphones is inhibiting, if not precluding, this development.

Smartphone use implies multitasking, and whenever we multitask the performance on component tasks declines.  If you do not believe this, then read the 18 healthy memory blog posts on the topic.  The use of smartphones during classes detracts from the lecture or the topic being discussed.  Were I still teaching I would not allow the use of smartphones during classes.

There is a chapter on smartphones and romance that I found extremely depressing.  Most of the time I am envious of the young in this digital age, but not in the case of romance.  In short, smartphones take the romance out of romance.

I disagree with what Turkle writes about Massive Open Online Courses.  She pits conversations against these courses and ignores their genuine benefits.  First of all, a Massive Open Online Course does not preclude conversations.  Secondly, conversations, as important as they are, need not be a necessary component of all courses.

At the end of the book Turkle writes about humanoid robots and robotic pets.  I did not see the relevance of these topics to the central thesis regarding conversations.

So, having stated the problem, what can be done about it?

First of all, recognize the costs of multitasking and do a cost-benefit analysis of where smartphone use is appropriate.  Then establish rules or guidelines.

It is noted that many employees of social media companies make it a point to send their children to technology-free schools.  And there is the following quotation from Steve Jobs’s biographer: “Every evening Steve made a point of having dinner at the big long table in their kitchen, discussing books and history and a variety of things.  No one ever pulled out an iPad or computer.”  He did not encourage his own children’s use of iPads or iPhones.

“Reclaiming Conversation” is extensively documented.  Touching a note number takes you to the notes.  Unfortunately, there is no DONE button enabling an easy return to the text.


Cultivating & Effectively Exploiting Human Capital

December 19, 2015

My favorite chapter in “Why the Net Matters” is Chapter 6, Cultivating Human Capital.  Regular readers might recognize this as one of my favorite topics.  Human capital refers to knowledge and know-how, which are key to the success of any country.  The chapter begins by discussing the benefits of crowdsourcing.  For example, one project tackles the computationally difficult problem of protein folding by turning it into a game played by thousands.  CSTART, which stands for Collaborative Space Travel and Research Team, is an open-source effort to get a manned craft to the moon.  CSTART is a non-government, non-profit, collaborative space agency with the mission of “space exploration, by anyone, for anyone.”

There are many resources on the net for cultivating human capital.  There is Wikipedia.  There is MIT’s open courseware, which is open to any self-learner.  Rice University launched Connexions, which features 17,000 modules woven into 1,000 collections for levels from children to professionals, in fields ranging from electrical engineering to psychology.  There is also Khan Academy.  By no means is this an exhaustive list.

Massive Open Online Courses (MOOCs) are being made available through many universities.  Although courses are usually free, there is the matter of getting credit for successfully completed courses.  These issues are being worked out.  However, sometimes it is better to audit a course before taking it for credit.  I had a friend who did this for his calculus courses.  He would first audit a course, and then take it for credit.  He earned straight As in these courses.

Of course, education is appreciated most by those who are growth minded.  In the lingo of the healthy memory blog, this is transactive memory, which is knowledge available via technology and fellow humans.

It can be argued that we are much better at cultivating human capital than at exploiting it.  Although crowdsourcing is a good example of effectively exploiting human capital, I spent my career with the privilege of working with brilliant individuals, yet this talent was not effectively exploited and was frequently ignored.  Bureaucracies in both government and private companies stifle this human capital.  Management does not appreciate, and sometimes cannot appreciate, this potential, so it remains unexploited.  Bureaucracies excel at growing themselves rather than understanding and making use of human capital.  Bureaucracies also adversely impact the cultivation of human capital.  I’ve heard the argument, and I believe it, that a significant factor in the ridiculously increased cost of higher education is the growth of unnecessary bureaucracy.  Bureaucracies need to be studied and changed so that their goal is the cultivation and exploitation of human capital rather than their own growth.

© Douglas Griffith and, 2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Review of Why the Net Matters

December 17, 2015

The full title is “Why the Net Matters:  Six Easy Ways to Avert the Collapse of Civilization,” by David Eagleman.  This book is recommended to all who have growth mindsets.  It provides a good vehicle for growing one's mind.  The healthy memory blog has had a variety of posts on technology, the potential it offers, and the possible threats it portends.  Eagleman poses the question, “Why Do Civilizations Collapse?”, and discusses six reasons previous civilizations have collapsed.

Epidemics have wiped out some civilizations.

Knowledge has been destroyed.  He cites the burning of the Library of Alexandria, which at the time was the sole repository of available knowledge.  The writings of the Mayans were destroyed by the colonizing Spaniards.  Many other examples are provided.

Natural disasters in the form of wind, water, fire, and quakes have toppled carefully built civilizations in a day.

Tyrants have destroyed civilizations and have stunted the development of their own civilizations.

The resources required to sustain a population are not met.  Eagleman notes that these are not mutually exclusive causes.  Frequently different causes interact to wipe out the civilization.

Anthropologist Joseph Tainter suggests that societies fail because they do not change their fixed designs for solving problems.  Arnold Toynbee noted that civilizations find problems that they cannot solve.  In other words, the societies collapse due to insufficient human capital.

Eagleman argues that the net provides the basis for avoiding all of these causes of the collapse of previous civilizations.  No guarantees are provided, and unless the net is used to advantage, these causes can recur.  Moreover, he does identify new threats.

He discusses four ways that the net can go down.

The first is through cyberwarfare, a threat of which we are all aware.

The second is by cutting cables.  He identified cases of cable cutting of which I had been entirely unaware.

The third is by political mandate.  In other words, governments shut down the net.

The fourth is via space weather.  Satellites have been disabled by solar flares, but the threat goes beyond satellites.  When a massive solar flare erupts on the sun, it can cause geomagnetic storms on earth.  The Carrington flare, which occurred in 1859, sent telegraph wires across Europe and America into a sparking, fizzing frenzy.  It boggles the imagination to consider the damage that would occur were such a flare to occur today.  Theoretically, a major solar event could melt down the whole net.

Eagleman proposes a seed vault for the net.  There is a Global Seed Vault in Svalbard, Norway.  It holds duplicate samples of seeds held in gene banks worldwide.  If a nuclear winter were to wipe out all the crops on the planet, future generations could reboot the agricultural systems.  Eagleman proposes a similar vault for the net.

In short, this is a good read for a growth mindset.

Anne Applebaum's Column on Facebook

December 14, 2015

The title of her column was “Undoing Facebook's damage.”  Anyone who has read any of my sixteen previous posts about Facebook should be aware that I am not a fan.  However, I must applaud Mark Zuckerberg and his wife on their pledge to give away $45 billion.  Nevertheless, I also applaud Anne Applebaum for her column.  Here is her advice:  “…use it to undo the terrible damage done by Facebook and other forms of social media to democratic debate and civilized discussion all over the world.”  She goes on to say that weak democracies suffer the most.  Given the extensive damage done in the USA, that is an extraordinary amount of damage.  Just let me cite one example:  the conversion of Muslims to radical jihadism.  This is a problem most acutely felt by Muslims in general, and by the parents of those converted in particular.

Of course, this was not Zuckerberg's intention.  Rather, it is an unintended and rather extreme consequence.  Applebaum goes on to write, “The longer-term impact of disinformation is profound:  Eventually it means that nobody believes anything.”

Readers of the healthy memory blog should be aware that it is extremely difficult to disabuse people of their false beliefs.  Moreover, there are organizations that produce false information.  This has become an activity with its own name, agnogenesis.

So an activity is needed to counter agnogenesis.  Disagnogenesis?  Please help, Mr. Zuckerberg.

A Single Shifting Mega-Organism

November 19, 2015

A single shifting mega-organism is how Dr. Eagleman describes our species in “The Brain.”  He does this because we are a social species, and an enormous amount of brain circuitry has to do with other brains.  Consequently we have a new field of research, social neuroscience.  I would add that our shifting mega-organism includes not only the living, but also the dead.  Through the artifacts of technology, we can learn from those who have passed away.  Information resident in technology and in our fellow human beings comes under the general rubric of transactive memory.

Throughout our lives, our brain circuitry decodes the emotions of others based on extremely subtle facial cues.  Research has shown that people viewing a photo of a smile or a frown produce short periods of electrical activity indicating that their own facial muscles are moving, effectively mirroring the smile or frown they are viewing.

There is a pain matrix in the brain where pain is processed.  The precipitating event activates different areas of the brain operating in concert to produce the feeling of pain.  When you watch someone in pain, the parts of your pain matrix involved in the emotional experience of pain are also activated.  This provides the basis for empathy.  You literally feel the other person's pain.  We are able to step out of our shoes and into the shoes of another, neurally speaking.  Empathy is an important skill.  Having a better grasp of what someone is feeling gives a better prediction about what they'll do next.  This is true of social pain as well as physical pain.  Social pain activates the same brain regions as physical pain.

If empathy worked all the time, then we would be a much more functional species.  Unfortunately this single shifting mega-organism exhibits warfare between and sometimes among different parts.  Outgroups are identified for violence even when those outgroups are defenseless and pose no threat.  This violence has occurred throughout recorded history and likely before history was recorded.  Starting in 1915 more than a million Armenians were killed by the Ottoman Turks (accurately portrayed in the movie “The Cut”).  The Japanese invaded China and killed hundreds of thousands of unarmed civilians in 1937.  Then there was the infamous German killing of many millions of Jews in the Holocaust during World War II.  In 1994 the Hutus in Rwanda killed 800,000 Tutsis, many with machetes.  Between 1992 and 1995, during the Yugoslavian War, over 100,000 Muslims were slaughtered in violent acts known as “ethnic cleansing.”  In Srebrenica, over the course of ten days, 8,000 Bosnian Muslims were shot and killed after United Nations commanders expelled them from the compound in which they had sought safety.  Women were raped, men were executed, and even children were killed.  Today we regularly see atrocities committed by ISIS.

Itzhak Fried, a neurosurgeon, has called these atrocities examples of Syndrome E (E for Evil).  Syndrome E is characterized by a diminished emotional reactivity, which allows repetitive acts of violence.  It includes hyperarousal, which is a feeling of elation in doing these acts.  There is group contagion.  Everyone is doing it, and it catches and spreads.  Compartmentalization exists in which one can care about his own family yet perform violence on someone else's family.  This suggests that this is not a brain-wide change, but instead involves areas involved in emotion and empathy.  So a perpetrator's choices are run by the parts of the brain that underlie logic, memory, and reasoning, but not the networks that involve emotional consideration of what it is like to be someone else.  According to Fried, this equates to moral disengagement.  People are no longer using the emotional systems that under normal circumstances steer their social decision making.

So, now we have a name and an explanation.  What is needed is a means of prevention or a cure!

The Future of Technology and the Future of Terrorism

October 10, 2015

These topics are addressed in The New Digital Age:  Transforming Nations, Businesses, and Our Lives, a book by Eric Schmidt and Jared Cohen.  Eric Schmidt, Ph.D., is the executive chairman of Google.  He has a long history in the technology field.  Jared Cohen is the founder and director of Google Ideas.  He is a Rhodes Scholar and the author of two books, Children of Jihad and One Hundred Days of Silence.  From 2006 to 2010 he served as a member of the secretary of state's Policy Planning Staff and as a close advisor to both Condoleezza Rice and Hillary Clinton.  He is now an adjunct senior fellow at the Council on Foreign Relations.  So it is clear that these gentlemen are experts in the areas of which they write.  Moreover, they are widely traveled, having been to both war-torn Iraq and Afghanistan.

For example, in Afghanistan they learned of an entire village that revolted against the Taliban when the extremist group tried to seize their phones.  In Kenya, they visited Maasai nomads in Loodariak who live without electricity or running water, but carry, along with their swords, mobile devices that they use to pay for items at the market.  In North Korea, citizens risk imprisonment in the gulags and in some cases death, which can also be applied to three generations of relatives, in order to obtain smuggled phones and tablets and to make extremely risky trips to the Chinese border just to capture a signal.

There is simply too much material here to even attempt to summarize.  Descriptions by the experts on the development of technology can certainly be regarded as authoritative.  There are chapters on Our Future Selves; the Future of Identity, Citizenship, and Reporting; the Future of States; the Future of Revolution; the Future of Terrorism; and the Future of Conflict, Combat, and Intervention.  If you are prone to worrying, you might want to reconsider reading this book, for there is much to worry about, many nightmare scenarios.  Nevertheless, the discussions of cyberwarfare are detailed and informative.

Central to the discussion of terrorism are the questions of what makes a person a terrorist and how terrorism can be fought.  General Stanley McChrystal draws on his experience commanding troops against terrorists to offer these suggestions:  “What defeats terrorism is really two things.  It's the rule of law and then it's opportunity for people.”  Young people need to be provided with context-rich alternatives and distractions that keep them from pursuing extremism.  “Outsiders do not need to provide content, they just need to create the space.”

I think highly of the general's ideas and recommendations.  However, I don't think they provide a complete solution.  The terrorists who flew planes into the World Trade Center and the Pentagon were well educated and well off.  They had opportunity and context-rich alternatives.  These people need to be addressed at another level, with helpful narratives to replace their distorted versions of reality.

The authors do identify the Achilles heel of terrorism, and that is technology itself.  To remain hidden, Osama bin Laden had to stay off-line to avoid capture.  But when he was found, his flash drives and hard drives contained a trove of information for fighting the terrorists.

The authors remain optimistic.  They are especially optimistic about the future of reconstruction.  Once disasters or attacks strike, if communications technology is set up, enough has been learned about recovering from these disasters that recovery, if done right, can be accomplished with increasing efficiency.

The authors note that there are physical and virtual civilizations.  They note that their case for optimism lies not in sci-fi gadgets or holograms, but in the check that technology and connectivity bring against the abuse, suffering, and distraction in our lives.

I hope the authors are correct, and they certainly know more than I do.  But there remains the potential of technology to be used by totalitarian regimes to control and abuse their populations.  RFID chips could be implanted in people so that their locations would always be known, and other technology could provide information on their activities.  So, I hope the authors are correct and that technology will be used for good rather than evil.

We Can’t Rely On Science Alone to Make Us Better People

October 8, 2015

The title from another article in the September 26, 2015 New Scientist was chosen as the title to this blog post.  The conclusion to this article can be found in its first two sentences.  “Our sense of right and wrong is often inadequate for modern challenges.  But the combination of rationality and humanity can lead us to more effective morality.”

The immediately preceding healthy memory blog post made the point that computer technology could be used to compensate for the narrow focus of empathy.  Of course, this technology would be drawing upon both science and mathematics.

I was encouraged to learn of an organization whose aim is to optimize the good we can do by quantifying the outcomes of our actions.  The name of this organization is the Centre for Effective Altruism in Oxford, UK.  Rather than continuing this post, it might be better for you to go to its website and explore its activities.

The Shortcomings of Empathy

October 7, 2015

Previous blogs have included many good comments on empathy.  Perhaps one of the primary ones is that humans excel at empathy and computers are short on empathy.  Paul Bloom, a psychologist at Yale University, says that people who think that empathic concern is an unalloyed force for good are wrong.  The problem is that empathy is a spotlight and is very narrow.  It illuminates the suffering of a single person rather than the fate of millions.  It is more concerned with the here and now than with the future.  Bloom goes on to say, “It's because of empathy that we care more about, say, the plight of a little girl trapped in a well than we do about potentially billions of people suffering or dying from climate change.”  According to the article “Morality 2.0” by Dan Jones in the September 26, 2015 New Scientist, empathy's shortcomings are compounded by the fact that we end up pointing its beam on causes that come into our field of view.  These are typically the most newsworthy moral issues rather than those where we can do the most good.

There is also a general belief that our brains are wired to be empathic.  This accounts for our success as a species.  But, again, the problem is the narrowness of our empathy beam.  Conflict among groups, be they tribes, nations, religions, or even professional organizations, is the rule rather than the exception.  Our record is one of the abuse and even the enslavement of others whom we believe “do not belong.”

The New Scientist article discusses a variety of means of prodding humans to make more meaningful moral choices.  It concludes with the following statement:  “Moral issues are complicated and hard, and they involve serious trade-offs and deliberation.  It would be better if people thought more about them.”

It strikes me that non-empathic computer technology might be of considerable assistance.  The problem of addressing the wide variety of moral needs in an efficient manner is an enormous computational task, one that is certainly beyond an individual human's intellect, and is perhaps beyond the capacity of the collective intellect of humanity.  Humans could program their empathic concerns into computers.  Computers could then compute enormous cost/benefit analyses.  Humans could then discuss and debate how resources could best be used to address these human and planetary needs.
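To make the idea concrete, here is a toy sketch of the kind of cost/benefit triage a computer could perform dispassionately.  Every cause name and number below is invented by me purely for illustration; none of it comes from the New Scientist article:

```python
# Hypothetical moral triage: rank causes by benefit delivered per dollar.
# All names and figures are invented for illustration only.
causes = {
    "single dramatic rescue": {"cost": 1_000_000, "benefit_units": 1},
    "disease prevention":     {"cost": 1_000_000, "benefit_units": 400},
    "climate mitigation":     {"cost": 1_000_000, "benefit_units": 10_000},
}

# Sort causes from most to least benefit per dollar spent.
ranked = sorted(causes.items(),
                key=lambda kv: kv[1]["benefit_units"] / kv[1]["cost"],
                reverse=True)

for name, c in ranked:
    print(name, c["benefit_units"] / c["cost"])
```

The spotlight of empathy would fund the dramatic rescue first; the dispassionate ranking puts it last.  Real analyses would need defensible measures of benefit, which is exactly where the human discussion and debate would come in.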

Humans Are Underrated

September 29, 2015

Humans Are Underrated:  What High Achievers Know That Brilliant Machines Never Will by Geoff Colvin purports to relieve any concerns we might have about being replaced by computers.  His argument is that the human understanding of interpersonal relationships and empathy are essential skills that humans have that will never be replaced by computers.  I would also argue that the human understanding of interpersonal relationships and empathy are skills that are limited to small groups.  The history of the species is one of warfare and conflicts, to include enslavement and attempts at exterminating other groups.  He contradicts himself by also stating that no one should ever say what computers can't do.  However, even if computers can never achieve empathy, there will still be a massive displacement of humans by computers.  If this is your primary interest, then you should read another book reviewed in the immediately preceding healthymemory blog post, The Second Machine Age:  Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson & Andrew McAfee, a book that addresses the problem and proposes solutions in an accurate and thorough manner.

Nevertheless, there is much of value and interest in Colvin's book.  I shall hit some highlights here and address some other topics in future healthy memory blog posts.

He argues that our brains were built for understanding and interacting with others.  He argues, correctly, that empathy is the foundation of the other abilities that increasingly make people valuable as technology advances.

Colvin also notes that although computers will never be able to incorporate empathy or other interpersonal skills, IT can nevertheless be used to train interpersonal skills.  Many examples are taken from research done for the military.

He also writes of the importance of narratives.  This is an especially important topic and warrants its own future post.

Colvin makes a compelling argument that females have better interpersonal and empathic skills than do males.  The number of females on a team contributes positively to the performance of that team.  And the best teams consist exclusively of females.  So it is likely that females will provide the lead in the future.  We are already seeing movements in that direction, as there is a higher percentage of females in college than males and a higher percentage of female graduates.

Colvin ends on an optimistic note, encouraging us all to grow and improve.

The Second Machine Age

September 26, 2015

The Second Machine Age:  Work, Progress, and Prosperity in a Time of Brilliant Technologies is the title of a book by Erik Brynjolfsson & Andrew McAfee.  If I needed to make a list of required readings for this blog, this book would most definitely be on it.  The authors are from MIT's Sloan School of Management.  One of the reasons I am recommending this book is that it is an excellent example of first-rate scholarship.  The second reason is that it provides an understanding of why middle class wages are not keeping up with increases in economic productivity.  Perhaps more importantly, it discusses the future and the choices that will confront us.  On one hand, the future could be the enjoyable paradise that was promised to us, whose absence I complain about in all of my Labor Day post rants.  On the other hand, the future could be a virtual nightmare.

The book begins by explaining the difference between the Second Machine Age, which we are now in, and the First Machine Age.  Human social development remained relatively stagnant until the current century, during which it has exploded.  There are three reasons for this explosion.

The first is Moore's Law, which characterizes the exponential growth of technology.  One chapter discusses the second half of the chessboard.  This is where exponential growth truly jumps.  In other words, we haven't seen anything yet.
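The "second half of the chessboard" comes from the old wheat-and-chessboard story: one grain on the first square, doubling on every square after.  A quick back-of-the-envelope calculation (my own illustration, not from the book) shows why the second half dwarfs the first:

```python
# Wheat-and-chessboard arithmetic: square n holds 2**(n-1) grains,
# doubling from a single grain on square 1.
first_half = sum(2**(n - 1) for n in range(1, 33))    # squares 1-32
second_half = sum(2**(n - 1) for n in range(33, 65))  # squares 33-64

print(first_half)                 # 4294967295 -- about 4.3 billion grains
print(second_half // first_half)  # 4294967296 -- the second half holds ~4.3 billion times more
```

The entire first half of the board is a rounding error next to the second half, which is the sense in which "we haven't seen anything yet."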

The second reason is digitization.  It is possible to digitize an enormous number of items.  This digitization enables the benefits from the exponential explosion.

The third reason has to do with the combinatorial explosion of different technologies.  There are so many ways that digital products can be linked together that its potential is almost incomprehensible.

The reality is that technology will rapidly take over more and more jobs done by humans.  The authors are strongly of the opinion that humans should do what humans do best and that machines should do what machines do best.  Of course, as machines take over more and more tasks, there will be less for humans to do.  However, humans will always have certain unique capabilities; I call them our special sauce.

Nevertheless, there will be fewer and fewer tasks that need to be done in the future.  The authors take us into the future and offer different ways of dealing with this problem.  One way of looking at this is that the problem can be used as an opportunity to provide humans with more free time for personal growth and enjoyment.  However, unless it is dealt with properly, we could have disenfranchised humans resorting to drugs and crime.

So even if you do not appreciate first-rate scholarship, this book should be read so that you understand the problems we are currently dealing with, and so that you will understand how to handle the future so that it becomes a virtual paradise rather than a virtual hell.

2015 Labor Day Post

September 4, 2015

Every Labor Day I go back to my boyhood and remember what future was predicted then for us to be enjoying today.  This was the fifties and at that time it was very unusual for mothers to work outside the home.  The basic prediction was that advances in technology would result in significant leisure time for everyone.   Back then no one dreamed of anything like a personal computer, the internet, iPADs, or wifi.  In other words, technology went far beyond what was imagined.  So I ask again, what I’ve asked in every healthy memory blog post for Labor Day, “Why Are We Working So Hard?”  Today both marriage partners are working.  The predicted increase in leisure time has not materialized.  And we in the US work more hours than those in most advanced countries.  Often this announcement is made with pride, when it should be uttered in shame.

Some of the answers to the question, “why are we working so hard,” can be found in the three immediately preceding healthymemory blog posts.  “The Wellbeing of Nations:  Meaning, Motive, and Measurement” explained why the primary metric for measuring economies, the Gross Domestic Product (GDP), is seriously flawed.  This metric fails to capture many factors that make for well-being and happiness.  Moreover, it requires that economies continue to grow and expand.  Eventually the capacity for growth of the GDP will be limited and the resources for continuing this growth will be depleted.  The blog post also explained that this is an extremely difficult topic and the work in this area is still in its early stages.  Nevertheless, it has begun, so let us hope it will continue.

The healthymemory blog post “Behavioral Economics” reviewed how classical economics is based on the model of a rational human.  There is ample evidence that we humans are not rational.  Behavioral economics is devoted to identifying behaviors that lead to desirable outcomes.  Again, there is much work to do, but at least it has started.

The  blog post “Why Information Grows”  presents a novel view of what makes economies successful.  The answer is knowledge and know how.  Again, these ideas are very new, but they offer the potential to guide us in the right direction.

Labor Day is a holiday, but  unfortunately it signals the end of summer and the traditional time for vacations and recreation.  I would suggest that Memorial Day, a holiday for the somber remembrance for those who have died fighting for our country, be switched with Labor Day.  Then Labor Day would signal the beginning of vacation and recreation time.

Nevertheless, as Labor Day is a holiday, let us engage in a fantasy so we can enjoy the holiday.  First of all, there would be a heavy investment in education, which would be free at all levels.  Moreover, education would continue throughout our lives.  This provides both for personal growth and facilitates the advancement of new technologies.  There would be ample free time.  Medical care would be guaranteed and free, so people would not need to work for medical coverage.  People could drop out from time to time so that they could simply enjoy leisure time.  They could take classes in anything that caught their fancy and that they found enjoyable.  Retirement, per se, would become obsolete as people would continue to learn and grow throughout their senior years.

Let Me Think It Over

August 19, 2015

“Let me think it over” is something we should say in response to any proposition other than the most trivial.  Included here are conversations with ourselves.  If we have an idea, we should think it over before acting on it.  Whenever we read, hear, or think of something, we are only accessing an extremely small portion of our memory.  Our conscious awareness is quite limited and the vast majority of cognition occurs below our level of awareness (see the healthy memory blog post “Strangers to Ourselves”).  Moreover, the amount of information we are able to access at any given time is quite limited.  Trying to recall something, or thinking about something at a different time, should yield some new information.

Think of your brain as a large corporation.  You are the CEO at its executive headquarters.  Most of this corporation operates below your level of consciousness.  So not only is information stored, but information is also processed at this nonconscious level.  After you have finished your initial consideration of a topic, other parts of this corporation will continue processing.  Allowing time to think something over allows this nonconscious processing to occur.  Perhaps the best example of this nonconscious processing occurs after you have tried, but failed, to remember something.  Some time later, perhaps even the next day, what you were trying to remember pops into your conscious awareness.

Memory theorists speak of accessible memory, which is information we can easily remember.  Then there is information which we cannot access at a particular time, which is nevertheless available in memory.  It might become accessible during another recall attempt, or after detailed search and processing by your unconscious memory.   This is called available memory.

Then there is also transactive memory.  Transactive memory is memory that is not stored in our own brains, but exists in the brains of fellow humans or technology.  So we can speak of accessible transactive memory which is information we cannot recall but we know how to look it up or whom to ask.  Available  transactive  memory is information that we know exists, but that we need to conduct some research to find it.

I have lost money because I failed to think something over.  Had I just done some quick research on the internet, I would not have spent money on unnecessary repairs.  I fear this has happened more than once.  I have suffered undesirable consequences from failing to question someone making a proposal, or from failing to adequately think over my own ideas.

More on Revising Beliefs

August 10, 2015

This is the third post in a series of posts on Nilsson’s book, Understanding Beliefs.  Nils J. Nilsson, a true genius who is one of the founders of artificial intelligence, recommends the scientific method, as the scientific method is the primary reason underlying the progress humans have made in the past several centuries.  We know from previous healthy memory blog posts that beliefs are difficult to change.  Yet we inhabit an environment in which there is ongoing dynamic change.  Moreover, modern technology accelerates the amount of information that is being processed and the amount of change that occurs.

The immediately preceding healthy memory post, “Revising Beliefs,” expressed extreme skepticism that there was sufficient sophistication among the public to implement the scientific method on a large scale in the political arena. Suppose this is indeed the case.  Suppose the world will be characterized by increasing polarization so that little or no progress can be made.  What is a possible remedy?

Here I wish that Nils J. Nilsson would write a second book on how technology, in the lingo of the healthy memory blog transactive memory, might be used to address this problem.  During the Cold War there was a movie, Colossus:  The Forbin Project.  At this time there was a realistic fear that a nuclear exchange could occur between the United States and the Soviet Union that would obliterate life on earth.  In the movie the United States has built a complex, sophisticated computer, Colossus, to manage the country's defenses in the event of a nuclear war.  Shortly after Colossus becomes operational it establishes contact with a similar computer built by the Soviet Union.  These two systems agree that humans are not intelligent enough to manage their own affairs, so they eventually take control of the world.

Perhaps we are not intelligent enough to govern and we need to turn the job over to computers.  Kurzweil has us becoming one with silicon in his Singularity, so we would be as intelligent as computers.  Suppose, however, that computers were infected with human frailties.  In Bill Joy’s essay “Why the Future Doesn’t Need Us,” we are eliminated by intelligent machines.  But perhaps he is projecting human desires on computers.  Perhaps they would not be motivated to dominate, but rather to assist.  Or perhaps this feature would be incorporated by AI developers offering this solution to a country, or the world, locked in gridlock.

So here is my plea to AI researchers and Sci-fi authors.  Please take this concept and run with it.

My iPAD at the Association for Psychological Science Meeting

June 7, 2015

I hope that regular readers of this blog know what transactive memory is.  This iPad certainly doesn’t, as it repeatedly changes my spelling to “transitive.”  Transactive memory refers to memories that are held not in our biological brains, but rather in our fellow humans or in technology, which can vary from paper to computers.  I think the APS did a splendid job of putting the program on the iPad.  It had the schedule with links to paper abstracts and to locations, which made it easy to find the presentations.  The primary failure I experienced was not the iPad’s, but my own failure to consult it.  I relied on my biological memory and arrived late for an interview with Steven Pinker that I wanted to attend.

I took notes the old fashioned way on a paper pad.  I still lack the proficiency to enter notes on the iPAD keypad, and my writing on the iPAD is even worse than my writing on a paper pad.  But I needed to take fewer notes as I could use the iPAD later to enhance the notes.

I also thought of how future technology could change the convention.  For example, the actual presentations could be streamed to mobile devices, so we could still interact with the speaker without being physically present in the room.  However, I doubt this will ever happen.  The convention could be attended live without ever leaving home.  Of course, the flesh component of the meeting would be missing, and these conventions are money makers for the societies.  Still, they could charge fees for participating, and the savings in travel and hotel fees would be enormous!

In an earlier state of technology, similar conventions did take place.  In 1999, 2002, and 2005 CybErg conferences were held remotely.  The topic of these conferences was human factors and ergonomics.  The idea originated with colleagues from the southern hemisphere, where travel is especially troublesome.  I actively participated in all of these conferences and actually won an award for my active participation in 2005.  I found it interesting learning about research throughout the world.  Third world countries had some interesting ergonomic problems that we in the advanced world would never consider.  Unfortunately, there has not been another meeting since 2005.  This is understandable, as these meetings constitute quite a bit of work for the hosting countries.  Yes, there needs to be a physical host.

If anyone has had any experience with similar meetings, I would be interested in learning about them.

Wired Millennials Still Prefer the Printed Word

March 27, 2015

This is the title of a front-page article in the February 23 Washington Post written by Michael S. Rosenwald.  It took me by surprise.  I am a Baby Boomer, and I am transitioning to the iPad and loving it.  According to the article, 87% of college textbooks were print books.  I can understand why there would be a preference for conventional textbooks.  But the article also said that students preferred conventional books for fiction.  The immediately preceding healthy memory blog post did state that people have a more difficult time following plots in electronic media.  My experience is just the opposite: I prefer my iPad for fiction.  One of my primary motivations for moving to electronic media is logistical.  I no longer have adequate bookcases for shelving.  That, plus the ease of carrying an electronic library with me, strongly motivates me, but apparently most students still prefer schlepping their books in backpacks.  The more I use electronic media, the more accessible it becomes.  And I am fairly confident that electronic books in the future will develop features that make them even easier to use.

The Post article indicated that millennials tend to skim electronic media.  Apparently the vast amount of material on the web causes people to skim, so they have developed bad habits.  I found this alarming, as the nature of the medium should not determine how fast one reads.  Rather, the nature and difficulty of the content should determine reading speed, so that one is processing the material to its appropriate depth.  And, when necessary, material should be reread.  I get a charge out of speed reading courses that promise reading speeds of x words per minute.  These promised speeds need to account for the nature of the material being read.  There is material that, no matter how slowly I read, I am unable to comprehend.  So here are my words of advice from a Baby Boomer to all Millennials: regardless of the medium, adjust your reading speed to achieve the level of comprehension you want to achieve.

Ipad for Transactive Memory

March 25, 2015

Remember that transactive memory consists of all memory that is resident outside of ourselves. So memories held by our fellow beings are part of transactive memory. Memories resident in technology, be it paper or electronic, are all types of transactive memory. Unfortunately, one of my many shortcomings is my lack of systems for organizing my information. I have articles I stored as a graduate student that I have kept in boxes and moved along with me whenever I moved. Unfortunately, the probability that I will ever find them again is close to nil. We are currently living in temporary quarters while my home is being remodeled. The remodeling will provide more space and bookshelves. These are much needed, because there were times when I could not find a book I had read, but knew it had information I needed to review. In these cases it was frequently more expedient to reorder the book from Amazon.

I was excited by the invention of the Kindle and other electronic readers. I purchased a Kindle and liked it. It was especially useful for cruises, as I did not have to pack so many books. Nevertheless, I found the display to be too small, so I used it sparingly. My recent purchase of an iPad eliminated the display size problem, but initially I did have problems regarding the logic of the interface. Several consultations with Apple Geniuses solved these problems, and I am now a most satisfied user even though I use it primarily as a reader. An earlier post related my experiences using it at the APA convention (see the healthymemory blog post “Attendance at the 2014 Convention of the American Psychological Association”). Frankly, I find it easier doing email and writing with my laptop. The potential of the iPad is large, but it is unlikely that I shall avail myself of most of it.

From now on electronic versions of most written material will be preferred. Most books will be purchased on Amazon and downloaded to the Kindle app on my iPad. The iPad mitigates many logistical problems and provides an easy way of accessing information. I am still in a learning process, and my appreciation of the iPad as a device for transactive memory is growing.

The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind

March 4, 2015

When I saw this title, I knew immediately that I had to read it.  Now that I have read it, I am certainly not disappointed.  This is one of the most interesting books I have read, and I have read many interesting books.  This book was written by Dr. Michio Kaku, a Professor of Theoretical Physics at the City University of New York.  Dr. Kaku has a brilliant mind and has written a brilliant book.

In the lingo of the healthy memory blog, this book deals with transactive memory, how technology and other humans can enhance our minds.  I shall be basing some future posts on chapters from this book, but there is no way I can even begin to do it justice.  So I strongly recommend reading the entire book on your own.  The book is divided into three books:  Book I is titled “The Mind and Consciousness,”  Book II, “Mind Over Matter,” and Book III, “Altered Consciousness.”

Consciousness is presented from a physicist’s viewpoint.  Even though I am a psychologist, I find much to like in this physicist’s viewpoint.  There definitely will be a future post on his viewpoint of consciousness.

Chapters in Book II are titled “Telepathy,” “Telekinesis:  Mind Controlling Matter,” “Memories and Thoughts Made to Order,” and “Einstein’s Brain and Enhancing Our Intelligence.”  Do not be put off by some of these chapter titles.  They are not dealing with the supernatural.  Rather they are dealing with technology that achieves these ends.  Everything Dr. Kaku writes is based on and bounded by physics.

Chapter titles in Book III include “In Your Dreams,” “Can the Mind Be Controlled,” “Altered States of Consciousness,” “The Artificial Mind and Silicon Consciousness,” “Reverse Engineering the Brain,” “The Future:  Mind Beyond Matter,” “The Mind is Pure Energy,” and “The Alien Mind.”  I’ve long been perplexed as to how Kurzweil plans to upload his mind to silicon to achieve the Singularity.  Dr. Kaku explains how this might be done, but it does not involve silicon.  Everything proposed in these chapters is based on sound theoretical physics.  As Dr. Kaku notes, the problems involve engineering, and the engineering tasks are quite formidable indeed.  I am especially appreciative of his ideas on the alien mind.  I’ve had my fill of unbelievable anthropomorphic aliens.

An appendix on Quantum Consciousness is also included.

My only complaint regards the failure of Dr. Kaku to note that there are corpses of individuals whose brains were filled with the telltale amyloid plaques and neurofibrillary tangles of Alzheimer’s, yet who never exhibited any of the symptoms of Alzheimer’s while they lived.  So it appears that, at best, amyloid plaques and neurofibrillary tangles are a necessary, but not a sufficient, condition for Alzheimer’s.

Memory Health and Technology

January 18, 2015

Memory Health and Technology is the subtitle for this blog. One of the primary themes of this blog is that we are not victims of technology. Rather, technology provides a means for cognitive development and growth throughout the entire lifespan. Thirteen of the previous fourteen posts were based on Daniel J. Levitin’s The Organized Mind: Thinking Straight in the Age of Information Overload (the odd post was on tips for fulfilling New Year’s resolutions). The reason for this is that the book directly addresses the goals of the healthy memory blog. It was not possible for my posts to do justice to the entire book, so I would recommend reading the book itself.

Another outstanding book that addresses the goals of the healthymemory blog is The Distraction Addiction by Alex Soojung-Kim Pang. I strongly recommend this book. If you do not read the book, I urge you at least to read the healthymemory blog posts based on the book. You can find these posts by entering “contemplative computing” into the healthymemory blog search box.

Everything Else: The Power of the Junk Drawer

January 14, 2015

The final chapter in Levitin’s The Organized Mind: Thinking Straight in the Age of Information Overload is “Everything Else: The Power of the Junk Drawer.” He begins by reiterating the most fundamental principle of organization, the one that is most critical to keeping us from forgetting or losing things: “Shift the burden of organizing from our brains to the external world.” If we can take some or all of the process out of our brains and put it into the physical world, we are less likely to make mistakes. But the organized mind enables you to do much more than merely avoid mistakes. It enables you to do things and to go places you might not otherwise imagine. Externalizing information doesn’t always involve writing it down or encoding it in some external medium. Often it has already been done for you; you just have to know how to read the signs.

Levitin then uses the example of the numbering of the U.S. Interstate Highway System. Frankly, I had only understood a portion of this numbering system and not the whole system, so I learned something here. It is quite ingenious.

He then goes on to discuss the periodic table of the elements. This ingenious organization of the chemical elements has led to the discovery of new elements. Moreover, given this ingenious organization, there are already defined places in which they fit.

Next he discusses mnemonic systems for remembering names. You can find the technique he discusses in the healthymemory blog post “Remembering the Names of People.” (Use the healthymemory blog search block to access it).

Now the power of the junk drawer can be found in browsing and serendipity. Browsing should be slow and leisurely. You need to be able to assess the content and potential value of what you are browsing. The reward might be the very real phenomenon of serendipity, in which you discover something valuable that was not the objective of your original search. I suppose we can leisurely browse with the objective of some serendipitous finding. There are a number of websites that allow us to discover content (new websites, photos, videos, music). Try one; you just might experience a serendipitous finding.

What to Teach Our Children

January 11, 2015

The penultimate chapter in Levitin’s The Organized Mind: Thinking Straight in the Age of Information Overload is titled “What to Teach Our Children.” He considers the world into which today’s children are born in contrast to the world in which we older adults were born. In that world information was both hard and slow to come by. In today’s world, by contrast, information is much easier to come by. But although vast amounts of information are easily and quickly accessible, this can make finding the exact information needed difficult. And there is the question of assessing the veracity of the information. I would wager that today the most commonly used encyclopedia is the Wikipedia, but anyone can make an entry in the Wikipedia. The vetting process is that an entry can be corrected or elaborated, but this process can itself introduce errors, and the original author can reintroduce the original error. Nevertheless, the Wikipedia works pretty well and I am a frequent user, although I always try to keep these caveats in mind.

Levitin recommends comprehensive instruction in critical thinking for our children, and I would add for ourselves as well, for the process of critical thinking should not end, but should continue as long as we live. So children should be taught how to think critically about an article. They should also consider sources of possible bias. Some journals and websites do make an effort to identify political sources as being conservative or liberal and might even go on to assess the extremity of the political belief. Of course, political leanings are not the only source of bias; there are also religious biases, academic biases, and even strongly held biases within different fields of endeavor. For healthymemory blog posts on critical thinking, enter “critical thinking” into the healthymemory blog search box.

Levitin also recommends understanding orders of magnitude to aid in understanding how large or how small an object or quantity is. Basically, an order-of-magnitude estimate is an estimate of how many zeroes are in the answer. Suppose you were asked how many tablespoons of water are in a cup of water. “Power of ten” estimates are 1, 10, 100, 1,000, and so on. There are also fractional powers of ten, such as 1/10, 1/100, and 1/1000. These estimates help us understand the magnitude of the quantity under consideration.
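The idea can be sketched in a few lines of Python (a minimal illustration of my own, not from the book; the figure of 16 tablespoons per U.S. cup is a standard kitchen conversion):

```python
import math

def order_of_magnitude(x):
    """Estimate how many zeroes are in the answer: the nearest power-of-ten exponent."""
    return round(math.log10(x))

# A U.S. cup holds 16 tablespoons, so the answer is on the order of 10 (one zero).
print(order_of_magnitude(16))    # 1
# Fractional quantities get negative exponents: 0.03 is on the order of 1/100.
print(order_of_magnitude(0.03))  # -2
```

The exponent is all an order-of-magnitude estimate keeps; 16 and 40 both count as “on the order of 10.”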

Enrico Fermi was a physicist famous for making estimates with little or no actual data. This involves sophisticated approximating, sometimes called guesstimating. Regardless of its name, it is an important creative thinking skill. Examples of Fermi problems are “How many basketballs will fit into a bus?,” “How many Reese’s Peanut Butter Cups would it take to encircle the globe at the equator?,” and “How many piano tuners are there in Chicago?” Here is a four step solution to the last problem.

  1. How often are pianos tuned (How many times per year is a given piano tuned?)

  2. How long does it take to tune a piano?

  3. How many average hours a year does an average piano tuner work?

  4. How many pianos are in Chicago?

One can find the answers to these questions and come up with an approximate answer. Then one can criticize this analysis and propose a different solution. This is a good exercise for developing both creative and critical thinking.
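The four steps above can be turned into arithmetic. Every input below is an assumed round figure chosen only to illustrate the method, not data from the book:

```python
# Fermi estimate: how many piano tuners are there in Chicago?
# All inputs are assumed round numbers for illustration.
chicago_population = 2_500_000
people_per_household = 2             # ~1.25 million households
piano_fraction = 1 / 20              # assume 1 household in 20 has a piano
tunings_per_piano_per_year = 1       # step 1: how often pianos are tuned
hours_per_tuning = 2                 # step 2: time per tuning, with travel
tuner_hours_per_year = 40 * 50       # step 3: a full-time work year

# Step 4: pianos in Chicago, then demand versus one tuner's capacity.
pianos = chicago_population / people_per_household * piano_fraction
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tuner_hours_per_year / hours_per_tuning
print(tunings_needed / tunings_per_tuner)  # 62.5 — on the order of 60 tuners
```

Criticizing any of these assumed inputs and rerunning the arithmetic is exactly the critique-and-revise exercise described above.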

Organizing the Business World

January 7, 2015

“Organizing the Business World” is another chapter in Levitin’s The Organized Mind: Thinking Straight in the Age of Information Overload. It provides a nice historical overview of how organizations have developed, drilling down to the technologies of organization, such as filing systems. There is a large amount of material, and I am going to attempt to focus on the portions that I think will be of special interest to readers of the healthymemory blog.

One of these topics of interest involves Area 47 in the lateral prefrontal cortex. It is an area no larger than a pinky finger that contains prediction circuits, which it uses in conjunction with memory to form projections about future states of events. If we can predict some, but not all, aspects of how a job will go, we find it rewarding. However, if we can predict all aspects of the job, down to the tiniest minutiae, it tends to be boring, because there is nothing new and no opportunity to apply discretion and judgment. Opportunities to apply discretion and judgment have been identified by management consultants and the U.S. Army as components of finding one’s work meaningful and satisfying. If some, but not too many, aspects of the job are surprising in interesting ways, this can lead to a sense of discovery and self-growth. Levitin writes that finding the right balance to keep Area 47 happy is tricky, but that most job satisfaction comes from a combination of these two. We function best when we are under some constraints and are allowed to exercise individual creativity within those constraints.

Levitin discusses the toxic consequences of negative leadership that can result in the collapse of companies or the loss of reputation and resources. He notes that this is often the result of self-centered attitudes, a lack of empathy for others within the organization, and a lack of concern with the organization’s long-term health. The U.S. Army has recognized this in both military and civilian organizations: “Toxic leaders consistently use dysfunctional behaviors to deceive, intimidate, coerce or unfairly punish to get what they want for themselves.” The latest version of the U.S. Army’s Mission Command manual outlines five principles that are shared by commanders and top executives in the most successful multinational businesses:

  • Build cohesive teams through mutual trust

  • Create shared understanding

  • Provide a clear and concise set of expectations and goals

  • Allow workers at all levels to exercise disciplined initiative

  • Accept prudent risks

Levitin returns to multi-tasking in this chapter. He notes that we do not multi-task. Rather, what we do is rapidly switch our attention from task to task. Consequently two bad things happen: we don’t devote enough attention to any one thing, and we decrease the quality of attention applied to any one task. Doing one task at a time results in beneficial changes in the brain’s daydreaming network and increased connectivity. He notes that, “Among other things, this is believed to be protective against Alzheimer’s disease. Older adults who engaged in five one-hour training sessions on attentional control began to show brain activity patterns that more closely resembled those of younger adults.”

So people should not be forced to multi-task. But why, then, do we multi-task ourselves? Levitin attributes this to a cognitive illusion that sets in, fueled in part by a dopamine-adrenaline feedback loop, in which multi-taskers think they are doing great. Levitin writes that we are Balkanizing the vast resources of our prefrontal cortices, which were honed over tens of thousands of years of evolution to stay on task. He further writes, “This stay-on-task mode is what gave us the pyramids, mathematics, great cities, literature, art, music, penicillin, and rockets to the moon. Those kinds of discoveries cannot be made in fragmented two-minute increments.”

He notes that the companies that are winning the productivity battle are those that allow their employees productivity hours, naps, a chance for exercise, and a calm, tranquil, orderly environment in which to do their work. Research has found that productivity goes up when the number of hours of work per week goes down.

Organizing Information for the Hardest Decisions

January 4, 2015

“Organizing Information for the Hardest Decisions” is a chapter in Levitin’s The Organized Mind: Thinking Straight in the Age of Information Overload. The primary focus of this chapter is on medical decisions. The subtitle is “When Life Is on the Line,” but Levitin structures the decision making in terms of probabilities and statistics. Given that we live in a world of uncertainty and constantly deal with probabilistic information, whether we realize it or not, the advice in this chapter can be applied to the vast majority of decisions we need to make.

Levitin begins by discussing objective probabilities, probabilities regarding things we can count. For example, what is the probability of drawing the ace of spades out of a pack of 52 playing cards? This can be computed by dividing the number of aces of spades in a legitimate pack of playing cards (one) by the total number of cards in the deck: 1/52 = 0.019. The probability of drawing any ace from the deck requires that this 0.019 probability be multiplied by 4, as there are four aces in a deck of cards, to yield approximately 0.077.

Or consider the probability of rolling a six on a fair die. As there are six sides to a die, the probability would be 1/6 = 0.167. To compute the probability of rolling two sixes, we multiply this probability by itself to get 0.028.

To compute the likelihood of winning a pick-style lottery, you divide 1 by the number of possible numbers in the lottery: 1/1,000 = 0.001 for a pick three, and for larger lotteries 1/10,000 = 0.0001, 1/100,000 = 0.00001, and 1/1,000,000 = 0.000001.
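These card, die, and lottery figures can be checked directly. A short sketch using Python’s exact fractions (my own illustration, not from the book):

```python
from fractions import Fraction

ace_of_spades = Fraction(1, 52)    # one ace of spades among 52 cards
any_ace = 4 * ace_of_spades        # four aces in the deck
two_sixes = Fraction(1, 6) ** 2    # independent rolls multiply
pick_three = Fraction(1, 10) ** 3  # three digits, each one of ten

print(float(ace_of_spades))  # ≈ 0.019
print(float(any_ace))        # ≈ 0.077
print(float(two_sixes))      # ≈ 0.028
print(float(pick_three))     # 0.001
```

Using `Fraction` keeps the counting explicit: every probability here is just favorable outcomes over total outcomes.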

Remember that in these rollover lotteries, where the winnings can assume astronomical amounts, the reward will be shared among the winners. Moreover, very often the prize is paid out over time, which effectively reduces the amount of the winnings. I remember one of these times when the prize had reached some astronomical amount and people were waiting in line for hours just to buy a ticket. When one woman was asked what she thought her chances of winning were, she answered, “about fifty-fifty.” Although she might represent an extreme case, few people can understand these extremely low odds. If they could, they would not waste their money, let alone the effort and time. Nevertheless, it keeps a fantasy alive for some. There is a term for this phenomenon: denominator neglect. People ignore the magnitude of the denominator when evaluating a risk or a bet.

There is an error most people commit when dealing with objective probabilities that is known as the gambler’s fallacy. This stems from a failure to appreciate how random random really is. For example, when people are asked to write down what they think a random sequence of 100 coin tosses would look like, rarely will anyone put down runs of seven heads or tails in a row, even though there is a greater than 50% chance that such a run will occur in 100 tosses. Statisticians have argued that there is no such thing as a hot hand in basketball or other sports, because hot streaks are likely simply as a matter of random chance. The gambler’s fallacy is related to the notion that something is “due.” For example, if a fair coin is tossed five times and comes up heads five times, people will think that the sixth toss will be tails because it is “due.” But each of these coin tosses is independent, so the probability that the sixth toss will be a tail is 50%, just as it was for the first toss. It is true that the probability of six straight heads is 0.016, but that does not change the odds on the sixth toss.

The preceding were objective, computable probabilities. Whenever possible and relevant, you should be familiar with them or compute them. However, we must also deal with subjective probabilities. Subjective probabilities are estimates, or guesses, regarding the likelihood of particular events or outcomes. We need to deal with these subjective estimates all the time. For example, how likely is it to rain? How likely is it that I could get a job offer? What is the probability that my car will break down? What is the probability that I’ll miss my flight? I hope when you do this you are better calibrated than the lady who thinks she has a 50/50 chance of winning the lottery. And you need to combine these subjective probability estimates with respect to both favorable and unfavorable outcomes.

Levitin divides decisions into the following four categories:

  1. Decisions that you can make right now because the answer is obvious. (Here I would add that it is a good idea to do a mental check to see if you are overlooking any relevant information or risks. In retrospect you might find a risk that was obvious but initially overlooked.)

  2. Decisions you can delegate to someone else (your spouse, perhaps?) who has more time or expertise than you do.

  3. Decisions for which you have all the relevant information but for which you need some time to process or digest that information. This is frequently what judges do in difficult cases. It’s not that they don’t have the information—it’s that they want to mull over the various angles and consider the larger picture. It’s good to attach a deadline to these.

  4. Decisions for which you need more information. At this point, either you enlist a helper to obtain that information or you make a note to yourself that you need to obtain it. It’s good to attach a deadline in either case, even if it’s arbitrary, so that you can cross this off your list.

Much medical decision-making, particularly for important medical decisions, falls into category 4. You need more information. Doctors can provide some of it, but doctors have their own biases and are usually poor at computing or expressing probabilities. Moreover, much of this information is wrong (see the healthymemory blog post “Most Published Research Findings are False”). If you read that post you should remember that many doctors cannot tell a woman who has tested positive the probability that she actually has cancer. The probability is still only 10%. The reason for this is that the base rate of cancer is quite low, so many mammograms result in false positives. If you have read that blog post you should also realize that the successes of cancer screening are reported via cancer survival rates; there has been no analogous improvement in mortality rates. When making decisions you should not overlook the option of doing nothing. Ignoring base rates is an all too common human fallacy, so determining accurate base rates is critical to many decisions.

Making decisions regarding conditional probabilities involves using Bayes’ Theorem. Levitin provides a simple example that can be used as a template. That example follows.

Suppose that you take a blood test for a hypothetical disease, blurritis, and it comes back positive. However, the cure for blurritis is a medication called chlorohydroxelene that has a 5% chance of serious side effects, including a terrible, irreversible itching in just the part of your back that you can’t reach. Five percent doesn’t seem like a big chance, and you might be willing to take it to get rid of this blurry vision.

Here is the available information.

The base rate for blurritis is 1 in 10,000 or .0001.

Chlorohydroxelene use ends in an unwanted side effect 5% of the time or .05.

What we need to know is the accuracy of the test with respect to two measures:

The percentage of the time the test falsely indicates the presence of the disease, called a false positive.

The percentage of the time that it fails to indicate the presence of the disease when the disease is present, called a false negative.

Draw a table of two rows and two columns, a fourfold table.

The columns represent the test results, positive or negative.

The rows represent the presence of the disease, Yes or No.

There are test results for 10,000 people. The one person who actually has the disease tests positive, and there are no negative tests among people who have the disease. So in the “Yes” row of the table there is a 1 in the left (positive) cell and a 0 in the right (negative) cell.

In the “No” row there are 200 positive tests and 9,799 negative tests.

So to determine the probability that you have the disease, you add up the total positive test results and find that there is only a 1-in-201 chance, about 0.5%, that you have the disease. So there is a 99.5% chance that you do not have the disease. Levitin provides an appendix in the book elaborating on the development and use of these fourfold tables. They are absolutely essential when conditional probabilities are involved, and conditional probabilities are always involved in medical tests. No medical test is infallible, and it is important to have data regarding both false positives and false negatives.
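The fourfold-table arithmetic above can be sketched in a few lines of Python. This is a minimal illustration of my own, not anything from Levitin's book; the function name is hypothetical, and the 2% false positive rate is inferred from the roughly 200 false positives among the nearly 10,000 disease-free people in the example (the example's false negative rate is effectively zero).

```python
# Bayes' theorem via a fourfold (2x2) table, using the blurritus example's numbers.
def p_disease_given_positive(base_rate, false_pos_rate, false_neg_rate, n=10_000):
    diseased = n * base_rate                    # the "Yes" row of the table
    healthy = n - diseased                      # the "No" row
    true_pos = diseased * (1 - false_neg_rate)  # diseased people who test positive
    false_pos = healthy * false_pos_rate        # healthy people who test positive
    return true_pos / (true_pos + false_pos)    # P(disease | positive test)

p = p_disease_given_positive(base_rate=0.0001, false_pos_rate=0.02, false_neg_rate=0.0)
print(f"{p:.2%}")  # about 0.50%, i.e., roughly 1 in 201
```

Plugging in a higher base rate shows how strongly the answer depends on it, which is exactly why ignoring base rates is so dangerous.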

Biopsies provide a good example of the fallibility of medical tests. They involve subjective judgment in what is basically a “does it look funny?” test. The pathologist or histologist examines a sample under a microscope and notes any regions of the sample that, in her judgment, are not normal. She then counts the number of regions and considers them as a proportion of the entire sample. The pathology report might say something like “5% of the sample had abnormal cells,” or “carcinoma noted in 50% of the sample.” Pathologists often disagree about the analysis and even assign different grades of cancer for the same sample. So always get, at least, a second opinion on your biopsy.

These medical decisions are examples of making decisions on the basis of expected values and expected costs, a method that can be generally applied. Suppose you need to decide whether you should pay to park your car. Suppose that the parking lot charges $20, the cost of a parking ticket is $50, and there is only a 25% chance that you’ll get a ticket. The expected value of paying for parking is a 100% chance of losing $20 (-$20). Not paying for parking has a 25% chance of losing $50 (-$12.50). So for today the smart money says do not pay for parking (excuse me for avoiding the ethical problem of disobeying the law and inconveniencing workers by parking in a loading zone; this is only an illustrative example).
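The expected-value comparison can be sketched in Python as well. This is just an illustrative snippet of my own (the function name is hypothetical), using a 25% ticket probability, the probability consistent with the $12.50 expected cost in the example:

```python
# Compare the expected cost of paying for parking vs. risking a ticket.
def expected_cost(outcomes):
    # outcomes is a list of (probability, cost) pairs; probabilities should sum to 1
    return sum(p * cost for p, cost in outcomes)

pay = expected_cost([(1.0, -20)])               # certain loss: the $20 lot fee
risk = expected_cost([(0.25, -50), (0.75, 0)])  # 25% chance of a $50 ticket
print(pay, risk)  # -20.0 -12.5
```

Since a $12.50 expected loss is smaller than a certain $20 loss, the expected values favor not paying, just as in the text.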

What is current in the news is a big problem, because we often lack information regarding the frequency of events and confuse the frequency and urgency of reporting with the actual frequency of occurrences. One of the best examples of this occurred after the 9/11 tragedy, when many people decided to drive rather than fly. As driving is more dangerous than travel by commercial aviation, this change in modes of transportation resulted in an increase in deaths. People are alarmed by crime and envision frequent shootouts between criminals and police. They feel a need to arm and protect themselves. Well, check the actual frequency of crime in your neighborhood rather than basing your judgment on the programs and news reports on television. You are probably much safer than you think you are. In contrast to what we see on television, it is my understanding that the majority of police retire from their jobs without ever having fired their weapons on duty. And guns are used in more suicides than in homicides, to say nothing of accidental shootings.

This post has probably been disturbing for many readers. Unfortunately, there is much missing information, there is much misinformation, and there are problems in accurately computing the probabilities needed to make decisions. It is hoped that this post will inform on what to worry about and what to ignore, on what questions to ask, and on how to combine probabilities to make decisions.

Creative Time

December 27, 2014

Creative Time is another section in the chapter Organizing Our Time in Daniel J. Levitin’s book The Organized Mind: Thinking Straight in the Age of Information Overload. The section begins with a discussion of creativity and insight. We’ll skip this as many posts were written about insight fairly recently. Then he moves on to the topic of flow. Although flow has been discussed previously in this blog, it is an important enough topic, and Levitin does provide some new information. Flow refers to the experience of getting wonderfully, blissfully lost in an activity, losing all track of time, of ourselves, and of our problems. Flow is the sixth principle of contemplative computing as formulated by Dr. Alex Soojung-Kim Pang in his book The Distraction Addiction (you can use the search box to find these posts). The phenomena of flow were identified and discussed by Mihaly Csikszentmihalyi (pronounced MEE-high CHEECH-sent-mee-high). Flow feels like a completely different state of being, a state of heightened awareness coupled with feelings of well-being and contentment. Flow states appear to activate the same regions of the brain, including the left prefrontal cortex and the basal ganglia. Two key regions deactivate during flow: the portion of the prefrontal cortex responsible for self-criticism, and the brain’s fear center, the amygdala.

Flow can occur during either the planning or the execution phase of an activity, but it is most often associated with the execution of a complex task, such as playing a solo on a musical instrument, writing an essay, or shooting baskets. A lack of distractibility characterizes flow. A second characteristic of flow is that we monitor our performance without the kinds of self-defeating negative judgments that often accompany creative work. When we’re not in flow, a nagging voice inside our head often says, “It’s not good enough.” In flow, a reassuring voice says, “We can fix that.”

Flow is a Goldilocks experience. The task cannot be too easy or too difficult; it has to be at just the right level. It takes less energy to be in flow than to be distracted. This is why flow states are characterized by great productivity and efficiency.

As mentioned earlier, flow is also a chemically different state, although the particular neurochemical soup has yet to be identified. There needs to be a balance of dopamine and noradrenaline, particularly as they are modulated in a brain region known as the striatum, the locus of the attentional switch; serotonin, for freedom to access stream-of-consciousness associations; and adrenaline, to stay focused and energized. GABA neurons that normally function to inhibit actions and help us exercise self-control need to reduce their activity so that we are not overly critical of ourselves, and so that we can be less inhibited in the generation of ideas.

Flow is not always good. If it becomes an addiction, it can be disruptive. And it can be socially disruptive if flow-ers withdraw from others.

Levitin goes on to describe how creative individuals and groups structure their environments and lives to enhance flow.

© Douglas Griffith and, 2014. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.


Procrastination

December 23, 2014

Procrastination is a section of the chapter Organizing Our Time in Levitin’s The Organized Mind: Thinking Straight in the Age of Information Overload. He begins this section by discussing the film producer Jake Eberts, whose films have received sixty-six Oscar nominations and seventeen Oscar wins. Eberts said that he had a short attention span, very little patience, and was easily bored. He might well have been diagnosed as having Attention Deficit Disorder. Here is how he conquered his problem. He adopted a strict policy of “do it now.” If he had a number of calls to make or things to attend to piling up, he’d dive right in, even if it cut into leisure or socializing time. Moreover, he’d do the most unpleasant task early in the morning to get it out of the way. He called this, following Mark Twain, eating the frog: do the most unpleasant thing first thing in the morning when gumption is highest, because willpower depletes as the day goes on.

At this point, nothing more needs to be written on procrastination. The preceding is the formula for dealing with it. Moreover, at bottom, procrastination is due to a lack of willpower, so it should be attacked when willpower has not yet been depleted, because exercising our willpower has the effect of depleting our willpower. We have finite amounts to spend that need to be replenished once they are depleted. So, if you have tasks you need to attend to, stop reading and attend to them now!

However, if you have nothing on your to-do list, or if your willpower has already been depleted, keep reading.

The brain region implicated in procrastination is the prefrontal cortex. People who suffer damage to the prefrontal cortex have problems with procrastination.

There are two types of procrastination. Some of us procrastinate in order to pursue restful activities. Some of us procrastinate certain difficult or unpleasant tasks in favor of those that are more fun or that have an immediate reward. Of course, many of us engage in both types of procrastination.

The organizational psychologist Piers Steel says that there are two underlying factors that lead us to procrastinate. One of those factors is our low tolerance for frustration. When choosing what tasks to undertake or activities to pursue, we tend to choose not the most rewarding activity, but the easiest. Consequently, unpleasant or difficult matters get put off. The second factor is an ego-protective mechanism. We tend to evaluate our self-worth in terms of our achievements. If we lack self-confidence in general, or fear that a particular project will not turn out well, we procrastinate because that allows us to delay putting our reputation on the line until later. In this context it is important to disconnect one’s sense of self-worth from the outcome of a task. Most successful people have had a long track record of failure, yet they persevered and succeeded. And even if you’re successful, part of the reason is likely a matter of luck; the cards happened to play your way this time.

There are also some people who have no problem starting tasks, but do not seem to be able to complete them. This situation is not necessarily bad, and technically this is not procrastination. If you find that starting a task was a mistake, there is no requirement to finish it. Indeed, it might be some type of compulsive neurosis to complete everything you have started. Of course, too many abandoned tasks might indicate that more consideration should have been given before starting them. However, some people do not finish tasks because they are perfectionists. Now striving for perfection is not necessarily bad, but striving to achieve the unattainable is. And the perfect can be the enemy of the good.

Sleep Time

December 21, 2014

Given that around one-third of our lives is spent sleeping, sleep must be considered for effective time management. I believe it’s a mistake to regard sleeping as wasted time and to work to keep the time we sleep to a minimum. I have a good friend who is quite proud to have gotten it down to four hours per night. I have never been able to understand why this is desirable. For me, sleeping is one of my favorite activities. Apart from being refreshing, I enjoy dreaming. We are able to slip the bounds of reality when we dream.

Levitin in his book The Organized Mind: Thinking Straight in the Age of Information Overload notes other reasons sleep is important. Newly acquired memories are initially unstable and require a process of neural strengthening to become resistant to interference and accessible to us for retrieval. Usually there are a variety of ways that an event can be contextualized. The brain has to toss and turn and analyze the experience after it happens, extracting and sorting information in complex ways.

Recent research has given us a better understanding of the different processes that are accomplished during distinct phases of sleep. New experiences become integrated into a more generalized and hierarchical representation of the outside world. Memory consolidation fine-tunes the neural circuits that first encountered the new experience. It has been argued that this occurs while we sleep because otherwise those circuits might be confused with an actually occurring experience. Moreover, all this consolidation does not occur during a single night. Rather, it unfolds over several sequential nights. Sleep that is disrupted even two or three days after an experience can disrupt our memories of it months or years later. Matthew Walker (from UC Berkeley) and Robert Stickgold (from Harvard Medical School) note three distinct kinds of information processing that occur during sleep.

The first is unitization, the combining of discrete elements or chunks of an experience into a unified concept. The second kind of information processing that takes place during sleep is assimilation. The brain integrates new information into the existing network structure of other items in memory. The third process is abstraction where hidden rules are discovered and entered into memory. Across a range of inferences involving not only language but mathematics, logic problems, and spatial reasoning, sleep enhances the formation and understanding of abstract relations to the extent that people often wake having solved a problem that was unsolvable the night before. Levitin writes that this might be part of the reason why young children just learning language sleep so much.

This kind of information consolidation happens all the time, but it happens more intensely for tasks in which we are intensely engaged. If you struggle for an hour or more during the day with a problem in which you have invested your focus, energy, and emotions, then it is ripe for replay and elaboration during sleep.

Sleep is also necessary for cellular housekeeping. Specific metabolic processes in the glymphatic system clear neural pathways of potentially toxic waste products that are produced during waking thought.

Parts of the brain sleep while others do not. Sometimes we are either half-asleep or sleeping only lightly. Sometimes people experience a brain freeze, being momentarily unable to remember something obvious. Should we find ourselves doing something silly, such as putting orange juice on cereal, it might be that part of the brain is asleep.

Levitin likens the sleep-wake cycle to a thermostat. Sleep is governed by neural switches that follow a homeostatic process and are influenced by our circadian rhythm, food intake, blood sugar level, the condition of the immune system, stress, sunlight, and darkness. When our homeostat increases above a certain point, it triggers the release of neurohormones that induce sleep. When the homeostat decreases below a certain point, a separate set of neurohormones is released to induce wakefulness.

Our current cycle of 6 to 8 hours of sleep followed by 16 to 18 hours of wakefulness is relatively new, according to Levitin. He writes that for most of human history, our ancestors engaged in two rounds of sleep, called segmented or bimodal sleep, in addition to an afternoon nap. The first round of sleep would occur for four or five hours after dinner, followed by an awake period of one or more hours in the middle of the night, followed by a second period of four or five hours of sleep. He notes that bimodal sleep appears to be a biological norm that was subverted by the invention of artificial light. He writes that there is scientific evidence that the bimodal sleep plus nap regime is healthier and promotes greater life satisfaction and efficiency.

Admittedly, it would be difficult for most of us to accommodate this bimodal sleep regime. Do what works for you and fits into your requirements. Do not overlook the beneficial effects of naps, even very short ones. And stay away from sleep medications, which can do more harm than good. Should you have difficulty falling asleep, the worst thing you can do is to get upset about it. Relax. Try meditating on a word or phrase. If you have difficulty attending to the phrase, just relax and gently bring your attention back to meditating. If you are having pleasant thoughts or memories, just go with the flow. Remember that parts of the brain might be sleeping while other parts remain awake, so don’t panic. Be patient. You might be getting more sleep than you think you are getting.

In closing, Levitin notes that sleep deprivation is estimated to cost U.S. businesses more than $150 billion a year in absences, accidents, and lost productivity. It’s also associated with increased risk for heart disease, obesity, stroke, and cancer. So sleep is important. Don’t shortchange yourself. If you have a chronic problem sleeping, seek professional help.

Organizing Our Time When Multi-Tasking Is Required

December 17, 2014

Previous healthymemory blog posts have discussed the costs of multi-tasking. Overall task performance suffers, and there are additional costs entailed in switching between tasks. Nevertheless, there are times when some type of multitasking is unavoidable, and they are discussed in the Organizing Our Time chapter in Daniel J. Levitin’s book The Organized Mind: Thinking Straight in the Age of Information Overload. For example, creative solutions often arise from allowing a sequence of alternations between dedicated focus and daydreaming. Moreover, the brain’s arousal system has a novelty bias such that its attention can be easily hijacked by something new. Levitin maintains that humans will work just as hard to obtain a novel experience as we do to get a meal or a mate. The difficulty we have when trying to focus among competing activities is that the very brain region we rely on for staying on task is easily distracted by new stimuli, to the detriment of the prefrontal cortex that wants to stay on task and gain the rewards of sustained effort and attention. We need to train ourselves to go for the long reward and forgo the short one. Remember that the awareness of an unread email sitting in your inbox can effectively lower your IQ by as much as 10 points, and that multitasking causes information you want to learn to be directed to the wrong part of the brain.

Both our experience and research tell us that if we have chores to do, we should put similar chores together. So if you have bills to pay, just pay the bills; don’t do anything else. Stay focused and maintain a single attentional set until the task is completed. Organizing our mental resources efficiently means providing slots in our schedules where we can maintain an attentional set for an extended period.

Performing most tasks requires flexible thinking and adaptiveness. The prefrontal cortex gives us the flexibility to change behavior based on context. The prefrontal cortex is necessary for adaptive strategies for daily life be it foraging for food on the savanna or living in skyscrapers in the city.

To reach our goals efficiently requires us to selectively focus on the features of a task that are most relevant to its completion, ignoring other features in the environment that are competing for our attention. What distinguishes experts from novices is that experts know which features are important and require attention.

We encode information in meaningful chunks. To manage our time efficiently we must organize and segment what we see and do into chunks of activity. Levitin uses Superman to illustrate this point. He might tell Lois Lane, “I’m off to save the world, honey,” but what he tells himself is the laundry list of chunked tasks that need to be done to accomplish that goal, each with a well-defined beginning and ending. (1. Capture Lex Luthor. 2. Dispose of kryptonite safely. 3. Hurl ticking time bomb into outer space. 4. Pick up clean cape from the dry cleaner.) Chunking performs two important functions. It renders large-scale projects doable by providing well-differentiated tasks, and it renders the experiences of our lives memorable by segmenting them into well-defined beginnings and endings. This allows memories to be stored and retrieved in manageable chunks.

The dedicated portion of our brains that partitions long events into chunks is in the prefrontal cortex. Hierarchies are created of this event segmentation without our thinking about them, and without our instructing our brains to make them. We can review these representations in our mind’s eye from either direction—from the top down, from large time scales to small, or from the bottom up, from small time scales to large. So we should use our prefrontal cortex to best advantage: avoid multi-tasking unless it is necessary, and then multi-task in a strategic manner.

Organizing Our Time

December 14, 2014

Organizing our time is another chapter in Daniel J. Levitin’s book The Organized Mind: Thinking Straight in the Age of Information Overload. This chapter is so rich and has so much information that I want to share with you that it will take multiple draft posts, which still will not fully do justice to this chapter.

The first thing to realize about time is that it is an illusion, a creation of our minds, as is color. There is no color in the physical world, just light of different wavelengths reflecting off objects. Newton said the light waves themselves are colorless. Our sense of color is the result of the visual cortex processing these wavelengths and interpreting them as color. Similarly, time can be thought of as an interpretation that our brains impose on our experience of the world. We experience the sun rising and setting. We feel hungry at certain times and sleep at other times. The moon goes through a series of phases approximately monthly. Seasons are experienced at even larger intervals, then recycle again.

I have long been puzzled as to why there are 24 hours in a day. As the Earth makes a complete circle of 360 degrees, I would have thought that there would be 36 hours in a day. Apparently this division into 24 hours is due to the ancient Egyptians, who divided daytime into 10 parts, then added an hour for each of the ambiguous periods of twilight to achieve 12 parts. There were also 12 corresponding parts for nighttime, yielding a 24-hour day. Then it was the Greeks, following the lead of the mathematician Eratosthenes, who had divided the circle into sixty parts for an early cartographic system representing latitudes, who divided the hour into sixty minutes and the minute into sixty seconds. Still, time was kept at local levels until the advent of the railroad, which needed accurate timekeeping to avoid collisions. The U.S. railroads standardized time in 1883, but the United States Congress didn’t make it into law until 35 years later.

As for organizing our time, that is a function of the prefrontal cortex. We have a more highly developed prefrontal cortex than any other species. The prefrontal cortex is the seat of logic, analysis, problem solving, exercising good judgment, planning for the future, and decision-making. Unfortunately, our prefrontal cortex is not fully mature until we are well into our twenties, so there is time, perhaps even too much time, in which to make poor decisions. Not surprisingly, the prefrontal cortex is frequently called the central executive, or CEO, of the brain. There are extensive two-way connections between the prefrontal cortex and virtually every other region of the brain, so it is in a unique position to schedule, monitor, manage, and manipulate almost every activity we undertake. These cerebral CEOs are highly paid in metabolic currency. Clearly, understanding how they work and how they get paid can help us to use our time more effectively.

It might be surprising to learn that most of prefrontal cortex’s connections to other brain regions are not excitatory, but inhibitory. One of the greatest achievements of the human prefrontal cortex is that it provides impulse control and the ability to delay gratification. Without this impulse control, it is unlikely that civilizations would have developed. And I can’t help speculating how there might be fewer wars, crime, and substance abuse if the prefrontal cortex were more fully engaged.

When the prefrontal cortex becomes damaged, it leads to a medical condition called dysexecutive syndrome. Under this condition there is no control of time. Even the ability to perform the correct sequence of actions in the preparation of a meal is impaired. It is also frequently accompanied by an utter lack of inhibition for a range of behaviors, especially in social settings. Sufferers might blurt out inappropriate remarks, or go on binges of gambling, drinking, and sexual activity with inappropriate partners. They tend to act on what is in front of them. If they see someone moving, they are likely to imitate them. If they see an object, they tend to pick it up and use it. Obviously this disorder wreaks havoc with organizing time. If your inhibitions are reduced and you have difficulty seeing the future consequences of your actions, you might do things now that you regret later, or make it difficult to complete projects you’re working on. As for organizing your time, engage your prefrontal cortex, and take care of and protect it.

The prefrontal cortex is also important for creativity. It is important for making connections and associations between disparate thoughts and concepts. This is the region of the brain that is most active when creative artists are performing at their peak.

Levitin offers the following suggestion for seeing what it’s like to have damage to the prefrontal cortex. This damage is reversible, provided it is not done too often. His suggestion is to get drunk. Alcohol interferes with the ability of prefrontal cortex neurons to communicate with one another by disrupting dopamine receptors and blocking NMDA receptors, mimicking the damage seen in frontal lobe patients. Heavy drinkers experience a double whammy. They may lose certain control or motor coordination or the ability to drive safely, but they aren’t aware that they’ve lost these abilities or simply don’t care, so they forge ahead anyway.

Organizing Our Social World

December 10, 2014

“Organizing Our Social World” is the title of another chapter in Daniel J. Levitin’s book The Organized Mind: Thinking Straight in the Age of Information Overload. As I’ve mentioned in previous posts, when I completed my Ph.D. in cognitive psychology one of the leading problems was information overload, and that was in the era before personal computers. Now we have the internet, aided and abetted by mobile technology, so technology is omnipresent. It is apparent from this chapter that longstanding problems in social psychology and human interaction have been exacerbated by technology. I find it amazing when I see a group of four people dining together, each preoccupied with their smartphones. And when I attend professional meetings, where the objective is direct interaction between and among human beings, most people appear to be interacting with their smartphones.

The intention for social media is that they are not a replacement for personal contact, but a supplement that provides an easy way to stay connected to people who are too distant or too busy. Levitin hints that there might be an illusion to this, writing, “Social networking provides breadth but rarely depth, and in-person contact is what we crave, even if online contact seems to take away some of that craving… The cost of all our electronic connectedness appears to be that it limits our biological capacity to connect with other people.”

Lying and misrepresentation become a much larger problem in the online world. A hormone, oxytocin, has been identified with trust. It has been called the love hormone in the popular press because it is especially pronounced in sexual interactions. In one mundane type of experiment, research participants watch political speeches and rate for whom they would be likely to vote. The participants are under the influence of oxytocin for half the speeches. Of course they do not know when they are under the influence of the drug; they receive a placebo, an inert substance, for the other half of the speeches. When asked whom they would vote for or trust, the participants selected the candidates they viewed while oxytocin was in their systems. [To the best of my knowledge such techniques have yet to be used in an official election.]

Interestingly, levels of oxytocin also increase during gaps in social support or poor social functioning. Recent theory holds that oxytocin regulates the salience of social information and is capable of eliciting positive or negative social emotions, depending on the situation of the individual. In any case, these data support the importance of direct social contact by identifying biological components underlying this type of interaction.

I was surprised that little, if any, attention was spent on Facebook, the premier social medium. As I like to periodically rant regarding Facebook, and considerable time has passed since my last rant, I’ll try to fill in this lacuna. I detest Facebook, although I understand that many find it convenient for keeping in touch with many people with little effort. Apparently, businesses also find Facebook to be necessary and find it profitable. I use Facebook for a small number of contacts, but I am overwhelmed with notes of little interest. At the outset I did not want to refuse anyone friending me out of fear that this someone might be somebody I should remember but don’t. Similarly, I find it uncomfortable unfriending people, although at times that seems to be a better course of action. Perhaps there is some way of setting controls so that the number of messages is small and few people are offended, but I have no way of knowing what it is.

I find LinkedIn much more palatable and even useful. Still, one must regard endorsements and statements of expertise with some caution. That is, they are useful provided one looks for corroborating information. I like email and email with Listservs. However, I’ve learned that younger folks have developed some complicated and, in my view, unnecessary protocols for using email, texting, and social media. I’ll quit before I start sounding like even more of a cranky old man.

Organizing Our Homes

December 7, 2014

Organizing Our Homes is the title of a chapter in Daniel J. Levitin’s book The Organized Mind. His subtitle for this chapter is “Where Things Can Start to Get Better”. I am probably more in need of the information in this chapter than any of the healthymemory blog’s readers. But my problem is primarily motivational: although I know the systems for effective organization, I don’t implement them. Unfortunately, Levitin does not provide any motivational advice. Perhaps one day some disaster will occur that will provide me the motivation for implementing these practices. One practice I do strictly follow is to keep my most important items in the same place. Individual items might be in separate places, but each important item is always kept in the same place. When I travel or move to a new place, one of my first actions is to decide where these important items go. If I want to be sure to remember to take my umbrella on a given day, I place it in a conspicuous place on my way out. However, even this precautionary measure has sometimes failed.

Levitin uses a four-item system for remembering important items. Every time he leaves the house he checks that he has four things: keys, wallet, phone, and glasses. The number four is significant, as we are constrained by the number of items we can hold in our working or short-term memories. George Miller’s original number was 7 plus or minus 2. However, that number has shrunk over the years, and is currently down to four. If Levitin needs to remember something else before leaving the house, say to buy milk on the way home, he will either place an empty milk carton on the seat beside him in his car, or he’ll place the carton in his backpack. Of course a note would do, but some reminder is needed so the note will not be forgotten.

The problem of misplacing items and being unable to find them is ubiquitous. Levitin writes about Magnus Carlsen, the number one rated chess player in the world at age 23. Carlsen can keep ten games going at once in memory without looking at the boards, but he says, “I forget all kinds of other stuff. I regularly lose my credit cards, my mobile phones, keys, and so on.” Actually, all of these memory failures are the result of failing to attend to where the object is being placed. Moreover, enough attention needs to be devoted to the object so that its location will later be remembered.

Levitin also discusses the concept of affordances. The term is used in the sense that the environment affords, or invites, a particular action. One of the best examples of affordances is the plates or handles that are placed on doors. A plate affords pushing the door. A handle affords pulling it. Unfortunately, these affordances are frequently misapplied. For example, you try to push a door that has a handle on it and it does not move. Once I was following a lady out of a building. She pushed on a handle and then apologized for being stupid when the door did not move. I explained to her that she was not the stupid one. Instead it was the architect of the building or the installer of the door who was stupid. The renowned psychologist B.F. Skinner elaborated on these affordances. If you have letters to mail, put them near your car keys or house keys so that when you leave the house their affordance reminds you to take them. The goal is to off-load the information from your brain into the environment by using the environment itself to remind you of what needs to be done. So the idea is to use the environment as a type of transactive memory.

To people who argue that they are not detail-oriented, that they are creative types, or some such, Levitin provides some good counterexamples: Joni Mitchell, Stephen Stills, and John Lennon. Michael Jackson even had a person on his staff titled chief archivist. Organization was essential to these creative people.

Levitin provides these three general rules of organization.

      1. A mislabeled item or location is worse than an unlabeled item.

      2. If there is an existing standard, use it.

      3. Don’t keep what you can’t use.

Personally I have much difficulty with this third rule.

Levitin devotes a section to the digital home where he recommends organizing by devices, where special devices perform special tasks. He has another section on the storage of information in different types of media and the advantages and disadvantages of each. He notes, rather discouragingly, that digital files are rarely readable for more than ten years. He notes that within the spreadsheet Excel you can link any entry in a cell to a document on the computer. So financial documents for a given year could be in a PDF file linked to a cell in a spreadsheet.
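Levitin’s Excel trick (linking a cell to a document) can be approximated outside of Excel as well. As a minimal sketch, here is a hypothetical stdlib-only Python script that builds a small HTML index whose entries link to archived PDFs; the `build_index` helper and the file names are purely illustrative, not anything Levitin describes.

```python
from html import escape

def build_index(records):
    """Given {year: path-to-pdf}, return an HTML list linking each year
    to its archived document (a stand-in for Levitin's linked Excel cells)."""
    rows = "\n".join(
        f'<li><a href="{escape(path)}">{year} financial records</a></li>'
        for year, path in sorted(records.items())
    )
    return f"<ul>\n{rows}\n</ul>"

# Illustrative file names only.
page = build_index({
    2013: "archive/finances_2013.pdf",
    2014: "archive/finances_2014.pdf",
})
```

Opening the generated page in a browser then gives one-click access to each year’s records, the same off-loading of memory into the environment that the spreadsheet links provide.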

Above all, do not multitask while you are organizing. Levitin notes that just having the opportunity to multitask is detrimental to cognition. Glenn Wilson of Gresham College in London calls it infomania. His research demonstrated that being in a situation where you are trying to concentrate on a task while an email sits unread in your inbox can reduce your effective IQ by 10 points. He has also shown that the cognitive losses from multitasking are even greater than the cognitive losses from pot smoking.

A neuroscientist at Stanford, Russ Poldrack, found that learning information while multitasking causes the new information to go to the wrong part of the brain. The information goes into the striatum, a region specialized for storing new procedures and skills, not facts and ideas. Absent the distraction of TV, the information goes into the hippocampus, where it is organized and categorized in a number of ways so that it is easier to retrieve.

Moreover, there are metabolic costs to switching attention. Shifting the brain from one activity to another causes the prefrontal cortex and striatum to burn up oxygenated glucose, the same fuel needed to stay on task. The rapid, continual shifting when we multitask causes the brain to burn through fuel so quickly that we feel exhausted and disoriented after even a short time. We’ve literally depleted the nutrients in our brain, compromising both cognitive and physical performance. In addition, repeated task switching leads to anxiety, which raises levels of the stress hormone cortisol in the brain, which in turn can lead to aggressive and compulsive behavior. In contrast, staying on task is controlled by the anterior cingulate and the striatum, and once we engage the central executive mode, staying in that state uses less energy than multitasking and actually reduces the brain’s need for glucose.

One of the Biggest Advances in Neural Enhancement

December 3, 2014

In the introduction to The Organized Mind: Thinking Straight in the Age of Information Overload, Daniel J. Levitin mentions that one of the biggest advances in neural enhancement occurred only 5,000 years ago: the development of written language. This development took considerable time. First there were primitive notes taken to record items that were too important to be forgotten. Then there were likely primitive forms of accounts for transactions. Unfortunately, there are no records that I know of that can trace this development. In spite of writing being one of the biggest advances in neural enhancement, it was not immediately recognized as such, nor was it accepted as being beneficial by one of the foremost Greek philosophers of the time, Socrates. Socrates worried about what was lost in terms of vocal tone and expression, things that were present in speech or conversation but lost in written language. Fortunately the resistance of Socrates and others gave way, for written language is certainly a requirement for a civilization to advance.

In the terminology of the healthy memory blog, written language is an example of transactive memory. Transactive memory refers to information that is not recorded in one’s own biological memory, but is accessible from the memories of fellow human beings or from some artifact of technology. In this sense written language is a neural enhancement, and a very important one, as we are biologically constrained regarding the amount of information we can handle. Technology enables us to overcome evolutionary limitations, evolution being a very slow process.

Levitin writes that two of the most compelling properties of the human brain and its design are richness and associative access. Memory is rich in the sense that a large amount of information is in there. Associative access means that our thoughts can be accessed in a number of different ways by semantic or perceptual associations. So related words, smells, category names, or an old song can bring memories to our awareness. Even what are apparently random neural firings can bring them up to consciousness. Being able to access memories regardless of where they are located is called random access, like we experience with DVDs and hard drives, in contrast to data stored sequentially on a videotape.
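The random-versus-sequential distinction can be made concrete with a toy Python sketch (purely illustrative, not from Levitin): a list supports direct indexing, while a videotape-style search must wind through every item before the target.

```python
memories = ["first day of school", "graduation", "an old song"]

# Random access (DVD, hard drive): jump straight to any position in one step.
recalled = memories[2]

# Sequential access (videotape): wind through the items from the start,
# counting how many must be passed before the target is reached.
def sequential_steps(items, target):
    for steps, item in enumerate(items, start=1):
        if item == target:
            return steps
    return None  # target not on the tape
```

Retrieving the last memory costs one step by index but three steps sequentially; associative access in the brain behaves more like the former, with many different cues able to jump straight to a stored memory.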

The healthymemory blog likes to distinguish different types of associative access. Information that we know, or know where to find quickly, is termed accessible memory. Information that we know, but are not sure where to find, is termed available associative memory. Some Google searches, or more primitive forms of looking for information (old library card catalogs), are examples. Potential memory is all the information currently available in other human beings or in some type of artifact, be it a book, a database, or Wikipedia.

Given all this potential or available information in transactive memory, the problem becomes one of being able to access it quickly. Here the issue involves the organization of this information so that it can be more readily accessed. Levitin refers to this as conscientious organization. Systems are important as are different types of databases and search engines. More specifics will be found in the future chapters of The Organized Mind: Thinking Straight in the Age of Information Overload, some of which will be discussed in future healthymemory blog posts.

© Douglas Griffith and, 2014. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Memory, Attention, Consciousness

November 30, 2014

I’ve just begun reading The Organized Mind: Thinking Straight in the Age of Information Overload by Daniel J. Levitin. I’ve already realized that I should have read this book some time ago, and it is already clear that I am going to recommend it. Usually I do not recommend books until I’ve completed reading them, but I am making an exception in this case. It is already clear that much of the advice will involve transactive memory. Before proceeding with advice-providing posts, I feel compelled to write a post on memory, attention, and consciousness. These three topics are central to the healthymemory blog, and although Levitin does not necessarily provide new information, I think that his treatment of these topics deserves special consideration.

Here is how Levitin begins Chapter 2, on how memory and attention work: “We live in a world of illusions. We think we are aware of everything going on around us. We look out and see an uninterrupted picture of the visual world, composed of thousands of little detailed images. We may know that each of us has a blind spot, but we go on blissfully unaware of where it actually is because our occipital cortex does such a good job of filling in the missing information and hence hiding it from us.

“We attend to objects in the environment partly based on our will (we choose to pay attention to some things), partly based on an alert system that monitors our world for danger, and partly based on our brain’s own vagaries. Our brains come preconfigured to create categories and classifications of things automatically and without our conscious intervention. When the systems we’re trying to set up are in collision with the way our brain automatically categorizes things, we end up losing things, missing appointments, or forgetting to do things we needed to do.”

Regular readers of the healthymemory blog should know that memory is not a passive storage system for data. Rather it is dynamic, guiding our perception, helping us to deal with the present and project into the future. Fundamentally it is a machine for time travel. It is not static, but constantly changing, with the sometimes unfortunate consequence that we are highly confident of faulty recollections. Memories are the product of assemblies of neurons firing. New information, learning, is the result of new cell assemblies being formed. Neurons are living cells that can connect to each other, and they can connect to each other in trillions of different ways. The number of possible brain states that each of us can have is so large that it exceeds the number of known particles in the universe. (I once asked a physicist how they computed this number of known particles and he told me. I would pass this on to you had I not forgotten his answer.)

Attention is critical as there is way too much information to process. So we need to select the information to which we want to attend. Sometimes this selection process itself demands substantial attention. Moreover, switching attention requires attention, which only exacerbates attentional limitations when multitasking.

Consciousness has been explained as the conversation among these neurons. Levitin has offered the explanation that there are multiple different cell assemblies active at one time. Consciousness is the result of the selection of one of these cell assemblies. In other words, there are multiple trains of thought, and we must choose one of them to ride.

A critical question is how to employ our limited consciousness effectively. One way is the practice of mindfulness meditation to try to achieve a Zen-like focus of living in the moment. This can be accomplished through a regular meditation regimen. However, we should not neglect the short-term application of this mindfulness. We need to apply this Zen-like focus when putting things down (your keys, important items), so you’ll remember where you put them. Also do not neglect uses of transactive memory: put notes in planners, on calendars, or in your electronic device so you’re sure you’ll be able to access them.

Happy Thanksgiving 2014!

November 25, 2014

We, Homo sapiens, have much for which to be thankful. I often question whether we are worthy of our name. Nevertheless, we have much cognitive potential for which to be thankful. I believe that the best way of giving thanks is to foster and grow this potential throughout our lifetimes.

Consider our memories, which are de facto time travel machines. We travel into the past and into the future. Actually we travel into the past, to retrieve what we have learned, to cope with the future. We have both experienced and remembered pasts (see the healthymemory blog post, “Photos, Experiencing Selves and Remembering Selves”). We can go back in time before we were born via our imaginations and transactive memory. Similarly we can go forward in time via both our imaginations and transactive memory (transactive memories are those held by fellow humans and by technological artifacts such as books and computers).

When human minds are put to best use via creativity and critical thinking, tremendous artistic, scientific, engineering, and cultural feats are achieved. And we each have individual potential that we should do our best to foster and grow throughout our lifetimes by continuing to take on cognitive challenges and to interact with transactive memory (our fellow humans and technology). We should not retire from or give up on cognitive growth. And we should assist our fellow humans who are in need to grow their individual potential. This is the best means of giving thanks!


iPad for Transactive Memory

November 23, 2014

Remember that transactive memory consists of all memory that is resident outside of ourselves. So memories held by our fellow beings are part of transactive memory. Memories resident in technology, be it paper or electronic, are all types of transactive memory. Unfortunately, one of my many shortcomings is my lack of systems for organizing my information. I have articles I stored as a graduate student that I have kept in boxes and moved along with me whenever I moved. Unfortunately, the probability that I will ever find them again is close to nil. We are currently living in temporary quarters while my home is being remodeled. The remodeling will provide more space and bookshelves. These are much needed, because there were times when I could not find a book I had read, but I knew it had information I needed to review. In these cases it was frequently more expedient to reorder the book from Amazon.

I was excited by the invention of the Kindle and other electronic readers. I purchased a Kindle and liked it. It was especially useful for cruises, as I did not have to pack so many books. Nevertheless, I found the display to be too small, so I used it sparingly. My recent purchase of an iPad eliminated the display size problem, but initially I did have problems regarding the logic of the interface. Several consultations with Apple Geniuses solved these problems and I am now a most satisfied user, even though I use it primarily as a reader. An earlier post related my experiences using it at the APA convention (see the healthymemory blog post “Attendance at 2014 Convention of the American Psychological Association”). Frankly I find it easier doing email and writing with my laptop. The potential of the iPad is large, but it is unlikely that I shall avail myself of most of it.

From now on electronic versions of most written material will be preferred. Most books will be purchased on Amazon and downloaded to the Kindle app on my iPad. The iPad mitigates many logistical problems and provides an easy way of accessing information. I am still in a learning process, and my appreciation of the iPad as a device for transactive memory is growing.


10 Innovations That Changed History and 10 Innovations That Will

November 5, 2014

This is the title of a special report in the New Scientist (October 25-31, 2014). Articles like this are fun, but should not be taken too seriously. However, they do provide food for thought.

10 innovations that have changed history

Cooking. Clearly learning how to start and control fires was a prerequisite, but cooking enabled early humans to enjoy a better diet, advancing physical and cognitive health.

Weapons. Weapons enabled hunting, which provided for a better diet that advanced physical and cognitive health. They also brought about warfare. The article argues that weapons enabled the weakest member of a group to take down the strongest member of an opposing group. So weapons encouraged early human groups to embrace an egalitarian existence unique among primates. There appears to have been a link between warfare and technological advancement, the most recent example being the Cold War between the United States and the Soviet Union. The launch of Sputnik encouraged a giant increase in US funding for technology development and the training and education of scientists and engineers. I was one of the many beneficiaries of this funding. Would man have reached the moon without this funding? What about progress in computers? The internet is a product of defense spending.

Jewelry and Cosmetics. This is certainly an item I would have left off my list, but the authors argue that they hint at dramatic revolutions in the nature of human beliefs and communication. They are indications of symbolic thought and behavior, because wearing a particular necklace or form of body paint has meaning beyond the apparent. As well as status, it can signify things like group identity or a shared outlook. That generation after generation adorned themselves in this way indicates that these people had language complex enough to establish traditions.

Sewing. It is obvious that without sewing there would be little in the way of clothes to protect our bodies and allow us to live outside of highly temperate climates.

Containers. This is an obvious advancement that is easily overlooked. Groups of humans would have needed to remain small absent containers.

Law. Obviously codified rules are needed for societies to survive. Then, there is the concept of justice. The law and justice need to be better aligned. One might be tempted to argue that currently they are orthogonal dimensions.

Timekeeping. Contemporary society could not exist without a system of timekeeping. For much of history, timekeeping systems were local. It was not until the development of the railroads that the different timekeeping systems were brought into alignment to keep trains from crashing into each other.

Ploughing. Obviously without agriculture, societies would not have developed, and advanced agriculture requires ploughing.

Sewerage. Absent sewerage, not only would the stench be unbearable, but diseases would be widely spread. I will not step into a time-travel machine and go back to a time before sewerage.

Writing. But of course. If there were not writing, there could be no healthymemory blog.

10 innovations that will change history

End of aging. This might come to pass, but what will be its ramifications? Will warfare break out between the ages? Will people eventually grow tired of who they have been for so long and opt out?

Aging might end, but can the quality of life be maintained or improved?

Decision Making Machines. Not only do we not like making too many decisions, we are not good at making decisions. Perhaps decision making machines could replace our non-functioning legislative systems.

Customisable Bodies. Well, this has already started with plastic surgery. Will the results be improved? How will people choose to customize their bodies? How might this affect athletic competitions?

Cryptocurrencies. Bitcoin is provided as an example here. Would these currencies constitute an improvement or just another means of speculation?

Virtual Reality. Might virtual reality be seen as better than real reality, so that virtual living would replace real living?

Brain Uploads. Here we have the singularity with silicon. This is Kurzweil’s future, one of which I am quite skeptical.

Genetic engineering. I expect great things here to include the end of disease and genetic defects. There might even be substantive genetic improvements to mitigate our many shortcomings.

Space Colonization. Yes, in our solar system. But we need to learn how to break the laws of physics to colonize outside our solar system.

End of Privacy. This might already have occurred. What is needed are laws to prevent abuses of this end of privacy.

An Abundance of Everything. Something to be hoped for. Today only a minority of our species enjoys the benefits of technology and is not suffering from war and atrocities at the hands of fanatics. So why was the end of war and terrorism not on the list of ten advances for the future?


Maintaining Focus on the Internet

November 1, 2014

I become angry, furious even, when I hear, see, or read something about humans being victims of the internet. Supposedly, attention spans are shortened, and we are forced to switch from topic to topic. Consequently, we are exposed to volumes of information, but fail to develop knowledge or to understand topics in depth.

Now it is true that this can be the consequence if we are data driven by what pops up on the internet. But there is no excuse if we end up being victims of technology. Technology is a tool. A tool is something we use to accomplish some goal, not something to be victimized by.

So, when we get on the internet it should be with some goal or prioritized goals to accomplish. We need to maintain our focus to accomplish them, because never before have we had a tool that enables us to learn so much in so short a time. First of all, we can search on a topic to get suggested links. When we find a fruitful link we can begin to read and to take notes. We’ll encounter hyperlinks. When a hyperlink is encountered we need to make a decision. Should it be ignored? If we ignore it, we still can return later. If we think it is potentially important, but not something to be pursued at the moment, we can bookmark it. Or, if we feel we really need more information to continue, we can click on the link and drill down for more information.

Remember what was needed in pre-internet days? We have an article or a book. We read. We take notes. We identify holes in our knowledge and look for more references. All of this is time consuming, requiring going through card catalogs and other sources of additional information. Consider the time involved here and compare it with clicking on a hyperlink. When a book is needed, it can be ordered from Amazon or some other online vendor.

In the lingo of the healthymemory blog, the internet is a superb example of transactive memory. Remember that there are two sources of transactive memory. One is technology, which can range from the internet to books and journals. The other source is our fellow human beings. We can and should use the internet to connect with fellow humans who have knowledge and expertise in topics of interest.

For more healthymemory blog posts on this topic, enter “contemplative computing” into this blog’s search block.


Attendance at 2014 Convention of the American Psychological Association

August 13, 2014

Before posting about the substantive information from the convention, I shall first review some human factors technology issues. The convention was held in the Convention Center of Washington DC. The informational signs were not satisfactory. Even though I had been there before, navigation was a problem. In all fairness I must admit that the DC Convention Center is not unique in having this problem. I cannot remember any place I have been that did not suffer from this deficiency. The same problem applied to highway signs. The signs are useful to people who know the area. They are not useful for those unfamiliar with the area. In all fairness to the APA, at least, there were human guides strategically placed throughout the convention center to provide directions and information.

Road signs present an interesting case. There are standards for road sign legibility, although I do not think they are well enforced; nor, I believe, is there any requirement regarding the illumination of these signs. And what is the point of being able to read a sign if you don’t know what it means?

The fundamental problem is that the people who design the signs know quite well the area for which they are designing. The utility of these signs needs to be tested with people who do not know the area. Were this done effectively, the problem would largely disappear.

I also took my iPad to the convention. I am a new iPad user. I recently purchased a MacPro rather than undergo the frustration of Windows 8. I had been avoiding Apple for many years for a couple of reasons. The first was their contention that the Macintosh was intuitive; all one needed to do was point and click. Personal experience supplemented with volumes of empirical data provides ample proof that this claim was unsubstantiated. Secondly, I could not believe the gall of Apple in suing Microsoft over Windows. The windowed graphical user interface was developed at the Xerox Palo Alto Research Center and implemented in their Star computer. Xerox is to be faulted for not commercializing and supporting the Star computer. They could have captured both the commercial and the academic markets. Regardless, Apple had no claim on the windows concept.

And as far as being intuitively obvious, I found there to be nothing intuitive about the iPad. Although it does have Siri, I have not found Siri to have the information I needed when I needed it. However, she did provide a means of venting and cursing that I found to be cathartic. She is never offended and proffers, “You are entitled to your opinion.” But Apple does have Apple Geniuses. One can schedule appointments without difficulty and meet with a real human being who is quite knowledgeable and easy to work with. I hope that Apple keeps these geniuses and that other companies follow their lead.

The APA had a downloadable app for the meeting in addition to the large, bulky conventional program of 592 pages. I found them both to be useful. Unfortunately the convention app could not stand alone to my satisfaction. Although it might have contained all the information that the paper program had, I still found that it was easier to find certain things in the paper program.

I had planned to try taking notes on my iPad. I could do this either by typing or by writing. I found my writing to be both illegible and uneconomical. One of the benefits of typing is that it is legible. However, I had read an article that disabused me of this idea. According to a piece in the Monitor on Psychology (page 21, July/August 2014), this question was addressed by researchers from the University of California and Princeton University in a study reported in Psychological Science. The researchers asked 65 college students to watch a TED talk with the option of taking notes via laptop or by hand. A half hour after the talk the students answered factual recall questions and conceptual application questions about the lecture. Both types of note takers performed equally well on fact recall questions, but the laptop note takers performed significantly worse on the conceptual questions. Moreover, one week later, when the students were given a chance to review their notes before taking the test, the longhand note takers still performed better.

It is interesting to speculate why this result was obtained. As the students were not randomly assigned to one group or the other, it is possible that the longhand note takers were better students. But if the students had been randomly assigned to the groups, then some of the students would have been performing a way of taking notes that they might have found awkward. This was one of those situations where random assignment would have been ill advised. Perhaps the requirements of typing used up attention that could have been spent processing the lecture. Or perhaps there was more freedom taking notes longhand, as diagrams and links could be used. This might have especially aided conceptual understanding.

Another reason for taking conventional handwritten notes is that I feared losing information, with the undesirable consequence that my posts about the convention would have been rather thin.


The Prescience of Leonardo Da Vinci

July 5, 2014

Leonardo Da Vinci anticipated many great scientific discoveries.

40 years before Copernicus he noted, in large letters to underscore its significance, “IL SOLE NO SI MUOVE,” “The sun does not move.” He further noted that “The earth is not in the center of the circle of the sun, nor in the center of the universe.” When he lived, it was not only believed that the sun revolved around the earth but that the earth was the center of all things.

60 years before Galileo he thought that “a large magnifying lens” should be employed to study the surface of the moon and other heavenly bodies.

200 years before Newton he anticipated Newton’s theory of gravitation. He wrote, “Every weight tends to fall towards the center in the shortest possible way.” In another note he added, “Every heavy substance presses downward and cannot be upheld perpetually, the whole earth must become spherical.”

400 years before Darwin he placed man in the same broad category as monkeys and apes, writing, “Man does not vary from the animals except in what is accidental.” The accidental part is especially prescient, as he is anticipating the basis of evolution: random mutations.

I’m curious as to whether any of these scientists were aware of Da Vinci’s writings and whether he had any influence on their work. Please comment if you have any information regarding these questions.

Modifying the Work Environment and the Home Environment

June 15, 2014

Modifying the Work Environment and the Home Environment
is another chapter in Nurturing the Older Brain and Mind by Greenwood and Parasuraman.  It covers research in the field of Human Factors and Ergonomics.  I am a longstanding member in the Human Factors and Ergonomics Society.  The field of Human Factors and Ergonomics is devoted to designing technologies and environments so that they can be used effectively and safely.  Greenwood and Parasuraman note that their coverage of the broad area of human factors and ergonomic design for older adults is limited to just a few topics, including health-care technologies aimed at older adults and assistive technologies for the home.  They do provide references for more general coverage of basic research issues in human factors and aging.  There is much research into sensory-perceptual factors and interface designs and devices to compensate for losses in both sensory and motor functions that are not provided in the book.

Assistive technologies for self-care and “aging in place” are being developed.  This is especially important because more than 90% of older adults live in their own homes, with relatives, or in independent-living facilities.  Older adults living alone are of special concern.  Some older people have banded together so that they can age in place.  They organize self-help “villages” to screen service providers (repair technicians, for example) and to provide other direct services, such as meal delivery, to dues-paying members.

The proper design of these assistive technologies has special importance for the elderly.  Daily we interact with and are frustrated by poorly designed devices (and software).  This frustration is exacerbated in the elderly who may abandon the use of the technology or, worse yet, use it improperly.

The Georgia Institute of Technology has been at the forefront of research to introduce “intelligent” technologies to help older adults age in place.  They have developed what they term the “Aware Home,” a conventional-appearing house with an extensive sensing and computing infrastructure designed to keep older individuals safe and to improve their lives.  Information can be sent to a friend or relative to keep them aware of where the individual is in the house and what they are doing.

Honeywell has developed an Independent LifeStyle Assistant (ILSA) to support an independently living older person with extensive monitoring and management (including the monitoring of temperature, blood pressure, and heart rate) and with the ability to remotely control lights, power, a thermostat, door locks, and water flow.  There are many sensitive issues in implementing these systems, indicating that more research needs to be done.  Overreliance and complacency are two of the problems that need to be addressed.  Continued research will yield improved systems, and technology can also be employed in an ad hoc manner.  Imagine using Skype to keep tabs regularly on an older friend or relative.  Enter “Aging in Place Technology Watch” into a search engine to learn of the large range of activities taking place in this area; the site offers a wide range of information and products.
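The kind of monitoring such a system performs can be pictured as simple range checks over sensor readings. The following Python sketch is purely illustrative: the sensor names, safe ranges, and alert format are my own assumptions, not Honeywell's actual design.

```python
# Illustrative safe ranges for each monitored reading (assumed values,
# not clinical recommendations).
SAFE_RANGES = {
    "temperature_f": (60.0, 85.0),   # indoor temperature, degrees Fahrenheit
    "systolic_bp": (90.0, 140.0),    # systolic blood pressure, mmHg
    "heart_rate": (50.0, 110.0),     # heart rate, beats per minute
}

def check_readings(readings):
    """Return an alert string for every reading outside its safe range."""
    alerts = []
    for name, value in readings.items():
        low, high = SAFE_RANGES[name]
        if not low <= value <= high:
            alerts.append(f"ALERT: {name} = {value} outside [{low}, {high}]")
    return alerts

# An elevated heart rate produces a single alert, which a real system
# could forward to a relative or caregiver.
alerts = check_readings({"temperature_f": 72.0,
                         "systolic_bp": 120.0,
                         "heart_rate": 130.0})
```

A real system would add trend detection, safeguards against overreliance, and the remote actuation described above; the sketch shows only the core rule-checking idea.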

Limits to Human Understanding

May 20, 2014

This blog post was motivated by an article in the New Scientist1, “Higher State of Mind” by Douglas Heaven.  It raised the question of limits to human understanding, a topic of longstanding interest to me.  The article reviews two paths Artificial Intelligence has taken.  One approach involved rule-based programming.  Typically the objective here was to model human information processing with the goal of having the computer “think” like a human.  This approach proved quite valuable in the development of cognitive science, as it identified problems that needed to be addressed in the development of theories and models of human information processing.  Unfortunately, it was not very successful in solving complex computational problems.
The second approach eschewed the modeling of the human and focused on developing computational solutions to difficult problems.  Machines were programmed to learn and to compute statistical correlations and inferences by studying patterns in vast amounts of data.  Neural nets were developed that successfully solved a large variety of complex computational problems.  However, although the developers of these neural nets could describe the nets they themselves had programmed, they could not understand how the conclusions were reached.  Although the machines can solve a problem, we are unable to truly understand the solution.  So, there are areas of expertise where machines can be said not only to know more than we do, but also to know more than we are capable of understanding.  In other words, what we can understand may be constrained by our biological limitations.
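This opacity is easy to demonstrate even at toy scale. In the hypothetical sketch below, a tiny neural network learns the XOR function by gradient descent; afterward it fits the data far better than at the start, yet its learned weights are just lists of unlabeled numbers that do not explain how it arrives at its answers. The network size, learning rate, and epoch count are arbitrary choices for illustration.

```python
import math
import random

random.seed(0)  # make the sketch deterministic

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
H = 4  # hidden units

# Random initial weights: input->hidden (w1, b1) and hidden->output (w2, b2).
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = random.uniform(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

initial_loss = total_loss()
lr = 0.5
for _ in range(5000):  # stochastic gradient descent via backpropagation
    for x, t in DATA:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)           # gradient at the output unit
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_loss = total_loss()
# The net now fits XOR far more closely, but printing w1 and w2 yields
# only unlabeled real numbers -- the "how" is not humanly readable.
```

Scaled up to millions of weights trained on vast data sets, this is precisely the situation the article describes: a working answer with no humanly readable explanation.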
So, what does the future hold for us?  There is an optimistic scenario and a pessimistic scenario.  According to Kurzweil, a singularity will be achieved by transcending biology: we shall augment ourselves with genetic alterations, nanotechnology, and machine learning.  He sees a time when we shall become immortal.  In fact, he thinks that this singularity is close enough that he is doing everything to extend his life so that he shall achieve this immortality.  This notion of a singularity was first introduced in the fifties by the mathematician John von Neumann.
A pessimistic scenario has been sketched out by Bill Joy.  I find his name a bit ironic.  He has written a piece titled “Why the Future Doesn’t Need Us,” where he argues that technology might be making us an endangered species.
So these are two extremes.  A somewhat less extreme scenario was outlined in the movie Colossus: The Forbin Project, which was based on the novel Colossus by Dennis Feltham Jones.  The story takes place during the Cold War confrontation between the United States and the Soviet Union.  The United States has built a complex, sophisticated computer, Colossus, to manage the country’s defenses in the event of a nuclear war.  Shortly after Colossus becomes operational, it establishes contact with a similar computer built by the Soviet Union.  These two systems agree that humans are not intelligent enough to manage their own affairs, so they eventually take over the control of the world.
So what does the future hold for us?  Who knows?

An Apology and an Explanation

April 30, 2014

Some time has passed since my last post.  So, I apologize, but here is the explanation.  I had ordered a renewal of my security software for my XP.  I was able to download it, but I received an error message so I could not install it.  So I clicked on help.  That provided a phone contact to the help desk.  They offered to scan my computer to see what the problem was.  This was an interesting procedure.  I allowed them to gain remote access to my computer while I watched.  They reported that my computer was in sad shape and needed to be cleaned.  They offered to do this for a price and also sold me a multi-year support contract.  I did this because they promised to keep my XP computer going for years, and I dreaded moving on to Windows 8.  I watched while they cleaned my computer remotely.  I should say that they supposedly cleaned my computer.  This took close to an hour, and then the moment of truth came.  They tried to restart my computer.  Of course, shutting down is a prerequisite to restarting the computer.  It never shut down.  I had to force it down.  Then when I tried to restart the computer it kept trying to boot up and never succeeded.  I had watched them destroy my XP!  Moreover, they had the temerity to bill me for destroying my computer.  Of course, I am going to refuse these charges.

Now came the dreaded problem of getting a new computer.  I thought I might be able to cope with Windows 8 if I had a touch screen.  I was wrong.  So I returned the computer and bought a Mac.  Now I am in the process of learning its “intuitive” interface.

I am furious that Microsoft can force us into these upgrades.  We go through these periodically at the office.  Days are lost learning to cope with the so-called upgrade.  Moreover, I always fail to see any benefits from the upgrade.  When one’s personal computer needs to be upgraded, it is even more traumatic.

I don’t think Microsoft should be allowed to do this.  They should not be allowed to discontinue the support of an operating system.  Moreover, everything needs to be backwards compatible.  Before an individual, company, or the government agrees to an upgrade, the benefits of the upgrade need to be made explicit.  There should be warnings regarding time lost as a result of changes resulting from the upgrade.  And individuals, private companies, and the government should be allowed to sue for unanticipated losses in time, money, and in mental anguish.

Stupidity Pandemic

April 1, 2014

Picking up from the previous blog post, which left off with the exhortation not to follow the Krell to extinction, it would be well to ask, where do we stand now? We are at the peak of scientific knowledge, but too much of the world lives at a subsistence level, and there are numerous wars and conflicts. Millions of people are displaced and have neither homes nor prospects. Terrorists are preoccupied with jihad. Even in the so-called advanced countries stupidity reigns. Many people cling to discredited dogmas and reject scientific findings. I find it quite annoying that many people enjoy the benefits of medicine and technology that result from science, yet reject the scientific basis on which these benefits depend. Worse yet, these individuals’ beliefs put further advancements in science, technology, and medicine at risk. Moreover, they prevent or hinder responding to problems with a strong scientific basis that need to be addressed. There is a member of the U.S. Congress who believes in a literal interpretation of the Bible and enforces his beliefs in his legislative actions. What is even more depressing is that citizens of a presumably advanced country elected such a man to office.
Senator Kay Bailey Hutchison proposed that Congress double funds for medical science but cut the entire social and behavioral sciences budget of the National Science Foundation. Although she is to be applauded for proposing to double funds for medical science, it is regrettable that she fails to see the relevance of the social sciences. One can well argue that most of our problems need to be addressed by the social and behavioral sciences. (To read more on this topic, enter “STEM” into the search block of the healthymemory blog.)
Debates in the United States center on whether someone is for or against Big Government. This is a meaningless question and a meaningless topic for debate. What is Big Government? Perhaps it could be defined in terms of the percentage of the GNP spent by the government, but that would still be a pointless basis for debate. The debate should be on what services should be provided by government and which by the private sector. Moreover, this debate should not be on the basis of what people believe, but on the basis of reasoning and evidence. Public policy should be evidence-based. Sometimes the evidence is there for the asking, but often experiments need to be done. When this happens, there is some evidence of intelligence. Unsupported beliefs indicate stupidity. To put this in Kahneman’s terms, we need System 2 processes, not System 1 processes (if this is not understood, enter “Kahneman” into the search block of the healthymemory blog).
Too often a false dichotomy is made between science and religion; that you follow one or the other. Science and religion are not incompatible. First of all, it needs to be appreciated that science and religion are alternative, not competitive, means of knowing. The Dalai Lama is a strong proponent of this point of view and also a strong believer in science. Next, a distinction needs to be made between religions and God. Religions are constituted of and by human beings, and religious promulgations and texts are from men. It is up to us individuals to decide whether they are the word of God. A belief in God should begin with an appreciation of our brains. If you believe in God, then the brain is a gift that came through evolution, and we need to make the most of this gift. This brain is the vehicle by which we work to understand the world. Science is a rigorous means of gaining this understanding. It is clear that this understanding comes gradually.
For a long time, the advancement of human knowledge proceeded at a glacial pace. I would argue that true scientific advancement began with Nicolaus Copernicus (1473-1543) and Galileo Galilei (1564-1642) and their use of the scientific method. Copernicus formulated the heliocentric theory of our solar system, with the sun at its center. Galileo’s research put him at odds with the Roman Catholic Church, which saw his research as an assault on the Church’s monopoly on truth. They placed him on trial. Fortunately, others followed in their footsteps. As more engaged in these pursuits, knowledge advanced at an increasingly rapid rate. One of the ironic features of this advancement of scientific knowledge is that we have become more aware of what we don’t know. Dark matter is just one of these areas.
Unfortunately, religious dogmas have had a depressing effect on the advancement of knowledge. This should never be allowed. What we learn through science is, or should be, the antithesis of dogma. Scientific knowledge is always subject to change in light of new information and new theories. Although we can never be certain, scientific knowledge provides us the best available information regarding what to believe and how to act. Science requires heavy use of System 2 processes, thinking. Dogmas allow us to rely on System 1 processes so we don’t have to think.

Smartphone Usability

February 1, 2014

I confess to not having a smartphone. Before the advent of personal computers, complaints were already being heard about information overload, there being too much information to process. This problem was significantly increased by the advent of personal computers, significantly exacerbated by the internet, and has become much worse with smartphones. I was recently encouraged when I bought a new dumb phone that came with the warning not to use it while driving. I don’t feel a need for a smartphone and find that not having one helps me deal with the problem of information overload. I’d rather wait until I can conveniently get to my laptop than deal with the minute keyboards and displays.

An article1 in the Washington Post further raised the issue of usability. According to a Gallup Poll, 62% of Americans now own a smartphone. But according to the Pew Research Center, only half of these users download apps and read or send e-mail. A 2012 Harris Interactive Poll found that just 5% of Americans used their smartphones to show codes for movie admission or to show an airline boarding pass. Moreover, these problems are not limited to the older generation. Experts who study smartphone use, as well as tech-support professionals who work with the confused, say that smartphone usability problems occur at all ages and for all kinds of reasons. The Genius Bars at Apple2 stores sometimes require that desperate iPhone users make their appointments days in advance.

Clearly, attention to usability is missing. Absent are design guidelines for smartphones that emphasize usability. Here are some design principles from an Android developer’s Web site: “Enchant me, simplify my life, make me amazing.” What about making my smartphone easy to use rather than complicating my life?

Back in the old days of command line interfaces, usability was a key requirement for government users. With the advent of graphical user interfaces (GUIs), that requirement is missing. Unfortunately, the government appears to have bought Apple’s propaganda that GUIs are intuitive. GUIs can and should be made intuitive, but a GUI developed without usability guidelines usually will not be.

Please weigh in with your comments on this topic.

1Rosenwald, M.S. (2013). Phones getting smarter, but their users aren’t. Washington Post, 19 January, C1.

2Apple promoted their intuitive point and click interface. There is ample research to prove that this claim was a lie.

The Problem with Scientific Journals, Especially Elite Ones

January 3, 2014

Examples of elite scientific journals are Science, Nature, and Cell. But this problem generalizes to practically all refereed journals. Unfortunately, many refereed journals regard a high rejection rate for submitted papers as a criterion of success. This problem was recently articulated by Randy Schekman, the 2013 Nobel Prize winner in Physiology or Medicine.1 One of his criticisms was the artificial restriction of the number of papers these journals publish, which results in a high rejection rate. The second criticism has to do with the published “impact factor” that purports to measure how important a journal is. The result of these pernicious factors is the conclusion John Ioannidis reached in 2005 in PLoS Medicine: that most published research findings are false.

The sine qua non of science is replication. But journals do not like to publish replications of research. Much worse is that failures to replicate are also likely not to be published. Simply put, that is why the majority of published research findings are false. This problem is so severe that the cover of the October 19th to 25th Economist read, HOW SCIENCE GOES WRONG. The feature article elaborated on the very brief synopsis that I have provided.

At one time, in the era of paper publishing, there was a serious cost that limited how much research could be published. However, that is no longer the case. There is no limit to how much research can be put online. There is still a cry for research to be refereed. I have participated in the review process both as a reviewer and as a receiver of reviews. I have not been impressed by the process. There is a large factor of arbitrariness, and often form is weighted more strongly than substance. Frankly, I do not need what I read to be refereed. I can quickly ascertain whether a particular paper is worthy of further attention.

I think the major force behind refereeing is the academics. When making tenure decisions, the number of refereed publications is a factor that is heavily weighted. Absent this metric, academics might actually need to read the papers of those they are considering for tenure.

Randy Schekman has started his own on-line journal. Expect many more in the future. Indeed, expect to be able to download more research papers from authors’ websites.

This is certainly a welcome development for poor bloggers such as myself trying to access relevant research. There is also a push to make more data available to researchers. As most of this research is funded with taxpayers’ money, this is certainly appropriate, but I shall stop here before proceeding on another rant.

1What’s wrong with Science, The Economist December 14th 2013, p. 86.

Digital Sabbaths

October 23, 2013

Appendix Three in The Distraction Addiction1 is titled “DIY Digital Sabbath.” In other words, it discusses do-it-yourself techniques for taking a day off from digital technology. The first appendix describes the technology diary that Ohio State University professor Jesse Fox assigns to her students. The diary is supposed to capture every mediated/technological interaction over the course of a day. I think the first question regarding digital Sabbaths is whether we should observe one. The results of such a diary might help someone make this decision. I think the answer to this question depends upon whether you are addicted to technology. I think the best test of this is whether you can go an entire day “cold turkey” without using any technology. If you answer “yes,” then I think there is some question as to whether a digital Sabbath is in order. If the answer is “no,” then I think consideration of a digital Sabbath, or of the alternative I shall offer at the end of this post, is in order.

Should you decide you need to observe a digital Sabbath, here are some guidelines offered by Dr. Pang:

Set a regular time.

Figure out what to turn off.

Don’t talk about digital sabbaths (except to friends who complain about your nonresponsiveness).

Fill the time with engaging activities.

Be patient.

Be open to the spiritual qualities of the Sabbath.

Enjoy your escape from “real time.”

Before offering my alternative to a digital Sabbath let me provide some context. Within my lifetime, information overload had been raised as a serious problem before the advent of personal computers. I remember reading that this problem of information overload was raised after the invention of the printing press. Now the use of personal computers has increased this overload further, and the advent of mobile computing has increased this overload further still.

I don’t regard myself as being addicted to technology. I can easily pass the cold turkey test. When I go on a vacation, I don’t use technology. I do bring a cell phone, which I do not use unless there is an emergency. Had I been asked many years ago if personal computing devices would become popular, I would have confidently offered the opinion, “No.” My reason would have been that the displays would be too small and the keyboards would be difficult to use. Time has proven me to be both wrong and a fool. But personally I don’t use these devices because nothing is so urgent that it will not wait until I can get to a laptop that I can use in comfort. I do have a dumb cell phone and a Kindle, but that’s all.

So I think a good way of dealing with the distraction addiction is to consider the urgency of using this technology. Many healthymemory blog posts have addressed the dangers of using a phone while driving. Whether hands are free or not is irrelevant. The problem is that using a device that diverts attention from driving increases the risk of accidents, injuries, and deaths. I would argue that there is a similar risk in using a personal computing device in an area in which there are also automobiles. Information overload can best be dealt with, and the distraction addiction avoided, by using technology when it can be conveniently and safely used. I’m well aware that this is not the cultural norm. So you might want to explain to your friends why you might appear to be unresponsive, and suggest the benefits that they could enjoy by using technology with discretion.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction.

Using or Abstaining from Technologies in Ways That are Restorative

October 20, 2013

The eighth principle of contemplative computing1 is using or abstaining from technologies in ways that are restorative. With the possible exception of flow, using technologies requires mental effort. Even in the case of flow, eventually we all tire. In other words, our conscious mental resources deplete and need time to be restored.

So we need to know how to restore our mind’s ability to focus. We can arrange our environments to make it easier to concentrate for longer periods. It is also important to find activities that offer a respite, but not a complete break from steady concentration. Things that offer a sense of being away with a mix of fascination and boundlessness can help our minds recharge. Complete breaks are also essential. Take time off to meditate. A good walk can help the mind recharge. Then, too, it is also necessary to know when it’s time to quit for the day (or night). Be assured that even when we give our conscious minds a break, our subconscious minds keep working. It is possible that our conscious minds can get caught in a rut thinking about the same things, and a complete break can facilitate our subconscious minds breaking through with the answer.

So, there we have it. The eight principles of contemplative computing: be human, be calm, be mindful, make conscious choices, extend our abilities, seek flow, engage with the world, and use or abstain from technologies in ways that are restorative.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction.

Engage with the World

October 16, 2013

Engage with the world is the seventh principle of contemplative computing.1 Engage with the World complements very nicely the fifth principle of contemplative computing, Extend Your Abilities. They both involve transactive memory. Whereas Extend Your Abilities focused on using the memory resident in technology to enhance your cognitive growth, Engage with the World focuses on engaging with your fellow humans to enhance your cognitive growth. Remember that transactive memory includes both memories resident in technology (both electronic and conventional, such as books and journals) and in your fellow human beings. Engaging with the world implies not only that we will receive knowledge from our fellow humans, but also that we shall contribute knowledge to the store of human knowledge. Do not underestimate yourself. You have knowledge to contribute. If not, acquire additional knowledge so that you can add your own unique contributions. These contributions might be additions/corrections you make to Wikipedia, or contributions you make through your own blog. It might even be information you pass on to individual humans. Remember that social interaction is a key component of a healthy memory.

When engaging, please keep the following in mind: “Engaging with the social world isn’t just interacting, it’s about putting people rather than technology at the center of your attention. For some, this involves applying Christian or Buddhist precepts to their virtual interactions and using media in ways that let them be spiritual presences, not just social ones, and see the spark of divinity in everyone.”2

The first six principles of contemplative computing have been discussed in the immediately preceding healthymemory blog posts. The next blog post will discuss the final principle of contemplative computing.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction.

2Ibid p. 225.

Seek Flow

October 13, 2013

Seeking flow is the sixth principle of contemplative computing.1 Flow is a state identified by Mihaly Csikszentmihalyi (Chick-sent-me-high’-ee).2 It has the following components. “Concentration is so intense that there is no attention left over to think about anything irrelevant, or to worry about problems. Self-consciousness disappears, and the sense of time becomes distorted. An activity that produces such experiences is so gratifying that people are willing to do it for its own sake, with little concern what they will get out of it, even when it is difficult or dangerous.”3 He says that you can reach flow doing almost anything. He gives an example of how lox cutters achieve flow.

Situations in which there are challenges, clear rules, and immediate feedback are likely to produce flow. Usually video games are good for achieving flow, and they have been found beneficial in helping older people keep mentally sharp. Unfortunately, once you become especially good at something it can become boring. That is why many games have different difficulty levels. Once you have become bored with one level and are no longer achieving flow, you can advance to the next level and improve to the point where you again achieve flow.
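The difficulty-level mechanism can be sketched as a simple feedback rule: raise the level when recent performance suggests boredom, lower it when it suggests being overwhelmed. The function and the 80%/20% thresholds below are my own illustrative assumptions, not values from any particular game.

```python
def adjust_level(level, recent_wins, recent_games, max_level=10):
    """Nudge the difficulty level toward the player's flow channel.

    The 0.8 and 0.2 win-rate thresholds are illustrative assumptions.
    """
    if recent_games == 0:
        return level
    win_rate = recent_wins / recent_games
    if win_rate > 0.8 and level < max_level:  # too easy: risk of boredom
        return level + 1
    if win_rate < 0.2 and level > 1:          # too hard: risk of anxiety
        return level - 1
    return level                              # challenge matches skill

# A player winning 9 of their last 10 games at level 3 moves up to level 4.
new_level = adjust_level(3, 9, 10)
```

The same feedback idea, keeping challenge just ahead of skill, applies to the non-game activities discussed below, such as language drills or mnemonic practice.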

Flow can be experienced in many activities, and some require considerable time before you start to achieve flow. I remember studying German in college. The first course was slow going. In fact, I received my first and only “D” in introductory German. I then learned that I needed to spend time drilling in the language laboratory until things started flowing. As I studied further, I could read German without consulting the dictionary so frequently. And I got to the point where I could understand lectures when they were given in German.

Seeking flow can be regarded as an extension of the preceding principle, extend your abilities. Play video games and achieve flow. But don’t stop there. Consider athletic, and especially mental, activities where flow can be achieved. Mnemonic techniques can be developed to the point where flow is achieved in memorization.

The first five principles of contemplative computing have been discussed in the immediately preceding posts. The final two principles will be discussed in the subsequent posts.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction

2 (2008) Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience

3 (2013) Pang, Alex Soojung-Kim. The Distraction Addiction

Extend Your Abilities

October 9, 2013

The fifth principle of contemplative computing1 is to Extend Your Abilities. Readers of the healthymemory blog should realize that this is one of the healthymemory themes. It comes under the rubric of transactive memory. Transactive memory refers to knowledge that is resident in technology, ranging from the world wide web to conventional texts, as well as knowledge that is resident in our fellow human beings.

Some of what we know is resident in our individual minds, our brains. There is other information that we know, cannot recall, but know how to find. This is referred to as accessible transactive memory. That is, we know how to find and access it quickly. Then there is information that we know exists, but cannot find or access readily. This is referred to as available transactive memory. This is information that we are fairly confident we can locate given enough time and searching. Finally, there is potential transactive memory. This is all the knowledge and information that is available on earth. As individuals, our task is to transfer some knowledge from accessible transactive memory to our individual minds and brains. Then we need to transfer some knowledge from available transactive memory to accessible transactive memory. And, finally, there is the vast store of information and knowledge that is currently unknown to us. Although we can hope to learn only a fraction of this information, this is still a matter of extending our abilities.
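The progression of storage categories described above can be sketched in code. This is an illustrative sketch only; the identifier names and the `upgrade` function are my own shorthand for the post’s idea of moving knowledge closer to personal memory, not terminology from the book:

```python
from enum import Enum

class TransactiveMemory(Enum):
    """Knowledge categories, ordered from closest to farthest from the individual mind."""
    PERSONAL = 0     # resident in our own minds and brains
    ACCESSIBLE = 1   # not recalled, but we know how to find it quickly
    AVAILABLE = 2    # known to exist, locatable with enough time and searching
    POTENTIAL = 3    # the vast store of knowledge currently unknown to us

def upgrade(category: TransactiveMemory) -> TransactiveMemory:
    """Move a piece of knowledge one storage category closer to personal
    memory, mirroring the transfers described in the post (e.g., from
    available to accessible, or from accessible to personal)."""
    return TransactiveMemory(max(category.value - 1, 0))
```

For example, `upgrade(TransactiveMemory.AVAILABLE)` yields `TransactiveMemory.ACCESSIBLE`, while knowledge already in `PERSONAL` memory stays where it is.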

We are constantly confronted with the epistemological question, how well do we need to know something? Do we need to know it well enough so that we can expound upon it without notes? Perhaps knowing how to access it quickly will suffice. Or perhaps we only need to know that it exists, and that we can find it if we search long enough for it. It would be a mistake to put too much knowledge into any one of these categories. The percentage placed in each will be a matter of individual choice. But we still should have the goal of upgrading the storage category for a certain amount of this knowledge. And we should always be extending our knowledge into the potential transactive memory category. This is all a part of extending our abilities and growing cognitively.

The first four principles of contemplative computing have been discussed in the immediately preceding posts. The next three principles will be discussed in subsequent posts.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction.


Make Conscious Choices

October 6, 2013

The fourth principle of contemplative computing1 is to Make Conscious Choices. Always remember that it is you who decides when to use which devices and which software. It is you who decides if and when you will answer your phone and your voice mail. Don’t let technology dictate what you do. Make conscious choices and be mindful of your choices.

Dr. Pang also makes a useful distinction between multi-tasking and switch tasking. True multi-tasking involves doing multiple tasks that go together. Cooking several dishes at the same time in the preparation of a meal is one of the examples provided by Dr. Pang. Conducting a teleconference on your computer is another example. My wife likes to walk and talk on her cellphone. As long as this is done in a quiet environment, this is another example of multitasking. However, were she walking in an urban environment, this would, of necessity, be an example of switch tasking because she would need to switch her attention to assure that she would not be hit in traffic. Similarly, driving and talking on the phone is an example of switch tasking. The tasks do not go together, and attention must be switched from one task to the other. The very act of switching tasks demands attention. And remember, when you are driving, you are controlling a vehicle that can kill. Previous healthymemory blog posts have not made a distinction between multitasking and switch tasking. The multitasking dangers discussed in previous healthymemory blog posts have been switch tasking dangers using Dr. Pang’s distinction.

Properly designed Zenware reminds you that you make your own choices about where to direct your attention by helping you focus your attention.

The first three principles of contemplative computing were discussed in previous healthymemory blog posts. The four remaining principles of contemplative computing will be discussed in subsequent healthymemory blog posts.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction.

Blogging Buddhists

October 2, 2013

Yes. Buddhists do use technology and they blog. This post is so titled because of the third principle of contemplative computing1, Be Mindful. We need to learn what being mindful feels like and to learn to see opportunities to exercise it while being online or using devices.

Buddhist monastics use the web to test their beliefs and objectives, that is, their mindfulness, capacity for compassion, and right behavior. In the digital world it is easy to forget that we’re ultimately interacting with our fellow human beings rather than Web pages. Damchoe Wangmo recommends that you “investigate your motivation before each online action, to observe what is going on in your mind,” and stop if you’re driven by “afflictive emotions” like jealousy, anger, hatred, or fear.2 Choekyi Libby watches herself online to “make sure I’m doing what I’m doing motivated by beneficial intention.”3 Others argue that we need to bring empathy to technology, to have our interactions be informed by our own ethical guidelines and moral sensibility. If we can be a positive presence online, we can be an even better one in the real world. “Approaching your interactions with information technologies as opportunities to test and strengthen your ability to be mindful; treating failures to keep focused as normal, predictable events that you can learn from; observing what helps you to be mindful online and what doesn’t—in other words engaging in self-observation and self-experimentation—can improve your interactions with technologies and build your extended mind.”4

The following Rules for Mindful Social Media are taken from Appendix Two of The Distraction Addiction:

Engage with care. Think of social media as an opportunity to practice what the Buddhists call right speech, not as an opportunity to get away with being a troll.

Be mindful about your intentions. Ask yourself why you’re going onto Facebook or Pinterest. Are you just bored? Angry? Is this a state of mind you want to share?

Remember the people on the other side of the screen. It’s easy to focus your attention on clicks and comments, but remember that you’re ultimately dealing with people, not media.

Quality, not quantity. Do you have something you really want to share, something that’s worth other people’s attention? Then go ahead and share. But remember the aphorism carved into the side of the Scottish Parliament: Say little but say it well.

Live first, tweet later. Make the following promise to yourself: I will never again write the words OMG, I’m doing x and tweeting at the same time LOL.

Be deliberate. Financial journalist and blogger Felix Salmon once lamented that most people believe that online content is not supposed to be read but reacted to. Just as you shouldn’t let machines determine where you place your attention, you shouldn’t let the words of others drive what you say in the public sphere. Being deliberate means that you won’t chatter mindlessly or feed trolls. You’ll say but little and say it well.

The remaining five principles of contemplative computing will be discussed in subsequent healthymemory blog posts. The first two principles were discussed in the immediately preceding posts.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction

2 Ibid., p. 219

3 Ibid., p. 219

4 Ibid., pp. 221-222.

Be Calm

September 29, 2013

The second of eight steps to contemplative computing1 is to be calm. Contemplation involves a special kind of calm. It’s active rather than passive. It’s disciplined and self-aware. It’s like the placidity of the samurai. Or the coolness under pressure exhibited by an experienced pilot. It’s the product of masterful engagement that fills one’s attention and leaves no room for distraction.

Training and discipline are required for this type of calm. It involves a deep understanding of both devices and the self. This calm does not require getting away from the world. Rather it allows for fluid quick action in the world. The goal is not to escape, but rather to engage. We need to set the stage on which we can bring our entanglement with devices and media under our control so that we can more effectively engage with the world and extend ourselves.

Remember that technology affords the opportunity to be calm, if only we make use of it. There is voice mail, so phone messages can be answered, or not answered, when we decide to answer them. Similarly, email awaits our attention. Always remember that it is our attention. We can decide if and when to devote our attention to it.

Also remember, that meditation is a practice that can help us be calm. You will find many posts on meditation in the healthymemory blog.

Zenware, which assists us in being calm, is discussed in The Distraction Addiction. Writeroom and Ommwriter are two examples of Zenware. Try going to

The remaining six principles of contemplative computing will be discussed in subsequent healthymemory blog posts. The first principle, Be Human, was discussed in the preceding blog post.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction

Be Human

September 25, 2013

Be human is the first of eight steps to contemplative computing.1 Perhaps this could be rephrased: remember what it means to be human. It means doing two things.

First, it means appreciating that entanglement is a big part of us. Entanglement refers to our being entangled with our technology. This goes back to the first tools and weapons developed by early humans. There is a misconception that technology refers to something new. The term technology really refers to any systematic application of knowledge to fashion the artifacts, materials, and procedures of our lives. It applies to any artificial tool or method. We use technologies so well that they become invisible. We incorporate them into our body schema, and employ them to extend our mental and physical capabilities, our human potential. Our species has honed this capability for more than a million years. It includes the domestication of plants and animals for food and clothing, and the invention of language and writing. Moreover, concerns about our entanglement with technology are not new. Socrates objected to the development of the Greek alphabet. In the 1850s Thoreau wrote in Walden, “But lo! Men have become tools of their tools.” Nevertheless, all of these have made us more human and more entangled with technology. Information technology is no different. We should insist on devices that serve and deserve us.

Second, it means recognizing how computers affect the way we see ourselves. Information technologies are developing so quickly, vastly increasing in power and sophistication. Computer power has a thousandfold increase every ten years, a millionfold increase every twenty years. They invade every corner of our lives and threaten to not only match, but also exceed our own intelligence. Consequently, we can easily feel stupid and feel a sense of resignation about our approaching cognitive obsolescence as computer overlords surpass human intelligence and memory. We need to realize that human intelligence and memory are biological and different from silicon counterparts. Real time is not human time, but the speed of commercial and financial transactions can continually be ratcheted upward. Although the lag between events and reporting on events can be reduced to virtually zero, we do not have to take less time to read, decide and respond to changes in the world and workplace. Our biological brains complement digital silicon brains. We need to be users, not victims, of technology.

The remaining seven steps to contemplative computing will be addressed in subsequent healthymemory blog posts.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction


Contemplative Computing

September 22, 2013

According to Nielsen and the Pew Research Center, Americans spend an average of 60 hours a month online. That’s 720 hours a year, which is the equivalent of 90 eight-hour days per year. Twenty of these days are spent on social networking sites, 38 viewing content on news sites, YouTube, blogs, and so on, and 32 doing email. Remember, these numbers are averages, so numbers for individuals can be considerably higher or lower. The usual response to this is that we are being overwhelmed by technology.
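The arithmetic behind these figures is easy to verify. The following is just a back-of-the-envelope check of the numbers cited above, not a calculation from the original reports:

```python
# Back-of-the-envelope check of the online-usage figures cited above.
hours_per_month = 60
hours_per_year = hours_per_month * 12   # 720 hours per year
eight_hour_days = hours_per_year / 8    # 90 eight-hour days per year

# The breakdown by activity should account for all of those days.
days_by_activity = {"social networking": 20, "news and other content": 38, "email": 32}

print(hours_per_year)                   # prints 720
print(eight_hour_days)                  # prints 90.0
print(sum(days_by_activity.values()))   # prints 90
```

Note that the activity breakdown (20 + 38 + 32 days) sums exactly to the 90 eight-hour days, which is what makes the 720-hour yearly total consistent.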

Readers of the healthymemory blog should know that this blog is not sympathetic to articles and books complaining that we are helpless victims of our technology. The Distraction Addiction, in spite of its title, is not one of these books. Its author, Dr. Alex Soojung-Kim Pang, is a senior business consultant at Strategic Business Insights, a Silicon Valley-based think tank, and a visiting scholar at Stanford and Oxford universities. He has also been a Microsoft Research fellow. Dr. Pang is an advocate of contemplative computing, of not letting technology rule our lives, but instead of using this technology and interacting with our fellow humans to extend and grow our capabilities. Using technology and interacting with our fellow humans is referred to in the healthymemory blog as transactive memory. Contemplative computing aligns directly with what is being advocated in the healthymemory blog. Transactive memory, mindfulness, and meditation are central to the message of the healthymemory blog.

There are four big ideas, or principles in The Distraction Addiction.

The first big idea is our relationships with information technologies are incredibly deep and express unique human capacities.

The second big idea is the world has become a more distracting place—and there are solutions for bringing the extended mind back under control.

The third big idea is it’s necessary to be contemplative about technology.

And the fourth big idea is you can redesign your extended mind.

Were I to assign a text for the healthymemory blog, it would be The Distraction Addiction. Although it would not be appropriate for me to assign a text, I certainly do recommend your reading The Distraction Addiction. Given its relevance, I shall be basing many healthymemory blog posts on this book, but I can never do justice to the original.

In the meantime, you can visit


Are Facebook Users More Satisfied with Life?

September 15, 2013

This question has been answered in a study published in the Public Library of Science by Ethan Kross of the University of Michigan and Philippe Verduyn of Leuven University in Belgium. They recruited 82 Facebook users in their late teens or early twenties. Their Facebook activity was monitored for two weeks, and each participant had to report five times a day on their state of mind and their direct social contacts (phone calls and meetings with other people).

The results showed that the more a participant used Facebook in the period between the two questionnaires, the worse she reported feeling the next time she filled in a questionnaire. The participants rated their satisfaction with life at the beginning and again at the end of the study. Participants who used Facebook frequently were more likely to report a decline in satisfaction than those who visited the site infrequently. However, there was a positive association between the amount of direct social contact a volunteer had and how positive she felt. So socialization in the real, as opposed to the virtual or cyber world, did increase positive feelings.

So why was socialization in the cyber world making people feel worse? This question was addressed in another study at Humboldt University and the Technical University of Darmstadt, both of which are located in Germany. They surveyed 584 Facebook users in their twenties. They found that the most common emotion aroused on Facebook is envy. Comparing themselves with peers who had doctored their photographs and amplified, if not lied about, their achievements aroused envy in the readers.

The question remains whether the same results would be found in older Facebook users. In other words, does age make us wiser?1

1 “Get a Life!” The Economist, April 17, 2013, p. 68.

More on the Excessive Costs of Higher Education

August 25, 2013

What has happened to the costs of a higher education is unconscionable, as are the ridiculous amounts of debt young people are saddled with as a result of pursuing a college education. Moreover, these unfortunate individuals are not the only ones to suffer. The country and its economy benefit from college educations. In spite of the fact that the U.S. government was burdened with massive amounts of debt from World War II, it passed the G.I. Bill, which allowed millions of veterans to pursue a higher education. Undoubtedly the booming economy that followed was largely the result of the G.I. Bill.

The unfortunate irony is that these costs rose at a ridiculous rate when they should have been decreasing. Technology is the reason that they should have decreased. Classes can be delivered online. Texts can be distributed as PDF documents at low or no cost. Similarly, library materials could be accessed online. It is a bit ironic that professional societies, whose purpose is the dissemination of information, charge fees for accessing their articles. This might change as a result of the government requiring research funded by the government to be freely accessible.

Change is already occurring with massive open online courses (MOOCs). edX is a non-profit MOOC provider founded by Harvard University and the Massachusetts Institute of Technology. It is now a consortium of 28 institutions. Coursera is a MOOC provider that has formed partnerships with 83 universities.

This is an outstanding development for autodidacts, as it has opened up an enormous resource of educational opportunities. The problem is how knowledge from completed courses is documented, and how one can get a college degree. Coursera has started charging to provide certificates for those who complete its courses.

So the technology exists; the problem is the business model. In other words, how does anyone make a buck from this? I think it is important to realize that education is a public good, that all benefit from its ready availability, so costs should be kept to a minimum.

I think this can be accomplished by universities and testing organizations such as the Educational Testing Service (ETS) developing assessment tests. ETS has already done this for undergraduate content areas such as psychology, history, biology, and so forth. More specific tests could be developed for specific content areas such as educational psychology, neuropsychology, applied statistics, organic chemistry, and physical chemistry. Moreover, there could be different levels of expertise associated with different tests.

Frankly, this would be more informative to me than conventional degrees. In my experience, I do not know what I’m getting when a new graduate shows up with a degree in x. One might think that, regardless of the major, a student with a bachelor’s degree should be able to write. But I’ve known people with master’s degrees who have terrible compositional skills.

So it will be interesting to see what develops. But I hope the development occurs quickly and that there is a general realization that higher education is good for both the individual and the country, and that costs should be minimal.


Dealing with Technology and Information Overload

July 28, 2013

Whenever I read or hear something about our being victims of technology, I become extremely upset. I’ve written blog posts on this topic (See “Commentary on Newsweek’s Cover Story iCrazy” and “Net Smart.”) We are not passive entities. We need to be in charge of our technology. There was a very good article on this topic in the August 2013 issue of Mindful magazine. It is titled “A User’s Guide to Screenworld,” and is written by Richard Fernandez of Google, who sits down with Arturo Bejar, director of engineering at Facebook, and Irene Au, the vice president of product and design at Udacity. Here are five strategies for dealing with different components of this issue.

Information Overload. There is way too much information to deal with and we must shield ourselves from being overwhelmed. We must realize that our time is both limited and costly. So we need to be selective and choose our sources wisely. When we feel our minds tiring we should rest or move on.

Constant Distraction. Multi-tasking costs. There is a cost in performing more than one task at a time. So try to complete one task or a meaningful segment of a task before moving on to another task. Let phone calls go to voice mail. Respond to email at designated times rather than jumping to each email as it arrives.

Friends, Partners, Stuck on Their Devices. Personally I cannot stand call waiting. I don’t have it on my phone, and if someone goes to their call waiting while talking with me, they will likely find that I am not on the phone should they return. Technology is no excuse for being discourteous. Moreover, technology provides us a means for being courteous, voice mail. So unless there is an emergency lurking, there is no reason for taking the call. Clearly, when there are job demands or something really important, there are exceptions, but every effort should be extended to be courteous. When there are other people present, give them your attention, not your devices. And call it to their attention when you feel you are being ignored.

Social Media Anxiety. Try to keep your involvement with social media to a minimum. The friending business on Facebook can be quite annoying. Moreover, for the most part these friends are superficial. Remember Dunbar’s Number (See the healthymemory blog posts “How Many Friends are Too Many?,” “Why is Facebook So Popular?,” and “Why Are Our Brains So Large?”). Dunbar’s number, the maximum number of people we can keep track of at one time, is 150, but the number of people that we speak with frequently is closer to 5. I would be willing to up the number of close friends a bit, but it is still small. And Dunbar says that there are about 100 people we speak to about once a year.

Children Spending Too Much Time Staring at Screens. The advice here is to express an interest in your children’s digital life. Try to share it with them and try to develop an understanding of how to deal with technology and information overload.

Let me end with a quote by Irene Au from the article, which is definitely worth quoting: “We need to get up from our desks and move. There is a strong correlation between cognition and movement. We’re more creative when we move.”


The Terrorist Mind

May 11, 2013

The recent terrorist act at the Boston Marathon has been difficult for many Americans to understand. To understand it, you need to try to understand the terrorist mind. We read that they were upset about the wars in Iraq and Afghanistan, and the Drone killings. This is but a part of a larger narrative that the United States is at war with Islam. This larger narrative ignores disturbing facts such as the efforts the United States took to protect Muslims in the former Yugoslavia. It even includes a belief that 9/11 was self-inflicted, even though Al Qaeda took credit for the terrorist acts. Unfortunately, our minds are good at ignoring negative evidence and for compartmentalizing information.

Even if you grant militant Islamists their beliefs, one can still ask, do they merit the indiscriminate killing and maiming of innocents? What does the Koran say about that? The argument would be that they are at war and that war justifies the killing and maiming.

But then, one can ask, how do you think you will win? If terrorist attacks increase, the response against them would also increase. The consequences would be dreadful, but it is difficult to see how radical Islam would prevail in the West. Osama Bin Laden thought that because they were able to drive the Soviets out of Afghanistan, they would prevail against the West. He forgot that the victory was largely due to American aid and technology. The Soviets concluded that Afghanistan was not worth the loss of human life, and that it was not worth exercising the nuclear option.

The response of the West in dealing with the irrationality of Terrorism is the use of kinetic events. There are large scale kinetic events, like the wars in Iraq and Afghanistan, and small kinetic events such as drone strikes. The question is, do they work? Are they decreasing the number of terrorists, or increasing the number of terrorists? If it is the latter, then we are adding fuel to the flames rather than extinguishing the fire.

So what is the alternative to kinetic events? It goes by a number of terms, information warfare, propaganda, psyops (psychological operations). Unfortunately, these terms have negative connotations. Nevertheless, I would argue that they provide the only alternative. The problem is that they are not very sophisticated, and that we do not know how to target them at either the militant Islamic or potentially militant Islamic mind. Much research needs to be done.

Unfortunately, there was a natural laboratory for conducting this research that was overlooked, and that is the infamous facility at Guantanamo. The inmates could have been used as subjects to try to understand how their minds worked, and what potential arguments or information could possibly change their minds. They could have released inmates if they thought their interventions had been successful and then tracked them after they left. It is likely that some, perhaps many, would just have told the researchers what they wanted to hear, so that they would be released. Others might have changed their minds in the facility, but then reverted to their old ways of thought upon returning to their environments. There was this risk, but I think an argument could be made that it would be worth it. There might have been successes.

It needs to be remembered that the terrorist threat goes well beyond radical Islamists. Remember Timothy McVeigh. Unfortunately, there are many more Timothy McVeighs in the world. Their narratives and belief systems also need to be studied and countered.

In any case, this is an area of research that needs to be vigorously pursued. I believe that the Saudis have done some research in this area that has met with some success. Memetic Theory along with the memetic analytic framework holds promise. Terrorist minds are full of dangerous, erroneous memes that must be destroyed and corrected. New conflicts, both international and domestic, must increasingly be met by changing people’s minds. Historically, humans have resolved conflicts by kinetic events. Human history is largely a history of human wars. But if kinetic events work to exacerbate rather than to resolve conflicts, then I see no other path to pursue.


Public to Get Access to U.S. Research

May 1, 2013

This was the title of an article in the Washington Post.1 This news is long overdue. Most scientific professional publications are available through publishers and professional organizations. Usually there are discounts for members of professional organizations, but even members usually pay. I do have access to those published by societies to which I belong. Often, there is an article that I would like to read in a publication to which I don’t have access. Sometimes the fees to access these articles are $30 or higher. It is understandable that publishers, who are in the stated business of making a profit, have such charges. But the charters of most professional societies typically state that one of their objectives is to spread technical knowledge. I hope the irony is obvious here.

Bear in mind that the vast amount of this research is funded by the federal government. So we taxpayers are paying for this research. Then why don’t we have ready access to it? According to the article, agency leaders have been directed to develop rules for releasing federally backed research within a year of publication. Some argue that there should not even be a year’s delay in releasing the information. I agree with these people, but my priority is on the implementation of some policy, and I am against any lengthy debate that would delay implementation.

Aaron Swartz was a genius. He was a brilliant programmer with a list of accomplishments, one of which was the development of Reddit, one of the world’s most widely used social-networking news sites. Two years ago, he was indicted on multiple felony counts for downloading several million articles from the academic database JSTOR. Although it is not known what his motivation was precisely, one idea is that he intended to upload them onto the Web so that they could be accessed by anyone. Aaron Swartz was a brilliant and sensitive individual. He was indicted by the federal government and subsequently committed suicide. The March 11, 2013 New Yorker (beginning on page 48) does an admirable job of characterizing this fascinating and interesting individual.

This is more than an issue of fairness. The ready access to this information will benefit both science and the economy. An example cited in the Post article was about a teenage scientist, Jack Andraka, who relied on open-access articles to develop a five-minute, $3 test for pancreatic cancer. Fortunately, he was successful, but the charged-for articles were an obstacle to his progress.

It should be mentioned that progress has been made in this area. Since 2003 there has been a Public Library of Science (PLoS). The healthymemory blog has cited publications from this source and finds it most useful. But this progress has been too slow. This is just another example of how extreme economics has plagued us (See the healthymemory blog post “Extreme Economics.”)

Similar problems exist regarding the costs of books and higher education, but I’ll stop here before I begin that rant. Enter “higher education” into the search block to read previous rants.

1Vastag, B., & Brown, D. (2013, February 23). The Washington Post, A5.

© Douglas Griffith and healthymemory, 2013. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory with appropriate and specific direction to the original content.

Microsoft and Its Annoying, Costly Upgrades

April 10, 2013

I’ve about had it with Microsoft and the so-called upgrades of its operating systems and applications. Not once have I been able to perceive any benefit, but I have lost precious time and suffered much aggravation. Out comes an upgrade, and suddenly I am unable to perform functions that I have long performed. Moreover, it is not easy to find the new procedures for performing these functions.

The impacts of these upgrades on business, government, and organizations are pernicious. Time is money, and the inability to perform long-used functions, together with the need to learn new ways of performing them, is costly as well as extremely aggravating. In psychology we would call this an A-B, A-C negative transfer paradigm: previously learned responses (B) to familiar stimuli (A) interfere with learning new responses (C) to those same stimuli. Yet businesses, governments, organizations, and individuals continue to suffer in silence. It’s outrageous.

A new computer should not come with a pre-installed operating system. The purchaser should be offered a choice of operating systems, and older versions of software should still be able to run on new operating systems. The same requirements are needed for applications. As for applications, I’ve found Mozilla’s offerings to be superior to those of Microsoft. Moreover, they are free, although it is in our own interest to offer support. I believe that Firefox is regarded as a superior browser. Mozilla also has an email program, Thunderbird. If you have not yet done so, I encourage you to visit Mozilla’s website. Businesses, governments, and other organizations should also avail themselves of these options.

Even if upgrades are needed from a systems perspective, the interface that confronts the user should remain as close to identical as possible. The Dvorak keyboard is known to be superior to the standard QWERTY keyboard, yet a wise decision was made not to convert wholesale to the Dvorak keyboard. A similar attitude needs to be maintained with respect to the interfaces of operating systems and applications.

Software companies should be required to support all versions of their products or be subject to fines and lawsuits. As you have already ascertained, I regard most upgrades as ripoffs, impure and complex.

Should we march on Washington, D.C., or the Microsoft Campus in the state of Washington? I am not suggesting that we carry torches and pitchforks as if we were attacking Dr. Frankenstein‘s castle, tempting as that might be. But orderly demonstrations would be in order. To quote from the movie, Network, “We’re as mad as hell, and we’re not going to take this anymore!”

An Update on the Unnecessary Costs of Higher Education

January 16, 2013

Here is an update on these unnecessary costs from the Washington Post.1 Previous healthymemory blog posts (enter “Costs of a Higher Education” into the search block) have complained about the relentless increases in the costs of a college education at a time when technology should be bringing those costs down. It is especially ironic when prestigious universities are making some of their courses available online for free, the so-called Massive Open Online Courses (MOOCs). Although this content is available for free, course credit is not offered, nor is there a prospect of a diploma upon completion of these courses. Now some universities are offering, for a fee, certificates for completing these courses. According to the article, “For a fee of less than $100, a student who takes a class in genetics and evolution from Duke University on a MOOC platform called Coursera—and agrees to submit to identity-verification screening—could earn a “verified certificate” for passing the course.” “For $95, a student in an online circuits and electronics class affiliated with the Massachusetts Institute of Technology through the MOOC platform edX will be able to take a proctored exam this month at one of thousands of test sites around the world and earn a certificate.” What is not clear is whether at some point in the future these certificates could lead to college credit and a degree. Technology provides manifold opportunities for the autodidact, but degrees provide the desired end states of these formal curricula.

I’ve mentioned in previous posts on this topic that I have met some people who have college degrees, but on the basis of their work, writing, and conversation, it is difficult to believe that they have these degrees. I have also met people with excellent writing, work, and conversational skills who do not have college degrees. I think we need an organization or organizations that provide tests and evaluations to determine levels of competence in different subject areas. Presumably, nominal fees would be involved, but this would allow the true autodidact to benefit fully from her self-educational efforts.

1Anderson, N. (2013). Online classes will grant credentials, for a fee. The Washington Post.

© Douglas Griffith and healthymemory, 2012. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory with appropriate and specific direction to the original content.