Posts Tagged ‘Facebook’

How Can We Keep Technology from Rotting Our Brains?

January 27, 2020

First of all, it is important to understand that it is not technology that is rotting our brains; it is the way we are using technology. Used properly, technology provides an ideal means of enhancing our brains and building healthy memories.

The first action should be to get off social media, in general, and Facebook, in particular. The dangers of Facebook are well documented in this blog. Entering “Facebook” into healthymemory.wordpress.com will yield pages of posts about Facebook. The dangers of social media are also well documented in this blog. Besides, Facebook should be paying to use your data. So in addition to the other evils one might also add theft.

We all got along before Facebook and we will find that our lives are better after Facebook. HM certainly did.

One can develop one’s own interest groups on various topics. Go to the healthy memory blog post “Mindshift Resources.” Unfortunately, fees are usually involved in actually getting a degree. Go to nopaymba.com to learn how to get an MBA-level business education at a fraction of the cost. Laura Pickard explains how to get an MBA for less than 1/100th the cost of a traditional MBA.

Go to Wikipedia and search for topics of interest or to just browse. When you find topics worth pursuing, pursue them. This will involve System 2 processing at least.

You can learn juggling on YouTube. Juggling is one of many activities that is good for developing a healthy memory.

As for GPS, it is recommended to try navigating without it. Go to a new, safe area, traverse it, and build a mental topographic map. Two activities that benefit a healthy memory can be engaged here: walking and mentally building a topographic map.

Visiting museums is another means of developing mental spatial maps. Museums provide another opportunity for engaging in two activities that build healthy memories: building mental spatial maps and learning the content presented in the museum.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Advertising

December 8, 2019

The title of this post is identical to the title of one of the fixes needed for the internet proposed in Richard Stengel’s informative work, Information Wars. This is the last fix he provides.

He writes, “Advertisers use ad mediation software provided by the platforms to find the most relevant audiences for their ads. These ad platforms take into account a user’s region, device, likes, searches, and purchasing history. Something called dynamic creative optimization, a tool that uses artificial intelligence, allows advertisers to optimize their content for the user and find the most receptive audience. Targeted ads are dispatched automatically across thousands of websites and social media feeds. Engagement statistics are logged instantaneously to tell the advertiser and the platform what is working and what is not. The system tailors the ads for the audiences likely to be most receptive.”

Of course, the bad guys use all these tools to find the audiences they want as well. The Russians became experts at using two parts of Facebook’s advertising infrastructure: the ads auction and something called Custom Audiences. In the ads auction, potential advertisers submit a bid for a piece of advertising real estate. Facebook not only awards the space to the highest bidder, but also evaluates how clickbaitish the copy is. The more eyeballs the ad will get, the more likely it is to get the ad space, even if the bidder is offering a lower price. Since the Russians did not care about the accuracy of the content they were creating, they were willing to create sensational false stories that went viral. Hence, more ad space.
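The auction mechanics described above can be illustrated with a toy model (the names and numbers below are hypothetical, not Facebook’s actual formula): each bid is scored as money offered times predicted engagement, so sensational copy can beat a higher bid.

```python
# Toy model of an engagement-weighted ad auction (hypothetical names
# and numbers; not Facebook's actual formula). Each bid is scored as
# price times predicted engagement, so clickbait can win cheaply.

def effective_bid(bid_dollars, predicted_click_rate):
    """Score a bid by price times expected engagement."""
    return bid_dollars * predicted_click_rate

def run_auction(bids):
    """bids: list of (advertiser, bid_dollars, predicted_click_rate).
    Return the advertiser with the highest effective bid."""
    return max(bids, key=lambda b: effective_bid(b[1], b[2]))[0]

bids = [
    ("honest_ad",      2.00, 0.01),  # higher price, dull copy
    ("sensational_ad", 0.50, 0.08),  # lower price, clickbait copy
]
print(run_auction(bids))  # the sensational ad wins despite bidding less
```

In this sketch the honest ad scores 0.02 and the sensational one 0.04, which is why a false, viral story can buy more ad space per dollar.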

The Russians’ efforts in the 2016 election have been reviewed in previous healthy memory blog posts. The Trump organization itself used the same techniques and spent far more on these Facebook ads than the Russians did.

Stengel concludes that these techniques would reduce, but not eliminate, the amount of disinformation in our culture. He writes that disinformation will always be with us, because the problem is not facts, or the lack of them, or misleading stories filled with conjecture; the problem is us (Homo sapiens). There are all kinds of cognitive biases and psychological states, but the truth is that people are going to believe what they want to believe. It would be wonderful if the human brain came with a lie detector, but it doesn’t.

HM urges the reader not to take this conclusion offered by Stengel too seriously. It is true that human information processing is biased, because it needs to be. Our attention is quite limited. But rather than throwing in the towel, we need to deal with our biases as best we can. The suggestions offered by Stengel are useful to this end.


Section 230

December 4, 2019

The title of this post is identical to the title of one of the fixes needed for the internet proposed in Richard Stengel’s informative work, Information Wars. New legislation is needed to create an information environment that is more transparent, more consumer-focused, and makes the creators and purveyors of disinformation more accountable. Stengel calls the subject of this section legislation’s original sin: the Communications Decency Act (CDA) of 1996. The CDA was one of the first attempts by Congress to regulate the internet. Section 230 of this act says that online platforms and their users are not considered publishers and have immunity from being sued for the content they post. It reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Congress’s motivation back in 1996 was not so much to shield these new platforms as to protect free speech. Congress didn’t want the government to police these platforms and thereby potentially restrict freedom of speech—it wanted the platforms to police themselves. Congress worried that if the platforms were considered publishers, they would be too draconian in policing their content and put a chill on the creation of content by third parties. The courts had suggested that if a platform exercised editorial control by removing offensive language, that made it a publisher and therefore liable for the content on its site. The idea of Section 230 was to give these companies a “safe harbor” to screen harmful content. The reasoning was that if they received a general immunity, they would be freer to remove antisocial content that violated their terms of service without violating constitutional free speech provisions.

Focus on the year this act was passed. This was the era of America Online, CompuServe, Netscape, Yahoo, and Prodigy. That was a different world, and there was no way to anticipate the problems brought by Facebook. Stengel notes that Facebook is not like the old AT&T. Facebook makes money off the content it hosts and distributes; it just calls it “sharing.” Facebook makes the same amount of ad revenue from shared content that is false as from shared content that is true. Note that this problem is not unique to Facebook, but perhaps Facebook is the most prominent example.

Stengel continues, “If Section 230 was meant to encourage platforms to limit content that is false or misleading, it’s failed. No traditional publisher could survive if it put out the false and untrue content that these platforms do. It would be constantly sued. The law must incentivize the platform companies to be proactive and accountable to fighting disinformation. Demonstrable false information needs to be removed from the platforms. And that’s just the beginning.”

Stengel concludes this section as follows: “But let’s be realistic. The companies will fight tooth and nail to keep their immunity. So, revising Section 230 must encourage them to make good-faith efforts to police their content, without making them responsible for every phrase or sentence on their services. It’s unrealistic to expect these platforms to vet every tweet or post. One way to do this is to revise the language of the CDA to say that no platform that makes a good-faith effort to fulfill its responsibility to delete harmful content and provide information to users about that content can be liable for the damage that it does. It’s a start.”

Missing Healthymemory Themes

April 26, 2019

HM was disappointed that Dr. Twenge did not at least touch upon healthy memory themes in “iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood.” One of these themes was alluded to in the posts about spirituality and religion. There seems to have been a loss of empathy among iGen-ers. Given exorbitant college costs along with other economic demands, the iGen-ers are living in a dog-eat-dog world. Spiritual activities, including meditation, can increase sensitivity to and caring for our fellow human beings.

There was no evidence of passion, grit, or growth mindsets. People go to college to get a job. Education is an instrumental act, not a goal in itself. Of course, they are not unusual in this respect; this certainly is nothing new. When HM taught in college, that certainly was the most common attitude. But students who actually had an intellectual interest in a subject were dearly appreciated. This blog has advocated growth mindsets and lifelong learning as primary goals, not only for a fulfilling life, but also as a means of decreasing the likelihood of Alzheimer’s or dementia. Even if people develop the defining neurofibrillary tangles and amyloid plaques, they might well die with these defining features without ever evidencing the behavioral or cognitive symptoms of Alzheimer’s.

The key here is the System 2 processes engaged during learning or critical thinking. Unfortunately, too many people manage to minimize use of System 2 processes even during college. The hope is that at least they engage in activities such as Bridge or Chess, read some books, and stay off Facebook and similar online activities.


Internet: Online Time—Oh, and Other Media, Too

April 14, 2019

The title of this post is the same as the second chapter in iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, by Jean M. Twenge, Ph.D.

iGen-ers sleep with their phones. They put them under their pillows, on the mattress, or at least within arm’s reach of the bed. They check social media websites and watch videos right before they go to bed, and reach for their phones again as soon as they wake up in the morning. So their phone is the last thing they see before they go to sleep, and the first thing they see when they wake up. If they wake up in the middle of the night, they usually look at their phones.

Dr. Twenge notes, “Smartphones are unlike any other previous form of media, infiltrating nearly every minute of our lives, even when we are unconscious with sleep. While we are awake, the phone entertains, communicates, and glamorizes.” She writes, “It seems that teens (and the rest of us) spend a lot of time on phones—not talking but texting, on social media, online, and gaming (together, these are labeled ‘new media’). Sometime around 2011, we arrived at the day when we looked up, maybe from our own phones, and realized that everyone around us had a phone in his or her hands.”

Dr. Twenge reports, “iGen high school seniors spent an average of 2.25 hours a day texting on their cell phones, about 2 hours a day on the Internet, 1.5 hours a day on electronic gaming, and about a half hour on video chat. This sums to a total of 5 hours a day with new media. This varies little based on family background; disadvantaged teens spent just as much or more time online as those with more resources. The smartphone era has meant the effective end of the Internet access gap.”

Here’s a breakdown of how 12th graders are spending their screen time from Monitoring the Future, 2013-2015:
Texting 28%
Internet 24%
Gaming 18%
TV 24%
Video Chat 5%

Dr. Twenge reports that in seven years (2008 to 2015) social media sites went from being a daily activity for half of teens to almost all of them. In 2015, 87% of 12th-grade girls used social media sites almost every day, compared to 77% of boys.

HM was happy to see that eventually many iGen’ers see through the veneer of chasing likes—but usually only once they are past their teen years.

She writes that “social media sites go into and out of fashion, and by the time you read this book several new ones will probably be on the scene. Among 14-year-olds, Instagram and Snapchat are much more popular than Facebook.” She notes that recently group video chat apps such as Houseparty were catching on with iGen, allowing them to do what they call “live chilling.”

Unfortunately, it appears that books are dead. In the late 1970s, a clear majority of teens read a book or a magazine nearly every day, but by 2015, only 16% did. E-book readers briefly seemed to rescue books: the number who said they read two or more books for pleasure bounced back in the late 2000s, but sank again as iGen (and smartphones) entered the scene in the 2010s. By 2015, one out of three high school seniors admitted they had not read any books for pleasure in the past year, three times as many as in 1976.

iGEN teens are much less likely to read books than their Millennial, GenX, and Boomer predecessors. Dr. Twenge speculates that a reason for this is because books aren’t fast enough. For a generation raised to click on the next link or scroll to the next page within seconds, books just don’t hold their attention. There are also declines for iGen-ers with respect to magazines and newspapers.

SAT scores have declined since the mid-2000s, especially in writing (a 13-point decline since 2006) and critical reading (a 13-point decline since 2005).

Dr. Twenge raises the fear that iGen and the next generations will never learn the patience necessary to delve deeply into a topic, and that the US economy will fall behind as a result.

Get A Life!

April 9, 2019

This is the final post of a series of posts based on an important book by Roger McNamee titled “Zucked: Waking up to the Facebook Catastrophe.” Perhaps the best way of thinking about Facebook and related problems is via Nobel laureate Daniel Kahneman’s two-system view of cognition. System 1 is fast and emotional. Beliefs are usually the result of System 1 processing. System 2 is slow, and what we commonly regard as thinking.

The typical Facebook user is using System 1 processing almost exclusively. He is handing his life over to Facebook. The solution is to Get a Life and take your life back from Facebook.

The easiest way to do this is to get off Facebook cold turkey. However, many users have personal reasons for using Facebook. They should take back their lives by minimizing their use of it.

First of all, ignore individual users unless you know who they are. Ignore likes and individual opinions unless you know and can evaluate the individual. Remember what they say about opinions: “they’re like a—h—-s, everybody has one.” The only opinions you should care about are from responsible polls done by well-known pollsters.

You should be able to find useful sources on your own without Facebook. Similarly you can find journalists and authors on your own without Facebook. Spend time and think about what you read. Is the article emotional? Is the author knowledgeable?

If you take a suggestion from Facebook, regard that source skeptically.

Try to communicate primarily via email and avoid Facebook as much as possible.

When possible, in-person meetings are to be preferred.

In closing, it needs to be said that Facebook use leads to unhealthy memories. And perhaps, just as in the case of Trump voters, HM predicts an increased incidence of Alzheimer’s and dementia among heavy Facebook users.


What’s Being Done

April 8, 2019

This is the twelfth post based on an important book by Roger McNamee titled “Zucked: Waking up to the Facebook Catastrophe.” The remainder of the book, and that remainder is large, discusses what is being done to remedy these problems. So people are concerned. One approach is to break up monopolies, but that approach ignores the basic problem. Facebook is taking certain actions, one of which, encryption, is definitely bad. Encryption would simply allow Facebook to hide its crimes.

One idea, which is not likely but has received undeserved attention, is to monetize users’ data so that Facebook would have to pay for its use. Unfortunately, this has likely provided users with hopes of future riches from their Facebook use. Although this is indeed how Facebook makes its money, it is unlikely to want to share it with users. Advertisements are pervasive in the world. Although we can try to ignore them in print media, advertisements need to be sat through on television unless one wants to record everything and fast-forward through the ads later.

Moreover, there are users, and HM is one of them, who want ads presented on the basis of online behavior. Shopping online is much more efficient than conventional shopping, and ads based on interests users have shown online provide more useful information. Amazon’s suggestions are frequently very helpful.

The central problem with Facebook is the artificial intelligence and algorithms that bring users of like mind together and foster hate and negative emotions. This increases polarization and the hatred that accompanies it.

Does Facebook need to be transparent and ask whether users want to be sent off to the destinations the algorithms and AI have chosen? Even when explanations are provided, polarization might still be enhanced, as birds of a feather do tend to flock together on their own, but perhaps with less hate and extremism. There are serious legal and freedom-of-speech problems that need to be addressed.

Tomorrow’s post provides a definitive answer to this problem.

Damaging Effects on Public Discourse

April 7, 2019

This is the eleventh post based on an important book by Roger McNamee titled: “Zucked: Waking up to the Facebook Catastrophe.” In the MIT Technology Review professor Zeynep Tufekci explained why the impact of internet platforms is so damaging and hard to fix. “The problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone. It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities and we seek approval from our like-minded peers. We bond with our team by yelling at the fans on the other one. In sociology terms, we strengthen our feeling of ‘in-group’ belonging by increasing our distance from and tension with the ‘out-group’—us versus them. Our cognitive universe isn’t an echo chamber, but our social one is. That is why the various projects for fact-checking claims in the news, while valuable, don’t convince people. Belonging is stronger than facts.” To this HM would add “beliefs are stronger than facts.” Belonging leads to believing what the group believes. As has been written in previous healthymemory blog posts, believing is a System One process in Kahneman’s two-process view of cognition. And System One processing is largely emotional. It shuts out System Two thinking and promotes stupidity.

Facebook’s scale presents unique threats for democracy. These threats are both internal and external. Although Zuck’s vision of connecting the world and bringing it together may be laudable in intent, the company’s execution has had much the opposite effect. Facebook needs to learn how to identify emotional contagion and contain it before there is significant harm. If it wants to be viewed as a socially responsible company, it may have to abandon its current policy of openness to all voices, no matter how damaging. Being socially responsible may also require the company to compromise its growth targets. In other words, being socially responsible will adversely affect the bottom line.

Are you in Control?

April 6, 2019

This is the tenth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Facebook wants you to believe that you are in control. But this control is an illusion. Maintaining this illusion is central to every platform’s success, but with Facebook, it is especially disingenuous. Menu choices limit user actions to things that serve Facebook’s interest. Facebook’s design teams exploit what are known as “dark patterns” in order to produce desired outcomes. Wikipedia defines a dark pattern as “a user interface that has been carefully crafted to trick users into doing things.” Facebook tests every pixel to ensure it produces the desired response. For example: Which shade of red best leads people to check their notifications? For how many milliseconds should notification bubbles appear in the bottom left before fading away to most effectively keep users on site? What measures of closeness should be used to recommend new friends for you to “add”?
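This pixel-level testing is ordinary A/B testing run at enormous scale. A minimal sketch (all click rates and variant names below are invented for illustration) of how a platform might pick the notification color that drives the most clicks:

```python
import random

# Minimal A/B test sketch (all click rates invented): serve two
# variants, count clicks, keep whichever drives more engagement.

def simulate_user(variant, click_rates):
    """Return True if a simulated user clicks this variant."""
    return random.random() < click_rates[variant]

def ab_test(click_rates, n_users_per_variant=100_000, seed=42):
    random.seed(seed)  # deterministic for this sketch
    clicks = {variant: 0 for variant in click_rates}
    for variant in clicks:
        for _ in range(n_users_per_variant):
            if simulate_user(variant, click_rates):
                clicks[variant] += 1
    return max(clicks, key=clicks.get)

# Hypothetical: shade B of red nudges slightly more clicks.
rates = {"red_a": 0.10, "red_b": 0.12}
print(ab_test(rates))  # at this sample size, "red_b" wins
```

With two billion users, even a two-percentage-point difference like this is detectable almost immediately, which is why such tests are cheap for the platform to run on everything.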

With two billion users, the cost of testing every possible configuration is small. And Facebook has taken care to make its terms of service and privacy settings hard to find and nearly impossible to understand. Facebook does place a button on the landing page to provide access to the terms of service, but few people click on it. The button is positioned so that hardly anyone even sees it. And those who do see it have learned since the early days of the internet that terms of service are long and incomprehensible, so they don’t press it either.

They also use bottomless bowls. News Feeds are endless. In movies and television, scrolling credits signal to the audience that it is time to move on. They provide a “stopping cue.” Platforms with endless news feeds and autoplay remove that signal to ensure that users maximize their time on site for every visit. They also use autoplay on their videos. Consequently, millions of people are sleep deprived from binging on videos, checking Instagram, or browsing on Facebook.
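The “bottomless bowl” design can be made concrete with a toy sketch (the post names are hypothetical): a finite feed exhausts and hands the reader a natural stopping cue, like closing credits, while a bottomless feed simply refills itself.

```python
import itertools

# Toy contrast (hypothetical posts): a finite feed ends, giving the
# reader a stopping cue; a "bottomless" feed refills itself forever.

def finite_feed(posts):
    yield from posts                   # exhausts: the cue to stop

def bottomless_feed(posts):
    yield from itertools.cycle(posts)  # never ends on its own

posts = ["post1", "post2", "post3"]
print(list(finite_feed(posts)))  # ['post1', 'post2', 'post3'] and done
print(list(itertools.islice(bottomless_feed(posts), 5)))
# ['post1', 'post2', 'post3', 'post1', 'post2'] ... and on forever
```

The design choice is the removal of the terminating case: the only way out of the second feed is for the user to decide to stop, which is exactly the decision the platform works to postpone.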

Notifications exploit one of the weaker elements of human psychology. They exploit an old sales technique, called the “foot in the door” strategy, that lures the prospect with an action that appears to be low cost, but sets in motion a process leading to bigger costs. We are not good at forecasting the true cost of engaging with a foot-in-the-door strategy. We behave as though notifications are personal to us, completely missing that they are automatically generated, often by an algorithm tied to an artificial intelligence that has concluded that the notification is just the thing to provoke an action that will serve Facebook’s economic interests.

We humans have a need for approval. Everyone wants to feel approved of by others. We want our posts to be liked. We want people to respond to our texts, emails, tags, and shares. This need for social approval is what made Facebook’s Like button so powerful. By controlling how often a user experiences social approval, as evaluated by others, Facebook can get that user to do things that generate billions of dollars in economic value. This makes sense because the currency of Facebook is attention.

Social reciprocity is a twin of social approval. When we do something for someone else, we expect them to respond in kind. Similarly, when a person does something for us, we feel obligated to reciprocate. So when someone follows us, we feel obligated to follow them. If we receive an invitation to connect from a friend, we may feel guilty if we do not reciprocate the gesture and accept it.

Fear of Missing Out (FOMO) is another emotional trigger. This is why people check their smartphones every free moment, perhaps even when they are driving. FOMO also prevents users from deactivating their accounts. And when users do decide to deactivate, the process is difficult, with frequent attempts to keep the user from doing so.

Facebook, along with other platforms, works very hard to grow its user count but operates with little, if any, regard for users as individuals. The customer service department is reserved for advertisers. Users are the product, at best, so there is no one for them to call.

It Gets Even Worse

April 5, 2019

This is the ninth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” This post picks up where the immediately preceding post, “Amplifying the Worst Social Behavior,” stopped. Users sometimes adopt an idea suggested by Facebook or by others on Facebook as their own. For example, if someone is active in a Facebook Group associated with a conspiracy theory and then stops using the platform for a time, Facebook will do something surprising when they return. It might suggest other conspiracy theory Groups to join because they share members with the first conspiracy Group. Because conspiracy theory Groups are highly engaging, they are likely to encourage reengagement with the platform. If you join the Group, the choice appears to be yours, but the reality is that Facebook planted the seed. This is because conspiracy theories are good for Facebook, not for you.

Research indicates that people who accept one conspiracy theory have a high likelihood of accepting a second one. The same is true of inflammatory disinformation. Roger accepts the fact that Facebook, YouTube, and Twitter have created systems that modify user behavior. Roger writes, “They should have realized that global scale would have an impact on the way people use their products and would raise the stakes for society. They should have anticipated violations of their terms of service and taken steps to prevent them. Once made aware of the interference, they should have cooperated with investigators. I could no longer pretend that Facebook was a victim. I cannot overstate my disappointment. The situation was much worse than I realized.”

Apparently, the people at Facebook live in their own preference bubble. Roger writes, “Convinced of the nobility of their mission, Zuck and his employees reject criticism. They respond to every problem with the same approach that created the problem in the first place: more AI, more code, more short-term fixes. They do not do this because they are bad people. They do this because success has warped their perception of reality. To them, connecting 2.2 billion people is so obviously a good thing, and continued growth so important, that they cannot imagine that the problems that have resulted could be in any way linked to their designs or business decisions. As a result, when confronted with evidence that disinformation and fake news spread over Facebook influenced the Brexit referendum and the election of Putin’s choice in the United States, Facebook took steps that spoke volumes about the company’s world view. They demoted publishers in favor of family, friends, and Groups on the theory that information from those sources would be more trustworthy. The problem is that family, friends, and Groups are the foundational elements of filter and preference bubbles. Whether by design or by accident, they share the very disinformation and fake news that Facebook should suppress.”

Amplifying the Worst Social Behavior

April 4, 2019

This is the eighth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Roger writes, “The competition for attention across the media and technology spectrum rewards the worst social behavior. Extreme views attract more attention, so platforms recommend them. News Feeds with filter bubbles do better at holding attention than News Feeds that don’t have them. If the worst thing that happened with filter bubbles was that they reinforced preexisting beliefs, they would be no worse than many other things in society. Unfortunately, people in a filter bubble become increasingly tribal, isolated, and extreme. They seek out people and ideas that make them comfortable.”

Roger continues, “Social media has enabled personal views that had previously been kept in check by social pressure—white nationalism is an example—to find an outlet.” This leads one to ask whether Trump would have been elected via the Electoral College if it weren’t for social media. Trump’s base consists of Nazis and white supremacists and constitutes more than a third of the citizens. Prior to the election, HM would never have believed that this was the case. Now he believes and is close to being clinically depressed.

Continuing on, “Before the platforms arrived, extreme views were often moderated because it was hard for adherents to find one another. Expressing extreme views in the real world can lead to social stigma, which also keeps them in check. By enabling anonymity and/or private Groups, the platforms removed the stigma, enabling like-minded people, including extremists, to find one another, communicate, and, eventually, to lose the fear of social stigma.”

Once a person identifies with an extreme position on an internet platform, that person will be subject to both filter bubbles and human nature. There are two types of bubbles. Filter bubbles are imposed by others, whereas a preference bubble is a choice, although the user might be unaware of this choice. By definition, a preference bubble takes users to a bad place, and they may not even be conscious of the change. Both filter bubbles and preference bubbles increase time on site, which is a driver of revenue. Roger notes that in a preference bubble, users create an alternative reality, built around values shared with a tribe, which can focus on politics, religion, or something else. “They stop interacting with people with whom they disagree, reinforcing the power of the bubble. They go to war against any threat to their bubble, which for some users means going to war against democracy and legal norms. They disregard expertise in favor of voices from their tribe. They refuse to accept uncomfortable facts, even ones that are incontrovertible. This is how a large minority of Americans abandoned newspapers in favor of talk radio and websites that peddle conspiracy theories. Filter bubbles and preference bubbles undermine democracy by eliminating the last vestiges of common ground among a huge percentage of Americans. The tribe is all that matters, and anything that advances the tribe is legitimate. You see this effect today among people whose embrace of Donald Trump has required them to abandon beliefs they held deeply only a few years earlier. Once again, this is a problem that internet platforms did not invent. Existing issues in society created a business opportunity that platforms exploited. They created a feedback loop that reinforces and amplifies ideas with a speed and at a scale that are unprecedented.”

Clint Watts, in his book "Messing with the Enemy," makes the case that in a preference bubble, facts and expertise can be the core of a hostile system, an enemy that must be defeated. "Whoever gets the most likes is in charge; whoever gets the most shares is an expert. Preference bubbles, once they've destroyed the core, seek to use their preference to create a core more to their liking, specially selecting information, sources, and experts that support their alternative reality rather than the real physical world." Roger writes, "The shared values that form the foundation of our democracy proved to be powerless against the preference bubbles that have evolved over the past decade. Facebook does not create preference bubbles, but it is the ideal incubator for them. The algorithms ensure that users who like one piece of disinformation will be fed more disinformation. Fed enough disinformation, users will eventually wind up first in a filter bubble and then in a preference bubble. If you are a bad actor and you want to manipulate people in a preference bubble, all you have to do is infiltrate the tribe, deploy the appropriate dog whistles, and you are good to go. That is what the Russians did in 2016 and what many are doing now."

The Effects Facebook Has on Users

April 3, 2019

This is the seventh post based on an important book by Roger McNamee titled "Zucked: Waking Up to the Facebook Catastrophe." Roger writes, "It turns out that connecting 2.2 billion people on a single network does not naturally produce happiness at all. It puts pressure on users, first to present a desirable image, then to command attention in the form of Likes or shares from others. In such an environment, the loudest voices dominate." This can be intimidating. Consequently, we follow the human tendency to organize into clusters and tribes. This begins with people who share our beliefs, most often family, friends, and the Facebook Groups to which we belong. Facebook's News Feed encourages every user to surround him- or herself with like-minded people. Notionally, Facebook allows us to extend our friends network to include a highly diverse community, but many users stop following people with whom they disagree. It usually feels good to cut off someone who provokes us, and lots of people do so. Consequently, friends lists become more homogeneous over time. Facebook amplifies this effect with its approach to curating the News Feed. Roger writes, "When content is coming from like-minded family, friends, or Groups, we tend to relax our vigilance, which is one of the reasons why disinformation spreads so effectively on Facebook."

An unfortunate by-product of giving users what they want is filter bubbles. And unfortunately, there is a high correlation between the presence of filter bubbles and polarization. Roger writes, "I am not suggesting that filter bubbles create polarization, but I believe they have a negative impact on public discourse and politics because filter bubbles isolate the people stuck in them. Filter bubbles exist outside Facebook and Google, but gains in attention for Facebook and Google are increasing the influence of their filter bubbles relative to others."

Although practically everyone on Facebook has friends and family, many are also members of Groups. Facebook allows Groups on just about anything, including hobbies, entertainment, teams, communities, churches, and celebrities. Many Groups are devoted to politics, and they cross the full spectrum. Groups enable easy targeting by advertisers, so Facebook loves them. And bad actors like them for the same reason. Cass Sunstein, who was the administrator of the White House Office of Information and Regulatory Affairs during the first Obama administration, conducted research indicating that when like-minded people discuss issues, their views tend to get more extreme over time. Jonathan Morgan of Data for Democracy has found that as few as 1 to 2 percent of a group can steer the conversation if they are well coordinated. Roger writes, "That means a human troll with a small army of digital bots—software robots—can control a large, emotional Group, which is what the Russians did when they persuaded Groups on opposite sides of the same issue—like pro-Muslim groups and anti-Muslim groups—to simultaneously host Facebook events in the same place at the same time hoping for a confrontation."

Roger notes that Facebook asserts that users control their experience by picking the friends and sources that populate their News Feed when in reality an artificial intelligence, algorithms, and menus created by Facebook engineers control every aspect of that experience. Roger continues, "With nearly as many monthly users as there are notional Christians in the world, and nearly as many daily users as there are notional Muslims, Facebook cannot pretend its business model does not have a profound effect. Facebook's notion that a platform with more than two billion users can and should police itself also seems both naive and self-serving, especially given the now plentiful evidence to the contrary. Even if it were "just a platform," Facebook has a responsibility for protecting users from harm. Deflection of responsibility has serious consequences."

Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks

April 2, 2019

This is the sixth post based on an important book by Roger McNamee titled "Zucked: Waking Up to the Facebook Catastrophe." In 2014, Facebook published a study called "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks." The experiment entailed manipulating the balance of positive and negative messages in the News Feeds of nearly seven hundred thousand users to measure the influence of social networks on mood. The internal report claimed the experiment provided evidence that emotions can spread over its platform. Facebook did not get prior informed consent or provide any warning. Facebook made people sad just to see if it could be done. Facebook faced strong criticism for this experiment. Zuck's right-hand woman, Sheryl Sandberg, said: "This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated. And for that communication we apologize. We never meant to upset you."

Note that she did not apologize for running a giant psychological experiment on users. Rather, she claimed that experiments like this are normal “for companies.” So she apologized only for the communication. Apparently running experiments on users without prior consent is a standard practice at Facebook.

Filter Bubbles

April 1, 2019

This is the fifth post based on an important book by Roger McNamee titled "Zucked: Waking Up to the Facebook Catastrophe." Adults get locked into filter bubbles. Wikipedia defines filter bubbles as "a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history."

Filter bubbles are not unique to internet platforms. They can also be found in any journalistic medium that reinforces the preexisting beliefs of its audience while suppressing any stories that might contradict them, such as Fox News. In the context of Facebook, filter bubbles have several elements. In its endless pursuit of engagement, Facebook's AI and algorithms feed users a steady diet of content similar to what has engaged us most in the past. Usually that is content that we "like." Each click, share, and comment helps Facebook refine its AI. With 2.2 billion people clicking, sharing, and commenting every month—1.47 billion every day—Facebook's AI knows more about users than the users can possibly imagine. All that data in one place is a target for bad actors, even if it were well protected. But Roger writes that Facebook's business model is to give the opportunity to exploit that data to just about anyone who is willing to pay for the privilege.
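The feedback loop described above can be illustrated with a toy sketch. This is a hypothetical illustration, not Facebook's actual system: candidate posts are ranked purely by similarity to what the user engaged with before, so each click narrows what the user is shown next.

```python
# Toy illustration of an engagement-driven feed (hypothetical; not
# Facebook's real algorithm). Posts are ranked by topic overlap with
# the user's past engagement, so the feed narrows with every click.

def rank_feed(posts, user_history):
    """Order posts by overlap with topics the user previously engaged with."""
    liked_topics = set()
    for past_post in user_history:
        liked_topics.update(past_post["topics"])

    def predicted_engagement(post):
        # Crude "AI": count how many of the post's topics the user
        # has engaged with before.
        return len(liked_topics & set(post["topics"]))

    return sorted(posts, key=predicted_engagement, reverse=True)

# A user who has engaged with outrage-flavored political content...
history = [{"topics": ["politics", "outrage"]}]
candidates = [
    {"id": 1, "topics": ["gardening"]},
    {"id": 2, "topics": ["politics", "outrage"]},
    {"id": 3, "topics": ["politics"]},
]
# ...is shown more of the same first, and the neutral post last.
ranked = rank_feed(candidates, history)
```

Even in this crude form, the bubble dynamic is visible: whatever the user engaged with yesterday is promoted today, which generates more of the same engagement signals tomorrow.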

One can make the case that these platforms compete in a race to the bottom of the brain stem—where AIs present content that appeals to the low-level emotions of the lizard brain, such things as immediate rewards, outrage, and fear. Roger writes, “Short videos perform better than longer ones. Animated GIFs work better than static photos. Sensational headlines work better than calm descriptions of events. Although the space of true things is fixed, the space of falsehoods can expand freely in any direction. False outcompetes true. Inflammatory posts work better at reaching large audiences within Facebook and other platforms.”

Roger continues, "Getting a user outraged, anxious, or afraid is a powerful way to increase engagement. Anxious and fearful users check the site more frequently. Outraged users share more content to let other people know what they should also be outraged about. Best of all from Facebook's perspective, outraged or fearful users in an emotionally hijacked state become more reactive to further emotionally charged content. It is easy to imagine how inflammatory content would accelerate the heart rate and trigger dopamine hits. Facebook knows so much about each user that they can often tune News Feed to promote emotional responses. They cannot do this all the time for every user, but they do it far more than users realize. And they do it subtly, in very small increments. On a platform like Facebook, where most users check the site every day, small nudges over long periods of time can eventually produce big changes."

The Role of Artificial Intelligence

March 31, 2019

This is the fourth post based on an important book by Roger McNamee titled "Zucked: Waking Up to the Facebook Catastrophe." Companies like Facebook and Google use artificial intelligence (AI) to build behavioral prediction engines that anticipate our thoughts and emotions based on patterns found in the vast amount of data they have accumulated about users. Users' likes, posts, shares, comments, and Groups have taught Facebook's AI how to monopolize our attention. As a result, Facebook can offer advertisers exceptionally high-quality targeting.

This battle for attention requires constant innovation. In the early days of the internet, the industry learned that users adapt to predictable ad layouts, skipping over them without registering any of the content. There is a tradeoff when it comes to online ads: although it is easy to verify that the right person is seeing an ad, it is much harder to make sure that the person is paying attention to it. The solution to the latter problem is to maximize the time users spend on the platform. If users devote only a small percentage of their attention to the ads they see, then platforms try to monopolize as much of the users' attention as possible. So Facebook, like other platforms, adds new content formats and products to stimulate more engagement. Text was enough at the outset. Next came photos, then mobile. Video is the current frontier. Facebook also introduces new products such as Messenger and, soon, dating. To maximize profits, Facebook and other platforms hide the data on the effectiveness of ads.

Platforms prevent traditional auditing practices by providing less-than-industry-standard visibility. Consequently, advertisers say, "I know half my ad spending is wasted; I just don't know which half." Nevertheless, platform ads work well enough that advertisers generally spend more every year. Search ads on Google offer the clearest payback, but brand ads on other platforms are much harder to measure. Still, advertisers need to put their message in front of prospective customers, regardless of where they are. When users gravitate from traditional media to the internet, the ad dollars follow them. Platforms do whatever they can to maximize daily users' time on site.

As is known from psychology and persuasive technology, unpredictable, variable rewards stimulate behavioral addiction. Like buttons, tagging, and notifications trigger social validation loops, so users do not stand a chance. We humans have evolved a common set of responses to certain stimuli that can be exploited by technology. "Fight or flight" is one example. When presented with certain visual stimuli, such as vivid colors (red is a trigger color), or a vibration against the skin near our pocket that signals a possible enticing reward, the body responds in predictable ways: a faster heartbeat and the release of dopamine. These are meant to be momentary responses that increase the odds of survival in a life-or-death situation. Too much of this kind of stimulation is bad for all humans, but the effects are especially dangerous for children and adolescents. The first consequences include lower sleep quality, an increase in stress, anxiety, and depression, an inability to concentrate, irritability, and insomnia. Some develop a fear of being separated from their phone.
Many users develop problems relating to and interacting with people. Children get hooked on games, texting, Instagram, and Snapchat in ways that change the nature of human experience. Cyberbullying becomes easy over social media because, when technology mediates human relationships, the social cues and feedback loops that might normally cause a bully to experience shunning or disgust from their peers are not present.

Adults get locked into filter bubbles. Wikipedia defines filter bubbles as "a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see."

Brexit

March 30, 2019

This is the third post based on an important book by Roger McNamee titled "Zucked: Waking Up to the Facebook Catastrophe." The United Kingdom voted to exit the European Union in June 2016. Many posts have been written regarding how Russia used social media, including Facebook, to push voters toward Trump so that he won the Electoral College (though not the popular vote, which his opponent won by more than 3 million votes).

The Brexit vote came as a total shock. Polling data had suggested that "Remain" would win over "Leave" by about four points. Precisely the opposite happened, and no one could explain the huge swing. A possible explanation occurred to Roger. "What if Leave had benefited from Facebook's architecture? The Remain campaign was expected to win because the UK had a sweet deal with the European Union: it enjoyed all the benefits of membership, while retaining its own currency. London was Europe's undisputed financial hub, and UK citizens could trade and travel freely across the open borders of the continent. Remain's "stay the course" message was based on smart economics but lacked emotion. Leave based its campaign on two intensely emotional appeals. It appealed to ethnic nationalism by blaming immigrants for the country's problems, both real and imaginary. It also promised that Brexit would generate huge savings that would be used to improve the National Health Service, an idea that allowed voters to put an altruistic shine on an otherwise xenophobic proposal." So here is an example of Facebook exploiting the System 1 processes that were explained in the immediately preceding post.

Roger writes, "The stunning outcome of Brexit triggered a hypothesis: in an election context, Facebook may confer advantages to campaign messages based on fear or anger over those based on neutral or positive emotions. It does this because Facebook's advertising business model depends on engagement, which can best be triggered through appeals to our most basic emotions. What I did not know at the time is that while joy also works (which is why puppy and cat videos and photos of babies are so popular), not everyone reacts the same way to happy content. Some people get jealous, for example. 'Lizard brain' emotions such as fear and anger produce a more uniform reaction and are more viral in a mass audience. When users are riled up, they consume and share more content. Dispassionate users have relatively little value to Facebook, which does everything in its power to activate the lizard brain. Facebook has used surveillance to build giant profiles on every user."

The objective is to give users what they want, but the algorithms are trained to nudge user attention in directions that Facebook wants. These algorithms choose posts calculated to press emotional buttons because scaring users or pissing them off increases time on site. Facebook calls it engagement when users pay attention, but the goal is behavior modification that makes advertising more valuable. At the time the book was written, Facebook was the fourth most valuable company in America, despite being only fifteen years old, and its value stems from its mastery of surveillance and behavioral modification.
So who was using Facebook to manipulate the vote? The answer is Russia, just as it used Facebook in the effort to elect Trump president. Russia used Ukraine as a proving ground for its disruptive technology on Facebook. Russia wanted to break up the EU, of which Great Britain was a prominent part. The French Minister of Foreign Affairs has found that Russia is responsible for 80% of disinformation activity in Europe. One of Russia's central goals is to break up alliances.

Zucked

March 28, 2019

The title of this post is the first part of a title of an important book by Roger McNamee. The remainder of the title is “Waking Up to the Facebook Catastrophe.” Roger McNamee is a longtime tech investor and tech evangelist. He was an early advisor to Facebook founder Mark Zuckerberg. To his friends Zuckerberg is known as “Zuck.” McNamee was an early investor in Facebook and he still owns shares.

The prologue begins with a statement made by Roger to Dan Rose, the head of media partnerships at Facebook on November 9, 2016, “The Russians used Facebook to tip the election!” One day early in 2016 he started to see things happening on Facebook that did not look right. He started pulling on that thread and uncovered a catastrophe. In the beginning, he assumed that Facebook was a victim and he just wanted to warn friends. What he learned in the months that followed shocked and disappointed him. He learned that his faith in Facebook had been misplaced.

This book is about how Roger became convinced that even though Facebook provided a compelling experience for most of its users, it was terrible for America and needed to change or be changed, and what Roger tried to do about it. This book will cover what Roger knows about the technology that enables internet platforms like Facebook to manipulate attention. He explains how bad actors exploit the design of Facebook and other platforms to harm and even kill innocent people. He explains how democracy has been undermined because of the design choices and business decisions by controllers of internet platforms that deny responsibility for the consequences of their actions. He explains how the culture of these companies causes employees to be indifferent to the negative side effects of their success. At the time the book was written, there was nothing to prevent more of the same.

Roger writes that this is a story about trust. Facebook and Google as well as other technology platforms are the beneficiaries of trust and goodwill accumulated over fifty years of earlier generations of technology companies. But they have taken advantage of this trust, using sophisticated techniques to prey on the weakest aspects of human psychology, to gather and exploit private data, and to craft business models that do not protect users from harm. Now users must learn to be skeptical about the products they love, to change their online behavior, insist that platforms accept responsibility for the impact of their choices, and push policy makers to regulate the platforms to protect the public interest.

Roger writes, "It is possible that the worst damage from Facebook and the other internet platforms is behind us, but that is not where the smart money will place its bet. The most likely case is that the technology and business model of Facebook and others will continue to undermine democracy, public health, privacy, and innovation until a countervailing power, in the form of government intervention or user protest, forces change."

Free Exchange | Replacebook

March 27, 2019

The title of this post is identical to the title of a piece in the Finance & Economics section of the 16 February 2019 issue of “The Economist.” The article notes, “There has never been such an agglomeration of humanity as Facebook. Some 2.3bn people, 30% of the world’s population engage with the network each month.” It describes an experiment in which researchers kicked a sample of people off Facebook and observed the results.

In January, Hunt Allcott, of New York University, and Luca Braghieri, Sarah Eichmeyer, and Matthew Gentzkow, of Stanford University, published results of the largest such experiment yet. They recruited several thousand Facebookers and sorted them into control and treatment groups. Members of the treatment group were asked to deactivate their Facebook profiles for four weeks in late 2018. The researchers checked up on their volunteers to make sure they stayed off the social network, and then studied the results.

On average, those booted off enjoyed an additional hour of free time. They tended not to redistribute their liberated minutes to other websites and social networks, but instead watched more television and spent time with friends and family. They consumed much less news, and were consequently less aware of events but also less polarized in their views about them than those still on the network. Leaving Facebook boosted self-reported happiness and reduced feelings of depression and anxiety.

Several weeks after the deactivation period, those who had been off Facebook spent 23% less time on it than those who never left, and 5% of the forced leavers had yet to turn their accounts back on. And the amount of money subjects were willing to accept to shut off their accounts for another four weeks was 13% lower after the month off than it had been before.

In previous posts HM has made the point that our attentional resources are limited, and that they should not be wasted. HM has also recommended quitting Facebook and similar accounts. Of course, this is a personal question regarding how each of us uses our attentional resources. The key point is to be cognizant that our precious attentional resources are limited, and to spend them wisely and not waste them.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

We Need to Take Tech Addiction Seriously

March 26, 2019

The title of this post is the same as an article by psychologist Doreen Dodgen-Magee in the 19 March 2019 issue of the Washington Post. The World Health Organization has recognized Internet gaming as a diagnosable addiction. Dr. Dodgen-Magee argues that psychologists and other mental-health professionals must begin to acknowledge that technology use has the potential to become addictive and to impact individuals and communities. Sometimes the consequences are dire.

She writes that the research is clear: Americans spend most of their waking hours interacting with screens. Studies from the nonprofit group Common Sense Media indicate that U.S. teens average approximately nine hours per day with digital media, tweens spend six hours, and our youngest, ages zero to 8, spend 2.5 hours daily in front of a screen. According to research by the Nielsen Company, the average adult in the United States spends more than 11 hours a day in the digital world. Dr. Dodgen-Magee claims that when people invest this kind of time in any activity, we must at least start to ask what it means for their mental health.

Both correlational and causal relationships have been established between tech use and various mental-health conditions. Research at the University of Pittsburgh found higher rates of depression and anxiety among young adults who engage with many social media platforms than among those who engage with only two. Jean Twenge found that the psychological development of adolescents is slowing down and that depression, anxiety, and loneliness, which she attributes to tech engagement, are on the rise. Multitasking, a behavior that technology encourages and reinforces, is consistently correlated with poor cognitive and mental-health outcomes. Researchers at the University of Pennsylvania have published the first experimental data linking decreased well-being to Facebook, Snapchat, and Instagram use in young adults. Dr. Dodgen-Magee concludes that our technology use is affecting our psychological functioning.

The author has been examining the interplay between technology and mental health for close to two decades. She finds that while technology can do incredible things for us in nearly every area of life, it is neither all good nor benign.

The author writes that when the mental-health community resists fully exploring the costs associated with constant tech interaction, it leaves those struggling with compulsive or potentially harmful use of their devices few places to turn. She continues that recently a woman scheduled a consultation with her because she was concerned about her inability to focus. The woman was a self-described Type A personality who found herself simultaneously interacting with three or four screens for nearly 20 hours a day, determined to stay on top of every demand. When it came time for her biannual revision of an important procedural manual, she could not focus on that single task long enough to do it effectively. She is not the only individual with this problem.

She writes that consequently our attention spans are short. Our ability to focus on one task at a time is impaired. And our boredom tolerance is nil. People now rely on the same devices that drive so much of our anxiety and alienation for both stimulation and soothing. While, for many people, these changes will never move into the domain of addiction, for others they already have. In a recent Common Sense Media poll, 50% of adolescents reported already feeling that their use had become addictive and 27% of parents reported the same.

She writes, "If Americans were interacting with anything else for 11-plus hours a day, I feel confident we'd be talking more about how that interaction shapes us. Mental-health professionals must begin to educate themselves about the digital pools in which their clients swim and learn about the impact of excessive technology use on human development and functioning. It is too easy for therapists to assume that everyone's engagement with the digital domain looks just like their own and to go merrily from there. We would serve our clients well by understanding the unique way in which many platforms encourage addictive patterns and behaviors. We should also create non-shaming environments in which they can candidly explore how their tech use impacts them.

It’s time to put our phones down and begin an informed conversation about how technology is impacting our mental health. Our clients’ health and the well-being of our communities may depend on it.”

Personal Examples of Stimulator Mode

March 20, 2019

This is the fourth post in the series of posts based on a book by Stephen Kosslyn and G. Wayne Miller titled "Top Brain, Bottom Brain." The subtitle is "Harnessing the Power of the Four Cognitive Modes." During the Vietnam War, Abbie Hoffman was a cofounder of the Youth International Party (Yippies). Hoffman organized marches, sit-ins, and demonstrations, and by October 1967 was deeply involved in planning two days of actions at the Lincoln Memorial and outside the Pentagon. Preparations included obtaining a permit, which set a limit of 32 hours for the demonstrations. By the time that deadline arrived, organizers had achieved their primary objective, national coverage of their cause. Many began to leave, but Hoffman and others stayed on into a second morning—and were arrested. The authors note that this was pointless, as the protest had already succeeded, and counterproductive for Hoffman, whose time would have been better spent planning the next action, not trying to free himself from the criminal justice system. The authors write, "With his long and intensive involvement in protests, Hoffman had repeatedly experienced the potential consequences—but he behaved like someone who did not engage in bottom-brain thinking as deeply as he should have."

In 1968 Hoffman played a major role in planning demonstrations using his top brain. In the weeks leading up to the Democratic national convention Hoffman oversaw production of tens of thousands of leaflets, posters, and buttons urging antiwar protestors to join him in Chicago for the convention. He helped coordinate news coverage. He reached out to speakers and musicians and he presided over weekly meetings.

His work paid off: thousands were on hand that August 28, when the Democrats nominated Hubert Humphrey as their presidential candidate. With the world's journalists present, Hoffman had his biggest platform yet and a chance to make a powerful statement. But when he dressed that morning, he did not think of the consequences of writing the F-word in lipstick on his forehead. The consequence was one that many would have predicted: police arrested him and held him for thirteen hours. Hoffman missed the demonstration that would become one of the iconic protests of the 1960s, and he stood trial as one of the Chicago Seven, a long court ordeal that effectively removed him from the leadership of the movement.

When he emerged from hiding as a fugitive he wrote, “It’s mind boggling, but being a fugitive I’ve seen the way normal people live and it’s made me realize just how wrong I was in the past. I’ve grown up too. You know how it is when you’re young and not in control. I’d like to go back to school and learn how to be a credit to the community…Age takes its toll but it teaches wisdom.” The authors conclude, “In his later years, Hoffman showed signs of having developed the ability to think in Perceiver Mode at least some of the time.”

The authors write, "What better contemporary example could we use to illustrate the characteristics of operating in the Stimulator mode than Sarah Palin, onetime vice presidential candidate, former governor of Alaska, and continuing presence in American culture?" Palin moves through life formulating and carrying out plans. But it appears that, like Hoffman, she often does not adequately register the consequences and adjust her plans accordingly. As a vice-presidential candidate she presented herself as a folksy, budget-cutting fiscal conservative and demanded instant attention. Voters who were wary of politicians who waste taxpayer dollars applauded this governor who had pared Alaskan state construction spending, sold the gubernatorial jet, and refused to be reimbursed for her hotel stays.

However, during the campaign, she and her family accepted $150,000 worth of designer outfits and accessories from Neiman Marcus, Saks Fifth Avenue, and Bloomingdale's. She indulged in an expensive makeup consultation—a spending spree that stood in stark contrast to her image as a Kmart-shopping mom.

In March 2010 she posted on her Facebook page pictures of gun crosshairs that "targeted" Democratic members of Congress for defeat. Rep. Gabrielle Giffords was one on whom she had placed gun crosshairs. On January 8, 2011, Rep. Giffords was tragically shot and seriously injured. She is still recovering from her injuries. Palin is one of the favorite targets for the satire of the Capitol Steps.

If you have not tested yourself to see if you are classified in the Stimulator Mode, go to the first post in this series, “Top Brain, Bottom Brain.”

The Unreality Machine

January 21, 2019

This is the ninth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” There was a gold rush in Veles, Macedonia. Teenage boys there worked in “media.” More specifically, American social media. The average U.S. internet user is virtually a walking bag of cash, with four times the advertising dollars of anyone else in the world. And the U.S. internet user is very gullible. The following is from the book: “In a town with 25% unemployment and an annual income of under $5,000, these young men had discovered a way to monetize their boredom and decent English-language skills. They set up catchy websites, peddling fad diets and weird health tips.” They relied on Facebook “shares” to drive traffic. Each click gave them a small slice of the pie from ads running along the side. Some of the best of them were pulling in tens of thousands of dollars a month.

Competition swelled, but fortunately for them the American political scene soon brought a virtually inexhaustible source of clicks and resulting fast cash: the 2016 presidential election. Now back to the text: “The Macedonians were awed by Americans’ insatiable thirst for political stories. Even a sloppy, clearly plagiarized jumble of text and ads could rack up hundreds of thousands of ‘shares.’ The number of U.S. politics-related websites operated out of Veles swelled into the hundreds.”

One of the successful entrepreneurs estimated that in six months, his network of fifty websites attracted some 40 million page views driven there by social media. This made him about $60,000. This 18-year-old then expanded his media empire. He outsourced the writing to three 15-year-olds, paying each $10 a day. He was far from the most successful of the Veles entrepreneurs. Some became millionaires. One rebranded himself as a “clickbait coach,” running a school where he taught dozens of others how to copy his success.

These viral news stories weren’t just exaggerations or products of political spin; they were flat-out lies. Sometimes the topic was purported proof that Obama had been born in Kenya or that he was planning a military coup. Another report warned that Oprah Winfrey had told her audience that “some white people have to die.”

The following is from the book: “Of the top twenty best-performing fake stories spread during the election, seventeen were unrepentantly pro-Trump. Indeed, the single most popular news story of the entire election was “Pope Francis Shocks World, Endorses Donald Trump for President.” Social media provided an environment in which lies created by anyone, from anywhere, could spread everywhere, making the liars plenty of cash along the way.”

In 1995 MIT media professor Nicholas Negroponte prophesied that there would be an interface agent that would read every newswire and newspaper, catch every TV and radio broadcast on the planet, and then construct a personalized summary. He called this the “Daily Me.”

Harvard law professor Cass Sunstein argued that the opposite might actually be true. Rather than expanding their horizons, people were just using the endless web to seek out information with which they already agreed. He called this the “Daily We.”

A few years later, with the creation of Facebook, the “Daily We,” an algorithmically created newsfeed, became a fully functioning reality.

For example, flat-earthers had little hope of gaining traction in a post-Christopher Columbus, pre-internet world. This wasn’t just because of the silliness of their views; it was also because they couldn’t easily find others who shared them. But the world wide web has given the flat-earth belief a dramatic comeback. Proponents now have an active community and an aggressive marketing scheme.

This phenomenon is called “homophily,” meaning “love of the same.” Homophily is what makes us humans social creatures able to congregate in such like-minded groups. It explains the growth of civilizations and cultures. It is also the reason an internet falsehood, once it begins to spread, can rarely be stopped.

Unfortunately, falsehood diffused significantly farther, faster, deeper, and more broadly than the truth. It became a deluge. The authors write, “Ground zero for the deluge, however, was in politics. The 2016 U.S. presidential election released a flood of falsehoods that dwarfed all previous hoaxes and lies in history. It was an online ecosystem so vast that the nightclubbing, moneymaking, lie-spinning Macedonians occupied only one tiny corner. There were thousands of fake websites, populated by millions of baldly false stories, each then shared across people’s personal networks. In the final three months of the 2016 election, more of these fake political headlines were shared on Facebook than real ones. Meanwhile, in a study of 22 million tweets, the Oxford Internet Institute concluded that Twitter users, too, had shared more ‘disinformation, polarizing and conspiratorial content’ than actual news. The Oxford team called this problem ‘junk news.’”

Crowdsourcing

January 18, 2019

This is the sixth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” During the terrorist attack on Mumbai, witnesses opened up all the resources of the internet, using Twitter to defend against the attack. When the smoke cleared, the Mumbai attack left several legacies. It was a searing tragedy visited upon hundreds of families. It brought two nuclear powers to the brink of war. It foreshadowed a major technological shift. Hundreds of witnesses—some on-site, some from afar—had generated a volume of information that previously would have taken months of diligent reporting to assemble. By stitching these individual accounts together, the online community had woven seemingly disparate bits of data into a cohesive whole. The authors write, “It was like watching the growing synaptic connections of a giant electric brain.”

This Mumbai operation was a realization of “crowdsourcing,” an idea that had been on the lips of Silicon Valley evangelists for years. It had originally been conceived as a new way to outsource programming jobs, the internet bringing people together to work collectively, more quickly and cheaply than ever before. As social media use skyrocketed, the promise of crowdsourcing extended to spaces far beyond business.

Crowdsourcing is about redistributing power: vesting the many with a degree of influence once reserved for the few. Crowdsourcing might be about raising awareness, or about money (also known as “crowdfunding”). It can kick-start a new business or throw support to people who might otherwise have remained little known. It was through crowdsourcing that Bernie Sanders became a fundraising juggernaut in the 2016 presidential election, raking in $218 million online.

For the Syrian civil war and the rise of ISIS, the internet was the “preferred arena for fundraising.” Besides allowing wide geographic reach, it expands the circle of fundraisers, seemingly linking even the smallest donor with their gift on a personal level. As the “Economist” explained, this was, in fact, one of the key factors that fueled the years-long Syrian civil war. Fighters sourced needed funds by learning “to crowdfund their war by using Instagram, Facebook and YouTube. In exchange for a sense of what the war was really like, the fighters asked for donations via PayPal. In effect, they sold their war online.”

In 2016 a hard-line Iraqi militia took to Instagram to brag about capturing a suspected ISIS fighter. The militia then invited its 75,000 online fans to vote on whether to kill or release him. Eager, violent comments rolled in from around the world, including many from the United States. Two hours later, a member of the militia posted a follow-up selfie; the body of the prisoner lay in a pool of blood behind him. The caption read, “Thanks for the vote.” In the words of Adam Lineman, a blogger and U.S. Army veteran, this represented a bizarre evolution in warfare: “A guy on the toilet in Omaha, Nebraska could emerge from the bathroom with the blood of some 18-year-old Syrian on his hands.”

Of course, crowdsourcing can be used for good as well as for evil.

Freed from the Feed

January 1, 2019

The title of this post is identical to the title of a piece by Elise Viebeck in the 25 December ’18 issue of the Washington Post. The piece follows the development of an early Facebook enthusiast.

In 2005 Michael Lampert, a student at the University of Arizona, joined an early version of Facebook. He recalls, “It felt very cool, very hip, very exclusive.” He posted silly anecdotes and ridiculous things about college life.

In mid-2008 he is a recent graduate in the middle of the Great Recession trying to find work. Although Facebook did not help him find work, it did provide a distraction and a connection to far-off friends. He says, “There was still this sense of happiness that I could go and log on and reignite old memories.”

In Spring 2012 he is a newcomer in fast-changing San Francisco. He endures rising rents and a difficult job in advertising. Unknown to him, a layoff loomed. Facebook became a way to keep track of new friends amid the upheaval. He says, “It helped me build the social circle I have now.”

In Summer 2018 he is thriving in Oakland, engaged to be married. He receives congratulations on Facebook. The platform feels different since the 2016 election. A friend’s decision to delete his account has made him think: “The people I had this artificial sense of relationship with online—how important is it that I maintain that? If I actively care about them, do they actively care about me?”

In late November he is about to be an ex-user of Facebook. He publishes his last post, urging friends to stay in touch by phone and email. He says, “Most people were like, ‘Oh, that’s too cool, good for you.’” But weeks later, few of his old contacts have reached out. He says, “I feel like my perspective on social media is very much in the minority.”

Now he is not interested in returning to Facebook. He is pursuing a career in human resources, hoping to make corporate workplaces more humane. He doesn’t think social media is evil, but its ubiquity still has him thinking. He says, “I’m moving more toward a sense of being in the moment.”

May this post assist you in making a New Year’s resolution to break from social media.

Scale of Russian Operation Detailed

December 23, 2018

The title of this post is identical to the title of an article by Craig Timberg and Tony Romm in the 17 Dec ’18 issue of the Washington Post. Subtitles are: EVERY MAJOR SOCIAL MEDIA PLATFORM USED and Report finds Trump support before and after election. The report the article describes is the first to analyze the millions of posts provided by major technology firms to the Senate Intelligence Committee.

The research was done by Oxford University’s Computational Propaganda Project and Graphika, a network analysis firm. It provides new details on how Russians worked at the Internet Research Agency (IRA), which U.S. officials have charged with criminal offenses for interfering in the 2016 campaign. The IRA divided Americans into key interest groups for targeted messaging. The report found that these efforts shifted over time, peaking at key political moments, such as presidential debates or party conventions. This report substantiates facts presented in prior healthy memory blog posts.

The data sets used by the researchers were provided by Facebook, Twitter, and Google and covered several years up to mid-2017, when the social media companies cracked down on the known Russian accounts. The report also analyzed data separately provided to House Intelligence Committee members.

The report says, “What is clear is that all of the messaging clearly sought to benefit the Republican Party and specifically Donald Trump. Trump is mentioned most in campaigns targeting conservatives and right-wing voters, where the messaging encouraged these groups to support his campaign. The main groups that could challenge Trump were then provided messaging that sought to confuse, distract and ultimately discourage members from voting.”

The report provides the latest evidence that Russian agents sought to help Trump win the White House. Democrats and Republicans on the panel previously studied the U.S. intelligence community’s 2017 finding that Moscow aimed to assist Trump, and in July, said the investigators had come to the correct conclusion. Nevertheless, some Republicans on Capitol Hill continue to doubt the nature of Russia’s interference in the election.

The Russians aimed energy at activating conservatives on issues such as gun rights and immigration, while sapping the political clout of left-leaning African American voters by undermining their faith in elections and spreading misleading information about how to vote. Many other groups such as Latinos, Muslims, Christians, gay men and women received at least some attention from Russians operating thousands of social media accounts.

The report offered some of the first detailed analyses of the role played by YouTube and Instagram in the Russian campaign, as well as anecdotes about how Russians used other social media platforms—Google+, Tumblr and Pinterest—that had received relatively little scrutiny. They also used email accounts from Yahoo, Microsoft’s Hotmail service, and Google’s Gmail.

While reliant on data provided by technology companies, the authors also highlighted the companies’ “belated and uncoordinated response” to the disinformation campaign and, once it was discovered, their failure to share more with investigators. The authors urged that in the future they provide data in “meaningful and constructive” ways.

Facebook provided the Senate with copies of posts from 81 Facebook pages and information on 76 accounts used to purchase ads, but it did not share posts from other accounts run by the IRA. Twitter has made it challenging for outside researchers to collect and analyze data on its platform through its public feed.

Google submitted information in a way that was especially difficult for researchers to handle, providing content such as YouTube videos but not the related data that would have allowed a full analysis. The researchers wrote that the YouTube information was so hard to study that they instead tracked the links to its videos from other sites in hopes of better understanding YouTube’s role in the Russian effort.

The report expressed concern about the overall threat social media poses to political discourse within and among nations, warning that companies once viewed as tools for liberation in the Arab world and elsewhere are now a threat to democracy.

The report also said, “Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement to being a computational tool for social control, manipulated by canny political consultants and available to politicians in democracies and dictatorships alike.”

The report traces the origins of Russian online influence operations to Russian domestic politics in 2009 and says that ambitions shifted to include U.S. politics as early as 2013. The efforts to manipulate Americans grew sharply in 2014 and every year after, as teams of operatives spread their work across more platforms and accounts to target larger swaths of U.S. voters by geography, political interests, race, religion and other factors.

The report found that Facebook was particularly effective at targeting conservatives and African Americans. More than 99% of all engagements—meaning likes, shares and other reactions—came from 20 Facebook pages controlled by the IRA including “Being Patriotic,” “Heart of Texas,” “Blacktivist” and “Army of Jesus.”

Given that Trump lost the popular vote, it is difficult to believe that he could have carried the Electoral College without this impressive support from the Russians. One can also envisage Ronald Reagan thrashing about in his grave knowing that the Republican presidential candidate was heavily indebted to Russia and that so many Republicans still support Trump.
© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Cyberwar

October 31, 2018

“Kiselev called information war the most important kind of war. At the receiving end, the chairwoman of the Democratic Party wrote of ‘a war, clearly, but waged on a different kind of battlefield.’ The term was to be taken literally. Carl von Clausewitz, the most famous student of war, defined it as ‘an act of force to compel our enemy to do our will.’ What if, as the Russian military doctrine of the 2010s posited, technology made it possible to engage the enemy’s will directly, without the medium of violence? It should be possible, as a Russian military planning document of 2013 proposed, to mobilize the ‘protest potential of the population’ against its own interests, or, as the Izborsk Club specified in 2014, to generate in the United States a ‘destructive paranoid reflection.’ Those are concise and precise descriptions of Trump’s candidacy. The fictional character won, thanks to votes meant as a protest against the system, and thanks to voters who believed paranoid fantasies that simply were not true… The aim of Russian cyberwar was to bring Trump to the Oval Office through what seemed to be normal procedures. Trump did not need to understand this, any more than an electrical grid has to know when it is disconnected. All that matters is that the lights go out.”

“The Russian FSB and Russian military intelligence (the GRU) both took part in the cyberwar against the United States. The dedicated Russian cyberwar center known as the Internet Research Agency was expanded to include an American Department when in June 2015 Trump announced his candidacy. About ninety new employees went to work on-site in St. Petersburg. The Internet Research Agency also engaged about a hundred American political activists who did not know for whom they were working. The Internet Research Agency worked alongside Russian secret services to move Trump into the Oval Office.”

“It was clear in 2016 that Russians were excited about these new possibilities. That February, Putin’s cyber advisor Andrey Krutskikh boasted: ‘We are on the verge of having something in the information arena that will allow us to talk to the Americans as equals.’ In May, an officer of the GRU bragged that his organization was going to take revenge on Hillary Clinton on behalf of Vladimir Putin. In October, a month before the elections, Pervyi Kanal published a long and interesting meditation on the forthcoming collapse of the United States. In June 2017, after Russia’s victory, Putin spoke for himself, saying that he had never denied that Russian volunteers had made cyber war against the United States.”

“In a cyberwar, an ‘attack surface’ is the set of points in a computer program that allow hackers access. If the target of a cyberwar is not a computer program but a society, then the attack surface is something broader: software that allows the attacker contact with the mind of the enemy. For Russia in 2015 and 2016, the American attack surface was the entirety of Facebook, Instagram, Twitter, and Google.”

“In all likelihood, most American voters were exposed to Russian Propaganda. It is telling that Facebook shut down 5.8 million fake accounts right before the election of November 2016. These had been used to promote political messages. In 2016, about a million sites on Facebook were using a tool that allowed them to artificially generate tens of millions of ‘likes,’ thereby pushing certain items, often fictions, into the newsfeed of unwitting Americans. One of the most obvious Russian interventions was the 470 Facebook sites placed by Russia’s Internet Research Agency, but purported to be those of American political organizations or movements. Six of these had 340 million shares each of content on Facebook, which would suggest that all of them taken together had billions of shares. The Russian campaign also included at least 129 event pages, which reached at least 336,300 people. Right before the election, Russia placed three thousand advertisements on Facebook, and promoted them as memes across at least 180 accounts on Instagram. Russia could do so without including any disclaimers about who had paid for the ads, leaving Americans with the impression that foreign propaganda was an American discussion. As researchers began to calculate the extent of American exposure to Russian propaganda, Facebook deleted more data. This suggests that the Russian campaign was embarrassingly effective. Later, the company told investors that as many as sixty million accounts were fake.”

“Americans were not exposed to Russian propaganda randomly, but in accordance with their own susceptibility, as revealed by their practices on the internet. People trust what sounds right, and trust permits manipulation. In one variation, people are led towards even more intense outrage about what they already fear or hate. The theme of Muslim terrorism, which Russia had already exploited in France and Germany, was also developed in the United States. In crucial states such as Michigan and Wisconsin, Russia’s ads were targeted at people who could be aroused by anti-Muslim messages. Throughout the United States, likely Trump voters were exposed to pro-Clinton messages on what purported to be American Muslim sites. Russian pro-Trump propaganda associated refugees with rapists. Trump had done the same when announcing his candidacy.”

“Russian attackers used Twitter’s capacity for massive retransmission. Even in normal times on routine subjects, perhaps 10% of Twitter accounts (a conservative estimate) are bots rather than human beings: that is, computer programs of greater or lesser sophistication, designed to spread certain messages to a target audience. Though bots are less numerous than humans on Twitter, they are more efficient than humans in sending messages. In the weeks before the election, bots accounted for about 20% of the American conversation about politics. An important scholarly study published the day before the polls opened warned that bots could ‘endanger the integrity of the presidential election.’ It cited three main problems: ‘first, influence can be redistributed across suspicious accounts that may be operated with malicious purposes; second, the political conversation can be further polarized; third, spreading misinformation and unverified information can be enhanced.’ After the election, Twitter identified 2,752 accounts as instruments of Russian political influence. Once Twitter started looking it was able to identify about a million suspicious accounts per day.”

“Bots were initially used for commercial purposes. Twitter has an impressive capacity to influence human behavior by offering deals that seem cheaper or easier than alternatives. Russia took advantage of this. Russian Twitter accounts suppressed the vote by encouraging Americans to ‘text-to-vote,’ which is impossible. The practice was so massive that Twitter, which is very reluctant to intervene in discussions over its platform, finally had to admit its existence in a statement. It seems possible that Russia also digitally suppressed the vote in another way: by making voting impossible in crucial places and times. North Carolina, for example, is a state with a very small Democratic majority, where most Democratic voters are in cities. On Election Day, voting machines in cities ceased to function, thereby reducing the number of votes recorded. The company that produced the machines in question had been hacked by Russian military intelligence. Russia also scanned the electoral websites of at least twenty-one American states, perhaps looking for vulnerabilities, perhaps seeking voter data for influence campaigns. According to the Department of Homeland Security, ‘Russian intelligence obtained and maintained access to elements of multiple U.S. state or local electoral boards.’”

“Having used its Twitter bots to encourage a Leave vote in the Brexit referendum, Russia now turned them loose in the United States. In several hundred cases (at least), the very same bots that worked against the European Union attacked Hillary Clinton. Most of the foreign bot traffic was negative publicity about her. When she fell ill on September 11, 2016, Russian bots massively amplified the case of the event, creating a trend on Twitter under the hashtag #HillaryDown. Russian trolls and bots also moved to support Donald Trump directly at crucial points. Russian trolls and bots praised Donald Trump and the Republican National Convention over Twitter. When Trump had to debate Clinton, which was a difficult moment for him, Russian trolls and bots filled the ether with claims that he had won or that the debate was somehow rigged against him. In crucial swing states that Trump had won, bot activity intensified in the days before the election. On Election Day itself, bots were firing with the hashtag #WarAgainstDemocrats. After Trump’s victory, at least 1,600 of the same bots that had been working on his behalf went to work against Macron and for Le Pen in France, and against Merkel and for the AfD in Germany. Even at this most basic technical level, the war against the United States was also the war against the European Union.”

“In the United States in 2016, Russia also penetrated email accounts, and then used proxies on Facebook and Twitter to distribute selections that were deemed useful. The hack began when people were sent an email message that asked them to enter their passwords on a linked website. Hackers then used security credentials to access that person’s email account and steal its contents. Someone with knowledge of the American political system then chose what portions of this material the American public should see, and when.”

The hackings of the Democratic National Committee and the WikiLeaks releases are well known. The emails that were made public were carefully selected to ensure strife between supporters of Clinton and her rival for the nomination, Bernie Sanders. Their release created division at the moment when the campaign was meant to coalesce. With his millions of Twitter followers, Trump was among the most important distribution channels of the Russian hacking operation. Trump also aided the Russian endeavor by shielding it from scrutiny, denying repeatedly that Russia was intervening in the campaign.
Since Democratic congressional committees lost control of private data, Democratic candidates were molested as they ran for Congress. After their private data were released, American citizens who had given money to the Democratic Party were also exposed to harassment and threats. All this mattered at the highest levels of politics, since it affected one major political party and not the other. “More fundamentally, it was a foretaste of what modern totalitarianism is like: no one can act in politics without fear, since anything done now can be revealed later, with personal consequences.”

No one who released emails over the internet had anything to say about the relationship of the Trump campaign to Russia. “This was a telling omission, since no American presidential campaign was ever so closely bound to a foreign power. The connections were perfectly clear from the open sources. One success of Russia’s cyberwar was that the seductiveness of the secret and the trivial drew America away from the obvious and the important: that the sovereignty of the United States was under attack.”

Quotes are taken directly from “The Road to Unfreedom: Russia, Europe, America” by Timothy Snyder.

The 2016 Election—Part One

July 20, 2018

This post is based on David E. Sanger’s “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age.” In the middle of 2015 the Democratic National Committee asked Richard Clarke to assess the political organization’s digital vulnerabilities. He was amazed at what his team discovered. The DNC—despite its Watergate history, despite the well-publicized Chinese and Russian intrusions into the Obama campaign computers in 2008 and 2012—was securing its data with the kind of minimal techniques one would expect to find at a chain of dry cleaners. The way spam was filtered wasn’t even as sophisticated as what Google’s Gmail provides; it certainly wasn’t prepared for a sophisticated attack. And the DNC barely trained its employees to spot “spear phishing” emails of the kind that fooled the Ukrainian power operators into clicking on links that stole whatever passwords they entered. It lacked any capability for detecting suspicious activity in the network, such as the dumping of data to a distant server. Sanger writes, “It was 2015, and the committee was still thinking like it was 1972.”

So Clarke’s team came up with a list of urgent steps the DNC needed to take to protect itself. The DNC said they were too expensive. Clarke recalled “They said all their money had to go into the presidential race.” Sanger writes, “Of the many disastrous misjudgments the Democrats made in the 2016 elections, this one may rank as the worst.” A senior FBI official told Sanger, “These DNC guys were like Bambi walking in the woods, surrounded by hunters. They had zero chance of surviving an attack. Zero.”

When an intelligence report from the National Security Agency about a suspicious Russian intrusion into the computer networks at the DNC was tossed onto Special Agent Adrian Hawkins’s desk at the end of the summer of 2015, it did not strike him or his superiors at the FBI as a four-alarm fire. When Hawkins eventually called the DNC switchboard, hoping to alert its computer-security team to the FBI’s evidence of Russian hacking, he discovered that the DNC didn’t have a computer-security team. In November 2015 Hawkins contacted the DNC again and explained that the situation was worsening. This second warning still did not set off alarms.

Anyone looking for a motive for Putin to poke into the election machinery of the United States does not have to look far: revenge. Putin had won his election, but only by essentially assuring the outcome; the evidence was on video that went viral.
Clinton, who was Secretary of State, called out Russia for its antidemocratic behavior. Putin took the declaration personally. The sight of actual protesters, shouting his name, seemed to shake the man known for his unchanging countenance. He saw this as an opportunity. He declared that the protests were foreign-inspired. At a large meeting he was hosting, he accused Clinton of being behind “foreign money” aimed at undercutting the Russian state. Putin quickly put down the 2011 protests and made sure that there was no repetition in the aftermath of later elections. His mix of personal grievance at Clinton and general grievance at what he viewed as American hypocrisy never went away. It festered.

Yevgeny Prigozhin developed a large project for Putin: a propaganda center called the Internet Research Agency (IRA). It was housed in a squat four-story building in Saint Petersburg. From that building, tens of thousands of tweets, Facebook posts, and advertisements were generated in hopes of triggering chaos in the United States, and, at the end of the process, helping Donald Trump, a man who liked oligarchs, enter the Oval Office.

This creation of the IRA marked a profound transition in how the Internet could be put to use. Sanger writes, “For a decade it was regarded as a great force for democracy: as people of different cultures communicated, the best ideas would rise to the top and autocrats would be undercut. The IRA was based on the opposite thought: social media could just as easily incite disagreements, fray social bonds, and drive people apart. While the first great blush of attention garnered by the IRA would come because of its work surrounding the 2016 election, its real impact went deeper—in pulling at the threads that bound together a society that lived more and more of its daily life in the digital space. Its ultimate effect was mostly psychological.”

Sanger continues, “There was an added benefit: The IRA could actually degrade social media’s organizational power through weaponizing it. The ease with which its “news writers” impersonated real Americans—or real Europeans, or anyone else—meant that over time, people would lose trust in the entire platform. For Putin, who looked at social media’s role in fomenting rebellion in the Middle East and organizing opposition to Russia in Ukraine, the notion of calling into question just who was on the other end of a Tweet or Facebook post—of making revolutionaries think twice before reaching for their smartphones to organize—would be a delightful by-product. It gave him two ways to undermine his adversaries for the price of one.”

The IRA moved on to advertising. Between June 2015 and August 2017 the agency and groups linked to it spent thousands of dollars on Facebook ads each month, a fraction of the cost of an evening of television advertising on a local American station. In this period Putin’s trolls reached up to 126 million Facebook users, while on Twitter they made 288 million impressions. Bear in mind that there are about 200 million registered voters in the US and only 139 million voted in 2016.

Here are some examples of the Facebook posts: a doctored picture of Clinton shaking hands with Osama bin Laden, and a comic depicting Satan arm-wrestling Jesus. The Satan figure says, “If I win, Clinton wins.” The Jesus figure responds, “Not if I can help it.”

The IRA dispatched two of its experts, a data analyst and a high-ranking member of the troll farm, who spent three weeks touring purple states. Their rudimentary research produced an understanding of swing states (something that doesn’t exist in Russia). This allowed the Russians to develop an election-meddling strategy in which the IRA targeted specific populations within these states that might be vulnerable to influence by social media campaigns operated by trolls across the Atlantic.

Russian hackers also broke into the State Department’s unclassified email system, and they might also have gotten into some “classified” systems. They also managed to break into the White House system. In the end, the Americans won the cyber battle in the State Department and White House systems, though they did not fully understand how it was part of an escalation of a very long war.

The Russians also broke into Clinton’s election office in Brooklyn. Campaign chairman John Podesta fell prey to a phishing attempt: when he reset his password through the phishing link, the Russians obtained access to sixty thousand emails going back a decade.

The Electoral College Needs to Go

May 17, 2018

This post is based on Cathy O’Neil’s informative book, “Weapons of Math Destruction.” The penultimate chapter in the book shows how weapons of math destruction are ruining our elections. It is only recently that Facebook and Cambridge Analytica have been found to employ users’ data for nefarious purposes, yet Dr. O’Neil’s book, published in 2016, anticipated this. To summarize the chapter, weapons of math destruction are distorting if not destroying our elections. Actually the most informative and most important part of the chapter is found in a footnote at the end:

“At the federal level, this problem could be greatly alleviated by abolishing the Electoral College system. It’s the winner-take-all mathematics from state to state that delivers so much power to a relative handful of voters. It’s as if in politics, as in economics, we have a privileged 1 percent. And the money from the financial 1 percent underwrites the microtargeting to secure the votes of the political 1 percent. Without the Electoral College, by contrast, every vote would be worth exactly the same. That would be a step toward democracy.”

Readers of the healthy memory blog should realize that the Electoral College is an injustice that has been addressed in previous healthy memory blog posts (13 to be exact). Twice in recent history the Electoral College, not the popular vote, has produced presidents with adverse effects. One result was a war in Iraq justified by nonexistent weapons of mass destruction. And most recently, the most ill-suited person for the presidency became president, contrary to the popular vote.

The justification for the Electoral College was the fear that ill-informed voters might elect someone who was unsuitable for the office. If there ever was a candidate unsuitable for the office, that candidate was Donald Trump. It was the duty of the Electoral College to deny him the presidency, a duty they failed. So, the Electoral College needs to be disbanded and never reassembled.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Responsible Tech is Google’s Likely Update

May 9, 2018

The title of this post is identical to the title of an article by Elizabeth Dwoskin and Hayley Tsukayama in the 8 May 2018 issue of the Washington Post. At its annual developer conference, scheduled to kick off today in its hometown of Mountain View, CA, Google is set to announce a new set of controls for its Android operating system, aimed at helping individuals and families manage the time they spend on mobile devices. Google’s chief executive, Sundar Pichai, is expected to emphasize the theme of responsibility in his keynote address.

Pichai is trying to address the increased public skepticism and scrutiny of the technology industry regarding the negative consequences of how its products are used by billions of people. Some of this criticism concerns the addictive nature of many devices and programs. In January two groups of Apple shareholders asked the company to design products to combat phone addiction in children. Apple chief executive Tim Cook has said he would keep the children in his life away from social networks, and Steve Jobs placed strict limitations on his children’s screen time. Even Facebook has admitted that consuming Facebook passively tends to put people in a worse mood, according to both its internal research and academic reports. Facebook chief executive Mark Zuckerberg has said that his company didn’t take a broad enough view of its responsibility to society, in areas such as Russian interference and the protection of people’s data. HM thinks that this statement should qualify as the understatement of the year.

Google appears to be ahead of its competitors with respect to family controls. Google offers Family Link, a suite of tools that allows parents to regulate how much time their children can spend on apps and to remotely lock a child’s device. Family Link gives parents weekly reports on children’s app usage and offers controls to approve the apps kids download.

Google has also overhauled Google News. The new layout shows how several outlets are covering the same story from different angles. It will also make it easier to subscribe to news organizations directly from its app store.

HM visited Google’s campus at Mountain View on one of the trips provided by a month-long workshop he attended. It looks more like a university campus than a technology business. Different people explained what they were working on, and we ate at the Google cafeteria. This cafeteria is large, offers a wide variety of delicious food, and is open 24 hours so staff can snack or dine for free any time they want.

The most talented programmer with whom HM was privileged to work left us for an offer at Google. She felt that this was a needed move to further develop her already excellent programming skills.


Data is Needed on Facial Recognition Accuracy

May 8, 2018

This post is inspired by an article titled “Over fakes, Facebook’s still seeing double” by Drew Harwell in the 5 May 2018 issue of the Washington Post. In December Facebook offered a solution to its worsening problem with fake accounts: new facial-recognition technology to spot when a phony profile tries to use someone else’s photo. The company is now encouraging its users to agree to expanded use of their facial data, saying they won’t be protected from imposters without it. The Post article notes that Katie Greenmail and other Facebook users who consented to that technology in recent months have been plagued by a horde of identity thieves.

After the Post presented Facebook with a list of numerous fake accounts, the company revealed that its system is much less effective than previously advertised: the tool looks only for imposters within a user’s circle of friends and friends of friends—not the site’s 2-billion-user network, where the vast majority of doppelgänger accounts are probably born.

Before any entity uses facial recognition software, it should be compelled to test the software and describe in detail the sample the software was developed on, including the size and composition of that sample, and its performance with respect to correct identifications, incorrect identifications, and no-classifications. Facebook needed to do this testing and present the results. And Facebook users needed to demand these results before using face recognition. How many times do users need to be burned by Facebook before they terminate interactions with the application?

The way facial recognition is used on police shows on television seems like magic. A photo is taken at night with a cellphone and is tested against a database that yields the identity of the individual and his criminal record. These systems seem to act with perfection. HM has yet to see a show in which someone in a database is incorrectly identified, and that individual is then arrested by the police, interrogated, and charged. That must happen. But how often and under what circumstances? Someone with a criminal record is likely to be in the database, while the individual whose photo was taken may not be. If there is no true match, will the system return the best match it can and make an innocent person in the database a suspect in the crime?
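The scenario above is an open-set identification problem: the true person may not be in the database at all. Below is a minimal sketch, with entirely invented match scores and a hypothetical similarity threshold, of the kind of evaluation this section calls for: tallying correct identifications, incorrect identifications, and no-classifications.

```python
# A sketch (hypothetical data) of open-set face-matcher evaluation.
# For each probe photo, the matcher returns its best database match and
# a similarity score. Below a threshold, the system reports "no match."

THRESHOLD = 0.80  # assumed operating point; real systems publish full error curves

# (true_identity, matched_identity, similarity) -- invented trial results
trials = [
    ("alice", "alice", 0.95),   # correct match, accepted
    ("bob",   "carol", 0.88),   # wrong person scored above threshold
    ("dave",  "dave",  0.70),   # correct person, but rejected by threshold
    ("eve",   "frank", 0.60),   # wrong person, correctly rejected
]

correct = incorrect = no_classification = 0
for true_id, matched_id, score in trials:
    if score < THRESHOLD:
        no_classification += 1
    elif matched_id == true_id:
        correct += 1
    else:
        incorrect += 1   # the "innocent suspect" case the paragraph worries about

print(correct, incorrect, no_classification)  # -> 1 1 2
```

Reporting all three counts, rather than accuracy alone, is exactly the disclosure the preceding paragraph argues vendors should be compelled to make.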

The public, and especially defense lawyers, need to have quality data on how well these recognition systems perform.


How Facebook Let A Friend Pass My Data to Cambridge Analytica

April 24, 2018

The title of this post is identical to the title of a News & Technology piece by Timothy Revell in the 21 April 2018 issue of the New Scientist. This Is Your Digital Life (TIYDL) is the name of the Facebook app whose data ended up in the hands of Cambridge Analytica. Although only about 270,000 people used the TIYDL app, Facebook estimates that Cambridge Analytica ended up with data from 87 million people. These data were used by Cambridge Analytica to perform election shenanigans. The United Kingdom (UK) is gathering claimants to take Facebook to court for mishandling their data.

People who used the TIYDL app gave it permission to access the Facebook public profile page, date of birth, and current city for each of their friends, along with the pages they liked. Facebook also says that “a small number of people gave access to their own timeline and private messages, meaning that posts or messages from their friends would have been scooped up as well.”

The TIYDL app was created by University of Cambridge professor Aleksandr Kogan to research how someone’s online presence corresponds to their personality traits. Kogan gave data from the app to Cambridge Analytica, which Facebook says was a violation of its terms of service. The UK’s information commissioner is also investigating whether this broke UK data protection laws. Data collected for research purposes can’t be given to a private company for a different use without consent. Kogan says that Facebook knew his intention was to pass the data on and that this was written in the TIYDL app’s terms and conditions.

When reporters told Facebook about the situation in 2015, the firm said Cambridge Analytica had to delete the data. Cambridge Analytica said it did this, but whistle-blower Christopher Wylie said it didn’t.

Now Facebook is informing the people involved. It has released a tool that lets people check if their data were involved (bit.ly/2uXuHOY). The author used the tool and found, to his surprise, that a friend had used the app.

The problem is that to use virtually any software you need to agree to its terms of service, which include the privacy policies. Researchers at Carnegie Mellon University found in 2012 that it would take the average person 76 days to read all the privacy policies that they see each year. Clearly this is unreasonable.

These agreements should be required to be of reasonable length and understandable to the layperson. Moreover, the default option should be “out,” with action required by the user to “opt in.” This is necessary to be sure that people understand what they are doing.


Facebook May Guess Millions of People’s Sexuality to Sell Ads

February 25, 2018

The title of this post is identical to the title of an article in the News Section of the 24 Feb 2018 issue of the New Scientist. Last year Spain fined Facebook 1.2 million euros for targeting adverts based on sensitive information without first obtaining explicit consent. In May, new EU-wide legislation takes effect, the General Data Protection Regulation (GDPR), which states that users must be specifically asked before companies collect and use their sensitive information.

Angel Cuevas Rumin at Carlos III University of Madrid and his colleagues have been conducting research on how Facebook uses its users’ information to target its adverts. The research team purchased three Facebook ad campaigns. One targeted users interested in various religions, another was aimed at people based on their political opinions, and a third targeted those interested in “transsexualism” or “homosexuality.” For 35 euros, they reached more than 25,000 people.

Remember that in Europe it is against the law for companies like Facebook to use sensitive information without first obtaining explicit consent from its users. So it would appear that Facebook has broken the law. However, Facebook argues that interests are not the same as sensitive information, so they claim that they are in compliance with the law.

To assess how often sensitive interests are used to target adverts on Facebook, Cuevas and his team created an internet browser extension that analyses how you interact with adverts. Moreover, it also records why you were shown a specific advert. Between October 2016 and October 2017, more than 3000 people from EU countries used the tool, corresponding to 5.5 million adverts. The team found more than 2000 reasons that Facebook had for showing someone an advert that related to sensitive interests, including politics, religion, health, sexuality, and ethnicity. About 90% of the people who used the extension were targeted with ads based on these categories.

Extrapolating from the demographics of the people using the browser extension, the team estimated that about 40% of all EU citizens, some 200 million people, may have been targeted using sensitive interests. (arxiv.org/abs/1802.05030).
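The extrapolation itself is simple proportional scaling from the extension's users to the EU as a whole. A sketch with approximate figures (the EU population is assumed here to be about 508 million in 2017):

```python
# Sketch of the study's extrapolation step (figures approximate/assumed):
# the share of extension users targeted via sensitive interests is
# scaled up to the EU population of the time.
eu_population = 508_000_000   # approximate EU population, 2017 (assumed)
targeted_share = 0.40         # fraction estimated by the study

estimated_targeted = eu_population * targeted_share
print(f"{estimated_targeted / 1e6:.0f} million")  # -> 203 million
```

This lands close to the “some 200 million people” the article reports, though the real study also had to correct for the extension users not being a representative sample of EU demographics.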

Europeans do not like this state of affairs. A survey in 2015 found that 63% of EU citizens don’t trust online firms, and more than half don’t like providing personal information in return for free services.

Neither does HM, who no longer uses Facebook.


Social Media Putting Democracy at Risk

February 24, 2018

This blog post is based on an article titled “YouTube excels at recommending videos—but not at detecting hoaxes” by Craig Timberg, Drew Harwell, and Tony Romm in the 23 Feb 2018 issue of the Washington Post. The article begins, “YouTube’s failure to stop the spread of conspiracy theories related to last week’s school shooting in Florida highlights a problem that has long plagued the platform: It is far better at recommending videos that appeal to users than at stanching the flow of lies.”

To be fair, YouTube’s fortunes are based on how well its recommendation algorithm is tuned to the tastes of individual viewers. Consequently, the recommendation algorithm is its major strength. YouTube’s weakness in detecting misinformation was on stark display this week as demonstrably false videos rose to the top of YouTube’s rankings. The article notes that one clip that mixed authentic news images with misleading context earned more than 200,000 views before YouTube yanked it Wednesday for breaching its rules on harassment.

The article states, “These failures this past week—which also happened on Facebook, Twitter, and other social media sites—make it clear that some of the richest, most technically sophisticated companies in the world are losing against people pushing content rife with untruth.”

YouTube apologized for the prominence of these misleading videos, which claimed that survivors featured in news reports were “crisis actors” appearing to grieve for political gain. YouTube removed these videos and said the people who posted them outsmarted the platform’s safeguards by using portions of real news reports about the Parkland, Fla, shooting as the basis for their conspiracy videos and memes that repurpose authentic content.

YouTube made a statement that its algorithm looks at a wide variety of factors when deciding a video’s placement and promotion. The statement said, “While we sometimes make mistakes with what appears in the Trending Tab, we actively work to filter out videos that are misleading, clickbait or sensational.”

It is believed that YouTube is expanding the fields its algorithm scans, including a video’s description, to ensure that clips alleging hoaxes do not appear in the trending tab. HM recommends that humans be involved with the algorithm scans to achieve man-machine symbiosis. [To learn more about symbiosis, enter “symbiosis” into the search block of the Healthymemory blog.] The company has pledged on several occasions to hire thousands more humans to monitor trending videos for deception. It is not known whether this has been done or whether humans are being used in a symbiotic manner.

Google also seems to have fallen victim to falsehoods, as it did after previous mass shootings, via its auto-complete feature. When users type the name of a prominent Parkland student, David Hogg, the word “actor” often appears in the suggestion field, a feature that drives traffic to the falsehood.

 


How Google and Facebook Hooked Us—and How to Break the Habit

February 17, 2018

The title of this post is identical to the title of a post by Douglas Heaven in the Features section of the 10 February 2018 issue of the New Scientist.

In 2009 Justin Rosenstein created Facebook’s “Like” button. Now he has dedicated himself to atoning for it. Martin Moore of King’s College London said, “Just a few years ago, no one could say a bad word about the tech giants. Now no one can say a good word.” The author writes, “Facebook, Google, Apple and Amazon variously avoid tax, crush competition, and violate privacy, the complaints go. Their inscrutable algorithms determine what we see and what we know, shape opinions, narrow world views and even subvert the democratic order that spawned them.”

“Facebook knew right from the start it was making something that would exploit vulnerabilities in our psychology. Behavior design for persuasive tech, a discipline founded at Stanford University in California in the 1990s, is baked into much of big tech’s hardware and software. Whether it is Amazon’s ‘customers who bought this also bought’ function, or the eye-catching red or orange ‘something new’ dots on your smartphone app icons, big tech’s products are not just good, but subtly designed to control us, even to addict us — to grab us by the eyeballs and hold us there.”

The article goes on to develop this theme further. Here are some data points offered in the article: there are 2 billion active Facebook users; 88% of Google’s 2017 income came from advertising; and 20% of global spending on advertising goes to Facebook and Google.

And these products have been used to interfere with democracy and to subvert elections.

The article goes on and discusses various regulatory approaches for dealing with these problems, but warns about unintended consequences.

The most telling point follows: “But if big tech’s power is based entirely on our behavior, the most effective brake on their influence is to change our own bad habits.” This point has long been advocated in the healthy memory blog. The web is filled with tips for tuning out, as is the healthy memory blog. Entering “technology addiction” into the blog’s search block will lead you to ways to free yourself from this addiction. Entering “Mary Aiken” will lead you to many posts based on her book “The Cyber Effect,” which you might find well worth your time.


How To Take Back Your Life from Disruptive Technology

September 27, 2017

There have been twelve posts on “The Distracted Mind: Ancient Brains in a High Tech World” that documented the adverse effects of technology. There was an additional post demonstrating that just the presence of a smartphone can be disruptive. The immediately preceding post documented the costs of social media per se. These media have disruptive effects on lives and minds, and these disruptive effects degrade your mind, which, as the blog posts documented, affects many aspects of your life, including education. Hence the title of this blog post.

Unfortunately, social media make social demands, so removing yourself from social media is something that needs to be explained to your friends. Let them know you’ll still be willing to communicate via email, and review with them the reasons for your decision. Cite the relevant research presented in this blog and elsewhere. Point out that Facebook not only has an adverse impact on cognition; it was also a tool used by Russia to influence our elections. Facebook accepted rubles to influence the US presidential election, and the magnitude of this intervention has yet to be determined. For patriotic reasons alone, Facebook should be ditched. You are also taking these steps to reclaim control of your attentional resources and to build a healthy memory.

Carefully consider what steps you need to take. Heavy users become nervous when they are not answering alerts. One can gradually lengthen the intervals between answering alerts. However, going cold turkey and simply turning off alerts, though more painful initially, would free you from the compulsion to answer alerts sooner, and it would make your new behavior clear to your friends earlier rather than later. Similarly, you can answer text messages and phone calls only at designated times. Voice mail assures you won’t miss anything.

If asked by a prospective employer or university as to why you are not on Facebook, explain that you want to make the most of your cognitive potential and that Facebook detracts from this objective. Cite the research. You can develop a web presence by having your own website that you would control. Here you could attach supporting materials as you deem fit.

Doing this should make you stand out over any other candidates who might be competing with you (unless they are also following the advice of this blog). If your reviewer is not impressed, you should conclude that they are not worthy of you and that affiliating with them would be a big mistake. Hold to this conclusion regardless of the reputation of the school or employer.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The Happiness Effect

September 26, 2017

The subtitle of “The Happiness Effect,” a book by Donna Freitas, is “How Social Media is Driving a Generation to Appear Perfect at Any Cost.” The book reports extensive research using surveys and interviews on the use of social media by college students. The subtitle could be expanded to “How Social Media is Driving a Generation to Appear Perfect at Any Cost, Resulting in Unhappiness and Anxiety.” The book focuses on the emotional and social costs and ends with suggestions regarding how to ameliorate the damage.

Although this is an excellent book, HM had difficulty finishing it. He kept thinking how stupid, moronic, and damaging social media are. How could new technology be adopted and put to such counterproductive use? The reason HM’s reaction is much more severe than Donna Freitas’s is that he also considers social media in terms of how they exacerbate the problem of the Distracted Mind, which has been the topic of the fifteen healthy memory blog posts immediately preceding this one. So these activities that produce unhappiness and anxiety also assault the mind with more distractions.

They do so in two ways. First of all they subtract time from effective thinking. Social media also foster interruptions that further disrupt effective thinking. So consider the possibility that social media foster unhappy airheads.

Facebook pages are cultivated to impress future employers. Organizations and activities cultivate Facebook pages to provide good public relations for their organizations and activities. But remember the healthy memory blog post, “The Truth About Your Facebook Friends” based on Seth Stephens-Davidowitz’s groundbreaking book, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” You should realize that anyone who believes what they read on Facebook is a fool.

The following post will suggest some activities for you to consider should you be convinced of what you have read in the healthy memory blog and related sources on this topic. These suggestions go beyond what was presented in the blog post “Modifying Behavior.”


 

Workplace

September 19, 2017

This is the tenth post based on “The Distracted Mind: Ancient Brains in a High Tech World” by Drs. Adam Gazzaley and Larry Rosen.

A study of more than 200 employees at a variety of companies examined the factors that predicted employee stress levels. Although having too much work to do was the best predictor, it was only slightly stronger in predicting exhaustion, anxiety, and physical complaints than outside interruptions, many of which were electronic in nature. Gloria Mark summarized one study: “working faster with interruptions has its cost: people in the interrupted conditions experienced a higher workload, more stress, more time pressure and effort. So interrupted work may be done faster, but at a price.” Clive Thompson, in a New York Times interview, summed up research results on workplace interruptions by asserting that “we humans are Pavlovian; even though we know we’re just pumping ourselves full of stress, we can’t help frantically checking our email the instant the bell goes ding.”

Open-office settings further exacerbate this problem. Approximately 70% of US offices, including those at Google, Yahoo, Goldman Sachs, and Facebook, have either no partitions or partitions too low to make for quiet workplaces. Research has shown that open offices promote excessive distractions, and HM can personally testify to their disruptive effects. A content analysis of 27 open-office studies identified auditory distractions, job dissatisfaction, illness, and stress as major ramifications of this type of workplace.

The bottom line is that being constantly interrupted and having to spend extra time remembering what we were doing has a negative impact on workplace productivity and quality of life. One 2005 study, conducted before the major increase in smartphone usage, estimated that office workers being interrupted as often as eleven times an hour costs the United States $558 billion per year.


 

The Impact of Constantly Shifting Our Attention on Higher Education

September 17, 2017

This is the eighth post based on “The Distracted Mind: Ancient Brains in a High Tech World” by Drs. Adam Gazzaley and Larry Rosen. In one study, Dr. Rosen’s research team observed hundreds of middle school, high school, and university students studying something important for fifteen minutes in the environment where they normally study. Minute-by-minute observations showed that the typical student couldn’t stay focused on work for more than three to five minutes. Students were asked to provide their grade point average (GPA) on a four-point scale. The predictors of a lower GPA from these extensive data were: percentage of time on task, studying strategies, total media time during a typical day, and a preference for task-switching rather than working on a task until it was completed. Moreover, by examining the websites that students visited during their fifteen-minute sample, the researchers uncovered a fifth predictor of a lower GPA. Only one website visited predicted a lower GPA: Facebook. It did not matter whether students visited it once or fifteen times. Once was enough to predict lower grade performance.

In another experiment by Laura Bowman and her colleagues at Central Connecticut State University, students were randomly assigned to three groups to read a book chapter and then take a test. One group simply read the chapter and took the test. The second group first completed an instant messaging conversation with the experimenter and then read the chapter and took the test. The third group started to read the chapter, were interrupted with the same instant messaging conversation, delivered in pieces at various times during reading, and then took the test. All three groups performed equally well on the test, but the third group took substantially longer even when the time spent instant messaging was removed. This result leads to two conclusions. The first is that interrupted studying takes significantly more time. The second answers why it takes more time: each time one switches back to the primary task, time is lost switching and reorienting to where in the task one was when interrupted. In addition, working memory may also be compromised, as distractions degrade the fidelity of the information students are trying to maintain during the learning process.

Another study validating the negative impact of classroom multitasking interrupted students during a short video lecture and required them either to text the experimenter or post material on social media, under two conditions: one new text or post every minute, or one every thirty seconds. The control group simply watched the video, which was followed by a test. The results showed that more texting or social media posting resulted in poorer lecture notes and lower test scores than in the control group. A negative linear trend emerged in both lecture notes and test scores: the highest scores and best notes were demonstrated by those students who did not receive any interruptions, followed by the lesser scores and notes of students who were interrupted every minute, and, not surprisingly, the worst scores and notes of students who were interrupted every thirty seconds.

Several research studies have shown even more far-reaching effects of technology use by college students. One study showed that those students who used cell phones and texted more often during class showed more anxiety, had lower GPAs, and were less satisfied with life than students who used phones and texted less frequently. A different study of more than 770 college students discovered that students who used more interfering technology in the classroom also tended to engage in more high-risk behaviors, including using alcohol, cigarettes, marijuana, and other drugs, drunk driving, fighting, and having multiple sex partners. So it appears that college students who use inessential technology during either class sessions or while studying face difficulties on both an academic and personal level.

The Psychology of Technology

September 16, 2017

At the centerpiece of technology is the internet. This is the seventh post based on “The Distracted Mind: Ancient Brains in a High Tech World” by Drs. Adam Gazzaley and Larry Rosen. A distinction is made in human memory between information that is accessible in memory and information that is available in memory but not at the moment accessible. A similar distinction can be made for information in transactive memory. Information that can be readily accessed, via Google for instance, is accessible in transactive memory. However, information that requires more than one step to access is in available transactive memory. Obviously, the amount of information available in transactive memory is enormous, so only information that can be quickly accessed is in accessible transactive memory. So the hierarchy of information knowledge is:
accessible personal memory
available personal memory (information that is personal memory but is currently inaccessible)
accessible transactive memory (information readily accessible from technology or a fellow human)
available transactive memory (information that can be found with sufficient searches)

This hierarchy can be regarded as an indication of the depth of knowledge.

Someone who can communicate extemporaneously and accurately on a topic has an impressive degree of knowledge.

Someone who refers to notes is dependent on those notes.

Whenever we encounter new relevant information we are confronted with the problem of whether to commit that information to memory, or to bookmark it so it can be accessed when needed. Too much reliance on bookmarks can lead to superficial knowledge and unimpressive presentations.

Dr. Betsy Sparrow and her colleagues at Columbia University studied the ability to remember facts and unsurprisingly discovered that we were much better at knowing where to find the answers to our questions than we were at remembering the answers themselves. She dubbed this the “Google Effect.”

Social media began with email, but email is fundamentally one-to-one communication. Facebook is the medium for widespread communication. Moreover, there is the business of friending and liking. This tends to be taken to extremes. One cannot have hundreds of meaningful friends, and the continuous seeking of approval through likes can become problematic.

Smartphones are smart because the computer in the phone makes them smart. More than seven in ten Americans own one, more than 860 million Europeans own one, and more than half of all cell phone owners in Asia have at least one smartphone, if not more. More photographs are taken with smartphones than with digital cameras, and more online shopping is done via smartphones than through standard computers. Smartphone users pick up their phones an average of 27 times a day, ranging from 14 to 150 times per day depending on the study, the population, and the number of years someone has owned the smartphone; those who have owned a smartphone longer check it far more often than those who have recently obtained one. Frequently, there is no good reason for them to do so; 42% check their phone when they have time to kill (which rises to 55% of young adults). Only 23% claim to do so when there is something specific for them to do. Feelings of loneliness appear to underlie at least some of this apparently non-needed use of technology (see the healthy memory blog post “Loneliness”).

Multitasking, task switching, and continuous partial attention are serious problems. Remember that we cannot multitask. What is apparently multitasking is the rapid switching between or among tasks, and there are attentional costs in doing this switching. Multitasking occurs in every sphere of our world, including home, school, workplace, and our leisure life. Moreover, it is not just limited to the younger generation. One study followed a group of young adults and a group of older adults who wore biometric belts with embedded eyeglass cameras for more than 300 hours of leisure time. Younger adults switched from task to task twenty-seven times an hour, about once every two minutes. Older adults switched tasks seventeen times per hour, or once every three to four minutes. Former Microsoft executive Linda Stone termed this constant multitasking “continuous partial attention.” This could also be termed half-keistered information processing. Attention is not being distributed optimally.


The Truth About Your Facebook Friends

August 29, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” Social media are another source of big data. Seth writes, “The fact is, many Big Data sources, such as Facebook, are often the opposite of digital truth serum.”

Just as with surveys, in social media there is no incentive to tell the truth. Much more so than in surveys, there is a large incentive to make yourself look good. After all, your online presence is not anonymous. You are courting an audience and telling your friends, family members, colleagues, acquaintances, and strangers who you are.

To see how biased data pulled from social media can be, consider the relative popularity of the “Atlantic,” a highbrow monthly magazine, versus the “National Enquirer,” a gossipy, often-sensational magazine. Both publications have similar average circulations, selling a few hundred thousand copies. (The “National Enquirer” is a weekly, so it actually sells more total copies.) There are also a comparable number of Google searches for each magazine.

However, on Facebook, roughly 1.5 million people either like the “Atlantic” or discuss articles from the “Atlantic” on their profiles. Only about 50,000 like the Enquirer or discuss its contents.

Here is “Atlantic” versus “National Enquirer” popularity compared across different sources:
Circulation: roughly 1 “Atlantic” for every 1 “National Enquirer”
Google searches: 1 “Atlantic” for every 1 “National Enquirer”
Facebook likes: 27 “Atlantic” for every 1 “National Enquirer”

For assessing magazine popularity, circulation data is ground truth. And Facebook data is overwhelmingly biased against the trashy tabloid, making it the worst data for determining what people really like.

Here are some excerpts from the book:
“Facebook is digital brag-to-my-friends-about-how-good-my-life-is serum. In Facebook world, the average adult seems to be happily married, vacationing in the Caribbean, and perusing the “Atlantic.” In the real world, a lot of people are angry, on supermarket checkout lines, peeking at the “National Enquirer,” ignoring phone calls from their spouse, whom they haven’t slept with in years. In Facebook world, family life seems perfect. In the real world, family life is messy. It can be so messy that a small number of people even regret having children. In Facebook world, it seems every young adult is at a cool party Saturday night. In the real world, most are at home alone, binge-watching shows on Netflix. In Facebook world, a girlfriend posts twenty-six happy pictures from her getaway with her boyfriend. In the real world, immediately after posting this, she Googles “my boyfriend won’t have sex with me.”

 

In summary:
Digital truth: searches, views, clicks, swipes
Digital lies: social media posts, social media likes, dating profiles

(Dis)connected

July 20, 2017

The title of this post is identical to the title of an article by Kirsten Weir in the March 2017 issue of “Monitor on Psychology.” This article reviews research showing how smartphones are affecting our health and well-being, and points the way toward taking back control.

Some of the most established evidence concerns sleep. Dr. Klein Murdock, a psychology professor who heads the Technology and Health Lab at Washington and Lee University, followed 83 college students and found that those who were more attuned to their nighttime phone notifications had poorer subjective sleep quality and greater self-reported sleep problems. Although smartphones are often viewed as productivity-boosting devices, their ability to interfere with sleep can have the opposite effect on getting things done.

Dr. Russell E. Johnson and his colleagues at Michigan State University surveyed workers from a variety of professions. They found that when people used smartphones at night for work-related purposes, they reported that they slept more poorly and were less engaged at work the next day. These negative effects were greater for smartphone users than for people who used laptops or tablets right before bed.

Reading a text or email at bedtime can stir your emotions or set your mind buzzing with things you need to get done. So your mind becomes activated at a time when it’s important to settle down and have some peace.

College students at the University of Rhode Island were asked to keep sleep diaries for a week. The researchers found that 40% of the students reported waking at night to answer phone calls and 47% woke to answer text messages. Students who were more likely to use technology after they’d gone to bed reported poorer sleep quality, which predicted symptoms of anxiety and depression.

FOMO is an acronym for Fear Of Missing Out. In one study, Dr. Larry Rosen, a professor emeritus of psychology at California State University, and his colleagues took phones away from college students for an hour and tested their anxiety levels at various intervals. Light users of smartphones didn’t show any increasing anxiety as they sat idly without their phones. Moderate users began showing signs of increased anxiety after 25 minutes without their phones, but their anxiety held steady at that moderately increased level for the rest of the hour-long study. Heavy phone users showed increased anxiety after just 10 phone-free minutes, and their anxiety levels continued to climb throughout the hour.

Rosen has found that younger generations are particularly prone to feel anxious if they can’t check their text messages, social media, and other mobile technology regularly. But people of all ages appear to have a close relationship with their phones: 76% of baby boomers reported checking voicemail moderately or very often, and 73% reported checking text messages moderately or very often. Anxiety about not being able to check in with text messages and Facebook predicted symptoms of major depression, dysthymia, and bipolar mania.

When research participants were limited to checking email messages just three times a day, they reported less daily stress. This reduced stress was associated with positive outcomes including greater mindfulness, greater self-perceived productivity and better sleep quality.

In another study, participants were asked to keep all their smartphone notifications on during one week; in the other week, they were asked to turn notifications off and to keep their phones tucked out of sight. At the end of the study participants were given questionnaires. During the week of notifications, participants reported greater levels of inattention and hyperactivity compared with their alert-free week. These feelings of inattention and hyperactivity were directly associated with lower levels of productivity, social connectedness, and psychological well-being. Having your attention scattered by frequent interruptions has its costs.

The article also stresses the importance of personal interactions, which are inherently richer. The key to having healthy relationships with technology is moderation. We want to get the best from technology, but at the same time to make sure that it’s not controlling us.

 

Media Multi-tasking

February 4, 2017

Media multitasking is another important topic addressed by Julia Shaw in “THE MEMORY ILLUSION.”  She begins this section as follows:  “Let me tell you a secret.  You can’t multitask.”  This is the way neuroscientist Earl Miller from MIT puts it, “people can’t multitask very well, and when people say they can, they’re deluding themselves…The brain is very good at deluding itself.”  Miller continues, “When people think they’re multitasking, they’re actually just switching from one task to another very rapidly.  And every time they do, there’s a cognitive cost.”

A review done in 2014 by Derk Crews and Molly Russ on the impact task-switching has on efficiency concluded that it is bad for our productivity, critical thinking, and ability to concentrate, in addition to making us more error-prone.  Moreover, they concluded that these consequences are not limited to diminishing our ability to do the task at hand.  They also have an impact on our ability to remember things later.  Task switching also increases stress, diminishes people’s ability to manage a work-life balance, and can have negative social consequences.

Reynol Junco and Shelia Cotten further examined the impact of task-switching on our ability to learn and remember things.  Their research was reported in an article entitled ‘No A 4 U’.  They asked 1,834 students about their use of technology and found that most of them spent a significant amount of time using information and communication technologies on a daily basis.  They found that 51% of respondents reported texting, 33% reported using Facebook, and 21% reported emailing while doing schoolwork somewhat or very frequently.  The respondents reported that while studying outside of class, they spent an average of 60 minutes per day on Facebook, 43 minutes per day browsing the internet, and 22 minutes per day on email.  That is over two hours per day of attempted multitasking while studying.  The study also found that such multitasking, particularly the use of Facebook and instant messaging, was significantly negatively correlated with academic performance; the more time students reported spending using these technologies while studying, the worse their grades were.

David Strayer and his research team at the University of Utah published a study comparing drunk drivers to drivers who were talking on their cell phones.  It is assumed here that most conscious attention is being directed at the conversation and the driving has been relegated to automatic monitoring.  The results were that “When drivers were conversing on either a handheld or a hands-free cell phone, their braking reactions were delayed and they were involved in more traffic accidents than when they were not conversing on a cell phone.”  HM believes that this research was conducted in driving simulators and did not engender any carnage on the road.  Strayer also concluded that driving while chatting on the phone can actually be as bad as drunk driving, with both noticeably increasing the risk for car accidents.

Unfortunately, legislators have not understood this research.  Laws allow hands-free use of cell phones, but it is not the hands that are at issue here.  It is the attention available for driving.  Cell phone use, regardless of whether hands are involved, detracts from the attention needed during driving when emergencies or unexpected happenings occur.

Communications researchers Aimee Miller-Ott and Lynne Kelly studied how constant use of our phones while also engaged in other activities can impede our happiness.  Their position is that we have expectations of how certain social interactions are supposed to look, and if these expectations are violated we have a negative response.
They asked 51 respondents to explain what they expect when ‘hanging out’ with friends and loved ones, and when going on dates.  They found that the mere presence of a visible cell phone decreased the satisfaction of time spent together, regardless of whether the person was constantly using it.  The reasons offered by the respondents for disliking the other person being on their cell phone included the violation of the expectation of undivided attention during dates and other intimate moments.  When hanging out, this expectation was lessened, so the presence of a cell phone was not perceived to be as negative, but was still often considered to diminish the in-person interaction.  Their research corresponded to their review of the academic literature, where there is strong evidence showing that romantic partners are often annoyed and upset when their partner uses a cell phone during the time spent together.

Marketing professor James Roberts has coined the term ‘phub’, an elision of ‘phone’ and ‘snub’, to describe the action of a person choosing to engage with their phone instead of engaging with another person.  For example, you might angrily say, “Stop phubbing me!”  Roberts says that the phone attachment leading to this kind of use behavior has been linked with higher stress, anxiety, and depression.

Designed to Addict

September 8, 2016

“Designed to Addict” is the title of the second chapter in “The Cyber Effect” by Dr. Mary Aiken.  Although the internet was not designed to addict users, it appears that it is addicting many.  Of course, humans are not passive victims; they are allowing themselves to be addicted.  Dr. Aiken begins with the story of a twenty-two-year-old mother, Alexandra Tobias.  She called 911 to report that her three-month-old son had stopped breathing and needed to be resuscitated.  She fabricated a story to make it sound as if an accident had happened, but later confessed that she was playing “Farmville” on her computer and had lost her temper when her baby’s crying distracted her from the Facebook game.  She picked up the baby and shook him violently, and his head hit the computer.  He was pronounced dead at the hospital from head injuries and a broken leg.

At the time of the incident “Farmville” had 60 million active users and was described by its users in glowing terms as being highly addictive.  It was indeed so addictive that “Farmville” Addicts Anonymous support groups were formed and an FAA page was created on Facebook.  Dr. Aiken found this case interesting as a forensic cyberpsychologist for the following reason: the role of technology in the escalation of an explosive act of violence.  She described it as extreme impulsivity, an unplanned spontaneous act.

Impulsivity is defined as “a personality trait characterized by the urge to act spontaneously without reflecting on an action and its consequences.”  Dr. Aiken notes “that the trait of impulsiveness influences several important psychological processes and behaviors, including self-regulation, risk-taking and decision making.  It has been found to be a significant component of several clinical conditions, including attention deficit/hyperactivity disorder, borderline personality disorder, and the manic phase of bipolar disorder, as well as alcohol and drug abuse and pathological gambling.”  Dr. Aiken takes care to make the distinction between impulsive and compulsive.  Impulsive behavior is a rash, unplanned act, whereas compulsive behavior is planned repetitive behavior, like obsessive hand washing.  She elaborates in cyber terms:  “When you constantly pick up your mobile phone to check your Twitter feed, that’s compulsive.  When you read a nasty tweet and can’t restrain yourself from responding with an equally nasty retort (or an even nastier one), that’s impulsive.”

Joining an online community or playing a multiplayer online game can give you a sense of belonging.  Getting “likes” meets a need for esteem.  According to psychiatrist Dr. Eva Ritvo in her article “Facebook and Your Brain,” social networking “stimulates release of loads of dopamine as well as offering an effective cure to loneliness.”  These “feel good” chemicals are also triggered by novelty.  Posting information about yourself can also deliver pleasure.  “About 40 percent of daily speech is normally taken up with self-disclosure—telling others how we feel or what we think about something—but when we go online the amount of self-disclosure doubles.  According to Harvard neuroscientist Diana Tamir, this produces a brain response similar to the release of dopamine.”

Jaak Panksepp is a Washington State University neuroscientist who coined the term affective neuroscience, the biology of arousing feelings or emotions.  He argues that a number of instincts, such as seeking, play, anger, lust, panic, grief, and fear, are embedded in ancient regions of the human brain, built into the nervous system at a fundamental level.  Panksepp explains addiction as an excessive form of seeking.  “Whether the addict is seeking a hit from cocaine, alcohol, or a Google search, dopamine is firing, keeping the human being in a constant state of alert expectation.”

Addiction can be worsened by the stimuli on digital devices that come with each new email, text, or Facebook “like,” so keep notifications turned off unless there is a good justification for keeping them on, and then only for a designated amount of time.

There is technology to help control addictive behavior.  One example is Breakfree, an app that monitors the number of times you pick up your phone, check your email, and search the web.  It offers nonintrusive notifications and provides you with an “addiction score” every day, every week, and every month to track your progress.  There are many more such apps, such as Checky and Calm, but ultimately it is you who needs to control your addictions.

Mindfulness is a prevalent theme in the healthy memory blog.  It is a Buddhist term “to describe the state of mind in which our attention is directed to the here and now, to what is happening in the moment before us, a way of being kind to ourselves and validating our own experience.”  As a way of staying mindful and keeping track of time online, Dr. Aiken has set her laptop computer to call out the time, every hour on the hour, so that even as she is working in cyberspace, where time flies, she is reminded every hour of the temporal real world.

Internet addictive behavior expert Kimberly Young recommends three strategies:
1.  Check your checking.  Stop checking your device constantly.
2.  Set time limits.  Control your online behavior—and remember, kids will model their behavior on adults.
3.  Disconnect to reconnect.  Turn off devices at mealtimes—and reconnect with the family.
Some people find what are called internet sabbaths helpful and disconnect for a day or a weekend.  Personally HM believes in having a daily disciplined schedule to prevent a beneficial activity from becoming a maladaptive behavior.

Much more is covered in the chapter, including compulsive shopping, but the same rule applies.  To be aware of potential addiction, monitor your behavior and make the appropriate modifications.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Anne Applebaum’s Column on Facebook

December 14, 2015

The title of her column was “Undoing Facebook’s damage.”  Anyone who has read any of my sixteen previous posts about Facebook should be aware that I am not a fan.  However, I must applaud Mark Zuckerberg and his wife on their pledge to give away $45 billion.  Nevertheless, I also applaud Anne Applebaum for her column.  Here is her advice: “…use it to undo the terrible damage done by Facebook and other forms of social media to democratic debate and civilized discussion all over the world.”  She goes on to say that weak democracies suffer the most.  Given the extensive damage done in the USA, that is an extraordinary amount of damage.  Just let me cite one example: the conversion of Moslems to radical jihadism.  This is a problem most acutely felt by Moslems, in general, and by the parents of those converted, in particular.

Of course, this was not Zuckerberg’s intention.  Rather it is an unintended and rather extreme consequence.  Applebaum goes on to write, “The longer-term impact of disinformation is profound:  Eventually it means that nobody believes anything.”

Readers of the healthy memory blog should be aware that it is extremely difficult to disabuse people of their false beliefs.  Moreover, there are organizations that produce false information.  This has become an activity with its own name: agnogenesis.

So an activity is needed to counter agnogenesis.  Disagnogenesis?  Please help, Mr. Zuckerberg.

© Douglas Griffith and healthymemory.wordpress.com, 2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Organizing Our Social World

December 10, 2014

“Organizing Our Social World” is the title of another chapter in Daniel J. Levitin’s book “The Organized Mind: Thinking Straight in the Age of Information Overload.” As I’ve mentioned in previous posts, when I completed my Ph.D. in cognitive psychology one of the leading problems was information overload, and that was in the era before personal computers. Now we have the internet, aided and abetted by mobile technology, so technology is omnipresent. It is apparent from this chapter that longstanding problems in social psychology and human interaction have been exacerbated by technology. I find it amazing when I see a group of four people dining together, each preoccupied with their smartphones. And when I attend professional meetings where the objective is direct interaction between and among human beings, most people appear to be interacting with their smartphones.

The intention for social media is that they are not a replacement for personal contact, but a supplement that provides an easy way to stay connected to people who are too distant or too busy. Levitin hints that there might be an illusion to this, writing “Social networking provides breadth but rarely depth, and in-person contact is what we crave, even if online contact seems to take away some of that craving…The cost of all our electronic connectedness appears to be that it limits our biological capacity to connect with other people.”

Lying and misrepresentations become a much larger problem in the online world. A hormone, oxytocin, has been identified with trust. It has been called the love hormone in the popular press because it is especially pronounced in sexual interactions. In one mundane experiment, research participants watched political speeches and rated for whom they would be likely to vote. The participants were under the influence of oxytocin for half the speeches; for the other half they received a placebo, an inert substance, and of course they did not know when they were under the influence of the drug. When asked whom they would vote for or trust, the participants selected the candidates they viewed while oxytocin was in their systems. [To the best of my knowledge, such techniques have yet to be used in an official election.]

Interestingly, levels of oxytocin also increase during gaps in social support or poor social functioning. Recent theory holds that oxytocin regulates the salience of social information and is capable of eliciting positive or negative social emotions, depending on the situation of the individual. In any case, these data support the importance of direct social contact by identifying biological components underlying this type of interaction.

I was surprised that little, if any, attention was paid to Facebook, the premier social medium. As I like to rant periodically regarding Facebook, and considerable time has passed since my last rant, I’ll try to fill in this lacuna. I detest Facebook, although I understand that many find it convenient for keeping in touch with many people with little effort. Apparently, businesses also find Facebook necessary and profitable. I use Facebook for a small number of contacts, but I am overwhelmed with notes of little interest. At the outset I did not want to refuse anyone friending me out of fear that this someone might be somebody I should, but don’t, remember. Similarly, I find it uncomfortable unfriending people, although at times that seems to be the better course of action. Perhaps there is some way of setting controls so that the number of messages is few and few people are offended, but I have no way of knowing what it is.

I find LinkedIn much more palatable and even useful. Still, one must regard endorsements and statements of expertise with some caution. That is, they are useful provided one looks for corroborating information. I like email and email Listservs. However, I've learned that younger folks have developed some complicated and, in my view, unnecessary protocols for using email, texting, and social media. I'll quit before I start sounding like even more of a cranky old man.

Blogging Buddhists

October 2, 2013

Yes. Buddhists do use technology and they blog. This post is so titled because of the third principle of contemplative computing1, Be Mindful. We need to learn what being mindful feels like and to learn to see opportunities to exercise it while being online or using devices.

Buddhist monastics use the web to test their beliefs and objectives, that is, their mindfulness, capacity for compassion, and right behavior. In the digital world it is easy to forget that we're ultimately interacting with our fellow human beings rather than Web pages. Damchoe Wangmo recommends that you “investigate your motivation before each online action, to observe what is going on in your mind,” and stop if you're driven by “afflictive emotions” like jealousy, anger, hatred, or fear.2 Choekyi Libby watches herself online to “make sure I’m doing what I’m doing motivated by beneficial intention.”3 Others argue that we need to bring empathy to technology, to have our interactions be informed by our own ethical guidelines and moral sensibility. If we can be a positive presence online, we can be an even better one in the real world. “Approaching your interactions with information technologies as opportunities to test and strengthen your ability to be mindful; treating failures to keep focused as normal, predictable events that you can learn from; observing what helps you to be mindful online and what doesn’t—in other words engaging in self-observation and self-experimentation—can improve your interactions with technologies and build your extended mind.”4

The following Rules for Mindful Social Media are taken from Appendix Two of The Distraction Addiction:

Engage with care. Think of social media as an opportunity to practice what the Buddhists call right speech, not as an opportunity to get away with being a troll.

Be mindful about your intentions. Ask yourself why you’re going onto Facebook or Pinterest. Are you just bored? Angry? Is this a state of mind you want to share?

Remember the people on the other side of the screen. It’s easy to focus your attention on clicks and comments, but remember that you’re ultimately dealing with people, not media.

Quality, not quantity. Do you have something you really want to share, something that’s worth other people’s attention? Then go ahead and share. But remember the aphorism carved into the side of the Scottish Parliament: Say little but say it well.

Live first, tweet later. Make the following promise to yourself: I will never again write the words OMG, I’m doing x and tweeting at the same time LOL.

Be deliberate. Financial journalist and blogger Felix Salmon once lamented that most people believe that online content is not supposed to be read but reacted to. Just as you shouldn’t let machines determine where you place your attention, you shouldn’t let the words of others drive what you say in the public sphere. Being deliberate means that you won’t chatter mindlessly or feed trolls. You’ll say but little and say it well.

The remaining five principles of contemplative computing will be discussed in subsequent healthymemory blog posts. The first two principles were discussed in the immediately preceding posts.

1(2013) Pang, Alex Soojung-Kim. The Distraction Addiction

2Ibid. p. 219

3Ibid. p.219

4Ibid. Pp 221-222.

Are Facebook Users More Satisfied with Life?

September 15, 2013

This question has been answered in a study published in the Public Library of Science by Ethan Kross of the University of Michigan and Philippe Verduyn of Leuven University in Belgium. They recruited 82 Facebook users in their late teens or early twenties. Their Facebook activity was monitored for two weeks, and each participant had to report five times a day on their state of mind and their direct social contacts (phone calls and meetings with other people).

The results showed that the more a participant used Facebook in the period between two reports, the worse she reported feeling the next time she filled in a report. The participants rated their satisfaction with life at the beginning and again at the end of the study. Participants who used Facebook frequently were more likely to report a decline in satisfaction than those who visited the site infrequently. However, there was a positive association between the amount of direct social contact a volunteer had and how positive she felt. So socialization in the real world, as opposed to the virtual or cyber world, did increase positive feelings.

So why was socialization in the cyber world making people feel worse? This question was addressed in another study conducted at Humboldt University and Darmstadt Technical University, both located in Germany. They surveyed 584 Facebook users in their twenties and found that the most common emotion aroused by Facebook is envy. Comparing themselves with peers who had doctored their photographs and amplified, if not lied about, their achievements left the readers envious.

The question remains whether the same results would be found in older Facebook users. In other words, does age make us wiser?1

1Get a Life! The Economist, April 17, 2013, p.68.

Dealing with Technology and Information Overload

July 28, 2013

Whenever I read or hear something about our being victims of technology, I become extremely upset. I’ve written blog posts on this topic (See “Commentary on Newsweek’s Cover Story iCrazy” and “Net Smart.”) We are not passive entities. We need to be in charge of our technology. There was a very good article on this topic in the August 2013 issue of Mindful magazine. It is titled “A User’s Guide to Screenworld,” and in it Richard Fernandez of Google sits down with Arturo Bejar, director of engineering at Facebook, and Irene Au, the vice president of product and design at Udacity. Here are five strategies for dealing with different components of this issue.

Information Overload. There is way too much information to deal with and we must shield ourselves from being overwhelmed. We must realize that our time is both limited and costly. So we need to be selective and choose our sources wisely. When we feel our minds tiring we should rest or move on.

Constant Distraction. Multi-tasking costs. There is a cost in performing more than one task at a time. So try to complete one task or a meaningful segment of a task before moving on to another task. Let phone calls go to voice mail. Respond to email at designated times rather than jumping to each email as it arrives.

Friends, Partners, Stuck on Their Devices. Personally I cannot stand call waiting. I don’t have it on my phone, and if someone goes to their call waiting while talking with me, they will likely find that I am not on the phone should they return. Technology is no excuse for being discourteous. Moreover, technology provides us a means for being courteous, voice mail. So unless there is an emergency lurking, there is no reason for taking the call. Clearly, when there are job demands or something really important, there are exceptions, but every effort should be extended to be courteous. When there are other people present, give them your attention, not your devices. And call it to their attention when you feel you are being ignored.

Social Media Anxiety. Try to keep your involvement with social media to a minimum. The friending business on Facebook can be quite annoying. Moreover, for the most part these friends are superficial. Remember Dunbar’s Number (See the healthymemory blog posts “How Many Friends are Too Many?”, “Why is Facebook So Popular?”, and “Why Are Our Brains So Large?”). Dunbar’s number, the maximum number of people we can keep track of at one time, is 150, but the number of people with whom we speak frequently is closer to 5. I would be willing to up the number of close friends a bit, but it is still small. And he says that there are about 100 people we speak to about once a year.

Children Spending Too Much Time Staring at Screens. The advice here is to express an interest in your children’s digital life. Try to share it with them and try to develop an understanding of how to deal with technology and information overload.

Let me end with a quote from Irene Au in the article, which is definitely worth repeating: “We need to get up from our desks and move. There is a strong correlation between cognition and movement. We’re more creative when we move.”

© Douglas Griffith and healthymemory.wordpress.com, 2013. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Can Social Networking Make It Easier to Solve Real-World Problems?

September 23, 2012

An article in The Economist1 raised this question. In 2011, Facebook analysed 72 million users of its social networking site and found that an average of 4.7 hops could link any two of them via mutual friends. This is even less than the Six Degrees of Separation popularized by John Guare in his play of the same name.
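A figure like 4.7 hops is an average shortest-path length over the friendship graph, and each individual path can be found with breadth-first search. As a minimal sketch, here is BFS over a toy friendship graph (the names and connections are invented for illustration, not Facebook data):

```python
from collections import deque

def hops(graph, start, goal):
    """Return the number of friend-to-friend hops between two users,
    using breadth-first search (shortest path in an unweighted graph)."""
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        user, dist = queue.popleft()
        for friend in graph[user]:
            if friend == goal:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None  # the two users are not connected at all

# Toy friendship graph: ann-bob-cat-dan form a chain
graph = {
    "ann": ["bob"],
    "bob": ["ann", "cat"],
    "cat": ["bob", "dan"],
    "dan": ["cat"],
}
print(hops(graph, "ann", "dan"))  # 3 hops, via bob and cat
```

Averaging this hop count over many sampled pairs of users is what yields a network-wide number like 4.7.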

In the United States the Defense Advanced Research Projects Agency (DARPA) staged the Red Balloon Challenge in 2009. It was trying to determine how quickly and efficiently information could be gathered using social media. Competitors raced to find ten red weather balloons that had been tethered at random locations throughout the United States for a $40,000 prize. MIT had the winning team, which found all ten balloons in nine hours using the following incentive-based system to encourage participation. The first person to send the correct coordinates of a balloon received $2,000. Whoever recruited that person received $1,000, the recruiter’s recruiter received $500, and so on.

DARPA staged a new challenge this year, the Tag Challenge. This time the goal was to locate and photograph five people, each wearing a unique T-shirt, in five named cities across two continents. All five had to be identified within 12 hours from nothing more than a mugshot. The prize fund was $5,000. This time none of the teams managed to find all five targets. However, one team with members from MIT, the universities of Edinburgh and Southampton, and the University of California at San Diego did manage to find three, one in each of the following cities: New York, Washington DC, and Bratislava. This team had a website and a mobile app to make it easier to report findings and to recruit people. Each finder was offered $500 and whoever recruited the finder $100, so even someone who did not know anyone in one of the target cities had an incentive to recruit someone who did. The team promoted itself on Facebook and Twitter. Nevertheless, most participants just used conventional email. It was conjectured that in the future smart phones might have an app that can query people all over the world, who can then steer the query toward people with the right information.

To return to the title of this post, Can Social Networking Make It Easier to Solve Real-World Problems, I would conclude that if the problem involves finding someone or something, the answer is yes. But real-world problems typically involve collaboration among diverse people. In this respect one might argue that social media are actually a detriment to solving real-world problems. Social media are good at bringing people of like minds together. If what is needed is collaboration among people of diverse opinions, social media would not seem productive and might very well be counterproductive.

However, there still might be solutions using technology. Wikis provide a useful tool for collaboration. Another approach would be to have people of relevant but diverse perspectives interact with each other anonymously using computers. Physical cues and identities would be absent. This would negate or minimize ego or group involvement and allow an exchange of information and ideas with the goal of arriving at a viable consensus. The number of people who can collaborate at a given time appears to be a constraint.

1Six Degrees of Mobilization, The Economist Technology Quarterly, September 2012, p.8.

Why Are Our Brains So Large?

September 16, 2012

A recent article1 provides a possible answer. The article’s title is Social Network Size Linked to Brain Size. Perhaps the most prominent hypothesis is that our enlarged brains allow us to be smarter than our competitors. We are better at abstract thinking, better with tools (I am a personal exception here), and better at adapting our behavior than our prey and predators.

In 1992 anthropologist Robin Dunbar (Remember Dunbar’s Number? See the healthymemory blog posts “Why Is Facebook So Popular?” and “How Many Friends are Too Many?”) published research showing that in primates the ratio of the size of the neo-cortex to that of the rest of the brain consistently increases with the size of the social group. So the Tamarin monkey has a brain size ratio of around 2.3 and an average social group size of around 5 members, whereas a Macaque monkey has a brain size ratio of about 3.8 but a larger average group size of around 40 members. Consequently, Dunbar advanced his “social brain hypothesis,” which states that the relative size of the neo-cortex rose as social groups became larger in order to maintain the complex set of relationships necessary for stable co-existence. Moreover, he suggested that given the human brain ratio we have an expected social group size of about 150, the size of what Dunbar called a clan.

Dunbar’s previous worked was focused on differences among species. His more recent work focuses on differences within species. He has found that the size of each individual’s social network is linearly related to the neural volume in the orbital prefrontal cortex. His research has shown that more than just more neural material in the prefrontal cortex is needed. Psychological skills are also needed, especially an ability to understand the other person’s state of mind. This cognitive skill is called a “theory of mind.”

So we have two explanations of why our brains are so large. One is that we are better at abstract thinking and adapting our behavior. The other is that the larger brain is needed to accommodate larger social networks that are beneficial to our survival. The astute healthymemory blog reader will likely quickly realize that these two hypotheses are not mutually exclusive. Most likely they are both at work.

1http://www.scientificamerican.com/article.cfm?id=social-network-size-linked-brain-size

My Problems with Facebook

June 9, 2012

I don’t like Facebook. I find it to be unwieldy and cluttered up with junk in which I have no interest. I assume there are means for tidying things up, but I don’t have the time or patience to learn them. There are two reasons I have a Facebook account. One is to have a means of providing additional exposure for this blog. The other reason is that I do not want to offend friends and old acquaintances. My facility with Facebook is such that there are times when I think I might have responded, but I am not sure, so I don’t know whether I am fulfilling my second objective.

In the early days, I responded positively to all friending requests. I didn’t want to offend anyone, and I was especially afraid that I might offend an old acquaintance who had momentarily slipped my mind. However, there came a time when I realized that this was foolish. Why be a friend to someone I do not know and have no reason to know just so they can boast of the number of people they have friended? There is a fairly limited number of people with whom one can be genuinely friends (See the Healthymemory Blog post, “How Many Friends are Too Many.”)

The vast majority of stuff on my page consists of items and people that are of no interest to me. Of course, the stuff from my real friends is there and I treasure it. It is just that I would rather correspond privately by email, but Facebook discourages one from doing this. I appreciate the convenience of being able to contact many people, so I continue to endure.

One of my pet peeves is Farmville. Notes on purchasing something or other for Farmville periodically appear. I am still working and don’t have time to deal with this. I have a hunch that most of these requests are coming from people who are retired. If retirement reduces one to playing the Farmville game, then you can count on me never retiring!

Feel free to tell me what a fuddy-duddy I am; what a poor sport I am; or to pity the poor people I am offending. What would be most appreciated are tips on how to clean up my Facebook Page!

The Adverse Effects of Social Isolation

October 23, 2011

Lonely people have a higher risk of everything from heart attacks to dementia, and from depression to death. However, people who are satisfied with their social lives sleep better, age more slowly, and have more favorable responses to vaccines. John Cacioppo of the University of Chicago, an expert on the effects of social isolation, says that curing loneliness is as good for your health as giving up smoking. Charles Raison of Emory University, who studies mind-body interactions, agrees with Cacioppo. He has said, “It’s probably the most powerful behavioral finding in the world. People who have rich social lives and warm open relationships don’t get sick and they live longer.”1

Although it is true that some people who are lonely might not take good care of themselves, Cacioppo states that there are direct physiological mechanisms related to the effects of stress. Cacioppo has found that genes involved in cortisol signaling and the inflammatory response are up-regulated in lonely people and that immune cells important in fighting bacteria are more active too. His conjecture is that our bodies might have evolved so that in situations of perceived social isolation they trigger branches of the immune system involved in wound healing and bacterial infection. On the other hand, people in a group might favor the immune response for fighting viruses, which are more likely to be spread among people living in close contact.

It is important to note that these differences relate most strongly to how lonely people believe themselves to be, rather than to the actual size of their social network. Cacioppo thinks that our attitude toward others is key here. Lonely people become overly sensitive to social threats and see other people as potentially dangerous. In a review of previous studies that he published last year, he found that disabusing lonely people of this attitude reduced loneliness more effectively than giving people more opportunities for interaction or teaching social skills.2

Only one or two close friends might suffice if you are satisfied with your social life. Problems arise when you feel lonely.3 In the jargon of the Healthymemory Blog, this is largely a matter of transactive memory. Transactive memory refers to shared memories and to the knowledge one has of others’ memories. These memories can form as a result of person-to-person interactions or via technology, such as the internet. It should be noted that having hundreds of friends on Facebook would not necessarily indicate that you are not lonely. “What is important is the quality rather than the quantity of these relationships. An evolutionary biologist, Robin Dunbar, came up with a number he modestly named “Dunbar’s number.” He bases this number on the size of the human brain and its complexity. He calculates the maximum number of relationships our brain can keep track of at one time to be about 150. This number includes all degrees of relationships. This is the maximum number of relationships. The number of close, meaningful relationships is much smaller. He estimates that we have a core group of about five people with whom we speak frequently. I find this absolute number a tad small, but in the general ballpark. At the other extreme there are about 100 people with whom we speak about once a year. The 150 number is an absolute maximum of people we can even generously consider as friends. So Facebook users who have friended several hundred friends have essentially rendered the term “friend” meaningless.” (From the Healthymemory Blog post, “Why is Facebook So Popular?”; also see the Healthymemory Blog post “How Many Friends are Too Many?”).

1From “Trust People” in Heal Thyself by Marchant, J. (2011), New Scientist., 27 August, p. 35.

2Cacioppo, J. (2010). Annals of Behavioral Medicine, 40, p. 218.

3This part of this post was based heavily on the article by Marchant in the first footnote above.

Why is Facebook So Popular?

July 10, 2011

I am definitely confused. Not only is there an enormous number of individual users, but companies, societies, organizations, television programs, and many other entities also feel a necessity to establish a presence on Facebook. Although most of these entities have good websites, they still feel compelled to maintain a Facebook presence.

Personally, I regard Facebook as an annoyance. It can be difficult to use, and I see little value in it. I have loads of requests from people I don’t know who indicate that they want to friend me. Early on, I consented because I did not want to be rude. Even now I worry that I might refuse the request of someone I did know long ago. I still accept requests from people who have been recommended by someone I know, but I do this only so as not to offend a true friend. I know of nothing that has ever developed from this “friending.” With the exception of birthday greetings I receive from old acquaintances, I have seen nothing of value on Facebook. Just one inanity after another. I worry about people who do engage extensively in these activities.

I asked a friend of mine, who is extensively knowledgeable about cyberspace and who apparently spends significant time there, what he thinks about Facebook. His response was, “Never have touched it.  Who wants to be “connected” to everybody out there?!  Not me!”

I think he raises a good question. An earlier Healthymemory Blog post entitled “How Many Friends are Too Many?” addressed that very question. An evolutionary biologist, Robin Dunbar, came up with a number he modestly named “Dunbar’s number.” He bases this number on the size of the human brain and its complexity. He calculates the maximum number of relationships our brain can keep track of at one time to be about 150. This number includes all degrees of relationships. This is the maximum number of relationships. The number of close, meaningful relationships is much smaller. He estimates that we have a core group of about five people with whom we speak frequently. I find this absolute number a tad small, but in the general ballpark. At the other extreme there are about 100 people with whom we speak about once a year. The 150 number is an absolute maximum of people we can even generously consider as friends. So Facebook users who have friended several hundred friends have essentially rendered the term “friend” meaningless.

MIT social psychologist Sherry Turkle contends that social networking is eroding our ability to live comfortably offline.1 Although she makes a compelling argument, it is not the technology that is to be blamed, but rather how we use the technology. After all, the technology is not going to go away. There might be underlying psychological, genetic, or epigenetic substrates that contribute to the problem. Facebook, itself, can be regarded as providing affordances that contribute to this abuse.

1Price, M. (2011). Questionnaire; Alone in the Crowd. Monitor on Psychology, June, 26-28.

Google vs. Facebook Revisited

February 9, 2011

A number of blog posts back I expressed disappointment that Facebook had replaced Google in terms of usage. The stated grounds for my disappointment were that Facebook consisted primarily of superficial postings. True, they are enjoyable and fun, but little is learned and there is little cognitive growth. Although it is true that there are trivial searches on Google, a Google search is more likely to be for some useful point of knowledge. So, according to my line of reasoning, Google users were more likely to benefit from cognitive growth than were Facebook users.

In retrospect, I think that I might have been a bit unfair in my Facebook criticism, even though I did admit that many professional organizations are on Facebook. This blog post falls into the category of transactive memory. Now if you search for transactive memory on Wikipedia (or search for it on Facebook, which will link you to Wikipedia) you will find that it is memory shared among a group. Actually, the Healthymemory Blog is waging a rather lonely vigil by including the other meaning of transactive memory, namely, information that is found in all forms of technology (the internet, but also conventional libraries). Although I do think that Google provides a more ready entry to transactive memory in the sense of technology, Facebook provides an entry to transactive memory in terms of memories shared with people.

I should also note that cognitive growth does not require delving into deep academic topics. For purposes of a healthy memory, information about sports and movies can form new memory circuits and reinvigorate old memory circuits in the brain. So the important point is to be cognitively active. In this respect Facebook can be quite helpful. It can serve as a resource for sharing information and collaborating with fellow human beings.

Personally, I provide a poor example. The Healthymemory Blog does have a Facebook posting, but I have done nothing with it, so it is rather sparse. I am interested in any experiences readers of this blog might have had in using Facebook in learning about topics of interest and in sharing information regarding those topics of interest. Please leave your comments. 

Google vs. Facebook

January 19, 2011

I found the news that Facebook had surpassed Google in usage quite depressing, particularly with respect to considerations regarding cognitive growth and development. Of course, it seems that everyone, myself included, is on Facebook. Included here are professional organizations and businesses. So the news should not be surprising; so why then do I find it depressing?

Let us compare and contrast the reasons for using Google against the reasons for using Facebook. Someone who uses Google is usually trying to learn something. This might simply be information on a restaurant, a movie, or a stock investment. Or someone might be looking for the definition of a word or trying to understand a topic. Someone who is really interested in a topic might be using Google Scholar. Or someone might be trying to remember the name of something by searching for other things that remind him of it. It seems to me that these activities lead to cognitive growth, some, of course, to deeper levels than others. And you can use Google to find people and build social relationships.

Perhaps it is this last activity where Facebook excels over Google. It is true that one can build and renew social relationships, but it seems that most “friending” is done at a superficial level. Some people “friend” just to boast of the number of friends they have. I continually receive “friend” requests from people I don’t know and can find no reason for wanting to know. With the exception of genuine social relationships, I see little on Facebook that would foster cognitive growth or a healthy memory. When I review most of the postings on Facebook, I do not think it would be any great loss if they were lost forever. The loss of a truly great search engine like Google, however, would be catastrophic.

Of course, Myspace was once a top website, and it has declined seriously in popularity. I just looked at the top websites as of January 5, 2011 and saw that Google was back on top, with wikipedia.org in 7th place. Wikipedia should be one of the premier websites for cognitive growth.

I would like to hear your opinions on this topic. Please submit your comments.
