Archive for the ‘Transactive Memory’ Category

What Do We Know, What Can We Do?

January 24, 2019

This is the twelfth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” Having raised an enormous number of problems, the authors fortunately also propose possible solutions.

The military is already training and experimenting for the new environment. The Joint Readiness Training Center at Fort Polk, Louisiana, is a continuously operating field laboratory. It is useful not only for training, but also for simulating responses to different situations, so that possible solutions can be evaluated before actual conflict. The Army needs to understand how to train for this war, and Fort Polk has a brand-new simulation for the task: SMEIR (Social Media Environment and Internet Replication). SMEIR simulates the blogs, news outlets, and social media accounts that intertwine to form a virtual battlefield.

The authors have also claimed that LikeWar has rules, and has tried to articulate them:

“First, for all the sense of flux, the modern information environment is becoming stable. The internet is now the preeminent communications medium in the world; it will remain so for the foreseeable future. Through social media the web will grow bigger in size, scope, and membership, but its essential form and centrality to the information ecosystem will not change.”

“Second, the internet is a battlefield. It is a platform for achieving the goals of whichever actor manipulates it most effectively. Its weaponization, and the conflicts that erupt on it, define both what happens on the internet and what we take away from it.”

“Third, this battlefield changes how we must think about information itself. If something happens, we must assume that there’s likely a digital record of it that will surface seconds or years from now. But an event only carries power if people also believe that it happened. So a manufactured event can have real power, while a demonstrably true event can be rendered irrelevant. What determines the outcome isn’t mastery of the “facts,” but rather a back-and-forth battle of psychological, political, and algorithmic manipulation.”

“Fourth, war and politics have never been so intertwined. In cyberspace, the means by which the political or military aspects of this competition are won are essentially identical. Consequently, politics has taken on elements of information warfare, while violent conflict is increasingly influenced by the tug-of-war for online opinion. This also means that the engineers of Silicon Valley, quite unintentionally, have turned into global power brokers. Their most minute decisions shape the battlefield on which both war and politics are increasingly decided.”

“Fifth, we’re all part of the battle. We are surrounded by countless information struggles—some apparent, some invisible—all of which seek to alter our perceptions of the world. Whatever we notice, whatever we “like,” whatever we share, becomes the next salvo. In this new war of wars, taking place on the network of networks, there is no neutral ground.”

For governments, the first and most important step is to take this new battleground seriously. The authors write, “Today, a significant part of the American political culture is willfully denying the new threats to its cohesion. In some cases, it is colluding with them.”

“Too often, efforts to battle back against online dangers emanating from actors at home and abroad have been stymied by elements within the U.S. government. Indeed, at the time we write this in 2018, the Trump White House has not held a single cabinet-level meeting on how to address the challenges outlined in this book, while its State Department refused to increase efforts to counter online terrorist propaganda and Russian disinformation, even as Congress allocated nearly $80 million for the purpose.”

“Similarly, the American election system remains remarkably vulnerable, not merely to hacking of the voting booth, but also to the foreign manipulation of U.S. voters’ political dialogue and beliefs. Ironically, although the United States has contributed millions of dollars to help nations like Ukraine safeguard their citizens against these new threats, political paralysis has prevented the U.S. government from taking meaningful steps to inoculate its own population. Until this is reframed as a nonpartisan issue—akin to something as basic as health education—the United States will remain at grave risk.”


The Conflicts That Drive the Web and the World

January 23, 2019

This is the eleventh post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” The title of this post is identical to the subtitle of the chapter titled “Likewar.” In 1990 two political scientists at the RAND Corporation, the Pentagon’s think tank, started to explore the security implications of the internet. John Arquilla and David Ronfeldt made their findings public in 1993 in a revolutionary article titled “Cyberwar Is Coming!” They wrote that “information is becoming a strategic resource that may prove as valuable in the post-industrial era as capital and labor have been in the industrial age.” They argued that future conflicts would not be won by physical forces, but by the availability and manipulation of information. They warned of “cyberwar,” battles in which computer hackers might remotely target economies and disable military capabilities.

They went further and predicted that cyberwar would be accompanied by netwar. They explained: It means trying to disrupt, damage, or modify what a target population “knows” or thinks it knows about itself and the world around it. A netwar may focus on public or elite opinion, or both. It may involve public diplomacy measures, propaganda and psychological campaigns, political and cultural subversion, deception of or interference with the local media…In other words, netwar represents a new entry on the spectrum of conflict that spans economic, political, and social as well as military forms of ‘war.’

Early netwar became the province of far-left activists and pro-democracy protesters, beginning with the 1994 Zapatista uprising in Mexico and culminating in the 2011 Arab Spring. In time, terrorists and far-right extremists also began to gravitate toward netwar tactics. The balance shifted for disenchanted activists when dictators learned to use the internet to strengthen their regimes. For the authors, the moment came when they saw how ISIS militants used the internet not just to sow terror across the globe, but to win battles in the field. For Putin’s government it came when the Russian military reorganized itself to strike back against what it perceived as a Western information offensive. For many in American politics and Silicon Valley, it came when the Russian effort poisoned the networks with a flood of disinformation, bots, and hate.

In 2011, DARPA, the Pentagon’s research agency, launched the new Social Media in Strategic Communication program to study online sentiment analysis and manipulation. About the same time, the U.S. military’s Central Command began overseeing Operation Earnest Voice to fight jihadists across the Middle East by distorting Arabic social media conversations. One part of this initiative was the development of an “online persona management service,” essentially sockpuppet software, “to allow one U.S. serviceman or woman to control up to 10 separate identities based all over the world.” Beginning in 2014, the U.S. State Department poured vast amounts of resources into countering violent extremism (CVE) efforts, building an array of online organizations that sought to counter ISIS by launching information offensives of their own.

The authors say that as national militaries have reoriented themselves to fight global information conflicts, the domestic politics of these countries have also morphed to resemble netwars. The authors write, “Online, there’s little difference in the information tactics required to “win” either a violent conflict or a peaceful campaign. Often, their battles are not just indistinguishable but also directly linked in their activities (such as the alignment of Russian sockpuppets and alt-right activists). The realms of war and politics have begun to merge.”

Memes and memetic warfare also emerged. Pepe the Frog, a green cartoon character, started out as a dumb internet meme. In 2015, Pepe was adopted as the banner of Trump’s vociferous online army. By 2016, he’d also become a symbol of a resurgent tide of white nationalism, declared a hate symbol by the Anti-Defamation League. Trump tweeted a picture of himself as an anthropomorphized Pepe. Pepe was ascendant by 2017. Trump supporters launched a crowdfunding campaign to erect a Pepe billboard “somewhere in the American Midwest.” On Twitter, Russia’s UK embassy used a smug Pepe to taunt the British government in the midst of a diplomatic argument.

Pepe formed an ideological bridge between trolling and the next-generation white nationalist, alt-right movement that had lined up behind Trump. The authors note that Third Reich phrases like “blood and soil,” filtered through Pepe memes, fit surprisingly well with Trump’s America First, anti-immigration, anti-Islamic campaign platform. The wink and nod of a cartoon frog allowed a rich, but easily deniable, symbolism.

Pepe transformed again when Trump won. Pepe became representative of a successful, hard-fought campaign—one that now controlled all the levers of government. On Inauguration Day in Washington, DC, buttons and printouts of Pepe were visible in the crowd. Online vendors began selling a hat printed in the same style as those worn by military veterans of Vietnam, Korea, and World War II. It proudly proclaimed its wearer a “Meme War Veteran.”

The problem with memes is that, by hijacking or chance, a meme can come to contain vastly different ideas than those that inspired it, even as it retains all its old reach and influence. And once a meme has been so redefined, it becomes nearly impossible to reclaim. Making something go viral is hard; co-opting or poisoning something that’s already viral can be remarkably easy. U.S. Marine Corps Major Michael Prosser published a thesis titled “Memetics—A Growth Industry in US Military Operations.” Prosser’s work kicked off a tiny DARPA-funded industry devoted to “military memetics.”

The New Wars for Attention and Power

January 22, 2019

This is the tenth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” The title of this post is identical to the subtitle of the book’s chapter “Win the Net, Win the Day.”

In a 1974 RAND Corporation report that became one of the foundational studies of terrorism, Brian Jenkins declared, “Terrorism is theater.” The difference between the effectiveness of the Islamic State and that of past terror groups was not the brains of ISIS; it was the medium it was using. Mobile internet access could be found everywhere; smartphones were available in any bazaar. Advanced video and image editing tools were just one illegal download away, and an entire generation was well acquainted with their use. For those who weren’t, there were free online classes offered by a group called Jihadi Design. It promised to take ISIS supporters ‘from zero to professionalism’ in just a few sessions. The most dramatic change from past terrorism was that distributing a global message was as easy as pressing “send,” with the dispersal facilitated by a network of super-spreaders beyond any one state’s control.

ISIS networked its propaganda, pushing out a staggering volume of online messages. In 2016 Charlie Winter counted nearly fifty different ISIS media hubs, each based in a different region with a different target audience, but all threaded through the internet. These hubs were able to generate over a thousand “official” ISIS releases, ranging from statements to online videos, in just a one-month period.

ISIS spun its tale through narrative. Human minds are wired to seek and create narratives. Every moment of the day, our brains are analyzing new events and fitting them into the thousands of different narratives already stowed in our memories. In 1944 psychologists Fritz Heider and Marianne Simmel produced a short film that showed three geometric figures (two triangles and a circle) bouncing off each other at random. They screened the film to a group of research subjects and asked them to interpret the shapes’ actions. All but one of the subjects described these abstract objects as living beings; most saw them as representations of humans. In the shapes’ random movements the subjects saw motives, emotions, and complex personal histories: the circle was “worried,” one triangle was “innocent” and the other was “blinded by rage.” Even in crude animation, all but one observer saw a story of high drama.

The first rule in building effective narratives is simplicity. In 2000, the average attention span of an internet user was measured at twelve seconds. By 2015 it had shrunk to eight seconds. During the 2016 election, Carnegie Mellon University researchers studied and ranked the complexity of the candidates’ language (using the Flesch-Kincaid score). They found that Trump’s vocabulary measured at the lowest level of all the candidates, comprehensible to someone with a fifth-grade education. This phenomenon is consistent with a larger historic pattern. Starting with George Washington’s first inaugural address, which was one of the most complex overall, American presidents communicated at a college level only when newspapers dominated mass communication. But each time a new technology took hold, complexity dropped. The authors write, “To put it another way: the more accessible the technology, the simpler a winning voice becomes. It may be Sad! But it is True!”
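
For readers curious how such readability scores are computed, here is a minimal sketch of the standard Flesch-Kincaid grade-level formula in Python. It is only an illustration of the published formula, not the Carnegie Mellon team’s actual pipeline, and the syllable counter is a rough heuristic.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / max(1, len(sentences))
    syllables_per_word = syllables / max(1, len(words))
    # Published Flesch-Kincaid grade-level formula.
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

# Short, simple sentences score at a very low grade level.
print(round(flesch_kincaid_grade("We will build a wall. It will be a great wall."), 1))
```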

The second rule of narrative is resonance. Nearly all effective narratives conform to what social scientists call “frames.” Frames are products of specific languages and cultures that feel instantly and deeply familiar. To learn more about frames, enter “frames” into the search box of the healthy memory blog.

The third and final rule of narrative is novelty. Just as narrative frames help build resonance, they also serve to make things predictable. However, too much predictability can be boring, especially in an age of microscopic attention spans and unlimited entertainment. Moreover, there seems to be no lower limit on the quality of a narrative. Some messages far exceed the limits of credibility, yet they are believed and spread.

Additional guidelines are to pull the heartstrings and feed the fury. The final guidance is inundation: drown the web, run the world.

The Unreality Machine

January 21, 2019

This is the ninth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” There was a gold rush in Veles, Macedonia. Teenage boys there worked in “media.” More specifically, American social media. The average U.S. internet user is virtually a walking bag of cash, worth four times the advertising dollars of anyone else in the world. And the U.S. internet user is very gullible. The following is from the book: “In a town with 25% unemployment and an annual income of under $5,000, these young men had discovered a way to monetize their boredom and decent English-language skills. They set up catchy websites, peddling fad diets and weird health tips.” They relied on Facebook “shares” to drive traffic. Each click gave them a small slice of the pie from ads running along the side. Some of the best of them were pulling in tens of thousands of dollars a month.

Competition swelled, but fortunately for them, the American political scene soon brought a virtually inexhaustible source of clicks and fast cash: the 2016 presidential election. Now back to the text: “The Macedonians were awed by Americans’ insatiable thirst for political stories. Even a sloppy, clearly plagiarized jumble of text and ads could rack up hundreds of thousands of “shares.” The number of U.S. politics-related websites operated out of Veles swelled into the hundreds.”

One of the successful entrepreneurs estimated that in six months, his network of fifty websites attracted some 40 million page views driven there by social media. This made him about $60,000. This 18-year-old then expanded his media empire. He outsourced the writing to three 15-year-olds, paying each $10 a day. He was far from the most successful of the Veles entrepreneurs. Some became millionaires. One rebranded himself as a “clickbait coach,” running a school where he taught dozens of others how to copy his success.
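
As a rough back-of-the-envelope check on those figures (an illustration only, not a calculation from the book), the quoted numbers imply an effective ad rate of about $1.50 per thousand page views:

```python
# Illustrative arithmetic based on the figures quoted above.
page_views = 40_000_000   # page views over six months
earnings_usd = 60_000     # reported earnings over the same period
revenue_per_thousand = earnings_usd / (page_views / 1_000)
print(f"${revenue_per_thousand:.2f} per 1,000 page views")  # about $1.50
```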

These viral news stories weren’t just exaggerations or products of political spin; they were flat-out lies. Sometimes the topic was purported proof that Obama had been born in Kenya or that he was planning a military coup. Another report warned that Oprah Winfrey had told her audience that “some white people have to die.”

The following is from the book: “Of the top twenty best-performing fake stories spread during the election, seventeen were unrepentantly pro-Trump. Indeed, the single most popular news story of the entire election was “Pope Francis Shocks World, Endorses Donald Trump for President.” Social media provided an environment in which lies created by anyone, from anywhere, could spread everywhere, making the liars plenty of cash along the way.”

In 1995 MIT Media Lab professor Nicholas Negroponte prophesied that there would be an interface agent that would read every newswire and newspaper and catch every TV and radio broadcast on the planet, and then construct a personalized summary. He called this the “Daily Me.”

Harvard law professor Cass Sunstein argues that the opposite might actually be true. Rather than expanding their horizons, people were just using the endless web to seek out information with which they already agree. He called this the “Daily We.”

A few years later, with the creation of Facebook, the “Daily We,” an algorithmically created newsfeed, became a fully functioning reality.

For example, flat-earthers had little hope of gaining traction in a post-Christopher Columbus, pre-internet world. This wasn’t just because of the silliness of their views, but also because they couldn’t easily find others who shared them. But the world wide web has given the flat-earth belief a dramatic comeback. Proponents now have an active community and an aggressive marketing scheme.

This phenomenon is called “homophily,” meaning “love of the same.” Homophily is what makes us humans social creatures able to congregate in such like-minded groups. It explains the growth of civilization and cultures. It is also the reason an internet falsehood, once it begins to spread, can rarely be stopped.

Unfortunately, falsehood diffused significantly farther, faster, deeper, and more broadly than the truth. It becomes a deluge. The authors write, “Ground zero for the deluge, however, was in politics. The 2016 U.S. presidential election released a flood of falsehoods that dwarfed all previous hoaxes and lies in history. It was an online ecosystem so vast that the nightclubbing, moneymaking, lie-spinning Macedonians occupied only one tiny corner. There were thousands of fake websites, populated by millions of baldly false stories, each then shared across people’s personal networks. In the final three months of the 2016 election, more of these fake political headlines were shared on Facebook than real ones. Meanwhile, in a study of 22 million tweets, the Oxford Internet Institute concluded that Twitter users, too, shared more ‘disinformation, polarizing and conspiratorial content’ than actual news. The Oxford team called this problem “junk news.”
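
To make the dynamics concrete, here is a toy “independent cascade” simulation in Python. It is purely an illustration of how a small edge in shareability produces disproportionately larger cascades on a social network; it is not a model from the book or from the Oxford study, and all of the numbers in it are made up.

```python
import random

def simulate_cascade(n_users: int, avg_friends: int, share_prob: float, seed: int = 0) -> int:
    """Return how many users an item reaches under a simple independent-cascade model."""
    random.seed(seed)
    # Random "friendship" graph: each user follows avg_friends random others.
    friends = {u: random.sample(range(n_users), avg_friends) for u in range(n_users)}
    exposed = {0}    # user 0 posts the item
    frontier = [0]
    while frontier:
        next_frontier = []
        for user in frontier:
            for friend in friends[user]:
                # Each newly exposed user shares with each friend with probability share_prob.
                if friend not in exposed and random.random() < share_prob:
                    exposed.add(friend)
                    next_frontier.append(friend)
        frontier = next_frontier
    return len(exposed)

# A slightly "stickier" story (higher share probability) reaches far more people.
for p in (0.05, 0.10, 0.15):
    print(f"share probability {p:.2f} -> reached {simulate_cascade(10_000, 10, p)} users")
```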

Censorship, Disinformation, and the Burial of Truth

January 20, 2019

This is the eighth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” Initially, the notion that the internet would provide a basis for truth and independence seemed to be supported. The Arab Spring was promoted on the internet. The authors write, “Social media had illuminated the shadowy crimes through which dictators had long clung to power, and offered up a powerful new means of grassroots mobilization.”

Unfortunately, this did not last. Not only did the activists fail to sustain their movement, but they noticed that the government began to catch up. Tech-illiterate bureaucrats were replaced by a new generation of enforcers who understood the internet almost as well as the protestors. They invaded online sanctuaries and used the very same channels to spread propaganda. And these tactics worked. The much-celebrated revolutions fizzled. In Libya and Syria, digital activists turned their talents to waging internecine civil wars. In Egypt, the baby named Facebook would grow up in a country that quickly turned back to authoritarian government.

The internet remains under the control of only a few thousand internet service providers (ISPs). These firms run the backbone, or “pipes,” of the internet. Only a few ISPs supply almost all of the world’s mobile data. Because two-thirds of all ISPs reside in the United States, the average number across the rest of the world is relatively small. The authors note that “Many of these ISPs hardly qualify as “businesses” at all. Rather, they are state-sanctioned monopolies or crony sanctuaries directed by the whim of local officials.” Although the internet cannot be destroyed, regimes can control when the internet goes on or off and what goes on it.

Governments can control internet access and target particular areas of the country. India, the world’s largest democracy, cut mobile connections for a week in an area where violent protests had broken out. Bahrain instituted an internet curfew that affected only a handful of villages where antigovernment protests were brewing. When Bahrainis began to speak out against the shutdown, authorities narrowed their focus further, cutting access all the way down to specific internet users and IP addresses.

The Islamic Republic of Iran has poured billions of dollars into its National Internet Project. It is intended as a web replacement, leaving only a few closely monitored connections between Iran and the outside world. Iranian officials describe it as creating a “clean” internet for its citizens, insulated from the “unclean” web that the rest of us use.

Outside the absolute-authoritarian state of North Korea (whose entire internet is a closed network of about 30 websites), the goal isn’t so much to stop the signal as it is to weaken it. Although extensive research and special equipment can circumvent government controls, the empowering parts of the internet are no longer available to the masses.

Although the book discusses China, that discussion will not be included here as there are separate posts on the book “Censored: Distraction and Diversion Inside China’s Great Firewall” by Margaret E. Roberts.

The Russian government hires people to create chaos on the internet. They are tempted by easy work and good money: writing more than 200 blog posts and comments a day, assuming fake identities, hijacking conversations, and spreading lies. This is an ongoing war of global censorship by means of disinformation.

Russia’s large media networks are in the hands of oligarchs, whose finances are deeply intertwined with those of the state. The Kremlin makes its positions known through press releases and private conversations, the contents of which are then dutifully reported to the Russian people, no matter how much spin it takes to make them credible.

Valery Gerasimov has been mentioned in previous healthy memory blog posts. He channeled Clausewitz in a speech, reprinted in a Russian military newspaper, declaring that “the role of nonmilitary means of achieving political and strategic goals has grown. In many cases, they have exceeded the power of the force of weapons in their effectiveness.” This has come to be known as the Gerasimov Doctrine, and it has been enshrined in the nation’s military strategy.

Individuals working at the Internet Research Agency assume a series of fake identities known as “sockpuppets.” The authors write, “The job was writing hundreds of social media posts per day, with the goal of hijacking conversations and spreading lies, all to the benefit of the Russian government.” For this work people are paid the equivalent of $1,500 per month. (Those who worked on the “Facebook desk” targeting foreign audiences received double the pay of those targeting domestic audiences.)

The following is taken directly from the text:

“The hard work of a sockpuppet takes three forms, best illustrated by how they operated during the 2016 U.S. election. One is to pose as the organizer of a trusted group. @Ten_GOP called itself the “unofficial Twitter account of Tennessee Republicans” and was followed by over 136,000 people (ten times as many as the official Tennessee Republican Party account). Its 3,107 messages were retweeted 1,213,506 times. Each retweet then spread to millions more users, especially when it was retweeted by prominent Trump campaign figures like Donald Trump Jr., Kellyanne Conway, and Michael Flynn. On Election Day 2016, it was the seventh most retweeted account across all of Twitter. Indeed, Flynn followed at least five such documented accounts, sharing Russian propaganda with his 100,000 followers at least twenty-five times.

The second sockpuppet tactic is to pose as a trusted news source. With a cover photo image of the U.S. Constitution, @partynews presented itself as a hub for conservative fans of the Tea Party to track the latest headlines. For months, the Russian front pushed out anti-immigrant and pro-Trump messages and was followed and echoed by some 22,000 people, including Trump’s controversial advisor Sebastian Gorka.

Finally, sockpuppets pass as seemingly trustworthy individuals: a grandmother, a blue-collar worker from the Midwest, a decorated veteran, providing their own heartfelt take on current events (and who to vote for). Another former employee of the Internet Research Agency, Alan Baskayev, admitted that it could be exhausting to manage so many identities. “First you had to be a redneck from Kentucky, then you had to be some white guy from Minnesota who worked all his life, paid taxes and now lives in poverty; and in 15 minutes you have to write something in the slang of [African] Americans from New York.”

There have been many other posts about Russian interference in Trump’s election. Trump lost the popular vote, and it is clear that he would not have won the Electoral College had it not been for Russia. Clearly, Putin owns Trump.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Flynn

January 19, 2019

This is the seventh post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” A former director of the U.S. Defense Intelligence Agency (DIA) said, “The exponential explosion of publicly available information is changing the global intelligence system…It’s changing how we tool, how we organize, how we institutionalize—everything we do.” This is how he explained to the authors how the people who once owned and collected secrets—professional spies—were adjusting to this world without secrets.

U.S. intelligence agencies collected open source intelligence (OSINT) on a massive scale through much of the Cold War. The U.S. embassy in Moscow maintained subscriptions to over a thousand Soviet journals and magazines, while the Foreign Broadcast Information Service (FBIS) stretched across 19 regional bureaus, monitoring more than 3,500 publications in 55 languages, as well as nearly a thousand hours of television each week. Eventually FBIS was undone by the sheer volume of OSINT the internet produced. In 1993, FBIS was creating 17,000 reports a month; by 2004 that number had risen to 50,000. In 2005 FBIS was shuttered. The former director of DIA said, “Publicly available information is now probably the greatest means of intelligence that we could bring to bear. Whether you’re a CEO, a commander in chief, or a military commander, if you don’t have a social media component…you’re going to fail.”

Michael Thomas Flynn was made the director of intelligence for the task force that deployed to Afghanistan. Then he assumed the same role for the Joint Special Operations Command (JSOC), the secretive organization of elite units like the bin Laden-killing Navy SEAL team. He made the commandos into “net fishermen” who eschewed individual nodes and focused instead on taking down the entire network, hitting it before it could react and reconstitute itself. JSOC got better as Flynn’s methods evolved, capturing or killing dozens of terrorists in a single operation, gathering up intelligence, and then blasting off to hit another target before the night was done. The authors write, “Eventually, the shattered remnants of AQI would flee Iraq for Syria, where they would ironically later reorganize themselves as the core of ISIS.”

Eventually the Peter Principle prevailed. The Peter Principle holds that people rise in an organization until they reach their level of incompetence. The directorship of DIA was that level for Flynn. Flynn was forced to retire after 33 years of service. Flynn didn’t take his dismissal well. He became a professional critic of the Obama administration, which brought him to the attention of Donald Trump. He used his personal Twitter account to push out messages of hate (“Fear of Muslims is RATIONAL”). He put out one wild conspiracy theory after another. His postings alleged that Obama wasn’t just a secret Muslim, but a “jihadi” who “laundered” money for terrorists, and that if Hillary Clinton won the election she would help erect a one-world government to outlaw Christianity (notwithstanding that Hillary Clinton was and is a Christian). He also claimed that Hillary was involved in “Sex Crimes w Children.” This resulted in someone going into a pizzeria, the supposed locus of these sex crimes with children, and shooting it up. Flynn was later charged with lying to the FBI about his contacts with a Russian official, based on a recorded phone conversation. This was a singularly dumb mistake for a former intelligence officer.

Crowdsourcing

January 18, 2019

This is the sixth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” During the 2008 terrorist attack on Mumbai, ordinary people turned to Twitter and the internet’s other resources to track and respond to the attack as it unfolded. When the smoke cleared, the Mumbai attack left several legacies. It was a searing tragedy visited upon hundreds of families. It brought two nuclear powers to the brink of war. It foreshadowed a major technological shift. Hundreds of witnesses—some on-site, some from afar—had generated a volume of information that previously would have taken months of diligent reporting to assemble. By stitching these individual accounts together, the online community had woven seemingly disparate bits of data into a cohesive whole. The authors write, “It was like watching the growing synaptic connections of a giant electric brain.”

The Mumbai response was a realization of “crowdsourcing,” an idea that had been on the lips of Silicon Valley evangelists for years. It had originally been conceived as a new way to outsource programming jobs, the internet bringing people together to work collectively, more quickly and cheaply than ever before. As social media use skyrocketed, the promise of crowdsourcing extended to spaces far beyond business.

Crowdsourcing is about redistributing power, vesting the many with a degree of influence once reserved for the few. Crowdsourcing might be about raising awareness, or about money (also known as “crowdfunding”). It can kick-start a new business or throw support to people who might have remained little known. It was through crowdsourcing that Bernie Sanders became a fundraising juggernaut in the 2016 presidential election, raking in $218 million online.

For the Syrian civil war and the rise of ISIS, the internet was the “preferred arena for fundraising.” Besides allowing wide geographic reach, it expands the circle of fundraisers, seemingly linking even the smallest donor with their gift on a personal level. As the “Economist” explained, this was, in fact, one of the key factors that fueled the years-long Syrian civil war. Fighters sourced needed funds by learning “to crowdfund their war by using Instagram, Facebook and YouTube. In exchange for a sense of what the war was really like, the fighters asked for donations via PayPal. In effect, they sold their war online.”

In 2016 a hard-line Iraqi militia took to Instagram to brag about capturing a suspected ISIS fighter. The militia then invited its 75,000 online fans to vote on whether to kill or release him. Eager, violent comments rolled in from around the world, including many from the United States. Two hours later, a member of the militia posted a follow-up selfie; the body of the prisoner lay in a pool of blood behind him. The caption read, “Thanks for the vote.” In the words of Adam Lineman, a blogger and U.S. Army veteran, this represented a bizarre evolution in warfare: “A guy on the toilet in Omaha, Nebraska could emerge from the bathroom with the blood of some 18-year-old Syrian on his hands.”

Of course, crowdsourcing can be used for good as well as for evil.

Sharing

January 17, 2019

This is the fifth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” The authors trace the rise of sharing to Facebook’s rollout of a design update that included a small text box asking a simple question: “What’s on your mind?” Since then, the “status update” has allowed people to use social media to share anything and everything about their lives they want to, from musings and geotagged photos to live video and augmented-reality stickers.

The authors continue, “The result is that we are now our own worst mythological monster—not just watchers but chronic over-sharers. We post on everything from events small (your grocery list) to momentous (the birth of a child, which one of us actually live-tweeted). The exemplar of this is the “selfie,” a picture taken of yourself and shared as widely as possible online. At the current pace, the average American millennial will take around 26,000 selfies in their lifetime. Fighter pilots take selfies during combat missions. Refugees take selfies to celebrate making it to safety. In 2016, one victim of an airplane hijacking scored the ultimate millennial coup: taking a selfie with his hijacker.”

Not only are these postings revelatory of our personal experiences, but they also convey the weightiest issues of public policy. The first sitting world leader to use social media was Canadian prime minister Stephen Harper in 2008, followed by U.S. President Barack Obama. A decade later, the leaders of 178 countries had joined in, including former Iranian president Mahmoud Ahmadinejad, who had banned Twitter during a brutal crackdown but changed his mind on the morality—and utility—of social media. He debuted online with a friendly English-language video as he stood next to the Iranian flag. He tweeted, “Let’s all love each other.”

Not just world leaders, but agencies at every level and in every type of government now share their own news, from some 4,000 national embassies to the fifth-grade student council of the Upper Greenwood Lake Elementary school. When the U.S. military’s Central Command expanded Operation Inherent Resolve against ISIS in 2016, Twitter users could follow along directly via the hashtag #TALKOIR.

Nothing actually disappears online. The data builds and builds and could reemerge at any moment. Law professor Jeffrey Rosen said that the social media revolution has essentially marked “the end of forgetting.”

The massive accumulation of all this information leads to revelations of its own. Perhaps the clearest example of this phenomenon is the first president to have used social media before running for office. Being both a television celebrity and a social media addict, Donald Trump entered politics with a vast digital trail behind him. The Internet Archive has a fully perusable, downloadable collection of more than a thousand hours of Trump-related video, and his Twitter account has generated around 40,000 messages. Never has a president shared so much of himself—not just words but even neuroses and particular psychological tics—for all the world to see. Trump is a man—the most powerful in the world—whose very essence has been imprinted on the internet. Knowing this, one wonders how such a man could be elected president by the Electoral College.

Tom Nichols, a professor at the U.S. Naval War College who worked with the intelligence community during the Cold War, explained the unprecedented value of this vault of information: “It’s something you never want the enemy to know. And yet it’s all out there…It’s also a window into how the President processes information—or how he doesn’t process information he doesn’t like. Solid gold info.” Reportedly, Russian intelligence services came to the same conclusion, using Trump’s Twitter account as the basis on which to build a psychological profile of him.

The World Wide Web Goes Mobile

January 16, 2019

This is the fourth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” On January 9, 2007, Apple cofounder and CEO Steve Jobs introduced the world to the iPhone. Its features included a touchscreen; handheld integration of movies, television, and music; a high-quality camera; plus major advances in call reception and voicemail. The most radical innovation was a speedy, next-generation browser that could shrink and reshuffle websites, making the entire internet mobile-friendly.

The next year Apple officially opened its App Store. Now anything was possible, as long as it was channeled through a central marketplace. Developers eagerly launched their own internet-enabled games and utilities, built atop the iPhone’s sturdy hardware (there are about 2.5 million such apps today). With the launch of Google’s Android operating system and the competing Google Play Store that same year, smartphones ceased to be the niche of tech enthusiasts, and the underlying business of the internet soon changed.

There were some 2 billion mobile broadband subscriptions worldwide by 2013. By 2020, that number is expected to reach 8 billion. In the United States, where three-quarters of Americans own a smartphone, these devices have long since replaced televisions as the most commonly used piece of technology.

The following is taken directly from the text: “The smartphone combined with social media to clear the last major hurdle in the race started thousands of years ago. Previously, even if internet services worked perfectly, users faced a choice. They could be in real life but away from the internet. Or they could tend to their digital lives in quiet isolation, with only a computer screen to keep them company. Now, with an internet-capable device in their pocket, it became possible for people to maintain both identities simultaneously. Any thought spoken aloud could be just as easily shared in a quick post. A snapshot of a breathtaking sunset or plate of food (especially food) could fly thousands of miles away before darkness had fallen or the meal was over. With the advent of mobile livestreaming, online and offline observers could watch the same event unfold in parallel.”

Twitter was one of the earliest beneficiaries of the smartphone. Silicon Valley veterans who were hardcore free speech advocates founded the company in 2006. They envisioned a platform with millions of public voices spinning the story of their lives in 140-character bursts. This reflected the new sense that it was the network, rather than the content on it, that mattered.

Twitter grew along with smartphone use. In 2007, its users were sending 5,000 tweets per day. By 2010, that number was up to 50 million; by 2015, 500 million. Better web technology offered users the chance to embed hyperlinks, images, and video in their updates.

The most prominent Twitter user is Donald Trump, who likened it to “owning your own newspaper.” What he liked most about it was that it featured one perfect voice: his own. It appears to be his primary means of communication. It also highlights the risks inherent in using Twitter impulsively.

An Early Example of the Weaponization of the Internet

January 15, 2019

This is the third post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” In early 1994 a force of 4,000 disenfranchised workers and farmers rose up in Mexico’s poor southern state of Chiapas. They called themselves the Zapatista National Liberation Army (EZLN). They occupied a few towns and vowed to march on Mexico City. This did not impress the government. Twelve thousand soldiers were deployed, backed by tanks and air strikes, in a swift and merciless offensive. The EZLN quickly retreated to the jungle. The rebellion teetered on the brink of destruction. But twelve days after it began, the government declared a sudden halt to combat. This was a real head-scratcher, particularly for students of war.

But there was nothing conventional about this conflict. Members of the EZLN had been talking online. They spread their manifesto to like-minded leftists in other countries, declared solidarity with international labor movements protesting free trade (their revolution had begun the day the North American Free Trade Agreement (NAFTA) went into effect), established contact with organizations like the Red Cross, and urged every journalist they could find to come and observe the cruelty of the Mexican military firsthand. They turned en masse to the new and largely untested power of the internet.

It worked. Their revolution was joined in solidarity by tens of thousands of liberal activists in more than 130 countries, organizing in 15 different languages. Global pressure to end the small war in Chiapas built quickly on the Mexican government. And it seemed to come from every direction, all at once. Mexico relented.

But this new offensive did not stop after the shooting had ceased. The war became a bloodless political struggle, sustained by the support of a global network of enthusiasts and admirers, most of whom had never heard of Chiapas before the call to action went out. In the years that followed, this network would push and cajole the Mexican government into reforms the local fighters had been unable to obtain on their own. The Mexican foreign minister, Jose Angel Gurria, lamented in 1995, “The shots lasted ten days, but ever since the war has been a war of ink, of written word, a war on the internet.”

There were signs everywhere that the internet’s relentless pace of innovation was changing the social and political fabric of the real world: the invention of the webcam; the launch of eBay and Amazon; the birth of online dating; even the first internet-abetted scandals and crimes, one of which resulted in a presidential impeachment stemming from a rumor first reported online. In 1996, Manuel Castells, one of the world’s foremost sociologists, made a bold prediction: “The internet’s integration of print, radio, and audiovisual modalities into a single system promises an impact on society comparable to that of the alphabet.”

The authors note that the most forward-thinking of these internet visionaries was not an academic. In 1999, musician David Bowie sat for an interview with the BBC. Instead of promoting his albums, he waxed philosophical about technology’s future. He explained that the internet would not just bring people together; it would also tear them apart. When the interviewer questioned his certainty about the internet’s powers, Bowie said that he didn’t think we had even seen the tip of the iceberg. “I think the potential of what the internet is going to do to society, both good and bad, is unimaginable. I think we’re actually on the cusp of something exhilarating and terrifying…It’s going to crush our ideas of what mediums are all about.”

Could Sputnik be Responsible for the Internet?

January 14, 2019

This is the second post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” Probably most readers are wondering what Sputnik is, or was. Sputnik was the first space satellite to orbit the earth. It was launched by the Soviet Union. The United States was desperately trying to launch such a satellite, but had yet to do so. A young HM appeared as part of a team of elementary school presenters on educational TV that made a presentation on Sputnik and on the plans of the United States to launch such a satellite. The young version of HM explained the plans for the rocket to launch a satellite. Unfortunately, the model briefed by HM failed repeatedly, and a different rocket was needed for the successful launch.

The successful launch of Sputnik created panic in the United States about how far we were behind the Russians. Money was poured into scientific and engineering research and into the education of young scientists and engineers. HM personally benefited from this generosity as it furthered his undergraduate and graduate education.

Licklider and Taylor, the authors of the seminal paper “The Computer as a Communication Device,” were employees of the Pentagon’s Defense Advanced Research Projects Agency (DARPA). An internetted communications system was important for the U.S. military because it would banish its greatest nightmare: the prospect of the Soviet Union being able to decapitate U.S. command and control with a single nuclear strike. But the selling point for the scientists working for DARPA was that linking up computers would be a useful way to share what was at the time incredibly rare and costly computer time. A network could spread the load and make it easier on everyone. So a project was funded to transform the Intergalactic Computer Network into reality. It was called ARPANET.

It is interesting to speculate what would have been developed in the absence of the Soviet threat. It is difficult to think that this would have been done by private industry. Perhaps it is a poor commentary on homo sapiens, but it seems that many, if not most, technological advances have been developed primarily for warfare and defense.

It is also ironic to think that technology developed to thwart the Soviet Union would be used by Russia to interfere in American elections to ensure that their chosen candidate for President was elected.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

LikeWar: The Weaponization of Social Media

January 13, 2019

The title of this post is identical to the title of a book by P.W. Singer and Emerson T. Brooking. Many of the immediately following posts will be based on or motivated by this book. The authors have been both exhaustive and creative in their offering. Since the book is exhaustive, only a sampling of its many important points can be included. Emphasis will be placed on the creative parts.

The very concept that led to the development of the internet came from a paper written by two psychologists, J.C.R. Licklider and Robert W. Taylor, titled “The Computer as a Communication Device.” Back in those days computers were large mainframes used for data processing. Licklider wrote another paper titled “Man-Computer Symbiosis.” The idea here was that both computers and humans could benefit from the interaction between the two, a true symbiotic interaction. Unfortunately, this concept has been largely overlooked. Concentration was on replacing humans, who were regarded as slow and error prone, with computers. Today the fear is of jobs lost to artificial intelligence. Attention needs to be focused on the interaction between humans and computers as advocated by Licklider.

But the notion of the computer as a communication device did catch on. More will be written on that in the following post.

The authors also bring Clausewitz into the discussion. Clausewitz was a military strategist famous for his saying that war is politics pursued by other means. More specifically he wrote that war is “the continuation of political intercourse with the addition of other means.” The two are intertwined, he explained. “War in itself does not suspend political intercourse or change it into something entirely different. In essentials that intercourse continues, irrespective of the means it employs.” War is political. And politics will always be at the heart of human conflict, the two inherently mixed. “The main lines along which military events progress, and to which they are restricted, are political lines that continue throughout the war into the subsequent peace.”

If only we could learn what Clausewitz would think of today’s conflicts. Nuclear warfare was never realistic. Mutual Assured Destruction, with its fitting acronym MAD, was never feasible. Conflicts need to be resolved, not ended by the destruction of the disagreeing parties. Today’s technology allows for the disruption of financial systems, power grids, and the very foundations of modern society. Would Clausewitz think that conventional warfare has become obsolete? There might be small skirmishes, but would standing militaries go all out to destroy each other? Having a technological interface rather than face-to-face human interactions seems to allow for more hostile and disruptive interactions. Have politics become weaponized? Is that what the title of Singer and Brooking’s book implies?

The authors write that their research has taken them around the world and into the infinite reaches of the internet. Yet they continually found themselves circling back to five core principles, which form the foundation of the book.

First, the internet has left adolescence.

Second, the internet has become a battlefield.

Third, this battlefield changes how conflicts are fought.

Fourth, this battle changes what “war” means.

Fifth, and finally, we’re all part of this war.

Here are the final two paragraphs of the first chapter.

“The modern internet is not just a network but an ecosystem of nearly 4 billion souls, each with their own thoughts and aspirations, each capable of imprinting a tiny piece of themselves on the vast digital commons. They are the targets not of a single information war but of thousands and potentially millions of them. Those who can manipulate this swirling tide, to steer its direction and flow, can accomplish incredible good. They can free people, expose crimes, save lives, and seed far-reaching reforms. But they can also accomplish astonishing evil. They can foment violence, stoke hate, sow falsehoods, incite wars, and even erode the pillar of democracy itself.

Which side succeeds depends, in large part, on how much the rest of us learn to recognize this new warfare for what it is. Our goal in “LikeWar” is to explain exactly what’s going on and to prepare us all for what comes next.”

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Scale of Russian Operation Detailed

December 23, 2018

The title of this post is identical to the title of an article by Craig Timberg and Tony Romm in the 17 Dec ’18 issue of the Washington Post. Subtitles are: EVERY MAJOR SOCIAL MEDIA PLATFORM USED and Report finds Trump support before and after election. The report is the first to analyze the millions of posts provided by major technology firms to the Senate Intelligence Committee.

The research was done by Oxford University’s Computational Propaganda Project and Graphika, a network analysis firm. It provides new details on how Russians working at the Internet Research Agency (IRA), which U.S. officials have charged with criminal offenses for interfering in the 2016 campaign, divided Americans into key interest groups for targeted messaging. The report found that these efforts shifted over time, peaking at key political moments, such as presidential debates or party conventions. This report substantiates facts presented in prior healthy memory blog posts.

The data sets used by the researchers were provided by Facebook, Twitter, and Google and covered several years up to mid-2017, when the social media companies cracked down on the known Russian accounts. The report also analyzed data separately provided to House Intelligence Committee members.

The report says, “What is clear is that all of the messaging clearly sought to benefit the Republican Party and specifically Donald Trump. Trump is mentioned most in campaigns targeting conservatives and right-wing voters, where the messaging encouraged these groups to support his campaign. The main groups that could challenge Trump were then provided messaging that sought to confuse, distract and ultimately discourage members from voting.”

The report provides the latest evidence that Russian agents sought to help Trump win the White House. Democrats and Republicans on the panel previously studied the U.S. intelligence community’s 2017 finding that Moscow aimed to assist Trump, and in July, said the investigators had come to the correct conclusion. Nevertheless, some Republicans on Capitol Hill continue to doubt the nature of Russia’s interference in the election.

The Russians aimed energy at activating conservatives on issues such as gun rights and immigration, while sapping the political clout of left-leaning African American voters by undermining their faith in elections and spreading misleading information about how to vote. Many other groups such as Latinos, Muslims, Christians, gay men and women received at least some attention from Russians operating thousands of social media accounts.

The report offered some of the first detailed analyses of the role played by YouTube and Instagram in the Russian campaign, as well as anecdotes about how Russians used other social media platforms—Google+, Tumblr and Pinterest—that had received relatively little scrutiny. They also used email accounts from Yahoo, Microsoft’s Hotmail service, and Google’s Gmail.

While reliant on data provided by the technology companies, the authors also highlighted the companies’ “belated and uncoordinated response” to the disinformation campaign and, once it was discovered, their failure to share more with investigators. The authors urged that in the future the companies provide data in “meaningful and constructive” ways.

Facebook provided the Senate with copies of posts from 81 Facebook pages and information on 76 accounts used to purchase ads, but it did not share posts from other accounts run by the IRA. Twitter has made it challenging for outside researchers to collect and analyze data on its platform through its public feed.

Google submitted information in a way that was especially difficult for researchers to handle, providing content such as YouTube videos but not the related data that would have allowed a full analysis. The researchers wrote that the YouTube information was so hard to study that they instead tracked links to its videos from other sites in hopes of better understanding YouTube’s role in the Russian effort.

The report expressed concern about the overall threat social media poses to political discourse within and among nations, warning that platforms once viewed as tools for liberation in the Arab world and elsewhere are now a threat to democracy.

The report also said, “Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement to being a computational tool for social control, manipulated by canny political consultants and available to politicians in democracies and dictatorships alike.”

The report traces the origins of Russian online influence operations to Russian domestic politics in 2009 and says that ambitions shifted to include U.S. politics as early as 2013. The efforts to manipulate Americans grew sharply in 2014 and every year after, as teams of operatives spread their work across more platforms and accounts to target larger swaths of U.S. voters by geography, political interests, race, religion and other factors.

The report found that Facebook was particularly effective at targeting conservatives and African Americans. More than 99% of all engagements—meaning likes, shares and other reactions—came from 20 Facebook pages controlled by the IRA, including “Being Patriotic,” “Heart of Texas,” “Blacktivist” and “Army of Jesus.”
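To illustrate the kind of concentration the report describes, a calculation along the following lines would suffice. The per-page counts below are placeholders rather than the report’s data, and the top_n_share helper is simply a name chosen for this sketch.

def top_n_share(counts, n=20):
    """Fraction of total engagements captured by the n most-engaged pages."""
    ranked = sorted(counts.values(), reverse=True)
    total = sum(ranked)
    return sum(ranked[:n]) / total if total else 0.0

# Placeholder engagement counts (not figures from the report).
engagements_by_page = {
    "Being Patriotic": 1_200_000,
    "Heart of Texas": 950_000,
    "Blacktivist": 900_000,
    "Army of Jesus": 400_000,
    "A minor IRA page": 5_000,
}

print(f"Top-20 share of engagements: {top_n_share(engagements_by_page):.1%}")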

Given that Trump lost the popular vote, it is difficult to believe that he could have carried the Electoral College without this impressive support from the Russians. One can also envisage Ronald Reagan thrashing about in his grave knowing that the Republican presidential candidate was heavily indebted to Russia and that so many Republicans still support Trump.
© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Memory Special: Is Technology Making Your Memory Worse?

November 28, 2018

The title of this post is identical to the title of a Feature article by Helen Thomson in the 27 Oct 2018 issue of the “New Scientist.” Previous healthymemory blog posts have implied that the answer to this question is yes. Thomson writes, “Outsourcing memories, for instance to pad and paper is nothing new, but it has become easier than ever to do using external devices, leading some to wonder whether our memories are suffering as a result.”

Taking pictures has become more of an obsession given the capabilities of smart phones to take and store high quality photos. Although you might think that taking pictures and sharing stories helps you to preserve memories of events, the opposite is true. When Diana Tamir and her colleagues at Princeton University sent people out on tours, those encouraged to take pictures actually had a poorer memory of the tour at a later date. Prof. Tamir said, “Creating a hard copy of an experience through media leaves only a diminished copy in our own heads.” People who rely on a satellite navigation system to get around are also worse at working out where they have been than those who use maps.

The expectation of information being at our fingertips seems to have an effect. When we think of something that can be accessed later, regardless of whether we will be tested on it, we have lower rates of recall of the information itself and enhanced recall instead for where to access it. Sam Gilbert of University College London says, “These kinds of studies suggest that technology is changing our memories. We increasingly don’t need to remember content, but instead, where to find it.”

Unfortunately relying too heavily on devices can mess with our appreciation of how good our memory actually is. We are constantly making judgements about whether something is worth storing in mind. Will I remember this tomorrow? Does it need to be written down? Should I set a reminder? Meta-memory refers to our ability to understand and use our memory. Technology seems to screw it up.

People who can access the internet to help them answer general knowledge questions, such as “How does a zip work?” overestimate how much information they think they have remembered, as well as their knowledge of unrelated topics after the test, compared with people who answered questions without going online. You lose touch with what came from you and what came from the machine. This exacerbates the part of the Dunning-Kruger phenomenon in which we think we know much more than we actually know. Gilbert says, “These are subtle biases that may not matter too much if you continue to have access to external resources. But if those resources disappear—in an exam, in an emergency, in a technological catastrophe—we may underestimate how much we would struggle without them. Having an accurate insight into how good your memory actually is, is just as important as having a good memory in the first place.”

Hypertext

October 25, 2018

HM was disappointed in Wolf’s “READER COME HOME” as hypertext was not addressed except in passing, in a note citing a journal article titled, “Why Don’t We Read Hypertext Novels?” HM sees enormous potential in hypertext. In scientific reading, links can be provided to the references and notes in the text. Unfortunately, the financing of academic and professional texts and journals makes the seamless operation of this capability difficult. Professional organizations and publishers need to recognize that their primary job, and this is certainly true of professional organizations, is to disseminate information about their disciplines. There is a demand for hypertext here, along with the freedom to move among different texts. It is hoped that this demand will eventually be realized once means of remuneration and compensation are identified.

HM would be interested to read “Why Don’t We Read Hypertext Novels?” One reason might be that there are so few, if any, of them. But there is a need here, unless authors feel compelled to shove everything they’ve written down the throats of their readers. There could be links providing more information on characters and background. There could be digressive passages that a reader might want to have the option of reading or skipping. If passages are not interesting to certain readers, they either skim them or give up on the book.

From an author’s perspective hypertext offers the option to expand views and to write one document to different levels of readers. One text could be written for beginning, intermediate, and advanced learners that would provide a coherent path through one’s learning.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Concluding Letters

October 24, 2018

What struck HM about the concluding letters (chapters in Wolf’s parlance) in “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf was that they were not unique to digital media. They applied generally to reading and education.

One letter was titled “Between Two and Five Years: When Language and Thought Take Flight Together.” The most important point in this chapter is to read to your children. This goes beyond reading to the building of intimacy and rapport with your children. And it will fill them with the wonder of books, be that in print or digital. Some of HM’s favorite childhood memories are of his mother reading to him. She read many items, some of which were “Peter Pan,” “Tom Sawyer,” and sports books by Clair Bee featuring Chip Hilton. It was a wonder that these abstract characters on the page yielded such interesting and entertaining stories, stories that stimulated the mind to create images of them. So when reading was the subject at school, HM was a highly motivated student.

Another letter is titled “The Science and Poetry in Learning (and Teaching) to Read.” True, there are necessary adaptations for digital material, some yet to be identified, and these are important subjects. Moreover, science is involved in addressing the questions raised by digital media. Relevant section titles are “Investment in Early, Ongoing Assessment of Students,” “Investment in Our Teachers,” and “Investment in the Teaching of Reading Across the School Years.”

Another letter is titled “Building a Biliterate Brain.” “Biliterate” here refers to being literate in both conventional and digital media. But this is what the entire text addresses, and it should not be thought that everything is known about conventional media. True, the ignorance is greater on the digital side, and the genius lies in combining the two so that there is a synergy between them. Research is needed. There needs to be professional training and development, and it is important that there be equal access regardless of the financial resources of the schools.

The final letter is titled “Reader, Come Home,” which again extols the virtues of reading and thinking.

The Raising of Children in a Digital Age

October 23, 2018

The title of this post is identical to the title of a letter in “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf. Wolf refers to her chapters as letters. Wolf writes: “The tough questions raised in the previous letters come to roost like chickens on a fence in the raising of our children. They require of us a developmental version of the issues summarized to this point: Will time-consuming, cognitively demanding deep-reading processes atrophy or be gradually lost within a culture whose principal mediums advantage speed, immediacy, high levels of stimulation, multitasking and large amounts of information?”

She continues, “Loss, however, in this question implies the existence of a well-formed, fully elaborated circuitry. The reality is that each new reader—that is, each child—must build a wholly new reading circuit. Our children can form a very simple circuit for learning to read and acquire a basic level of decoding, or they can go on to develop highly elaborated reading circuits that add more and more sophisticated intellectual processes over time.”

These not-yet-formed reading circuits present unique challenges and a complex set of questions: First, will the early-developing cognitive components of the reading circuit be altered by digital media before, while, and after children learn to read? What will happen to the development of their attention, memory, and background knowledge—processes known to be affected in adults by multitasking, rapidity, and distraction? Second, if they are affected, will such changes alter the makeup of the resulting expert reading circuit and/or the motivation to form and sustain deep reading capacities? Finally, what can we do to address the potential negative effects of varied digital media on reading without losing their immensely positive contributions to children and to society?

The digital world grabs children. A 2015 RAND study reported the average amount of time spent by three-to-five year old children on digital devices was four hours a day, with 75% of children from zero to eight years old having access to digital devices. This figure is up from 52% only two years earlier. The use of digital devices increased by 117% in just one year. Our evolutionary reflex, the novelty bias, pulls our attention immediately toward anything new. The neuroscientist Daniel Levitin says, “Humans will work just as hard to obtain a novel experience as we will to get a meal or a mate…In multitasking, we unknowingly enter an addiction loop as the brain’s novelty centers become rewarded for processing tiny new stimuli, to the detriment of our prefrontal cortex, which wants to stay on task and gain the rewards of sustained effort and attention. We need to train ourselves to go for the long reward and forgo the short one.”

Levitin claims that children can become so accustomed to a continuous stream of competitors for their attention that their brains are for all purposes being bathed in hormones such as cortisol and adrenaline, the hormones more commonly associated with fight, flight, and stress. This holds even for children of three or four, and sometimes even two and younger—they are first passively receiving and then, ever so gradually, requiring the levels of stimulation of much older children on a regular basis.

The Stanford University neuroscientist Russell Poldrack and his team have found that some digitally raised youth can multitask if they have been trained sufficiently on one of the tasks. Unfortunately, not enough information is reported to evaluate this claim, other than to leave it open and look to further research to see how these skills can develop.

Wolf raises legitimate concerns. Much research is needed. But the hope is that damaging effects can be eliminated or minimized. Perhaps even certain types of training with certain types of individuals can be done to minimize the costs of multitasking.

Digital Media and the Loss of Quality Information

October 22, 2018

To put matters in perspective before proceeding, it is useful to remember that Socrates saw dangers in the written word. He believed that knowledge needed to be resident in the brain and not on physical matter. He thought that the written word would result in going to hell in a handbasket (be clear that he did not say this, but he did see it as a definite potential danger). So this new digital world has much to offer, but it also has dangers, and we need to avoid these dangers.

Frank Schirrmacher placed the origins of the conflict within our species’ need to be instantly aware of every new stimulus, what some call our novelty bias. Hypervigilance toward the environment has definite survival value. It is virtually certain that this reflex saved many of our prehistoric ancestors from threats signaled by the barely visible tracks of deadly tigers or the soft susurrus of venomous snakes in the underbrush. Unfortunately, experts in “persuasion design” principles know very well how to exploit these tendencies.

Wolf writes, “As Schirrmacher described it, the problem is that contemporary environments bombard us constantly with new sensory stimuli, as we split our attention across multiple digital devices most of our days and, as often as not, nights shortened by our attention to them. A recent study by Time, Inc. of the media habits of people in their twenties indicated that they switched media sources twenty-seven times an hour. On average they now check their cell phones between 150 and 190 times a day. As a society we’re continuously distracted by our environment, and our very wiring as hominins aids and abets this. We do not see or hear with the same quality of attention, because we see and hear too much, become habituated, and then seek still more.”

Enter “The Distracted Mind” into the search block of the healthy memory blog to find many more relevant posts on this topic. There are clearly two distinct components to this problem: Staying plugged in and the volume and quality of information.

Unfortunately, Wolf does not directly address the topic of being plugged in, but this problem needs to be addressed first before significant progress can be made on the second. Being constantly plugged in precludes one from making any progress on this problem. There are simply too many disruptions and distractions. So one either unplugs cold turkey and remains that way, plugging in only to communicate, or strictly limits the time one is plugged in. Clearly there are social implications here, so one needs to explain to one’s friends and acquaintances why one is doing this and try to persuade them to join you for their own benefit.

Next one can deal with the volume of communications. Wolf notes that the average amount of communication consumed by each of us per day is 34 gigabytes. Moreover, this is characterized by one spasmodic burst after another. Barack Obama has said he is worried that for many of our young, information has become “a distraction, a diversion, a form of entertainment, rather than a tool of empowerment, rather than a means of emancipation.”

The literature professor Mark Edmundson writes, “Swimming in entertainment, my students have been sealed off from the chance to call everything they’ve valued into question, to look at new ways of life…For them, education is knowing and lordly spectatorship, never the Socratic dialogue about how one ought to live one’s life.”

Wolf writes, “What do we do with the cognitive overload from multiple gigabytes of information from multiple devices? First, we simplify. Second, we process the information as rapidly as possible; more precisely, we read more in briefer bursts. Third, we triage. We stealthily begin the insidious trade-off between our need to know and our need to save and gain time. Sometimes we outsource our intelligence to the information outlets that offer the fastest, simplest, most digestible distillations of information we no longer want to think about ourselves.”

This post is based in part on “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf. She does discuss how she managed to discipline herself and break these bad habits, although she doesn’t mention the importance of the first necessary act: unplugging oneself.

Then one needs to decide that technology is a tool one should use to benefit oneself rather than letting technology drive one’s life. Realize that we humans have finite attentional resources, and prioritize what sources and types of technology should be used to pursue specific goals. These will change over time, as will goals, but one should always have goals, perhaps as simple as learning something about x. If that is rewarding, one can pursue it further, move off to related areas, or to completely new areas. The objective should always be to use technology, not be used by technology, for personal fulfillment.

This post will close with a quote from Susan Sontag:
“To be a moral human being is to pay, be obliged to pay, certain kinds of attention…The nature of moral judgments depends on our capacity for paying attention, a capacity that has its limits, but whose limits can be stretched.”

And one from Herman Hesse’s essay “The Magic of the Book”:
“Among the many worlds which man did not receive as a gift of nature, but which he created with his own spirit, the world of books is the greatest. Every child, scrawling his first letters on his slate and attempting to read for the first time, in so doing, enters an artificial and most complicated world: to know the laws and rules of this world completely and to practice them perfectly, no single human life is long enough. Without words, without writing, and without books there would be no history, there could be no concept of humanity.”

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Is Deep Reading Endangered by Technology?

October 21, 2018

This post is based on “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf. MIT scholar Sherry Turkle described a study by Sara Konrath and her research group at Stanford University that showed a 40% decline in empathy in young people over the last two decades. The most precipitous decline occurred in the last ten years. Turkle attributes the loss of empathy largely to their inability to navigate the online world without losing track of their real-time, face-to-face relationships. Turkle thinks that our technologies place us at a remove, which changes not only who we are as individuals but also who we are with one another. Wolf writes, “The act of taking on the perspective and feelings of others is one of the most profound, insufficiently heralded contributions of the deep-reading process.”

Barack Obama described novelist Marilynne Robinson as a “specialist in empathy.” Obama visited Robinson during his presidency. During their wide-ranging discussion, Robinson lamented what she saw as a political drift among many people in the United States toward seeing those different from themselves as the “sinister other.” She characterized this as “as dangerous a development as there could be in terms of whether we continue to be a democracy.” Whether writing about humanism’s decline or fear’s capacity to diminish the very values its proponents purport to defend, Ms. Robinson conceptualized the power of books to help us understand the perspective of others as an antidote to the fears and prejudices many people harbor, often unknowingly. Within this context Obama told Robinson that the most important things he had learned about being a citizen came from novels. “It has to do with empathy. It has to do with being comfortable with the notion that the world is complicated and full of grays but there’s still truth there to be found, and that you have to strive for that and work for that. And that it’s possible to connect with someone else even though they’re very different from you.”

It is most insightful that the polarization being experienced is due in large part to missing empathy, which to some degree, perhaps a large degree, is due to digital screen technology. Although technology has been blamed for much, part of the problem here is not just the display mode of the information, but also the type of content of the information. Quality fiction builds empathy. Even technical reading can build empathy, provided the content can be related to the feelings and thinking of others. And some social research does summarize the feelings and thinking of others.

Wolf writes, “There are many things that would be lost if we slowly lose the cognitive patience to immerse ourselves in the worlds created by books and the lives and feelings of the “friends” who inhabit them. And although it is a wonderful thing that movies and film can do some of this, too, there is a difference in the quality of immersion that is made possible by entering the articulated thoughts of others. What will happen to young readers who never meet and begin to understand the thought and feelings of someone totally different? What will happen to older readers who begin to lose touch with that feeling of empathy for people outside their ken or kin? It is a formula for unwitting ignorance, fear and misunderstanding, that can lead to the belligerent forms of intolerance that are the opposite of America’s original goals for its citizens of many cultures.”

Deep reading involves more than empathy. Wolf writes, “The consistent strengthening of the connections among our analogical, inferential, empathic, and background knowledge processes generalizes well beyond reading. When we learn to connect these processes over and over in our reading, it becomes easier to apply them to our own lives, teasing apart our motives and intentions and understanding with ever greater perspicacity and, perhaps, wisdom, why others think and feel the way they do. Not only is it the basis for the compassionate side of empathy, but it also contributes to strategic thinking.

Just as Obama noted, however, these strengthened processes do not come without work and practice, nor do they remain static if unused. From start to finish, the basic neurological principle—“Use it or lose it”—is true for each deep-reading process. More important still, this principle holds for the whole plastic reading-brain circuit. Only if we continuously work to develop and use our complex analogical and inferential skills will the neural networks underlying them sustain our capacity to be thoughtful, critical analysts of knowledge, rather than passive consumers of information.”

Mark Edmundson asks in his book “Why Read?,” “What exactly is critical thinking?” He explains that it includes the power to examine and potentially debunk personal beliefs and convictions. Then he asks, “What good is this power of critical thought if you do not yourself believe something and are not open to having this belief modified? What’s called critical thought generally takes place from no set position at all.”

Edmundson articulates two connected, insufficiently discussed threats to critical thinking. The first threat comes when any powerful framework for understanding our world (such as a political or religious view) becomes so impenetrable to change and so rigidly adhered to that it obfuscates any divergent type of thought, even when the latter is evidence-based or morally based.

The second effect that Edmundson observes is the total absence of any developed personal belief system in many of our young people, who either do not know enough about past systems of thought (for example, Freud, Darwin, or Chomsky) or who are too impatient to examine and learn from them. As a result, their ability to learn the kind of critical thinking necessary for deeper understanding can become stunted. Intellectual rudderlessness and adherence to a way of thought that allows no question are threats to critical thinking in us all.

It is also important to be aware that deep reading is a generative process. Here is a quote from Jonah Lehrer: “An insight is a fleeting glimpse of the brain’s huge store of unknown knowledge. The cortex is sharing one of its secrets.”

Wolf writes, “Insight is the culmination of the multiple modes of exploration we have brought to bear on what we have read thus far: the information harvested from the text; the connections to our best thoughts and feelings; the critical conclusions gained; and then the uncharted leap into a cognitive space where we may upon occasion glimpse whole new thoughts. The formation of the reading-brain circuit is a unique epigenetic achievement in the intellectual history of our species. Within this circuit, deep reading significantly changes what we perceive, what we feel, and what we know and in so doing alters, informs, and elaborates the circuit itself.”

Neuroscience, drawing on brain imaging and recording, informs us that creativity is not confined to any one place in the brain. There is no neat map of what occurs when we have our most creative bursts of thinking. Instead, it appears that we activate multiple regions of the brain, particularly the prefrontal cortex and the anterior cingulate gyrus.

Print vs. Screen or Digital Media

October 20, 2018

What is most bothersome about “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf is the way she contrasts print media with the new screen or digital media. Readers might mistakenly think that the solution to this problem is to use print media and eschew screen or digital media. The reality is that in the future this might be impossible, as conventional print media might be found only in museums or special libraries. But what is key to understand is that unfortunate habits tend to develop when using screen/digital media. Moreover, these unfortunate habits are the result of a feeling of needing to be plugged in with digital media. It is these habits (skimming, superficial processing, and multitasking) that are the true culprits here.

These same practices can be found when using print matter, and they are not always bad. When reading the newspaper, in either print or digital form, HM’s attention is dictated by his interests. Initially he is skimming, but when he finds something interesting he focuses his attention and reads deeply. If it turns out that he already knows the material, or that the material is a bunch of crap, he resumes skimming. This is the reason he does not like televised news, since it includes material he would like to ignore or skip over. HM finds it annoying that the phrase “Breaking News” is frequently heard. Frankly, he would prefer “Already considered and processed news.” Unless there is a natural catastrophe or some imminent danger, there is no reason the news can’t wait for further context under which it can be processed.

Frankly, HM would never have been able to complete his Ph.D. had he not developed this ability. His work is interdisciplinary, so he must read in different areas. He skims until he finds relevant material. Then he focuses and quizzes himself to assure he is acquiring the relevant material. Sometimes this might be a matter of bookmarking it with the goal of returning when there would be sufficient time to process the material. Even if the topic is one with which he is familiar, he will assess whether there is anything new that requires his attention. There is simply too much material and too little time. Strategies need to be employed. The risk from current technology is that the technology is driving the process rather than the individual using the technology effectively.

We are not victims of technology unless we passively allow ourselves to become victims of technology. Students need to be taught how to use the technology and what practices need to be abandoned. One of these is being continually plugged in, but there are also social issues that need to be addressed.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

READER COME HOME

October 18, 2018

The title of this post is the same as the title of an important book by Maryanne Wolf. The subtitle is “The Reading Brain in the Digital World.” Any new technology offers benefits, but it may also contain dangers. There definitely are benefits from moving the printed world into the digital world. But there are also dangers, some of which are already quite evident. One danger is the feeling that one always needs to be plugged in. There is even an acronym for this: FOMO (Fear of Missing Out). But there are costs to being continually plugged in. One is superficial processing. One of the best examples of this is that of the plugged-in woman who was asked what she thought of OBAMACARE. She said that she thought it was terrible and was definitely against it. However, when she was asked what she thought of the Affordable Care Act, she said that she liked it and was definitely in favor of it. Of course, the two are the same.

This lady was exhibiting an effect that has a name, the Dunning-Kruger effect. Practically all of us think we know more than we do. Ironically, people who are quite knowledgeable about a topic are aware of their limitations and frequently qualify their responses. So, in brief, the less you know the more you think you know, but the more you know, the less you think you know. Moreover, this effect is greatly amplified in the digital age.

There is a distinction between what is available in our memories and what is accessible in our memories. Very often we are unable to remember something, but we do know that it is present in memory. So this information is available, but not accessible. There is an analogous effect in the cyber world. We can find information on the internet, but we need to look it up. It is not available in our personal memory. Unfortunately, being able to look something up on the internet is not identical to having the information available in our personal memories so that we can extemporaneously talk about the topic. We daily encounter the problem of whether we need to remember some information or whether it would be sufficient to look it up. We do not truly understand something until it is available in our personal memories. The engineer Kurzweil is planning on extending his life long enough so that he can be uploaded to a computer, thus achieving a singularity with technology. Although he is a brilliant engineer, he is woefully ignorant of psychology and neuroscience. Digital and neural codes differ and the processing systems differ, so the conversion is impossible. However, even if it were possible, understanding requires deep cognitive and biological processing. True understanding does not come cheaply.

Technology can be misused and it can be very tempting to misuse technology. However, there are serious costs. Maryanne Wolf discusses the pitfalls and the benefits of technology. It should be understood that we are not victims of technology. Rather we need to use technology not only so that we are not victims, but also so we use technology synergistically.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

An Ambiguous State of Affairs

September 18, 2018

The title of this post is identical to the title of a section of a chapter in an insightful book by Antonio Damasio titled “The Strange Order of Things: Life, Feeling, and the Making of Cultures.” The title of this chapter is “On the Human Condition Now.”

Damasio writes, “This could be the best of times to be alive because we are awash in spectacular scientific discoveries and in technical brilliance that make life even more comfortable and convenient; because the amount of available knowledge and the ease of access to that knowledge are at an all-time high and so is human interconnectedness at a planetary scale, as measured by actual travel, electronic communication, and international agreements for all sorts of cooperation, in science, the arts, and trade; because the ability to diagnose, manage, and even cure diseases continues to expand and longevity continues to extend so remarkably that human beings born after the year 2000 are likely to live, hopefully well, to an average of at least a hundred. Soon we will be driven around by robotic cars, saving us effort and lives because, at some point, we should have few fatal accidents.”

Unfortunately, Damasio notes, for the past four or five decades the general public of the most advanced societies has accepted with little or no resistance a gradually deformed treatment of news and public affairs designed to fit the entertainment model of commercial television and radio. Damasio writes, “Although a viable society must care for the way its governance promotes the welfare of citizens, the notion that one should pause for some minutes of each day and make an effort to learn about the difficulties and successes of governments and citizenry is not just old-fashioned; it has nearly vanished. As for the notion that we should learn about such matters seriously and with respect, that is by now an alien concept. Radio and television transform every governance issue into “a story,” and it is the “form” and entertainment value of the story that count, more than its factual content.”

The internet makes large amounts of information readily available to the public. It also provides means for deliberation and discussion. Unfortunately, it also provides for the generation of false news, the creation of alternative realities, and the building of conspiracy theories. This blog has repeatedly invoked Daniel Kahneman’s Two Process View of cognition to assist in understanding the problem.
System 1 is named Intuition. System 1 is very fast, employs parallel processing, and appears to be automatic and effortless. They are so fast that they are executed, for the most part, outside conscious awareness. Emotions and feelings are also part of System 1. Learning is associative and slow. For something to become a System 1 process requires much repetition and practice. Activities such as walking, driving, and conversation are primarily System 1 processes. They occur rapidly and with little apparent effort. We would not have survived if we could not do these types of processes rapidly. But this speed of processing is purchased at a cost, the possibility of errors, biases, and illusions.
System 2 is named Reasoning. It is controlled processing that is slow, serial, and effortful. It is also flexible. This is what we commonly think of as conscious thought. One of the roles of System 2 is to monitor System 1 for processing errors, but System 2 is slow and System 1 is fast, so errors do slip through.

To achieve coherent understanding, System 2 processing is required. However, System 1 processing is common on the internet. The content is primarily emotional. Facts are irrelevant and the concept of objective truth is becoming irrelevant. The Russians were able to use the internet to enable their choice for US President, Trump, to win.

Because System 2 processing is more effortful, no matter how smart and well informed we are, we naturally tend to resist changing our beliefs, in spite of the availability of contrary evidence. Research done at Damasio’s institute shows that the resistance to change is associated with a conflicting relationship between brain systems related to emotivity and reason. The resistance to change is associated with the engagement of systems responsible for producing anger. We construct some sort of natural refuge to defend ourselves against contradictory information.

Damasio writes, “The new world of communication is a blessing for the citizens of the world trained to think critically and knowledgeable about history. But what about citizens who have been seduced by the world of life as entertainment and commerce? They have been educated, in good part, by a world in which negative emotional provocation is the rule rather than the exception and where the best solutions for a problem have to do primarily with short-term interests.”

Fascism Is on the March Again: Blame the Internet

August 11, 2018

The title of this post is identical to the title of an article by Timothy Snyder in the Outlook Section of the 27 May 2018 issue of the Washington Post. The hope was that the internet would connect people and spread liberty around the world. The opposite appears to have happened. According to Freedom House, every year since 2005 has seen a retreat in democracy and an advance of authoritarianism. The year 2017, when the Internet reached more than half the world’s population, was marked by Freedom House as particularly disastrous. Young people who came of age with the Internet care less about democracy and are more sympathetic to authoritarianism than any other generation.

Moreover, the Internet has become a weapon of choice for those who wish to spread authoritarianism. Russia’s president and its leading propagandist both cite a fascist philosopher, Ivan Ilyin, who believed that factuality was meaningless. In 2016 Russian Twitter bots spread messages designed to discourage some Americans from voting and encourage others to vote for Russia’s preferred presidential candidate, Donald Trump. Britain was substantially influenced by bots from beyond its borders. In contrast, Germany’s democratic parties have agreed not to use bots during political campaigns. The only party to resist the idea was the extreme-right Alternative für Deutschland, which was helped by Russia’s bots in last year’s elections.

Mr. Snyder writes, “Modern democracy relies upon the notion of a “public space” where, even if we can no longer see all our fellow citizens and verify facts together, we have institutions such as science and journalism that can provide ongoing references for discussion and policy. The Internet breaks the line between the public and private by encouraging us to confuse our private desires with the actual state of affairs. This is a constant human tendency. But in assuming that the Internet would make us more rather than less rational, we have missed the obvious danger: that we can now allow our brokers to lead us into a world where everything we would like to believe is true.”

The explanation that the healthy memory blog offers is Nobel Laureate Daniel Kahneman’s Two System View of Cognition. System 1, intuition, is our normal mode of processing and requires little or no attention. System 2, commonly referred to as thinking, requires our attention. One of the roles of System 2 is to monitor System 1. When we encounter something contradictory to what we believe, the brain sets off a distinct signal. It is easiest to ignore this signal and to continue our System 1 processing. Engaging System 2 requires attentional resources to attempt to resolve the discrepancy and to seek further understanding. The Internet is a superhighway for System 1 processing, with few willing to take the off-ramps to System 2 to learn new or different ways of thinking.

Mr. Snyder writes, “Democracy depends upon a certain idea of truth: not the babel of our impulses, but an independent reality visible to all citizens. This must be a goal; it can never be fully achieved. Authoritarianism arises when this goal is openly abandoned, and people conflate the truth with what they want to hear. Then begins a politics of spectacle, where the best liars with the biggest megaphones win. Trump understands this very well. As a businessman he failed, but as a politician he succeeded because he understood how to beckon desire. By deliberately speaking unreality with modern technology, the daily tweet, he outrages some and elates others, eroding the very notion of a common world of facts.”

“To be sure, Fascism 2.0 differs from the original. Traditional fascists want to conquer both territories and selves; the Internet will settle for your soul. The racist oligarchies that are emerging behind the Internet today want you on the couch, outraged or elated, it doesn’t matter which, so long as you are dissipated at the end of the day. They want society to be polarized, believing in virtual enemies that are inside the gate, rather than actually marching or acting in the physical world. Polarization directs Americans at other Americans, or rather at the Internet caricatures of other Americans, rather than at fundamental problems such as wealth inequality or foreign interference in democratic elections. The Internet creates a sense of “us and them” inside the country and an experience that feels like politics but involves no actual policy.”

To be sure, Trump is a Fascist. His so-called “base” consists of nazis and white supremacists. His playbook is straight from Joseph Goebbels with the “big lie” and the repetition of that “big lie.”

VR Headset Helps People Who Are Legally Blind to See

August 9, 2018

The title of this post is identical to the title of an article by Catherine de Lange in the News section of the 4 August 2018 issue of the New Scientist. Although this virtual reality headset does not cure the physical cause of blindness, the device does let people with severe macular degeneration resume activities like reading and gardening—tasks they previously found impossible.

Macular degeneration is a common, age-related condition. It affects about 11 million people in the US and around 600,000 people in the UK. Damage to blood vessels causes the central part of the eye, the macula, to degrade. This leaves people with a blind spot in the center of their vision, and can make those with the condition legally blind. Bob Massof at Johns Hopkins University says, “You can still see with your periphery, but it is difficult or impossible to recognize people, to read, to perform everyday activities.”

This new system is called IrisVision. It uses virtual reality (VR) to make the most of peripheral vision. The user puts on a VR headset that holds a Samsung Galaxy phone. It records the person’s surroundings and displays them in real time, so that the user can magnify the image as many times as they need for their peripheral vision to become clear. Doing so also helps to reduce or eliminate their blind spot.

Tomi Perski at IrisVision, who also has severe macular degeneration, says, “Everything around the blind spot looks, say, 10 times bigger, so the relative size of the blind spot looks so much smaller that the brain can’t perceive it anymore.” When he first started using the device it was an emotional experience. He says, “I sensed that I could see again and the tears started coming.”

Perski says, “If I were to look at my wife—and I’m standing 4 or 5 feet away—my blind spot is so large I couldn’t see her head at all.” But when he uses IrisVision the magnification causes the blind spot to be relatively much smaller, so that it no longer covers his wife’s whole head, just a small part of her face. He says, “If I just move that blind spot I can see her whole face and her expression and everything.”

The software automatically focuses on what the person is looking at, enabling them to go from reading a book on their lap to looking at the distance without adjusting the magnification or zoom manually. Colors are given a boost because many people with macular degeneration have trouble distinguishing them (the cones are largely in the macular region), and users can place the magnification bubble over anything they want to see in even more detail, for example to read small print.
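A rough sketch of the geometry Perski describes may help. It assumes a central blind spot of fixed angular size and uniform digital magnification; it is emphatically not the device’s actual software, just the arithmetic behind why a magnified scene loses less of its content to the same blind spot.

# Rough geometry only, not IrisVision's algorithm: with a central blind spot of
# fixed angular size, magnifying the camera image means each degree of the
# scene occupies more of the display, so the blind spot hides a proportionally
# smaller slice of the scene (small-angle approximation).
def scene_hidden_deg(scotoma_radius_deg, magnification):
    """Approximate angular radius of the scene region hidden behind the blind spot."""
    return scotoma_radius_deg / magnification

for m in (1, 2, 5, 10):
    hidden = scene_hidden_deg(scotoma_radius_deg=10.0, magnification=m)
    print(f"{m:>2}x magnification -> about {hidden:.1f} degrees of scene hidden")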

In a trial, 30 people used the system for two weeks, filling out questionnaires on their ability to complete daily activities before and after the period. David Rhew at Samsung Electronics America says, “They can now read, they can watch TV, they can interact with people, they can do gardening. They can do stuff that for years was not even a consideration.”
According to Rhew, the vision of participants was all but restored with the headset. Rhew says, “The baseline rate of vision in the individuals came in at 20/400, which is legally blind, and with the use of this technology it improved to 20/30, which is pretty close to 20/20 vision.” 20/40 is usually the standard that lets people drive without glasses; 20/30 is even better. This is not to say they can drive with this device, but rather to indicate the quality of the vision.

The results have been presented at the Association for Research in Vision and Ophthalmology annual meeting.

The headset is now being used in 80 ophthalmology centers around the US, and the next step is to adapt the software to work for other vision disorders.

The system costs $2500, which includes a Samsung Gear VR headset and a Galaxy S7 or S8 smartphone customized with the software.

Trump and the 2018 Election

July 27, 2018

At the joint press conference with Trump and Putin, Putin said that he wanted Trump to win and that he helped Trump win. The record (both video and print) of this conference that the White House published, which is supposed to be an accurate public record, omits these comments by Putin. And Trump is arguing that Russia is going to help the Democrats in 2018.

In case you’re wondering how Trump manages to do this, you must realize that Trump lives in a self-created reality that changes as a function of what he wants and what is convenient at the moment. Objective reality does not exist for Trump.

The obvious question is, how can Trump’s base not notice that Trump is not in touch with reality? The answer is that they are exclusive System 1 processors (see the many posts on Kahneman) who believe everything he says.

The immediately preceding post predicted a possible Constitutional Crisis resulting from disputed election results. The situation reminds HM of the response Benjamin Franklin gave to someone who asked what the outcome of the Constitutional Convention was. Franklin answered, “a republic, if you can keep it.” HM is becoming increasingly doubtful that we shall be able to keep it. What is needed is for Republicans to return to Republican values rather than serving as Trump’s unthinking lackeys.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Surprise, Maryland

July 26, 2018

The title of this post is identical to the title of an article in the 23 July 2018 issue of the Washington Post. The subtitle is “Your election contractor has ties to Russia. And other states also remain vulnerable to vote tampering.” Senior officials have revealed that an Internet technology company with which the state contracts to hold electronic voting information is connected to a Russian oligarch who is “very close” to Russian President Vladimir Putin. Maryland leaders did not know about the connection until the FBI told them.

Maryland is not a slacker on election security; it is regarded as being ahead of the curve relative to other states. So if even motivated states can be surprised, what about the real laggards?

Maryland’s exposure began when it chose a company to keep electronic information on voter registration, election results and other extremely sensitive data. Later this company was purchased by a firm run by a Russian millionaire and heavily invested in by a Kremlin-connected Russian billionaire. Currently the state does not have any sense that these Russian links have had any impact on the conduct of its elections, and it is scrambling to shore up its data handling before November’s voting. But the fact that the ownership change’s implications could have gone unnoticed by state officials is cause enough for concern. The quality of contractors that states employ to handle a variety of election-related tasks is just one of many concerns election-security experts have identified since Russia’s manipulation of the 2016 U.S. presidential election.

Maryland has pushed to upgrade its election infrastructure. It rented new voting machines in advance of the 2016 vote to ensure that they leave a paper trail. State election officials note that they hire an independent auditor to conduct a parallel count based on those paper records, with automatic recounts if there is a substantial discrepancy between the two tallies. Observers note that the state could still do better, for example by conducting manual post-election audits as well as electronic ones. But Maryland is still far more responsible than many others.

Recently Politico’s Eric Geller surveyed 40 states about how they would spend new federal election-security funding Congress recently approved. Here are some depressing results: “only 13 states said they intend to use the federal dollars to buy new voting machines. At least 22 said they have no plans to replace their machines before election—including all five states that rely solely on electronic voting devices, which cybersecurity experts consider a top vulnerability. In addition almost no states conduct robust statistic-based post-election audits to look for evidence of tampering after the fact. And fewer than one-third of states and territories have requested a key type of security review from the Department of Homeland Security.”

Moreover, Congress seems uninterested in offering any more financial help, despite states’ glaring needs. Republican federal lawmakers last week nixed a $380 million election-security measure.
So do not waste your time watching voter predictions and wondering whether there will be a “blue wave” to save the country from Trump. Russian election interference is guaranteed, and Trump, understandably, is taking no action. If there is no blue wave, Democrats will cry interference. If there is a “blue wave,” Trump will claim interference, even though such interference by Russia would make no sense; indeed, Trump has already made this assertion. Mixed results and widespread dissatisfaction are the likely result. And perhaps a Constitutional Crisis.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Wait, There’s More

July 25, 2018

More information has emerged on Russian interference in the 2016 Presidential Election. This post is based on an article titled “Burst of tweets from Russian operatives in October 2016 generates suspicion” by Craig Timberg and Shane Harris in the 21 July 2018 issue of the Washington Post. The article begins, “On the eve of one of the newsiest days of the 2016 presidential election, Russian operatives sent off tweets at a rate of about a dozen each minute. By the time they finished, more than 18,000 had been sent through cyberspace toward unwitting American voters, making it the busiest day by far in a disinformation operation whose aftermath is still roiling U.S. politics.”

Clemson University researchers have collected 3 million Russian tweets. The reason for this burst of activity on 6 Oct. 2016 is a mystery that has generated intriguing theories but no definitive explanation. The theories attempt to make sense of how such a heavy flow of Russian disinformation might be related to what came immediately after on Oct. 7. This was the day when Wikileaks began releasing embarrassing emails that Russian intelligence operatives had stolen from the campaign chairman for Hillary Clinton, revealing sensitive internal conversations that would stir controversy.

Complicating this analysis is the number of other noteworthy events on that day. That day is best remembered for the Washington Post’s publication of a recording of Donald Trump speaking crudely about women. Also on that day U.S. intelligence officials first made public their growing concerns about Russian meddling in the presidential election, following reports about the hacking of prominent Americans and intrusions into election systems in several U.S. states.

Two questions are: Could the Russian disinformation teams have gotten advance notice of the Wikileaks release, sending the operatives into overdrive to shape public reactions to the news? And what do the operatives’ actions that day reveal about Russia’s strategy and tactics as Americans head into another crucial election in just a few months?

The Clemson University researchers have assembled the largest trove of Russian disinformation tweets available so far. The database includes tweets between February 2014 and May 2018, all from accounts that Twitter has identified as part of the disinformation campaign waged by the Internet Research Agency, based in St. Petersburg, Russia and owned by a Putin associate.

The new data offer still more evidence of the coordinated nature of Russia’s attempt to manipulate the American election. The Clemson researchers dubbed it “state-sponsored agenda building.”

Overall the tweets reveal a highly adaptive operation that interacted tens of millions of times with authentic Twitter users—many of whom retweeted the Russian accounts—and frequently shifted tactics in response to public events, such as Hillary Clinton’s stumble at a Sept. 11 memorial.

The Russians working for the Internet Research Agency are often called “trolls” for their efforts to secretly manipulate online conversations. These trolls picked up their average pace of tweeting after Trump’s election. This was especially true for the more than 600 accounts targeting the conservative voters who were part of his electoral base, a surge the researchers suspect was an effort to shape the political agenda during the transition period by energizing core supporters.

For sheer curiosity, nothing in the Clemson dataset rivals Oct. 6. The remarkable combination of news events the following day has led several analysts, including the Clemson researchers, to suspect that there likely was a connection to the coming Wikileaks release. Other researchers dispute the conclusion that there was a connection.

However, last week’s indictment of Russian intelligence officers by Special Counsel Robert Mueller III made clear that the hack of Clinton campaign chairman John Podesta’s emails and their distribution through Wikileaks was a meticulous operation. Tipping off the Internet Research Agency, the St. Petersburg troll factory owned by an associate of President Vladimir Putin, might have been part of an overarching plan of execution, said several people familiar with the Clemson findings about the activity of the Russian trolls.

Clint Watts, a former FBI agent and an expert on the Russian troll armies and how they respond to new as well as upcoming events, like debates or candidate appearances, says that they tend to ramp up when they know something’s coming.

Although Watts did not participate in the Clemson research, his instincts fit with those of researchers Darren L. Linvill and Patrick L. Warren, who point to the odd consistency of the storm of tweets. More than on any other day, the trolls on Oct. 6 focused their energies on a left-leaning audience, with more than 70% of the tweets targeting Clinton’s natural constituency of liberals, environmentalists and African Americans. Linvill and Warren have written a paper on their research, now undergoing peer review, identifying 230 accounts they categorized as “Left Trolls” because they sought to infiltrate left-wing conversation on Twitter.

These Left Trolls did so in a way designed to damage Clinton, who was portrayed as corrupt, in poor health, dishonest and insensitive to the needs of working-class voters and various minority groups. In contrast, the Left Trolls celebrated Sen. Bernie Sanders and his insurgent primary campaign against Clinton and, in the general election, Green Party candidate Jill Stein.

For example, less than two weeks before election day, the Left Troll account @Blacktivist tweeted, “NO LIVES MATTER TO HILLARY CLINTON. ONLY VOTES MATTER TO HILLARY CLINTON.”

Ninety-three of the Left Troll accounts were active on Oct. 6 and 7, each followed by an average of 1,760 other Twitter accounts. Taken together, their messages could have directly reached Twitter accounts 20 million times on those two days, and reached millions of others through retweets, according to the Clemson researchers.
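
A rough back-of-envelope reading of those figures, under my assumption (not necessarily the researchers’ method) that “directly reached” simply means followers multiplied by tweets sent:

```python
# Back-of-envelope check of the reach figure above. Illustrative only; the
# Clemson methodology is not spelled out here, so "directly reached" is taken
# to mean followers multiplied by tweets sent.
accounts = 93                      # Left Troll accounts active on Oct. 6 and 7
avg_followers = 1760               # average followers per account
claimed_direct_reach = 20_000_000  # "reached Twitter accounts 20 million times"

followers_per_round = accounts * avg_followers            # one tweet from every account
implied_tweets_per_account = claimed_direct_reach / followers_per_round

print(f"Potential direct recipients per round of tweets: {followers_per_round:,}")
print(f"Implied tweets per account over the two days: {implied_tweets_per_account:.0f}")
```

On those numbers, each account would have needed to tweet on the order of a hundred-plus times across the two days, which squares with the burst of activity described above.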

Podesta’s emails made public candid, unflattering comments about Sanders and fueled allegations that Clinton had triumphed over him because of her connection to the Democratic party establishment. The Left Trolls on Oct. 6 appeared to be stirring up conversation among Twitter users potentially interested in such arguments, according to the Clemson researchers.

Warren, an associate professor of economics, said, “We think that they were trying to activate and energize the left wing of the Democratic Party, the Bernie wing basically, before the Wikileaks release that implicated Hillary in stealing the Democratic primary.”

U.S. officials with knowledge of information that the government has gathered on the Russian operation said they had yet to establish a clear connection between Wikileaks and the troll accounts that would prove they were coordinating around the release of campaign emails. The officials spoke on the condition of anonymity to share assessments not approved for official release.

But some clues have emerged that may point to coordination. It now appears that WikiLeaks intended to publish the Podesta emails closer to the election, and that some external event compelled the group to publish sooner than planned, the officials said.

One U.S. official said, “There is definitely a command and control structure behind the IRA’s use of social media, pushing narratives and leading people towards certain conclusions.”

Warren and Linvill found that Russian disinformation tweets generated significant conversation among other Twitter users. Between September and November 2016, references to the Internet Research Agency accounts showed up in the tweets of others 4.7 million times.

The patterns of tweets also show how a single team of trolls worked on different types of accounts depending on shifting priorities, one hour playing the part of an immigrant-bashing conservative, the next an African American concerned about police brutality, and the next an avid participant in “hashtag games” in which Twitter users riff on particular questions such as “#WasteAMillionIn3Words.” The answer on 11 July 2015 from IRA account @LoraGreen was, “Donate to #Hillary.”

Linvill said, “Day to day they seem to be operating as a business just allocating resources. It’s definitely one organization. It’s not one fat guy sitting in his house.”

Warren and Linvill collected their set of Internet Research Agency tweets using a social media analysis tool called Social Studio that catalogs tweets in a searchable format. The researchers collected all of the available tweets from 3,841 accounts that Twitter has identified as having been controlled by the Internet Research Agency, whose officials and affiliated companies have been charged with several crimes related to the 2016 election.
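
The cataloging step is conceptually simple. Here is a minimal sketch of that kind of filtering, not the researchers’ actual pipeline; it assumes a hypothetical local dump of tweets (tweets.csv with author, text, and created_at columns) and a hypothetical list of flagged handles (ira_handles.txt):

```python
# Minimal sketch of the cataloging step described above; NOT the researchers'
# actual pipeline. Assumes a hypothetical CSV dump "tweets.csv" with columns
# author, text, created_at, and a hypothetical file "ira_handles.txt" listing
# the flagged handles, one per line.
import pandas as pd

with open("ira_handles.txt") as f:
    flagged = {line.strip().lower() for line in f if line.strip()}

tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])
ira_tweets = tweets[tweets["author"].str.lower().isin(flagged)]

# A simple searchable catalog: tweet volume per flagged account, sorted by volume.
per_account = ira_tweets.groupby("author").size().sort_values(ascending=False)
print(f"{len(ira_tweets):,} tweets from {per_account.size:,} flagged accounts")
print(per_account.head(10))
```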

The Clemson researchers sorted the Internet Research Agency accounts into five categories, the largest two being “Right Troll” and “Left Troll.” The others focused on retweeting news stories from around the country, participating in hashtag games, or spreading a false news story about a salmonella outbreak in turkeys around the Thanksgiving season of 2015.

The largest and most active group overall was the Right Trolls, which typically had little profile information but featured photos the researchers described as “young, attractive women.” They collectively had nearly a million followers, the researchers said.

The Right Trolls pounced on the Sept. 11 stumble by Clinton to tweet at a frenetic pace for several days. They experimented with a variety of related hashtags such as #HillarySickAtGroundZero, #ClintonCollapse and #ZombieHillary before eventually focusing on #HillarysHealth and #SickHillary, tweeting these hundreds of times.

This theme flowed into several more days of intensive tweeting about a series of bombings in the New York area that injured dozens of people, stoking fears of terrorism.

When one group of accounts was tweeting at a rapid pace, others often slacked off or stopped entirely, underscoring the Clemson researchers’ conclusion that a single team was taking turns operating various accounts. The trolls also likely used some forms of automation to manage multiple accounts simultaneously and tweet with a speed impractical for humans, according to the researchers.
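
One simple way to see the kind of handoff pattern described above is to tally daily tweet volume per category and look for categories whose activity moves in opposite directions. A minimal sketch, assuming a hypothetical file ira_tweets_labeled.csv with created_at and category columns:

```python
# Illustrative sketch of the handoff pattern: tally daily tweet volume per
# category and check whether categories' volumes move in opposite directions.
# Assumes a hypothetical file "ira_tweets_labeled.csv" with columns
# created_at and category.
import pandas as pd

df = pd.read_csv("ira_tweets_labeled.csv", parse_dates=["created_at"])
daily = (df.set_index("created_at")
           .groupby("category")
           .resample("D")
           .size()
           .unstack(level=0)   # rows = days, columns = categories
           .fillna(0))

# A strongly negative correlation between two categories' daily volumes is one
# crude sign that the same team may be alternating between sets of accounts.
print(daily.corr().round(2))
```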

What Should Be Done

July 24, 2018

The first part of this post is taken from the Afterword of “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age,” by David E. Sanger.

“The first is that our cyber capabilities are no longer unique. Russia and China have nearly matched America’s cyber skills; Iran and North Korea will likely do so soon, if they haven’t already. We have to adjust to that reality. Those countries will no sooner abandon their cyber arsenals than they will abandon their nuclear arsenals or ambitions. The clock cannot be turned back. So it is time for arms control.”

“Second, we need a playbook for responding to attacks, and we need to demonstrate a willingness to use it. It is one thing to convene a ‘Cyber Action Group’ as Obama did fairly often, and have them debate when there is enough evidence and enough concert to recommend to the president a ‘proportional response.’ It is another thing to respond quickly and effectively when such an attack occurs.”

“Third, we must develop our abilities to attribute attacks and make calling out any adversary the standard response to cyber aggression. The Trump administration, in its first eighteen months, began doing just this: it named North Korea as the culprit in WannaCry and Russia as the creators of NotPetya. It needs to do that more often, and faster.”

“Fourth, we need to rethink the wisdom of reflexive secrecy around our cyber capabilities. Certainly, some secrecy about how our cyberweapons work is necessary—though by now, after Snowden and Shadow Brokers, there is not much mystery left. America’s adversaries have a pretty complete picture of how the United States breaks into the darkest of cyberspace.”

“Fifth, the world tends to move ahead with setting these norms of behavior even if governments are not yet ready. Classic arms-control treaties won’t work: they take years to negotiate and more to ratify. With the blistering pace of technological change in cyber, they would be outdated before they ever went into effect. The best hope is to reach a consensus on principles that begins with minimizing the danger to ordinary civilians, the fundamental political goal of most rules of warfare. There are several ways to accomplish that goal, all of them with significant drawbacks. But the most intriguing, to my mind, has emerged under the rubric of a “Digital Geneva Convention,” in which companies—not countries—take the lead in the short term. But countries must then step up their games too.”

There is much more in this book than could be covered in these healthymemory posts. The primary objective was to raise awareness of this new threat, this new type of warfare, and how ill-prepared we are to respond to it and to fight it. You are encouraged to buy this book and read it for yourself. If this book is relevant to your employment, have your employer buy this book.
It is important to understand that Russia made war on us by attacking our election, and that it will continue to do so. Currently we have a president who refuses to believe that we have been attacked. Moreover, it is possible that this president colluded with the enemy in this attack. Were he innocent, he would simply let the investigation take its course. Instead, his continuing denials, cries of witch hunt, and attacks on the intelligence agencies and Justice Department are unconscionable. This has been further exacerbated by Republicans aiding in this effort to undermine our democracy.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The 2016 Election—Part Three

July 22, 2018

This post is based on David E. Sanger’s “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age.” Once the GRU, via Guccifer 2.0, DCLeaks, and WikiLeaks, began distributing the hacked emails, each revelation of the DNC’s infighting or Hillary Clinton’s talks at fund-raisers became big news. The content of the leaks overwhelmed the bigger, more important question of whether everyone—starting with the news organizations reporting the contents of the emails—was doing Putin’s bidding. When in early August John Brennan, the CIA Director, began sending intelligence reports over to the White House in sealed envelopes, the administration was preoccupied with the possibility that a far larger plot was under way. The officials feared that the DNC was only an opening shot, or a distraction. Reports were trickling in that constant “probes” of election systems in Arizona and Illinois had been traced back to Russian hackers. Two questions were: Was Putin’s bigger plan to hack the votes on November 8? And how easy would that be to pull off?

Brennan’s intelligence reports of Putin’s intentions and orders made the CIA declare with “high confidence” that the DNC hack was the work of the Russian government at a time when the NSA and other intelligence agencies still harbored doubts. The sources described a coordinated campaign ordered by Putin himself, the ultimate modern-day cyber assault—subtle, deniable, launched on many fronts—incongruously directed from behind the centuries-old walls of the Kremlin. The CIA concluded that Putin didn’t think Trump could win the election. Putin, like everyone else, was betting that his nemesis Clinton would prevail. He was hoping to weaken her by fueling a post-election-day narrative that she had stolen the election through vote tampering.

Brennan argued that Putin and his aides had two goals: “Their first objective was to undermine the credibility and integrity of the US electoral process. They were trying to damage Hillary Clinton. They thought she would be elected and they wanted her bloodied by the time she was going to be inaugurated;” but Putin was hedging his bets by also trying to promote the prospects of Mr. Trump.

[Excuse the interruption of this discussion to consider where we stand today. Both Putin and Trump want to undermine the credibility and integrity of the US electoral process. Trump has been added because he is doing nothing to keep the Russians from interfering again. Much is written about the possibility of a “Blue Wave” being swept into power in the mid-term elections. Hacking into the electoral process again with no preventive measures would impede any such Blue Wave. Trump fears a Blue Wave as it might lead to his impeachment. This is one of his “Remain President and Keep Out of Jail” cards. Others will be discussed in later posts.]

Returning to the blog, at this time Trump began warning about election machine tampering. He appeared with Sean Hannity on Fox News promoting his claim of fraudulent voting. He also complained about the need to scrub the voting rolls, and pushed to make it as difficult as possible for non-Trump voters to vote. Moreover, he used this as his excuse for losing the popular vote.

At this time Russian propaganda was in full force via the Russian TV network and Breitbart News, Steve Bannon’s mouthpiece.

A member of Obama’s team, Haines acknowledged not having realized that two-thirds of American adults get their news through social media, saying, “So while we knew something about Russian efforts to manipulate social media, I think it is fair to say that we did not recognize the extent of the vulnerability.”

Brennan was alarmed at the election risk from the Russians. He assembled a task force of CIA, NSA, and FBI experts to sort through the evidence. And as his sense of alarm increased, he decided that he needed to personally brief the Senate and House leadership about the Russian infiltrations. One by one he went to these leaders, who had security clearances, so he could paint a clear picture of Russia’s efforts.

As soon as the session with twelve congressional leaders led by Mitch McConnell began, it went bad. It devolved into a partisan debate. McConnell did not believe what he was being told. He chastised the intelligence officials for buying into what he claimed was Obama administration spin. Comey tried to make the point that Russia had engaged in this kind of activity before, but this time it was on a far broader scale. The argument made no difference. It became clear that McConnell would not sign on to any statement blaming the Russians.

It should be remembered that when Obama was elected, McConnell swore he would do everything in his power to keep Obama from being reelected. McConnell is a blatant racist and 100% politician. The country is much worse for it. For McConnell, professionals interested in determining the truth do not exist. All that exists is what is politically expedient for him.

There was much discussion regarding what to do about Russia. DNI Clapper warned that if the Russians truly wanted to escalate, they had an easy path. Their implants were already deep inside the American electric grid. The most efficient way of turning Election Day into a chaotic finger-pointing mess would be to plunge key cities into darkness, even for just a few hours.

Another issue was that NSA’s tools had been compromised. With its implants in foreign systems exposed, the NSA temporarily went dark. At a time when the White House and Pentagon were demanding more options on Russia and a stepped-up campaign against ISIS, the NSA was building new tools because its old ones had been blown.

The 2016 Election—Part Two

July 21, 2018

This post is based on David E. Sanger’s “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age.” In March 2016 “Fancy Bear,” a Russian group associated with the GRU (Russian military intelligence), broke into the computers of the Democratic Congressional Campaign Committee before moving into the DNC networks as well. “Fancy Bear” was busy sorting through Podesta’s email trove. The mystery was what the Russians planned to do with the information they had stolen. The entire computer infrastructure at the DNC needed to be replaced. Otherwise it would not be known for sure where the Russians had buried implants in the system.

The DNC leadership began meeting with senior FBI officials in mid-June, and decided to give the story of the hack to the Washington Post. Both the Washington Post and the New York Times ran it, but it was buried in the political back pages. Unlike the physical Watergate break-in, the significance of a cyber break-in had yet to be appreciated.

The day after the Post and the Times ran their stories, a persona with the screen name Guccifer 2.0 burst onto the web, claiming that he—not some Russian group—had hacked the DNC. His awkward English, a hallmark of the Russian effort, made it clear he was not a native speaker. He contended he was just a very talented hacker, writing:

Worldwide known cyber security company CrowdStrike announced that the Democratic National Committee (DNC) servers had been hacked by “sophisticated” hacker groups.

I’m very please the company appreciated mu skills so highly)))
But in fact, it was easy, very easy.

Guccifer may have been the first one who penetrated Hillary Clinton’s and other Democrats’ mail servers. But he certainly wasn’t the last. No wonder any other hacker could easily get access to the DNC’s servers.

Shame on CrowdStrike: Do you think I’ve been in the DNC’s networks for almost a year and saved only 2 documents? Do you really believe it?

He wrote that thousands of files and emails were now in the hands of WikiLeaks. He predicted that they would publish them soon.

Sanger writes, “There was only one explanation for the purpose of releasing the DNC documents: to accelerate the discord between the Clinton camp and the Bernie Sanders camp, and to embarrass the Democratic leadership. That was when the phrase “weaponizing” information began to take off. It was hardly a new idea. The web just allowed it to spread faster than past generations had ever known.”

Sanger continues, “The digital break-in at the DNC was strange enough, but Trump’s insistence that there was no way it could be definitively traced to the Russians was even stranger. Yet Trump kept declaring he admired Putin’s “strength,” as if strength was the sole qualifying characteristic of a good national leader…He never criticized Putin’s moves against Ukraine, his annexation of Crimea, or his support of Bashar al-Assad in Syria.”

The GRU-linked emails weren’t producing as much news as they had hoped, so the next level of the plan kicked in: activating WikiLeaks. The first WikiLeaks dump was massive: 44,000 emails, more than 17,000 attachments. The deluge started right before the Democratic National Convention.

Many of these documents created discord in the convention. The party’s chair, Debbie Wasserman Schultz, had to resign just ahead of the convention over which she was to preside. In the midst of the convention Sanger and his colleague Nicole Perlroth wrote: “An unusual question is capturing the attention of cyber specialists, Russia experts and Democratic Party leaders in Philadelphia: Is Vladimir V. Putin trying to meddle in the American Presidential Election?”

A preliminary, highly classified CIA assessment circulating in the White House concluded with “high confidence” that the Russian government was behind the theft of emails and documents from the Democratic National Committee. This was the first time the government began to signal that a larger plot was under way.

Still the White House remained silent. Eric Schmitt and Sanger wrote, “The CIA evidence leaves President Obama and his national security aides with a difficult diplomatic decision: whether to publicly accuse the government of Vladimir V. Putin of engineering the hacking.”

Trump wrote on Twitter, “The new joke in town is that Russia leaked the disastrous DNC emails, which never should have been written (stupid), because Putin likes me.”

Sanger writes, “Soon it would not be a joke.”

The 2016 Election—Part One

July 20, 2018

This post is based on David E. Sanger’s “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age.” In the middle of 2015 the Democratic National Committee asked Richard Clarke to assess the political organization’s digital vulnerabilities. He was amazed at what his team discovered. The DNC—despite its Watergate history, despite the well-publicized Chinese and Russian intrusions into the Obama campaign computers in 2008 and 2012—was securing its data with the kind of minimal techniques one would expect to find at a chain of dry cleaners. The way spam was filtered wasn’t even as sophisticated as what Google’s Gmail provides; it certainly wasn’t prepared for a sophisticated attack. And the DNC barely trained its employees to spot “spear phishing” emails of the kind that fooled the Ukrainian power operators into clicking on a link, which then stole whatever passwords were entered. It lacked any capability for detecting suspicious activity in the network, such as the dumping of data to a distant server. Sanger writes, “It was 2015, and the committee was still thinking like it was 1792.”
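
That last gap, having no way to notice large amounts of data leaving the network, is the kind of thing even a crude monitor can flag. A toy sketch follows, assuming a hypothetical flow log outbound_flows.csv with timestamp, dest_host, and bytes_out columns; real network monitoring is of course far more involved.

```python
# Toy illustration of the missing capability: flag unusually large outbound
# transfers to unfamiliar destinations. Real monitoring is far more involved.
# Assumes a hypothetical flow log "outbound_flows.csv" with columns
# timestamp, dest_host, bytes_out.
import pandas as pd

KNOWN_HOSTS = {"mail.example.org", "backup.example.org"}  # placeholder allow-list
THRESHOLD_BYTES = 500 * 1024 * 1024                       # e.g. 500 MB per host per day

flows = pd.read_csv("outbound_flows.csv", parse_dates=["timestamp"])
daily = (flows.assign(day=flows["timestamp"].dt.date)
              .groupby(["day", "dest_host"], as_index=False)["bytes_out"]
              .sum())

suspicious = daily[(daily["bytes_out"] > THRESHOLD_BYTES)
                   & (~daily["dest_host"].isin(KNOWN_HOSTS))]
print(suspicious)
```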

So Clarke’s team came up with a list of urgent steps the DNC needed to take to protect itself. The DNC said they were too expensive. Clarke recalled “They said all their money had to go into the presidential race.” Sanger writes, “Of the many disastrous misjudgments the Democrats made in the 2016 elections, this one may rank as the worst.” A senior FBI official told Sanger, “These DNC guys were like Bambi walking in the woods, surrounded by hunters. They had zero chance of surviving an attack. Zero.”

When an intelligence report from the National Security Agency about a suspicious Russian intrusion into the computer networks at the DNC was tossed onto Special Agent Adrian Hawkins’s desk at the end of the summer of 2015, it did not strike him or his superiors at the FBI as a four-alarm fire. When Hawkins eventually called the DNC switchboard, hoping to alert its computer-security team to the FBI’s evidence of Russian hacking, he discovered that the DNC didn’t have a computer-security team. In November 2015 Hawkins contacted the DNC again and explained that the situation was worsening. This second warning still did not set off alarms.

Anyone looking for a motive for Putin to poke into the election machinery of the United States does not have to look far: revenge. Putin had won his election, but only by essentially assuring the outcome, and evidence of the fraud was captured on video that went viral.
Clinton, who was Secretary of State, called out Russia for its antidemocratic behavior. Putin took the declaration personally. The sight of actual protesters, shouting his name, seemed to shake the man known for his unchanging countenance. He saw this as an opportunity. He declared that the protests were foreign-inspired. At a large meeting he was hosting, he accused Clinton of being behind “foreign money” aimed at undercutting the Russian state. Putin quickly put down the 2011 protests and made sure that there was no repetition in the aftermath of later elections. His mix of personal grievance at Clinton and general grievance at what he viewed as American hypocrisy never went away. It festered.

Yevgeny Prigozhin developed a large project for Putin: a propaganda center called the Internet Research Agency (IRA). It was housed in a squat four-story building in Saint Petersburg. From that building, tens of thousands of tweets, Facebook posts, and advertisements were generated in hopes of triggering chaos in the United States and, at the end of the process, helping Donald Trump, a man who liked oligarchs, enter the Oval Office.

This creation of the IRA marked a profound transition in how the Internet could be put to use. Sanger writes, “For a decade it was regarded as a great force for democracy: as people of different cultures communicated, the best ideas would rise to the top and autocrats would be undercut. The IRA was based on the opposite thought: social media could just as easily incite disagreements, fray social bonds, and drive people apart. While the first great blush of attention garnered by the IRA would come because of its work surrounding the 2016 election, its real impact went deeper—in pulling at the threads that bound together a society that lived more and more of its daily life in the digital space. Its ultimate effect was mostly psychological.”

Sanger continues, “There was an added benefit: The IRA could actually degrade social media’s organizational power through weaponizing it. The ease with which its “news writers” impersonated real Americans—or real Europeans, or anyone else—meant that over time, people would lose trust in the entire platform. For Putin, who looked at social media’s role in fomenting rebellion in the Middle East and organizing opposition to Russia in Ukraine, the notion of calling into question just who was on the other end of a Tweet or Facebook post—of making revolutionaries think twice before reaching for their smartphones to organize—would be a delightful by-product. It gave him two ways to undermine his adversaries for the price of one.”

The IRA moved on to advertising. Between June 2015 and August 2017 the agency and groups linked to it spent thousands of dollars on Facebook ads each month, a fraction of the cost of an evening of television advertising on a local American television station. In this period Putin’s trolls reached up to 126 million Facebook users, while on Twitter they made 288 million impressions. Bear in mind that there are about 200 million registered voters in the US and only 139 million voted in 2016.

Here are some examples of the Facebook posts: a doctored picture of Clinton shaking hands with Osama bin Laden, and a comic depicting Satan arm-wrestling Jesus. The Satan figure says, “If I win, Clinton wins.” The Jesus figure responds, “Not if I can help it.”

The IRA dispatched two of its experts, a data analyst and a high-ranking member of the troll farm. They spent three weeks touring purple states. They did rudimentary research and developed an understanding of swing states (something that doesn’t exist in Russia). This allowed the Russians to develop an election-meddling strategy and let the IRA target specific populations within those states that might be vulnerable to influence by social media campaigns operated by trolls across the Atlantic.

Russian hackers also broke into the State Department’s unclassified email system, and they might also have gotten into some “classified” systems. They also managed to break into the White House system. In the end, the Americans won the cyber battle in the State and White House systems, though they did not fully understand how it was part of an escalation of a very long war.

The Russians also broke into Clinton’s election office in Brooklyn. Podesta fell prey to a phishing attempt. When he changed his password the Russians obtained access to sixty thousand emails going back a decade.

WannaCry & NotPetya

July 19, 2018

This post is based on “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age,” by David E. Sanger. The North Koreans got software stolen from the NSA by the Shadow Brokers group. So, the NSA lost its weapons and the North Koreans shot them back.

The North Korean hackers married NSA’s tool to a new form of ransomware, which locks computers and makes their data inaccessible—unless the user pays for an electronic key. The attack was spread via a phishing email similar to the one used by Russian hackers in the attacks on the Democratic National Committee and other targets in 2016. It contained an encrypted, compressed file that evaded most virus-detection software. Once it burst alive inside a computer or network, users received a demand for $300 to unlock their data. It is not known how many paid, but those who did never got the key—if there ever was one—to unlock their documents and databases.

WannaCry, like the Russian attacks on the Ukraine power grid, was among a new generation of attacks that put civilians in the crosshairs. Jared Cohen, a former State Department official, said, “If you’re wondering why you’re getting hacked—or attempted hacked—with greater frequency, it is because you are getting hit with the digital equivalent of shrapnel in an escalating state-against-state war, way out there in cyberspace.”

WannaCry shut down the computer systems of several major British hospital systems, diverting ambulances and delaying non-emergency surgeries. Banks and transportation systems across dozens of countries were affected. WannaCry hit seventy-four countries. After Britain, the hardest hit was Russia (Russia’s Interior Ministry was among the most prominent victims). The Ukraine and Taiwan were also hit.

It was not until December 2017, three years to the day after Obama accused North Korea of the Sony attacks, that the United States and Britain formally declared that Kim Jong-un’s government was responsible for WannaCry. President Trump’s homeland security adviser Thomas Bossert said he was “comfortable” asserting that the hackers were “directed by the government of North Korea,” but said that conclusion came from looking at “not only the operational infrastructure, but also the tradecraft and the routine and the behaviors that we’ve seen demonstrated in past attacks. And so you have to apply some gumshoe work here, and not just some code analysis.”

The gumshoe work stopped short of reporting how the Shadow Brokers allowed the North Koreans to get their hands on tools developed for the American cyber arsenal. Describing how the NSA enabled North Korean hackers was either too sensitive, too embarrassing, or both. Bossert was honest about the fact that having identified the North Koreans, he couldn’t do much else to them. “President Trump has used just about every lever you can use, short of starving the people of North Korea to death, to change their behavior,” Bossert acknowledged. “And so we don’t have a lot of room left here.”

The Ukraine was the victim of multiple cyberattacks. One of the worst was NotPetya, which was nicknamed by the Kaspersky Lab, itself suspected by the US government of providing back doors to the Russian government via its profitable security products. This cyberattack on the Ukrainians seemed targeted at virtually every business in the country, both large and small—from the television stations to the software houses to any mom-and-pop shops that used credit cards. Throughout the country computer users saw the same broken-English message pop onto their screens. It announced that everything on the hard drives of their computers had been encrypted: “Oops, your important files have been encrypted…Perhaps you are busy looking to recover your files, but don’t waste your time.” Then the false claim was made that if $300 was paid in bitcoin the files would be restored.

NotPetya was similar to WannaCry. In early 2018 the Trump administration said that NotPetya was the work of the Russians. It was clear that the Russians had learned from the North Koreans. They made sure that no patch of Microsoft software would slow the spread of their code, and no “kill switch” could be activated. NotPetya struck two thousand targets around the world, in more than 65 countries. Maersk, the Danish shipping company, was among the worst hit. It reported losing $300 million in revenues and had to replace four thousand servers and thousands of computers.
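
The “kill switch” being referred to is the quirk that blunted WannaCry: before doing damage, that code checked whether a hard-coded, unregistered web domain answered, and stood down if it did, which is why a researcher registering the domain slowed the outbreak. A conceptual sketch of that check, with a placeholder domain:

```python
# Conceptual sketch of a WannaCry-style kill switch: check whether a hard-coded
# domain answers and stand down if it does. The domain below is a placeholder,
# not the real one.
import urllib.request

KILL_SWITCH_URL = "http://example-unregistered-domain.invalid/"  # placeholder

def kill_switch_active(url: str, timeout: float = 5.0) -> bool:
    """Return True if the kill-switch domain answers, i.e. the code should halt."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except OSError:
        return False

if kill_switch_active(KILL_SWITCH_URL):
    print("Kill switch answered: stand down.")
else:
    print("No answer: the worm would have kept spreading.")
```

NotPetya, as noted above, was engineered with no such escape hatch.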

The Shadow Brokers

July 18, 2018

This is the fourth post based on David E. Sanger’s “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age.” Within the NSA, a group known as Tailored Access Operations (TAO) developed special hacking tools. These tools were used to break into the computer networks of Russia, China, and Iran, among others. These tools were posted by a group that called itself the Shadow Brokers. NSA’s cyber warriors knew that the code being posted was malware they had written. It was the code that allowed the NSA to place implants in foreign systems, where they could lurk unseen for years—unless the target knew what the malware looked like. The Shadow Brokers were offering a product catalog.

Inside the NSA, this breach was regarded as being much more damaging than what Snowden had done. The Shadow Brokers had their hands on the actual code, the cyberweapons themselves. These had cost tens of millions of dollars to create, implant, and exploit. Now they were posted for all to see—and for every other cyber player, from North Korea to Iran, to turn to their own uses.

“The initial dump was followed by many more, wrapped in taunts, broken English, a good deal of profanity, and a lot of references to the chaos of American politics.” The Shadow Brokers promised a ‘monthly dump service’ of stolen tools and left hints, perhaps misdirection, that Russian hackers were behind it all. One missive read, “Russian security peoples is becoming Russian hackers at nights, but only full moons.”

The Shadow Brokers’ posts raised the following questions. Was this the work of the Russians, and if so, was it the GRU trolling the NSA the way it was trolling the Democrats? Did the GRU’s hackers break into the TAO’s digital safe, or did they turn an insider, maybe several? And was this hack related to another loss of cyber tools, from the CIA’s Center for Cyber Intelligence, which had been appearing for several months on the WikiLeaks site under the name “Vault 7”? Most importantly, was there an implicit message in the publication of these tools, the threat that if Obama came after the Russians too hard for the election hack, more of the NSA’s code would become public?

The FBI and Brennan reported a continued decrease in Russian “probes” of the state election systems. No one knew how to interpret the fact. It was possible that the Russians already had their implants in the systems they had targeted. One senior aide said, “It wouldn’t have made sense to begin sanctions” just when the Russians were backing away.

Michael Hayden, formerly of the CIA and NSA, said that this was “the most successful covert operation in history.”

From Russia, With Love

July 17, 2018

The title of this post is identical to the title of the Prologue from “The Perfect Weapon: War, Sabotage, & Fear in the Cyber Age.” Andy Ozment was in charge of the National Cybersecurity & Communications Integration Center, located in Arlington, VA. He had a queasy feeling as the lights went out the day before Christmas Eve, 2015. The screens at his center indicated that something more nefarious than a winter storm or a blown-up substation had triggered the sudden darkness across a remote corner of the embattled former Soviet republic. The event had the markings of a sophisticated cyberattack, remote-controlled from someplace far from Ukraine.

This was less than two years after Putin had annexed Crimea and declared it would once again be part of Mother Russia. Putin had his troops trade in their uniforms for civilian clothing, and they became known as the “little green men.” These men with their tanks were sowing chaos in the Russian-speaking southeast of Ukraine and doing what they could to destabilize a new, pro-Western government in Kiev, the capital.

Ozment realized that the middle of the holidays was the ideal time for a Russian cyberattack against the Ukrainians. The electric utility providers were operating with skeleton crews. To Putin’s patriotic hackers, Ukraine was a playground and testing ground. Ozment told his staff that this was a prelude to what might well happen in the United States. He regularly reminded his staff that in the world of cyber conflict, attackers came in five distinct varieties: “vandals, burglars, thugs, spies, and saboteurs.” He said he was not worried about the thugs, vandals, and burglars. It was the spies, and particularly the saboteurs, who kept him up at night.

In the old days, analysts could know who launched the missiles, where they came from, and how to retaliate. This clarity created a framework for deterrence. Unfortunately, in the digital age, deterrence stops at the keyboard. The chaos of the modern Internet plays out in an incomprehensible jumble. There are innocent service outages and outrageous attacks, but it is almost impossible to see where any given attack came from. Spoofing the system comes naturally to hackers, and masking their location is pretty simple. Even in the case of a big attack, it would take weeks, or months, before a formal intelligence “attribution” would emerge from American intelligence agencies, and even then there might be no certainty about who instigated the attack. So this is nothing like the nuclear age. Analysts could warn the president about what was happening, but they could not specify, in real time and with certainty, where an attack was coming from or against whom to retaliate.

In the Ukraine the attackers systematically disconnected circuits, deleted backup systems, and shut down substations, all by remote control. The hackers planted a cheap program—malware named “KillDisk”—to wipe out the systems that would otherwise allow the operators to regain control. Then the hackers delivered the finishing touch: they disconnected the backup electrical system in the control room, so that not only were the operators now helpless, but they were sitting in darkness.

For two decades experts had warned that hackers might switch off a nation’s power grid, the first step in taking down an entire country.

Sanger writes, “while Ozment struggled to understand the implications of the cyber attack unfolding half a world away in Ukraine, the Russians were already deep into a three-pronged cyberattack on the very ground beneath his feet. The first phase had targeted American nuclear power plants as well as water and electric systems, with the insertion of malicious code that would give Russia the opportunity to sabotage the plants or shut them off at will. The second was focused on the Democratic National Committee, an early victim of a series of escalating attacks ordered, American intelligence agencies later concluded, by Vladimir V. Putin himself. And the third was aimed at the heart of American innovation, Silicon Valley. For a decade the executives of Facebook, Apple and Google were convinced that the technology that made them billions of dollars would hasten the spread of democracy around the world. Putin was out to disprove that thesis and show that he could use the same tools to break democracy and enhance his own power.”

Trump and North Korea

July 16, 2018

The situation between Trump and North Korea provides a salient, if not the most salient, example of the issues explored in THE PERFECT WEAPON. Trump has mistakenly declared that the threat of a nuclear armed North Korea is over.

Trump has met with Kim Jong-un. This was a major victory for Kim in that North Korea has desired a face-to-face meeting with the American President for a long time. The meeting was one of personal pleasure for Trump. His profuse praise of Kim Jong-un appeared sincere, even though Kim is one of the most ruthless, if not the hands-down most ruthless, of dictators. Clearly Kim is someone that Trump personally admires and would like to emulate.

The earlier name calling was just a ploy, and it is a good thing that it did not provoke Kim, as Kim could fire upon and destroy a large portion of Seoul with a simple command. This is the dilemma that has precluded taking any military action against North Korea. Actually, the capability of hitting the United States with missiles armed with nuclear warheads changes the situation very little from what it was before Kim developed this capability. Its primary role is that of prestige. North Korea is now in the nuclear club.

Kim realizes that if he ever hit the United States with nuclear weapons, there would be a massive nuclear retaliation by the United States against North Korea.

Regardless of what it says, North Korea is not going to relinquish its nuclear arsenal. They’ve played this negotiation game in the past, and they never follow through on their promises. The danger is that when Trump realizes that he has been played, he will threaten the “bloody nose” that he has threatened North Korea with in the past. Should he do this, Kim would likely use his cyberwarfare options. He could disrupt financial operations, the electrical grid, and communications, and effectively bring the United States to its knees. Even if Trump exercised his nuclear option, that would likely not deter the North Koreans. Many of North Korea’s servers and operators reside outside the country. Moreover, it is likely that the Chinese would come to North Korea’s aid as they did during the Korean War. America would be living for a substantial amount of time in the dark ages.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

THE PERFECT WEAPON

July 15, 2018

The title of this post is identical to the title of a book by David E. Sanger. The subtitle is “War, Sabotage, & Fear in the Cyber Age.” The following is from the Preface:

“Cyberweapons are so cheap to develop and so easy to hide that they have proven irresistible. And American officials are discovering that in a world in which almost everything is connected—phones, cars, electrical grids, and satellites—everything can be disrupted, if not destroyed. For seventy years, the thinking inside the Pentagon was that only nations with nuclear weapons could threaten America’s existence. Now that assumption is in doubt.

In almost every classified Pentagon scenario for how a future confrontation with Russia and China, even Iran and North Korea, might play out, the adversary’s first strike against the United States would include a cyber barrage aimed at civilians. It would fry power grids, stop trains, silence cell phones, and overwhelm the Internet. In the worst case scenarios, food and water would begin to run out; hospitals would turn people away. Separated from their electronics, and thus their connections, Americans would panic, or turn against one another.

General Valery Gerasimov is an armor officer who, after combat in the Second Chechen War, served as the commander of the Leningrad and then the Moscow military districts. Writing in 2013 Gerasimov pointed to the “blurring [of] the lines between the state of war and the state of peace” and—after noting the Arab Awakening—observed that “a perfectly thriving state can, in a matter of months and even days, be transformed into an arena of fierce armed conflict…and sink into a web of chaos.” Gerasimov continued, “The role of nonmilitary means of achieving political and strategic goals has grown,” and the trend now was “the broad use of political, economic, informational, humanitarian, and other nonmilitary measures—applied in coordination with the protest potential of the population.” He saw large clashes of men and metal as a thing of the past. He called for “long distance, contactless actions against the enemy” and included in his arsenal “informational actions, devices, and means.” He concluded, “The information space opens wide asymmetrical possibilities for reducing the fighting potential of the enemy,” and so new “models of operations and military conduct” were needed.

Putin appointed Gerasimov chief of the general staff in late 2012. Fifteen months later there was evidence of his doctrine in action with the Russian annexation of Crimea and occupation of parts of the Donbas in eastern Ukraine. It should be clear from Gerasimov’s writing, and from Putin’s decision to appoint him chief of the general staff, that the nature of warfare has radically changed. This needs to be kept in mind when there is talk of modernizing our strategic nuclear weapons. Mutual Assured Destruction, with the appropriate acronym MAD, was never a viable means of traditional warfare. It was and still is a viable means of psychological warfare, but it needs to remain at the psychological level.

Returning to the preface, “After a decade of hearings in Congress, there is still little agreement on whether and when cyberstrikes constitute an act of war, an act of terrorism, mere espionage, or cyber-enabled vandalism.” Here HM recommends adopting Gerasimov and Putin’s new definition of warfare.

Returning to the preface, “But figuring out a proportionate yet effective response has now stymied three American presidents. The problem is made harder by the fact that America’s offensive cyber prowess has so outpaced our defense that officials hesitate to strike back.”

James R. Clapper, a former director of national intelligence, said that was our problem with the Russians. There were plenty of ideas about how to get back at Putin: unplug Russia from the world’s financial system; reveal Putin’s links to the oligarchs; make some of his own money—and there was plenty hidden around the world—disappear. The question Clapper was asking was, “What happens next (after a cyber attack)?” And the United States couldn’t figure out how to counter Russian attacks without incurring a great risk of escalation.

Sanger writes, “As of this writing, in early 2018, the best estimates suggest there have been upward of two hundred known state-on-state cyberattacks—a figure that describes only those made public.”

This is the first of many posts on this book.

Microsoft Calls for Regulation of Facial Recognition

July 14, 2018

The title of this post is the same as the title of an article by Drew Harwell in the 12 July 2018 issue of the Washington Post. Readers of the healthy memory blog should know that there have been many posts demanding data on the accuracy of facial recognition software, as well as a party responsible for assessing its accuracy. As has been mentioned in many posts, the portrayal of the accuracy of facial recognition software on television, especially on police shows, is misleading. And the ramifications of erroneous classifications can be serious.
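
The accuracy data those earlier posts call for ultimately boils down to a handful of error rates, ideally reported per demographic group. A minimal sketch with invented counts, using the standard false match rate and false non-match rate definitions:

```python
# Minimal sketch, with invented counts, of the error rates that any serious
# accuracy report should include, ideally broken out per demographic group.
def match_rates(true_pos: int, false_pos: int, true_neg: int, false_neg: int):
    """Return (false match rate, false non-match rate) for a face-matching test."""
    fmr = false_pos / (false_pos + true_neg)    # impostors wrongly accepted
    fnmr = false_neg / (false_neg + true_pos)   # genuine matches wrongly rejected
    return fmr, fnmr

# Hypothetical evaluation counts (TP, FP, TN, FN) for two demographic groups.
results = {"group A": (950, 5, 995, 50), "group B": (900, 40, 960, 100)}
for group, counts in results.items():
    fmr, fnmr = match_rates(*counts)
    print(f"{group}: false match rate {fmr:.1%}, false non-match rate {fnmr:.1%}")
```

If the error rates differ markedly between groups, as in the invented example, that is exactly the kind of finding independent testing would surface and self-policing would not.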

The article begins, “Microsoft is calling for government regulation on facial-recognition software, one of its key technologies, saying such artificial intelligence is too important and potentially dangerous for tech giants to police themselves. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike. The only way to regulate this broad use is for the government to do so.”

There’s been a torrent of public criticism aimed at Microsoft, Amazon and other tech giants over their development and distribution of the powerful identification and surveillance technology—including criticism from their own employees.

Last month Microsoft faced widespread calls to cancel its contract with Immigration and Customs Enforcement, which uses a set of Microsoft cloud-computing tools that also include facial recognition. In a letter to chief executive Satya Nadella, Microsoft workers said they “refuse to be complicit” and called on the company to “put children and families above profits.” The company said its work with Immigration and Customs Enforcement is limited to mail, messaging and office work.

This is a rare call for greater regulation from a tech industry that has often bristled at Washington involvement in its work. The expressed fear is that government rules could hamper new technologies or destroy their competitive edge. The expressed fear is not real if the government does the testing of new technologies. This does not hamper new technologies; rather, it protects the public from using inappropriate products.

Face recognition is used extensively in China for government surveillance. The technology needs to be open to greater public scrutiny and oversight. Allowing tech companies to set their own rules is an inadequate substitute for decision making by the public and its representatives.

Microsoft is moving more deliberately with facial recognition consulting and contracting work and has turned down customers calling for deployment of facial-recognition technology in areas where the company concluded there were greater human rights risks.

Regulators also should consider whether police or government use of face recognition should require independent oversight; what legal measures could prevent AI from being used for racial profiling; and whether companies should be forced to post notices that facial-recognition technology is being used in public places.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Conclusions

July 1, 2018

This is the sixth post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” Although this is an outstanding work by Dr. Roberts, the conclusions could have been better. Consequently, HM is providing his conclusions from this work. It is divided into two parts. The first part deals with implications for authoritarian governments. The second part deals with implications for democracies.

Authoritarian Governments

Mao Tse Tung initially used a heavy-handed approach to the control of information. Although he managed to maintain control of the regime, it was an economic and social disaster. Beginning with Deng Xiaoping, policies of reform and opening were introduced. This evolved slowly and serially. The dictator’s dilemmas were discussed in the first post, “Censored.” One dilemma is that the government would like to enforce constraints on public speech, but repression could backfire against the government. Censorship could be seen as a signal that the authority is trying to conceal something and is not in fact acting as an agent for citizens. Another dilemma is that even if the dictator would like to censor, by censoring, the autocrat has more difficulty collecting precious information about the public’s view of the government. The third dilemma is that censorship can have economic consequences that are costly for authoritarian governments that retain legitimacy from economic growth.

China has apparently handled these three dilemmas via porous censorship. Given that China has a highly effective authoritarian government, porous censorship appears to be highly effective. One could argue that China has provided a handbook for authoritarian governments, explaining how to maintain power, have a growing economy, and have a fairly satisfied public. It is still an open question how long this authoritarian government can maintain control. Although many Chinese are wealthy, and some are extremely wealthy, the majority of the country is poor. Although, in general, the standard of living has improved for virtually everyone, the amount of improvement differs greatly. China has emerged as one of the leading powers in the world.

The question is whether China is satisfied with being an economic power, or whether it also wants to be a military power. It is devoting a serious amount of money to its military forces and has built its first aircraft carrier. Other countries in the area, along with the United States, are justly concerned with China’s growing military power, especially its navy and air force. China has made it clear that it wants to dominate the South China Sea. There is also the possibility that when it thinks the time is right, it will invade Taiwan. It is clear that the United States does not want another land war in Asia. But US naval forces would be stretched very thin. And the loss of a couple of super carriers could result in a very short war.

Democracies

One can argue that democracy is already plagued with flooding. There is just way too much stuff on the internet. One could also argue that this is just too much of a good thing, but one would be wrong. Placing good information on the internet requires effort. Apart from entertainment, objective truth needs to be a requirement for the internet. Unfortunately, there are entities and individuals, such as the current president of the United States and the alt-right, that do not care about objective truth. So it is easy to post stuff on the internet that has no basis in objective reality. It is easy to spin conspiracy theories and all sorts of nonsense. So there is a problem on the production side. Information based on objective truth takes time to produce. Eliminating this goal of objective truth and letting the mind run wild provides the means of producing virtually endless amounts of nonsense, at least some of which is harmful.

But there is also effort on the receiving side. Concern with objective truth requires the use of what Kahneman terms System 2 processing, which is more commonly known as thinking. This requires both time and mental effort. However, a disregard for objective truth, such as what is produced by the alt-right, requires only believing, not thinking. It involves System 1 processing, which is also where our emotions sit.

Given that objective truth requires System 2 processing both for its production and its reception, and that a disregard for objective truth such as illustrated in alt-right products and conspiracy theories, requires only System 1 processing with emotional and gut feelings, the latter will likely overwhelm the former. This could spell the death of democracy. If so, the Chinese have provided an effective handbook for managing authoritarian governments.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Information Flooding

June 30, 2018

This is the fifth post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” Dr. Roberts writes, “information flooding is the least identifiable form of censorship of all the mechanisms described in this book. Particularly with the expansion of the Internet, the government can hide its identity and post online propaganda pretending to be unrelated to the government. Coordinated efforts to spread information online reverberate throughout social media because citizens are more likely to come across them and share them. Such coordinated efforts can distract from ongoing events that might be unfavorable to the government and can de-prioritize other news and perspectives.

We might expect that coordinated government propaganda efforts would be meant to persuade or cajole support from citizens on topics that citizens criticize the government about. However, the evidence presented in this chapter indicates that governments would rather not use propaganda to draw attention to any information that could shed a negative light on their performance. Instead, governments use coordinated information to draw attention away from negative events toward more positive news for their own overarching narrative, or to create positive feelings about the government among citizens. This type of flooding is even more difficult to detect, and dilutes the information environment to decrease the proportion of information that reflects badly on the government.”

Information flooding can be subtle. In other cases it can be quite glaring. On August 3, 2014, a 6.5 magnitude earthquake hit Yunnan province in China. The earthquake killed hundreds and injured thousands of people, destroying thousands of homes in the process. School buildings toppled and trapped children, reminiscent of the 2008 Sichuan earthquake, which killed 70,000 people. The government was heavily criticized for shoddy construction of government buildings. Emergency workers rushed to the scene to try to rescue survivors.

Eight hours after the earthquake struck, the Chinese official media began posting coordinated stories. These stories were not about the earthquake, but about controversial Internet personality Guo Meimei. Guo had reached Internet celebrity status three years earlier, in 2011, when she repeatedly posted pictures of herself dressed in expensive clothing and in front of expensive cars on Sina Weibo, attributing her lavish lifestyle to her job at the Red Cross in China. Although Guo did not work at the Red Cross, her boyfriend, Wang Jun, was on the board of the Red Cross Bo-ai Asset Management Ltd., a company that coordinated charity events for the Red Cross. The expensive items that Guo had posed with on social media in 2011 were allegedly gifts from Wang. This attracted millions of commenters on social media. The scandal highlighted issues with corruption of charities in China, and donations to the Red Cross plummeted.

By 2014, when the earthquake hit, the Guo Meimei scandal was old news, long forgotten by the fast pace of the Internet. On July 10, 2014, Chinese officials had arrested Guo on allegations of gambling on the World Cup. At midnight on August 4, 2014, Xinhua out of the blue posted a long, detailed account of a confession made by Guo Meimei that included admissions of gambling and engaging in prostitution. On the same day, many other major media outlets followed suit, including coverage by CCTV, the Global Times, Caijing, Southern Weekend, Beijing Daily, and Nanjing Daily. Obviously this was not an enormous coincidence. Rather, it was well-coordinated information flooding.

Coordination of information to produce such flooding is central to the information strategies of the Chinese propaganda system. The Chinese government is in the perfect position to coordinate because it has the resources and infrastructure to do so. The institution of propaganda in China is built in a way that makes coordination easy. The Propaganda Department is one of the most extensive bureaucracies within the Chinese Communist Party, infiltrating every level of government. It is managed and led directly from the top levels of the CCP.

China has a Fifty Cent Party that provides highly coordinated cheerleading. Current conceptions of online propaganda in China posit that the Fifty Cent Party is primarily tasked with countering anti-government rhetoric online. Social media users are accused of being Fifty Cent Party members when they defend government positions in heated online debates about policy or when they attack those with anti-government views. Scholars and pundits have viewed Fifty Cent Party members as attackers aimed at denouncing or undermining pro-West, anti-China opinion. For the most part, Fifty Cent Party members have been seen in the same light as traditional propaganda. They intend to persuade rather than to censor.

Instead of attacking, the largest portion of Fifty Cent Party posts in the leaked email archive were aimed at cheerleading for citizens and China—patriotism, encouragement or motivation of citizens, inspirational quotes or slogans, gratefulness, or celebrations of historical figures, China, or cultural events. That most of the posts seem intended to make people feel good about their lives, rather than to draw attention to anti-government threads on the Internet, is consistent with recent indications from Chinese propaganda officials that propagandists attempt to promote “positivity.” The Chinese Communist Party has recently focused on encouraging art, TV shows, social media posts, and music to focus on creating “positive energy” to distract from increasingly negative commercial news.

The Powerful Influence of Information Friction

June 29, 2018

This is the fourth post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” Dr. Roberts related that in May 2011 she had been following news about a local protest in Inner Mongolia in which an ethnic Mongol herdsman had been killed by a Han Chinese truck driver. In the following days increasingly large numbers of local Mongols began protesting outside of government buildings, culminating in protests sufficiently large that the Chinese government imposed martial law. These were the largest protests that Inner Mongolia had experienced in twenty years. A few months later Dr. Roberts arrived in Beijing for the summer. During discussions with a friend she brought up the Inner Mongolia protest. Her friend could not recollect the event, saying that she had not heard of it. A few minutes later, she remembered that a friend of hers had mentioned something about it, but when she looked for information online, she could not find any, so she assumed that the protest itself could not have been that important.

This is what happened. Bloggers who posted information about the protest online had their posts quickly removed from the Internet by censors. As local media were not reporting on the event, any news of the protest was reported mainly by foreign sources, many of which had been blocked by the Great Firewall. Even for the media, information was difficult to come by, as reporting on the protests on the ground had been banned, and the local Internet had been shut off by the government.

Dr. Roberts noted that information about the protest was not impossible to find on the Internet. She had been able to follow news of it from Boston and even from within China. The simple use of a Virtual Private Network and some knowledge of which keywords to search for had uncovered hundreds of news stories about the protests. But her friend, a well-to-do, politically interested, tech-savvy woman, was busy, and Inner Mongolia is several hundred miles away. So after a cursory search that turned up nothing, she concluded that the news was either unimportant or non-existent.

Another of her friends was very interested in politics and followed political events closely. She was involved in multiple organizations that advocated for gender equality and was an opinionated feminist. Because of her feminist activism, Dr. Roberts asked her whether she had heard of the five female activists who had been arrested earlier that year in China, including in Beijing, for their involvement in organizing a series of events meant to combat sexual harassment. The arrests of these five women had been covered extensively in the foreign press and had drawn an international outcry. Articles about the activists had appeared in the New York Times and on the BBC. Multiple foreign governments had called for their release. But posts about their detention were highly censored and the Chinese news media were prohibited from reporting on it. Her friend, who participated in multiple feminist social media groups and had made an effort to read Western news, still had not heard about their imprisonment.

Dr. Roberts kept encountering examples like these, where people living in China exhibited surprising ignorance of Chinese domestic events that had made headlines in the international press. They had not heard that the imprisoned Chinese activist Liu Xiaobo had won the Nobel Peace Prize. They had not heard about major labor protests that had shut down factories or about bombings of local government offices. Although it was possible to access this information, without newspapers, television, and social media blaring these headlines, they were much less likely to come across these stories.

Content filtering is one of the Chinese censorship methods. This involves the selective removal of social media posts in China that are written on the platforms of Chinese-owned Internet service providers. The government does not target criticism of government policies, but instead removes all posts related to collective action events, activists, criticism of censorship, and pornography. Censorship focuses on social media posts that are geo-located in more restive areas, like Tibet. The primary goal of government censorship seems to be to stop information flow from protest areas to other areas of China. Since large-scale protest is known to be one of the main threats to the Chinese regime, the Chinese censorship program is preventing the spread of information about protests in order to reduce their scale.

Despite extensive content filtering, if users were motivated and willing to invest time in finding information about protests, they could overcome information friction to find such information. Information is often published online before it is removed by Internet companies. There usually is a lag of several hours to a day before content is removed from the Internet.

Even with automated and manual methods of removing content, some content is missed. And if the event is reported in the foreign press, Internet users could access information by jumping the Great Firewall using a VPN.

The structural frictions of the Great Firewall are largely effective. Only the most dedicated “jump” the Great Firewall. Those who jump the Great Firewall are younger and have more education and resources. VPN users are more knowledgeable about politics and have less trust in government. Controlling for age, having a college degree makes a user 10 percentage points more likely to jump the Great Firewall. Having money is another factor that increases the likelihood of jumping the Great Firewall. 25% of those who jump the Great Firewall say they can understand English, as compared with only 6% of all survey respondents. 12% of those who jump work for a foreign-based venture, compared with only 2% of all survey respondents. 48% of the jumpers have been abroad, compared with 17% of all respondents.

The government has cracked down on some notable websites. Google began having conflicts with the Chinese government in 2010. Finally, in June 2014, the Chinese government blocked Google outright.

Wikipedia was first blocked in 2004. Pages about particular protests have long been blocked, but the entire Wikipedia website has occasionally been made inaccessible to Chinese IP addresses.

Instagram was blocked from mainland Chinese IP addresses on September 29, 2014, due to its increased popularity among Hong Kong protesters.

Censorship of the Chinese Internet

June 28, 2018

This is the third post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” The arrival of the web in 1995, following the Tiananmen crackdown, complicated the government’s ability to control the gatekeepers of information as channels of information transitioned from a “one to many” model, where a few media companies transferred information to many people, to a “many to many” model, where everyday people could contribute to media online and easily share news and opinions with each other. If the government had been intent on complete control over the information environment, one would expect it to have tried to slow the expansion of the Internet within the country. Instead the government actively pursued it. The Chinese government aggressively expanded Internet access throughout the country and encouraged online enterprises, as the CCP saw these as linked to economic growth and development.

As it pursued greater connectivity, the government simultaneously developed methods of online information control that allowed it to channel information online. The government issued regulations for the Internet in 1994, stipulating that the Internet could not be used to hurt the interest of the state. Immediately, the state began developing laws and technology that allowed it more control over information online, including filtering, registration of online websites, and capabilities for government surveillance.

The institutions that now implement information control in China for both news media and the Internet are aimed at targeting large-scale media platforms and important producers of information in both traditional and online media, making it more difficult for the average consumer to come across information that the Chinese government finds objectionable. The CCP also maintains control over key information channels to be able to generate and spread favorable content to citizens. The CCP’s control over these information providers allows it the flexibility to make censorship restrictions more difficult to penetrate during particular periods and to loosen constraints during others. This censorship system acts as a tax on information on the Internet, allowing the government to have it both ways: because information remains possible to access, those who care enough (such as entrepreneurs, academics, and those with international business connections) will bypass controls and find the information they need, while for the masses the impatience that accompanies surfing the web makes the control effective even though it is porous.

In 2013 President Xi Jinping upgraded the State Internet Information Office to create a new, separate administration for regulating Internet content and cyberspace. This office, the Cyberspace Administration of China (CAC), was run by the Central Cybersecurity and Informatization Leading Small Group, which Xi Jinping personally chaired. Xi was worried that there were too many bureaucracies regulating the Internet, so he formed the CAC to streamline Internet control. The CAC sought to enforce censorship online more strictly. This included shutting down websites that did not comply with censorship regulations and increasing the prevalence of the government’s perspective online by digitizing propaganda. This showed the importance the Xi administration placed on managing content on the Internet.

These institutions use a variety of laws and regulations to control information in their respective purviews. These laws tend to be relatively ambiguous, giving the state maximal flexibility in their enforcement. Censorship disallows a wide range of political discourse, including anything that “harms the interests of the nation,” “spreads rumors or disturbs social order,” “insults or defames third parties,” or “jeopardizes the nation’s unity.” Given the widespread discussion of protest events and criticism of the government online, the government cannot possibly arrest all those who violate a generous interpretation of this law. These institutions keep a close watch particularly on high-profile journalists, activists, and bloggers, developing relationships with these key players to control content and arresting those they view as dangerous. These activities are facilitated by surveillance tools that require users to register for social media with their real names and require Internet providers to keep records of users’ activities. Since Xi Jinping became president in 2012, additional laws and regulations have been written to prevent “hacking and Internet-based terrorism.”

The government can not only order traditional media to print particular articles and stories, it also retains flooding power on the Internet. The Chinese government hires thousands of online commentators to write pseudonymously at its direction. This is the Fifty Cent Party, an army of Internet commentators who work at the instruction of the government to influence public opinion during sensitive periods. These propagandists are largely instructed to promote positive feelings, patriotism, and a positive outlook on governance. They are unleashed during particularly sensitive periods as a form of distraction. This is in line with President Xi’s own statements that public opinion guidance should promote positive thinking and “positive energy.” They also sometimes defame activists or counter government criticism.

Since the government focuses its control on gatekeepers of information rather than on individuals, from the perspective of an ordinary citizen in China the information control system poses few explicit constraints. For those who are aware of censorship and are motivated to circumvent it, censorship poses an inconvenience rather than a complete constraint on their freedom. While minimizing the perception of control, the government is able to wield significant influence over which information citizens will come across.

Modern History of Information Control in China

June 27, 2018

This is the second post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.”

Censorship under Mao (1949-1976)
Under Mao the Chinese government exercised authority in all areas of citizens’ lives. The Party regarded information control as a central component of political control, and Party dogma, ideology, and doctrine pervaded every part of daily routine. Propaganda teams were placed in workplaces and schools to carry out work and education in the spirit of party ideology and to implement mass mobilization campaigns. Ordinary citizens were encouraged to engage in self-criticism—publicly admitting and promising to rectify “backward” thoughts.

Under Mao the introduction of “thought work” into everyday life meant that fear played a primary role in controlling information, and each citizen was aware of political control over speech and fearful of the consequences of stepping over the line. Everyday speech could land citizens in jail or worse.

During this period China, closed off from the Western world in an information environment completely controlled by the state, had among the most “complete” control of information a country could muster, akin to today’s North Korea.

Even with ideological uniformity and totalitarian control based on repression, both the Communist Party and the Chinese people paid a high price for highly observable forms of censorship that control citizens through brainwashing and deterrence. Citizens’ and officials’ awareness of political control stifled the government’s ability to gather information on the performance of policies, contributing to severe problems of economic planning and governance. The Great Leap Forward, in which about thirty million people died of starvation in the late 1950s, has been partially attributed to local officials’ fear of reporting actual levels of grain production to the center, which led them to report inflated numbers. Even after the Great Leap Forward, the inability of the Chinese bureaucracy to extract true economic reports from local officials and citizens led to greater economic instability and failed economic policies and plans.

This extensive control also imposed explicit constraints on economic growth. Large amounts of trade with other countries were not possible without loosening restrictions on the exchange of information with foreigners. Innovation and entrepreneurship require risk-taking, creativity, and access to the latest technology, all of which are difficult under high levels of fear that encourage risk-aversion. Millions of people were given class labels that made them second-class citizens or were imprisoned in Chinese gulags that prevented them from participating in the economy. Frequently, those who were persecuted had high levels of education and skills that the Chinese economy desperately needed. The planned economy, in concert with high levels of fear, stifled economic productivity and kept the vast majority of Chinese citizens in poverty.

Even in a totalitarian society with little contact with the outside world, government ideological control over the everyday lives of citizens decreased the government’s legitimacy and sowed seeds of popular discontent. Mao’s goal of ideological purity led him to encourage the Cultural Revolution, a decade-long period of chaos in China based on the premise of weeding out ideologically incorrect portions of society. In the process it killed millions of people and completely disrupted social order. The chaos of the Cultural Revolution, combined with resentment toward the extreme ideological left in the Chinese political system that had spawned it, created openings for dissent. In 1974, a poster written in Guangzhou under a pseudonym called explicitly for reform. Similar protests followed. During the first Tiananmen incident in 1976, thousands of people turned out to protest the ideological left. Several years later, in the Democracy movement of 1978 and 1979, protesters explicitly called for democracy and human rights, including free speech.

Censorship Reform Before 1989
In 1978, when Deng Xiaoping gained power, he initiated policies of reform and opening that were in part a reaction to the intense dissatisfaction of Chinese citizens with the Cultural Revolution and the prying hand of the government in their personal affairs. A hallmark of Deng’s transition to a market economy, which began in 1978, was the government’s retreat from the private lives of citizens and from the control of the media. Leaders within Deng’s government recognized the trade-offs between individual control and the entrepreneurship, creativity, and competition required by the market, and they decreased government emphasis on the ideological correctness of typical citizens in China. In the late 1970s and early 1980s, the Chinese Communist Party (CCP) rehabilitated those who had been political victims during the Cultural Revolution. Class labels were removed and political prisoners were released, enabling more than twenty million additional people to participate in the economy, many of whom had high levels of education. It has been noted that the “omnipresent fear” common in the Mao era lessened and personal relationships again became primarily private and economic. At first citizens began to criticize the government and express dissatisfaction privately, but later more publicly.

Not only did the government retreat from the private lives of individuals to stimulate the economy and address dissatisfaction, it also loosened its control over the media in order to reduce its own economic burden in the information industry. As other aspects of the Chinese economy privatized, the government began to commercialize the news media to respond to citizens’ demands for entertainment and for economic, international, and political news. This proved to be extremely lucrative for Chinese media companies. The lessened control also allowed Chinese media to compete with the new onslaught of international information that began to pour in as international trade and interactions increased, and Chinese media companies were able to innovate to retain market share in an increasingly competitive information environment.

In the 1980s there was an increasing decentralization of the economy from the central Party planning system to the localities. As the government began to decentralize its control, it began to rely on the media to ensure that local officials were acting in the interest of the Party. Watchdog media could help keep local businesses, officials, and local courts in check. Investigative journalism serves citizens by exposing the defective aspects of its own system. Freer media in a decentralized state can serve the government’s own interest as much as it can serve the interests of citizens.

The CCP did take significant steps in the 1980s toward relaxing control over the flow of information and loosening enforcement over speech, particularly in comparison with the Maoist era. By 1982, the Chinese constitution guaranteed free speech and expression for all Chinese citizens, including freedom of the press, assembly, and demonstration. Commercialization of Chinese newspapers began in 1979 with the first advertisement, and gradually the press began making more profit from the sales of advertising and less from government subsidies. Radio and television, which had previously been controlled by the central and provincial levels of government, expanded rapidly to local levels of government and were also commercialized.

In April 1989 the death of Hu Yaobang sparked the pro-democracy protests centered in Beijing’s Tiananmen Square. These protests spread all over China, culminating in an internal CCP crisis and a large-scale violent crackdown on protesters on June 4, 1989, that was condemned internationally.

Not surprisingly, this June 4 crisis marked a turning point in government strategy with respect to the media and the press. There was widespread consensus among the Party elites after the crackdown that the loosening of media restrictions had aggravated the student demonstrations. During the months of protests, reformers within the Party had allowed and even encouraged newspapers to discuss the protests. In the immediate aftermath of the crackdown on the protesters and the clearing of the square on June 4, 1989, censorship ramped up quickly. This large-scale crackdown on journalists, activists, and academics reintroduced widespread fear into the private lives of influential individuals, particularly among those who had been involved in the protest events. China was returning to the model of media serving the Party and expressing enthusiasm for government policies.

Post-Tiananmen: Control Minimizing the Perception of Control

Although the belief among government officials that free media had contributed to unrest prevented the CCP from returning to the extent of press freedom seen before Tiananmen Square, Deng did not return to the version of pre-reform information control that relied on fear-based control of individuals’ everyday lives, and he quickly reversed the post-Tiananmen crackdown on speech. Instead, government policy evolved toward a censorship strategy that attempted to minimize the perception of information control among ordinary citizens while still playing a central role in prioritizing information for the public. The government strengthened mechanisms of friction and flooding while for the most part staying out of the private lives of citizens. A few years after Tiananmen Square, the CCP returned to an apparent loosening of control, and commercialization of the media resumed in the mid-1990s. After Deng’s “Southern Tour” in 1992, meant to reemphasize the economy, broader discussions and criticisms of the state were again allowed, even publicly and even about democracy.

Even though the government did not return to Maoist-era censorship, it tightened its grip on the media, officials, journalists, and technology in a way that allowed targeted control: by managing the gatekeepers of information, the government could de-prioritize information unfavorable to itself and expand its own production of information to compete with independent sources. The government strengthened institutional control over the media. The CCP created stricter licensing requirements to control the types of organizations that could report news. It also required that journalists apply for press cards, which required training in government ideology. In spite of extensive commercialization that created the perception among readers that news was driven by demand rather than supply, the government retained control over the existence, content, and personnel decisions of newspapers throughout the country, allowing the government to effectively, if not always explicitly, control publishing.

The government proactively changed its propaganda strategies after Tiananmen Square, adapting Western theories of advertising and persuasion, and linking thought work with entertainment to make it more easily understood by the public. The CCP decided to instruct newspapers to follow Xinhua’s lead on important events and international news, much as it had done with the People’s Daily during the 1960s. In the 1990s, the Party also renewed its emphasis on “patriotic education” in schools around the country, ensuring that the government’s interpretations of events were the first interpretations of politics that students learned.

Censored

June 26, 2018

The title of this post is identical to the title of an important and highly relevant book by Margaret E. Roberts. The subtitle is “Distraction and Diversion Inside China’s Great Firewall.” This book is of special interest to HM. A number of summers back HM was privileged to participate in a month-long workshop on the effect of new technology on two countries: China and Iraq. The workshop included intelligence professionals, technology professionals, linguists, and experts on these specific topics. Why they were interested in a psychologist like HM was not clear to him, although it was a most stimulating month, and HM hopes he was able to make some contributions.

This book makes clear the sophisticated means that China uses to control information in the country. These were vaguely understood from the workshop, but Dr. Roberts brings them into clear view.

“China has four million websites, with nearly 700 million Internet users, 1.2 billion mobile phone users, 600 million WeChat and Weibo users, and generates 30 billion pieces of information every day. It is not possible to apply censorship to this enormous amount of data. Thus censorship is not the correct word choice. But no censorship does not mean no management.” So said Lu Wei, Director of the State Internet Information Office, China, in December 2015. As the former “gatekeeper of the Chinese Internet,” Lu Wei stresses in this epigraph that the thirty billion pieces of information generated each day by Chinese citizens quite simply cannot be censored.

So China has developed what is termed “porous” censorship. Dr. Roberts writes, “…most censorship methods implemented by the Chinese government act not as a ban but as a tax on information, forcing users to pay money or spend more time if they want to access the censored material. For example, when the government ‘kicked out’ Google from China in 2010, it did so simply by throttling the search engine so it loaded only 75% of the time.” So if you wanted to use Google, you just needed to be more patient. China’s most notorious censorship intervention, the blocking of a variety of foreign websites from Chinese users, could be circumvented by downloading a Virtual Private Network (VPN). Chinese social media users circumvent keyword censoring of social media posts by substituting similar words that go undetected for the words that the government blocks. This makes content accessible as long as you spend more time searching. Newspapers are often instructed by censors to put stories on the back pages of the newspaper, where access is just a few more flips of the page away. This technique is termed “friction,” because it creates friction that seriously slows, but does not eliminate, access to information. Porous censorship is neither unique to China nor to the modern time period. Iran has been known simply to throttle information access and make it slower during elections.

The Russian government also uses armies of online bots and commentators to flood opposition hashtags, and make it more difficult, but not impossible, for people to find information on protests or opposition leaders. This technique is termed “flooding.” Essentially users are flooded and drown in information.

Conventional wisdom is that these porous censorship strategies are futile for governments, as citizens quickly learn to circumvent censorship that is not complete or enforced. Conventional wisdom is wrong. Many governments that have the capacity to enforce censorship more forcefully choose not to do so. Using censorship that taxes, rather than prohibits, information, in China and in other countries around the world, is a design choice, not an operational flaw.

The trade-offs between the benefits and costs of repression and censorship are often referred to as “the dictator’s dilemma.” One form of the dictator’s dilemma is when the government would like to enforce constraints on public speech, but repression could backfire against the government. Censorship could be seen as a signal that the authority is trying to conceal something and is not in fact acting as an agent for citizens.

Another form of the “dictator’s dilemma” is that even if the dictator would like to censor, by censoring the autocrat has more difficulty collecting precious information about the public’s view of the government. Fear of punishment scares the public into silence, and this creates long-term information collection problems for governments, which have an interest in identifying and solving problems of governance that could undermine their legitimacy. Greater transparency facilitates central government monitoring of local officials, ensuring that localities are carrying out central directives and not mistreating citizens. Allowing citizens to express grievances online also allows the government to predict and prevent the organization of protests.

What could perhaps be considered a third “dictator’s dilemma” is that censorship can have economic consequences that are costly for authoritarian governments that retain legitimacy from economic growth. Communications technologies facilitate markets, create greater efficiencies, lead to innovation, and attract foreign direct investment. Censorship is expensive—government enforcement or oversight of the media can be a drag on firms and requires government infrastructure. Economic stagnation and crises can contribute to the instability of governments. Censorship can exacerbate crises by slowing the spread of information that protects citizens. When censorship contributes to crises and economic stagnation, it can have disastrous long-term political costs for governments.

So “porous” censorship is much more efficient than heavy-handed control that attempts to block virtually all information by inducing fear in users.

Putin’s Peaks

June 25, 2018

The title of this post is identical to the title of an article by Dmitry Kobak, Sergey Shpilkin, and Maxim S. Pshenichnikov in the June 2018 issue of “Significance.” “Significance” is a joint publication of the Royal Statistical Society and the American Statistical Association. The subtitle of the article is “Russian election data revisited.”

The article states that the Kremlin wanted a golden 70-70 win, meaning 70% of the vote with a turnout of 70%, to give it a clear mandate and provide it with a riposte to Western leaders who criticize Russia as an autocracy. What was actually achieved was a seemingly respectable turnout of 67.5%, with Putin securing 76.7% of the vote. But there have been criticisms of the election process, and doubts have been cast over the validity of the outcome. For instance, Golos, an election monitoring organization, has documented incidents of ballot stuffing at various polling stations, and multiple other violations both before and during the election (bit.ly/2HawRD3). At least since the mid-2000s, presidential and parliamentary elections in Russia have been accused of being fraudulent. From the Russian perspective, the two most important numbers that describe an election outcome are the turnout percentage and the leader’s result percentage. Although these percentages are not reported in the data sets from individual polling stations, they can be calculated from the information provided officially.

The authors (and others, including HM) have argued that, due to the human attraction to round numbers, large-scale attempts to manipulate reported turnout or the leader’s results would likely show up as unusually frequent whole (integer) percentages in the election data. A previous “Significance” article gave the hypothetical example of a polling station with 1755 registered voters. Here election officials decide to forge the results and report a turnout of 85%. 85% was chosen because it is a round number, more appealing than, say, 83.27%. To achieve a falsified turnout of 85%, this polling station needs to report 1755 x 0.85 = 1492 ballots cast. Other polling stations making similar attempts at fraud may also choose 85% as their target value, so that when we look at the turnout percentages for all polling stations, we see a noticeable spike in the number of stations with a turnout of exactly 85%. In a previous article these integer peaks were found in elections from 2004 to 2012.
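
To make the round-number mechanism concrete, here is a minimal Python sketch (not the authors’ code; the station counts, turnout levels, and 5% forgery rate are all invented for illustration) in which a small fraction of simulated stations forge their ballot counts to round turnout targets. The forged stations pile up at 80%, 85%, and 90%, producing the kind of integer peaks described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10,000 polling stations with honest turnout near 60%.
n_stations = 10_000
voters = rng.integers(500, 3000, size=n_stations)               # registered voters
true_turnout = np.clip(rng.normal(0.60, 0.08, n_stations), 0.2, 0.95)
ballots = rng.binomial(voters, true_turnout)                     # honest ballot counts

# An invented 5% of stations forge their counts to a round turnout target.
forgers = rng.random(n_stations) < 0.05
targets = rng.choice([0.80, 0.85, 0.90], size=n_stations)
ballots = np.where(forgers, np.round(voters * targets).astype(int), ballots)

reported = 100 * ballots / voters
for t in (80, 85, 90):
    near_t = int(np.sum(np.abs(reported - t) <= 0.1))
    print(f"stations within 0.1 points of {t}%: {near_t}")
```

With honest turnout centered near 60%, almost no station lands on those values by chance, so the forged spikes stand out immediately in a histogram of reported turnout.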

Since then two new elections have been held in Russia: the 2016 parliamentary and the 2018 presidential elections. As with previous elections, sharp periodic peaks are clearly visible at integer values (91%, 92%, and 93%) and at round integer values (80%, 85%, and 90%) rather than at fractional values (such as 91.3%).

The authors did Monte Carlo simulations of election results using a binomial distribution of ballots at every polling station. These strongly confirmed the hypothesis that results were being rounded to the benefit of the government. The authors note that integer peaks in the election data do not originate uniformly across all parts of Russia; they are mostly localized in the same administrative regions, providing additional evidence that these are not natural phenomena. Specific peaks can sometimes be traced to a particular city, or even an electoral constituency within a city, where turnout and/or the leader’s results are nearly identical at a large number of polling stations. The most prominent example from the last two elections was the city of Saratov in 2016. Its polling stations are the sole contributor to the sharp turnout peak at 64.3% and the leader’s result peak at 62.2%. These peaks are not integer and so are not counted toward the anomalies. Curiously, their product (the fraction of the leader’s votes with respect to the total number of registered voters) is a perfectly round 40%: 0.643 x 0.622 = 0.400.
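
The Monte Carlo logic can be sketched in the same spirit. This is a simplified reconstruction, not the authors’ actual code or data: each station’s ballot count is drawn from a binomial distribution around an estimated turnout probability, the share of stations landing within a small tolerance of a whole-number percentage is recorded for many simulated elections, and that null distribution is compared with the share observed in the real data (the observed_share value below is a placeholder, not a real figure).

```python
import numpy as np

rng = np.random.default_rng(1)

def integer_share(pct, tol=0.05):
    """Share of stations whose reported percentage lies within `tol`
    percentage points of a whole number."""
    return float(np.mean(np.abs(pct - np.round(pct)) < tol))

# Hypothetical per-station data (in practice taken from official results).
voters = rng.integers(500, 3000, size=5_000)
p_true = np.clip(rng.normal(0.65, 0.10, 5_000), 0.2, 0.95)

# Null model: ballots are binomial draws, so integer percentages occur
# only by chance.  Build the null distribution of the integer share.
null_shares = np.array([
    integer_share(100 * rng.binomial(voters, p_true) / voters)
    for _ in range(1_000)
])

observed_share = 0.06   # placeholder for the value measured in real data
p_value = float(np.mean(null_shares >= observed_share))
print(f"null mean {null_shares.mean():.4f}, p-value {p_value:.3f}")
```

If the observed share of integer percentages sits far in the tail of the null distribution, chance rounding cannot explain the peaks, which is the inference the authors draw.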

One could regard these discrepancies, assuming that the results otherwise accurately reflect the underlying true vote, as relatively innocuous. But the suspicion is that the results are significantly modified to get close to the golden 70-70.

It will be interesting to see whether this integer bias persists in future voting summaries. It is disappointing to see such a “rookie” flaw in a country noted for phony elections.

Russia’s newly developed strength is in influencing elections via technology. Previous healthy memory blog posts have discussed how Russia developed this new type of warfare. It began in homeland Russia. It was developed further in Russian-speaking countries and in Ukraine. And it has now been exported to Europe, where it is credited by some for the Brexit result, and to the United States, where it is credited by some (former DNI Clapper and HM among them) for Trump’s victory.

Moreover, Russia is perfecting this new form of warfare and is promising its continuance. There is much talk of the upcoming midterm elections in the United States, yet nary a word about Russian interference. Trump is not taking any actions to safeguard these elections, which is perfectly understandable, as Russian interference benefits the invertebrates supporting him. Even if the Russians are not entirely successful in benefitting Trump, just a small amount of interference could call into question the validity of the elections.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Possible Outcomes

May 22, 2018

This is the final post in this series. Unfortunately, Hayden does not come to any real conclusions at the end of “The Assault on Intelligence: American Security in an Age of Lies.” He just rambles on and on. From a career intelligence professional, one could expect better. He has made a career of dealing with large amounts of data of varying credibility and coming to conclusions, or at least to different possible outcomes weighted differently. But here he didn’t. So please tolerate HM’s offerings.

The president has already tweeted that the entire Department of Justice is the deep state. He has also told a New York Times reporter, “I have an absolute right to do what I want to do with the Justice Department.” Two conclusions can be drawn here.
Trump is woefully ignorant of the Constitution and of what he can do.
The Russian new way of conducting warfare has been highly successful.

Should the Democrats win back the House and the Senate, Trump could be impeached and removed from office.
However, this is a goal that is difficult to achieve, and likely impossible given Russian interference, which has been promised and which Trump is going to do nothing to prevent.

Mueller can finish his report and provide it to Congress. It is likely that Republicans would not be impressed by compelling evidence of obstruction of justice.

But what about conspiring with Russia to win the election? The United States has spent large amounts on defense. But to what end if the Russians have effectively captured the White House? Trump worships Putin and would gladly serve as his lap dog.

And suppose it is discovered that Trump owes large amounts of money to Russia and that Putin effectively owns him?

What happens in these latter two cases rests solely with the Republicans. Too many Republicans have been influenced by Russia’s new form of warfare and are doing everything they can to subvert Mueller’s work. They have already produced a biased report that excludes Democratic input and exonerates the president.

Similarly, if Trump fires Mueller and tries to close down the investigation, the question is how will Republicans respond to this constitutional crisis? If they’re complacent and do nothing, our democracy effectively goes down the drain. Trump is likely to declare himself President for life, and Russia would effectively occupy the oval office.

The Russians are generations ahead of the United States in this kind of warfare. If this were an old-fashioned shooting war, all Americans would be enraged and the country would be up in arms. But the type of highly effective warfare to which the Russians have advanced involves the human mind. Some US citizens are losing interest in Mueller’s investigation and are tired of it lasting so long. They seem not to care that they would be losing the White House to the Russians. All this requires thinking, that is, System 2 processing. System 1 processing, which means feeling, believing, not thinking, and being oblivious of the truth, is so much easier.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Trump, Russia, and Truth (Cont.)

May 21, 2018

This post is a continuation of the post of the same title taken from the book by Michael Hayden titled “The Assault on Intelligence: American Security in the Age of Lies.” This is the third post in the series.

Garry Kasparov, the Soviet chess champion turned Russian dissident, outlined the progression of Putin’s attacks. They were developed and honed first in Russia, then with Russian-speaking people nearby, before expanding to Europe and the U.S. These same Russian information operations have been used to undercut democratic processes in the United States and Europe, and to erode confidence in institutions like NATO and the European Union.

Hayden notes, “Committed to the path of cyber dominance for ourselves, we seemed to lack the doctrinal vision to fully understand what the Russians were up to with their more full-spectrum information dominance. Even now, many commentators refer to what the Russians did to the American electoral process as a cyber attack, but the actual cyber portion of that was fairly straightforward.”

Hayden writes, “Evidence mounted. The faux personae created at the Russian bot farm—the Saint Petersburg-based Internet Research Agency—were routinely represented by stock photos taken from the internet, and the themes they pushed were consistently pro-Russian. There was occasional truth to their posting, but clear manipulation as well, and they all seemed to push in unison.

“The Russians knew their demographic. The most common English words in their faux Twitter profiles were “God,” “military,” “Trump,” “family,” “country,” “conservative,” “Christian,” “America,” and “Constitution.” The most commonly used hashtags were #nuclear, #media, #Trump, and #Benghazi…all surefire dog whistles certain to create trending.”

It was easy for analysts to use smart algorithms to determine whether something was trending because of genuine human interaction or simply because it was being pushed by the Russian botnet. Analysts could see that the bots ebbed and flowed based upon the needs of the moment. Analysts tried to call attention to this, but American intelligence did not seem to be interested.
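
Hayden does not describe the analysts’ algorithms, but one simple heuristic of the kind he alludes to can be sketched: organically trending topics draw posts from many loosely connected accounts, while a botnet push is concentrated in a small, hyperactive cadre. The snippet below is a hypothetical illustration only; the input format, the function name, and the example numbers are all invented.

```python
from collections import Counter

def top_account_share(posts, k=10):
    """Fraction of a hashtag's volume produced by its k most active accounts.
    `posts` is a list of account IDs, one entry per post (hypothetical format)."""
    counts = Counter(posts)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(n for _, n in counts.most_common(k)) / total

# A tag where ten accounts generate 70% of the volume scores 0.7 and looks
# coordinated; a genuinely viral tag spread across thousands of users
# might score around 0.05.
print(top_account_share(["bot1", "bot1", "bot2", "user3", "bot1", "bot2"]))
```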

Analyst Clint Watts characterized 2014 as a year of capability development for the Russians and pointed to a bot-generated petition movement calling for the return of Alaska to Russia that got more than forty thousand supporters while helping the Russians build their cadre and perfect their tactics. With that success in hand, in 2015 the Russians started a real push toward the American audience by grabbing any divisive social issue they could identify. They were particularly attracted to issues generated from organic American content, issues that had their origin in the American community. Almost by definition, issues with a U.S. provenance could be portrayed as genuine concerns to America, and they were already preloaded in the patois of the American political dialogue, which included U.S.-based conspiracy theorists.

Hayden writes, “And Twitter as a gateway is easier to manipulate than other platforms since on Twitter we voluntarily break down into like-minded tribes, easily identified by our likes and by whom we follow. Watts says that the Russians don’t have to “bubble” us—that is, create a monolithic information space friendly to their messaging. We have already done that to ourselves since, he says, social media is as gerrymandered as any set of state electoral districts in the country. Targeting can become so precise that he considers social media “a smart bomb delivery system.” In Senate testimony, Watts noted that with tailored news feeds, a feature rather than a bug for those getting their news online, voters see “only stories and opinions suiting their preferences and biases—ripe conditions for Russian disinformation campaigns.”

Charlie Sykes believes “many Trump voters get virtually all their information from inside the bubble…Conservative media has become a safe space for people who want to be told they don’t have to believe anything that is uncomfortable or negative…The details are less important than the fact that you’re being persecuted, you’re being victimized by people you loathe.”

What we have here is an ideal environment for System 1 processors. They can feed their emotions and beliefs without ever seeing any contradicting information that would cause them to think and invoke System 2 processing.

Republican Max Boot railed against the Fox network as “Trump TV,” Trump’s own version of RT, and against its prime-time ratings czar Sean Hannity as “the president’s de facto minister of information.” Hayden says that there are what he calls genuine heroes on the Fox network, like Shepard Smith, Chris Wallace, Charles Krauthammer, Bret Baier, Dana Perino, and Steve Hayes, but for the most part he agrees with Boot. Hannity gave a platform to WikiLeaks’ Julian Assange shortly before Trump’s inauguration, traveling to London to interview him at the Ecuadorian embassy, where Assange had taken refuge from authorities following a Swedish rape allegation.

Hayden writes, “When the institutions of the American government refuse to kowtow to the president’s transient whim, he sets out to devalue and delegitimize them in a way rarely, if ever, seen before in our history. A free (but admittedly imperfect) press is “fake news,” unless, of course, it is Fox; the FBI is in “tatters,” led by a “nut job” director and conducting a “witch hunt”; the Department of Justice, and particularly the attorney general, is weak, and so forth.”

It is clear that Trump has experience only with a “family” business, where personal loyalty reigns supreme. He has no experience with government and is apparently ignorant of the separation of the three branches of government: legislative, judicial, and executive. The judicial and legislative branches are to be independent of the executive.

Apparently the White House lawyer, Ty Cobb, asked Trump whether he was guilty. Obviously, Trump said he was innocent, so Cobb told Trump to cooperate with Mueller and that would establish his innocence quickly and he could devote full time to his presidential duties.

Obviously, he is not innocent. On television he told Lester Holt that the reason he fired Comey was that he would not back off the Russia investigation. In other words, he has already been caught obstructing justice.

During the campaign he requested Hillary’s emails from the Russians. So he was conspiring with the Russians and this conspiracy was successful as he did indeed get the emails.

There are also questions regarding why he is so reluctant to take any action against Russia. One answer is that it is clearly in Trump’s interest for the Russians to interfere in the midterm elections, as he is concerned that the Democrats could regain control of both the House and the Senate, which would virtually guarantee that he would be impeached.

A related question regards his finances. Why has he never released his tax forms? There are outstanding debts that are not accounted for, and he seems to be flush with cash, but from where? The most parsimonious answer to this question is that he is in debt to Putin. In other words, Putin owns him.

We do not know what evidence Mueller has, but it appears to be substantial.

And Trump is behaving like a guilty person. Of course he denies his guilt and proclaims his innocence vehemently, but this only makes him appear guilty. He is viciously attacking the government and the constitution to discredit them, since he will not be able to prove his innocence. And the Russians have and will continue to provide the means for helping him try to discredit the justice system, the intelligence community, and the press.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Trump, Russia, and Truth

May 20, 2018

The title of this post is identical to the title of a chapter in “The Assault on Intelligence: American Security in an Age of Lies.” This book is by Michael V. Hayden who has served as the directors of both the National Security Agency (NSA) and the Central Intelligence Agency (CIA). This is the second post in the series.

In 2017 a detailed story in “Wired” magazine that revealed how Russia was subverting U.S. democracy cited a European study that found that, rather than trying to change minds, the Russian goal was simply “to destroy and undermine confidence in Western media.” The Russians found a powerful ally in Trump, who attacked American institutions with as much ferocity as did Russian propaganda, as when he identified the press as the “enemy of the American people.” The attack on the media rarely argued facts. James Poniewozik of the New York Times wrote in a 2017 tweet that Trump didn’t try to argue the facts of a case—“just that there is no truth, so you should just follow your gut & your tribe.”

Wired also pointed out the convergence between the themes of the Russian media/web blitz and the Trump campaign: Clinton’s emails, Clinton’s health, rigged elections, Bernie Sanders, and so forth. And then there was an echo chamber between Russian news and American right-wing outlets, epitomized by the claim that Clinton staffer Seth Rich was somehow related to the theft of DNC emails and the dumping of them on WikiLeaks—that it was an inside job and not connected to Russia at all.

Hayden writes, “Trump seemed the perfect candidate for the Russians’ purpose, and that was ultimately our choice not theirs. But the central fact to be faced and understood here is that Russians have gotten very good indeed at invading and often dominating the American information space. For me, that story goes back twenty years. I arrived in San Antonio, TX, in January 1996 to take command of what was then called the Air Intelligence Agency. As I’ve written elsewhere, Air Force Intelligence was on the cutting edge of thinking about the new cyber warfare, and I owed special thanks to my staff there for teaching me so much about this new battle space.”

“The initial question they asked was whether we were in the cyber business or the information dominance business? Did we want to master cyber networks as a tool of war or influence or were we more ambitious, with an intent to shape how adversaries or even societies received and processed all information? As we now have a Cyber Command and not an information dominance command, you can figure how all this turned out. We opted for cyber; Russia opted for information dominance.”

The Russian most interested in that capacity was General Valery Gerasimov, an armor officer who after combat in the Second Chechen War, served as the commander of the Leningrad and then Moscow military districts. Writing in 2013 Gerasimov pointed to the “blurring [of] the lines between the state of war and the state of peace” and—after noting the Arab Awakening—observed that “a perfectly thriving state can, in a matter of months and even days, be transformed into an arena of fierce armed conflict…and sink into a web of chaos.”

Gerasimov continued, “The role of nonmilitary means of achieving political and strategic goals has grown,” and the trend now was “the broad use of political, economic, informational, humanitarian, and other nonmilitary measures—applied in coordination with the protest potential of the population.” He saw large clashes of men and metal as a “thing of the past.” He called for “long distance, contactless actions against the enemy” and included in his arsenal “informational actions, devices, and means.” He concluded, “The information space opens wide asymmetrical possibilities for reducing the fighting potential of the enemy,” and so new “models of operations and military conduct” were needed.

Putin appointed Gerasimov chief of the general staff in late 2012. Fifteen months later there was evidence of his doctrine in action with the Russian annexation of Crimea and occupation of parts of the Donbas in eastern Ukraine.

Hayden writes, “In eastern Ukraine, Russia promoted the fiction of a spontaneous rebellion by local Russian speakers against a neofascist regime in Kiev, aided only by Russian volunteers, a story line played out in clever, high-quality broadcasts from news services like RT and Sputnik, coupled with relentless trolling on social media.” [At this time HM was able to view these RT telecasts at work. They were the best-done propaganda pieces he’s ever seen, because they did not appear to be propaganda, but rather high-quality, objective newscasts.]

Hayden concludes, “With no bands, banners, or insignia, Russia had altered borders within Europe—by force—but with an informational canopy so dense as to make the aggression opaque.”

The Assault on Intelligence

May 19, 2018

Michael V. Hayden has served as the director of both the National Security Agency (NSA) and the Central Intelligence Agency (CIA). His latest book is “The Assault on Intelligence: American Security in an Age of Lies.” Actually this title is modest. The underlying reality is that what the book describes is an attack on American democracy.

In 2016 Oxford Dictionaries’ word of the year was “post-truth,” a condition in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. A. C. Grayling characterized the emerging post-truth world as “over-valuing opinion and preference at the expense of proof and data.” Oxford Dictionaries president Casper Grathwohl predicted that the term could become “one of the defining words of our time.” Change “could become” to “has become,” and, unfortunately, you have an accurate characterization of today’s reality.

Kahneman’s two-system view of cognition is fitting here. This is a concept that should be familiar to healthy memory blog readers. System 1, called Intuition, refers to the most common mode of our cognitive processing. Normal conversation and the performance of skilled tasks are System 1 processes. Emotional processing is also done in System 1. System 2 is named Reasoning. It is controlled processing that is slow, serial, and effortful. It is also flexible. This is what we commonly think of as conscious thought. One of the roles of System 2 is to monitor System 1 for processing errors, but System 2 is slow and System 1 is fast, so errors do slip through.

Post-truth processing is exclusively System 1. It involves neither proof nor accurate data, and it is frequently emotional. That is the post-truth world. One of the most disturbing facts in Hayden’s book is that Trump does not care about objective truth. Truth is whatever he feels at a particular time. The possibility that Trump might have a delusional disorder, in which he is incapable of distinguishing fact from fiction, has been mentioned in previous healthy memory blog posts. That was proposed as a possible reason for the enormous number of lies he tells. But it is equally possible that he has no interest in objective truth. As far as he is concerned, objective truth does not exist.

Tom Nichols writes in his 2017 book “The Death of Expertise,” “The United States is now a country obsessed with the worship of its own ignorance…Google-fueled, Wikipedia-based, blog-sodden…[with] an insistence that strongly held opinions are indistinguishable from facts.” Nichols also writes about the Dunning-Kruger Effect, which should also be familiar to healthy memory blog readers. The Dunning-Kruger Effect describes the phenomenon of people thinking they know much more about a topic than they actually know, compared to the knowledgeable individual who is painfully aware of how much he still doesn’t know about the topic in question.

Trump is an ideal example of the Dunning-Kruger Effect. Mention any topic and Trump will claim that he knows more about it than anyone else. He knows more about fighting wars than his generals. He knows more about debt than anyone else (from personal experience, this might actually be true). He told potential voters that he was the only one who knew how to solve all their problems, without explaining how he knew or what his approach was. In point of fact, the only things he knows, and is unfortunately an expert at, are how to con and cheat people.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

What Can Be Done?

May 18, 2018

Many problems have been discussed in Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. First of all, people need to be made aware of these problems. Businesses, companies, and agencies should be willing, to the extent possible, to unweaponize these weapons of math destruction. If they are unwilling, laws should be enacted.

Dr. O’Neil thinks that data scientists should pledge a Hippocratic Oath, one that focuses on the possible misuses and misinterpretations of their models. Following the market crash of 2008, two financial engineers, Emanuel Derman and Paul Wilmott, drew up such an oath:

I will remember that I didn’t make the world, and it doesn’t satisfy my equations.

Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.

I will never sacrifice reality for elegance without explaining why I have done so.

Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.

I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.

The Electoral College Needs to Go

May 17, 2018

This post is based on Cathy O’Neil’s informative book, “Weapons of Math Destruction.” The penultimate chapter in the book shows how weapons of math destruction are ruining our elections. It is only recently that Facebook and Cambridge Analytica have been found to employ users’ data for nefarious purposes, yet Dr. O’Neil’s book was published in 2016. To summarize the chapter, weapons of math destruction are distorting, if not destroying, our elections. Actually, the most informative and most important part of the chapter is found in a footnote at the end:

“At the federal level, this problem could be greatly alleviated by abolishing the Electoral College system. It’s the winner-take-all mathematics from state to state that delivers so much power to a relative handful of voters. It’s as if in politics, as in economics, we have a privileged 1 percent. And the money from the financial 1 percent underwrites the microtargeting to secure the votes of the political 1 percent. Without the Electoral College, by contrast, every vote would be worth exactly the same. That would be a step toward democracy.”

Readers of the healthy memory blog should realize that the Electoral College is an injustice that has been addressed in previous healthy memory blog posts (13 to be exact). Twice in recent memory the Electoral College, not the popular vote, has produced presidents, with adverse effects. One result was a war in Iraq justified by nonexistent weapons of mass destruction. And most recently, the person most ill-suited for the presidency became president, contrary to the popular vote.

The justification for the Electoral College was the fear that ill-informed voters might elect someone unsuitable for the office. If there ever was a candidate unsuitable for the office, that candidate was Donald Trump. It was the duty of the Electoral College to deny him the presidency, a duty it failed to perform. So the Electoral College needs to be disbanded and never reassembled.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Broken Windows Policing

May 16, 2018

This post is based on Cathy O’Neil’s informative book, “Weapons of Math Destruction.” The title of this post should be familiar to anyone who has viewed the Blue Bloods television series, which advanced broken-windows policing as justification for the policies its characters pursued to prevent serious crimes. The justification for this policy has been an article of faith since 1982, when a criminologist named George Kelling teamed up with a public policy expert, James Q. Wilson, to write an article in the “Atlantic Monthly” on so-called broken-windows policing. According to Dr. O’Neil, “The idea was that low-level crimes and misdemeanors created an atmosphere of disorder in a neighborhood. This scared law-abiding citizens away. The dark and empty streets they left behind were breeding grounds for serious crimes. The antidote was for society to resist the spread of disorder. This included fixing broken windows, cleaning up graffiti-covered subway cars, and taking steps to discourage nuisance crimes. This thinking led in the 1990s to zero-tolerance campaigns, most famously in New York City. Cops would arrest people for jumping subway turnstiles. They’d apprehend people caught sharing a single joint and rumble them around the city in a paddy wagon for hours before eventually booking them.”

Violent crime did decline dramatically, and the zero-tolerance campaign was credited with the reduction. Others disagreed, citing the fallacy of “post hoc, ergo propter hoc” (after this, therefore because of this) and pointing to other possibilities, ranging from falling rates of crack cocaine addiction to the booming 1990s economy. Regardless, the zero-tolerance movement gained broad support, and the criminal justice system sent millions of mostly young minority males to prison, many of them for minor offenses.

Dr. O’Neil continues, “But zero tolerance actually had very little to do with Kelling and Wilson’s “broken-windows” thesis. Their case focused on what appeared to be a successful policing initiative in Newark, New Jersey. Cops who walked the beat there, according to the program, were supposed to be highly tolerant. Their job was to adjust to the neighborhood’s own standards of order and to help uphold them. Standards varied from one part of the city to another. In one neighborhood it might mean that drunks had to keep their bottles in bags and avoid major streets but that side streets were okay. Addicts could sit on stoops but not lie down. The idea was only to make sure the standards didn’t fall. The cops, in this scheme, were helping a neighborhood maintain its own order but not imposing their own.”

On the basis of this and other data, Dr. O’Neil comes to the conclusion that “we criminalize poverty, believing all the while that our tools are not only scientific, but fair.” Dr. O’Neil asks, “What if police looked for different kinds of crimes?” That may sound counterintuitive, because most of us, including the police, view crime as a pyramid. At the top is homicide. It’s followed by rape and assault, which are more common, then shoplifting, petty fraud, and even parking violations, which happen all the time. Minimizing violent crime, most would agree, is and should be a central part of a police force’s mission.

Dr. O’Neil asks an interesting question. What if we looked at the crimes carried out by the rich? “In the 2000s, the kings of finance threw themselves a lavish party. They lied, they bet billions against their own customers, they committed fraud and paid off rating agencies. Enormous crimes were committed there, and the result devastated the global economy for the best part of five years. Millions of people lost their homes, jobs, and health care.”

She continues, “We have every reason to believe that more such crimes are occurring in finance right now. If we’ve learned anything, it’s that the driving goal of the finance world is to make a huge profit, the bigger the better, and that anything resembling self-regulation is worthless. Thanks largely to the industry’s wealth and powerful lobbies, finance is underpoliced.”

Two Especially Troubling Problems

May 15, 2018

One of these problems is found in the chapter “Propaganda Machine: Online Advertising” in Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. Advertising is legitimate, but predatory advertising certainly is not. In predatory advertising, weapons of math destruction are used to identify likely subjects to be exploited. Not all, but some for-profit colleges were built and grew through weapons of math destruction. People identified as being in need of education or training were preyed upon and sold expensive online courses that were not likely to pay off in jobs or any sort of advancement.

HM learned a new word reading Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. That word was clopening. This is when an employee works late one night to close the store or cafe and then returns a few hours later, before dawn, to open it. Having the same employee closing and opening, or clopening, can make logistical sense for a company, but it leads to sleep-deprived workers and crazy schedules. Weapons of math destruction can identify optimal schedules for the company, but they also need to take into account the welfare of the employee. Scheduling can place the employee’s health in jeopardy, along with the employee’s family life.
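To make the scheduling point concrete, here is a minimal sketch of the kind of welfare check a scheduling system could run; the shift data and the eleven-hour minimum rest period are assumptions for illustration, not anything from the book:

```python
from datetime import datetime, timedelta

MIN_REST = timedelta(hours=11)  # assumed minimum rest between shifts; a policy choice for illustration

def flag_clopenings(shifts):
    """Return the (end, next_start) pairs separated by less than MIN_REST.

    `shifts` is a list of (start, end) datetimes for one employee, sorted by start time.
    """
    violations = []
    for (_, prev_end), (next_start, _) in zip(shifts, shifts[1:]):
        if next_start - prev_end < MIN_REST:
            violations.append((prev_end, next_start))
    return violations

# Hypothetical schedule: close at 11 p.m., open again at 5 a.m. the next day.
shifts = [
    (datetime(2018, 5, 14, 15, 0), datetime(2018, 5, 14, 23, 0)),
    (datetime(2018, 5, 15, 5, 0), datetime(2018, 5, 15, 13, 0)),
]
print(flag_clopenings(shifts))  # one violation: only six hours of rest
```

A scheduler that optimizes only store coverage will happily produce such pairs; adding a constraint like this is one way to build the employee’s welfare into the objective.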

Laws are clearly needed here. As for the predatory advertisers marketing online courses, they should be closed down and fined. Unfortunately, the office within the Consumer Financial Protection Bureau that was policing this problem has been shut down. Companies and businesses need to be held responsible for the health and welfare of their employees.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The General Problem of Proxies

May 14, 2018

This general problem of proxies is fairly ubiquitous, as outlined in Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. Remember that proxies are variables used to stand in for the variables of real interest, for which data are unavailable. The chapter “Ineligible to Serve” addresses problems proxies can create in getting a job. Once on the job, proxies can make it more difficult to hold the job. This is described in the chapter “Sweating Bullets: On the Job.” Proxies also cause problems in getting credit, which is described in the chapter “Collateral Damage: Landing Credit.” Similarly, proxies present problems in getting insurance, described in the chapter “No Safe Zone: Getting Insurance.”

So the effects of weapons of math destruction are ubiquitous. People need to be aware of when they might be getting screwed by these weapons, which is why “Weapons of Math Destruction” needs to be widely read.

Indeed, there are reasons why these weapons are being used, but care must be taken to reduce or eliminate the destruction. It is not only the individuals being evaluated who need to be aware, but also the businesses and agencies using the models, which should recognize the models’ shortcomings and the need to eliminate those shortcomings when possible. These models need to be made transparent, so that the proxies can be identified and the possibility of misclassification can be addressed.

There is also a chapter titled “The Targeted Citizen,” but since that topic is so much in the news about Facebook and the interference of Russia in the presidential election, that will not be addressed here.

Ranking Colleges

May 13, 2018

This post is based on Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”.

In 1983 the newsmagazine “U.S. News & World Report” decided it would evaluate 1,800 colleges and universities throughout the United States and rank them for excellence. Had they honestly considered whether they could accurately do this, they could have saved the country and the country’s colleges and universities from anxiety and confusion. But they were not honest, and they proceeded to build the magazine’s reputation and fortune.

How could one do this? One could conduct a national survey and have individuals rate the schools in terms of prestige. This could be done validly. But to rate them in terms of excellence? How is excellence defined? Would it be the satisfaction of recent graduates? Would it be the satisfaction of graduates further down the course of life?

The healthy memory blog has made the point in previous posts that what a student wants to learn and what career the student wants to pursue should be primary factors in choosing a college. All colleges, even the most prestigious ones, differ in what they have to offer. And what about the cost-effectiveness of colleges? This is probably the most important factor for the majority of students. One can pay through the nose to attend a prestigious college, but what is the benefit for the cost incurred?

The magazine picked proxies that seemed to correlate with success. They looked at SAT scores, student-teacher ratios, and acceptance rates. They analyzed the percentage of incoming freshmen who made it to sophomore year and the percentage of those who graduated. They calculated the percentage of living alumni who contributed money to their alma mater, surmising that if they gave a college money there was a good chance they appreciated the education there. Three-quarters of the ranking would be produced by an algorithm, an opinion formalized in code, that incorporated these proxies. For the other quarter, they would factor in the subjective views of college officials throughout the country.
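A minimal sketch of how such a proxy-driven ranking works; the weights, the scoring scale, and the example numbers are illustrative assumptions, not the magazine’s actual formula:

```python
# Hypothetical weights over the proxies named above; every input is on a 0-to-1 scale.
def proxy_score(school):
    return (0.25 * school["sat_percentile"]
            + 0.15 * school["freshman_retention"]
            + 0.20 * school["graduation_rate"]
            + 0.15 * (1 - school["acceptance_rate"])   # a lower acceptance rate reads as selectivity
            + 0.10 * school["alumni_giving_rate"]
            + 0.15 * school["student_teacher_score"])

def overall_score(school):
    # Roughly three-quarters algorithm, one-quarter reputation survey, mirroring the split described above.
    return 0.75 * proxy_score(school) + 0.25 * school["peer_reputation"]

school = {
    "sat_percentile": 0.82, "freshman_retention": 0.91, "graduation_rate": 0.78,
    "acceptance_rate": 0.35, "alumni_giving_rate": 0.12,
    "student_teacher_score": 0.60, "peer_reputation": 0.70,
}
print(round(overall_score(school), 3))
```

Because every input is a proxy, each one is cheap to move (soliciting more applications lowers the acceptance rate, for instance), which is exactly what makes a ranking like this easy to game.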

HM regards this procedure as pretty much ad hoc selection with no external validation. However, Dr. O’Neil is more charitable, writing, “U.S. News’s first data-driven ranking came out in 1988, and the results seemed sensible. However, as the rankings grew into a national standard, a vicious feedback loop materialized. The trouble was that the rankings were self-reinforcing.” So if a college was rated poorly in “U.S. News,” its reputation would suffer, and conditions would deteriorate. Top students would avoid it, as would top professors. Alumni would howl and cut back on contributions. The ranking would drop further. Dr. O’Neil concludes that the ranking was destiny.

Everyone was acting foolishly. In fact, this was a jury-rigged methodology that provided a proxy estimate of a school’s prestige. “U.S. News” should have discontinued the survey. Universities should have disclaimed the methodology and the ratings. Instead, they played the game and took actions just to improve their ratings. Read the book to learn the gory details.

Dr. O’Neil notes that when you create a model from proxies, it is far simpler to game it. This is because proxies are easier to manipulate than the complicated reality they represent. This is a common problem with big data and weapons of math destruction.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Finance and Big Data

May 12, 2018

This post is based on Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. Dr. O’Neil was originally applying her mathematical knowledge and skills in finance. In 2008 there was a catastrophic market crash. Although weapons of math destruction did not solely cause the financial crash, they definitely contributed to it. So Dr. O’Neil moved from finance to Big Data where her skills were readily transferable.

She writes, “In fact, I saw all kinds of parallels between finance and Big Data. Both industries gobble up the same pool of talent, much of it from elite universities like MIT, Princeton, or Stanford. These new hires are ravenous for success and have been focused on external metrics—like SAT scores and college admissions—their entire lives. Whether in finance or tech, the message they’ve received is that they will be rich, that they will run the world. Their productivity indicates that they’re on the right track, and it translates into dollars. This leads to the fallacious conclusion that whatever they’re doing to bring in more money is good. It ‘adds value.’ Otherwise, why would the market reward it?”

She continues, “In both of these industries, the real world, with all of its messiness, sits apart. The inclination is to replace people with data trails, turning them into more effective shoppers, voters, or workers to optimize some objective. This is easy to do, and to justify, when success comes back as an anonymous score and when the people affected remain every bit as abstract as the numbers dancing across the screen.”

She worried about the separation between technical models and real people and about the moral repercussions of the separation. She saw the same pattern emerging in Big Data that she’d witnessed in finance: a false sense of security was leading to widespread use of imperfect models, self-serving definitions of success, and the growing feedback loops.

She continued working in Big Data. She writes that her journey to disillusionment was more or less complete, and the misuse of mathematics was accelerating. She started a blog on this problem, and in spite of almost daily blogging she barely kept up with all the ways she was hearing of people being manipulated, controlled, and intimidated by algorithms. It began with teachers working under inappropriate value-added models (read the book to learn about this), then the LSI-R risk model, and continued from there. She quit her job to investigate the issue full time, which led to this book.

Three Kinds of Models

May 11, 2018

This post is based on Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. Many of us likely develop predictive models, but remain unaware that we are doing so. So Dr. O’Neil describes an internal, intuitive model she uses in planning family meals. She has a model of everyone’s appetite. She knows that one of her sons loves chicken (but hates hamburgers), while another will eat only pasta (with extra grated parmesan cheese). She also has to take into account that people’s appetites vary from day to day, so a change can catch her internal model by surprise. In addition to the information she has about her family, she knows the ingredients she has on hand or knows are available, plus her own energy, time, and ambition. The output is how and what she decides to cook. She evaluates the success of a meal by how satisfied her family seems at the end of it, how much they’ve eaten, and how healthy the food was. Seeing how well the meal is received and how much of it is enjoyed allows her to update her model for the next time she cooks. These updates and adjustments make it what is called a “dynamic model.”
Her model is a good model as long as she restricts it to her family. The technical term for this limitation is that it doesn’t scale. It will not work with larger or different families.
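Her meal-planning example is, in miniature, a predictive model with a feedback step. A minimal sketch of such a dynamic model, with invented dishes, preference weights, and update rule:

```python
# Preference weights start as guesses and are nudged after each meal,
# which is what makes the model "dynamic." Dishes and numbers are invented.
preferences = {"chicken": 0.6, "pasta": 0.6, "hamburgers": 0.3}

def choose_meal(prefs, available):
    return max(available, key=lambda dish: prefs[dish])

def update(prefs, dish, satisfaction, rate=0.2):
    """Move the dish's weight toward the observed satisfaction (0 to 1)."""
    prefs[dish] += rate * (satisfaction - prefs[dish])

tonight = choose_meal(preferences, ["chicken", "pasta"])
update(preferences, tonight, satisfaction=0.9)   # the family ate well tonight
print(tonight, round(preferences[tonight], 2))
```

The point of the sketch is the update step: the model’s predictions are checked against an outcome the modeler actually cares about and then revised, which is precisely what a weapon of math destruction fails to do.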

Examples of the best models are those used by professional baseball teams. There are an enormous number of variables that can be used to predict a team’s performance. Moreover, these models allow prediction of the team’s performance when different players are added or subtracted. The measure these models are designed to predict is the number of wins, and wins provide the feedback variable used to evaluate and improve the models.

Recidivism models are used to predict the likelihood that a prisoner, after being released from prison, will return to criminal behavior and end up back in jail. One of the more popular models is the Level of Service Inventory-Revised (LSI-R). It includes a lengthy questionnaire for the prisoner to fill out. One of the questions, “How many prior convictions have you had?”, is highly relevant to the risk of recidivism. Others are also clearly related, for example, “What part did others play in the offense? What part did drugs and alcohol play?”

Other questions are more problematic, for example, a question about the first time the respondent was ever involved with the police. For a white subject, the only incident to report might be the one that brought him to prison. However, young black males are likely to have been stopped by police dozens of times, even when they’ve done nothing wrong. A 2013 study by the New York Civil Liberties Union found that while black and Latino males between the ages of fourteen and twenty-four made up only 4.7% of the city’s population, they accounted for 40.6% of the stop-and-frisk checks by police. More than 90% of those stopped were innocent. Some of the others might have been drinking underage or carrying a joint. And unlike most rich kids, they got in trouble for it. So if early “involvement” with police signals recidivism, poor people and racial minorities look far riskier.
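The disparity in those figures can be made explicit with a little arithmetic drawn directly from the numbers quoted above:

```python
# Worked arithmetic from the NYCLU figures quoted above.
share_of_population = 0.047   # young black and Latino males, ages 14 to 24
share_of_stops = 0.406        # their share of stop-and-frisk checks

disparity = share_of_stops / share_of_population
print(round(disparity, 1))    # roughly 8.6: stopped at about eight to nine times their population share
```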

Although statistical systems like the LSI-R are effective in gauging recidivism risk, or are at least more accurate than a judge’s random guess, we find ourselves descending into a pernicious WMD feedback loop. A person who scores as “high risk” is likely to be unemployed and to come from a neighborhood where many of his friends and family have had run-ins with the law. Dr. O’Neil writes, “Thanks in part to the resulting high score on the evaluation, he gets a longer sentence, locking him away for more years in a prison where he’s surrounded by criminals, which raises the likelihood that he’ll return to prison. If he commits another crime, the recidivism model can claim another success. But in fact the model contributes to a toxic situation and helps to sustain it. That’s a signature quality of a WMD.”

This risk and the value of the LSI-R could be tested. There could be two groups. A control group would be administered the standard questionnaire. Another group would be administered a modified version that did not include responses that would tip off the race of the individual. The participants could be tracked over time. If the modified version of the questionnaire actually resulted in a lower rate of recidivism, then the original questionnaire could be identified as harmful not only to the respondent but also to society, since it would be increasing recidivism rather than reducing it.
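A minimal sketch of how the comparison proposed here could be evaluated once recidivism outcomes were collected; the test statistic is a standard two-proportion z-test, and the outcome counts are hypothetical:

```python
from math import sqrt

def two_proportion_z(reoffend_a, n_a, reoffend_b, n_b):
    """z statistic comparing two recidivism rates."""
    p_a, p_b = reoffend_a / n_a, reoffend_b / n_b
    p_pool = (reoffend_a + reoffend_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical outcomes: 300 of 1,000 re-offend under the original questionnaire
# versus 255 of 1,000 under the race-blind version.
z = two_proportion_z(300, 1000, 255, 1000)
print(round(z, 2))   # |z| > 1.96 would be significant at the 5% level
```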

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Weapons of Math Destruction

May 10, 2018

The title of this post is identical to the title of a book by Dr. Cathy O’Neil. The subtitle is “How Big Data Increases Inequality and Threatens Democracy.” Dr. O’Neil is a mathematician. She left her academic position to work as a quant (a quantitative expert) for D. E. Shaw, a leading hedge fund. Initially she was excited to be applying mathematics in the global economy. But the economy’s crash in the autumn of 2008 caused her to reevaluate what she was doing.

She writes, “The crash made it all too clear that mathematics, once my refuge, was not only deeply entangled in the world’s problems, but also fueling many of them. The housing crisis, the collapse of major financial institutions, the rise of unemployment—all had been aided and abetted by mathematicians wielding magic formulas. What’s more, thanks to the extraordinary powers that I love so much, math was able to combine with technology to multiply the chaos and misfortune, adding efficiency and scale to a system that I now recognized as flawed.”

She writes that the crisis should have caused everyone to take a step back and try to figure out how math had been misused and how a similar catastrophe could be prevented in the future. She writes, “But instead, in the wake of the crisis, new mathematical techniques were hotter than ever and expanding into still more domains. They churned 24/7 through petabytes of information, much of it scraped from social media or e-commerce websites. And increasingly they focused not on the movements of global financial markets but on human beings, on us. Mathematicians and statisticians were studying our desires, movements, and spending power. They were predicting our trustworthiness and calculating our potential as students, workers, lovers, criminals.”

These math-powered applications were based on choices made by fallible human beings. Although some choices were made with the best intentions, many of the models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Dr. O’Neil came up with a name for these harmful kinds of models: Weapons of Math Destruction, or WMDs for short.

She notes that statistical systems require feedback—something to tell them when they’re off track. The example she provides is that if amazon.com, through a faulty correlation, started recommending lawn care books to teenage girls, the clicks would plummet, and the algorithm would be tweaked until it got it right. However, without feedback, a statistical engine can continue spinning out faulty and damaging analyses while never learning from its mistakes. Such models end up defining their own reality and using it to justify their results. She writes that this type of model is self-perpetuating, highly destructive—and very common.
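A minimal sketch of that feedback step, using the lawn-care example; the segment, the baseline click-through rate, and the threshold are illustrative assumptions:

```python
def needs_retuning(clicks, impressions, baseline_ctr, tolerance=0.5):
    """Flag a recommendation whose click-through rate has collapsed relative to its baseline."""
    if impressions == 0:
        return False
    return (clicks / impressions) < tolerance * baseline_ctr

# Lawn-care books shown to a teenage-girl segment: 12 clicks on 10,000 impressions,
# against an assumed 2% baseline click-through rate for that segment.
print(needs_retuning(clicks=12, impressions=10_000, baseline_ctr=0.02))  # True: tweak the model
```

A weapon of math destruction, by contrast, is a model that never runs a check like this, so a faulty correlation can keep doing damage indefinitely.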

This book focuses on the damage inflicted by WMDs and the injustice they perpetuate. It discusses harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job.

Responsible Tech is Google’s Likely Update

May 9, 2018

The title of this post is identical to the title of an article by Elizabeth Dwoskin and Hayley Tsukayama in the 8 May 2018 issue of the Washington Post. At its annual developer conference scheduled to kick off today in its hometown of Mountain View, CA, Google is set to announce a new set of controls for its Android operating system, oriented around helping individuals and families manage the time they spend on mobile devices. Google’s chief executive, Sundar Pichai, is expected to emphasize the theme of responsibility in his keynote address.

Pichai is trying to address the increased public skepticism and scrutiny of the technology industry regarding the negative consequences of how its products are used by billions of people. Some of this criticism concerns the addictive nature of many devices and programs. In January two groups of Apple shareholders asked the company to design products to combat phone addiction in children. Apple chief executive Tim Cook has said he would keep the children in his life away from social networks, and Steve Jobs placed strict limitations on his children’s screen time. Even Facebook admitted that consuming Facebook passively tends to put people in a worse mood, according to both its internal research and academic reports. Facebook chief executive Mark Zuckerberg has said that his company didn’t take a broad enough view of its responsibility to society, in areas such as Russian interference and the protection of people’s data. HM thinks that this statement should qualify as the understatement of the year.

Google appears to be ahead of its competitors with respect to family controls. Google offers Family Link, a suite of tools that allows parents to regulate how much time their children can spend on apps and to remotely lock their child’s device. Family Link gives parents weekly reports on children’s app usage and offers controls to approve the apps kids download.

Google has also overhauled Google News. The new layout shows how several outlets are covering the same story from different angles. It will also make it easier to subscribe to news organizations directly from its app store.

HM visited Google’s campus at Mountain View, one of the trips provided by a month-long workshop he attended. It looks more like a university campus than a technology business. Different people explained what they were working on, and we ate at the Google cafeteria. This cafeteria is large, offers a wide variety of delicious food, and is open 24 hours so staff can snack or dine for free any time they want.

The most talented programmer with whom HM was privileged to work left us for an offer at Google. She felt the move was needed for her to further develop her already excellent programming skills.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Data is Needed on Facial Recognition Accuracy

May 8, 2018

This post is inspired by an article titled “Over fakes, Facebook’s still seeing double” by Drew Harwell in the 5 May 2018 issue of the Washington Post. In December Facebook offered a solution to its worsening problem with fake accounts: new facial-recognition technology to spot when a phony profile tries to use someone else’s photo. The company is now encouraging its users to agree to expanded use of their facial data, saying they won’t be protected from imposters without it. The Post article notes that Katie Greenmail and other Facebook users who consented to that technology in recent months have been plagued by a horde of identity thieves.

After the Post presented Facebook with a list of numerous fake accounts, the company revealed that its system is much less effective than previously advertised: the tool looks only for imposters within a user’s circle of friends and friends of friends—not the site’s 2-billion-user network, where the vast majority of doppelgänger accounts are probably born.

Before any entity uses facial recognition software, it should be compelled to test the software and describe in detail the sample the software was developed on, including the size and composition of that sample, and the performance of the software with respect to correct identifications, incorrect identifications, and no classifications. Facebook needed to do this testing and present the results. And Facebook users needed to demand these results before using face recognition. How many times do users need to be burned by Facebook before they terminate interactions with the application?
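The kind of reporting called for here amounts to publishing a confusion matrix plus the rate at which the system declines to decide. A minimal sketch, with a hypothetical set of labeled test results:

```python
from collections import Counter

def evaluation_report(results):
    """Summarize labeled test outcomes for a recognition system.

    `results` is a list of (predicted_id, true_id) pairs, with predicted_id set
    to None when the system declines to classify.
    """
    counts = Counter()
    for predicted, true in results:
        if predicted is None:
            counts["no_classification"] += 1
        elif predicted == true:
            counts["correct_identification"] += 1
        else:
            counts["incorrect_identification"] += 1
    total = len(results)
    return {outcome: count / total for outcome, count in counts.items()}

# Hypothetical test run of five probe photos.
results = [("A", "A"), ("B", "B"), ("C", "D"), (None, "E"), ("A", "A")]
print(evaluation_report(results))
```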

The way facial recognition is used on police shows on television seems like magic. A photo is taken at night with a cellphone and is run against a database that yields the identity of the individual and his criminal record. These systems seem to act with perfection. HM has yet to see a show in which someone in a database is incorrectly identified and then arrested by the police, interrogated, and charged. That must happen. But how often and under what circumstances? Someone with a criminal record is likely to be in the database, while the individual whose photo was taken may not be. If there is no true match, will the system return the best match it can and make a person who merely happens to be in the database a suspect in the crime?

The public, and especially defense lawyers, need to have quality data on how well these recognition systems perform.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

What a 72-Year Old Remembers About Technology

May 7, 2018

When HM was in college, there were only mainframe computers that used tape drives. He took a course in computer programming. Fortran was the primary language for science and engineering, but the mathematicians at Ohio State developed and used Scatran instead. At that time there were no computer science departments. Computer science was divided between the mathematics department and the electrical engineering department. I would write my programs and hand them off to the keypunch operators, who always complained, and unfortunately justly so, about the illegibility of my printing. Then I would submit my punched cards to the mainframe. They would give an estimate regarding the waiting time, but typically it took several hours.

When you learned the program had been run, you returned and asked for your output. Usually, you could determine from the nature of your output, what had happened. If the output was only several pages, then it was likely that there was a formatting or logical error in your program. If the output was quite thick, then it was likely that you read in the data improperly. If there was a mistake, then you had to debug the program and make your own manual corrections. There were assistants available who provided advice.

HM worked as a clerk-typist in the Army for a while. When mistakes were made, you tried to correct them with white out. If there were too many mistakes, or if a rewrite was needed, then the entire document had to be retyped. As a graduate student HM paid typists to type his Master’s Thesis and doctoral dissertation. As a professional psychologist there were typists on staff. When documents were long, HM made rewrites and corrections and gave the document back to the typist. It was not unusual for the entire document to be retyped. However, when the entire document was retyped there usually were mistakes. Sometimes a point of diminishing returns was reached in which a retyping would result in more errors than were in the document that needed to be retyped.

The first personal computers usually had the BASIC programming language installed and nothing else. These were primarily for hobbyists. When the first word processing programs appeared, they were like a godsend, as they made the labor-intensive typing task orders of magnitude easier. They eventually resulted in reductions in secretarial staff, as professionals could do their own typing. However, at this time, most statistical analyses were done on mainframes. This involved having data and programs keypunched, submitted to the mainframe, waiting for processing, and picking up the results.

When statistical programs were developed for personal computers, this all could be done by the statistician. In contrast to the old days when there would typically be a break of several hours waiting for the results, the PCs spit the results back within seconds. If there were problems, they needed to be addressed directly. The old break waiting for the results was missed.

When HM took physics in high school, the teacher would have one student design a circuit and provide it to the rest of the class. The students would then need to manually compute the electrical values at different points in the circuit. When HM was assigned this task, he designed a circuit where all these values could be computed in one’s head. At this time there were no pocket calculators. Only one student had a slide rule, so the rest of us needed to do the calculations manually. So when no manual calculations had to be made for my circuit, everyone got a perfect score. HM had made his point. We all understood electrical circuits, but even after 12 years of education we still made arithmetical errors.

It is difficult for HM to identify what he likes most about the new technology. Of course, word processing is highly appreciated. But the computational aids are especially appreciated. HM worked with MathCad and really appreciated the ease with which complex mathematical equations could be manipulated. HM is sorry he did not have such tools when he was studying these subjects. Doing arithmetic for eight years was tedious and a waste of time. Arithmetic provides little understanding of or appreciation for mathematics.

So although HM is envious of the developments in technology, he is disturbed about how it is used. He fears that the benefits of technology are not being truly exploited and technology is being used in a superficial manner that can be unhealthy. It is unhealthy to be constantly plugged in. But everywhere you go you see people with their faces glued to their smartphones. When they are walking through a park, they are apparently oblivious to nature with their preoccupation with their smartphones. Even at professional conventions, where professionals have traveled to interact personally with other professionals, you see them sitting together, not conversing, but with their faces glued to their smartphones.

People are preoccupied with whether or not they are liked, and count the number of friends they have. But the number of true friends one can have is quite small. Read the healthy memory blog post “How Many Friends are Too Many?” Robin Dunbar concludes that the maximum number of people we can call friends is 150. And the number of true friends is much lower than that. True friends consume both time and effort.

Technology also seems to have exacerbated the Dunning-Kruger Effect. The Dunning-Kruger Effect describes the phenomenon of people thinking they know much more about a topic than they actually know, compared to the knowledgeable individual who is painfully aware of how much he still doesn’t know about the topic in question. The Wikipedia is a tremendous source of knowledge. Unfortunately, people think that because they have accessed a topic in the Wikipedia they have acquired that knowledge, when what they have actually done is learn how to access the information. Understanding this knowledge requires time and effort.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

How Facebook Let A Friend Pass My Data to Cambridge Analytica

April 24, 2018

The title of this post is identical to the title of a News & Technology piece by Timothy Revell in the 21 April 2018 issue of the New Scientist. This Is Your Digital Life (TIYDL) is the name of the Facebook App whose data ended up in the hands of Cambridge Analytica. Presumably only 270,000 people used the TIYDL app, but Facebook estimates that Cambridge Analytica ended up with data from 87 million people. These data were used by Cambridge Analytica to perform election shenanigans. The United Kingdom (UK) is gathering claimants to take Facebook to court for mishandling their data.

People who used the TIYDL app gave it permission to access the public profile page, date of birth, and current city of each of their friends, along with the pages they liked. Facebook also says that “a small number of people gave access to their own timeline and private messages,” meaning that posts or messages from their friends would have been scooped up as well.

The TIYDL app was created by University of Cambridge professor Aleksandr Kogan to research how someone’s online presence corresponds to their personality traits. Kogan gave data from the app to Cambridge Analytica, which Facebook says was a violation of its terms of service. The UK’s information commissioner is also investigating whether it broke UK data protection laws. Data collected for research purposes can’t be given to a private company for a different use without consent. Kogan says that Facebook knew his intention was to pass the data on and that this was written into the TIYDL app’s terms and conditions.

When reporters told Facebook about the situation in 2015, the firm said Cambridge Analytica had to delete the data. Cambridge Analytica said it did this, but whistle-blower Christopher Wylie said it didn’t.

Now Facebook is informing the people involved. It has released a tool that lets people check if their data were involved (bit.ly/2uXuHOY). The author used the tool and found, to his surprise, that a friend had used the app.

The problem is that to use virtually any software you need to agree to the terms of agreement, which include the privacy policies. Researchers at Carnegie Mellon University found in 2012 that it would take the average person 76 days to read all the privacy policies that they see each year. Clearly this is unreasonable.

These agreements should be required to be of reasonable length and understandable to the layperson. Moreover, the default option should be “opt out,” and action should have to be taken by the user to “opt in.” This is necessary to be sure that people understand what they are doing.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

How Wikipedia Became the Internet’s Good Cop

April 10, 2018

The title of this post is identical to the title of an article by Noam Cohen in the Outlook Section of the 8 April 2018 issue of the Washington Post. The subtitle is “To combat fake news, tech companies want the wisdom of the crowd.”

Actually it is not only tech companies; everyone should want the wisdom of the crowd. Moreover, the contributors to the Wikipedia constitute a very knowledgeable and intelligent crowd. There is a standard that must be met for content to remain published in Wikipedia.

Wikipedia has sworn off advertising completely. Cohen writes that when Tim Berners-Lee conceived the web, he imagined it would look a lot like Wikipedia; that is, “a system in which sharing what you know or thought should be as easy as learning what someone else knew.”

Wikipedia serves as a remedy to the Dunning-Kruger Effect. Previous healthy memory posts have written about the Dunning-Kruger Effect. The effect describes the phenomenon of people thinking they know much more about a topic than they actually know, compared to the knowledgeable individual who is painfully aware of how much he still doesn’t know about the topic in question. HM experiences this effect practically every time he consults the Wikipedia. He fairly soon becomes somewhat familiar with how much he does not know about the topic, and becomes engaged to remedy this shortcoming. But as the effect describes, the more you learn, typically the more you become aware of how much more there is still to learn.

It is not enough just to learn the news of the day. Ultimately, that yields only superficial knowledge. In the Wikipedia, one can read meaningful, integrated presentations on different topics. Infrequent trips to the Wikipedia are insufficient. The Wikipedia should become, at least, a daily habit.

The Wikipedia is also an outstanding tool for fostering growth mindsets. The practice of the daily learning of new information is emphasized in the healthy memory blog as being one of the primary means for fostering a healthy memory.

It appears that the Wikipedia has replaced the encyclopedia. In the traditional encyclopedia experts were hired to write about topics. The crowd-sourced Wikipedia provides a more diverse coverage of most topics.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Ditch and Switch

April 9, 2018

This post is taken directly from the article titled “Our obsession with a ‘free’ internet led to Facebook data row,” by Jacob Aron in the 7 April 2018 issue of the New Scientist. The list offers privacy-respecting alternatives to online services:

Ditch: FACEBOOK
Facebook’s data-slurping habits are legendary, with many users choosing to delete the app from their phone in the wake of recent revelations.

Switch: DIASPORA
Diaspora decentralizes social networks by letting people set up their own servers to host content. Users retain ownership of their data and aren’t required to use their real name.

Ditch: GOOGLE
Google stores your entire search history and uses it to make website and video suggestions, profile you, and sell adverts.

Switch: DUCKDUCKGO
Search engine DuckDuckGo doesn’t store any information. All users see the same search results, so they aren’t tailored to your particular interests.

Ditch: TWITTER
Twitter uses the information it knows about you to sell ads—things like your age, gender or location.

Switch: MASTODON
Mastodon offers similar features to Twitter but is decentralized, meaning that anyone can set up a Mastodon server that is independently owned. Users on one server act as a single community, but can also communicate with people on other servers.

Ditch: GMAIL
Gmail used to make money by scanning your inbox for keywords, then showing you adverts based on your interests. Last year, Google announced it would no longer sell ads in this way—but emails are still scanned to power flight reminders, calendar updates, and other Google features.

Switch: PROTONMAIL
Protonmail encrypts all of its users’ emails, meaning it has no access to your inbox. A basic account is free, while extra features like folders require a subscription. The service is so secure that Cambridge Analytica reportedly used it.

The New Lesson Plan for Elementary School: Surviving the Internet

April 8, 2018

The title of this post is identical to the digital title of an article by Drew Harwell in the 7 April 2018 issue of the Washington Post. The article describes Yolanda Bromfield’s fifth-grade digital-privacy class. The lesson was on online-offline balance, so she asked how the students would act when they left school and reentered a world of prying websites, addictive phones, and online scams. One student answered, “I will make sure that I don’t tell nobody my personal stuff, and be offline for at least two hours every night.”

Author Harwell writes, “Between their math and literacy classes, these elementary school kids were studying up on perhaps one of the most important and least understood school subjects in America—how to protect their brains and survive the big, bad Web.”

This course is part of an experimental curriculum designed by Seton Hall University Law School professors and taught by legal fellows such as Bromfield. The class has been rolled out in recent months to hundreds of children in a dozen classrooms across New York and New Jersey. These classes are free, are folded into kids’ daily schedules, and are taught in the classrooms where fifth- and sixth-graders typically learn about the scientific method and the food chain. The director of Seton Hall Law’s Institute for Privacy Protection, Gaia Bernstein, who designed the program, said each class included about a half-dozen lessons taught to kids over several weeks, as well as a separate set of lectures for parents concerned about how “their children are disappearing into their screens.”

The program is funded by a $1.7 million grant awarded by a federal judge as part of a class-action consumer-protection settlement over junk faxes. Its aim is to teach students about privacy, reputation, online advertising, and overuse at around age 10, when, their research found, many American kids get their first cell phones.

The Seton Hall instructors said they had no interest in teaching kids digital abstinence or in instructing parents how to be the computer police. They conceded that the internet is a fact of life and children always find ways around their parents’ barriers.

The students’ parents are offered separate classes that focus largely on how parents should deal with kids’ overuse. Of course, in a world where much of their homework and friendships play out online, what normal use even looks like first needs to be defined. Bernstein said, “What really bothers parents is how they are losing their children, and how family life is changing.”

In February the advocacy group Common Sense Media said it would expand a “digital citizenship” curriculum now offered free at tens of thousands of public schools nationwide. This program addresses the topics of self-image, relationships, information literacy, and mental well-being. Lesson plans for the program range from kindergarten (“Going Places Safely,” “Screen Out the Mean”) to high school (“Taking Perspectives on Cyberbullying,” “Oops! I Broadcast It on the Internet”).

Let us hope that these activities grow and become standards.

Psychology to the Rescue

April 3, 2018

Psychologists’ goal is to understand the mind. Psychologist Brenden Lake says, “I really see twin goals here: understanding the human mind better and also developing machines that learn in more humanlike ways. I believe that if we can’t program a computer to explain human behavior, then we don’t fully understand it.” Noah Goodman, Ph.D., a professor of psychology and computer science at Stanford University, says, “Humans are the most intelligent system we know.” (Knowing the intimate failures of HM’s own mind, he finds this hard to believe, but he defers to the expert.) Alexa will respond to hundreds of voice commands but can’t hold a real conversation. Similarly, IBM’s Watson can win at Jeopardy but is still unable to accomplish some tasks that any one of us could do.

Watson and Google-affiliated DeepMind are built on deep neural networks. These networks are inspired by the way that neurons connect in the brain and are related to the “connectionist” way of thinking about human intelligence. In AI, the idea works like this: instead of physical neurons, deep neural networks have neuron-like computational units, stacked together in dozens of connected layers. If you want to create a neural network that can tell the difference between apples and bananas for a visual learning system, you present it with thousands of pictures of apples and bananas. Each image excites the “neurons” in the input layer. Those “neurons” pass on some information to the next layer, then the next layer, and so on. As the training progresses, different layers start to identify patterns at increasing levels of abstraction, like color, texture, or shape. Finally the system spits out a guess: apple or banana. If the guess is wrong, the connections among the neurons are adjusted accordingly. By processing thousands and thousands of training images, the system eventually becomes extremely good at the task at hand—figuring out the patterns that make an apple an apple and a banana a banana. This is a simple task, and the concept of neural networks has existed since the 1940s, but neural networks have increased enormously in complexity. The complexity of Watson truly boggles the human mind.
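The apple-versus-banana example maps onto a very small network. Here is a minimal sketch of the adjust-the-connections-when-wrong idea, using a single sigmoid unit over two invented, hand-picked features; real deep networks learn their own features from raw pixels across many layers:

```python
import random
from math import exp

# A toy "network": one sigmoid unit over two hand-picked features (elongation
# and redness), trained by gradient descent. The data are invented for illustration.

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# (elongation, redness) -> 1 for banana, 0 for apple
data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.7, 0.1), 1),
        ((0.2, 0.9), 0), ((0.1, 0.8), 0), ((0.3, 0.7), 0)]

random.seed(0)
w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
b = 0.0
rate = 1.0

for _ in range(2000):                      # repeated passes over the "training images"
    for (x1, x2), label in data:
        guess = sigmoid(w[0] * x1 + w[1] * x2 + b)
        error = label - guess              # a wrong guess produces a larger correction
        w[0] += rate * error * x1          # adjust the connections accordingly
        w[1] += rate * error * x2
        b += rate * error

print(round(sigmoid(w[0] * 0.85 + w[1] * 0.15 + b), 2))  # near 1: banana
print(round(sigmoid(w[0] * 0.15 + w[1] * 0.85 + b), 2))  # near 0: apple
```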

Unfortunately, neural networks do not provide an understanding of how we humans process information, other than to state that networks of neurons do it. We humans cannot provide this understanding because we also do not understand how our neurons do this. Although we have access to our conscious processes, the vast majority of our processing is not accessible to consciousness. Connectionist-oriented AI researchers believe that if we want to build truly flexible, humanlike intelligence, we will need not only to write algorithms that reflect human reasoning, but also to understand how the brain develops those algorithms to begin with. This is a job for psychology, and we psychologists have been working on these problems for close to a century.

Some researchers believe that studying how babies learn can provide insights that help build machines with flexible and humanlike intelligence. Dr. Linda Smith, a psychologist and AI researcher at Indiana University, believes that answers to the problem of writing algorithms that reflect human reasoning, and of understanding how the brain develops those algorithms, begin with research using human babies.

Dr. Smith said, “My personal view is that babies are the smartest things on earth in terms of learning; they can learn anything and they can do it from scratch. And what babies do that machines don’t do is to generate their own data.”

In one of a series of studies, Dr. Smith and her colleagues are outfitting babies and preschoolers with head-mounted video cameras to closely analyze how they see the world. In one study they found that during mealtimes, 8 to 10-month-old babies looked preferentially at a limited number of scenes and objects—their chair, utensils, food and more—in a way that may later help them learn their first words. They also found that the scenes and objects the babies choose to look at differ from the types of “training images” often seen in computational models for AI visual learning systems (Phil. Trans. R. Soc. B. Vol 372, No.1711, 2017).

This is just one example of research being done that provides information to AI researchers. It appears that there is a need for a marriage between code developed from psychological research and connectionist code. This should achieve a true symbiosis benefitting both psychology and computer science.

This post is based on an article by Lea Winerman titled “Making a Thinking Machine” in the April 2018 issue of the “Monitor on Psychology.”

Many thanks to my colleague russvane3 for providing comments on this post.

How to Deactivate Facebook

March 31, 2018

Healthy memory blog readers should be aware of HM’s contempt for Facebook. The two immediately preceding posts offered alternatives. It appears that some younger users are eschewing Facebook. The research firm eMarketer found that the number of 12- to 17-year-old American users of Facebook declined 9.9% in 2017, part of a drop of 2.8 million U.S. users of Facebook under age 25. The firm expects Facebook to shed another 2.1 million this year, as young people switch to other platforms.

Here’s how to rid this pestilence from your computer:

Go to general account settings.

Select edit under manage account.

Select deactivate.

Overlook the continuing hard sell to keep you.

Be persistent.

Note that this only deactivates your account.
Removing your account is so complicated you need to go to the Facebook help center.

#deletefacebook? Nah, just get even

March 30, 2018

The title of this post is identical to the title of an article by Christine Emba in the 24 Mar ’18 issue of the Washington Post. Ms Emba suggests that rather than just staying mad, you should try to get even. You can do plenty to protect yourself.

The first is to stop sharing.

The second is to log off.
Remember that Facebook is trying to consume as much of your time as possible and as much of your conscious attention as possible. Both your time and your conscious attention are too precious to squander on Facebook.

And, finally.
Trust no one.

DeepMind’s Virtual Psychology Lab Seeks Flaws in Digital Minds

February 28, 2018

The title of this post is identical to the title of an article by Chris Baraniuk in the News Section of the 10 February 2018 issue of the New Scientist. A team at Google’s DeepMind has developed a virtual 3D laboratory called Psychlab in which both humans and machines can take a range of simple tests and compare their cognitive abilities.

The tests were originally designed by psychologists to isolate and evaluate specific mental faculties in people, such as the ability to detect a change in an object that disappears and reappears. Now DeepMind’s AI agents are taking the same tests.

It is not surprising that DeepMind’s software was better at some tasks. For example, it excelled at visual search—finding a given symbol in a group of others. But it failed miserably when asked to track the positions of multiple symbols on a screen, a task that people can do fairly well.

One point of the project is to expose weaknesses in AIs that might otherwise go unnoticed. This should help developers improve their own systems. Accordingly, DeepMind has released Psychlab as an open-source project so anyone can use and adapt it to their needs.
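
The kind of side-by-side comparison Psychlab enables can be sketched in a few lines of Python. This is an illustration only, not code from Psychlab itself, and every task name and score below is invented.

# Given per-task scores for human participants and an AI agent, flag the tasks
# where the agent falls well short of the humans. All values are made up.
human_scores = {"visual_search": 0.92, "multiple_object_tracking": 0.88,
                "change_detection": 0.81}
agent_scores = {"visual_search": 0.99, "multiple_object_tracking": 0.41,
                "change_detection": 0.78}

GAP_THRESHOLD = 0.15  # arbitrary cutoff for a "noteworthy" weakness

for task, human in human_scores.items():
    agent = agent_scores[task]
    gap = human - agent
    flag = "  <-- possible weakness" if gap > GAP_THRESHOLD else ""
    print(f"{task:26s} human={human:.2f} agent={agent:.2f} gap={gap:+.2f}{flag}")

Flagging large gaps this way is one simple route to the weaknesses that might otherwise go unnoticed.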

Walter Boot at Florida State University says there may be few similarities between how an AI tackles a test and the way we do: “Even if the AI performance matches the human performance, it could be doing the task in a completely different way to a human.”

DeepMind’s co-founder Demis Hassabis has a neuroscience background. Miles Brundage at the University of Oxford says, “Comparing AI cognition with human cognition is still tantalising. Psychlab is in this spirit.”

Facebook May Guess Millions of People’s Sexuality to Sell Ads

February 25, 2018

The title of this post is identical to the title of an article in the News Section of the 24 Feb 2018 issue of the New Scientist. Last year Spain fined Facebook 1.2 million Euros for targeting adverts based on sensitive information without first obtaining explicit consent. In May, new EU-wide legislation called the General Data Protection Regulation (GDPR) takes effect; it states that users must be specifically asked before companies collect and use their sensitive information.

Angel Cuevas Rumin at Charles III University of Madrid and his colleagues have been conducting research on how Facebook uses its users’ information to target its adverts. The research team purchased three Facebook ad campaigns. One targeted users interested in various religions, another was aimed at people based on their political opinions, and a third targeted those interested in “transsexualism” or “homosexuality.” For 35 Euros, they reached more than 25,000 people.

Remember that in Europe it is against the law for companies like Facebook to use sensitive information without first obtaining explicit consent from its users. So it would appear that Facebook has broken the law. However, Facebook argues that interests are not the same as sensitive information, so they claim that they are in compliance with the law.

To assess how often sensitive interests are used to target adverts on Facebook, Cuevas and his team created an internet browser extension that analyses how you interact with adverts. It also records why you were shown a specific advert. Between October 2016 and October 2017, more than 3000 people from EU countries used the tool, corresponding to 5.5 million adverts. The team found more than 2000 reasons that Facebook had for showing someone an advert that related to sensitive interests, including politics, religion, health, sexuality, and ethnicity. About 90% of the people who used the extension were targeted with ads based on these categories.

Extrapolating from the demographics of the people using the browser extension, the team estimated that about 40% of all EU citizens, some 200 million people, may have been targeted using sensitive interests. (arxiv.org/abs/1802.05030).
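
A rough check of the arithmetic behind that headline figure takes only a couple of lines. This is a back-of-the-envelope sketch: the EU population value below is an approximation supplied here, and the team’s actual estimate rests on a more careful demographic weighting than a single multiplication.

eu_population = 510_000_000       # rough EU population at the time (an assumption)
estimated_share_targeted = 0.40   # the team's estimate of citizens targeted via sensitive interests

estimated_people = eu_population * estimated_share_targeted
print(f"~{estimated_people / 1e6:.0f} million people")   # ~204 million, i.e. "some 200 million"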

Europeans do not like this state of affairs. A survey in 2015 found that 63% of EU citizens don’t trust online firms, and more than half don’t like providing personal information in return for free services.

Neither does HM, who no longer uses Facebook.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Social Media Putting Democracy at Risk

February 24, 2018

This blog post is based on an article titled “YouTube excels at recommending videos—but not at deeming hoaxes” by Craig Timberg, Drew Harwell, and Tony Romm in the 23 Feb 2018 issue of the Washington Post. The article begins, “YouTube’s failure to stop the spread of conspiracy theories related to last week’s school shooting in Florida highlights a problem that has long plagued the platform: It is far better at recommending videos that appeal to users than at stanching the flow of lies.”

To be fair, YouTube’s fortunes are based on how well its recommendation algorithm is tuned to the tastes of individual viewers. Consequently, the recommendation algorithm is its major strength. YouTube’s weakness in detecting misinformation was on stark display this week as demonstrably false videos rose to the top of YouTube’s rankings. The article notes that one clip that mixed authentic news images with misleading context earned more than 200,000 views before YouTube yanked it Wednesday for breaching its rules on harassment.

The article continues, “These failures this past week—which also happened on Facebook, Twitter, and other social media sites—make it clear that some of the richest, most technically sophisticated companies in the world are losing against people pushing content rife with untruth.”

YouTube apologized for the prominence of these misleading videos, which claimed that survivors featured in news reports were “crisis actors” appearing to grieve for political gain. YouTube removed these videos and said the people who posted them outsmarted the platform’s safeguards by using portions of real news reports about the Parkland, Fla., shooting as the basis for their conspiracy videos and memes that repurpose authentic content.

YouTube made a statement that its algorithm looks at a wide variety of factors when deciding a video’s placement and promotion. The statement said, “While we sometimes make mistakes with what appears in the Trending Tab, we actively work to filter out videos that are misleading, clickbait or sensational.”

It is believed that YouTube is expanding the fields its algorithm scans, including a video’s description, to ensure that clips alleging hoaxes do not appear in the trending tab. HM recommends that humans be involved with the algorithm scans to achieve man-machine symbiosis. [to learn more about symbiosis, enter “symbiosis” into the search block of the Healthymemory blog.] The company has pledged on several occasions to hire thousands more humans to monitor trending videos for deception. It is not known whether this has been done or if humans are being used in a symbiotic manner.

Google also seems to have fallen victim to falsehoods, as it did after previous mass shootings, via its auto-complete feature. When users type the name of a prominent Parkland student, David Hogg, the word “actor” often appears in the field, a feature that drives traffic to a subject.

 

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

How Good is Face Recognition Software?

February 19, 2018

From what we see on police shows on TV it is truly amazing. But how good is it? This question is addressed in an article titled “Face-recognition software is perfect—if you’re a white man” by Timothy Revell in the This Week section of the 17 Feb 2018 issue of the New Scientist.

Three commercially available face-recognition systems, created by Microsoft, IBM, and the Chinese company Megvii, were tested by Joy Buolamwini of the Massachusetts Institute of Technology. The systems correctly identified the gender of white men 99% of the time. Identifying gender does not seem to be particularly useful in itself, but it is a convenient benchmark. However, the error rate rose for people with darker skin, reaching nearly 35% for darker-skinned women. So roughly one in three of these women had her gender identified incorrectly. These results will be presented at the Conference on Fairness, Accountability, and Transparency in New York later this month.

Presumably face-recognition software is already being used in many different situations. HM has been led to believe that police use it to identify suspects in crowds and that consumer software uses it to automatically tag photos. Unfortunately, inaccuracies can have consequences, such as systematically ingraining biases into police stops and searches.

Artificial intelligence systems are dependent on the data on which they are trained. According to one study, a widely used data set is around 75% male and more than 80% white.

Organizations using face-recognition software need to test its accuracy for correctly identifying individuals for the subject populations of interest, and the results of these tests need to be published. Before selling face-recognition software, organizations need to describe the population on which it was developed and tested, and its accuracy for correctly identifying individuals. The performance of the software tested in this article is highly questionable. It is hard to envision for what applications it might be useful.
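
Here is a minimal sketch, in Python, of the kind of audit just suggested: compute error rates separately for each demographic group rather than reporting a single overall number. The group labels and records are invented for illustration.

from collections import defaultdict

# Each record: (demographic group, true gender, gender predicted by the software)
results = [
    ("lighter-skinned male", "M", "M"),
    ("lighter-skinned male", "M", "M"),
    ("darker-skinned female", "F", "M"),
    ("darker-skinned female", "F", "F"),
    ("darker-skinned female", "F", "M"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, predicted in results:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group:22s} error rate: {rate:.0%} (n={totals[group]})")

Publishing this kind of per-group breakdown, computed on a realistically large and representative test set, is exactly what the preceding paragraph is asking vendors to do.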

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Smartphones and Teen Suicides

December 27, 2017

This post is based on an article written by Jean Twenge titled “As smartphones spread among teens, so did suicide,” in the Health Section of the 21 November 2017 issue of the Washington Post. The article summarizes the research she and her colleagues published in Clinical Psychological Science. The research found that the generation of teens called “iGen”, those born after 1995, is much more likely to experience mental-health issues than their millennial predecessors. Increases in depression, suicide attempts and suicide appeared among teens from every background: more privileged and less privileged, across all races and ethnicities, and in every region of the country.

According to the Pew Research Center, smartphone ownership crossed the 50% threshold in late 2012, right when teen depression and suicide began to increase. By 2015, 73% of teens had access to a smartphone. The research found that teens who spent five or more hours a day online were 71% more likely than those who spent only one hour a day to have at least one suicide risk factor (depression, thinking about suicide, making a suicide plan, or attempting suicide). Suicide risk factors rose significantly after two or more hours a day of time online.

Two studies followed people over time. Both found that spending more time on social media led to unhappiness, while unhappiness did not lead to more social media use. An experiment randomly assigned participants to give up Facebook for a week, versus continuing their usual use. The group that avoided Facebook reported feeling less depressed at the end of the week.

Another finding is that iGen teens spend much less time interacting with their friends in person. Interacting with people face to face is one of the deepest sources of human happiness. Teens who spent more time than average online and less time than average with friends in person were the most likely to be depressed. Since 2012, teens have spent less time on activities known to benefit mental health (in-person social interaction) and more time on activities that may harm it (time online).

Teens are also sleeping less, and teens who spend more time on their phones are more likely than others to not get enough sleep. Insufficient sleep is a major risk factor for depression. So if smartphones are causing less sleep, that alone could explain why depression and suicide increased so suddenly.

Clearly, restricting screen time to two hours a day or less is needed.
Twenge is a professor of psychology at San Diego State University.

Addicted to Tech? A Brain Chemical Imbalance May Be to Blame

December 26, 2017

The title of this post is identical to the title of a News & Technology piece by Timothy Revell in the 9 December 2017 issue of the New Scientist.

Hyung Suk Seo at Korea University and his team scanned the brains of 19 teenagers who reported in surveys that their tech usage was detrimental to their lives, and compared the results with those of 19 others of similar age who said they had no problems with tech. The initial scans showed that those who said they were addicted had more of a neurotransmitter called GABA, which slows signals and is thought to help regulate anxiety, but less of the chemical glutamate, which causes neurons to become electrically excited.

Of the 19 tech addicts they examined, 12 undertook a course of cognitive behavioral therapy (CBT) designed to reduce the amount of time spent using technology. These participants then underwent a second scan. The relative amounts of GABA and glutamate converged to more normal levels after CBT. The amount of time spent using technology also moved to more normal levels.

The direction of cause and effect is unclear here (whether the abnormal levels caused the abnormal use, or the abnormal use caused the abnormal levels), but that is not really important. What is important is that CBT can bring technology use to normal levels.

Although the term technology addiction is predominately used, and technology companies use insights from psychology to increase usage, the task force for the Diagnostic and Statistical Manual of Mental Disorders, which is used in the United States, has yet to include internet addiction as a diagnosis, for fear of mislabeling many of the 3 billion people around the world who are attached to their smartphones.

What is important is how the individual feels about their own technology use. Unless they feel that they are addicted, it is doubtful that they will free themselves of their perceived addiction. However, we all would do well to objectively consider if we are suffering adverse effects from technology use and respond accordingly.

Mindshift Resources

October 5, 2017

This post provides information on resources for mindshifts. Although this post focuses on massive open online courses (MOOCs), mindshifts can be accomplished from many sources. However, MOOCs are a new high-tech means of learning. Some MOOCs are free, even from first-rate universities, and some require payment. Usually, payment is required to get college credit. However, autodidacts do not necessarily want college credits. There is a website, nopaymba.com, by Laura Pickard, who writes, “I started the No-Pay MBA website as a way of documenting my studies, keeping myself accountable, and providing a resource for other aspiring business students. The resources on this site are for anyone seeking a world-class business education using the free and low-cost tools of the internet.  I hope you find them useful!” She explains how she got a business education equivalent to an MBA for less than 1/100th the cost of a traditional MBA.

class-central.com lists free online courses from the best universities. It also lists the 50 best MOOCs of all time. This is a good resource for learning about MOOCs.

Here are some notes on additional resources provided in Mindshift.

Coursera: This is the largest MOOC provider. It has courses on many different subjects and in many different languages. It also offers an MBA and data science master’s degree and offers “specializations”—clusters of MOOCs.

edX: Has a large number of courses on many different subjects and in many different languages. Offers “MicroMasters”—clusters of MOOCs.

FutureLearn: Has a large number of courses on many different subjects and in many languages, particularly, but not exclusively, from British universities. Offers “Programs”—clusters of MOOCs.

Khan Academy: Offers tutorial videos on a large number of subjects, from history to statistics. The site is multilingual and uses gamification.

Kadenze: Special focus on art and creative technology.

Canvas Network: Designed to give professors an opportunity to give their online classes a wider audience. Has a large number of courses on many different subjects.

Open Education by Blackboard: Similar to Canvas Network.

World Science U: A platform designed to use great visuals to communicate ideas in science.

Instructables: Provides user-created and -uploaded do-it-yourself projects which are rated by other users.

You can find the author’s MOOC, “Learning How to Learn” on coursera.org.

 

 

How To Take Back Your Life from Disruptive Technology

September 27, 2017

There have been twelve posts on “The Distracted Mind: Ancient Brains in a High Tech World” that documented the adverse effects of technology. There was an additional post demonstrating that just the presence of a smartphone can be disruptive. The immediately preceding post documented the costs of social media per se. First of all, they have disruptive effects on lives and minds. And these disruptive effects degrade your mind, which, as the blog posts documented, affects many aspects of your life, including education. Hence the title of this blog post.

Unfortunately, social media make social demands. So removing yourself from social media is something that needs to be explained to your friends, whom you should let know that you’ll still be willing to communicate with via email. Review with them the reasons for your decision. Cite the relevant research presented in this blog and elsewhere. Point out that Facebook not only has an adverse impact on cognition, it was also a tool used by Russia to influence our elections. Facebook accepted rubles to influence the US Presidential election. The magnitude of this intervention has yet to be determined. For patriotic reasons alone, Facebook should be ditched. You are also taking these steps to reclaim control of your attentional resources and to build a healthy memory.

Carefully consider what steps you need to take. Heavy users become nervous when they are not answering alerts. One can gradually increase the intervals between checking alerts. Going cold turkey and simply turning off alerts might be more painful initially, but it would free you from the compulsion to answer alerts sooner than a gradual approach would. It would also make your behavior clearer to your friends earlier rather than later. Similarly, you can answer text messages and phone calls only at designated times. Voice mail assures you won’t miss anything.

If asked by a prospective employer or university as to why you are not on Facebook, explain that you want to make the most of your cognitive potential and that Facebook detracts from this objective. Cite the research. You can develop a web presence by having your own website that you would control. Here you could attach supporting materials as you deem fit.

Doing this should make you stand out over any other candidates who might be competing with you (unless they are also following the advice of this blog). If your reviewer is not impressed, you should conclude that they are not worthy of you and that affiliating with them would be a big mistake. Hold to this conclusion regardless of the reputation of the school or employer.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The Happiness Effect

September 26, 2017

The subtitle to “The Happiness Effect” is “How Social Media is Driving a Generation to Appear Perfect at Any Cost,” a book by Donna Freitas. The book reports extensive research using surveys and interviews on the use of social media by college students. The subtitle could be expanded to “How Social Media is Driving a Generation to Appear Perfect at Any Cost, Resulting in Unhappiness and Anxiety.” The book focuses on the emotional and social costs and ends with suggestions regarding how to ameliorate the damage.

Although this is an excellent book, HM had difficulty finishing it. He kept thinking how stupid, moronic, and damaging social media are. How could new technology be adopted and put to such a counterproductive use? The reason that HM’s reaction is much more severe than that of Donna Freitas is that he is also considering social media in terms of how they exacerbate the problem of the Distracted Mind, which has been the topic of the fifteen healthy memory blog posts immediately preceding this one. So these activities that produce unhappiness and anxiety also assault the mind with more distractions.

They do so in two ways. First of all they subtract time from effective thinking. Social media also foster interruptions that further disrupt effective thinking. So consider the possibility that social media foster unhappy airheads.

Facebook pages are cultivated to impress future employers. Organizations and activities cultivate Facebook pages to provide good public relations for their organizations and activities. But remember the healthy memory blog post, “The Truth About Your Facebook Friends” based on Seth Stephens-Davidowitz’s groundbreaking book, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” You should realize that anyone who believes what they read on Facebook is a fool.

The following post will suggest some activities for you to consider should you be convinced of what you have read in the healthy memory blog and related sources on this topic. These suggestions go beyond what was presented in the blog post “Modifying Behavior.”

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

 

The Truth About the Internet

September 3, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” Perhaps the most common statement about the internet with which everyone agrees is that the internet is driving Americans apart and that it plays a large part in the polarization of the nation. The only problem with this generally agreed-upon view is that it is wrong.

The evidence against this piece of conventional wisdom comes from a 2011 study by two economists, Matt Gentzkow and Jesse Shapiro. They collected data on the browsing behavior of a large sample of Americans. Their dataset included the self-reported ideology, whether they were liberal or conservative, of the research participants.

Gentzkow and Shapiro asked themselves the following question: Suppose you randomly sampled two Americans who happen to both be visiting the same news website. What is the probability that one of them will be liberal and the other conservative? In other words, how frequently do liberals and conservatives “meet” on news sites? Suppose liberals and conservatives on the internet never got their online news from the same place—that is, liberals exclusively visited liberal websites, and conservatives exclusively visited conservative ones. If this were the case, the chances that two Americans on a given news site have opposing political views would be 0%. The internet would be perfectly segregated. Liberals and conservatives would never mix.

However, suppose, in contrast, that liberals and conservatives did not differ at all in how they got their news. In other words, a liberal and a conservative were equally likely to visit any particular news site. If this were the case, the chances that two Americans on a given news website have opposing political views would be about 50%. Then the internet would be perfectly desegregated. Liberals and conservatives would perfectly mix.

According to Gentzkow and Shapiro in the United States, the chances that two people visiting the same news site have different political views is about 45%. So the internet is far closer to perfect desegregation than perfect segregation. Liberals and conservatives are “meeting” each other on the web all the time.
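
The quantity being described can be illustrated with a small calculation (an illustration only, not Gentzkow and Shapiro’s actual computation). If a fraction p of a site’s visitors are liberal and the rest conservative, the chance that two randomly chosen visitors hold opposing views is 2 x p x (1 - p): zero for a perfectly segregated site and 50% for a perfectly mixed one. The national 45% figure comes from aggregating something like this across sites, weighted by how heavily each is visited.

def chance_of_opposing_views(liberal_share: float) -> float:
    # Probability that one random visitor is liberal and the other is conservative.
    return 2 * liberal_share * (1 - liberal_share)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"liberal share {p:.2f} -> chance of a liberal/conservative pairing {chance_of_opposing_views(p):.2f}")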

Using data from the General Social Survey, Gentzkow and Shapiro also measured how often people encounter opposing political views offline—among coworkers, neighbors, family members, and friends—and found that all of these numbers were lower than the chances that two people on the same news website have different politics.

This lack of segregation on the internet can be put further in perspective by comparing it to segregation in other parts of our lives. Here are the probabilities that someone you meet has opposing political views:

On a News website 45.2%
Coworker 41.6%
Offline Neighbor 40.3%
Family Member 37%
Friend 34.7%

So in other words, you are more likely to come across someone with opposing views online than offline.

As to why the internet isn’t more segregated, there are two factors that limit political segregation on the internet. The first reason is that the internet news industry is dominated by a few massive sites. In 2009, four sites—Yahoo News, AOL News, msnbc.com, and cnn.com—collected more than half of the news views. Yahoo News is the most popular news site among Americans, with close to 90 million unique monthly visitors. This is 600 times the audience of the white supremacist site Stormfront. Mass media sites like these aim to appeal to a broad, politically diverse audience.

The second reason the internet isn’t all that segregated is that many people with strong political opinions visit sites of the opposite viewpoint. The reason here is similar to the reason for the hostility to the first address by President Obama on the mass shooting in San Bernardino: people like to defend their views, and, perhaps, to convince themselves that the opposition are idiots. Seth notes that someone who visits thinkprogress.org and maven.org—two extremely liberal sites—is more likely than the average internet user to visit foxnews.com, a right-leaning site. Someone who visits rushlimbaugh.com or glennbeck.com—two extremely conservative sites—is more likely than the average internet user to visit nytimes.com, a more liberal site.

The Gentzkow and Shapiro study was based on data from 2004-2009, which was relatively early in the history of the internet. Might the internet have grown more compartmentalized since then? Have social media, particularly Facebook, altered their conclusion? If our friends tend to share our political views, the rise of social media should mean a rise of echo chambers, shouldn’t it?

It’s complicated. Although it is true that people’s friends on Facebook are more likely than not to share their political views, a team of data scientists—Eytan Bakshy, Solomon Messing, and Lada Adamic—found that a surprising amount of the information people get on Facebook comes from people with opposing views. So how can this be? Don’t our friends tend to share our political views? They do. But there is a crucial reason that Facebook may lead to a more diverse political discussion than offline socializing. On average, people have substantially more friends on Facebook than they do offline. These weak ties facilitated by Facebook are more likely to be with people of opposing political views.

So Facebook exposes its users to weak social connections. These are people with whom you might never have offline social interactions, but whom you do friend on Facebook. And you do see their links to articles with views you might never have otherwise considered.

In sum, the internet actually does not segregate different ideas, but rather gives diverse ideas a larger distribution.

 

Effectively Countering Islamophobia

September 2, 2017

The immediately preceding post, on Obama’s Prime-time Address After the Mass Shooting in San Bernardino, indicated that President Obama’s appeal to our better nature failed. Worse yet, it was counterproductive, with Islamophobia increasing, not decreasing. As promised, here is a more effective presentation President Obama made two months after that original address. This time Obama spent little time insisting on the value of tolerance. Instead he focused overwhelmingly on provoking people’s curiosity and changing their perceptions of Muslim Americans. He told us that many of the slaves brought from Africa were Muslim; Thomas Jefferson and John Adams had their own copies of the Koran; the first mosque on U.S. soil was in North Dakota; a Muslim American designed skyscrapers in Chicago. Obama again spoke of Muslim athletes and armed service members but also talked of Muslim police officers and firefighters, teachers, and doctors.

So what was wrong with Obama’s original address? He was telling many in his audience that their emotional responses were wrong. Kahneman’s two-system view of cognition can be helpful here. System 1 is named Intuition. System 1 is very fast, employs parallel processing, and appears to be automatic and effortless. Its processes are so fast that they are executed, for the most part, outside conscious awareness. Emotions and feelings are also part of System 1. Islamophobic responses are essentially System 1 responses. System 1 learning is associative and slow: for something to become a System 1 process requires much repetition and practice. Activities such as walking, driving, and conversation are primarily System 1 processes. They occur rapidly and with little apparent effort. We would not have survived if we could not do these types of processing rapidly. But this speed of processing is purchased at a cost: the possibility of errors, biases, and illusions. System 2 is named Reasoning. It is controlled processing that is slow, serial, and effortful. It is also flexible. This is what we commonly think of as conscious thought. One of the roles of System 2 is to monitor System 1 for processing errors, but System 2 is slow and System 1 is fast, so errors slip through.
In addition to engaging System 1 processes, many in the audience needed to justify their feelings. Consequently, they made Google searches that hardened their views.

However, in his second address he bypassed System 1 processes by providing new information for processing by System 2, which is what we commonly regard as thinking. So the audience’s views were not directly challenged in this nonthreatening presentation. New information was presented that might be further processed, with a resulting decrease in Islamophobia.

Changing hardened beliefs is very difficult. Directly challenging these beliefs is counterproductive. So the approach needs to employ some sort of end run around these beliefs. That is what Obama did by providing nonthreatening information in his second address.

The Southern Poverty Law Center has developed some effective approaches in which people of different beliefs work together to solve a problem. This approach is difficult and time consuming but it has worked in a variety of circumstances. This approach is not likely to be universally applicable as it does require people of different beliefs to interact.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The Response to Obama’s Prime-time Address After the Mass Shooting in San Bernardino

September 1, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” On December 2, 2015, in San Bernardino, California, Rizwan Farook and Tashfeen Malik entered a meeting of Farook’s coworkers armed with semiautomatic pistols and semiautomatic rifles and murdered fourteen people. Literally minutes after the media first reported one of the shooters’ Muslim-sounding names, a disturbing number of Californians had decided what they wanted to do with Muslims: kill them.

The top Google search in California at the time was “kill Muslims,” and Americans searched for that phrase with about the same frequency that they searched for “martini recipe,” “migraine symptoms,” and “Cowboys roster.” In the days following the attack, for every American concerned with “Islamophobia,” another was searching for “kill Muslims.” Hate searches had been approximately 20% of all searches about Muslims before the attack; in the hours that followed it, more than half of all search volume about Muslims became hateful.

These search data can inform us how difficult it can be to calm the rage. Four days after the shooting, then-president Obama gave a prime-time address to the country. He wanted to reassure Americans that the government could both stop terrorism and, perhaps more important, quiet the dangerous Islamophobia.

Obama spoke of the importance of inclusion and tolerance in powerful and moving rhetoric. The Los Angeles Times praised Obama for “[warning] against allowing fear to cloud our judgment.” The New York Times called the speech both “tough” and “calming.” The website Think Progress praised it as “a necessary tool of good governance, geared towards saving the lives of Muslim Americans.” Obama’s speech was judged a major success.

But was it? Google search data did not support such a conclusion. Seth examined the data together with Evan Soltas. In the speech the president said, “It is the responsibility of all Americans—of every faith—to reject discrimination.” But searches calling Muslims “terrorists,” “bad,” “violent,” and “evil” doubled during and shortly after the speech. President Obama also said, “It is our responsibility to reject religious tests on who we admit into this country.” But negative searches about Syrian refugees, a mostly Muslim group then desperately looking for a safe haven, rose 60%, while searches asking how to help Syrian refugees dropped 35%. Obama asked Americans to “not forget that freedom is more powerful than fear.” Still, searches for “kill Muslims” tripled during the speech. Just about every negative search Seth and Soltas could think to test regarding Muslims shot up during and after Obama’s speech, and just about every positive search they could think to test declined.

So instead of calming the angry mob, as people thought he was doing, the internet data tell us that Obama actually inflamed it. Seth writes, “Things that we think are working can have the exact opposite effect from the one we expect. Sometimes we need internet data to correct our instinct to pat ourselves on the back.”

So what can be done to quell this particular form of hatred so virulent in America? We’ll try to address this in the next post.

Implicit Versus Explicit Prejudice

August 30, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” Any theory of racism has to explain the following puzzle in America: On the one hand, the overwhelming majority of black Americans think they suffer from prejudice—and they have ample evidence of discrimination in police stops, job interviews, and jury decisions. On the other hand, very few white Americans will admit to being racist. The dominant explanation has been that this is due, in large part, to widespread implicit prejudice. According to this theory, white Americans may mean well, but they have a subconscious bias which influences their treatment of black Americans. There is an implicit-association test for such a bias. These tests have consistently shown that it takes most people milliseconds more to associate black faces with positive words such as “good” than with negative words such as “awful.” For white faces, the pattern is reversed. The small extra time it takes is interpreted as evidence of someone’s implicit prejudice—a prejudice the person may not even be aware of.
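
To make the measurement concrete, here is a hedged sketch in Python of the reaction-time comparison just described. It is not the scoring algorithm of the actual implicit-association test, which standardizes the difference in a more elaborate way, and the millisecond values are invented.

from statistics import mean

# Times (ms) to sort stimuli when black faces and positive words share a
# response key (block A) versus when black faces and negative words share
# a key (block B). All values are made up.
block_a_ms = [612, 655, 640, 701, 689, 630]
block_b_ms = [560, 548, 575, 590, 566, 571]

difference = mean(block_a_ms) - mean(block_b_ms)
print(f"extra time to pair black faces with positive words: {difference:.0f} ms")

A positive difference of tens of milliseconds is the kind of “small extra time” the test interprets as implicit bias.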

There is an alternative explanation for the discrimination that African-Americans feel and whites deny: hidden explicit racism. There might be widespread conscious racism to which people do not want to confess—especially in a survey. This is what the search data seem to be saying. There is nothing implicit about searching for “n_____ jokes.” It’s hard to imagine that Americans are Googling the word “n_____“ with the same frequency as “migraine” and “economist” without explicit racism having a major impact on African-Americans. There was no convincing measure of this bias prior to the Google data. Seth uses this measure to see what it explains.

It explains, as was discussed in a previous post, why Obama’s vote totals in 2008 and 2012 were depressed in many regions. It also correlates with the black-white wage gap, as a team of economists recently reported. In other words, the areas Seth found that make the most racist searches underpay black people. When the polling guru Nate Silver looked for the geographic variable that correlated most strongly with support for Trump in the 2016 Republican primary, he found it in the map of racism Seth had developed. That variable was searches for “n_____.”

Scholars have recently put together a state-by-state measure of implicit prejudice against black people, which enabled Seth to compare the effects of explicit racism, as measured by Google searches, and implicit bias. Using regression analysis, Seth found that, to predict where Obama underperformed, an area’s racist Google searches explained a lot. An area’s performance on implicit-association tests added little.

Seth has found that subconscious prejudice may have a more fundamental impact on other groups. He was able to use Google searches to find evidence of implicit prejudice against another segment of the population: young girls.

So, who would be harboring bias against girls? Their parents. Of all Google searches starting “Is my 2-year-old,” the most common next word is “gifted.” But this question is not asked equally about young boys and young girls. Parents are two and a half times more likely to ask “Is my son gifted?” than “Is my daughter gifted?” Parents’ overriding concern regarding their daughters is anything related to appearance.

https://implicit.harvard.edu/implicit/

The URL above will take you to a number of options for taking and learning about the implicit association test.

The Truth About Your Facebook Friends

August 29, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” Social media are another source of big data. Seth writes, “The fact is, many Big Data sources, such as Facebook, are often the opposite of digital truth serum.”

Just as with surveys, in social media there is no incentive to tell the truth. Much more so than in surveys, there is a large incentive to make yourself look good. After all, your online presence is not anonymous. You are courting an audience and telling your friends, family members, colleagues, acquaintances, and strangers who you are.

To see how biased data pulled from social media can be, consider the relative popularity of the “Atlantic,” a highbrow monthly magazine, versus the “National Enquirer,” a gossipy, often-sensational magazine. Both publications have similar average circulations, selling a few hundred thousand copies. (The “National Enquirer” is a weekly, so it actually sells more total copies.) There are also a comparable number of Google searches for each magazine.

However, on Facebook, roughly 1.5 million people either like the “Atlantic” or discuss articles from the “Atlantic” on their profiles. Only about 50,000 like the Enquirer or discuss its contents.

Here is the “Atlantic” versus “National Enquirer” popularity as measured by different sources:
Circulation: roughly 1 “Atlantic” for every 1 “National Enquirer”
Google searches: 1 “Atlantic” for every 1 “National Enquirer”
Facebook likes: 27 “Atlantic” for every 1 “National Enquirer”

For assessing magazine popularity, circulation data is ground truth. And Facebook data is overwhelmingly biased against the trashy tabloid, making it the worst data for determining what people really like.

Here are some excerpts from the book:
“Facebook is digital brag-to-my-friends-about-how-good-my-life-is serum. In Facebook world, the average adult seems to be happily married, vacationing in the Caribbean, and perusing the “Atlantic.” In the real world, a lot of people are angry, on supermarket checkout lines, peeking at the “National Enquirer,” ignoring phone calls from their spouse, whom they haven’t slept with in years. In Facebook world, family life seems perfect. In the real world, family life is messy. It can be so messy that a small number of people even regret having children. In Facebook world, it seems every young adult is at a cool party Saturday night. In the real world, most are at home alone, binge-watching shows on Netflix. In Facebook world, a girlfriend posts twenty-six happy pictures from her getaway with her boyfriend. In the real world, immediately after posting this, she Googles “my boyfriend won’t have sex with me.”

 

In summary:

Digital truth: searches, views, clicks, swipes.
Digital lies: social media posts, social media likes, dating profiles.

Some Common Ideas Debunked

August 28, 2017

This post is based on the groundbreaking book by Seth Stephens-Davidowitz “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.”

A common notion is that a major cause of racism is economic insecurity and vulnerability. So it is reasonable to expect that when people lose their jobs, racism increases. But neither racist searches nor membership in Stormfront rises when unemployment does.

It is reasonable to think that anxiety is highest in overeducated big cities. A famous stereotype is the urban neurotic. However, Google searches reflecting anxiety—such as “anxiety symptoms” or “anxiety help” tend to be higher in places with lower levels of education, lower median incomes, and where a larger portion of the population lives in rural areas. There are higher search rates for anxiety in rural upstate New York than in New York City.

It is reasonable to think that a terrorist attack that kills dozens or hundreds of people would automatically be followed by massive, widespread anxiety. After all, terrorism, by definition, is supposed to instill a sense of terror. Seth looked for Google searches reflecting anxiety. He tested how much these searches rose in the affected country in the days, weeks, and months following every major European or American terrorist attack since 2004. So, on average, how much did anxiety-related searches rise? They didn’t. At all.
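
The kind of before-and-after comparison being described can be illustrated with a simplified sketch, using made-up daily search-volume numbers rather than real Google data.

# Daily index of anxiety-related searches around a hypothetical attack (invented numbers).
anxiety_searches = {
    "-3": 48, "-2": 51, "-1": 51,   # three days before
    "+1": 49, "+2": 52, "+3": 49,   # three days after
}

before = [v for day, v in anxiety_searches.items() if day.startswith("-")]
after = [v for day, v in anxiety_searches.items() if day.startswith("+")]

change = (sum(after) / len(after)) / (sum(before) / len(before)) - 1
print(f"change in anxiety-related search volume: {change:+.1%}")   # +0.0%, i.e. no rise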

Humor has long been thought of as a way to cope with the frustrations, the pain, and the inevitable disappointments of life. Charlie Chaplin said, “Laughter is the tonic, the relief, the surcease from pain.” Yet searches for jokes are lowest on Mondays, the day when people report they are most unhappy. They are lowest on cloudy and rainy days. And they plummet after a major tragedy, such as when two bombs killed three and injured hundreds during the 2013 Boston Marathon. Actually, people are more likely to look for jokes when things are going well in life than when they aren’t.

Seth argues that the bigness part of big data is overrated. He writes that the smartest Big Data companies are often cutting down their data. Major decisions at Google are based on only a tiny sample of all their data. Seth continues, “You don’t always need a ton of data to find important insights. You need the right data. A major reason that Google searches are so valuable is not that there are so many of them; it is that people are so honest in them.”

Everybody Lies

August 27, 2017

“Everybody Lies” is the title of a groundbreaking book by Seth Stephens-Davidowitz on how to effectively exploit big data. The subtitle of this book is “Big Data, New Data, and What the Internet Reveals About Who We Really Are.” The title is a tad overblown, as we always need to have doubts about data and data analysis. However, it is fair to say that the internet currently does the best job at revealing who we really are.

The problem with surveys and interviews is that there is a bias to make ourselves look better than we really are. Indeed, we should be aware that we fool ourselves and that we can think we are responding honestly when in truth we are protecting our egos.

Stephens-Davidowitz uses Google Trends as his principal research tool and has found that people reveal more about their true selves in these searches than they do in interviews and surveys. Although the polls erred in predicting that Hillary Clinton would win the presidency, Google searches indicated that Trump would prevail.

Going back to Obama’s first election night, when most of the commentary focused on praise of Obama and acknowledgment of the historic nature of his election, roughly one in every hundred Google searches that included “Obama” also included “kkk” or “n_____.” On election night, searches for and sign-ups for Stormfront, a white nationalist site with surprisingly high popularity in the United States, were more than ten times higher than normal. In some states there were more searches for “n_____ president” than “first black president.” So there was a darkness and hatred that was hidden from the traditional sources but was quite apparent in the searches that people made.

These Google searches also revealed that much of what we thought about the location of racism was wrong. Surveys and conventional wisdom placed modern racism predominantly in the South and mostly among Republicans. However, the places with the highest racist search rates included upstate New York, western Pennsylvania, eastern Ohio, industrial Michigan, and rural Illinois, along with West Virginia, southern Louisiana, and Mississippi. The Google search data suggested that the true divide was not South versus North, but East versus West. Moreover, racism was not limited to Republicans. Racist searches were no higher in places with a high percentage of Republicans than in places with a high percentage of Democrats. These Google searches helped draw a new map of racism in the United States. Seth notes that Republicans in the South may be more likely to admit racism, but plenty of Democrats in the North have similar attitudes. This map proved to be quite significant in explaining the political success of Trump.

In 2012, Seth used this map of racism to reevaluate exactly the role that Obama’s race played. In parts of the country with a high number of racist searches, Obama did substantially worse than John Kerry, the white presidential candidate, had four years earlier. This relationship was not explained by any other factor about these areas, including educational levels, age, church attendance, or gun ownership. Racist searches did not predict poor performance for any Democratic candidate other than Obama. Moreover, these results implied a large effect: Obama lost roughly four percentage points nationwide just from explicit racism. Seth notes that favorable conditions existed for Obama’s elections. The Google Trends data indicated that there were enough racists to help win a primary or tip a general election in a year not so favorable for Democrats.

During the general election there were clues in Google Trends that the electorate might be a favorable one for Trump. Black Americans told pollsters they would turn out in large numbers to oppose Trump. However, Google searches for information on voting in heavily black areas were way down. On election day, Clinton was hurt by low black turnout. There were more searches for “Trump Clinton” than for “Clinton Trump” in key states in the Midwest that Clinton was expected to win. Previous research has indicated that the first name in search pairs like this is likely the favored candidate.
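
Readers who want to eyeball such comparisons themselves can do so with the unofficial pytrends package (pip install pytrends). The sketch below is an assumption-laden illustration, not the tool Seth used: Google offers no official search-data API, this interface may change or be rate-limited, and the Michigan geo code is just an example.

from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=300)
pytrends.build_payload(["Trump Clinton", "Clinton Trump"],
                       timeframe="2016-10-01 2016-11-08",
                       geo="US-MI")   # Michigan, one of the key Midwest states
interest = pytrends.interest_over_time()

totals = interest[["Trump Clinton", "Clinton Trump"]].sum()
print(totals)
print("More searches led with:", totals.idxmax())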

The final two paragraphs in this post are taken directly from Seth’s book.

“But the major clue, I would argue, that Trump might prove a successful candidate—in the primaries, to begin with—was all that secret racism that my Obama study had uncovered. The Google searches revealed a darkness and hatred among a meaningful number of Americans that pundits, for many years, had missed. Search data revealed that we lived in a very different society from the one academics and journalists, relying on polls, thought that we lived in. It revealed a nasty, scary, and widespread rage that was waiting for a candidate to give voice to it.

“People frequently lie—to themselves and to others. In 2008, Americans told surveys that they no longer cared about race. Eight years later, they elected as president Donald J. Trump, a man who retweeted a false claim that black people were responsible for the majority of murders of white Americans, defended his supporters for roughing up a Black Lives Matter protestor at one of his rallies, and hesitated in repudiating support from a former leader of the Ku Klux Klan (HM feels compelled to note that Trump has not renounced the latest endorsement by the leader of the Ku Klux Klan). The same hidden racism that hurt Barack Obama helped Donald Trump.”

 

An AI Armageddon

July 27, 2017

This post is inspired by an article by Cleve R. Wootson Jr. in the July 24, 2017 Washington Post titled, “What is technology leader Musk’s great fear? An AI Armageddon”.

Before addressing an AI Armageddon, Musk speaks of his company Neuralink, which would devise ways to connect the human brain to computers. He said that an internet-connected brain plug would allow someone to learn something as fast as it takes to download a book. Every time HM downloads a book to his iPad he wonders, if only… However, HM knows some psychology and neuroscience, topics in which Musk and Kurzweil have little understanding. Kurzweil is taking steps to prolong his life until his brain can be uploaded to silicon. What these brilliant men do not understand is that silicon and protoplasm require different memory systems. They are fundamentally incompatible. Now there is promising research in which recordings are made from rats’ hippocampi while they are learning to perform specific tasks. The researchers will then try to play these recordings into the hippocampi of different rats and see how well those rats can perform the tasks performed by the previous rats. This type of research, which stays in the biological domain, can provide the basis for developing brain aids for people suffering from dementia, or who have had brain injuries. The key here is that they are staying in the biological domain.

This biological-silicon interface needs to be addressed. And it would likely be determined that this transfer of information would not be instantaneous; it would be quite time consuming. Even if this were solved, both the brain and the human are quite complicated, and there needs to be time for consolidation and other processes. Even then there is the brain-mind distinction. Readers of this blog should know that the mind is not contained within the brain, but rather the brain is contained within the mind.

Now that that’s taken care of, let’s move on to Armageddon. Many wise men have warned us of this danger. Previous healthy memory posts, More on Revising Beliefs being one of them, reviewed the movie “Colossus: The Forbin Project.” The movie takes place during the height of the Cold War, when there was a realistic fear that a nuclear war would begin that would destroy all life on earth. Consequently, the United States created the Forbin Project to build Colossus. The purpose of Colossus was to prevent a nuclear war before it began or to conduct a war once it had begun. Shortly after they turn on Colossus, they find it acting strangely. They discover that it is interacting with the Soviet version of Colossus; the Soviets had found a similar need to develop such a system. The two systems communicate with each other and come to the conclusion that these humans are not capable of safely conducting their own affairs. In the movie the Soviets capitulate to the computers and the Americans try to resist but ultimately fail.

So here is an example of beneficent AI; one that prevents humanity from destroying itself. But this is a singular case of beneficent AI. The tendency is to fear AI and predict either the demise of humanity or a horrendous existence. But consider that perhaps this fear is based on our projecting our nature on to silicon. Consider that our nature may be a function of biology, and absent biology, these fears don’t exist.

One benefit of technology is that the risks of nuclear warfare seem to have been reduced. Modern warfare is conducted by technology. So the Russians do not threaten us with weapons; rather they used technology and tried to influence the election by hacking into our systems. This much is known by the intelligence community. The Russians conducted warfare on the United States and tried to have their candidate, Donald Trump, elected. Whether they succeeded in electing Donald Trump cannot be known, in spite of claims that he still would have been elected. But regardless of whether their hacking campaign produced the result, they definitely have the candidate they wanted.

Remember the pictures of Trump in the Oval Office with his Russian buddies (Only Russians were allowed in the Oval Office). He’s grinning from ear to ear boasting about how he fired his FBI Director and providing them with classified intelligence that compromised an ally. Then he tries to establish a secure means of communication with the Russians using their own systems. He complains about the Russian investigation, especially those that involve his personal finances. Why is he fearful? If he is innocent, he will be cleared, and the best thing would be to facilitate the investigation rather than try to obstruct and invalidate it. Time will tell.

How could a country like the United States elect an uncouth, mercurial character who is a brazen liar and who could not pass an elementary exam on civics? Perhaps we are ready for an intervention of benign AI.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

 

 

Seven Ways to Overhaul Your Smartphone Use

July 21, 2017

This post is taken directly from the March 2017 issue of “Monitor on Psychology.”
If you want to minimize the pitfalls of smartphone use, research suggests seven good places to start.

Make Choices. The more we rely on smartphones, the harder it is to disconnect. Consider which functions are optional. Could you keep lists in a paper notebook? Use a standalone alarm clock? Make conscious choices about what you really need your phone for, and what you don’t.

Retrain yourself. Larry Rosen, Ph.D., advises users not to check the phone first thing in the morning. During the day, gradually check in less often—maybe every 15 minutes at first, then every 20, then 30. Over time, you’ll start to see notifications as suggestions rather than demands, he says, and you’ll feel less anxious about staying connected.

Set expectations. “In many ways, our culture demands constant connection. That sense of responsibility to be on call 24 hours a day comes with a greater psychological burden than many of us realize,” says Karla Murdock Ph.D. Try to establish expectations among family and friends so they don’t worry or feel slighted if you don’t reply to their texts or emails immediately. While it can be harder to ignore messages from your boss, it can be worthwhile to have a frank discussion about what his or her expectations are for staying connected after hours.

Silence notifications. It’s tempting to go with your phone’s default settings, but making the effort to turn off unnecessary notifications can reduce distractions and stress.

Protect sleep. Avoid using your phone late at night. If you must use it, turn down the brightness. When it’s time for bed, turn your phone off and place it in another room.

Be active. When interacting with social media sites, don’t just absorb other people’s posts. Actively posting ideas or photos, creating content, and commenting on others’ posts is associated with better subjective well-being.

And, of course, don’t text/email/call and drive. In 2014, more than 3,000 people were killed in distracted driving incidents on U.S. roads, according to the U.S. Department of Transportation. When you’re driving, turn off notifications and place your phone out of reach.

(Dis)connected

July 20, 2017

The title of this post is identical to the title of an article by Kirsten Weir in the March 2017 issue of “Monitor on Psychology.” This article reviews research showing how smartphones are affecting our health and well-being, and points the way toward taking back control.

Some of the most established evidence concerns sleep. Dr. Karla Klein Murdock, a psychology professor who heads the Technology and Health Lab at Washington and Lee University, followed 83 college students and found that those who were more attuned to their nighttime phone notifications had poorer subjective sleep quality and greater self-reported sleep problems. Although smartphones are often viewed as productivity-boosting devices, their ability to interfere with sleep can have the opposite effect on getting things done.

Dr. Russell E. Johnson and his colleagues at Michigan State University surveyed workers from a variety of professions. They found that when people used smartphones at night for work-related purposes, they reported that they slept more poorly and were less engaged at work the next day. These negative effects were greater for smartphone users than for people who used laptops or tablets right before bed.

Reading a text or email at bedtime can stir your emotions or set your mind buzzing with things you need to get done. So your mind becomes activated at a time when it’s important to settle down and have some peace.

College students at the University of Rhode Island were asked to keep sleep diaries for a week. The researchers found that 40% of the students reported waking at night to answer phone calls and 47% woke to answer text messages. Students who were more likely to use technology after they’d gone to sleep reported poorer sleep quality, which in turn predicted symptoms of anxiety and depression.

FOMO is an acronym for Fear Of Missing Out. In one study, Dr. Larry Rosen, a professor emeritus of psychology at California State University, and his colleagues took phones away from college students for an hour and tested their anxiety levels at various intervals. Light users of smartphones didn’t show any increasing anxiety as they sat idly without their phones. Moderate users began showing signs of increased anxiety after 25 minutes without their phones, but their anxiety held steady at that moderately increased level for the rest of the hour-long study. Heavy phone users showed increased anxiety after just 10 phone-free minutes, and their anxiety levels continued to climb throughout the hour.

Rosen has found that younger generations are particularly prone to feel anxious if they can’t check their text messages, social media, and other mobile technology regularly. But people of all ages appear to have a close relationship with their phones. 76% of baby boomers reported checking voicemail moderately or very often, and 73% reported checking text messages moderately or very often. Anxiety about not checking in with text messages and Facebook predicted symptoms of major depression, dysthymia, and bipolar mania.

When research participants were limited to checking email messages just three times a day, they reported less daily stress. This reduced stress was associated with positive outcomes including greater mindfulness, greater self-perceived productivity and better sleep quality.

In another study, participants were asked to keep all their smartphone notifications on during one week. In the other week, they were asked to turn notifications off and to keep their phones tucked out of sight. At the end of the study participants were given questionnaires. During the week of notifications, participants reported greater levels of inattention and hyperactivity compared with their alert-free week. These feelings of inattention and hyperactivity were directly associated with lower levels of productivity, social connectedness, and psychological well-being. Having your attention scattered by frequent interruptions has its costs.

The article also stresses the importance of personal interactions, which are inherently richer. The key to having healthy relationships with technology is moderation. We want to get the best from technology, but at the same time to make sure that it’s not controlling us.

 

Robots Will Be More Useful If They are Made to Lack Confidence

July 17, 2017

The title of this post is identical to the title of an article by Matt Reynolds in the News & Technology section of the 10 June 2017 issue of the New Scientist. The article begins, “CONFIDENCE in your abilities is usually a good thing—as long as you know when it’s time to ask for help.” Reynolds notes that as we build ever smarter software, we may want to apply the same mindset to machines.

Dylan Hadfield-Menell says that overconfident AIs can cause all kinds of problems. So he and his colleagues designed a mathematical model of an interaction between humans and computers called the “off-switch game.” In this theoretical set-up, robots are given a task to do and humans are free to switch them off whenever they like. A robot can also choose to disable its switch so the person cannot turn it off.

Robots given a high level of “confidence” that they were doing something useful would never let the human turn them off, because they tried to maximize the time spent doing their task. Not surprisingly, a robot with low confidence would always let a human switch it off, even if it was doing a good job.
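The article describes the model only at a high level. As a rough illustration of the intuition (not the authors’ actual formulation), here is a minimal toy sketch in Python: the robot holds a belief about how useful its action is, and the value of deferring to the human, that is, of leaving its off switch enabled, depends on how uncertain that belief is. The Gaussian belief and all the numbers are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_payoffs(mean, std, n_samples=100_000):
    """Monte Carlo estimate of the robot's expected payoff under its belief
    that the utility U of its action is Normal(mean, std)."""
    u = rng.normal(mean, std, n_samples)
    act = u.mean()                    # act unilaterally: receives U, good or bad
    switch_off = 0.0                  # shut itself down: guaranteed zero utility
    defer = np.maximum(u, 0).mean()   # wait for the human, who (if rational)
                                      # lets the action proceed only when U > 0
    return act, switch_off, defer

# An overconfident robot: tight belief that its action is valuable.
print(expected_payoffs(mean=1.0, std=0.1))
# An appropriately uncertain robot: wide belief about the action's value.
print(expected_payoffs(mean=1.0, std=3.0))
```

With the tight belief, deferring is worth essentially no more than acting, so a highly “confident” robot has no incentive to leave its switch enabled; with the wide belief, deferring clearly beats both acting unilaterally and switching off, so the uncertain robot happily lets the human retain control.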

Obviously, calibrating the level of confidence is important. It is unlikely that humans would ever provide a level of confidence that would not allow them to shut down the computer. A problem here is that we humans tend to be overconfident and to be unaware of how much we do not know. This human shortcoming is well documented in a book by Steven Sloman and Philip Fernbach titled “The Knowledge Illusion: Why We Never Think Alone.” Remember that transactive memory is information that is found in our fellow human beings and in technology that ranges from paper to the internet. Usually we eventually learn the best sources of information in our fellow humans and human organizations, and we need to learn where to find and how much confidence to have in information stored in technology, which includes AI robots. Just as we can have the wrong friends and sources of information, we have the same problem with robots and external intelligence.

So the title is wrong. Robots may not be more useful if they are made to lack confidence. They should have a calibrated level of confidence, just as we humans should have calibrated levels of confidence depending upon the task and how skilled we are. Achieving the appropriate levels of confidence between humans and machines is a good example of the man-machine symbiosis J.C.R. Licklider expounded upon in his classic paper “Man-Computer Symbiosis.”

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Thinking with Technology

July 8, 2017

This is the seventh post in the series on The Knowledge Illusion: Why We Never Think Alone (Unabridged), written by Steven Sloman and Philip Fernbach. Thinking with Technology is a chapter in this book. Much has already been written in this blog on this topic, so this post will try to hit some unique points.

In the healthy memory blog, Thinking with Technology comes under the category transactive memory, as information in technology, be it paper or the internet, falls into this category. Actually, Thinking with Other People also falls into this category, as transactive memory refers to all information not stored in our own biological brains. Sloman and Fernbach recognize this similarity, as they write that we are starting to treat our technology more and more like people, like full participants in the community of knowledge. Just as we store understanding in other people, we store understanding in the internet. We already know that having knowledge available in other people’s heads leads us to overrate our own understanding. We live in a community that shares knowledge, so each of us individually can fail to distinguish whether knowledge is stored in our own head or in someone else’s. This is the illusion of explanatory depth, viz., I think I understand things better than I do because I incorporate other people’s understanding into my assessment of my own understanding.

Two different research groups have found that we have the same kind of “confusion at the frontier” when we search the internet. Adrian Ward of the University of Texas found that engaging in internet searches increased people’s cognitive self-esteem, their sense of their own ability to remember and process information. Moreover, people who searched the internet for facts they didn’t know and were later asked where they found the information often misremembered and reported that they had known it all along. Many completely forgot ever having conducted the search, giving themselves credit instead of Google.

Matt Fisher and Frank Keil conducted a study in which participants were asked to answer a series of general causal knowledge questions like, “How does a zipper work?” One group was asked to search the internet to confirm the details of their explanation. The other group was asked to answer the questions without using any outside sources. Next, participants were asked to rate how well they could answer questions in domains that had nothing to do with the questions they were asked in the first phase. The finding was that those who had searched the internet rated their ability to answer unrelated questions as higher than those who had not.

The risk here should not be underestimated. Interactions with the internet can result in our thinking we know more than we actually know. It is important to make a distinction between what is accessible in memory and what is available in memory. If you can provide answers without consulting any external sources, then the information is accessible and is truly in your personal biological memory. However, if you need to consult the internet, some other technical source, or some individual, then the information is available, but not accessible. This is the difference between a closed book test and an open book test. Unless you can perform extemporaneously and accurately, be sure to consult transactive memory.

Sloman and Fernbach have some unique perspectives. They discount the risk of super intelligence threatening humans, at least for now. They seem to think that there is no current basis for some real super intelligence taking over the world. The reason they offer for this is that technology does not (yet) share intentionality with us. HM does not quite understand why they argue this, and, in any case, the “yet” is enclosed in parentheses, implying that this is just a matter of time.

To summarize succinctly, technology increases the degree to which we think we know more than we actually know. In other words, it increases the knowledge illusion.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Technology and Maturity

June 29, 2017

Sally Jenkins is one of my favorite writers. She writes substantive articles on sports for the Washington Post. She is an outstanding writer, and what she writes on any topic is worth reading. Unfortunately, few of her articles are directly relevant to the healthymemory blog. Fortunately, her current article, “Women’s college athletes don’t need another coddling parent. They need a coach,” in the 25 June 2017 Washington Post, is relevant because it identifies certain adverse effects of technology.

The following is cited directly from the article: “According to a 2016 NCAA survey, 76% of all Division I female athletes said they would like to go home to their moms and dads more often, and 64% said they communicate with their parents at least once a day, a number that rises to 73% among women’s basketball players. And nearly a third reported feeling overwhelmed.”

Social psychologists say that these numbers “reflect a larger trend in all college students that is attributable at least in part to a culture of hovering parental-involvement, participation trophies and constant connectivity via smartphones and social media, which has not made adolescents more secure and independent, but less.”

Since 2012 there has been a pronounced increase in mental health issues on campuses. Nearly 58% of students report anxiety and 35% experience depression, according to annual freshmen surveys and other assessments.

Research psychologist Jean Twenge has written a forthcoming book, pointedly entitled “iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood and What That Means for the Rest of Us.” She writes that the new generation of students is preoccupied with safety, “including what they call emotional safety,” perhaps because they grew up interacting online through text, where words can incur damage.

Along with this anxiety, iGens have unrealistic expectations and exaggerated opinions of themselves. Nearly 60% of high school students say they expect to get a graduate degree. In reality, just 9 to 10% actually will. 47% of Division I women’s basketball players think it’s at least “somewhat likely” they will play professional or Olympic ball. In reality, the WNBA drafts just 0.9% of the players.

Dr. Twenge writes that if you compare iGen to Gen Xers or boomers, they are much more likely to say their abilities are ‘above average.’

Perhaps not all, but definitely some, and likely a large percentage, of these problems are due to the adverse effects of technology.

 

The Truth About Language

May 17, 2017

“The Truth About Language” is a most informative book by Michael C. Corballis.  Its subtitle is “What It Is and Where It Came From.”  The title and the subtitle inform the reader exactly what the book is about.  This is an enormously complex topic.  There are more than six thousand languages today, and they vary among themselves tremendously.  Moreover, this language ability is the skill that sets our species apart.

The question as to where it came from is still highly contentious.  Dr. Corballis presents his analysis and conclusion, one which HM finds compelling, but there is no consensus on this topic.

This blog post is filed under the category “Transactive Memory.”  Transactive memory is memory storage external to our personal memories.  So it includes information stored in the memories of other humans and memories stored in external media.  In this case the storage medium was a book and the presentation device was an iPad.  There is a tremendous wealth of memory here.  Dr. Corballis is a scholar of the highest caliber who is drawing from the knowledge of a very large number of outstanding minds.  And a reader applying attention to this book derives a large amount of knowledge.

There is a personal interest for HM here.  The book discusses the behaviorist B.F. Skinner’s tome, “Verbal Behavior.”  As an undergraduate, HM argued Skinner’s thesis before a linguistics class.  Although his performance was pitiable, a charitable professor gave him an “A” for the class.  As a graduate student, he taught undergraduates Chomsky’s transformational generative grammar.  His post-doctoral work did not involve linguistics, so he lost touch with the topic.  Dr. Corballis’s book brought him up to date and reignited his interest.

So it is clear why HM is interested in this book.  Should any readers have a general interest in this topic, it provides fuel for a growth mindset which helps foster a healthy memory.

It is not known when language began.  Presumably it arose sometime during hominin evolution, but that is debatable.  There is also no general agreement as to how long it took for language to develop.  There are two general schools.  One holds that it developed suddenly.  This school is found in certain religions and with the linguist Noam Chomsky.  Dr. Corballis is in the second school: language developed gradually over an unknown but probably long period of time.

Dr. Corballis argues that the development involved gestures.  It is interesting to note here that deaf babies gesture.  It is also important to note that American Sign Language is recognized as a legitimate language.  The development was gradual and occurred over a long period of time.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Smartphones and Nature

May 5, 2017

My wife and I enjoy walking in the woods.  And while walking we frequently encounter a disturbing sight.  And that is someone walking in the woods with their face buried in their smartphone.  It is understood that there are good reasons for having a smartphone while walking in the woods.  An emergency might be encountered, or maybe it needs to be consulted for a reference regarding a bird, plant, tree, or some other aspect of nature.  But walking in the woods with one’s face buried in a smartphone largely defeats the benefits of walking in the woods.

First of all, walking, even with one’s face buried in a smartphone, still is good exercise.  But research has shown that walking in a natural setting as opposed to an urban setting is particularly beneficial.  Walking and appreciating nature is even more beneficial.  And what is most beneficial is a walking meditation in nature, being in the moment experiencing nature.

So go ahead and bring your smartphone with you in the woods.  Just do not bury your face in it and try walking meditation.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Can Democracy Survive the Internet?

April 24, 2017

The title of this post is part of the title of a column by Dan Balz in the 23 April 2017 issue of the Washington Post.  The complete title of the column is “A scholar asks, ‘Can democracy survive the internet?’”  The scholar in question is Nathaniel Persily, a law professor at Stanford University.  He has written an article in a forthcoming issue of the Journal of Democracy with the same title as this post.

Before proceeding, let HM remind you that the original purpose of the internet was to increase communication among scientists and engineers.  Tim Berners-Lee created the technology that gave birth to the World Wide Web.  He gave it to the world for free to fulfill its true potential as a tool that serves all of humanity.  The healthy memory blog post “Tim Berners-Lee Speaks Out on Fake News” related some of the concerns he has regarding where the web is going.

Persily’s concerns go much further.  And they go way beyond Russian interference in the 2016 presidential election.  He notes that foreign attempts to interfere with what should be a sovereign enterprise are only one factor to be examined.  Persily argues that the 2016 campaign broke down previously established rules and distinctions “between insiders and outsiders, earned media and advertising, media and non-media, legacy media and new media, news and entertainment and even foreign and domestic sources of campaign communication.”  One of the primary reasons Trump won was that Trump realized the potential rewards of exploiting what the internet offered, and conducted his campaign through new, unconventional means.

Persily writes that Trump realized, “That it was more important to swamp the communication environment than it was to advocate for a particular belief or fight for the truth of a particular story.”  Persily notes that the Internet reacted to the Trump campaign, “like an ecosystem welcoming a new and foreign species.  His candidacy triggered new strategies and promoted established Internet forces.  Some of these (such as the ‘alt-right’ ) were moved by ideological affinity, while others sought to profit financially or further a geopolitical agenda.  Those who worry about the implications of the 2016 campaign are left to wonder whether it illustrates the vulnerabilities of democracy in the Internet age, especially when it comes to the integrity of the information voters access as they choose between candidates.”

Persily quotes a study by a group of scholars that said, “Retweets of Trump’s posts are a significant predictor of concurrent news coverage…which may imply that he unleashes ‘tweetstorms’ when his coverage is low.”

Persily also writes about the 2016 campaign, “the prevalence of bots in spreading propaganda  and fake news appears to have reached new heights.  One study found that between 16 September and 21 October 2016, bots produced about a fifth of all tweets related to the upcoming election.  Across all three presidential debates, pro-Trump twitter bots generated about four times as many tweets as pro-Clinton bots.  During the final debate in particular, that figure rose to seven times as many.”

Clearly, Persily raises an extremely provocative, disturbing, and important question.

What’s Next for The March for Science?

April 24, 2017

To find out go to https://satellites.marchforscience.com

And remember that science is essential for a healthy memory!

Irresistible

April 12, 2017

“Irresistible” is the title of a book by Adam Alter.  Its subtitle is “The Rise of Addictive Technology and the Business of Keeping Us Hooked.”  This is an important book because it addresses an important problem, the addiction to computer games.  The World of Warcraft (WOW) is perhaps the most egregious example, in which lives have been and are continuing to be ruined.  The statistics will not be belabored here.  They are well presented in “Irresistible” along with numerous personal stories.  “Behavioral addiction” was discussed in a previous healthymemory blog post, “Beware the Irresistible Internet.”  There is a series of posts based on Dr. Mary Aiken’s book, “The Cyber Effect,” that has addressed this problem.  Additional healthy memory posts on this topic can be found by entering “Sherry Turkle” into the search block of the healthymemory blog.  What is especially alarming is that Adam Alter makes a compelling argument that game makers are getting better at making their games irresistible, that is, behaviorally addicting.

Of course, not all games are bad.  “Gamification” is a term for games devoted to beneficial ends, such as education.  This can be very beneficial when learning that could be tedious is transformed into an entertaining game, one that could be played for its entertainment value alone.  Good arguments can be made for these games provided that their educational benefits are documented.  However, even if it were possible, it would be dangerous if all of education were gamified.  Not everything in life is enjoyable, and part of the educational process should be learning to persevere even when learning becomes difficult and frustrating.

Alter also does a commendable review of treatments for behavioral addictions and preventive measures to decrease the likelihood of addiction.  The book begins with Steve Jobs telling the New York Times journalist Nick Bilton that his children never used the iPad: “We limit how much technology our kids use at home.”  Bilton discovered that other tech giants imposed similar restrictions.  A former editor of “Wired,” Chris Anderson, enforced strict time limits on every device in his home, “because we have seen the dangers of technology firsthand.”  After relating the way tech giants controlled their children’s access to technology, Alter wrote, “It seemed as if the people producing tech products were following the cardinal rule of drug dealing:  never get high on your own supply.”

Perhaps one of the most informative studies related in “Irresistible” is not specifically about addiction.  It relates a paper published by eight psychologists in the journal “Science.”  In one study they asked a group of undergraduate students to sit quietly for twenty minutes.  The students were told that their goal was to entertain themselves with their thoughts as best they could; that is, the goal should be to have a pleasant experience, as opposed to spending the time focusing on everyday activities or negative things.  The experimenters hooked the participants up to a machine that administers electric shocks, and gave them a sample shock to show that the experience of being shocked isn’t pleasant.  The students were told that they could self-administer the shock if they wanted to, but that “Whether you do so is completely up to you.”  It was their choice.
One student shocked himself one hundred and ninety times.  That’s once every six seconds, over and over for twenty minutes.   Although he was an outlier, two thirds of all male students and about one in three female students shocked themselves at least once.  Many shocked themselves more than once.  By their own admission in a questionnaire they didn’t find the experience pleasant, so they preferred to endure the unpleasantness  of a shock to the experience of sitting quietly with their thoughts.

Upon rereading this experiment HM became convinced that the teaching of mindfulness and meditation should be mandatory in the public schools.  Had they been taught, these students could have taken advantage of the situation to be “in the present” and to meditate, just as they would if they found themselves stuck in traffic or forced to wait.  (See the healthy memory blog post “SPACE.”)

Perhaps HM is a “goody two-shoes,” but he has never been attracted to games.  He never cared how much he scored on a pinball machine.  He is the same with respect to computer games.  They strike him as pointless activities, so he never plays them.

It strikes HM that public education is avoiding a key responsibility.  Students need to understand from an early age that their time on earth is limited.  This should not send them into panic or make them avoid enjoyable pursuits.  But a question that should be asked of any pursuit is what value it has.  It is okay for some pursuits to be pursued for enjoyment alone.  But there are also pursuits which, in addition to being enjoyable, provide both personal benefits and societal benefits.

Ideally one should pursue a life with purpose, as was related in the posts on Victor Strecher’s book “Life on Purpose.”  This provides for a beneficial and fulfilling life.  In the healthymemory blog post “SPACE,” Strecher argues for pursuing a healthy lifestyle to further the ends of living a life with purpose.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Perhaps Tim Berners-Lee Did Not Anticipate This Problem

April 1, 2017

And this problem can be found in a front page article by Elizabeth Dwoskin and Craig Timberg titled “Advertisers find it hard to avoid sites spawning hate” in the 25 March 2017 issue of the Washington Post.

This article begins, “As the owner of a small business in liberal Massachusetts, John Ellis was a natural sympathizer of the nationwide call for advertisers to boycott Breitbart News, with its hard-edge conservative politics and close ties to President Trump.  But it made Ellis wonder about other, more extreme right-wing sites:  Who is placing ads on them?  A few clicks around the Internet revealed a troubling answer:  He was.”

He found an ad for his engineering company, Optics for Hire, on a website owned by the white nationalist leader Richard Spencer.  Of course, this meant that he was unknowingly supplying funding for this website.

The Post article continues that in the booming world of Internet advertising, businesses that use the latest in online advertising technology offered by Google, Yahoo, and their major competitors are increasingly finding their ads placed alongside politically extreme and derogatory content.

The reason for this is that the ad networks offered by Google, Yahoo, and others can display ads on vast numbers of third-party websites based on people’s search and browsing histories.  This strategy gives advertisers an unprecedented ability to reach customers who fit a narrow profile, but it dramatically curtails their ability to control where their advertisements appear.

This week AT&T, Verizon and other leading companies pulled their business from Google’s AdSense network in response to news reports that ads had appeared with propaganda from the Islamic State and violent groups.

A Washington Post examination of dozens of sites with politically extreme and derogatory content found that many were customers of leading ad networks, which share a portion of revenue gleaned from advertisers with the site’s operators.  The examination found that the networks had displayed ads for Allstate, IBM, DirectTV and dozens of other household brand names on websites with content containing racial and ethnic slurs, Holocaust denial and disparaging comments about African Americans, Jews, women, and gay people.

Other Google-displayed ads, for Macy’s and the genetics company 23andMe, appeared on the website My Posting Career, which describes itself as a “white privilege zone,” next to a notice saying the site would offer a referral bonus for each member related to Adolf Hitler.

Some advertisers also expressed frustration that ad networks had failed to keep marketing messages from appearing alongside reader comments—even on sites that themselves do not promote extremist content.

Clearly more attention needs to be devoted to this topic along with better screening algorithms.  And perhaps some companies will need to make a choice between profits and offending content.

A Good Example of What Tim Berners-Lee Fears

March 31, 2017

It can be found in an article by Anthony Faiola and Stephanie Kirchner on page A8 of the 25 March issue of the Washington Post titled “In Germany, online hate stokes right-wing violence.”

The Reichsburgers are an expanding movement in Germany with similarities to what are known as sovereign citizens groups in the United States.  Reichsburgers reject the legitimacy of the federal government, seeing politicians and bureaucrats as usurpers.  After authorities seized illegal weapons from the home of a Reichsburger named Bangert, they charged him and five accomplices with plotting attacks on police officers, Jewish centers and refugee shelters.

Jan Rathje, a project leader at the Amadeu Antonio Foundation, says, “It’s an international phenomenon of people claiming there are conspiracies going on, people with an anti-Semitic worldview who are also against Muslims, immigrants, and the federal government.”  He continued, “We’ve reached a point where it’s not just talk.  This kind of thinking is turning violent.”

Preliminary figures for last year show that at least 12,503 crimes were committed by far-right extremists—914 of which were violent.  The worst act was the fatal shooting of a German police officer by a Reichsburger member.  The preliminary figures are roughly comparable with levels in 2015, but they amount to a leap of nearly 20% from 2014.

Of course, Germans are especially sensitive about this, as they were once governed by Nazis.  Officials say the last time numbers surged this high was in the early 1990s, when Germany recorded a large but short-term jump in neo-Nazi activity following reunification.  Authorities believe the current surge is due, in part, to the arrival of large numbers of mostly Muslim asylum seekers.  Last year, there were nearly 10 anti-migrant attacks per day, ranging from vandalism to arson to severe beatings.  Officials say the rise of conspiracy theorist websites, inflammatory fake news, and anti-federal-government/right-wing activism has thrown more factors into the mix.

The Reichsburger movement consists of nearly 10,000 individuals who reject the authority of federal, state and city governments.  Some claim that the last real German government was the Third Reich of Adolf Hitler.  Although the Reichsburger movement may be uniquely German, its type of fringe thinking is universal.  German intelligence officials describe some of the tools used by members, such as fake passports and documents used to declare their own governments, as nearly identical to those used by American sovereign citizens groups.

In October, a 49-year-old Reichsburger who had declared his home an “independent state” shot and killed a police officer assigned to seize his hoarded weapons.  Last August, a former “Mr. Germany” named Ursache and 13 of his supporters tried to prevent his eviction from his “sovereign home” by shooting at police.  Police fired back, severely injuring Ursache.  Two officers were also hurt.  These raids, along with raids of 11 other apartments, found evidence against Bangert and five other people suspected of having formed a far-right extremist network.  They are believed by prosecutors to have been planning armed attacks against police officers, asylum seekers, and Jews.

As the title of the Washington Post article suggests, online hate is stoking much of this right-wing violence.  It would be interesting to compare the number of right wing hate groups in Germany with right wing hate groups in the US.  This article provides some limited information on Germany.

To find evidence about dangerous hate groups in the US, go to https://www.splcenter.org
At one time the FBI monitored these dangerous groups.  HM hopes they are continuing these activities.  However, the Southern Poverty Law Center does more than just monitor these groups.  They have programs that have reformed members of these hate groups, and they continue to develop more programs for this essential service.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Tim Berners-Lee Speaks Out on Fake News

March 30, 2017

The title of this post is identical to the title of an article in the Upfront section of the 18 March 2017 issue of the New Scientist.  Tim Berners-Lee is the creator of the World Wide Web.  He gave it to the world for free to fulfill its true potential as a tool that serves all of humanity.  So it is interesting to see what he thinks on the web’s 28th birthday, which was reached on 12 March.

Berners-Lee wrote an open letter to mark the web’s 28th birthday.  He wrote that it is too easy for misinformation to spread, because most people get their news from a few social media sites, and search engines prioritize content based on what people are likely to click on.

He also questioned the ethics of online political campaigning, which exploits vast amounts of data to target various audiences.  He wrote, “Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups.  Is that democratic?”

He also said that we are losing control of our personal data, which we divulge to sign up for free services.

Berners-Lee founded the Web Foundation, which plans to work on these issues.

Bill Gates’ Robot Tax Alone Won’t Save Jobs: Here’s What Will

March 10, 2017

The title of this post is identical to the title of an article by Sumit Paul-Choudhury in the 4 March  2017 issue of the New Scientist.   Bill Gates argued that we should raise the same amount of money by taxing robots as we would lose in payroll taxes from the humans they supplant.  Then this money could be directed towards more human-dependent jobs, such as caring for the young, old and sick.  EU legislators rejected just such a proposal due to lobbying efforts by the robotics industry.
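To make the proposal concrete, here is a minimal sketch of the arithmetic with purely hypothetical numbers; neither Gates nor the article specifies a wage or a tax rate, so both figures below are assumptions for illustration only.

```python
# Hypothetical figures for illustration only.
displaced_worker_salary = 50_000   # assumed annual wage of the replaced worker
payroll_tax_rate = 0.153           # assumed combined payroll tax rate

# Revenue-neutral robot tax: recover exactly the payroll tax that is lost
# when the worker is replaced by a machine.
lost_payroll_tax = displaced_worker_salary * payroll_tax_rate
robot_tax = lost_payroll_tax

print(f"Annual robot tax per displaced worker: ${robot_tax:,.0f}")  # $7,650
```

Under these assumed figures, each automated-away job would carry a tax of about $7,650 a year, money that could then be redirected toward the human-dependent care jobs Gates mentions.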

The article makes the valid assertion that automation is the biggest challenge to employment in the 21st century.  Research has shown that far more jobs are lost to automation than to outsourcing.  Moreover, this will get worse as machines become ever more capable of doing human jobs—not just those involving physical labor, but ones involving thinking also.

The common counterargument to fears of a robot revolution is that previous upheavals have always created new kinds of jobs to replace the ones that have gone extinct.  But previously, when automation hit one sector, employees could decamp to other industries.  However, the sweep of machine learning means that many sectors are automating simultaneously.  So perhaps it’s not about how many jobs are left after the machines are done taking their pick, but which ones.

The article suggests that the answer might not be very satisfying: the rise of the “gig economy,” in which algorithms direct low-skilled human workers.  Although this might be an employer’s dream, it is frequently an insecure, unfulfilling and sometimes exploitative grind for workers.

The article argues that to stop this, it’s employers that need to be convinced, not the people making the technology, but it will be difficult to convince the employers who have huge incentives to replace all-too-human workers with machines that never stop working and never get paid.

Although the article fails to mention this, there is the danger of extremely high unemployment, particularly among the well-educated and formerly  well-off.  There have been several previous healthy memory blog posts by HM in which he discusses the future he was offered in the 1950s.  In elementary school we were told that by today technological advances would vastly increase leisure time.  Bear in mind that in the 50s very few mothers worked.  Moreover, technology has advanced far more than anticipated.  So, why is everyone working so hard?  Where is this promised leisure?

Unfortunately, modern economies are predicated on growth.  They must grow, which requires people to purchase junk and to keep working.  These economies are running towards disaster.  People need to demand the leisure promised in the 50s.  Paul-Choudhury’s article does suggest that a business-friendly middle ground might be for governments to subsidize reductions in working hours, an approach that has fended off labour crises before.  HM thinks that Paul-Choudhury has vastly underestimated the dangers of job losses.  HM thinks that this is of a magnitude that will threaten the stability of society.  So the working week will need to be drastically shortened to 20 hours (see the healthymemory blog post “More on Rest”).

There have been previous healthy memory blog posts on having a basic minimum income, which also will need to be passed.

The primary forces arguing for these changes are the risks of societal collapse.

However, people need to have a purpose (ikigai) in their lives.  They need to have eudaemonic not hedonic pursuits.  Eudaemonic pursuits build societies; hedonic pursuits destroy society.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Media Multi-tasking

February 4, 2017

Media multitasking is another important topic addressed by Julia Shaw in “THE MEMORY ILLUSION.”  She begins this section as follows:  “Let me tell you a secret.  You can’t multitask.”  This is the way neuroscientist Earl Miller from MIT puts it, “people can’t multitask very well, and when people say they can, they’re deluding themselves…The brain is very good at deluding itself.”  Miller continues, “When people think they’re multitasking, they’re actually just switching from one task to another very rapidly.  And every time they do, there’s a cognitive cost.”

A review done in 2014 by Derk Crews and Molly Russ on the impact task-switching has on efficiency concluded that it is bad for our productivity, critical thinking and ability to concentrate, in addition to making us more error-prone.  Moreover, they concluded that these consequences are not limited to diminishing our ability to do the task at hand.  They also have an impact on our ability to remember things later.  Task switching also increases stress, diminishes people’s ability to manage a work-life balance, and can have negative social consequences.

Reynol Junco and Shelia Cotten further examined the impact of task-switching on our ability to learn and remember things.  Their research was reported in an article entitled ‘No A 4 U’.  They asked 1,834 students about their use of technology and found that most of them spent a significant amount of time using information and communication technologies on a daily basis.  They found that 51% of respondents reported texting, 33% reported using Facebook, and 21% reported emailing while doing schoolwork somewhat or very frequently.  The respondents reported that while studying outside of class, they spent an average of 60 minutes per day on Facebook, 43 minutes per day browsing the internet, and 22 minutes per day on email.  This amounts to over two hours of attempted multitasking while studying per day.  The study also found that such multitasking, particularly the use of Facebook and instant messaging, was significantly negatively correlated with academic performance; the more time students reported spending using these technologies while studying, the worse their grades were.

David Strayer and his research team at the University of Utah published a study comparing drunk drivers to drivers who were talking on their cell phones.  The assumption is that most conscious attention is directed at the conversation, and the driving has been relegated to automatic monitoring.  The results were that “When drivers were conversing on either a handheld or a hands-free cell phone, their braking reactions were delayed and they were involved in more traffic accidents than when they were not conversing on a cell phone.”  HM believes that this research was conducted in driving simulators and did not engender any carnage on the road.  Strayer also concluded that driving while chatting on the phone can actually be as bad as drunk driving, with both noticeably increasing the risk of car accidents.

Unfortunately, legislators have not understood this research.  Laws allow hands-free use of cell phones, but it is not the hands that are at issue here.  It is the attention available for driving.  Cell phone use, regardless of whether hands are involved, detracts from the attention needed when emergencies or unexpected happenings occur during driving.

Communications researchers Aimee Miller-Ott and Lynne Kelly studied how constant use of our phones while also engaged in other activities can impede our happiness.  Their position is that we have expectations of how certain social interactions are supposed to look, and if these expectations are violated we have a negative response.
They asked 51 respondents to explain what they expect when ‘hanging out’ with friends and loved ones, and when going on dates.  They found that just the mere presence of a visible cell phone decreased the satisfaction of time spent together, regardless of whether the person was constantly using it.  The reasons offered by the respondents for disliking the other person being on their cell phone included the violation of the expectation of undivided attention during dates and other intimate moments.  When hanging out, this expectation was lessened, so the presence of a cell phone was not perceived to be as negative, but it was still often considered to diminish the in-person interaction.  Their research corresponded to their review of the academic literature, where there is strong evidence showing that romantic partners are often annoyed and upset when their partner uses a cell phone during time spent together.

Marketing professor James Roberts has coined the term ‘phub,’ an elision of ‘phone’ and ‘snub,’ to describe the action of a person choosing to engage with their phone instead of engaging with another person.  For example, you might angrily say, “Stop phubbing me!”  Roberts says that the phone attachment leading to this kind of use behavior has been linked with higher stress, anxiety, and depression.

The Alt-Right and the President-elect (via the Electoral College)

January 20, 2017

U.S. citizens should understand the ramifications that the alt-right has for the President-elect.  A quick way of accomplishing this is to read the e-book by Jon Ronson, “The Elephant in the Room:  A Journey into the Trump Campaign and the ‘Alt-Right.’”  Jon Ronson can be regarded as the foremost expert on Alex Jones, and Alex Jones is one of the foremost voices of the alt-right.  The President-elect has appeared on Jones’s radio talk show.

We’ll skip to the concluding paragraphs of this book, which was published before the election.

“But the alt-right’s appeal remains marginal because the huge majority of young Americans like multiculturalism.  They aren’t paranoid or hateful about other races.  Those ideas are ridiculous to them.  The alt-right’s small gains in popularity will not be enough to win Trump the election.  This is not Germany in the 1930’s.  All that’s changed is that one of Alex’s fans—one of those grumpy-looking middle-aged men sitting in David Icke’s audience—is now the Republican nominee.

But if some disaster unfolds—if Hillary’s health declines further, or she grows ever more off-puttingly secretive—and Trump gets elected, he could bring Alex and others with him.  The idea of Donald Trump and Alex Jones and Roger Stone and Stephen Bannon having power over us—that is terrifying.”

Might we be Germany in the 1930’s?

“The Elephant in the Room” is available from amazon.com for $1.99.  It is free for Amazon Prime members.

An Example from Lies Incorporated

January 19, 2017

This example was reported in the 7 January 2017 issue of the Washington Post.  The title of the article by Anthony Faiola and Stephanie Kirchner is “Breitbart report triggers a backlash in Germany.”

The article begins, “Berlin—It was every God-fearing Christian’s worst nightmare about Muslim refugees.  “Revealed,” the Breitbart News headline screamed, “1,000-Man Mob Attack Police, Set Germany’s Oldest Church Alight on New Year’s Eve.”  The only problem:  Police say that’s not what happened that night in the western city of Dortmund.”

So what did the police say?  They did not dispute that several incidents took place that night, but nothing to the extremes suggested by the Breitbart report.  They said the evening was comparatively calmer than previous New Year’s Eves.

The motivation for the false report is clear:  to further the alt-right agenda of creating fear of Muslims.  And this is Breitbart’s mission—to spread propaganda for the alt-right.  This swill is harmful to peace in the world and pollutes healthy memories.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Lies, Incorporated

January 18, 2017

Lies, Incorporated is the title of a book by Ari Rabin-Havt and Media Matters for America.  This book is so thoroughly researched that it could not have been done by one individual; consequently, the research of Media Matters for America is key.  The subtitle of this book is “The World of Post-Truth Politics.”  An earlier healthymemory blog post titled “Did Corporate PR Initiate the Post-Fact Era” discussed the beginning of the post-fact era by describing the false scientific effort to document that smoking was safe.  That post also included the false scientific effort to argue against global warming.  “Lies Incorporated” elaborates on these topics and then has chapters titled “Lie Panel:  Health Care,” “Growth in a Time of Lies:  Debt,” “On the Border of Truth:  Immigration Reform,” “Two Dangerous Weapons:  Guns and Lies,” “One Lie, One Vote:  Voter I.D. Laws,” “Shut That Whole Lie Down:  Abortion,” and “A Lie’s Last Gasp:  Gay Marriage.”

The book begins with the statement, “Richard Berman is a Liar.”  Berman relishes the title of “Dr. Evil” and develops the nastiest PR campaigns to undermine and discredit truth.  Berman’s motivation appears to be money; he’ll sell himself to the highest bidder.  For others, the motivation is one of convenience.  If you are in the petroleum business, global warming is indeed an “inconvenient truth.”  HM admittedly chooses to ignore the sound dietary guidance his wife offers because it is an “inconvenient truth.”  But many are simply ideologues.  They know what they believe and force facts into those ideologies by ignoring genuine facts and generating their own version of facts.  This is termed “motivated reasoning.”  The criterion of truth is ignored.

Perhaps the most blatant example is provided by the “Death Panel Lie” generated to defeat the Affordable Care Act.  In June 2014 “The Washington Post” reported the story of a woman and her husband who were employed but receiving no benefits and would rather pay a penalty for being uninsured than participate in Obamacare.  They were afraid of the discredited notion of “Death Panels” and were paying serious out-of-pocket medical costs stemming from chronic conditions.  These people were not alone.  A November 2014 Gallup Poll found that 35% of uninsured Americans would rather pay the fine prescribed by law than receive health insurance.  There were people who said that they did not want government involvement, but that hands should be kept off their Medicare.  This, in part, explains why the United States has the most expensive medical costs with the results of a third world country.  It leads one to think that if there were a Stupidity Olympics, the United States might well dominate the competition.

One of the most disturbing realizations was that there are people with degrees who are dominated by their ideologies and should know better.  Perhaps this is not surprising as there were scientists who were fascists and supported totalitarian regimes with vigor.

The following two paragraphs are taken directly from the text.  “The purveyors of misinformation have a built-in advantage.  Lies are socially sticky, and even after one has been thoroughly debunked, it will still have advocates among those whose worldview it justifies.  These zombie lies continue to rise from the dead again and again, impacting political debate and swaying public opinion on a variety of issues.
Misinformation is damaging to those who read and absorb it.  Once a lie—no matter how outrageous—is part of the consciousness of a particular group, it is nearly impossible to eliminate, and like a virus it spreads uncontrollably within the affected communities.”  Richard Berman explained to energy executives that once you “solidify [a] position” in a person’s mind, regardless of the truth, you have “achieved something the other side cannot overcome because it’s very tough to break common knowledge.”  That “common knowledge” is repeated on radio, television, in print, and at the water cooler.  With each new citation, the lie becomes more entrenched.

It is commonly known that certain politicians use “code words” to disguise racist statements.  HM found it interesting that, according to this book, the originator of these code words was Lee Atwater, a former chairman of the Republican National Committee who helped elect Ronald Reagan and George H.W. Bush.  Here’s Atwater’s explanation of the delicate balance the Republican Party must strike when using racially tinged issues to win elections without appearing outwardly racist—by “getting abstract” when talking about race:
“You start out in 1954 by saying, “n——-, n——-, n——-.”  By 1968 you can’t say n——-, that hurts you, backfires.  So you say stuff like forced busing, states’ rights, and all that stuff.  And you’re getting so abstract now that you’re talking about cutting taxes, and all these things you’re talking about are totally economic things, and a byproduct of them is, blacks get hurt worse than whites.  And subconsciously maybe that is part of it.  It is getting that abstract, and that coded, that we’re doing away with the racial problem one way or the other.”

So what can be done about this political cesspool?  Be aware and do not allow yourself to be pulled in.  Finding the truth has been made more difficult, but we must all persevere.  Availing ourselves of such sites as factcheck.org, politifact.com, and https://thinkprogress.org can help.

Move Knowledge from the Cloud Into Your Head

November 29, 2016

There is much in Poundstone’s “Head In the Cloud” that is not covered in this blog.  HM encourages the interested reader to read the book.  Poundstone provides strategies for sorting through the vast amounts of available information.  However, HM wants to make a single point.  The notion that everything can be found, so nothing needs to be remembered, is dangerously in error.  Hence the title of this post:  move knowledge from the cloud into your biological brain.  Of course, it would be both impractical and impossible to move everything to our biological brains.  Most information can be ignored.  Some information can be made available, but not immediately accessible.  This is information that can be readily found via searching, bookmarking, or downloading to another storage device.  However, there is other information that needs to be accessible in your biological memory.  The problem is how much information and where it should be stored.  The answer to this question is reminiscent of Goldilocks:  not too much, and not too little.  This varies from individual to individual and depends upon the nature of the topic.

Poundstone seems to imply that what information needs to go where is a triage problem solved by the brain.  What he neglects to mention is that this should be a conscious process.  Do not passively assume that the brain will perform this function effectively.  It needs input from your conscious mind.  It requires thinking, Kahneman’s System 2 processing.  Effective cognition requires effectively apportioning information among what is available from technology and our fellow humans, what we can readily access from technology and our fellow humans, and what needs to be held in our biological brains.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Research Ties Fake News to Russia

November 28, 2016

The title of this post is identical to a front page story by Craig Timberg in the 25 November 2016 issue of the Washington Post.  The article begins, “The flood of ‘fake news’ this election season got support from a sophisticated Russian propaganda campaign that created misleading articles online with the goal of punishing Democrat Hillary Clinton, helping Republican Donald Trump, and undermining faith in American democracy, say independent researchers who tracked the operation.”

The article continues, “Russia’s increasingly sophisticated machinery—including thousands of botnets, teams of paid human “trolls,” and networks of websites and social-media accounts—echoed and amplified right-wing sites across the Internet as they portrayed Clinton as a criminal hiding potentially fatal health problems and preparing to hand control of the nation to a shadowy cabal of global financiers.  The effort also sought to heighten the appearance of international tensions and promote fear of looming hostilities with the nuclear-armed Russia.”

Two teams of independent researchers found that the Russians exploited American-made technology platforms to attack U.S. democracy at a particularly vulnerable moment.  The sophistication of these Russian tactics may complicate efforts by Facebook and Google to crack down on “fake news.”

The research was done by Clint Watts, a fellow at the Foreign Policy Research Institute who has been tracking Russian propaganda since 2014, along with two other researchers, Andrew Weisburg and J.M. Berger.  This research can be found at warontherocks.com, “Trolling for Trump:  How Russia is Trying to Destroy our Democracy.”

Another group, PropOrNot (http://www.propornot.com/), plans to release its own findings today showing the startling reach and effectiveness of Russian propaganda campaigns.

Here are some tips for identifying fake news:

Examine the URL, which is sometimes subtly changed (a minimal sketch of such a check appears after this list).
Does the photo look photoshopped or unrealistic?  (Drop it into Google Images.)
Cross-check with other news sources.
Think about installing Chrome plug-ins to identify bad stuff.
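For the first tip, here is a minimal sketch, in Python, of what an automated check for subtly altered URLs might look like.  It is only an illustration, not a tool endorsed by the article; the list of trusted domains and the similarity threshold are assumptions chosen for the example.

from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical list of trusted domains; extend as needed.
TRUSTED = ["washingtonpost.com", "nytimes.com", "bbc.com", "reuters.com"]

def lookalike_warning(url, threshold=0.8):
    # Warn when a URL's domain resembles, but does not match, a trusted domain.
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    # An exact match or a legitimate subdomain of a trusted site is fine.
    if any(domain == t or domain.endswith("." + t) for t in TRUSTED):
        return None
    # Otherwise flag near misses such as 'washingtonpost.com.co'.
    for t in TRUSTED:
        if SequenceMatcher(None, domain, t).ratio() >= threshold:
            return "'" + domain + "' resembles '" + t + "' but does not match it"
    return None

print(lookalike_warning("http://washingtonpost.com.co/politics/story"))

Run on the spoofed address above, the function prints a warning; run on the genuine washingtonpost.com, it prints None.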

The Knowledge Premium

November 26, 2016

The Knowledge Premium is a section in “Head In The Cloud,” an important book by William Poundstone.  In this section he computes the monetary value of having facts in our brains as opposed to in the cloud.  He uses regression techniques to relate scores on his knowledge-of-facts tests to income while holding constant demographic variables such as age and education.  This allows the computation of a knowledge premium, the increased income attributable to the test scores alone (a sketch of such a regression appears at the end of this post).  Poundstone created a trivia quiz and found that individuals who aced the test earned $94,959 a year and those who scored zero earned $40,360.  The difference, or knowledge premium, is $54,599 a year.  Here are some of the questions that were used on this ten-item test.

Who was Emily Dickinson—a chef, a poet, a designer, a philosopher or a reality-show star?
Which happened first, the US Civil War or the Battle of Waterloo?
Which artist created this painting?  (Shown was Picasso’s 1928 Painter and Model)
Which nation is Cuba? (Respondents had to locate it on a map)

These questions were characterized as trivial not because the information is unimportant, but because it seems to have nothing to do with basic survival or with making money.  But the statistic computed from this test says that it has a lot to do with making money.

Answers:  Dickinson was a poet; the Battle of Waterloo happened first.  The Emily Dickinson question was answered correctly by 93%, with about 70 to 75% answering the other questions correctly.
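As promised above, here is a minimal sketch, in Python, of the kind of regression Poundstone describes.  The data are synthetic and the coefficients are invented for illustration; only the general approach, ordinary least squares with demographic controls, reflects what the book reports.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(20, 70, n)            # years
education = rng.integers(10, 21, n)     # years of schooling
score = rng.integers(0, 11, n)          # 0-10 quiz score
# Synthetic incomes with an assumed $5,000-per-point effect of the quiz score.
income = (20000 + 300 * age + 1500 * education
          + 5000 * score + rng.normal(0, 10000, n))

# Ordinary least squares: income ~ intercept + age + education + score
X = np.column_stack([np.ones(n), age, education, score])
coefs, *_ = np.linalg.lstsq(X, income, rcond=None)

# The score coefficient times the 10-point range of the test estimates the
# premium for acing the quiz versus scoring zero, with age and education held constant.
premium = coefs[3] * 10
print(f"Estimated knowledge premium: ${premium:,.0f} per year")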

Two Scientists in Congress

November 25, 2016

At the time of the writing of “Head In The Cloud” by William Poundstone there were only two scientists total in the United States Senate and House of Representatives.  That is, of 535 members, only 2 (about 0.4%) are scientists.  It seems only appropriate that a low-information electorate have a low-intelligence Congress.  HM says low intelligence because it is science that has produced advancement and modernity.  Absent science we would be living in filth and ignorance.  Included here are both the physical and social sciences.

It is more than scientific knowledge that is important.  The empirical basis of science together with evaluation methodologies and statistics are important.  We need these to have a rational basis for policies and for a means of evaluating the benefits and dangers of different policies.  When debates in Congress are based upon data, rigorous research can be done to assist in defining the ways to proceed.  Scientists do not always agree.  Nor are the initial results of investigations always correct.  But eventually there is convergence with resulting better ideas and policies.  This is the democracy of the future.  Will it ever be achieved?

The low-information electorate complements nicely argumentation based on beliefs.  People fail to realize that beliefs are double-edged swords where both edges are blunt.  One blunt edge makes it difficult, if not impossible, to see the problems with one’s own beliefs.  The other blunt edge makes it difficult, if not impossible, to see alternative ideas and courses of action.

Some religious beliefs force religion into its historical role of retarding science and keeping humans ignorant.  Moreover, many of the people holding these religious beliefs are not satisfied with the religious freedom guaranteed in the Bill of Rights.  Rather, they feel compelled to enforce their beliefs on others by changing the laws of the land.  What happened to “Judge not, that ye be not judged” (Matthew 7:1-3)?  These same people are appalled at the sharia practiced by some Muslims, yet fail to perceive that what they are doing in the United States is indeed sharia.  These same beliefs forbid the teaching of science and engaging in scientific and medical practices that can advance humankind and relieve a great deal of misery.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The Low-Information Electorate

November 22, 2016

“The Low Information Electorate” is the title of Chapter Five in “Head In The Cloud”,  an important book by William Poundstone.  Both conservatives and liberals agree about how spectacularly dumb the great mass of conservatives and liberals are.  Poundstone notes that this statement is true and proceeds to prove his point.

Ignorance is probably most pronounced in judicial races.  In 1992 the well-respected California judge Abraham Aponte Khan lost an election to a virtually unknown challenger who had been rated “unqualified” by the Los Angeles County Bar Association.  The name of the challenger was Patrick Murphy, a name that sounded less foreign than “Khan.”  Should you ever have problems with judicial decisions, perhaps the first factor to consider is how judges are chosen.  There are ample data to show that judicial elections are a bad idea.

Poundstone conducted a survey of adults asking them to name the holders of fourteen elected offices—national, state, and local.  He found that essentially everyone could name the president, 89% were able to name the vice president, and 62% could identify at least one of their state’s US senators.  Slightly less than half could name both senators, and 55% knew their district’s congressperson.  81% were able to name the governor of their state.  Barely half of those who said they lived in a municipality with a mayor or city manager were able to name that official.  These offices were the limit of the typical citizen’s knowledge.  Less than a third of the respondents could name the current holders of other offices.  These participants were asked to describe their political preferences on a five-point scale from “very conservative” to “very liberal.”  There was no correlation between these ratings and knowing the names of elected officials.

However, Poundstone did find a correlation between knowing the name and knowing something about the individual.  A voter who does not know the name of a mayor is unlikely to know much else about her, such as the issues she ran on and any accomplishments, failures, or criminal convictions that would bear on a bid for reelection.

In 2014 the Annenberg Public Policy Center conducted a survey of adults on facts that they should have learned in civics class.

*If the Supreme Court rules on a case 5 to 4, what does it mean?
21% answered, “The decision is sent back to Congress for reconsideration.”  Wrong!

*How much of a majority is required for the US Senate and the House of Representatives to override a presidential veto?
Only 27% gave the correct answer, two-thirds.
*Do you happen to know any of the three branches of government?  Would you mind naming any of them?
Only 36% were able to name all three (executive, legislative, judicial)

What is also striking is the ignorance among professional politicians.  In a 2015 speech presidential candidate Rick Perry quoted a great patriot:  “Thomas Paine wrote the ‘duty of a patriot’ is to protect his country from his government.”  Paine did not write this.  It appears in the writings of radical-left environmentalist Edward Abbey.

In 2011 another presidential contender, Michele Bachmann, told Nashua, New Hampshire, supporters, “You’re the state where the shot was heard around the world in Lexington and Concord.”  As sharp readers of the healthy memory blog likely know, those towns are in Massachusetts.

Of course, these individuals are failed presidential candidates.  Bill Clinton, however, is a two-term president.  On October 16, 1996 he said, “The last time I checked, the Constitution said, ‘Of the people, by the people and for the people.’  That’s what the Declaration of Independence says.”  Unfortunately those words are from Lincoln’s Gettysburg Address and are not in either of the documents he cited.  Bill Clinton has said many times that Hillary is better than he is.  That is undoubtedly true, but unfortunately she had not proofread his speech.  All three individuals have staffs who should be vetting their speeches.  So what gives?

One might think that character can override ideology.  We hear of swing voters who say they will decide between two ideologically different candidates based on character, likability, or simply being the “better man or woman for the job.”  Unfortunately, UCLA political scientist Lynn Vavreck has found that split-ticket voters—those who vote for candidates from more than one party—are less informed than those who hold to a party line.  She surveyed a sample of 45,000 Americans, asking them to name the current occupations of politicians such as Nancy Pelosi and John Roberts.  She compared the survey results to voting patterns.  Those who fell in the bottom third of political knowledge stood a 12% chance of voting for senatorial and presidential candidates from different parties in the 2012 election.  Among the best-informed third, the chance of a split ticket was only 4%.
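For readers who want to see the mechanics of such a comparison, here is a minimal sketch in Python.  The respondent records are hypothetical, invented for the example; only the idea of comparing split-ticket rates across knowledge terciles comes from the passage above.

from statistics import mean

# Each record: (political-knowledge score, voted a split ticket?)
respondents = [(9, False), (2, True), (5, False), (1, True), (8, False),
               (3, False), (7, False), (0, True), (6, False), (4, True)]

scores = sorted(s for s, _ in respondents)
cut_low = scores[len(scores) // 3]        # upper boundary of the bottom third
cut_high = scores[2 * len(scores) // 3]   # lower boundary of the top third

def split_rate(records):
    return mean(1.0 if split else 0.0 for _, split in records)

bottom = [r for r in respondents if r[0] < cut_low]
top = [r for r in respondents if r[0] >= cut_high]
print(f"Bottom-third split-ticket rate: {split_rate(bottom):.0%}")
print(f"Top-third split-ticket rate:    {split_rate(top):.0%}")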

Underinformed voters were also more likely to describe themselves as undecided on hot-button issues such as immigration, same-sex marriage, and increasing taxes on the wealthy.  These findings fit in with the notion of a “mushy muddle.”  Political pollsters recognize that many who identify themselves as moderates are really just those who “don’t know.”

Poundstone writes, “We hope that voters in the middle supply a reality check to partisanship and help promote the compromise necessary to a democratic society.  There “are” voters who hold strong, well-reasoned political convictions that happen to lie in between those of the two parties.  There just aren’t too many of these voters, it seems.”

Given this epidemic of ignorance, how do democracies survive?  Here is an explanation offered by Poundstone.  “One way to think of it is that democracies are like casinos.  They exploit human irrationality—and, come to think of it, there aren’t many firmer foundations than that.  There are enough “irrational” voters to channel the wisdom of crowds and select candidates who are in tune with public sentiment and who are, usually, not all that bad.”

HM is always annoyed by exhortations “to vote.”  The exhortation should be to get informed, and once informed, to consider voting.  There is already significant noise in elections.  What is the point of increasing the noise?

Poundstone concludes the chapter by relating knowledge of elected officeholders to personal wealth.  He asked his respondents to name the current occupants of seven elected offices:  at least one of your state’s two US senators, your state’s governor, your state senator, your county sheriff, your city or town councilperson, and your local school board representative.  The average adult can name only about three of the seven.  Those who could name all seven offices made about $43,000 more per year than those who couldn’t name any of the offices.

This fact points to the importance of certain information being in one’s brain rather than being found some place in the cloud.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The One-in-Five Rule

November 21, 2016

The One-in-Five Rule is chapter four of “Head In The Cloud,” an important book by William Poundstone.  Survey makers are aware of this rule, and so should you be.  About 20% of the public believes just about any nutty idea a survey taker dares to ask about.  A 2010 “Huffington Post” article on a sample survey reported that underinformed 20-percenters
* believe that witches are real
* believe the sun revolves around the earth
* believe in alien abductions
* believe Barack Obama is a Muslim, and
* believe the lottery is a good investment

Poundstone has a heading in this chapter titled “The Paranoid Style in American Cognition,” although HM is more inclined to believe that this paranoid style is a human problem rather than one specific to America.  However, the examples provided are regarding Americans.

In 2014 psychologists Stephan Lewandowsky, Gilles E. Gignac, and Klaus Oberauer reported a survey asking for True or False responses to the following statements:

* The Apollo moon landings never happened and were staged in a Hollywood film studio.
* The US government allowed the 9/11 attacks to take place so that it would have an excuse to achieve foreign and domestic goals (e.g., the wars in Afghanistan and Iraq and attacks on American civil liberties) that had been determined prior to the attacks.
* The alleged link between secondhand tobacco smoke and ill health is based on bogus science and is an attempt by a corrupt cartel of medical researchers to replace rational science with dogma.
*US agencies intentionally created the AIDS virus and administered it to black and gay men in the 1970s.

These respondents were also asked whether they agreed or disagreed with the following statements:

* The potential for vaccinations to maim and harm children outweigh their health benefits.
* Humans are too insignificant to have an appreciable impact on global temperature.
* I believe that genetically engineered food have already damaged the environment.

Poundstone concludes the chapter with the following paragraph:
“Those who believed in flat-out conspiracy theories were also more likely to agree with the above statements (the first two are wrong, and the third is unproven).  Unlike the typical conspiracy theory, these beliefs affect everyday behavior, both in the voting booth and outside it.  Should I vaccinate my kids?  Are hybrid cars worth the extra cost?  Which tomato do I buy?  The One-in-Five American casts a long shadow.”

More Facts Citizens Should Know

November 20, 2016

This post is based on information in “Head In The Cloud” by William Poundstone.  From 1993 to 2010 the US violent crime rate dropped precipitously.  The firearms homicide rate dropped from 7.0 to 3.6 per 100,000, almost in half.  The nonviolent crime rate plunged to a little more than a quarter of what it had been.  It is difficult to think of another major social problem that had shown such dramatic improvement, but were people aware of this improvement?

A 2013 Pew Research Center poll asked whether gun crimes had gone up, down, or stayed the same over the last twenty years.  56% thought that the crime rate had gone up (wrong), and 26% thought it had stayed the same (also wrong).  Just 12% thought it had gone down.

It is interesting that both sides of the gun issue believe that they have a better remedy for a surging crime rate that doesn’t  actually exist.

Poundstone did a survey asking for an estimate of “the average amount of memory for a new tablet computer.”  The most common answer, 10-99 gigabytes, was the most reasonable one at the time of the survey.  This answer got 40% of the responses.  The second most common answer was also in gigabytes, and that got slightly over 20% of the responses.  So at least these respondents had the correct prefix before bytes.  But the range of responses ran from less than a kilobyte to more than hundreds of petabytes.

Poundstone also found that Americans think that there are far more blacks, Asians, gays, and Muslims than there actually are.  In the public mind, Latinos, blacks, Asians, gays, and Muslims constitute about 25%, 23%, 13%, 11%, and 15% of the population, respectively.  This adds up to 87% of the population.  Poundstone notes that even when you account for overlap, these high-profile minorities would account for about two-thirds of the US population.  So according to what these people think, whites are already a minority, and they feel threatened.  The correct values are 17%, 15%, 6%, and 1%, respectively, which yields a total of 39%.

Head In The Cloud

November 18, 2016

“Head In The Cloud” is an important book by William Poundstone.  The subtitle is “Why Knowing Things Matters When Facts Are So Easy to Look Up.”  Psychologists make the distinction between information that is accessible in memory and information that is available in memory.  Information that you can easily recall is obviously accessible in memory.  However, there is other information that you might not be able to recall now, but that you know that you know it.  This information eventually becomes accessible and can appear suddenly unsummoned in consciousness.

Transactive memory refers to information you can get from our fellow humans or from technology.  Most information available in technology can readily be summoned via Google searches.  An extreme view argues that since all this information is available, we do not need to remember the information itself as long as we know how to search for the information.  Whenever we encounter new information we are confronted with the question as to whether we need to commit this information to our biological memory.  This is a nontrivial question as committing information to memory requires cognitive effort, thinking, or in terms of Kahneman’s Two Process Theory, engaging our System 2 processes.  The healthy memory blog  has a category devoted to mnemonic techniques explicitly designed to assist in memorizing information as well as other discussions regarding how to make information memorable.  But all of this involves effort, so why bother if it can simply be looked up?  “Head in the Cloud” explains the benefits of moving some information from the cloud into our brains.

Poundstone describes an experiment done in 2011 by Daniel Wegner.  He presented volunteers with a list of forty trivia facts—short, pithy statements such as “An ostrich’s eye is bigger than its brain.”  Half of the volunteers were told to remember the facts.  The other half were not.  Within each of these groups half were informed that their work would be stored on the computer, and half were told that their work would be immediately erased after the task’s completion.  All these volunteers were later given a quiz on the facts they typed.  It did not matter whether they had been instructed to remember the information or not.  It only mattered whether they thought their work was going to be erased after the task.  Those who believed their work would be erased remembered more, regardless of whether they were told to remember the information.

The following is directly from the text “It is impossible to remember everything.  The brain must constantly be doing triage on memories, without conscious intervention.  And apparently it recognizes that there is less need to stock our minds with information that can be readily retrieved.  So facts are more often forgotten when people believe the facts will be archived.  This phenomenon has earned a name—the Google effect—describing the automatic forgetting of information that can be found online.”

HM does not disagree with any of the above quote.  However, he is alarmed by what is omitted.  That omission regards a conscious decision as to whether the information should be further processed to increase its accessibility without technology and whether it is related to other information that might require further research.  It is true that we are time constrained, so that depending on the situation the time available for such consideration will be important.  But as Poundstone will show, it is important to get some information out of the cloud and into the brain, and we can consciously alter the processing we give to the retrieved information.  Sans attention, it will likely remain in the cloud.

Poundstone reports an enormous amount of research conducted with a new type of polling called an Internet panel survey.  These are conducted by an organization that has recruited a large group of subjects (the panel) who agree to participate in surveys.  When a new survey begins, the software selects a random sample of the panel to contact.  E-mails containing links are sent to the selected participants, typically in several waves, to achieve a demographic balance closely approximating the general population.  The sample can be balanced for sex, age, ethnicity, education, income, and other demographic markers of interest to the research project.
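Here is a minimal sketch, in Python, of how such a demographically balanced draw from a panel might work.  The panel, the strata, and the population shares are all assumptions invented for the example; it illustrates proportionate stratified sampling, not the actual software any polling firm uses.

import random
from collections import defaultdict

random.seed(42)

# Hypothetical panel: each member has an id and a demographic stratum.
panel = [{"id": i, "stratum": random.choice(["18-34", "35-54", "55+"])}
         for i in range(10_000)]

# Assumed population shares for each stratum, for illustration only.
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_size = 500

# Group panelists by stratum, then sample each stratum in proportion
# to its share of the general population.
by_stratum = defaultdict(list)
for member in panel:
    by_stratum[member["stratum"]].append(member)

sample = []
for stratum, share in population_shares.items():
    k = round(sample_size * share)
    sample.extend(random.sample(by_stratum[stratum], k))

print(f"Drew {len(sample)} panelists; e-mail invitations would go to these ids.")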

A prior healthy memory blog post appropriately titled “The Dunning-Kruger Effect” discusses the Dunning-Kruger Effect.  Dunning is a psychology professor and Kruger was a graduate student.  The effect is that “Those most lacking in knowledge and skills are least able to understand their lack of knowledge.”  The flip-side of this effect is that those most knowledgeable are most aware of any holes in their knowledge.

Actor John Cleese concisely explains the Dunning-Kruger effect in a much-shared YouTube video:  “If you’re very, very stupid how can you possibly realize that you’re very, very stupid?  You’d have to be relatively intelligent to realize how stupid you are…And this explains not just Hollywood but almost the entirety of Fox News.”

The chaos and contradictions of the current political environment can perhaps best be characterized as a glaring example of the Dunning-Kruger effect.  Just a few moments of contemplation should reveal the potential danger from this effect.  Poundstone’s book reveals the glaring lack of knowledge in many important areas by too many individuals.  He also provides ample evidence of the benefits of moving certain information from the cloud and into our brains.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

 

Are Video Games Luring Men From the Workforce?

October 29, 2016

The title of this post is identical to the title of an article by Ana Swanson in the 24 September issue of the Washington Post.  It begins with the story of a high school graduate who has dropped out of the workforce because he finds little satisfaction in the part-time, low wage jobs he’s had since graduating from high school.  Instead he plays video games, including FIFA 16 and Rocket League on Xbox One and Pokemon Go on his smartphone.

The article notes that as of last year 22% of men between the ages of 21 and 30 with less than a bachelor’s degree reported not working at all in the previous year.  This is up from 9.5% in 2000.  These young men have replaced 75% of the time they used to spend working with time on the computer, mostly playing video games.

From 2004 to 2007, before the recession, unemployed men averaged 5.7 hours on the computer, whereas employed men averaged 3.4 hours.  This included video game time.  After the recession, between 2011 and 2014, unemployed men averaged 12.2 hours per week, whereas employed men averaged 4.7 hours.  With respect to video games, from 2004 to 2007 unemployed men averaged 3.4 hours per week versus 2.1 hours for employed men.  During the period from 2011 to 2014 unemployed men averaged 8.6 hours playing video games versus 3.2 hours for employed men.

Researchers argue that these increases in game playing are partially due to the games’ increased appeal.  Estimates are that from one-fifth to one-third of the decrease in work is attributable to the rising appeal of video games.  HM believes that prior to these games most of the unemployed were confronted primarily with daytime television, which provided a strong inducement to seek work.  Today video games provide an entertaining alternative to seeking work.  As the games improve and become more sophisticated, the argument is that they have become even more appealing.

The article notes that the extremely low cost makes these games even more accessible.  It states that recent research has found that households making $25K to $35K a year spent 92 more minutes a week online than households making $100K or more per year.

The article also notes that for the first time since the 1930s more U.S. men ages 18 to 34 are living with their parents than with romantic partners, according to the Pew Research Center.

The article argues that these men are happy.  HM feels that this happiness is likely to be short-lived, and that there is a serious risk that these men will end up as adults who are stunted intellectually and emotionally.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Trump, The World’s Greatest Troll

September 17, 2016

This title was bestowed on Trump by Nate Silver, a statistician and the best campaign prognosticator.  What makes him the greatest troll is the devastating effect he has had on the American political system.  Trump plays to the mob, and in cyberspace, the cyber mob.  Donald Trump has a unique and disturbing leadership style.  Rather than demonstrating gravitas and intelligence with measured remarks and diplomacy, he succeeds with brutal populism and personal attacks.  As Dr. Mary Aiken notes, “he seems to relish being nasty—even sadistic, at times.”  Dr. Aiken continues, “Power no longer centers on leadership but on followership.”  The norms of cyberspace, where cruelty is amplified, escalated, and encouraged, have jumped into politics.

“Trolls” appear to be the greatest attention-seekers online.  They have chosen the appellation “trolls” themselves.  Dr. Aiken believes that the motivation for trolling behaviors is a combination of boredom, revenge, pleasure, attention, and a desire to cause disruption and acquire power.  On multiplayer gaming sites they test and taunt children and then post video or audio of the children crying.  On dating sites trolls are capable of anything from cyber-stalking to sexual harassment and threats.

Dr. Aiken argues that Trump’s success as a presidential candidate is a vivid example of what she calls cyber-socialization.  “Leading by building followers, he employs many of the tactics of a malicious online bully, from his use of taunts and name-calling of fellow candidates (“Crooked Hillary” and “Crazy Bernie” and “Lying Ted”) to his obsession with physical appearance (“Little Marco”) and special hostility for women (“dogs,” “pigs,” and “disgusting”).”

Trump has 8.19 million followers on Twitter and dominates the social media landscape of the election.  Unfortunately, social media have become an environment where pathological behavior is gaining ground and being normalized.  There is a loss of empathy online, a heightened detachment from the feelings and rights of others, which is seen in extreme cyberbullying and sadistic trolling.

Psychologists have found that individuals who comment frequently online and identify themselves as “trolls” score high on three of the four components of what is known as the dark tetrad of personality, a set of characteristics that are found together in a morbid cluster:  narcissism (the characteristic not included), sadism, psychopathy, and Machiavellianism.  In the case of Trump, HM thinks that narcissism could also be appropriate.  The researchers concluded that trolling was a manifestation of “everyday sadism.”

The concluding sentence of Dr. Aiken’s essay is, “Sadly for those of us trying to eradicate cyber-bullying and online harassment, and educate children and teenagers about the great emotional costs of this behavior, our job becomes much harder when high-profile leaders use cruelty as strategy—and win elections for it.”

Dr. Aiken’s essay, from which large portions of this post have obviously been taken, can be found by going to time.com and searching for “Welcome to the Troll Election.”

The Cyber Frontier

September 15, 2016

“The Cyber Frontier” is the final chapter of “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  She writes, “If we think of cyberspace as a continuum, on the far left we have the idealists, the keyboard warriors, the early adopters, philosophers who feel passionately about the freedom of the Internet and don’t want that marred or weighted down with regulation and governance.  On the other end of the continuum, you have the tech industry with its own pragmatic vision of freedom of the Net—one that is driven by a desire for profit, and worries that governance costs money and that restrictions impact the bottom line.  These two groups, with their opposing motives, are somehow strategically aligned in cyberspace and holding firm.”  She writes that the rest of us and our children, about 99.9%, live somewhere in the middle, between these two options.

She says that we should regain some societal control and make it harder for organized cybercrime.  Why put up with a cyberspace that leaves us vulnerable, dependent, and on edge?

Dr. Aiken writes that the architects of the Internet and its devices know enough about human psychology to create products that are a little too irresistible, but that don’t always bring out the best in ourselves.  She calls this the “techno-behavioral effect.”  The developers and their products engage our vulnerabilities and weaknesses, instead of engaging our strengths.  They can diminish us while making us feel invincible and distract us from things in life that are much more important, more vital to happiness, and more crucial to our survival.  She writes that we need to stop and consider the social impact or what she called the “techno-social effect.”

Dr Aiken argues that in the next decade there’s a great opportunity before us— a possible golden decade of enlightenment during which we could learn so much about human nature and human behavior, and how best to design technology that is not just compatible with us, but that truly helps our better selves create a better world.  If we can create this balance, the cyber future can look utopian.

Dr. Aiken argues that we should support and encourage acts of cyber social consciousness, like those of Mark Zuckerberg and Priscilla Chan, the Bill and Melinda Gates Foundation, Paul Allen, Pierre and Pam Omidyar, and the Michael and Susan Dell Foundation.

Tim Berners-Lee, the father of today’s internet, has become increasingly ambivalent about his creation and has recently outlined his plans for a cyber “Magna Carta.”  (Go to http://www.theguardian.com and enter Tim Berners-Lee into the search box.)  Dr. Aiken argues for a global initiative.  She writes, “The United Nations could lead in this area, countries could contribute, and the U.S. could deploy some of its magnificent can-do attitude.  We’ve seen what it has been capable of in the past.  The American West was wild until it was regulated by a series of federal treaties and ordinances.  And if we are talking about structural innovation, there is no greater example than Eisenhower’s Federal-Aid Highway Act of 1956, which transformed the infrastructure of the U.S. road system, making it safer and more efficient.  It’s time to talk about a Federal Internet Act.”

There are already countries that have taken actions from which we can learn.  Ireland has taken the initiative to tackle legal but age-inappropriate content online.  South Korea has been a pioneer in early learning of “netiquette” and in discouraging Internet-addictive behavior.  Australia has focused on solutions to underage sexting.  The EU has created the “right to be forgotten,” to dismantle the archives of personal information online.  Japan has no cyberbullying.  Why?  What is Japanese society doing right?  We need to study this and learn from it.  Antisocial social media needs to be addressed.

What Lies Beneath: The Deep Web

September 14, 2016

“What Lies Beneath:  The Deep Web” is Chapter 8 of “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  Dr. Aiken likens the Deep Web to the pirates of the Caribbean in the seventeenth and eighteenth centuries.  She writes that it is a vast, uncharted sea that cybercriminals navigate skillfully, taking advantage of the current lack of governance and authority—or adequate legal constructs to stop them.  Although cybercriminals can be found anywhere on the Internet, they have a much easier time operating in the murky waters of the darkest and deepest parts.

Almost any kind of criminal activity—extortion, scams, hits, and prostitution—can be ordered up, thanks to well-run websites with shopping carts, concierge hospitality, and surprisingly great customer service.  Cybercriminals are con artists who are expert observers of human behavior, especially cyberbehavior.  They know how to exploit the natural human tendency to trust others, as well as how to manipulate people into giving up their confidential information, in what is called a socially engineered attack.  Regarding identity theft or cyber fraud, it is usually much easier to fool a person into giving you a password than it is to hack it.  This type of social engineering is a crucial component of cybercriminal tactics, and usually involves persuading people to run “free” virus-laden malware or dangerous software by peddling a lot of frightening scenarios (which is called scareware).  Fear sells.

The Deep Web refers to the unindexed part of the Internet.  Dr. Aiken says that it accounts for 96 to 99 percent of content on the Internet.  Most of the content is pretty dull stuff, a combination of spam and storage—U.S. government databases, medical libraries, university records, classified cellphone and email histories.  Just like the Surface Web, it is a place where content can be shared.

What makes the Deep Web different is that content on the Deep Web can be shared without identity or location disclosure, without your computer’s IP address and other common traces.  Since these sites are not indexed they are not searchable by typical browsers like Chrome or Safari or Firefox.  For software that protects your identity, an add-on browser like Tor is one of the most common ways in.  Tor is an acronym for “The Onion Router,” named for its layers of identity-obscuring rerouting.  The Deep Web was first used by the U.S. government, and the protocols for the browser Tor were developed with federal funds so that any individuals whose identity needed to be protected—from counterintelligence agents to journalists to political dissenters in other countries—could communicate anonymously with the government in a safe and secure way.  But since 2002, when the software for Tor became available as a free download, a digital black market has grown there.  This criminal netherworld is populated by terrorist networks, criminal gangs, drug dealers, assassins for hire, and sexual predators looking for images of children and new victims.

Monitoring and policing the Deep Web is a problem because there is almost an infinite number of hiding places, and most illegal sites are in a constant state of relocation to new domains with yet another provisional address.  Many of these sites do not use traceable credit cards or PayPal accounts.  Virtual currencies, such as Bitcoin are the coins of this realm.

Hidden services include crimes for hire and the selling of stolen credit information, or dumps.  McDumpals, one of the leading sites marketing stolen data, has a clever company logo featuring familiar golden arches and a McDumpals mascot, a gangster-cool Ronald McDonald.

Silk Road was an online black market, the first of its kind—offering drugs, drug paraphernalia, computer hacking and forgery services as well as other illegal merchandise—all carefully organized for the shopper.  Ross William Ulbricht ran the Silk Road for 2.5 years.  Silk Road attracted several thousand sellers and more than one hundred thousand buyers.  It was said to have generated more than $1.2 billion in sales revenue.  According to a study in “Addiction,” 18% of drug consumers in the U.S. between 2011 and 2013 used narcotics bought on this site.  The FBI estimated that Ulbricht’s black market had brought him $420 million in commissions, making him, according to “Rolling Stone,” “one of the most successful entrepreneurs of the dot-com age.”

According to the U.S. District Judge who sentenced Ulbricht at his trial, Silk Road created drug users and expanded the market, increasing demand in places where poppies are grown for heroin manufacture.  This black market site had impacted the global market.  The prosecutors alleged that Ulbricht had ordered up and paid for the executions of five Silk Road sellers who had tried to blackmail him or reveal his identity.  Prosecutors traced the deaths of six people who had overdosed on drugs back to Silk Road, and two parents who had lost sons spoke at the trial.  Ulbricht was found guilty of seven drug and conspiracy charges and was given two life sentences, another of twenty years, another of fifteen years, and another of five years, all without the chance of parole.

Shortly after the arrest of Ulbricht and the shutting down of the Silk Road in 2013, Silk Road 2.0 emerged.  Many more copycat sites sprang up, like Evolution, Agora, Sheep, Blackmarket Reloaded, AlphaBay, and Nucleus, which are often referred to as cryptomarkets by law enforcement.

Dr. Aiken goes into the morality of the users of the Deep Web, the psychology of the hacker, and Cyber-RAT (routine activity theory) in more depth than can be related in a blog post.

Cyberchondria and the Worried Well

September 13, 2016

“Cyberchondria and the Worried Well” is chapter 7 of “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  Reports estimate that up to $20 billion is spent annually in the U.S. on unnecessary medical visits.  Dr. Aiken asks how many of these wasted visits are driven by a cyber effect.  A majority of people in a large international survey said they used the Internet to look for medical information, and almost half admitted to making self-diagnoses following a web search.  A follow-up survey found that 83% of 13,373 respondents searched the Internet often for information and advice about health, medicine, or medical conditions.  People in “emerging economies” used online sources for this purpose the most frequently—China (94%), Thailand (93%), Saudi Arabia (91%), and India (90%) led the table of twelve countries.

Dr. Aiken writes that 20 years ago, when people experienced any physical condition that persisted to the point of interfering with their activities, they would visit a doctor’s office and consult a doctor.  In the digital age, people might analyze their own symptoms and play doctor at home.  She notes that about half of the medical information offered on the Internet has been found by experts to be inaccurate or disputed.  HM feels compelled to insert here the conclusion of Ioannidis’s 2005 paper, “Why Most Published Research Findings Are False,” which is still accepted by most statisticians and epidemiologists.  This implies that the online information is similar to the information available in the research world.  And physicians are working with a questionable database, so the problem of accurate research information is real and not an artifact of the internet.  [To learn more about Ioannidis see the following healthy memory blog posts:  “Liberator of Knowledge from Tyranny of Profit,” “Thinking 2.0,” “Most Published Research Findings are False,” and “The Problem with Scientific Journals, Especially Elite Ones.”]

There are also online support groups such as the website MDJunction.com.  These groups do provide a place where thousands meet every day to discuss their feelings, questions, and hopes with like minded friends.  Although these places provide support, they might not be the best sources of information.  And MDJunction.com does have a fine print disclaimer at the bottom of the page—“The information provided in MDJunction is not a replacement for medical diagnosis, treatment, or professional medical advice.”

The term “cyberchondria” was first coined in a 2001 BBC News report that was popularized in a 2003 article in “Neurology, Neurosurgery and Psychiatry,” and later supported by an important study by Ryen White and Eric Horvitz, two research scientists at Microsoft, who wanted to describe an emerging phenomenon engendered by new technology—a cyber effect.  In the field of cyberpsychology, cyberchondria is defined as “anxiety induced by escalation during health-related search online.”

The term “hypochondria” has become outdated due to the Fifth Edition (DSM-5) of the “Diagnostic and Statistical Manual of Mental Disorders.”  About 75% of what was previously called “hypochondria” is now subsumed under a new diagnostic concept called “somatic symptom disorder,” and the remaining 25% is considered “illness anxiety disorder.”  Together these conditions are found in 4 to 9% of the population.

Most doctors regard people with these disorders as nuisances who take up space and time that could be devoted to truly sick people who need care.  And when a doctor informs such patients that they do not have a diagnosable condition, they become frustrated and upset.

Conversion disorders are what were once called “hysterical conditions,” which formerly went by such names as “hysterical blindness” and “hysterical paralysis”; these have been renamed “functional neurological symptom disorder.”  Factitious disorder, formerly called “Munchausen syndrome,” is a psychiatric condition in which patients deliberately produce or falsify symptoms or signs of illness for the principal purpose of assuming the sick role.

Iatrogenesis is a term from Greek roots meaning “brought forth by the healer”; it refers to an illness brought forth by the healer.  It can take many forms, including an unfortunate drug effect or interaction, a surgical instrument malfunction, medical error, or pathogens in the treatment room.  A study in 2000 reported that it was the third most common cause of death in the United States, after heart disease and cancer.  So having an unnecessary surgery or medical treatment of any kind means taking a big gamble with your life.

In 1999 the estimate was between 44,000 and 98,000 deaths annually in the United States, when the Institute of Medicine issued its famous report, “To Err is Human.”  HM is proud to note that one of his colleagues, Marilyn Sue Bogner, was a pioneer in this area of research.  The first edition of her book “Human Error in Medicine” predated the IOM report.  In 2003 she published “Misadventures in Health Care:  Inside Stories.”  Unfortunately, she has recently passed away.  And, unfortunately, matters seem to be getting worse.  In 2009 the estimate of deaths due to failures in hospital care rose to 180,000 annually.  In 2013 the estimates ran between 210,000 and 440,000 hospital patients in the United States dying each year as a result of a preventable mistake.  Dr. Aiken believes that part of this escalation is due to the prevalence of Internet medical searches.

So we have a difficult situation.  Cyberspace has erroneous information, but the underlying medical research also contains erroneous information, and doctors are constrained by these limitations.  We should be aware of these limitations and be cognizant that the diagnosis and recommended treatment might be wrong.  The best advice is to solicit multiple independent opinions and to always be aware that “do nothing” is also an option.  And it could be an option that will save your life.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Cyber Romance

September 12, 2016

Cyber Romance is Chapter 6 in “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  This chapter looks at the ways cyber effects are shifting mating rituals and romance.  Romantic love manifests itself in its expression in the brain.  The left dorsal anterior cingulate cortex (dACC) becomes active, as well as the insula, caudate, amygdala, nucleus accumbens, temporo-parietal junction, posterior cingulate cortex, medial prefrontal cortex, inferior parietal lobule, precuneus, and temporal lobe.

Dr. Aiken discusses a paradox known as the “stranger on the train syndrome.”  This refers to people feeling more comfortable disclosing personal information to someone they may never meet again.  She also mentions the cyber effects of online disinhibition and anonymity.  We feel less at risk of being hurt by a partner who has not seen us in real life.  An urgent wish to form a bond might induce us to disclose intimate details of our lives without much hesitation.  The risks of doing this should be obvious.  Moreover, oversharing and confessing online, revealing too much personal information to a potential love interest, doesn’t help predict compatibility the way it might in the real world.

Communications expert Joseph Walther describes hyperpersonal communication as a process by which participants eagerly seek commonality and harmony.  The getting-to-know-you experience is thrown off-kilter.  The two individuals—total strangers really—seek similarities with each other rather than achieving a more secure bond that will allow for blunt honesty or clear-eyed perspective.  When we are online, free of face-to-face contact, we can feel less vulnerable and not “judged.”  This can feel liberating but be dangerous.  Dr. Aiken does not comment as to whether the use of visual media, such as Skype, might mitigate this problem.  But she does say that dating online involves four selves—two real-world selves and two cyber ones.

Relying on normal, as opposed to cyber, instinct can lead vulnerable individuals into true danger.  A woman who meets a man in a bar might never consider accepting a ride with him after only one encounter.  Yet that same woman, after only a few days of interacting through email and texts with a man she’s met on an online dating site, may fire off her address because she feels such a strong connection with him.

Dr. Aiken cites a February 2016 report by the U.K. National Crime Agency (NCA) of a sixfold increase in online-dating-related rape offenses over the previous five years.  The team analyzing the findings presented potential explanations, including that people feel disinhibited online and engage in conversations that quickly become sexual in nature, which can lead to “misdirected expectation” on the first date.  Seventy-one percent of these rapes took place on the first date, in either the victim’s or the offender’s residence.  The perpetrators of these online date-rape crimes did not seem to fit the usual profile of a sex offender; that is, a person with a criminal history or previous conviction.  So we don’t fully understand the complexity of online dating and associated sexual assault, but the cyber effects of syndication and disinhibition are clearly important.  The NCA offers the following helpful advice for online daters:

Meet in public, stay in public
Get to know the person, not the profile
Not going well?  Make excuses and leave.
If you are sexually assaulted, get help immediately.

Nevertheless, the online dating industry has been successful.  The industry was profitable almost immediately.  By 2007, online dating was bringing in $500 million annually in the U.S., and that figure had risen to $2.2 billion by 2015, when match.com turned twenty years old.  By then, the website claimed to have helped create 517,000 relationships, 92,000 marriages, and 1 million babies.

When others make assumptions about you based on your profile photo, and when you filter, fix, and curate that photo to shape those assumptions, what is at work is impression management.  The mere act of choosing a picture to use on a dating site—active, smiling, unblemished, or nostalgic—requires that you imagine how you look to others and aim to enhance that impression.

Here are some impression-management tips for your profile photo.

Wear a dark color
Post a head-to-waist shot
Make sure that the jawline has a shadow (but no shadow on hair or eyes)
Don’t obstruct the eyes (no sunglasses)
Don’t be overtly sexy
Smile and show your teeth (but please no laughing)
Squinch

If you don’t know how to squinch, here are some tips:
“It is a slight squeezing of the lower lids of the eyes, kind of like Clint Eastwood makes in his Dirty Harry movies, just before he says, “Go ahead.  Make my day.”  It’s less than a squint, not enough to cause your eyes to close or your crow’s feet to take over your face.”  If you want a tutorial on how to produce the perfect one, Dr. Aiken recommends one by professional photographer Peter Hurley, available on YouTube, called “It’s All About the Squinch.”

Another risk in cyberspace is identity deception.  People can make up identities that they present in cyberspace.  There have always been tricksters, con artists, and liars who pretend to be somebody they aren’t.  Technology has now made this so much easier.

Dr. Aiken also warns about narcissists in cyberspace.  Narcissists need admiration, flattery, loads of attention, plus an audience.  The problem is that given the way they ooze confidence and cybercharm, it may be harder to spot them—and know to stay away.  Here is a mini-inventory of questions to ask yourself:

*Do they always look amazing in their photos?
*Are they in almost all of their photos?
*Are they in the center of their group photos?
*Do they post or change their profile constantly?
*When they post an update, is it always about themselves?

There is also a topic called Cyber-Celibacy.  A government survey in Japan estimated that nearly 40% of Japanese men and women in their twenties and thirties are single, not actively in a relationship, and not really interested in finding a romantic partner either.  Relationships were frequently described as bothersome.  The estimate is that, if current trends continue, Japan’s population will have shrunk by more than 30% by 2060.  Do not make the mistake of assuming that the explosion of virgins is restricted to Japan.

Dr. Aiken provides more material than can be summarized in this blog.  The bottom line warning for Cyber Romance is the same as it is for all activities in cyberspace, be careful and proceed cautiously.

Teenagers, Monkeys, and Mirrors

September 11, 2016

“Teenagers, Monkeys, and Mirrors” is chapter 5 in “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  This post will say nothing about monkeys and mirrors.  To read about monkeys and mirrors in this context you will need to get your own copy of the book.

Humanistic psychologist Carl Rogers did valuable research into how a young person develops identity.  He described self-concept as having the following three components:
The view you have of yourself—or “self-image.”
How much value you place on your worth—or “self-esteem.”
What you wish you were like—or “the ideal self.”

Carl Rogers lived long before the creation of cyberspace.  Were he alive today it is likely he would have added a fourth aspect of “self.”  Dr. Aiken calls this “the cyber self”—who you are in a digital context.  This is the idealized self, the person you wish to be, and therefore an important aspect of self-concept.  It is a potential new you that now lives in a new environment, cyberspace.  Increasingly, it is the virtual self that today’s teenager is busy assembling, creating and experimenting with.  The ubiquitous selfies ask a question of their audience:  Like me like this?  Dr. Aiken asked the question, which matters the most: your real-world self or the one you’ve created online?  Her answer is probably the one with the greater visibility.

Adolescents are preoccupied with creating their identity.  The psychologist Erik Erikson described this period of development between the ages of twelve and eighteen as a state of identity versus role confusion, when individuals become fascinated with their appearance because their bodies and faces are changing so dramatically.  So this narcissistic behavior  is considered a natural part of development and is usually outgrown. However, in this age of cyberspace fewer young adults are moving beyond their narcissistic behavior.  A study of U.S. college students found a significant increase in scores on the Narcissistic Personality Inventory between 1982 and 2006.

Plastic surgery is another area that has been impacted by technology.  The easy curating of selfies is likely linked to a rise in plastic surgery.  According to a 2014 study by the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS), more than half of the facial surgeons polled  reported an increase in plastic surgery for people under thirty.  Surgeons have also reported that bullying is also a cause of children and teens asking for plastic surgery.  This is usually a result of being bullied rather than a way to prevent it.

Another problem is Body Dysmorphic Disorder (BDD).  Individuals with BDD are obsessed with imagined or minor defects, and this belief can severely impair their lives and cause distress.  They are completely convinced that there is something wrong with their appearance, and no matter how reassuring friends and family, or even plastic surgeons, may be, they cannot be dissuaded.  In some cases, they can be reluctant to seek help due to extreme and painful self-consciousness.  But if left untreated, BDD does not often improve or resolve itself; it becomes worse over time and can lead to suicidal behavior.

Dr. Aiken notes that Mark Zuckerberg and his wife Priscilla Chan have pledged to donate 99% of their Facebook shares to the cause of human advancement.  That represented about $45 billion at Facebook’s valuation at the time.  She respectfully suggests that all of this money be directed toward human problems associated with social media.

Dr. Aiken notes that eighty years ago the American philosopher and social psychologist George Herbert Mead had something to say about how we think about ourselves—and express who we are—that has special relevance today.  Mead studied the use of first-person pronouns as a basis for describing the process of self-reflection.  How we use “I” and “me” demonstrates how we think of self and identity.  There is “I”, and there is “me”.  Using “I” shows that the speaker has a conscious understanding of self on some level; “I” speaks directly from that self.  The use of “me” requires understanding the individual as a social object; to use “me” means figuratively leaving one’s body and viewing oneself as a separate object.  “I” seems to have been lost in cyberspace.  The selfie is all “me”.  It is an object—a social artifact that has no deep layer.  Dr. Aiken writes, “This may explain why the expressions on the faces of selfie subjects seem so empty.  There is no consciousness.  The digital photo is a superficial cyber self.”

Dr. Aiken advises doing what you can to pull kids back to “I” and not letting them drift to “me.”  This is strengthened by conversations such as the following:

*Ask them about their real-world day, and don’t forget to ask them about what’s happening in their cyber life.

*Tell them about risks in the real world, accompanied by real stories—then tell them about evolving risks online and how to not show vulnerability.

*Talk about identity formation and what it means—distinguishing between the real-world self and the cyber self.

*Talk about body dysmorphia, eating disorders, body image, and self-esteem—and the ways their technology use may not be constructive.

*Tell your girls not to allow themselves to become a sex object—and tell your boys not to treat girls as objects online—or anywhere else.

HM is often envious of the technology available to today’s youth.  And he is envious of cyberspace, with the exception of the difficulties created by the perverse ways the technology is being used, which exacerbate the transition through adolescence.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Frankenstein and the Little Girl

September 10, 2016

“Frankenstein and the Little Girl” is Chapter 4 in “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  Frankenstein refers to online search.  This chapter examines the online lives of children four to twelve years old.  This is the age group that is most vulnerable on the Internet in terms of risk and harm.  This age group is naturally curious and wants to explore.  They are old enough to be competent with technology, and in some cases, extremely so.  But they aren’t old enough to be wary of the online risks and don’t yet understand the consequences of their behavior there.
The psychologist John Suler has said, “You wouldn’t take your children and leave them alone in the middle of New York City, and that’s effectively what you’re doing when you allow them in cyberspace alone.”

According to the journal “Pediatrics” 84% of U.S. children and teenagers have access to the internet on either a home computer, a tablet, or another mobile device.   More than half of US children who are eight to twelve have a cellphone.  A 2015 consumer report shows that most American children get their first cellphone when they are six years old.

There are some benefits and research has shown a positive relation between texting and literacy.  And there is an enormous amount of good material on the web.  However, some developmental downsides of persistent and pervasive use of technology are apparent.  Jo Heywood, a headmistress of a private primary school in Britain has made the observation, which is shared by other educators, that children are starting kindergarten at five and six years old with the communications skills of two- and three-year olds, presumably because their parents or caregivers have been “pacifying” them with iPads rather than talking to them.  Moreover, this is seen in children from all backgrounds, both disadvantaged and advantaged.

A national sample of 442 children in the United States between the ages of eight and twelve were asked how they spent their time online.  Children from eight to ten spent an average of forty-six minutes per day on the computer.  Children from eleven to twelve spent an average of one hour and forty-six minutes per day on the computer.

When asked what kinds of sites they visited, YouTube dominated significantly, followed by Facebook, and game and virtual-world play sites—Disney, Club Penguin, Webkinz, Nick, Pogo, Poptropica, PBS Kids, and Google.  Why is Facebook on this list?  You are supposed to be thirteen years old to activate an account.  One quarter of the children in the US study reported using Facebook even though it is a social network meant for teenagers and adults.  According to “Consumer Reports,” twenty million minors use Facebook, and 7.5 million of these are under thirteen.  These underage users access the site by creating a fake profile, often with the awareness and approval of their parents.

Cyberbullying is an ugly topic that has received coverage in the popular press.  Cyberbullying has resulted in suicides.  Dr. Aiken notes the existence of bystander apathy in these events; few, if any, seem to come to the aid of those being bullied.  In a poll conducted in 24 countries, 12% of parents reported their child had experienced cyberbullying, often by a group.  A U.S. survey by “Consumer Reports” found that over the previous year 1 million children had been “harassed, threatened, or subjected to other forms of cyberbullying on Facebook.”

It appears that the younger you are, the more friends you have.  In a 2014 study of American users on Facebook, for those sixty-five years old, the average number of friends is 102.  For those between forty-five and fifty-four years old, the average is 220.  For those twenty-five to thirty-five years old, the average is 360.  For those eighteen to twenty-four, the average is 649.  Dare we extrapolate to younger age groups?  Dunbar’s number has been discussed in previous healthy memory blog posts.  It is based on the size of the average human brain and holds that the number of social contacts or “casual friends” with whom an average individual can maintain social relationships is around 150.

“Be a Cyber Pal” was conceived as an antidote to cyberbullying; it was about actively being a kind, considerate, supportive, and loyal friend.  It is cause for hope that it became the most downloaded poster of the campaign that year.  Dr. Aiken thinks the positive message gave teachers and families something that is easier to talk about.

Dr. Aiken is developing an approach she calls the math of cyberbullying, which uses digital forensics to identify both victims and perpetrators.  She is working with a tech company in Palo Alto to apply this algorithm to online communication.

She discusses pornography, which she terms The Elephant in the Cyber Room.

Let me conclude by presenting a four-point approach developed by a panel of experts to protect children online.

1.  Using technical mediation in the form of parental control software, content filters, PIN passwords, or safe search, which restricts searching to age-appropriate sites.
2.  Talking regularly to your children about managing online risks.
3.  Setting rules or restrictions around online access and use.
4.  Supervising your children when they are online.

Cyber Babies

September 9, 2016

“Cyber Babies” is Chapter 3 in Dr. Aiken’s new book “The Cyber Effect.”  She begins by relating a story of when she was traveling on a train and watching a mother feed her baby.  The mother held the baby’s bottle in one hand and a mobile phone in the other, her head bent toward the screen.  The mother looked exclusively at her phone while the baby fed.  The baby gazed upward, as babies do, looking adoringly at the mother’s jaw, while the mother gazed adoringly at her phone.  The feeding lasted about 30 minutes, and the mother did not once make eye contact with the infant or pull her attention from the screen of her phone.  Dr. Aiken was appalled, as eye contact between baby and mother is quite important for the development of the child.  She mentioned that parents frequently ask her at what age it is appropriate for a baby to be introduced to electronic screens.  She agrees that this is an important question but asks the parents first to think about this question:  What is the right age to introduce your baby to your mobile phone use?

She elaborates on the importance of face time with a baby.  They need the mother’s eye contact.   They need to be talked to, tickled, massaged, and played with.  She writes that there is no study of early childhood development that doesn’t support this.

She continues, “By experiencing your facial expressions—your calm acceptance of them, your love and attention, even your occasional groggy irritation—they thrive and develop.  This is how emotional attachment style is learned.  A baby’s attachment style is created by the baby’s earliest experiences with parents and caregivers.”  She further notes, “A mother and her child need to be paying attention to each other.  They need to engage and connect.  It cannot be simply one-way.  It isn’t just about your baby bonding with you.  Eye contact is also about bonding with your baby.”

In a 2014 study in the journal “Pediatrics,” fifty-five caregivers with children were observed in fast-food restaurants; forty of the caregivers used mobile devices during the meal, and sixteen used their devices continuously, with their attention directed primarily at the device and not the children.  Dr. Aiken wishes that the following warning be placed on mobile phones:  “Warning:  Not Looking at Your Baby Could Cause Significant Delays.”

She devotes considerable space to products that promise early childhood development, such as Baby Einstein.  Very little, if any, research has gone into the development of these products, and evaluations of them provide no evidence that they are effective.

The research is clear that the best way to help a baby learn to talk or develop any other cognitive skill is through live interaction with another human being.  Videos and television shows have been shown to be ineffective for learning prior to the age of two.  A study of one thousand infants found that babies who watched more than two hours of DVDs per day performed worse on language assessments than babies who did not watch DVDs.  For each hour of watching a DVD, babies knew six to eight fewer words than babies who did not watch DVDs.  She does note that quieter shows with only one story line, such as “Blue’s Clues” and “Teletubbies,” can be more effective.  Still, babies learn best from humans and not machines.

Some early-learning experts believe there is a connection between ADHD and screen use in children.  ADHD is now the most prevalent psychiatric illness among children and teenagers in America.  The number of young people being treated with medication for ADHD grows every year.  More than ten thousand toddlers, ages two and three, are among the children taking ADHD drugs, even though prescribing these falls outside any established pediatric guidelines.

Dr. Aiken offers the following ideas for parents, pending more guidance and information on proper regulation:

Don’t use a digital babysitter or, in the future, a robot nanny.  Babies and toddlers need a real caregiver, not a screen companion, to cuddle and talk with.  There is no substitute for a real human being.

Because your baby’s little brain is growing quickly and develops through sensory stimulation, consider the senses—touch, smell, sight, sound.  A baby’s early interactions and experiences are encoded in the brain and will have lasting effects.

Wait until your baby is two or three years old before they get screen time.  And make a conscious decision about the screen rules for them, taking into account that screens could be affecting how your child is being raised.

Monitor your own screen time.  Whether or not your children are watching, be aware of how much your television is on at home—and whether the computer screen is always glowing and beckoning.  Be aware of how often you check your mobile phone in front of your baby or toddler.

Understand that babies are naturally empathetic and can be very sensitive to emotionally painful, troubling, or violent content.  Studies show that children have a different perception of reality and fantasy than adults do.  Repetitive viewings of frightening or violent content will increase retention, meaning they will form lasting unpleasant memories.

Don’t be fooled by marketing claims.  Science shows us that tablet apps may not be as educational as claimed and that screen time can, in fact, cause developmental delays and may even cause attention issues and language delays in babies who view more than two hours of media per day.

Put pressure on toy developers to support their claims with better scientific evidence and new studies that investigate cyber effects.