Archive for December, 2019

An Extremely Important New Year’s Resolution

December 31, 2019

One resolution should be to build a healthy memory through healthy practices, most importantly growth mindsets. Growth mindsets require new learning and the development of critical thinking. Both involve Kahneman’s System 2 processing, more commonly known as thinking. Moreover, it has become apparent this year that the development of healthy memories is essential to the maintenance of a healthy country.

Consider the following message from Karl Rove, senior advisor to George W. Bush in 2004:
“[You] in what we call the reality-based community…believe that solutions emerge from your judicious study of discernible reality. That’s not the way the world really works anymore. We create our own reality.”

and the following message from Kellyanne Conway, counselor to President Donald Trump in 2017:
“You’re saying it’s a falsehood. And they’re giving…our press secretary gave alternative facts.”

So, what is a good path to a healthy memory? Perhaps the best place to start is the Constitution of the United States. It appears that too few citizens are familiar with the Constitution. What is more frightening is that many people in Congress either do not know the Constitution or disregard it, and are behaving in a manner contrary to it that puts our democracy at risk. The name of our species is Homo sapiens, which means wise man. Too many humans are not living up to the name of their species.

Even those who have read the Constitution should remember that some people make its study their life’s work. Still, even expertise in the Constitution is insufficient; critical thinking is also needed.

The following aphorism is attributable to at least Daniel Patrick Moynihan and Thomas Jefferson: you’re entitled to your own opinions and your own fantasies, but not your own facts—especially if your fantastical facts hurt people.

Understand that Kellyanne Conway was not offering alternative facts; there was no evidence underlying her “facts.” This is a further way the water has been poisoned: claims are offered as facts when no evidence supports them. And all too often what is offered as evidence is in truth a fabrication.

Very often it is difficult to determine what to believe. This is certainly true in scientific investigations, where research may go on for decades or even centuries before a consensus is achieved. Even after a consensus is accepted, scientists should remain open to a new theory if more evidence or a more comprehensive theory is offered.

Critical thinking is hard. Believing is much, much easier. The advance of mankind was very slow until the scientific method was developed that challenged beliefs and offered empirical evidence as an alternative. Technology is the result of this science. Perhaps it is a tad ironic that a product of the scientific method, the internet, is a tool for promoting disinformation and false beliefs.

There are a few keys one can employ to facilitate critical thinking. Certain behaviors indicate which sources, be they individuals or publications, should be completely ignored. One is claiming that information is false without offering alternative explanations supported by facts. Crying conspiracy or witch hunt is another tool used by totalitarian dictators. Similarly, refusing to allow access to individuals or documents suggests underlying guilt. Personal insults do not disguise the fact that a legitimate factual response is impossible.

The following passage comes from Hannah Arendt’s book, The Origins of Totalitarianism:
“A mixture of gullibility and cynicism had been an outstanding characteristic of mob mentality before it became an everyday phenomenon of masses. In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and that nothing was true…Mass propaganda discovered that its audience was ready at all times to believe the worst, no matter how absurd, and did not particularly object to being deceived because it held every statement to be a lie anyhow. The totalitarian mass leaders based their propaganda on the correct psychological assumption that, under such conditions, one could make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable evidence of their falsehood, they would take refuge in their cynicism; instead of deserting the leaders who had lied to them, they would protest that they had known all along that the statement was a lie and would admire the leaders for their superior tactical cleverness.”
Arendt published The Origins of Totalitarianism in 1951, when Stalin was in power and Hitler only six years gone.

The following is taken from FANTASYLAND: HOW AMERICA WENT HAYWIRE: A 500-YEAR HISTORY by Kurt Andersen:
“The seven centuries of Greek civilization are divided into three eras—the Archaic, then the Classical, then the Hellenistic. During the first, the one depicted by Homer, Greeks’ understanding of existence defaulted to supernaturalism and the irrational. Then suddenly science and literature and all the superstar geniuses emerged—Aeschylus, Sophocles, Euripides, Socrates, Plato, Aristotle—in the period we canonize as “ancient Greece.” But that astonishing era lasted less than two centuries, after which Athens returned to astrology and magical cures and alchemy, the end. Why? According to The Greeks and the Irrational, by the Oxford classicist Eric Dodds, it was because they finally found freedom too scary, frightened by the new idea that their lives and fates weren’t predestined or managed by gods and they really were on their own. Maybe America’s Classical period also lasted two centuries, 1800 to 2000, give or take a few decades on each end.”

So, for all who care about the United States, please engage your critical thought processes and build a growth mindset. This will benefit not only your memory, but also the survival of democracy in the United States.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The Healthymemory Blog is Going on a Hiatus

December 21, 2019

Happy Holidays!

The Healthymemory Blog is Going on a Hiatus

If you have not read the immediately preceding posts on Clint Watts’ excellent book, Messing with the Enemy, it is strongly recommended that you do so. The book is highly relevant to the crisis in which the United States finds itself. Or you can look for topics in the search box at healthymemory.wordpress.com. Or go to the following website:
centerhealthyminds.org.

The healthymemory blog shall return in time for New Year’s Eve.

Surviving in a Social Media World

December 20, 2019

The title of this post is the title of the final chapter in Messing with the Enemy, an excellent book by Clint Watts. Go to Starbucks or any public space. The customers’ heads are down, peering at smartphones; rarely do eyes meet. Customers might stand in the same line with the same people hundreds of times each year and never utter a word or even remember each other’s faces.

HM attends professional conferences on psychonomics (cognitive psychology) and those of the American Psychological Association, the American Psychological Society, and the Human Factors and Ergonomics Society. Professionals come from all over the world to attend these conferences and to learn from other professionals with shared interests. HM sees groups of people sitting together, peering down at their smartphones. During talks, many are not looking at the speaker or the slides but are peering down at their smartphones.

Robert Putnam, in his book Bowling Alone: The Collapse and Revival of American Community, further defined the social capital concept pioneered by Tocqueville, dividing it into two types: bonding and bridging. Bonding capital involves Americans associating with people similar to themselves. Bridging capital comes when we make friendships and associations with people unlike ourselves. Putnam argued that these two types of capital, when combined, power American democracy. The decline of bridging capital that is occurring signals an ominous future for the United States.

After publication of this book, Putnam not only defended his thesis, but worked to identify solutions for increasing American social capital. In 2001 his Social Capital Community Benchmark Survey sought to discover approaches for increasing social capital but instead revealed more troubling indicators for American society. The study noted: “Our survey results make clear the serious challenges of building social capital in a large, ethnically diverse community. The more diverse a community in our study, the less likely its residents are: to trust other people…to connect with other people, even informally…to participate in politics…to connect across class lines.”

Watts writes, “Democracy dies in preference bubbles. That’s it, there’s no way for Americans to communicate, debate, compromise, and thrive as these bubbles diverge and insulate themselves from challengers. The United States, if it stays on this trajectory, ultimately may not endure. I’ve explored social media preference bubbles in great detail, but they drive physical-world preference bubbles as well. We all increasingly live in places where we walk like, talk like, and look like one another. Members of the same social media preference bubbles move to places where they can reside with like-minded people who share the values, ethnicity, identity, and lifestyle of their social media nationalism. The Islamic State, while seen as extreme in the West, provides an early example of this phenomenon. Social-media-induced fantasies led young Muslims, entire families of women and children, to voluntarily move to a war zone in Syria and Iraq—the digital tail wagged the physical dog.”

Watts writes, “I’ve offered some thoughts on how the U.S. government can protect Americans against Russian interference, but the threat to democracy comes not from Russia but from America. The U.S. government will not save Americans from their preference bubbles, and since the election we’ve seen not just Russian active measures attempting to destroy our democracy, but American active measures tearing down our institutions. It will take Americans fighting for their own democracy to fend off the social media manipulators, the hidden core, who seek to herd them and coalesce them into a movement outside of their control and only partly of their own design. Public and civil society must come together, leaders must emerge, and civil society must be rebuilt—on the ground, not online.”

Watts says that retired General Stanley McChrystal’s recommendation of national service, beyond the military, would be an excellent way to bring citizens together through common cause and shared values. Here HM strongly concurs. HM was drafted and served two years. Initially he regarded this as a burdensome obligation, but it turned out to be, perhaps, the most rewarding two years of his life. HM worked for NCOs who were black. One of his best friends was a poor white from Louisiana. He was so poor that he had plates of artificial teeth: when inducted, his teeth were so rotten that they all needed to be removed. This was not an uncommon experience for new draftees. Absent the draft, HM’s chances of meeting, much less befriending, such individuals were virtually nil. Watts writes, “Ultimately real-world physical relationships will be the only way to defeat the online troll armies tearing democracies apart.”

Watts and his colleagues have proposed that the equivalent of Consumer Reports be created for social media feeds. This Information Consumer Reports would be an independent, nongovernmental rating agency that evaluates news outlets across all types of media during a rating period. Outlets would receive marks based on their performance as assessed on two principal axes: fact versus fiction in the content they produce, and subjective opinion versus objective reporting.
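To make the two-axis idea concrete, here is a minimal sketch of how such a rating might be represented in code. It is purely illustrative: the field names, the 0-to-1 scale, and the letter-grade cutoffs are invented here and are not part of Watts’ proposal.

```python
# Hypothetical sketch of the two-axis rating Watts describes; the field
# names, scale, and grade cutoffs are invented for illustration only.
from dataclasses import dataclass

@dataclass
class OutletRating:
    name: str
    factual: float    # axis 1: 0.0 = pure fiction, 1.0 = fully factual
    objective: float  # axis 2: 0.0 = pure opinion, 1.0 = objective reporting

    def grade(self) -> str:
        """Collapse the two axes into a single letter grade."""
        score = (self.factual + self.objective) / 2
        if score >= 0.8:
            return "A"
        if score >= 0.6:
            return "B"
        return "C"

print(OutletRating("Example Daily", factual=0.9, objective=0.7).grade())  # "A"
```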

Watts notes that Finland fought Soviet disinformation for years, and Russian resurgence in this space led the Finns to develop a coordinated plan and trained personnel to deflect propaganda. They have also invested heavily in good public education, equipping their citizens not only to assess incoming information, but also to recognize falsehoods, because they understand how their own government institutions and processes work. Americans enraged by WikiLeaks dumps, shouting claims of corruption or collusion, actually know little about the operation of the branches of government and the electoral process. Civics classes alone could enable Americans to better spot falsehoods.

Watts also writes, “Social media users can take several steps to survive in the modern social media world. First, and above all, ask whether the benefits of using social media outweigh the costs, and even if the answer to that question is yes, try to use social media less.”

HM blog readers should recognize this as a recommendation repeatedly offered in this blog.

From Preference Bubbles to Social Inception

December 18, 2019

The title of this post is identical to half of a chapter title in Messing with the Enemy, an excellent book by Clint Watts. The second half of the title is “The Future of Influence.” In previous posts HM has mentioned the tremendous optimism regarding the internet that was expressed in this blog when it began in 2009. Physical boundaries no longer mattered. People passionate about chess, cancer research, or their favorite television shows could find like-minded enthusiasts around the world wanting to share their thoughts and experiences. Those under oppressive regimes, denied access to information and the outside world, could leverage the web’s anonymity to build connections, share their experiences, and hope for a better world, either at home or elsewhere. All these sources of knowledge became widely available for those with growth mindsets.

Unfortunately, hackers and cybercriminals were some of the first actors to exploit the internet in pursuit of money and fame. Hate groups and terrorists found the internet an anonymous playground for connecting with like-minded people. There might be only a handful of extremists, or possibly only one, in any given town, but with the internet there were now hundreds or even thousands, and internet connections facilitated the physical massing of terrorists in global safe havens or remote compounds.

The internet provided a virtual safe haven for bin Laden’s al-Qaeda, allowing a small minority of Muslims inclined to jihadi extremism to connect with like-minded supporters. As counterterrorists searched the earth for al-Qaeda’s head shed, the internet provided enough cover, capacity, and space for the terror group to survive physically by thriving virtually. Watts writes, “This made al-Qaeda bigger, but not necessarily better—more diffuse and elusive, but vulnerable to fissures and difficult to manage.”

Watts writes, “My experiences with the crowd—watching the mobs that toppled dictators during the Arab Spring, the hordes that joined ISIS, the counterterrorism punditry that missed the rise of ISIS, and the political swarms duped by Russia in the 2016 presidential election—led me to believe that crowds are increasingly dumb, driven by ideology, desire, ambition, fear, and hatred, or what might collectively be referred to as “preferences.”

Social media amplifies confirmation bias through the sheer volume of content provided, assessed, and shared. This is further amplified by users’ interactions with their friends, family, and neighbors—people who, more often than not, think like they do, speak like they do, and look like they do.

Watts writes, “Confirmation bias and implicit bias working together pull social media users into digital tribes. Individuals sacrifice their individual responsibility and initiative to the strongest voices in their preferred crowd. The digital tribe makes collective decisions based on groupthink, blocking out alternative viewpoints, new information, and ideas. Digital tribes stratify over time into political, social, religious, ethnic, and economic enclaves. Status quo bias, a preference for the current state of affairs over a change, sets into these digital tribes, such that members must mute dissent or face expulsion from the group. Confirmation, implicit, and status quo bias, on a grand social media scale, harden preference bubbles. These three world-changing phenomena build upon one another to power the disruptive content bringing about the Islamic State and now shaking Western democracies.”

Watts continues, “Clickbait populism—the promotion of popular content, opinions, and the personas that voice them—now sets the agenda and establishes the parameters for terrorism, governance, policy direction, and our future. Audiences collectively like and retweet that which conforms to their preferences. To win the crowd, leaders, candidates, and companies must play to the collective preferences.”

This clickbait populism drives another critical emerging current: social media nationalism. Each year, as social media access increases and virtual bonds accelerate, digital nations increasingly form around online communities whose users have shared preferences.

Watts writes, “Social media nationalism and clickbait populism have led to a third phenomenon that undermines the intelligence of crowds, threatening the advancement of humanity and the unity of democracies: the death of expertise.” Expertise is undermined by those on the internet who ignore facts and construct alternative realities.

Consider two preference bubbles: the ISIS boys and Trump supporters. For the ISIS boys, it was more important to have a caliphate than to do it right, more essential to pursue extreme violence than to effectively govern.

For Trump supporters, it is more important to win than to be correct, more important to be tough than to compromise and move forward. They appear to be living in an alternative reality that disdains factual information. The Republican Party can be regarded as one big preference bubble. To be fair, one might argue that the Democratic Party should also be regarded as a preference bubble, but one does not find there the unanimity created in a true preference bubble.

Postmortem

December 18, 2019

The title of this post is identical to the title of a chapter in Messing with the Enemy, an excellent book by Clint Watts. The postmortem on Russia’s influence and meddling in the presidential election of 2016 may never end. Trump was completely unconventional, uninformed, and unlikable in so many ways, and yet he had become the leader of the free world. Fake news entered the American lexicon, and Watts’ pre-election detailing of Russian active measures on the internet became the subject of hot debate. Had fake news swayed the U.S. presidential election?

Social media companies began digging into the data. What they found spelled dangerous trends for democracy. Americans were increasingly getting their news and information from social media instead of mainstream media, and users were not consuming factual content. Fake news—false or misleading stories from outlets of uncertain credibility—was being read far more than content from traditional newsrooms. EndTheFed.com and Political Insider produced four of the five most-read false news stories in the three months leading up to the election. One story falsely claimed that Pope Francis had endorsed Donald Trump; another falsely claimed that Hillary Clinton’s emails hosted on WikiLeaks certified her as an ISIS supporter. Throughout December, fears of Russian election manipulation grew, and each day brought more inquiries into how Russia had trolled for Trump.

The American electorate remains divided, government operations are severely disrupted, and faith in elected leaders continues to fall. Apparently, the objectives of Russia’s active measures have been achieved. Watts concludes that Americans still don’t grasp the information war Russia perpetrated against the West, why it works, and why it continues.

Watts writes, “The Russians didn’t have to hack election machines; they hacked American minds. The Kremlin didn’t change votes; it won them, helping tear down its less-preferred candidate, Hillary Clinton, to promote one who shares their worldviews, Donald Trump.”

Watts continues, “Americans’ rapid social media consumption of news creates a national vulnerability for foreign influence. Even further, the percentage of American adults fifty and older utilizing social media sites is one of the highest in the world, at 50%. Younger Americans, aged eighteen to thirty-four, sustain a utilization rate of about 80%.” Deeper analysis by the Pew Research Center shows that U.S. online news consumers still get their information from news organizations more than from their friends, but they believe the friends they stay in touch with on social media applications provide information that is just as relevant.

A look at the Columbia Journalism Review’s media map demonstrates how social media encouraged information bubbles for each political leaning. Conservatives strongly centered their consumption around Breitbart and Fox News, while liberals relied on a more diverse spread of left-leaning outlets. For a foreign influence operation like the one the Russians ran against the United States, the highly concentrated right-wing social media landscape is an immediate, ripe target for injecting themes and messages. The American left is more diversely spread, making message targeting more difficult.

The Internet Research Agency in St. Petersburg, Russia, bought $4,700 in advertising and, through eighteen channels, hosted more than 1,000 videos that received more than 300,000 views.

The Russians created a YouTube page called Williams and Kalvin. The page’s videos showcased two black video bloggers, with African accents, appearing to read a script claiming that Barack Obama created police brutality and calling Hillary Clinton an “old racist bitch.” The Williams and Kalvin page garnered 48,000 fans. Watts writes, “Russian influence operators employed most every platform—Instagram, Tumblr, even PokemonGo—but it was the Kremlin’s manipulation via Twitter that proved the most troubling.”

Watts concludes that U.S. government resources are needed to fund a truly effective effort. Intelligence agencies, Homeland Security, and the State Department need to rally and coordinate. Rex Tillerson was late in using the $80 million Congress had set aside for counterpropaganda resources, and then used only half of the appropriated amount. This is just a start, and a small one at that, of what America needs to do against Russian influence. The last sentence in this chapter reads, “Kislyak was right, and Putin must still wonder, ‘Why hasn’t America punched back?’”

Putin’s Plan

December 17, 2019

The title of this post is identical to the title of a chapter in Messing with the Enemy, an excellent book by Clint Watts. In the fall of 2015, Russia’s dedicated hacking campaign proved to be unique in history. Unlike criminal hackers, Russia didn’t pursue indiscriminate breaches for financial gain. It sought information from politicians, government officials, journalists, media personalities, and foreign policy experts numbering in the thousands, according to government and media estimates.

The Russians had perpetrated cyberattacks as part of their military campaigns before, notably prior to invading Georgia in 2008, when they defaced and disabled Georgian government websites as part of a psychological warfare campaign. In 2014, a pro-Russian group called CyberBerkut surfaced alongside Kremlin hackers and penetrated Ukraine’s Central Election Commission, altering the nationwide presidential vote in favor of Russia’s preferred candidate, Dmytro Yarosh. Fortunately, the Ukrainians caught the manipulation before the results were aired. Throughout 2015 and 2016, Ukrainian businesses and government agencies suffered endless cyber assaults. The BlackEnergy attack struck power grids of the Ivano-Frankivsk region of Ukraine, disabling electricity during one of the country’s coldest periods. Watts writes, “These attacks, though, sought to damage infrastructure and undermine Eastern European countries through humiliation and confusion. The Russian-connected breaches surfacing in America, though, sought something different.”

Beginning in the late summer of 2015 and extending through the fall, Russia undertook the largest, most sophisticated, most targeted hacking campaign in world history, breaking into the email accounts of thousands of American citizens and institutions. Analysts believe that the cyber offensive was perpetrated by two of Russia’s intelligence agencies: The Main Intelligence Directorate, known as GRU, and the Federal Security Service, known as FSB, which is primarily an internal intelligence arm, but is particularly sophisticated in cyber operations.

The GRU and FSB operatives act as Advanced Persistent Threats (APTs), a reference to their dedicated targeting and a wide array of cyber-hacking techniques. APTs have sufficient resourcing to stay on their targets until they penetrate the systems they want to access. They use a range of techniques, from the simple to the complex, employing all forms of social engineering and specifically tailored malware known as “zero days.”

These Russian APTs were known as APT28 (Fancy Bear) and APT29 (Cozy Bear). They represented competing Russian hacker groups seeking access and compromising information from democratically elected officials adversarial to Russia, media personalities (particularly reporters who interfaced with anonymous sources), military leaders, academic researchers, and policy think tanks studying Russia. In other words, anyone and everyone opposing Russia was targeted in hopes that their private communications, if revealed, would undermine the credibility of a Russian adversary and/or sow divisions and mistrust between the targeted individuals and those they maligned in private.

“Spearphishing” is the most useful and common technique for gaining access to users’ accounts. Messages made to appear legitimate would tell users they needed to sign in to change their username, and users often complied.
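Watts gives no code, of course, but one core deception behind spearphishing is easy to illustrate: the link text a message displays need not match the link’s actual destination. The following minimal Python sketch (with invented hostnames) shows a simple check comparing the two.

```python
from urllib.parse import urlparse

# Purely illustrative: both hostnames below are invented.
display_text = "https://mail.example-bank.com/signin"       # what the email shows
actual_href = "http://mail.example-bank.com.evil-site.ru/"  # where the link really goes

def link_is_suspicious(display: str, href: str) -> bool:
    """Flag a link whose displayed hostname differs from its real hostname."""
    return urlparse(display).netloc != urlparse(href).netloc

print(link_is_suspicious(display_text, actual_href))  # True -> suspicious
```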

In the fall of 2015, the Kremlin election hacking wave began. In September 2015, the Democratic National Committee (DNC) was breached; both Fancy Bear and Cozy Bear breached the DNC in separate attacks. Separately, hackers penetrated the Democratic Congressional Campaign Committee sometime around March or April 2016.

By 2016, Russia had advanced from spearphishing of political parties to “whalephishing” of key political operatives and government officials. Whalephishing targets prominent individuals within organizations or governments whose private communications likely provide a wealth of insight and troves of secrets to propel conspiracies. The chairman of Hillary Clinton’s campaign, John Podesta, proved to be the biggest whale hacked in 2016.

The troll army’s interest in the U.S. presidential election gained steam toward the end of 2015. The following article in Sputnik caught Watts’ eye: “Is Donald Trump a Man to Mend US Relations with Russia?” At the time, Trump’s campaign seemed more a celebrity stunt than a deliberate effort to lead the nation, but the post was curious, given that Russian disdain for both parties and their leaders had historically been a constant.

Watts writes, “From then on, the social media war in America surrounding the election proved unprecedented, and the Russians were there laying the groundwork for their information nuclear strike. Russian state-sponsored media, the English-speaking type, was quite clear: Putin did not want Hillary Clinton to become president. Aggressive anti-Clinton rhetoric from state-sponsored outlets, amplified by their social media trolls, framed Clinton as a globalist, pushing democratic agendas against Russia—an aggressor who could possibly bring about war between the two countries. The trolls’ anti-Clinton drumbeat increased each month toward the end of 2015 and going into 2016.”

Continuing, “Trump’s brash barbs against his opponents were working unexpectedly well. Kicking off 2016, the troll army began promoting candidate Donald Trump with increasing intensity, so much so that their computational propaganda began to distort organic support for Trump, making his social media appeal appear larger than it truly was.”

WikiLeaks released the DNC’s emails. Five days after the WikiLeaks dump of DNC emails, Trump announced, “Russia, if you’re listening, I hope you’re able to find the thirty thousand emails that are missing…I think you will probably be rewarded mightily by our press.” Watts writes, “I watched the clip several times, and a sick feeling settled in my stomach. I’d watched the Russian system push for Trump and tear down Clinton, but up to that point, I hadn’t believed the Trump campaign might be working with the Russians to win the presidency. I’d given briefs on the Russian active measures system in many government briefings, academic conferences, and think tank sessions for more than a year. But nothing seemed to register. Americans just weren’t interested; all national security discussions focused narrowly on the Islamic State’s recent wave of terrorism in Europe. I did what most Americans do when frustrated by politics. I suffered a Facebook meltdown, noting my disbelief that a U.S. presidential candidate would call on a foreign country, one already pushing for his victory, to target and discredit a former first lady, U.S. senator, and secretary of state.”

Watts writes, “By Election Day, allegations of voter fraud and the election being rigged created such anxiety that I worried that some antigovernment and domestic extremists might undertake violence.” But there was no need to worry. Putin’s candidate had won.

Harmony, Disharmony, and the Power of Secrets

December 15, 2019

The title of this post is identical to the title of a chapter in Messing with the Enemy, an excellent book by Clint Watts. In 2002, CIA director George Tenet created the Harmony database as the intelligence community’s central repository for all information gathered during counterterrorism operations. This database served as a single place for bringing together information for the conduct of the emerging field of DOMEX (document and media exploitation). At first, the Harmony database assisted soldiers picking up clues about enemy whereabouts and communications from many different battlefields and helped support the prosecution of alleged terrorists.

An Army major, Steve, saw al-Qaeda’s secrets from a different perspective. He focused on the strategic deliberations of terrorists, their biases and preferences, expense reports, likes and dislikes, and successes and failures, as well as what they thought of one another. In sum, these documents yielded insights into the group’s strategic weaknesses and internal fractures.

Major Steve moved to the Combating Terrorism Center (CTC) at West Point, which offered an interface for the military and government to connect with top experts in the cultures, regions, languages, and politics that challenge effective counterterrorism operations. Major Steve could unlock the Harmony database’s secrets, create an open-source repository for the public, and enlist highly educated military officers stationed at West Point to study and collaborate with top professors around the world. In 2005, the CTC launched the Harmony Program “to contextualize the inner-functioning of al-Qaeda, its associated movement, and other security threats through primary source documents.” In addition to conducting initial research on the materials, the program aimed “to make these sources, which are captured in the course of operations in Iraq, Afghanistan and other theaters, available to other scholars for further study.”

The first study was titled Harmony and Disharmony: Exploiting al-Qaeda’s Organizational Vulnerabilities. The study reviewed employee contracts, which showed that Arab recruits were paid more than African recruits, and that married volunteers with children received more benefits and vacation than single members. The report noted that ineffective terrorists should not be plucked off the battlefield and removed from the network if they can reliably be observed, even if they present easy targets. The report’s justification for this recommendation was pulled from a 1999 email sent by Ayman al-Zawahiri to a Yemeni cell leader in which he scolded a subordinate, saying, “With all due respect, this is not an accounting. It’s a summary accounting. For example, you didn’t write any date, and many of the items are vague.” Watts writes, “Nearly twenty years later, Zawahiri’s letter offers some insights into why terrorists in the ranks sought to defect to ISIS after bin Laden’s death: he was a stickler of a boss.”

The key recommendation from the report follows: “increase internal dissension within al-Qaeda’s leadership.” Communiqués between al-Qaeda subordinates challenged the direction put out by the group’s leaders and questioned whether orders should be obeyed. One member said that faulty leadership held the group back, asserting that bin Laden had rushed “to move without visions,” and asked Khalid Sheikh Mohammed, mastermind of the 9/11 attacks, to reject bin Laden’s orders.

Another study using the Harmony Database found that al-Qaeda, as a military organization, had never been particularly strong, and its success as a media organization masked deep internal divides between its leaders over strategic direction.

The Russians recognized that transparency movements relied on content, and compromising information seeded to WikiLeaks provided a new method for character assassination. The Russian intelligence services forged ahead compromising adversaries in cyberspace through the late 1990s and early 2000s. They secretly placed child pornography on the computers of defectors and intelligence officers and leaked sex videos of political opponents on the internet, creating media feeding frenzies. Outlets like WikiLeaks were a perfect vehicle for widespread character assassination of enemies worldwide, an open-source vehicle for planting information that allowed for plausible deniability.

Watts concludes this chapter as follows: “Many of the great chess masters have been Russian, and their leader, Vladimir Putin, is a lover of judo. Both require strategy, and victory routinely goes to those who employ their adversary’s strengths against them. As Putin famously demonstrated his judo skills on YouTube, Edward Snowden settled into a Kremlin-provided safe house. Julian Assange stowed away in the Ecuadorian embassy. The Kremlin trolls practiced on audiences in Ukraine and Syria, and occasionally heckled me. As for the hackers swirling around the Syrian Electronic Army, some of them went offline, busy working on a new project. And Russia’s cyber team came together for a new mission, with some new methods the world had yet to see and still doesn’t quite comprehend.”

Rise of the Trolls

December 15, 2019

The title of this post is identical to the title of a chapter in Messing with the Enemy, an excellent book by Clint Watts. Watts writes that Andrew Weisburd was a natural social media savant. He could examine an online persona, spot it as friend or foe, and trace its connections to a host of bad actors online. In the 2000s, as a hobby, Weisburd began tracking al-Qaeda online from his couch. He identified and outed terrorists lurking on the internet so well that al-Qaeda fanatics mailed a white powder package to his house along with this death threat: “To the jewish asshole Aaron [sic] Weisburd, This is our donation to you. Either you close the website called Internet Haganah by next week or you will [be] beheaded.” No anthrax was found, and the website continued as usual.

Weisburd connected some of the trolls to a recent internet nemesis: the Syrian Electronic Army (SEA). The SEA presented itself as a new hybrid threat of the online world, embodying the spirit of more popular activist collectives such as Anonymous and LulzSec, but clearly in the bag for President Assad and the Syrian regime. The SEA thus effectively became the first nation-state cyber proxy force on the internet. Since 2011, the SEA had undertaken a string of attacks, taking aim at many mainstream media outlets that were revealing the horrors of the Syrian civil war.

Effective troll armies consist of three types of accounts: hecklers, honeypots, and hackers. Hecklers lead the propaganda army, winning audiences through their derisive banter and content-fueled feeds. Hecklers identify and drive wedge issues into their target audiences by talking up online allies and arming them with their preferred news, consisting of both true and false information, loaded with opinion, that confirms audience members’ beliefs. Hecklers also target social media adversaries and focus the angst of their cultivated supporters against opposing messages and their messengers. For example, in the case of Syria, anyone pointing out President Assad’s human rights violations might immediately be called a terrorist sympathizer and subjected to endless 140-character taunts.

When hecklers alone can’t stop the challenges of the opposition, honeypots sweep in to compromise adversaries. In the traditional espionage sense, honeypots are attractive women who seduce men into compromising sexual situations. Female personas remain the predominant form on Twitter, but honeypots can also assume the persona of an allied political partisan. Among the SEA, attractive females—or what appeared to be women—performed the traditional mission of befriending men in the target audience or sidling up to adversary accounts hoping to compromise personas or publicly embarrass them. “Can you follow me so I can DM you something important?” might be the siren song of one of these e-ladies. Lady honeypots in 2014 sought follower relationships with men that would lead to privileged insights, but their messages often contained a malware payload allowing the attackers to gain entry to a target’s computer.

Behind the scenes, but still observable in the SEA social media storm, were hacker accounts. Examination of their follower and following relationships showed that they were highly networked with honeypot accounts, likely controlling the conversation between the lovely lady personas and their unwitting targets. The messages that honeypots delivered to unsuspecting men opened doorways to their phones and computers, causing them to give up their personal emails, corporate communications, and, in some cases, their contact lists, allowing for malicious spam distribution.

In 2013 and 2014, honeypots and the hackers behind them waged a highly successful campaign across a swath of companies and Western personalities. Corporate America suffered as unwitting employees clicked on malicious links and, in turn, coughed up access to private databases of subscribers and workers.

CRIME

December 14, 2019

This post is based on Messing with the Enemy, an excellent book by Clint Watts. CRIME is an acronym Watts uses to describe the motivations and enticements an intelligence officer or law enforcement investigator uses to recruit an agent overseas or a street informant in America (Watts notes that the CIA uses the acronym MICE). These are the reasons why people turn, or flip, when they begin reporting on a group they once declared allegiance to, or betray an ally on behalf of their foe. C stands for compromise. Compromised people can be coerced into doing something they might not normally consider. A criminal charge, an outstanding arrest warrant, unpaid debts, a sick loved one who needs surgery—all provide avenues for convincing a person to provide assistance. R stands for revenge. Watts writes, “Think Mandy Patinkin in The Princess Bride: ‘My name is Inigo Montoya. You killed my father. Prepare to die.’” There may not be any other motivation that makes people as relentless in its pursuit. The unjust murder of a loved one, wrongful treatment of others, perceived injustice by a rival: revenge, once pursued, usually can be countered only by death. Ideology constitutes the I and represents the purest motivation for any action. Those driven by ideology always prove the hardest to flip and the most difficult to stop. M should indeed be bold: money. It’s the most common reason for betrayal and the flimsiest. Those incentivized by cash prove to be the easiest to recruit and the most likely to deceive or switch teams. Finally, the E is for ego. Fame and glory, the desire to be a hero, makes men do strange things. Empower and embrace the ego of a narcissist and he’ll be a cost-effective asset, a turncoat for good or evil, depending on the suitor.

Not all terrorists communicate in Arabic. English is also used, which is useful for recruiting in the United States and other English-speaking countries. Omar Hammami is an interesting terrorist. He was born and lived in the United States, but he moved to the Arab world and set himself up to be not only a terrorist, but also a leader of terrorists. Watts engaged Hammami. He made a quick assessment of Hammami’s motivations, which were revealed at length on Twitter. “The most obvious motivation for his endless disclosures was compromise. The more Omar got his story into the public regarding al-Shabaab (the terrorist group) hunting him, the more likely he’d be able to survive, gain protectors, and push Shabaab into a no-win situation. If the terrorists killed Hammami, they’d hurt their brand in the eyes of future recruits and international supporters. Furthermore, each Shabaab attempt to hunt Hammami and quell his supporters increased Omar’s revenge response. A war inside was what I (Watts) wanted. I’d (Watts) amplify any of Omar’s resentments and accentuate his quest for revenge.”

Watts continues, “More subtle but still immediately apparent were Omar’s egotistical motivations. Hammami loved attention—loved it. He thought of himself as a future jihadi visionary, consistently sought to showcase his theological expertise, and pined for the attention of senior jihadis and well-known terrorism experts. Omar wanted to be famous, and I’d (Watts) help him do that. In so doing, I’d (Watts) undermine motivations others might have for heading off to Somalia and joining al-Shabaab.”

Continuing further, “There were also topics I wanted to avoid when chatting with Omar. Hammami wanted to be an ideological expert, and he’d spent time studying and pontificating, developing his own vision for the future of global jihad. As a non-Muslim lacking any theological expertise, I (Watts) risked empowering Omar by engaging in his religious rants and raising his profile among his supporters. I (Watts) wouldn’t be able to convince him that he was wrong about his religion, and I (Watts) stood to look quite stupid if I (Watts) tried and failed. A second area I (Watts) sought to avoid was money, specifically his financial situation. He had left America to join terrorists in one of the most impoverished countries in the world. Sitting in prosperous America, I (Watts) didn’t want to glorify his financial sacrifice.”

Continuing still further, “I (Watts) took the three motivations I (Watts) wanted to amplify and then identified common ground I could establish with Hammami for rapport building. He wanted to talk with me (Watts)—that was obvious—but I (Watts) didn’t want to speak with him strictly about terrorism. One heated debate would end our engagements. Persuading him to divulge more information or discuss his positions would mean first getting him to feel a deeper connection.”

“RPMs” is a discussion technique used to nudge guilty people toward a confession. R stands for rationalize, so Watts would justify some of Hammami’s actions. P stands for projection, in which he would sidle up to Omar’s position and take his side. M stands for minimize: Watts would minimize Hammami’s actions, which was difficult, as he had killed people.

To cut to the end of this story, Omar Hammami never returned to the United States. Twelve years and a day after the September 11 terrorist attacks, al-Shabaab hunted him down in the forests of Somalia and killed him.

Messing with the Enemy

December 13, 2019

Messing with the Enemy is an excellent book by Clint Watts. He is a Robert A. Fox Fellow in the Foreign Policy Research Institute’s Program on the Middle East, as well as a senior fellow at the Center for Cyber and Homeland Security at the George Washington University. He is a graduate of the U.S. Military Academy, and in addition to his work as an Army officer, he also served in the F.B.I. He served as an officer of the Combating Terrorism Center at the Military Academy.

He used the internet to study, or as he writes, mess with extremists half a world away. He observed their debates, gauged their commitment to terrorist principles, and poked them with queries from a laptop at home. He was also able to pose as a fellow terrorist.

The internet provided assistance to al-Qaeda operatives when Osama bin Laden was forced out of Tora Bora, Afghanistan. Hunted by the entire international community, his aides and deputies were constantly on the run. The internet allowed for communication between and control of these aides and deputies. Throughout the mid-to-late nineties, websites and email chains provided a communications leap forward for terrorists (and the rest of the world), but they had a major limitation: they were one-way modes of information sharing. Bin Laden could only broadcast to audiences; he could not easily follow up with those inclined to join the ranks. All that changed with the dawn of the new millennium. With the emergence of vBulletin, commercially available software allowing group discussions, and Yahoo Groups, audiences now had a direct window to communicate with Islamist webmasters, clerics, and leaders. In 2001, the Global Islamic Media Front started a Yahoo Group and a related website, requiring users to acquire a password to access the discussion page. Many others featuring general Islamist discussions with a sprinkling of jihadi messaging popped up and down toward the end of the decade. Watts writes that none endured for long before rumors of intelligence operatives penetrating them squelched their dialogue and counterterrorism arrests of forum administrators led to their closure. Two-way communication between al-Qaeda leaders and hopeful jihadis increased, but more content needed to follow to sustain audience engagement.

al-Qaeda created an official media group, al-Sahab, to fill the void and gain greater control of jihadi discussions. Bin Laden recognized the value of jihadi websites and began sending audio and written statements from top al-Qaeda leaders directly to al-Qaeda in the Arabian Peninsula’s leader, Yusuf al-Uyayri, and his site Al Neda. Websites and forums served as principal communication points for those around the world inspired by the incredible success of the 9/11 attacks and seeking to join bin Laden’s ranks.

Replication of sites and duplication of content became key features of online survival for al-Qaeda supporters. Openly available software and hosting services meant websites and forums could be created by anyone in minutes and accessed by anyone around the world with an internet connection. This lowered technical barrier for mainstream internet users meant relatively novice jihadis now had the power to create their own safe havens online.

In his book The Wisdom of Crowds, James Surowiecki describes how the internet provided a vehicle for crowds to make smarter decisions than even the smartest person in the crowd, working alone, could make. Online, on January 2, 2011, Watts made the prediction that Osama bin Laden would be killed that year. He made this prediction as a vehicle for crowdsourcing an important question: What would al-Qaeda and the world of terrorism be if bin Laden were no more? He used this New Year’s prediction to provoke the audience to answer this question. Watts was disappointed to find that rather than yielding great wisdom or important insight from experts, the results instead returned a pattern of answers of no consequence. “Nothing will change” and “It doesn’t matter” became pat answers from the best thinkers in the field, regardless of the question.

So Watts took recourse in research by Philip Tetlock, which has been reported in previous healthymemory posts. In his 2005 book, Expert Political Judgment, Tetlock reported on a survey of hundreds of experts in political thought over two decades. He determined that, en masse, experts were no more successful at predicting future events than a simple coin toss. He identified two kinds of forecasters, borrowing from a Greek saying: “The fox knows many things, but the hedgehog knows but one big thing.” He classified the good predictors as “foxes” and the poorer performers as “hedgehogs.” What differentiated the two groups’ success was how they thought, not what they thought. Tetlock’s foxes were self-critical, used no template, and acknowledged their misses. By contrast, hedgehogs sought to reduce every problem to a single theory, were not comfortable with complexity, were overconfident in their assessments, and placed their faith in one big idea, pushing aside alternative explanations. Watts saw a lot of hedgehogs in his online surveys, and only occasional foxes offering insights. He developed techniques to identify foxes in advance.
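For readers who wonder how forecasting skill can be measured at all, this research tradition scores forecasters with probability scoring rules such as the Brier score: the squared gap between the probability a forecaster assigned and what actually happened, averaged over many questions (lower is better). Here is a minimal sketch; the forecasts and outcomes are invented for illustration.

```python
# Minimal sketch of Brier scoring; the forecasts and outcomes are invented.
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    0.0 is perfect; always answering 0.5 (a coin toss) scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

fox = [0.8, 0.3, 0.9, 0.2]       # calibrated, self-critical forecasts
hedgehog = [1.0, 0.0, 1.0, 1.0]  # overconfident, one-big-idea forecasts
happened = [1, 0, 1, 0]          # what actually occurred

print(brier_score(fox, happened))       # ~0.045, better than a coin toss
print(brier_score(hedgehog, happened))  # 0.25, no better than a coin toss
```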

At this point, there will be a break in this narrative to mention that Tetlock has conducted additional research into intelligence analysis using a very large sample of analysts. There he was able to identify analysts who performed better than chance, and these analysts were, of course, foxes. These posts can be found by entering “Tetlock” into the search box at healthymemory.wordpress.com.

Returning to the current post on Clint Watts: he used the research of Daniel Kahneman and Amos Tversky (two authors oft cited in this blog). Their research identified a series of heuristics and noted the circumstances where biases emerged to produce incorrect judgments. Long ago they identified the predictive missteps Watts had observed in his polls. Status quo bias, a belief that tomorrow will most likely look like today, ruled the responses. Loss aversion, a tendency to avoid anticipated losses rather than pursue equally likely gains, filled the results of counterterrorism policy questions. Herding, the tendency of large groups of people to behave the same way and pursue groupthink, drove Watts’ social media recruits to the same set of answers.

Watts changed his approach using Tetlock’s insights and Kahneman and Tversky’s heuristics and biases. Instead of asking simple yes-no questions, he flooded respondents with as many potential outcomes as he could think of, making it challenging for nonexperts to wade through the responses. He identified novices and less innovative thinkers by playing to the status quo bias. Every question had a “no change” response option, surrounded by responses imitating common thinking stripped from Google searches, newspaper headlines, and cable news pundits. With every question he offered survey takers a comment box or allowed them to craft an “other” response.

The prediction he made was confirmed when, on May 2, 2011, U.S. Navy SEALs killed Osama bin Laden in Abbottabad, Pakistan. The result was that his Twitter feed of only a couple hundred followers suddenly became more active than usual. For a brief period, news of bin Laden’s death brought a world of Google-search visitors to his New Year’s prediction. His small blog suddenly had an audience, and he had a new opportunity to gather perspectives from a larger crowd.

Hypnotherapy Can Aid Some With Surgery

December 12, 2019

The title of this post is identical to the title of an article by Debra Bruno in the Health & Science section of the 12 November 2019 issue of the Washington Post. Some U.S. hospitals are offering hypnosis to patients to lessen preoperative anxiety, to manage postoperative pain, and even to substitute for general anesthesia in partial mastectomies for breast cancer. The article notes that hypnosis has been used for years to help people quit smoking, lose weight, get to sleep, and control stress.

Staff anesthesiologist Elizabeth Rebello of Houston’s MD Anderson Cancer Center uses hypnotherapy for segmental (partial) mastectomies and sentinel node biopsies, in which doctors identify and remove a lymph node in the underarm area as well as cancerous tumors in the breast.

Although there have been no published results yet from the hospital’s ongoing randomized controlled study comparing surgical patients who get either general anesthesia or hypnosis with local anesthesia, the feedback from the 60 hypnotized patients in the study has been positive. Before the surgery, patients have a 15- to 20-minute practice session with a hypnotherapist. During the breast surgery itself, the patients are awake, and EEG monitoring of brain electrical impulses shows many patients responding to the hypnotherapy as if they were under sedation. When asked whether they would undergo hypnotherapy again, the overwhelming response is “yes.”

The definition for hypnotherapy is “focused attention that allows a patient to enhance control over mind and body.” It can work for minor surgeries. It also could be an option for older patients who are more susceptible to delirium after general anesthesia.

Patients need to be able to expect that their pain can be controlled by a combination of local anesthesia and hypnosis. Anesthesiologists don’t want to compromise the procedure because the patient is suffering and in pain.

It is not surprising that hypnotherapy works for pain management. Pain perception, because it originates in the brain, can be different for every person. Hypnotherapy can alter how much pain a person feels. Stanford medical school offers patients classes in self-hypnosis to deal with a variety of medical issues, including pain, stress-related neurological problems, phobias, and side effects from medical treatments for cancer, such as nausea and vomiting.

Dr. Elizabeth Rebello, an associate professor in anesthesiology at MD Anderson Cancer Center in Houston, notes that using hypnotherapy in place of sedating and pain medications in some breast cancer surgeries has resulted in less reliance on opioids for relief during and after the procedure. She says, “Hypnosedation will not completely replace general anesthesia, but in some cases when the standard of care is general anesthesia, hypnosedation might be a better plan. If this is the case we owe it to our patients to explore this option.”

the evolving self

December 11, 2019

the evolving self is a new book by mihaly csikszentmihalyi. He’ll be referred to in this post as mc. The subtitle is “a psychology for the new millennium.” mc sets a high goal for himself: he sees it as critical for the evolving self to evolve to overcome the forces of entropy. Indeed, this is an extraordinary objective to achieve.

As a scholarly work, the evolving self is impressive. mc reviews the worlds of genes, culture, and the self. He discusses predators and parasites, and the competition between memes and genes. HM learned much in reading this book. While reading, he was thinking that an enormous number of posts would be required to capture the meaning of this book. But he came to the conclusion that the work is seriously flawed, and that it would be a mistake to walk readers through it in a long series of posts. Still, if you find this topic interesting, read the book.

Key to everything mc writes is the concept of flow. Flow is what one experiences when a skill or train of thought is proceeding well. Indeed, flow is a most enjoyable experience. The problem is that mc seems to regard flow as an end in itself. To the best of HM’s knowledge, mc never discusses what happens when flow ceases or is disrupted. Presumably this is something that most of us have experienced, and it is an experience that can readily be viewed on television. Watch a figure skater who is obviously experiencing flow in a beautiful, flawless routine. Then she suddenly falls splat onto the ice. Or watch the professional golfer who is making birdies and eagles on consecutive holes. Then suddenly his game deteriorates; double bogeys and sand traps become the rule. These sudden cessations of flow are most unpleasant.

MC sets the seeking of flow as a goal in itself. But this could be quite harmful. The easier the task, the easier it is to achieve flow, so seeking flow for its own sake could lead one to stick to easy tasks, in effect becoming addicted to flow.

More difficult tasks and bodies of knowledge require extensive periods of learning, which can be quite frustrating. In the lingo of this blog, flow is a System 1 process. System 2 processing, more commonly known as thinking, requires the expenditure of mental effort.

Our personal development requires extensive System 2 processing. There are times when this becomes easy and flow is achieved. But this is not an end in itself. Rather, it signals that the time has come to advance and take on more difficulty.

This is what this blog advocates: growth mindsets, and the continuous growth of those mindsets throughout one’s life. The result is a more fulfilling life and a decreased likelihood of falling prey to Alzheimer’s or dementia.

Growth mindsets benefit not only the individual, but society as a whole. The advancements of science and technology require growth mindsets.

Moreover, one’s goals should not be the acquisition of wealth and possessions. We must all feel responsible for our fellow humans and for the development and advancement of society as a whole.

It is astonishing that despite all MC’s knowledge, there remains an enormous lacuna. That gap is meditation. There are more than 100 posts on this topic in the healthymemory blog (search for “meditation” in the blog’s search box, or go to https://healthymemory.wordpress.com/?s=meditation).

Meditation is central because it helps us develop our powers of attention, which are central to cognitive achievement. Meditation can also lead to appreciation for and love of our fellow humans.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Gravity and the Dunning-Kruger Effect

December 10, 2019

Ask someone what they think gravity is and they will remember Sir Isaac Newton and the falling apple. And they will think that gravity is something that keeps us attached to the earth. But it is unlikely that they understand Newton’s truly remarkable contribution. Newton realized that gravity was operating in space and in the interactions of objects in space. Over many years of data collection and mathematical development, he described how gravity affects the entire universe. And these descriptions were precise enough that predictions could be made.

Most people think of gravity as a force of nature, but it is not necessarily a force. Newton thought of gravity less as a force than as something mysterious that acts across space. So did Einstein. Quantum physicists agree with both Newton and Einstein: Gravity is something.

The author of the book said that when he initiated conversations on the subject of gravity, the conversations tended to fall into one of two categories.

Category One:

Author: Nobody knows what gravity is.
Civilian: (Pause). What do you mean, nobody knows what gravity is?
Author: I mean nobody knows what gravity actually is.
Civilian: (Pause.) Isn’t it a force of nature?
Author: Okay, fine—but what does that even mean?
Civilian: (Silence.)

Category Two:

Author: Nobody knows what gravity is.
Scientist: That’s right.

The author concludes: nobody knows what gravity is, and almost nobody knows that nobody knows what gravity is. The exception is scientists. They know that nobody knows what gravity is, because they know that they don’t know what gravity is.

The author continues. “We know what gravity does, of course. In the heavens, gravity tethers the Moon to Earth, other moons to other planets, moons and planets to the Sun, the Sun to the stars, stars to stars, galaxies to galaxies. On our own planet, we know that gravity is what planes have to overcome. We all know what gravity does.”

The author is Richard Panek and the title of the book is The Trouble with Gravity: Solving the Mystery Beneath Our Feet.

Readers of the healthymemory blog should know that the Dunning-Kruger effect consists of two components: we humans tend to think we know much more than we actually know, while true experts in a field are painfully aware of how much they don’t know. The understanding of gravitation provides an ideal example of this effect.

Physicists have estimated how much they know: about 4% of the universe is understood. The remaining 96% is referred to as dark matter and dark energy.

Think of this estimate as an accomplishment, not as a shortcoming. It is important in every endeavor to have some grasp of what is known and what still needs to be learned. And consider what has been accomplished with the 4% that is understood. Also consider what will be accomplished as more and more of the Universe is understood. Research continues. Notions and theories are being advanced, and some highly sophisticated experiments are being designed and conducted.

This blog recommends growth mindsets and lifelong learning that encompasses new topics. HM recommends Panek’s book as a vehicle for cognitive growth. Fear not: there is no math in this book. Still, it is quite challenging. One might want to skim the earlier chapters and start concentrating when Newton arrives on the scene.

Misinformation and Morality

December 9, 2019

The title of this post is the same as the title of an article by Daniel A. Effron and Medha Raj in the journal Psychological Science (2019). The subtitle of the article is Encountering Fake-News Headlines Makes Them Seem Less Unethical to Publish and Share.

The rapid spread of “fake news” has some of us worried that misinformation has become the major moral crisis of our times. When people find misinformation permissible, they should be less inclined to take action to stop it, less likely to hold its purveyors accountable, and more likely to spread it themselves. In surveys, 14% of U.S. adults and 17% of UK adults admitted to sharing news that they thought was fake at the time.

The present research investigated what shapes moral judgments of fake news, “articles that are intentionally and verifiably false, and could mislead readers.” Among fact-checked news stories, fake articles were more likely than real articles to “go viral” on social media. When a fake news article goes viral, people may encounter it multiple times. Previous research raised the concern that people are more likely to believe a fake-news headline if they have seen it before. The authors write that regardless of whether one believes a piece of fake news, prior encounter with it can reduce how unethical one thinks spreading it would be. This prediction is based on the idea that previously encountered information feels more fluent. People also judge repeated statements as more accurate, which is called the “illusory-truth effect.”

Four experiments were conducted. Experiment 1 tested whether four previous encounters with a fake-news headline would make the headline seem less unethical to publish. Experiment 2 tested whether a single encounter would suffice. Experiment 3 tested the following boundary condition: if previously encountered (vs. new) misinformation seems less unethical to spread because it feels more intuitively true, then encouraging people to think deliberatively instead of intuitively should attenuate the effect. Experiment 4 addressed whether repeatedly encountering the same headlines could affect moral judgments above and beyond judgments of their accuracy, likability, and popularity. These experiments examined whether prior encounters with headlines would increase people’s intentions to share and “like” them, reduce their inclination to censor the people who posted them, and increase actual sharing behavior.

Experiment 1 found that repeatedly encountering a fake-news headline can reduce people’s moral condemnation of publishing it, increase their inclination to promote it on social media, and decrease their inclination to block or unfollow someone who posted it.

Experiment 2 found that encountering a fake news article once is sufficient to reduce people’s moral condemnation of publishing this information when it is encountered again.

Experiment 3 found that among participants instructed to think intuitively, previous encounters with fake-news headlines made those headlines seem less unethical to publish, which in a mediation analysis correlated with a stronger inclination to “like” and share those headlines and a weaker inclination to block or unfollow someone who shared them, consistent with the results of the prior experiments. The authors concluded that the evidence was insufficient to show that instructing people to think deliberatively attenuated these effects.

Experiment 4 extended the generalizability of the previous three experiments. It showed that repeated encounters with a fake-news headline can reduce moral condemnation of sharing it even when people are not informed that the headline is fake. The effect was robust when analyses controlled for known consequences of repetition (judgments of accuracy, liking, and popularity), casting doubt on alternative explanations. Suggesting that moral judgments are related to social-media behaviors, a mediation analysis again showed that reduced moral condemnation of previously seen versus new headlines correlated with stronger intentions to share the headlines. Beyond intentions, people were actually more likely to share repeated headlines than new ones.
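
Experiments 3 and 4 both lean on mediation analysis, which is worth unpacking: the question is whether prior exposure changes sharing intentions through reduced moral condemnation. Below is a minimal sketch of that logic with simulated data; it is not the authors’ dataset or their exact statistical model, and all variable names and effect sizes are invented.

```python
# A minimal sketch of a simple mediation analysis (simulated data only;
# NOT the authors' dataset or their exact model).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Treatment: 1 = headline seen before, 0 = new headline.
seen_before = rng.integers(0, 2, n)

# Mediator: moral condemnation, lowered by prior exposure (assumed effect).
condemnation = 5.0 - 0.8 * seen_before + rng.normal(0, 1, n)

# Outcome: intention to share, driven down by condemnation.
share_intent = 2.0 - 0.5 * condemnation + rng.normal(0, 1, n)

# Total effect of prior exposure on sharing intention.
total = sm.OLS(share_intent, sm.add_constant(seen_before)).fit()

# Direct effect, controlling for the mediator.
X = sm.add_constant(np.column_stack([seen_before, condemnation]))
direct = sm.OLS(share_intent, X).fit()

print("total effect of exposure:", total.params[1])
print("direct effect, controlling for condemnation:", direct.params[1])
# If the direct effect shrinks toward zero, the data are consistent with
# condemnation mediating the exposure -> sharing relationship.
```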

The authors conclude that efforts to curb misinformation face an uphill battle, and that future research is needed to understand whether moral intuitions causally affect sharing behavior in real social-media environments. They write, “The wider misinformation spreads the more likely individuals will be to encounter it multiple times. And encountering it multiple times could reduce the moral condemnation of it, and license them to spread it further.”

Advertising

December 8, 2019

The title of this post is identical to the title of one of the fixes needed for the internet proposed in Richard Stengel’s informative work, Information Wars. This is the last fix he provides.

He writes, “Advertisers use ad mediation software provided by the platforms to find the most relevant audiences for their ads. These ad platforms take into account a user’s region, device, likes, searches, and purchasing history. Something called dynamic creative optimization, a tool that uses artificial intelligence, allows advertisers to optimize their content for the user and find the most receptive audience. Targeted ads are dispatched automatically across thousands of websites and social media feeds. Engagement statistics are logged instantaneously to tell the advertiser and the platform what is working and what is not. The system tailors the ads for the audiences likely to be most receptive.”

Of course, the bad guys use all these tools to find the audiences they want as well. The Russians became experts at using two parts of Facebook’s advertising infrastructure: the ads auction and something called Custom Audiences. In the ads auction, potential advertisers submit a bid for a piece of advertising real estate. Facebook not only awards the space to the highest bidder, but also evaluates how clickbaitish the copy is. The more eyeballs the ad will get, the more likely it is to win the ad space, even if the bidder is offering a lower price. Since the Russians did not care about the accuracy of the content they were creating, they were willing to create sensational false stories that went viral. Hence, more ad space.
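
To make the auction dynamic concrete: if the winner is chosen by bid weighted by predicted engagement rather than by raw bid, a sensational ad can win at a lower price. The following toy model illustrates this; the scoring rule and the numbers are invented for illustration and are not Facebook’s actual auction.

```python
# Toy model of an engagement-weighted ad auction (illustrative only;
# the scoring rule and numbers are invented, not Facebook's real system).
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    price: float                 # offered payment per impression
    predicted_engagement: float  # estimated clicks/shares, 0 to 1

def auction_winner(bids):
    # Score = price weighted by predicted engagement, so a sensational
    # ad with a low price can beat a sober ad with a higher price.
    return max(bids, key=lambda b: b.price * b.predicted_engagement)

bids = [
    Bid("sober_news_org", price=5.00, predicted_engagement=0.10),
    Bid("sensational_troll_farm", price=2.00, predicted_engagement=0.40),
]
print(auction_winner(bids).advertiser)  # -> sensational_troll_farm
```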

The Russians’ efforts in the 2016 election have been reviewed in previous healthy memory blog posts. The Trump organization itself used the same techniques and spent far more on these Facebook ads than the Russians did.

Stengel concludes by noting that these techniques would reduce, but not eliminate, the amount of disinformation in our culture. He writes that disinformation will always be with us, because the problem is not facts, or the lack of them, or misleading stories filled with conjecture; the problem is us (Homo sapiens). There are all kinds of cognitive biases and psychological states, but the truth is that people are going to believe what they want to believe. It would be wonderful if the human brain came with a lie detector, but it doesn’t.

HM urges the reader not to take this conclusion offered by Stengel too seriously. It is true that human information processing is biased, because it needs to be. Our attention is quite limited. But rather than throwing in the towel, we need to deal with our biases as best we can. The suggestions offered by Stengel are useful to this end.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The Media

December 7, 2019

The title of this post is identical to the title of one of the fixes needed for the internet proposed in Richard Stengel’s informative work, Information Wars. Stengel writes, “America doesn’t have a ‘fake news’ problem—it has a media literacy problem.”

He continues, “Millions of Americans aren’t able to tell a made-up story from a factual one. Few Americans examine the provenance of the information they get, and many will trust a story from an unknown source as much as one from the New York Times. Moreover, disinformationists have gotten better and better at creating stories and websites that appear legitimate. During the presidential campaign the Russians created sites with names like Denver Guardian, which appeared to be genuine news sites.”

Schools don’t teach media literacy, and they need to. Students need to learn how news organizations work and how to identify the provenance of information. Stengel notes, “Making journalism a staple of secondary education would go a long way toward solving the ‘fake news’ problem.”

Stengel makes some recommendations that would radically improve online news by using the very technology on which this news is presented. He suggests that online the story should essentially deconstruct itself. Next to the text there should be links to the full transcripts of interviews the reporter did. Those links would also include the URLs of biographies of those in the story. Writers and editors should include links to the primary and secondary sources for the story—all the research—including other news stories, books, video, and scholarly articles. There should be links to other photos or videos that were considered for the story. He would even have a link to the original outline of the story so that the reader could see how it was conceived. The top of each story should feature a digital table of contents that shows each of these aspects of the story. This is a technologically modern and even more open version of what scholars do with footnotes and bibliographies. The basic idea is to show the reader every step of the story and to show how it turned out the way it did.
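
To make Stengel’s proposal concrete, here is one way such a self-deconstructing story might be represented as data. This is only a sketch; Stengel specifies no format, and all field names here are hypothetical.

```python
# Hypothetical schema for a "self-deconstructing" news story, per
# Stengel's proposal. Field names are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Story:
    headline: str
    body: str
    interview_transcripts: List[str] = field(default_factory=list)  # URLs
    source_biographies: List[str] = field(default_factory=list)     # URLs
    primary_sources: List[str] = field(default_factory=list)    # documents, data
    secondary_sources: List[str] = field(default_factory=list)  # articles, books
    unused_media: List[str] = field(default_factory=list)  # photos/video considered
    original_outline: str = ""  # how the story was conceived

    def table_of_contents(self) -> List[str]:
        # The digital table of contents Stengel describes at the top of a story:
        # every non-empty component of the story's provenance.
        return [name for name, value in vars(self).items() if value]
```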

Stengel concludes this section by arguing that news organizations must get rid of online clickbait and so-called content recommendation networks and “Sponsored Stories” that Taboola and Outbrain perch at the bottom of the screen and pretend to be news. He states that their presence at the bottom of the page weakens and undermines the credible journalism above it.

Algorithms, Rating Systems, and Artificial Intelligence

December 6, 2019

The title of this post is identical to the title of one of the fixes needed for the internet proposed in Richard Stengel’s informative work, Information Wars. Currently, the algorithms that decide what story goes to the top of Google’s search results or Facebook’s newsfeed rely in large part on how viral a story is, meaning how often it is linked to or shared by other users. This equates popularity with value: the working assumption is that the more popular a story is, the more valuable it is. So a story about a Kardashian quarrel might well outrank one about nuclear weapons in Pakistan being insecure. Research shows that stories that are emotional or sensational, which are also the stories more likely to be filled with misinformation, are shared much more widely than stories that are less emotional and less sensational. Consequently, these algorithms boost deceptive stories over factual ones. This also incentivizes people to create stories that are emotional and misleading, because such stories produce more advertising revenue.
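
The ranking logic being criticized can be caricatured in a few lines of code. The sketch below scores stories purely by engagement, which is exactly the equation of popularity with value described above; the fields and weights are invented for illustration.

```python
# Caricature of engagement-driven ranking (fields and weights invented).
# It illustrates how equating popularity with value pushes sensational
# stories above important but less-shared ones.
def engagement_score(story):
    return (2.0 * story["shares"] + 1.0 * story["links"]
            + 0.5 * story["comments"])

stories = [
    {"title": "Kardashian quarrel", "shares": 9000, "links": 500, "comments": 12000},
    {"title": "Pakistani nuclear security gaps", "shares": 400, "links": 900, "comments": 300},
]

feed = sorted(stories, key=engagement_score, reverse=True)
for s in feed:
    print(s["title"], engagement_score(s))
# The gossip story tops the feed despite the other story's importance.
```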

Currently, the algorithms that do this are black boxes that no one can see into. Platform companies should be compelled to be more transparent about their algorithms. If the companies had to publicly explain their formulas for relevance and importance, people would be able to make intelligent choices about the search engines they use. Wouldn’t you like to know the priorities of the search engine(s) you use?

Stengel notes that there has been a valuable movement toward offering ratings systems for news. These systems allow users to evaluate the trustworthiness of individual stories and the news organizations themselves. A study by the Knight Foundation found that when a news rating tool marked a site as reliable, readers’ belief in its accuracy went up. A negative rating for a story or brand made users less likely to use the information.

The Trust Project posts “Trust Indicators” for news sites, providing details of an organization’s ethics and standards. Slate has a Chrome extension called “This is Fake,” which puts a red banner over content that has been debunked, as well as on sites that are recognized as “serial fabricators.” Factmata is a start-up that is attempting to build a community-driven fact-checking system in which people can correct news articles. Stengel is on the board of advisors of NewsGuard, which labels news sites as trustworthy or not as determined by extensive research and a rigorous set of criteria.

Stengel writes that the greatest potential for detecting and deleting disinformation and “junk news” online is through artificial intelligence and machine learning. This involves using computer systems to perform human tasks such as visual perception, speech recognition, decision-making, and reasoning to detect and then delete false and misleading content. Pattern recognition finds collections or groupings of dubious content. Data-based network analysis can distinguish between online networks that are formed by actual human beings and those that are artificially constructed by bots. Companies can adjust their algorithms to favor human-created networks over artificial ones. The platforms can even offer a predictor, based on sourcing, data, and precedent, as to whether a certain piece of content is likely to be false.
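
As a hedged illustration of the network-analysis idea: bot accounts tend to post with machine-like regularity and to be created in bursts, so even two crude behavioral features can separate them in a toy setting. The features, numbers, and model below are synthetic; real detection systems are far more sophisticated.

```python
# Minimal sketch of separating bot-like from human-like accounts using
# simple behavioral features. All data and thresholds are synthetic;
# production systems use far richer signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Feature 1: variance of time between posts (humans are irregular).
# Feature 2: account age in days (bot farms create accounts in bursts).
humans = np.column_stack([rng.normal(50, 15, 200), rng.normal(900, 300, 200)])
bots   = np.column_stack([rng.normal(5, 2, 200),  rng.normal(60, 20, 200)])

X = np.vstack([humans, bots])
y = np.array([0] * 200 + [1] * 200)  # 0 = human, 1 = bot

clf = LogisticRegression().fit(X, y)

suspect = [[4.0, 45.0]]  # very regular posting, brand-new account
print("probability of bot:", clf.predict_proba(suspect)[0, 1])
```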

Of course, the bad guys can use it too. Stengel writes that they are clearly developing their own systems to understand how their target audiences behave online and how to tailor disinformation so that those audiences will share it. Platforms can help advertisers and companies find and reach their best audiences, and this works for bad guys as well as good. Platforms have to work to stay one step ahead of the disinformationists by developing more nuanced AI systems to protect their users from disinformation and information they do not want.

Privacy and Elections

December 5, 2019

The title of this post is identical to the title of one of the fixes needed for the internet proposed in Richard Stengel’s informative work, Information Wars. If your private information is protected, you are less likely to be targeted by deceptive information and disinformation.

Stengel thinks that an online privacy bill of rights is a good idea. One thing that needs to be mandatory in any digital bill of rights: the requirement that platforms obtain consent from users to share or sell their information and notify users about the collection of their data. This is the absolute minimum.

Regarding elections, platforms need to alert people when a third party uses their private information for online advertising. Political campaigns are highly sophisticated in their ability to use your consumer information to target you with advertising. If they know what movies you like, what shoes you buy, and what books you read, they know what kind of campaign advertising you will be receptive to. At the same time, advertisers must give users the ability to opt out of any content they receive.

The following fix could happen quickly: treat digital and online campaign advertising with the same strictness and rigor as television, radio, and print advertising. Currently, the Federal Election Commission does not do this. Television and radio stations, as well as newspapers, must disclose the source of all political advertising and who is paying for it. This is not true for digital advertising. The Honest Ads Act, which was introduced by the late Senator John McCain, Senator Amy Klobuchar, and Senator Mark Warner, is an attempt to solve the problem of hidden disinformation campaigns by creating a disclosure system for online political advertising. It would require online platforms to obtain and disclose information on who is buying political advertising, as well as who the targeted audience is for the ads. It requires that platform companies disclose if foreign agents are paying for the ads. Platform companies would also be responsible for identifying bots so that voters know whether they are being targeted by machines or actual human beings. Stengel writes that all of this is both necessary and the absolute minimum.
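
The disclosure system the Honest Ads Act envisions amounts to a record attached to every political ad. Here is a hypothetical sketch of such a record; the bill does not define this format, and the fields are guesses at its requirements as summarized above.

```python
# Hypothetical disclosure record for an online political ad, loosely
# modeled on the Honest Ads Act's requirements as summarized above.
# The bill does not define this format; fields are illustrative.
from dataclasses import dataclass

@dataclass
class AdDisclosure:
    buyer: str                 # who is buying the ad
    payer: str                 # who is paying for it
    sponsor_is_foreign: bool   # whether a foreign agent paid for it
    targeted_audience: str     # description of the targeting criteria
    delivered_by_bots: bool    # whether automated accounts spread it
    amount_spent_usd: float

ad = AdDisclosure(
    buyer="Example PAC",
    payer="Example PAC Treasury",
    sponsor_is_foreign=False,
    targeted_audience="voters aged 45-65 in swing counties",
    delivered_by_bots=False,
    amount_spent_usd=12_500.00,
)
print(ad)
```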

For this regulation to be effective, it must also be applied in real time during campaigns. Currently, according to the Federal Election Commission, political campaigns do not have to disclose their ad buys until a year after the fact. This is absurd. People need to know if they are being fed disinformation and falsehoods, and to know this in a timely way so they can factor it into their decision-making. Immediacy is more important during political campaigns than at any other time. Finding out a year later that you were targeted with a false ad by a bot that influenced your vote is worse than useless.

Congress also needs to designate state and local election systems as national critical infrastructure. This would give the federal government broader powers to intervene in a crisis. The Obama administration tried to do this, but the Republican majority in Congress voted it down. Stengel writes, “This is an essential change, and should be a bi-partisan issue.”

Section 230

December 4, 2019

The title of this post is identical to the title of one of the fixes needed for the internet proposed in Richard Stengel’s informative work, Information Wars. New legislation is needed to create an information environment that is more transparent, more consumer-focused, and makes the creators and purveyors of disinformation more accountable. Stengel devotes this section to what he calls legislation’s original sin: the Communications Decency Act (CDA) of 1996. The CDA was one of the first attempts by Congress to regulate the internet. Section 230 of the act says that online platforms and their users are not considered publishers and have immunity from being sued for the content they post. It reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Congress’s motivation back in 1996 was not so much to shield these new platforms as to protect free speech. Congress didn’t want the government to police these platforms and thereby potentially restrict freedom of speech—it wanted the platforms to police themselves. Congress worried that if the platforms were considered publishers, they would be too draconian in policing their content and put a chill on the creation of content by third parties. The courts had suggested that if a platform exercised editorial control by removing offensive language, that made it a publisher and therefore liable for the content on its site. The idea of Section 230 was to give these companies a “safe harbor” to screen harmful content. The reasoning was that if they received a general immunity, they would be freer to remove antisocial content that violated their terms of service without violating constitutional free speech provisions.

Focus on the year this act was passed. This was the era of America Online, CompuServe, Netscape, Yahoo, and Prodigy. That was a different world, and there was no way to anticipate the problems brought by Facebook. Stengel notes that Facebook is not like the old AT&T: Facebook makes money off the content it hosts and distributes; it just calls this “sharing.” Facebook makes the same ad revenue from shared content that is false as from shared content that is true. This problem is not unique to Facebook, but Facebook is perhaps the most prominent example.

Stengel continues, “If Section 230 was meant to encourage platforms to limit content that is false or misleading, it’s failed. No traditional publisher could survive if it put out the false and untrue content that these platforms do. It would be constantly sued. The law must incentivize the platform companies to be proactive and accountable to fighting disinformation. Demonstrable false information needs to be removed from the platforms. And that’s just the beginning.”

Stengel concludes this section as follows: “But let’s be realistic. The companies will fight tooth and nail to keep their immunity. So, revising Section 230 must encourage them to make good-faith efforts to police their content, without making them responsible for every phrase or sentence on their services. It’s unrealistic to expect these platforms to vet every tweet or post. One way to do this is to revise the language of the CDA to say that no platform that makes a good-faith effort to fulfill its responsibility to delete harmful content and provide information to users about that content can be held liable for the damage that it does. It’s a start.”

What to Do About Disinformation

December 3, 2019

The title of this post is identical to the title of a section in Richard Stengel’s informative work, Information Wars. The book covers not only the State Department but also the actions Rick Stengel took in performing his job. But the most useful part of the book is this section, What to Do About Disinformation. Several posts are needed here, and even then they cannot do justice to the information provided in the book.

The Library of Congress, created in 1800, today holds some 39 million books. The internet generates 100 times that much data every second. Information definitely is the most important asset of the 21st century. Polls show that people feel bewildered by the proliferation of online news and data. Mixed in with this daily tsunami is a lot of information that is false as well as true.

Disinformation undermines democracy because democracy depends on the free flow of information. That’s how we make decisions. Disinformation undermines the integrity of our choices. According to the Declaration of Independence “Governments are instituted among Men, deriving their just powers from the consent of the governed.” If that consent is acquired through deception, the powers from it are not just. Stengel states that it is an attack on the very heart of our democracy.

Disinformation is not news that is simply wrong or incorrect. It is information that is deliberately false in order to manipulate and mislead people.

Definitions of important terms follow:
Disinformation: The deliberate creation and distribution of information that is false and deceptive in order to mislead an audience.
Misinformation: Information that is false, though not deliberately; that is created inadvertently or by mistake.
Propaganda: Information that may or may not be true that is designed to engender support for a political view or ideology.

“Fake news” is a term Donald Trump uses to describe any content he does not like. But Trump did not originate the term. The term was familiar to Lenin and Stalin and almost every other dictator of the last century. Russians were calling Western media fake news before Trump, and Trump in his admiration of Russia followed suit. Stengel prefers the term “junk news” to describe information that is false, cheap, and misleading that has been created without regard for its truthfulness.

Most people regard “propaganda” as pejorative, but Stengel believes that it is—or should be—morally neutral. Propaganda can be used for good or ill. Advertising is a form of propaganda. What the United States Information Agency did during the Cold War was a form of propaganda. Advocating for something you believe in can be defined as propaganda. Stengel writes that while propaganda is a misdemeanor, disinformation is a felony.

Disinformation is often a mixture of truth and falsity. It doesn’t have to be 100% false to be disinformation. Stengel writes that the most effective forms of disinformation are a mixture of information that is both true and false.

Stengel writes that when he was a journalist he was close to being a First Amendment absolutist. But he has changed his mind. He writes that in America the standard for protected speech has evolved since Holmes’s line about “falsely shouting fire in a theater.” In Brandenburg v. Ohio, the court ruled that speech that led to or directly caused violence was not protected by the First Amendment.

Stengel writes that even outlawing hate speech will not solve the problem of disinformation. Government may not be the whole answer, but it has a role. He thinks that stricter government regulation of social media can incentivize the creation of fact-based content and disincentivize the creation of disinformation. Currently the big social media platforms optimize for content that has greater engagement and virality, and such content can sometimes be disinformation or misinformation. Stengel thinks that these incentives can be changed, in part through regulation and in part through more informed user choices.

What Stengel finds most disturbing is that disinformation is being spread in a way and through means that erode trust in public discourses and democratic processes. This is precisely what these bad actors want to accomplish. They don’t necessarily want you to believe them—they don’t want you to believe anybody.

As has been described in previous healthy memory blog posts, the creators of disinformation use all the legal tools on social media platforms that are designed to deliver targeted messages to specific audiences. These are the same tools—behavioral data analysis, audience segmentation, programmatic ad buying—that make advertising campaigns effective. The Internet Research Agency in St. Petersburg, Russia uses the same behavioral data and machine-learning algorithms that Coca-Cola and Nike use.
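
Audience segmentation of the kind mentioned here is, at its core, clustering users by behavioral features. A toy sketch with entirely synthetic data follows; real targeting pipelines use far more features and proprietary models.

```python
# Toy audience segmentation via clustering (synthetic data; real
# targeting pipelines use far richer features and proprietary models).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Columns: daily minutes on platform, political-content click rate.
users = np.vstack([
    rng.normal([20, 0.05], [5, 0.02], (100, 2)),   # casual users
    rng.normal([120, 0.40], [20, 0.05], (100, 2)), # highly engaged partisans
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)
print("segment sizes:", np.bincount(segments))
# An advertiser (or a troll farm) would aim messages at the engaged cluster.
```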

All the big platforms depend on the harvesting and use of personal information. Our data is the currency of the digital economy. The business model of Google, Facebook, Amazon, Microsoft, and Apple, among others, depends on the collection and use of personal information. They use this information to show targeted advertising. They collect information on where you go, what you do, whom you know, and what you want to know about, so they can sell that information to advertisers.

The important question is, who owns this information? These businesses argue that because they collect, aggregate, and analyze our data, they own it. The law agrees in the U.S. But in Europe, according to the EU’s General Data Protection Regulation, people own their own information. Stengel and HM agree that this is the correct model. America needs a digital bill of rights that protects everyone’s information as a new social contract.

Stengel’s concluding paragraph is “I’m advocating a mixture of remedies that optimize transparency, accountability, privacy, self-regulation, data protection, and information literacy. That can collectively reduce the creation, dissemination, and consumption of false information. I believe that artificial intelligence and machine learning can be enormously effective in combating falsehood and disinformation. They are necessary but insufficient. All three efforts should be—to use one of the military’s favorite terms—mutually reinforcing.”

What Has Happened to the Global Engagement Center?

December 2, 2019

This post is a follow-up to the post titled “Information Wars.” A few weeks before the end of the Obama administration, Congress codified the Global Engagement Center (GEC) into law in the 2017 National Defense Authorization Act. Its mission was to “lead, synchronize, and coordinate efforts of the Federal Government to recognize, understand, expose, and counter foreign state and non-state propaganda and disinformation efforts aimed at undermining United States national security interests.”

Secretary of State Rex Tillerson did not request the money for more than a year. When he finally made the request, a year and a half into the administration, he asked for only half of the $80 million that Congress had authorized. The sponsors of the legislation, Senators Portman and Murphy, said this delay was “indefensible” at a time when “ISIS is spreading terrorist propaganda and Russia is implementing a sophisticated disinformation campaign to undermine the United States and our allies.”

The GEC is no longer in the business of creating content to counter disinformation. It has become an entity that uses data science and analytics to measure and better understand disinformation. Over the past two years, a steady stream of people has quit or retired and the GEC has had a hard time hiring replacements.

What is especially worrisome is hearing Republicans argue the points of Russian disinformation when debating the Democrats, for example, the claim that Ukraine was involved in disrupting the 2016 election. One has to think that true Republicans like Ronald Reagan and John McCain are thrashing about in their graves.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Information Wars

December 1, 2019

The title of this post is identical to the title of an informative book by Richard Stengel, a former editor of Time magazine. During the second term of the Obama administration he was appointed and confirmed as the Under Secretary for Public Diplomacy. The book provides a detailed and interesting description of the organization and workings of the State Department.

Stengel was appointed to lead the Center for Strategic Counterterrorism Communications, which was integral to the Global Engagement Center. This is important because information warfare is the primary means by which terrorist organizations fight. Their campaigns are punctuated by despicable terrorist acts, but the primary messaging is done using the internet. Effective counter-messaging needed to be developed to counter the messaging of the terrorists.

Although ISIS and al-Qaeda are currently recognized as the primary terrorist organizations, it is important not to overlook the largest and most threatening terrorist organization, Russia. Our term “disinformation” is in fact an adaptation of the Russian word dezinformatsiya, which was the KGB term for black propaganda. The modern Russian notion of hybrid warfare comes from what is called the Gerasimov model. Gerasimov has appeared in previous healthy memory blog posts. He is the father of the idea that in the 21st century only a small part of war is kinetic. He has written that modern warfare is nonlinear with no clear boundary between military and nonmilitary campaigns. The Russians, like ISIS, merged their military lines of effort with their information and messaging line of effort.

In the old days, disinformation involved placing a false story (often involving forged documents) in a fairly obscure left-leaning newspaper in a country like India or Brazil; then the story was picked up and echoed in Russian state media. A more modern version of dezinformatsiya is the campaign in the 1990s that suggested that the U.S. had invented the AIDS virus as a kind of “ethnic bomb” to wipe out people of color.

Two other theorists of Russian information warfare are Igor Panarin, an academic and former KGB officer, and Alexander Dugin, a philosopher who has been called “Putin’s Rasputin.” Panarin sees Russia as the victim of information aggression by the United States. He believes there is a global information war between what he calls the Atlantic world, led by the U.S. and Europe, and the Eurasian world, led by Russia.

Dugin has a particularly Russian version of history. He says that the 20th century was a titanic struggle among fascism, communism, and liberalism, in which liberalism won out. He thinks that in the 21st century there will be a fourth way: Western liberalism will be replaced by a conservative superstate like Russia, leading a multipolar world and defending tradition and conservative values. He predicts that rising conservative strongmen in the West will embrace these values. Dugin supports the rise of conservative right-wing groups all across Europe. He has formed relationships with white nationalist groups in America. Dugin believes immigration and racial mixing are polluting the Caucasian world, and he regards rolling back immigration as one of the key tasks for conservative states. Dugin says that all truth is relative and a question of belief; that freedom and democracy are not universal values but peculiarly Western ones; and that the U.S. must be dislodged as a hyperpower through the destabilization of American democracy and the encouragement of American isolationism.

Dugin says that the Russians are better at messaging than anyone, and that they’ve been working on it as a part of conventional warfare since Lenin. The Russians have been thinking and writing about information war for decades; it is embedded in their military doctrine.

Perhaps one of the best examples of Russia’s prowess at information warfare is Russia Today (RT). During HM’s working days, his job provided the opportunity to view RT over an extended period of time. What is most remarkable about RT is that it bears no obvious resemblance to information warfare or propaganda. It appears to be as innocuous as CNN. Only after long viewing does one realize that one is being drawn to accept the objectives of Russian information warfare.

Stengel notes that Russian propaganda taps into all the modern cognitive biases that social scientists write about: projection, mirroring, anchoring, confirmation bias. Stengel and his staff put together their own guide to Russian propaganda and disinformation, with examples.

*Accuse your adversary of exactly what you’re doing.
*Plant false flags.
*Use your adversary’s accusations against him.
*Blame America for everything!
*America blames Russia for everything!
*Repeat, repeat, repeat.

Stengel writes that what is interesting about this list is that it also seems to describe Donald Trump’s messaging tactics. He asks whether this is a coincidence, or some kind of mirroring?

Recent events have answered this question. The acceptance of the alternative reality that Ukraine has a secret server and was the source of the 2016 election interference is Putin’s narrative, developed by Russian propaganda. Remember that Putin was once a KGB agent. His ultimate success here is the acceptance of this propaganda by the Republican Party. There is an information war within the US, and the US is losing it.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.