Posts Tagged ‘Google’

Stanford Helped Pioneer Artificial Intelligence

May 21, 2019

The title of this post is identical to the first half of a title by Elizabeth Dwoskin in the 19 March 2019 issue of the Washington Post. The second half of the title is “Now it wants humans at the core.” A Stanford University scientist coined the term artificial intelligence (AI), and advances have continued at the university, including the first autonomous vehicle.

Silicon Valley is facing a reckoning over how technology is changing society. Stanford wants to be at the forefront of a different type of innovation, one that puts humans and ethics at the center of the booming field of AI. The university is launching the Stanford Institute for Human-Centered Artificial Intelligence (HAI). It is intended as a think tank that will be an interdisciplinary hub for policymakers, researchers and students who will go on to build the technologies of the future. The goal is to inculcate in the next generation a more worldly and humane set of values than those that have characterized it so far—and guide politicians to make more sophisticated decisions about the challenging social questions wrought by technology.

Fei-Fei Li, an AI pioneer and former Google vice president who is one of the two directors of the new institute, said, “I could not have envisaged that the discipline I was so interested in would, a decade and a half later, become one of the driving forces of the changes that humanity will undergo. That realization became a tremendous sense of responsibility.”

The goal is to raise more than $1 billion. Its advisory panel is a who’s who of Silicon Valley titans, including former Google executive chairman Eric Schmidt, LinkedIn co-founder Reid Hoffman, former Yahoo chief executive Marissa Mayer, Yahoo co-founder Jerry Yang, and the prominent investor Jim Breyer. Bill Gates will keynote its inaugural symposium.

The ills and dangers of AI have become apparent. New statistics emerge about the tide of job loss wrought by the technology, from long-haul truckers to farm workers to dermatologists. Elon Musk called AI “humanity’s existential threat” and compared it to “summoning the demon.”

Serious problems were raised in the series of healthy memory posts based on the book “Zucked.” The healthy memory posts based on the book “LikeWar” raised additional problems. Both sets of problems could be addressed with IA. Actually, IA is already being used to address the issues raised in “LikeWar.” Regarding the problems raised in “Zucked,” rather than hoping that Facebook will self-police or trying to legislate against Facebook’s problematic practices, AI could police these social networks online and flag problematic practices.

It is the position of this blog to advocate that AI be used to enhance human intelligence, that is, intelligence augmentation (IA). This is especially important in areas where human intelligence is woefully lacking. Unfortunately, humans, who are regarded as social animals, have difficulty reconciling conflicting political and religious beliefs. Artificial intelligence could be used here in an intelligence augmentation (IA) role. Given polarized beliefs, dead ends are reached. IA could suggest different ways of framing problematic issues. Lakoff’s ideas, promoted in the series of healthy memory blog posts under the rubric “Linguistics and Cognitive Science in the Pursuit of Civil Discourse,” could provide the initial point of departure. Learning would take place, these ideas would be refined further, and disagreeing parties might be surprised by how much they ultimately agree.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Zucked

March 28, 2019

The title of this post is the first part of the title of an important book by Roger McNamee. The remainder of the title is “Waking Up to the Facebook Catastrophe.” Roger McNamee is a longtime tech investor and tech evangelist. He was an early advisor to Facebook founder Mark Zuckerberg. To his friends Zuckerberg is known as “Zuck.” McNamee was an early investor in Facebook and he still owns shares.

The prologue begins with a statement made by Roger to Dan Rose, the head of media partnerships at Facebook, on November 9, 2016: “The Russians used Facebook to tip the election!” One day early in 2016 he started to see things happening on Facebook that did not look right. He started pulling on that thread and uncovered a catastrophe. In the beginning, he assumed that Facebook was a victim and he just wanted to warn friends. What he learned in the months that followed shocked and disappointed him. He learned that his faith in Facebook had been misplaced.

This book is about how Roger became convinced that even though Facebook provided a compelling experience for most of its users, it was terrible for America and needed to change or be changed, and what Roger tried to do about it. The book covers what Roger knows about the technology that enables internet platforms like Facebook to manipulate attention. He explains how bad actors exploit the design of Facebook and other platforms to harm and even kill innocent people. He explains how democracy has been undermined because of design choices and business decisions by controllers of internet platforms that deny responsibility for the consequences of their actions. He explains how the culture of these companies causes employees to be indifferent to the negative side effects of their success. At the time the book was written, there was nothing to prevent more of the same.

Roger writes that this is a story about trust. Facebook and Google, as well as other technology platforms, are the beneficiaries of trust and goodwill accumulated over fifty years of earlier generations of technology companies. But they have taken advantage of this trust, using sophisticated techniques to prey on the weakest aspects of human psychology, to gather and exploit private data, and to craft business models that do not protect users from harm. Now users must learn to be skeptical about the products they love, to change their online behavior, to insist that platforms accept responsibility for the impact of their choices, and to push policy makers to regulate the platforms to protect the public interest.

Roger writes, “It is possible that the worst damage from Facebook and the other internet platforms is behind us, but that is not where the smart money will place its bet. The most likely case is that the technology and business model of Facebook and others will continue to undermine democracy, public health, privacy, and innovation until a countervailing power, in the form of government intervention or user protest, forces change.”

The World Wide Web Goes Mobile

January 16, 2019

This is the fourth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” On January 9, 2007, Apple cofounder and CEO Steve Jobs introduced the world to the iPhone. Its list of features included a touchscreen; handheld integration of movies, television, and music; a high-quality camera; and major advances in call reception and voicemail. The most radical innovation was a speedy, next-generation browser that could shrink and reshuffle websites, making the entire internet mobile-friendly.

The next year Apple officially opened its App Store. Now anything was possible, as long as it was channeled through a central marketplace. Developers eagerly launched their own internet-enabled games and utilities, built atop the iPhone’s sturdy hardware (there are about 2.5 million such apps today). With the launch of Google’s Android operating system and the competing Google Play Store that same year, smartphones ceased to be the niche of tech enthusiasts, and the underlying business of the internet soon changed.

There were some 2 billion mobile broadband subscriptions worldwide by 2013. By 2020, that number is expected to reach 8 billion. In the United States, where three-quarters of Americans own a smartphone, these devices have long since replaced televisions as the most commonly used piece of technology.

The following is taken directly from the text: “The smartphone combined with social media to clear the last major hurdle in the race started thousands of years ago. Previously, even if internet services worked perfectly, users faced a choice. They could be in real life but away from the internet. Or they could tend to their digital lives in quiet isolation, with only a computer screen to keep them company. Now, with an internet-capable device in their pocket, it became possible for people to maintain both identities simultaneously. Any thought spoken aloud could be just as easily shared in a quick post. A snapshot of a breathtaking sunset or plate of food (especially food) could fly thousands of miles away before darkness had fallen or the meal was over. With the advent of mobile livestreaming, online and offline observers could watch the same event unfold in parallel.”

Twitter was one of the earliest beneficiaries of the smartphone. Silicon Valley veterans who were hardcore free speech advocates founded the company in 2006. They envisioned a platform with millions of public voices spinning the story of their lives in 140-character bursts. This reflected the new sense that it was the network, rather than the content on it, that mattered.

Twitter grew along with smartphone use. In 2007, its users were sending 5,000 tweets per day. By 2010, that number was up to 50 million; by 2015, 500 million. Better web technology offered users the chance to embed hyperlinks, images, and video in their updates.

The most prominent Twitter user is Donald Trump, who likened it to “owning your own newspaper.” What he liked most about it was that it featured one perfect voice: his own.
It appears to be his primary means of communication. It also highlights the risks inherent in using Twitter impulsively.

Donald Trump and the Dunning-Kruger Effect

January 11, 2019

There have been fourteen prior healthy memory blog posts on the Dunning-Kruger effect. Angela Fritz in the 8 Jan 2019 issue of the Washington Post wrote a timely article titled “Psychological phenomenon helps explain the confidence of the incompetent.” The subtitle is “Dunning-Kruger effect drawing a surge of interest during the Trump years.” She writes, “In their 1999 paper, published in the Journal of Personality and Social Psychology, David Dunning and Justin Kruger put data to what has been known by philosophers since Socrates, who supposedly said something along the lines of “the only true wisdom is knowing when you know nothing.” Charles Darwin followed that up in 1871 with “ignorance more frequently begets confidence than does knowledge.”

Dunning and Kruger quizzed people on several topics, such as grammar, logical reasoning, and humor. After each test, they asked the participants how they thought they did. Specifically, participants were asked how many other quiz-takers they beat. Even though the results confirmed their hypothesis, the researchers were still shocked by them. No matter the subject, people who did poorly on the test ranked their competence much higher. On average, test takers who scored as low as the 10th percentile ranked themselves near the 70th percentile. Those least likely to know what they were talking about believed they knew as much as the experts. These results have been replicated in at least a dozen different domains, including math skills, wine tasting, chess, medical knowledge among surgeons, and firearm safety among hunters.
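To make the reported gap concrete, here is a minimal sketch in Python, using entirely invented scores rather than Dunning and Kruger’s data, of how an actual percentile rank is computed and set beside a self-estimate.

```python
# Illustrative only: synthetic numbers, not data from Dunning and Kruger (1999).
from scipy import stats

# Hypothetical quiz scores (percent correct) for ten participants.
scores = [15, 25, 30, 40, 45, 55, 60, 70, 85, 95]

# Each participant's self-estimate of the percentile of peers they beat.
self_estimates = [70, 65, 68, 60, 62, 64, 66, 72, 80, 88]

for score, guess in zip(scores, self_estimates):
    # Actual percentile rank of this score within the group.
    actual = stats.percentileofscore(scores, score)
    print(f"score={score:3d}  actual percentile={actual:5.1f}  self-estimate={guess}")
```

With numbers like these, the weakest scorers still place themselves near the 70th percentile, which is the pattern the study describes.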

The author notes that during the election and in the months after the presidential inauguration, interest in the Dunning-Kruger effect surged. Google searches for “dunning-kruger” peaked in May 2017, according to Google Trends, and have remained high. Time spent on the Dunning-Kruger effect Wikipedia entry has skyrocketed since late 2015.
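Readers who want to check the trend themselves can pull the same curve from Google Trends. Here is a minimal sketch using the unofficial pytrends package; the keyword, timeframe, and region are this blog’s choices, not anything specified in the article.

```python
# Sketch using the unofficial pytrends package (pip install pytrends).
# Google Trends reports relative search interest scaled 0-100, not raw counts.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["dunning-kruger"], timeframe="2015-01-01 2019-01-01", geo="US")

interest = pytrends.interest_over_time()    # one row per week
print(interest["dunning-kruger"].idxmax())  # week of peak search interest
```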

The immediately preceding post, “A President Divorced from Reality,” documents the enormous knowledge that Trump says he has to accompany his self-proclaimed highest IQ. If anything, his delusional disorder only amplifies this effect.

Brendan Nyhan, a political scientist at the University of Michigan said, “Donald Trump has been overestimating his knowledge for decades. It’s not surprising that he would continue that pattern into the White House.”

Steven Sloman, a cognitive psychologist at Brown University, said Dunning-Kruger “offers an explanation for a kind of hubris. The fact is, that’s Trump in a nutshell. He’s a man with zero political skill who has no idea he has zero political skill. And it’s given him extreme confidence.”

Sloman thinks the Dunning-Kruger effect has become popular outside of the research world because it is a simple phenomenon that could apply to all of us, and because people are desperate to understand what’s going on in the world. Many people “cannot wrap their minds around the rise of Trump,” Sloman said. “He’s exactly the opposite of everything we value in a politician, and he’s the exact opposite of what we thought Americans valued.” Clearly, that view did not reflect what too many Americans actually thought.

Additional research by Dunning shows the poorest performers are also the least likely to accept criticism or show interest in self-improvement.

Some might ask: What about Trump’s success as a businessman and celebrity? His celebrity was based on the false belief that Trump was a successful businessman. The truth is that Trump is a failed businessman who has declared bankruptcy numerous times. According to Donald Trump Jr., his father’s financing comes from the Russians. The Russians have recruited him and are using him for their purposes.

According to Dunning, the effect is particularly dangerous when someone with influence or the means to do harm doesn’t have anyone who can speak honestly about their mistakes. He notes several plane crashes that could have been avoided if the crew had spoken up to an overconfident pilot.

Dunning explained, “You get into a situation where people can be too deferential to the people in charge. You have to have people around you who are willing to tell you you’re about to make an error.”

HM is more upset by Trump supporters than by Trump himself. Eventually the country should be rid of Trump, but his supporters will remain. How to explain them? Perhaps the Dunning-Kruger effect can be extended to them. These people eschew expertise, ascribing it to the “deep state.” And they are highly confident in their contempt for expertise.

HM’s fear is that there is a stupidity pandemic that can be understood by the Dunning-Kruger effect. Research needs to be done on how to overcome this pandemic.

Scale of Russian Operation Detailed

December 23, 2018

The title of this post is identical to the title of an article by Craig Timberg and Tony Romm in the 17 Dec ’18 issue of the Washington Post. Subtitles are: EVERY MAJOR SOCIAL MEDIA PLATFORM USED and Report finds Trump support before and after election. The report is the first to analyze the millions of posts provided by major technology firms to the Senate Intelligence Committee.

The research was done by Oxford University’s Computational Propaganda Project and Graphika, a network analysis firm. It provides new details on how Russians worked at the Internet Research Agency (IRA), which U.S. officials have charged with criminal offenses for interfering in the 2016 campaign. The IRA divided Americans into key interest groups for targeted messaging. The report found that these efforts shifted over time, peaking at key political moments, such as presidential debates or party conventions. This report substantiates facts presented in prior healthy memory blog posts.

The data sets used by the researchers were provided by Facebook, Twitter, and Google and covered several years up to mid-2017, when the social media companies cracked down on the known Russian accounts. The report also analyzed data separately provided to House Intelligence Committee members.

The report says, “What is clear is that all of the messaging clearly sought to benefit the Republican Party and specifically Donald Trump. Trump is mentioned most in campaigns targeting conservatives and right-wing voters, where the messaging encouraged these groups to support his campaign. The main groups that could challenge Trump were then provided messaging that sought to confuse, distract and ultimately discourage members from voting.”

The report provides the latest evidence that Russian agents sought to help Trump win the White House. Democrats and Republicans on the panel previously studied the U.S. intelligence community’s 2017 finding that Moscow aimed to assist Trump, and in July, said the investigators had come to the correct conclusion. Nevertheless, some Republicans on Capitol Hill continue to doubt the nature of Russia’s interference in the election.

The Russians aimed energy at activating conservatives on issues such as gun rights and immigration, while sapping the political clout of left-leaning African American voters by undermining their faith in elections and spreading misleading information about how to vote. Many other groups such as Latinos, Muslims, Christians, gay men and women received at least some attention from Russians operating thousands of social media accounts.

The report offered some of the first detailed analyses of the role played by YouTube and Instagram in the Russian campaign, as well as anecdotes about how Russians used other social media platforms—Google+, Tumblr and Pinterest—that had received relatively little scrutiny. They also used email accounts from Yahoo, Microsoft’s Hotmail service, and Google’s Gmail.

While reliant on data provided by the technology companies, the authors also highlighted the companies’ “belated and uncoordinated response” to the disinformation campaign and, once it was discovered, their failure to share more with investigators. The authors urged that in the future the companies provide data in “meaningful and constructive” ways.

Facebook provided the Senate with copies of posts from 81 Facebook pages and information on 76 accounts used to purchase ads, but it did not share posts from other accounts run by the IRA. Twitter has made it challenging for outside researchers to collect and analyze data on its platform through its public feed.

Google submitted information in a way that was especially difficult for researchers to handle, providing content such as YouTube videos but not the related data that would have allowed a full analysis. The researchers wrote that the YouTube information was so hard to study that they instead tracked links to its videos from other sites in hopes of better understanding YouTube’s role in the Russian effort.

The report expressed concern about the overall threat social media poses to political discourse within and among nations, warning that companies once viewed as tools for liberation in the Arab world and elsewhere are now a threat to democracy.

The report also said, “Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement to being a computational tool for social control, manipulated by canny political consultants and available to politicians in democracies and dictatorships alike.”

The report traces the origins of Russian online influence operations to Russian domestic politics in 2009 and says that ambitions shifted to include U.S. politics as early as 2013. The efforts to manipulate Americans grew sharply in 2014 and every year after, as teams of operatives spread their work across more platforms and accounts to target larger swaths of U.S. voters by geography, political interests, race, religion and other factors.

The report found that Facebook was particularly effective at targeting conservatives and African Americans. More than 99% of all engagements—meaning likes, shares and other reactions—came from 20 Facebook pages controlled by the IRA including “Being Patriotic,” “Heart of Texas,” “Blacktivist” and “Army of Jesus.”

Given that Trump lost the popular vote, it is difficult to believe that he could have carried the Electoral College without this impressive support from the Russians. One can also envisage Ronald Reagan thrashing about in his grave knowing that the Republican presidential candidate was heavily indebted to Russia and that so many Republicans still support Trump.
© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Cyberwar

October 31, 2018

“Kiselev called information war the most important kind of war. At the receiving end, the chairwoman of the Democratic Party wrote of ‘a war, clearly, but waged on a different kind of battlefield.’ The term was to be taken literally. Carl von Clausewitz, the most famous student of war, defined it as ‘an act of force to compel our enemy to do our will.’ What if, as the Russian military doctrine of the 2010s posited, technology made it possible to engage the enemy’s will directly, without the medium of violence? It should be possible, as a Russian military planning document of 2013 proposed, to mobilize the ‘protest potential of the population’ against its own interests, or, as the Izborsk Club specified in 2014, to generate in the United States a ‘destructive paranoid reflection.’ Those are concise and precise descriptions of Trump’s candidacy. The fictional character won, thanks to votes meant as a protest against the system, and thanks to voters who believed paranoid fantasies that simply were not true… The aim of Russian cyberwar was to bring Trump to the Oval Office through what seemed to be normal procedures. Trump did not need to understand this, any more than an electrical grid has to know when it is disconnected. All that matters is that the lights go out.”

“The Russian FSB and Russian military intelligence (the GRU) both took part in the cyberwar against the United States. The dedicated Russian cyberwar center known as the Internet Research Agency was expanded to include an American Department when in June 2015 Trump announced his candidacy. About ninety new employees went to work on-site in St. Petersburg. The Internet Research Agency also engaged about a hundred American political activists who did not know for whom they were working. The Internet Research Agency worked alongside Russian secret services to move Trump into the Oval Office.”

“It was clear in 2016 that Russians were excited about these new possibilities. That February, Putin’s cyber advisor Andrey Krutskikh boasted: ‘We are on the verge of having something in the information arena that will allow us to talk to the Americans as equals.’ In May, an officer of the GRU bragged that his organization was going to take revenge on Hillary Clinton on behalf of Vladimir Putin. In October, a month before the elections, Pervyi Kanal published a long and interesting meditation on the forthcoming collapse of the United States. In June 2017, after Russia’s victory, Putin spoke for himself, saying that he had never denied that Russian volunteers had made cyber war against the United States.”

“In a cyberwar, an ‘attack surface’ is the set of points in a computer program that allow hackers access. If the target of a cyberwar is not a computer program but a society, then the attack surface is something broader: software that allows the attacker contact with the mind of the enemy. For Russia in 2015 and 2016, the American attack surface was the entirety of Facebook, Instagram, Twitter, and Google.”

“In all likelihood, most American voters were exposed to Russian propaganda. It is telling that Facebook shut down 5.8 million fake accounts right before the election of November 2016. These had been used to promote political messages. In 2016, about a million sites on Facebook were using a tool that allowed them to artificially generate tens of millions of ‘likes,’ thereby pushing certain items, often fictions, into the newsfeed of unwitting Americans. One of the most obvious Russian interventions was the 470 Facebook sites placed by Russia’s Internet Research Agency, which purported to be those of American political organizations or movements. Six of these had 340 million shares each of content on Facebook, which would suggest that all of them taken together had billions of shares. The Russian campaign also included at least 129 event pages, which reached at least 336,300 people. Right before the election, Russia placed three thousand advertisements on Facebook, and promoted them as memes across at least 180 accounts on Instagram. Russia could do so without including any disclaimers about who had paid for the ads, leaving Americans with the impression that foreign propaganda was an American discussion. As researchers began to calculate the extent of American exposure to Russian propaganda, Facebook deleted more data. This suggests that the Russian campaign was embarrassingly effective. Later, the company told investors that as many as sixty million accounts were fake.”

“Americans were not exposed to Russian propaganda randomly, but in accordance with their own susceptibility, as revealed by their practices on the internet. People trust what sounds right, and trust permits manipulation. In one variation, people are led towards even more intense outrage about what they already fear or hate. The theme of Muslim terrorism, which Russia had already exploited in France and Germany, was also developed in the United States. In crucial states such as Michigan and Wisconsin, Russia’s ads were targeted at people who could be aroused by anti-Muslim messages. Throughout the United States, likely Trump voters were exposed to pro-Clinton messages on what purported to be American Muslim sites. Russian pro-Trump propaganda associated refugees with rapists. Trump had done the same when announcing his candidacy.”

“Russian attackers used Twitter’s capacity for massive retransmission. Even in normal times on routine subjects, perhaps 10% of Twitter accounts (a conservative estimate) are bots rather than human beings: that is, computer programs of greater or lesser sophistication, designed to spread certain messages to a target audience. Though bots are less numerous than humans on Twitter, they are more efficient than humans in sending messages. In the weeks before the election, bots accounted for about 20% of the American conversation about politics. An important scholarly study published the day before the polls opened warned that bots could ‘endanger the integrity of the presidential election.’ It cited three main problems: ‘first, influence can be redistributed across suspicious accounts that may be operated with malicious purposes; second, the political conversation can be further polarized; third, spreading misinformation and unverified information can be enhanced.’ After the election, Twitter identified 2,752 accounts as instruments of Russian political influence. Once Twitter started looking, it was able to identify about a million suspicious accounts per day.”

“Bots were initially used for commercial purposes. Twitter has an impressive capacity to influence human behavior by offering deals that seem cheaper or easier than alternatives. Russia took advantage of this. Russian Twitter accounts suppressed the vote by encouraging Americans to ‘text-to-vote,’ which is impossible. The practice was so massive that Twitter, which is very reluctant to intervene in discussions over its platform, finally had to admit its existence in a statement. It seems possible that Russia also digitally suppressed the vote in another way: by making voting impossible in crucial places and times. North Carolina, for example, is a state with a very small Democratic majority, where most Democratic voters are in cities. On Election Day, voting machines in cities ceased to function, thereby reducing the number of votes recorded. The company that produced the machines in question had been hacked by Russian military intelligence. Russia also scanned the electoral websites of at least twenty-one American states, perhaps looking for vulnerabilities, perhaps seeking voter data for influence campaigns. According to the Department of Homeland Security, ‘Russian intelligence obtained and maintained access to elements of multiple U.S. state or local electoral boards.’”

“Having used its Twitter bots to encourage a Leave vote in the Brexit referendum, Russia now turned them loose in the United States. In several hundred cases (at least), the very same bots that worked against the European Union attacked Hillary Clinton. Most of the foreign bot traffic was negative publicity about her. When she fell ill on September 11, 2016, Russian bots massively amplified news of the event, creating a trend on Twitter under the hashtag #HillaryDown. Russian trolls and bots also moved to support Donald Trump directly at crucial points. Russian trolls and bots praised Donald Trump and the Republican National Convention over Twitter. When Trump had to debate Clinton, which was a difficult moment for him, Russian trolls and bots filled the ether with claims that he had won or that the debate was somehow rigged against him. In crucial swing states that Trump had won, bot activity intensified in the days before the election. On Election Day itself, bots were firing with the hashtag #WarAgainstDemocrats. After Trump’s victory, at least 1,600 of the same bots that had been working on his behalf went to work against Macron and for Le Pen in France, and against Merkel and for the AfD in Germany. Even at this most basic technical level, the war against the United States was also the war against the European Union.”

“In the United States in 2016, Russia also penetrated email accounts, and then used proxies on Facebook and Twitter to distribute selections that were deemed useful. The hack began when people were sent an email message that asked them to enter their passwords on a linked website. Hackers then used security credentials to access that person’s email account and steal its contents. Someone with knowledge of the American political system then chose what portions of this material the American public should see, and when.”

The hacking of Democratic emails and their release through WikiLeaks are well known. The emails that were made public were carefully selected to ensure strife between supporters of Clinton and her rival for the nomination, Bernie Sanders. Their release created division at the moment when the campaign was meant to coalesce. With his millions of Twitter followers, Trump was among the most important distribution channels of the Russian hacking operation. Trump also aided the Russian endeavor by shielding it from scrutiny, denying repeatedly that Russia was intervening in the campaign.
Since Democratic congressional committees lost control of private data, Democratic candidates were molested as they ran for Congress. After their private data were released, American citizens who had given money to the Democratic Party were also exposed to harassment and threats. All this mattered at the highest levels of politics, since it affected one major political party and not the other. “More fundamentally, it was a foretaste of what modern totalitarianism is like: no one can act in politics without fear, since anything done now can be revealed later, with personal consequences.”

None of the emails released over the internet had anything to say about the relationship of the Trump campaign to Russia. “This was a telling omission, since no American presidential campaign was ever so closely bound to a foreign power. The connections were perfectly clear from the open sources. One success of Russia’s cyberwar was that the seductiveness of the secret and the trivial drew America away from the obvious and the important: that the sovereignty of the United States was under attack.”

Quotes are taken directly from “The Road to Unfreedom: Russia, Europe, America” by Timothy Snyder

The Powerful Influence of Information Friction

June 29, 2018

This is the fourth post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” Dr. Roberts related that in May 2011 she had been following news about a local protest in Inner Mongolia in which an ethnic Mongol herdsman had been killed by a Han Chinese truck driver. In the following days increasingly large numbers of local Mongols began protesting outside of government buildings, culminating in sufficiently large-scale protests that the Chinese government imposed martial law. These were the largest protests that Inner Mongolia had experienced in twenty years. A few months later Dr. Roberts arrived in Beijing for the summer. During discussions with a friend she brought up the Inner Mongolia protest. Her friend could not recollect the event, saying that she had not heard of it. A few minutes later, she remembered that a friend of hers had mentioned something about it, but when she looked for information online, she could not find any, so she assumed that the protest itself could not have been that important.

This is what happened. Bloggers who posted information about the protest online had their posts quickly removed from the Internet by censors. As local media were not reporting on the event, any news of the protest was reported mainly by foreign sources, many of which had been blocked by the Great Firewall. Even for the media, information was difficult to come by, as reporting on the protests on the ground had been banned, and the local Internet had been shut off by the government.

Dr. Roberts noted that information about the protest was not impossible to find on the Internet. She had been following the news from Boston and could even do so in China. The simple use of a Virtual Private Network and some knowledge of which keywords to search for had uncovered hundreds of news stories about the protests. But her friend, a well-to-do, politically interested, tech-savvy woman, was busy, and Inner Mongolia is several hundred miles away. So after a cursory search that turned up nothing, she concluded that the news was either unimportant or non-existent.

Another of her friends was very interested in politics and followed political events closely. She was involved in multiple organizations that advocated for genuine equality and was an opinionated feminist. Because of her feminist activism, Dr. Roberts asked her whether she had heard of the five female activists who had been arrested earlier that year in China, including in Beijing, for their involvement in organizing a series of events meant to combat sexual harassment. The arrests of these five women had been covered extensively in the foreign press and had drawn an international outcry. Articles about the activists had appeared in the New York Times and on the BBC. Multiple foreign governments had called for their release. But posts about their detention were highly censored and the Chinese news media were prohibited from reporting on it. Her friend, who participated in multiple feminist social media groups and had made an effort to read Western news, still had not heard about their imprisonment.

Dr. Roberts kept encountering examples like these, where people living in China exhibited surprising ignorance about Chinese domestic events that had made headlines in the international press. They had not heard that the imprisoned Chinese activist Liu Xiaobo had won the Nobel Peace Prize. They had not heard about major labor protests that had shut down factories or bombings of local government offices. Despite the possibility of accessing this information, without newspapers, television, and social media blaring these headlines, they were much less likely to come across these stories.

Content filtering is one of the Chinese censorship methods. This involves the selective removal of social media posts in China that are written on the platforms of Chinese owned internet service providers. The government does not target criticism of government policies, but instead removes all posts related to collective action events, activists, criticism of censorship, and pornography. Censorship focuses on social media posts that are geo-located in more restive areas, like Tibet. The primary goal of government censorship seems to be to stop information flow from protest areas to other areas of China. Since large-scale protest is known to be one of the main threats to the Chinese regime, the Chinese censorship program is preventing the spread of information about protests in order to reduce their scale.

Despite extensive content filtering, if users were motivated and willing to invest time in finding information about protests, they could overcome information friction to find such information. Information is often published online before it is removed by Internet Companies. There usually is a lag of several hours to a day before content is removed from the Internet.

Even with automated and manual methods of removing content, some content is missed. And if the event is reported in the foreign press, Internet users could access information by jumping the Great Firewall using a VPN.

The structural frictions of the Great Firewall are largely effective. Only the most dedicated “jump” the Great Firewall. Those who jump the Great Firewall are younger and have more education and resources. VPN users are more knowledgeable about politics and have less trust in government. Controlling for age, having a college degree makes a user 10 percentage points more likely to jump the Great Firewall. Having money is another factor that increases the likelihood of jumping the Great Firewall. 25% of those who jump the Great Firewall say they can understand English, as compared with only 6% of all survey respondents. 12% of those who jump work for a foreign-based venture, compared to only 2% of all survey respondents. 48% of the jumpers have been abroad, compared with 17% of all respondents.
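The “controlling for age” phrasing points to a standard regression setup. Here is a hedged sketch of how such estimates might be produced from survey data; the file and column names are hypothetical and are not Dr. Roberts’ actual variables.

```python
# Sketch only: hypothetical survey file and columns, not the data behind "Censored."
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: jumps_firewall (0/1), age (years), college (0/1), income (monthly).
df = pd.read_csv("survey.csv")

# Logistic regression of firewall jumping on education, controlling for age and income.
model = smf.logit("jumps_firewall ~ age + college + income", data=df).fit()

# Average marginal effects approximate the "percentage point" differences quoted above.
print(model.get_margeff().summary())
```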

The government has cracked down on some notable websites. Google began having conflicts with the Chinese government in 2010. Finally, in June 2014, the Chinese government blocked Google outright.

Wikipedia was first blocked in 2004. Pages about particular protests have long been blocked, but the entire Wikipedia website has occasionally been made inaccessible to Chinese IP addresses.

Instagram was blocked from mainland Chinese IP addresses on September 29, 2014, due to its increased popularity among Hong Kong protesters.

DeepMind’s Virtual Psychology Lab Seeks Flaws in Digital Minds

February 28, 2018

The title of this post is identical to the title of an article by Chris Baraniuk in the News Section of the 10 February 2018 issue of the New Scientist. A team at Google’s DeepMind has developed a virtual 3D laboratory called Psychlab in which both humans and machines can take a range of simple tests and compare their cognitive abilities.

The tests were originally designed by psychologists to isolate and evaluate specific mental faculties in people, such as the ability to detect a change in an object that disappears and reappears. Now DeepMind is taking the same tests.

It is not surprising that DeepMind’s software was better at some tasks. For example, it excelled at visual search—finding a given symbol in a group of others. But it failed miserably when asked to track the position of multiple symbols on a screen, a task that people can do fairly well.

One point of the project is to expose weaknesses in AIs that might otherwise go unnoticed. This should help developers improve their own systems. Accordingly, DeepMind has released Psychlab as an open-source project so anyone can use and adapt it to their needs.

Walter Boot at Florida State University says there may be few similarities between how an AI tackles a test and the way we do: “Even if the AI performance matches the human performance, it could be doing the task in a completely different way to a human.”

DeepMind’s co-founder Demis Hassabis has a neuroscience background. Miles Brundage at the University of Oxford says, “Comparing AI cognition with human cognition is still tantalising. Psychlab is in this spirit.”

Social Media Putting Democracy at Risk

February 24, 2018

This blog post is based on an article titled “YouTube excels at recommending videos—but not at detecting hoaxes” by Craig Timberg, Drew Harwell, and Tony Romm in the 23 Feb 2018 issue of the Washington Post. The article begins, “YouTube’s failure to stop the spread of conspiracy theories related to last week’s school shooting in Florida highlights a problem that has long plagued the platform: It is far better at recommending videos that appeal to users than at stanching the flow of lies.”

To be fair, YouTube’s fortunes are based on how well its recommendation algorithm is tuned to the tastes of individual viewers. Consequently, the recommendation algorithm is its major strength. YouTube’s weakness in detecting misinformation was on stark display this week as demonstrably false videos rose to the top of YouTube’s rankings. The article notes that one clip that mixed authentic news images with misleading context earned more than 200,000 views before YouTube yanked it Wednesday for breaching its rules on harassment.

The article continues, “These failures this past week—which also happened on Facebook, Twitter, and other social media sites—make it clear that some of the richest, most technically sophisticated companies in the world are losing against people pushing content rife with untruth.”

YouTube apologized for the prominence of these misleading videos, which claimed that survivors featured in news reports were “crisis actors” appearing to grieve for political gain. YouTube removed these videos and said the people who posted them outsmarted the platform’s safeguards by using portions of real news reports about the Parkland, Fla, shooting as the basis for their conspiracy videos and memes that repurpose authentic content.

YouTube made a statement that its algorithm looks at a wide variety of factors when deciding a video’s placement and promotion. The statement said, “While we sometimes make mistakes with what appears in the Trending Tab, we actively work to filter out videos that are misleading, clickbait or sensational.”

It is believed that YouTube is expanding the fields its algorithm scans, including a video’s description, to ensure that clips alleging hoaxes do not appear in the trending tab. HM recommends that humans be involved with the algorithm scans to achieve man-machine symbiosis. [to learn more about symbiosis, enter “symbiosis” into the search block of the Healthymemory blog.] The company has pledged on several occasions to hire thousands more humans to monitor trending videos for deception. It is not known whether this has been done or if humans are being used in a symbiotic manner.

Google also seems to have fallen victim to falsehoods, as it did after previous mass shootings, via its auto-complete feature. When users type the name of a prominent Parkland student, David Hogg, the word “actor” often appears in the field, a feature that drives traffic to a subject.

 

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

How Google and Facebook Hooked Us—and How to Break the Habit

February 17, 2018

The title of this post is identical to the title of an article by Douglas Heaven in the Features section of the 10 February 2018 issue of the New Scientist.

In 2009 Justin Rosenstein created Facebook’s “Like” Button. Now he has dedicated himself to atoning for it. Martin Moore of King’s College London said, “Just a few years ago, no one could say a bad word about the tech giants. Now no one can say a good word.” The author writes, “Facebook, Google, Apple and Amazon variously avoid tax, crush competition, and violate privacy, the complaints go. Their inscrutable algorithms determine what we see and what we know, shape opinions, narrow world views and even subvert the democratic order that spawned them.”

“Facebook knew right from the start it was making something that would exploit vulnerabilities in our psychology. Behavior design for persuasive tech, a discipline founded at Stanford University in California in the 1990s, is baked into much of big tech’s hardware and software. Whether it is Amazon’s “customers who bought this also bought” function, or the eye-catching red or orange “something new” dots on your smartphone app icons, big tech’s products are not just good, but subtly designed to control us, even to addict us — to grab us by the eyeballs and hold us there.”

The article goes on and develops this theme further. Here are data points offered in the article. There are 2 billion Active Facebook Users. 88% of Google’s 2017 income came from advertising. 20% of global spending on advertising goes to Facebook and Google.

And these products have been used to interfere with democracy and to subvert elections.

The article goes on and discusses various regulatory approaches for dealing with these problems, but warns about unintended consequences.

The most telling point follows: “But if big tech’s power is based entirely on our behavior, the most effective brake on their influence is to change our own bad habits.” This point has long been advocated in the healthy memory blog. The web is filled with tips for tuning out as is the healthy memory blog. Entering “technology addiction” will lead you to ways to free yourself from this addiction. Entering “Mary Aiken” will lead you to many posts based on her book “The Cyber Effect,” which you might find are well worth your time.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Workplace

September 19, 2017

This is the tenth post based on “The Distracted Mind: Ancient Brains in a High Tech World” by Drs. Adam Gazzaley and Larry Rosen.

A study of more than 200 employees at a variety of companies examined the factors that predicted employee stress levels. Although having too much work to do was the best predictor, it was only slightly stronger in predicting exhaustion, anxiety, and physical complaints than outside interruptions, many of which were electronic in nature. Gloria Mark summarized one study by noting that “working faster with interruptions has its cost: people in the interrupted conditions experienced a higher workload, more stress, more time pressure and effort. So interrupted work may be done faster, but at a price.” Clive Thompson, in a New York Times interview, summed up research results on workplace interruptions by asserting that “we humans are Pavlovian; even though we know we’re just pumping ourselves full of stress, we can’t help frantically checking our email the instant the bell goes ding.”

Open office settings further exacerbate this problem. Approximately 70% of US offices—including those at Google, Yahoo, Goldman Sachs, and Facebook—have either no partitions or low ones that do not make for quiet workplaces. Research has shown that open offices promote excessive distractions. HM personally testifies to the disruptive effects of these distractions. A content analysis of 27 open-office studies identified auditory distractions, job dissatisfaction, illness, and stress as major ramifications of this type of workplace.

The bottom line is that being constantly interrupted and having to spend extra time to remember what we were doing has a negative impact on workplace productivity and quality of life. One 2005 study, before the major increase in smartphone usage, estimated that when office workers are interrupted as often as eleven times an hour it costs the United States $558 billion per year.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

 

These Posts Only Scratched the Surface

September 5, 2017

Of the groundbreaking book by Seth Stephens-Davidowitz, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are,” the preceding posts have only scratched the surface. The adjective groundbreaking is appropriate, as this book opens up a new and very valuable source of data: internet searches. These searches bypass most of our defenses and provide a more accurate view of the person making the searches. Seth describes not only how words are used as data, but also how bodies and pictures are used as data.

One section is titled Digital Truth Serum. In addition to Hate and Prejudice, and the Internet itself, it covers the truth about customers, child abuse, abortion, and sex. HM expects that this book will become a best seller primarily for its truth about these very sensitive topics. Much of this true content is depressing, and the author asks, “Can We Handle the Truth?”

A section titled Zooming In discusses
What’s Really Going on in Our Counties, Cities, and Towns?
How We Fill Our Minutes and Hours
Our Doppelgängers
Seth tells stories using data.

A section titled All the World’s a Lab discusses the techniques Google and other companies use to test and evaluate their presentations. It also discusses what Seth terms Nature’s Cruel—but Enlightening—Experiments.

The last part of the book is titled: BIG DATA HANDLE WITH CARE.
Here he discusses what Big Data Can and Cannot Do that includes The Curse of Dimensionality and The Overemphasis on What is Measurable. Although the discussion is technical, it should be accessible to most readers.

The penultimate chapter discusses two dangers:
The Danger of Empowered Corporations
The Danger of Empowered Governments

 

The Response to Obama’s Prime-time Address After the Mass Shooting in San Bernardino

September 1, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz, “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who We Really Are.” On December 2, 2015, in San Bernardino, California, Rizwan Farook and Tashfeen Malik entered a meeting of Farook’s coworkers armed with semiautomatic pistols and semiautomatic rifles and murdered fourteen people. Literally minutes after the media first reported one of the shooters’ Muslim-sounding names, a disturbing number of Californians had decided what they wanted to do with Muslims: kill them.

The top Google search in California containing the word “Muslims” at the time was “kill Muslims,” and Americans searched for that phrase with about the same frequency that they searched for “martini recipe,” “migraine symptoms,” and “Cowboys roster.” In the days following the attack, for every American concerned with “Islamophobia,” another was searching for “kill Muslims.” While hate searches had been approximately 20% of all searches about Muslims before the attack, more than half of all search volume about Muslims became hateful in the hours that followed it.

These search data can inform us how difficult it can be to calm the rage. Four days after the shooting, then-president Obama gave a prime-time address to the country. He wanted to reassure Americans that the government could both stop terrorism and, perhaps more important, quiet the dangerous Islamophobia.

Obama spoke of the importance of inclusion and tolerance in powerful and moving rhetoric. The Los Angeles Times praised Obama for “[warning] against allowing fear to cloud our judgment.” The New York Times called the speech both “tough” and “calming.” The website Think Progress praised it as “a necessary tool of good governance, geared towards saving the lives of Muslim Americans.” Obama’s speech was judged a major success.

But was it? Google search data did not support such a conclusion. Seth examined the data together with Evan Soltas. In the speech the president said, “It is the responsibility of all Americans—of every faith—to reject discrimination.” But searches calling Muslims “terrorists,” “bad,” “violent,” and “evil” doubled during and shortly after the speech. President Obama also said, “It is our responsibility to reject religious tests on who we admit into this country.” But negative searches about Syrian refugees, a mostly Muslim group then desperately looking for a safe haven, rose 60%, while searches asking how to help Syrian refugees dropped 35%. Obama asked Americans to “not forget that freedom is more powerful than fear.” Still, searches for “kill Muslims” tripled during the speech. Just about every negative search Seth and Soltas could think to test regarding Muslims shot up during and after Obama’s speech, and just about every positive search they could think to test declined.

So instead of calming the angry mob, as people thought he was doing, Obama actually inflamed it, according to the internet data. Seth writes, “Things that we think are working can have the exact opposite effect from the one we expect. Sometimes we need internet data to correct our instinct to pat ourselves on the back.”

So what can be done to quell this particular form of hatred so virulent in America? We’ll try to address this in the next post.

Implicit Versus Explicit Prejudice

August 30, 2017

This post is based largely on the groundbreaking book by Seth Stephens-Davidowitz “Everybody Lies: Big Data, New Data, and What the Internet Reveals About Who we Really Are.” Any theory of racism has to explain the following puzzle in America: On the one hand, the overwhelming majority of black Americans think they suffer from prejudice—and they have ample evidence of discrimination in police stops, job interviews, and jury decisions. On the other hand, very few white Americans will admit to being racist. The dominant explanation has been that this is due, in large part, to widespread implicit prejudice. According to this theory white Americans may mean well, but they have a subconscious bias, which influences their treatment of black Americans. There is an implicit-association test for such a bias. These tests have consistently shown that it takes most people milliseconds more to associate black faces with positive words such as “good,” than with negative words such as “awful.” For white faces, the pattern is reversed. The small extra time it takes is interpreted as evidence of someone’s implicit prejudice—a prejudice the person may not even be aware of.
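The “milliseconds more” is the quantity the test actually measures. The sketch below illustrates only the bare latency contrast with invented numbers; real IAT scoring uses Greenwald’s D-score procedure, which adds error penalties and divides by a pooled standard deviation.

```python
# Simplified illustration of the IAT idea, not the full D-score algorithm.
import statistics

# Hypothetical response times (milliseconds) for one participant.
black_positive = [742, 810, 695, 788, 760]  # black face paired with a positive word
black_negative = [701, 668, 715, 690, 680]  # black face paired with a negative word

gap = statistics.mean(black_positive) - statistics.mean(black_negative)

# A positive gap (slower on the counter-stereotypical pairing) is read as implicit bias;
# in practice the gap is standardized before interpretation.
print(f"mean latency gap: {gap:.0f} ms")
```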

There is an alternative explanation for the discrimination that African-Americans feel and whites deny: hidden explicit racism. People might harbor widespread conscious racism to which they do not want to confess—especially in a survey. This is what the search data seem to be saying. There is nothing implicit about searching for “n_____ jokes.” It’s hard to imagine that Americans are Googling the word “n_____” with the same frequency as “migraine” and “economist” without explicit racism having a major impact on African-Americans. There was no convincing measure of this bias prior to the Google data. Seth uses this measure to see what it explains.

It explains, as was discussed in a previous post, why Obama’s vote totals in 2008 and 2012 were depressed in many regions. It also correlates with the black-white wage gap, as a team of economists recently reported. In other words, the areas that Seth found make the most racist searches underpay black people. When the polling guru Nate Silver looked for the geographic variable that correlated most strongly with support for Trump in the 2016 Republican primary, he found it in the map of racism Seth had developed. That variable was searches for “n_____.”

Scholars have recently put together a state-by-state measure of implicit prejudice against black people, which enabled Seth to compare the effects of explicit racism, as measured by Google searches, with those of implicit bias. Using regression analysis, Seth found that, in predicting where Obama underperformed, an area’s racist Google searches explained a lot, while its performance on implicit-association tests added little.
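
A hedged sketch of the kind of regression Seth describes is shown below. The file name, column names, and data are hypothetical illustrations; Seth’s actual variables and model are not specified in this post.

```python
# Sketch of a regression comparing explicit and implicit measures of prejudice.
# The CSV file, its columns, and the data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per area, with Obama's underperformance relative to a baseline,
# the area's rate of racist Google searches, and its average implicit-bias score.
df = pd.read_csv("racism_measures_by_area.csv")

model = smf.ols(
    "obama_underperformance ~ racist_search_rate + implicit_bias_score",
    data=df,
).fit()
print(model.summary())
# In Seth's account, the explicit (search-based) measure does most of the
# explanatory work, while the implicit measure adds little.
```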

Seth did find that subconscious prejudice may have a more fundamental impact on other groups. He was able to use Google searches to find evidence of implicit prejudice against another segment of the population: young girls.

So, who would be harboring bias against girls? Their parents. Of all Google searches starting “Is my 2-year-old,” the most common next word is “gifted.” But this question is not asked equally about young boys and young girls. Parents are two and a half times more likely to ask “Is my son gifted?” than “Is my daughter gifted?” Parents’ overriding concern regarding their daughters, by contrast, is anything related to appearance.

https://implicit.harvard.edu/implicit/

The URL above will take you to a number of options for taking and learning about the implicit association test.

Everybody Lies

August 27, 2017

“Everybody Lies” is the title of a groundbreaking book by Seth Stephens-Davidowitz on how to effectively exploit big data. The subtitle of this book is “Big Data, New Data, and What the Internet Reveals About Who We Really Are.” The title is a tad overblown, as we always need to have doubts about data and data analysis. However, it is fair to say that the internet currently does the best job of revealing who we really are.

The problem with surveys and interviews is that there is a bias to make ourselves look better than we really are. Indeed, we should be aware that we fool ourselves and that we can think we are responding honestly when in truth we are protecting our egos.

Stephens-Davidowitz uses Google Trends as his principal research tool and has found that people reveal more about their true selves in these searches than they do in interviews and surveys. Although the polls erred in predicting that Hillary Clinton would win the presidency, Google searches indicated that Trump would prevail.
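
Readers who want to explore Google Trends data themselves can use pytrends, an unofficial Python client for the service. The snippet below is only a sketch; the library is not affiliated with Google and its interface may change.

```python
# Sketch of pulling relative search interest with pytrends (unofficial client).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(
    kw_list=["Donald Trump", "Hillary Clinton"],
    timeframe="2016-01-01 2016-11-08",
    geo="US",
)

# Weekly search interest on a 0-100 relative scale.
interest = pytrends.interest_over_time()
print(interest.tail())
```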

Going back to Obama’s first election night, when most of the commentary focused on praise of Obama and acknowledgment of the historic nature of his election, roughly one in every hundred Google searches that included “Obama” also included “kkk” or “n_____.” On election night, searches for and sign-ups to Stormfront, a white nationalist site with surprisingly high popularity in the United States, were more than ten times higher than normal. In some states there were more searches for “n_____ president” than for “first black president.” So there was a darkness and hatred that was hidden from the traditional sources but was quite apparent in the searches that people made.

These Google searches also revealed that much of what we thought about the location of racism was wrong. Surveys and conventional wisdom placed modern racism predominantly in the South and mostly among Republicans. However, the places with the highest racist search rates included upstate New York, western Pennsylvania, eastern Ohio, industrial Michigan, and rural Illinois, along with West Virginia, southern Louisiana, and Mississippi. The Google search data suggested that the true divide was not South versus North, but East versus West. Moreover, racism was not limited to Republicans: racist searches were no higher in places with a high percentage of Republicans than in places with a high percentage of Democrats. These Google searches helped draw a new map of racism in the United States. Seth notes that Republicans in the South may be more likely to admit racism, but plenty of Democrats in the North have similar attitudes. This map proved to be quite significant in explaining the political success of Trump.

In 2012 Seth used this map of racism to reevaluate exactly the role that Obama’s race played. In parts of the country with a high number of racist searches, Obama did substantially worse than John Kerry, the white presidential candidate, had four years earlier. This relationship was not explained by any other factor about these areas, including educational levels, age, church attendance, or gun ownership. Racist searches did not predict poor performance for any Democratic candidate other than Obama. Moreover, these results implied a large effect: Obama lost roughly four percentage points nationwide just from explicit racism. Seth notes that conditions in those years were favorable for Obama’s elections; the Google Trends data indicated that there were enough racists to help win a primary or tip a general election in a year not so favorable for Democrats.

During the general election there were clues in Google Trends that the electorate might be a favorable one for Trump. Black Americans told pollsters they would turn out in large numbers to oppose Trump. However, Google searches for information on voting in heavily black areas were way down, and on election day Clinton was hurt by low black turnout. There were also more searches for “Trump Clinton” than for “Clinton Trump” in key states in the Midwest that Clinton was expected to win, and previous research has indicated that the candidate named first in such paired searches is likely the one the searcher favors.
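
The candidate-order clue could be approximated state by state with the same unofficial Google Trends client, as sketched below. The ordering heuristic itself comes from the research Seth cites, not from this code, and the keywords and timeframe are illustrative choices.

```python
# Sketch: compare "Trump Clinton" vs. "Clinton Trump" search interest by state.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(
    kw_list=["Trump Clinton", "Clinton Trump"],
    timeframe="2016-09-01 2016-11-08",
    geo="US",
)

by_state = pytrends.interest_by_region(resolution="REGION")
# Positive margins suggest "Trump Clinton" was the more common ordering in that state.
by_state["trump_first_margin"] = by_state["Trump Clinton"] - by_state["Clinton Trump"]
print(by_state.sort_values("trump_first_margin", ascending=False).head(10))
```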

The final two paragraphs in this post are taken directly from Seth’s book.

“But the major clue, I would argue, that Trump might prove a successful candidate—in the primaries, to begin with—was all that secret racism that my Obama study had uncovered. The Google searches revealed a darkness and hatred among a meaningful number of Americans that pundits, for many years, had missed. Search data revealed that we lived in a very different society from the one academics and journalists, relying on polls, thought that we lived in. It revealed a nasty, scary, and widespread rage that was waiting for a candidate to give voice to it.

People frequently lie—to themselves and to others. In 2008, Americans told surveys that they no longer cared about race. Eight years later, they elected as president Donald J. Trump, a man who retweeted a false claim that black people were responsible for the majority of murders of white Americans, defended his supporters for roughing up a Black Lives Matter protestor at one of his rallies, and hesitated in repudiating support from a former leader of the Ku Klux Klan (HM feels compelled to note that Trump has not renounced the latest endorsement by the leader of the Ku Klux Klan). The same hidden racism that hurt Barack Obama helped Donald Trump.


Is Googling Sufficient?

May 11, 2011

Googling has become synonymous with internet searching, but is googling sufficient? What about other search engines? I did a search using the keywords “healthy memory” on google.com, bing.com, yahoo.com, and ask.com. The first item returned was the same for all the search engines. After that, discrepancies appeared, although there was notable commonality among the four lists. I found it disturbing that all four searches also returned urls for foam mattresses. I also was disturbed, but not surprised, to see that the healthymemory blog was not among the items returned on the first page. This similarity in search results is not surprising, as the search algorithms are quite similar and companies apparently can buy their way to a higher listing. I find it particularly annoying to search for a tax form for a particular state and still see commercial firms at the top of the listings. It would be nice to have a search engine that did not allow firms to buy their way to the top of the listings. If anyone knows of such a search engine, please comment.
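
One simple way to quantify how much two result pages agree, once you have copied the first-page URLs by hand, is sketched below. The URLs are placeholders, not real results.

```python
# Sketch: measure overlap between two search engines' first-page results.
# The URL lists are placeholders you would fill in by hand.
google_results = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]
bing_results   = ["https://example.com/a", "https://example.com/d", "https://example.com/b"]

shared = set(google_results) & set(bing_results)
jaccard = len(shared) / len(set(google_results) | set(bing_results))
print(f"Shared results: {len(shared)}; Jaccard overlap: {jaccard:.2f}")
```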

I have maintained a standing query on Google to send me notices of entries on healthy memory. The returns I receive are slim. I find this depressing because I think this would be a topic of general interest, particularly among baby boomers who are facing the prospect of losing their memories. For a while I did receive notices occasionally about postings I had made to the healthymemory blog. Google changed some of its search criteria and I have not seen a single return regarding the healthymemory blog since. Search Engine Optimization (SEO) is a hot topic. Its objective is to recommend keywords that increase the probability of your blog or website being picked up by a search engine. I would like to increase the readership of this blog. But I don’t want to compromise it by trying to work “Lindsay Lohan” into my postings, nor do I have the resources to pay for high placements. On the other hand, I have little difficulty finding most of my published professional papers on the scholar.google.com search engine.

So how does one find websites and blogs like healthymemory? Using Google’s blog search, blogsearch.google.com, has some chance of catching one of healthymemory’s postings. On the regular google.com, restricting the search with the site: operator will yield healthymemory postings:

site:healthymemory.wordpress.com

The bottom line is that search engines are driven by a site’s popularity and by commercial payments. Quality, by itself, does not enter into the rankings. So users need to use their wits, multiple search engines, and clever search strategies.

© Douglas Griffith and healthymemory.wordpress.com, 2011. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Search Tips from a Google Scientist

April 27, 2011

Transactive memory refers to information that is not resident in one’s own biological memory but resides externally. This external source could be human, a knowledgeable person whom you ask. Or the external source could be technological. Technology can range from a note on a Post-it pad to someplace in cyberspace. Memory theorists make a distinction between information that is accessible and information that is available. Memories that are accessible are memories that can be recalled with little or no effort. Available memory is a superset of accessible memories (all accessible memories are available). Information can be available, but not accessible at the moment. Often we know that we know something, but just cannot recall it. Metamemory refers to our knowledge of our own memories. Sometimes long after we have expended great effort in trying to recall something, it will suddenly pop into our minds. Your brain can continue to search after you have abandoned your conscious attempts.

Similarly, transactive memory can be divided into three sets, the superset being potential transactive memory. Potential transactive memory includes all information stored in any form of technology and/or in any human being. Available transactive memory is information that you know exists and have probably accessed previously, but need to search for now. Accessible transactive memory is information that you know how to access immediately without having to search for it.

So there are two circumstances in which you have to resort to your search tool. One is the available case, in which you know the information exists but have forgotten how to access it. The other is the potential transactive memory case, in which you think the information might be available, but you are not sure. The APS Observer published a piece by a Google scientist offering search tips.1 The article pointed out that there are similarities between searching on the internet and searching in your own mind. The author used the term “framing” the query. A successful search involves finding the correct context and retrieval cue for the desired information.

Difficult search tasks are called “long tail” problems because they require more than the usual number of searches. Most searches are accomplished quickly, but difficult searches can take a long, long time before you find the successful key term. These difficult search tasks are more commonly found in the technical literature; popular searches tend to be easier. Once you have done a search on Google, you will receive a list of many potential responses. If you don’t find a good response on the first page of results, you can look at the left column, where clicking on “Search Tools” provides a variety of options. Clicking on “related searches” will provide a list of searches made on this or similar topics. This can provide an aid for refining your search. It also might lead you to some serendipitous site with interesting and useful information.

Google has an advanced search option that is quite easy to use. Many who have had bad experiences trying to search databases with arcane formulae might be scared off by this option, but it really is straightforward. You can require that results contain all of the words you specify, or just some of them, and you can even list words that will exclude a page from consideration. There are also options for language, file type, and even reading level.
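
For those who prefer typing queries directly, most of the same refinements can be expressed with Google’s standard search operators; for example:

```
healthy memory exercises              all of these words
"method of loci"                      this exact phrase
mnemonics OR "memory palace"          any of these words
memory -mattress                      exclude a word
memory training filetype:pdf          restrict the file type
site:healthymemory.wordpress.com      restrict results to one site
```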

If you know the website where the information is located, you can put that in your search. For example, if you were on Google and looking for something on this blog, you could simply enter

healthymemory.wordpress.com method of loci

and you would find a variety of listings specific to this topic and this blog.

1Russell, D.M. (2011). Making the Most of Online Searches. Observer, 24 (April), 3-4.

© Douglas Griffith and healthymemory.wordpress.com, 2011. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Google Art Project

March 16, 2011

I find searching for online art frustrating. Most of the websites are commercial, which is not surprising. Should any readers know of sites that are good just for viewing art, please leave comments. For the future, however, the Google Art Project1 bodes well. Google is using its “street view” technology. Frankly, I was freaked out when I first saw my neighbor’s house while getting directions on Google. I have tried a few “virtual galleries” in the past, but have been disappointed. Navigating them was difficult and the art seemed to lose quality.

Google promises to remedy these shortcomings. The “street view” technology allows the viewer to stroll through a gallery or museum and browse, and the viewer can choose to zoom in on pieces of interest. A gigapixel process is employed: on average, there are 7 billion pixels per image, a thousand times more than the average digital camera captures. In the digitized version of Whistler’s “The Princess from the Land of Porcelain” it is possible to see the faintest trace of white paint Whistler used to make his subject’s eyes glisten, as well as the nubby, gridlike texture of the canvas. Clearly, Google is offering a much more vivid rendering of online art than has previously been available.

Julian Raby, the Director of the Freer Gallery, said that “the giga-pixel experience brings us very close to the essence of the artist that simply can’t be seen in the gallery.” Brian Kennedy, the Director of the Toledo Museum of Art, said that these gigapixel images can bring out details that might not be visible to ordinary museum-goers in a gallery, but that scholars would still want a three-dimensional view of art.

Kennicott, the author of the Washington Post article, gave the technology a mixed review. During the walk-through, images often appeared washed out and grainy, and navigation also presented some problems. I think that Google is working on these problems.

So far Google has teamed up with the Metropolitan Museum of Art, the Museum of Modern Art, the Frick Gallery in New York, the Smithsonian’s Freer Gallery of Art in DC, as well as museums in London, Madrid, Moscow, Amsterdam, Florence, Berlin, and St. Petersburg (Russia).

The Google Art Project is currently available, although finding it requires perseverance and clicking links on multiple menus. Go to google.com. Click the “more” link, then “even more,” then “labs.” There you should find the Art Project Powered by Google. There is a video (click on “learn more”) explaining how to use the Google Art Project. You have the capability of saving paintings and building your own collection. We’re anxious to hear your comments and opinions.

1Kennicott, J.P. (2011). National Treasures: Google Art Project Unlocks Riches of World’s Galleries. The Washington Post, February 2, Style Section, C1. Also washingtonpost.com/wp-dyn/content/article/2011/02/01/AR2011020106321_pf.html

© Douglas Griffith and healthymemory.wordpress.com, 2011. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Google vs. Facebook Revisited

February 9, 2011

A number of blog posts back I expressed disappointment that Facebook had replaced Google in terms of usage. The stated ground for my disappointment was that Facebook consists primarily of superficial postings. True, they are enjoyable and fun, but little is learned and there is little cognitive growth. Although it is true that there are trivial searches on Google, a Google search is more likely to be for some useful point of knowledge. So, according to my line of reasoning, Google users were more likely to benefit from cognitive growth than were Facebook users.

In retrospect, I think that I might have been a bit unfair with my Facebook criticism, even though I did admit that many professional organizations are on Facebook. This blog post falls into the category of transactive memory. If you search for transactive memory on Wikipedia (or search for it on Facebook, which will link you to Wikipedia), you will find that it is memory shared among a group. The Healthymemory Blog is keeping a rather lonely vigil by including the other meaning of transactive memory, namely, information that is found in all forms of technology (the internet, but also conventional libraries). Although I do think that Google provides a more ready entry to transactive memory in the sense of technology, Facebook provides an entry to transactive memory in terms of memories shared with people.

I should also note that cognitive growth does not require delving into deep academic topics. For purposes of a healthy memory, information about sports and movies can form new memory circuits and reinvigorate old memory circuits in the brain. So the important point is to be cognitively active. In this respect Facebook can be quite helpful. It can serve as a resource for sharing information and collaborating with fellow human beings.

Personally, I provide a poor example. The Healthymemory Blog does have a Facebook posting, but I have done nothing with it, so it is rather sparse. I am interested in any experiences readers of this blog might have had in using Facebook in learning about topics of interest and in sharing information regarding those topics of interest. Please leave your comments. 

© Douglas Griffith and healthymemory.wordpress.com, 2011. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Google vs. Facebook

January 19, 2011

I found the news that Facebook had surpassed Google in usage quite depressing, particularly with respect to cognitive growth and development. Of course, it seems that everyone, myself included, is on Facebook, including professional organizations and businesses. So the news should not be surprising; why, then, do I find it depressing?

Let us compare and contrast the reasons for using Google against the reasons for using Facebook. Someone who uses Google is usually trying to learn something. This might simply be information on a restaurant, or a movie, or a stock investment. Or someone might be looking for the definition of a word or trying to understand a topic. Someone who is really interested in a topic might be using Google Scholar. Or someone might be trying to remember the name of something by searching for other things that remind them of it. It seems to me that these activities lead to cognitive growth, some of course to deeper levels than others. And you can use Google to find people and build social relationships.

Perhaps it is in this last activity that Facebook excels over Google. It is true that one can build and renew social relationships, but it seems that most “friending” is done at a superficial level. Some people “friend” just to boast of the number of friends they have. I continually receive “friend” requests from people I don’t know and can find no reason for wanting to know. With the exception of genuine social relationships, I see little on Facebook that would foster cognitive growth or a healthy memory. When I review most of the postings on Facebook, I do not think it would be any great loss if they were lost forever. The loss of a truly great search engine like Google, however, would be catastrophic.

Of course, Myspace was once a top website, and it has since declined seriously in popularity. I just looked at the top websites as of January 5, 2011 and saw that Google was back on top, with wikipedia.org in 7th place. Wikipedia should be one of the premier websites for cognitive growth.

I would like to hear your opinions on this topic. Please submit your comments.

© Douglas Griffith and healthymemory.wordpress.com, 2011. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Google and Transactive Memory

January 12, 2010

The American Dialect Society has picked “Google” as the word of the decade (2000-2009). It is worth pondering the significance of this selection. “I think my life has been more affected by ‘Google’ than ‘9/11’,” said one college student.1 At first, this assertion might seem a bit extreme, but if you were not personally affected by 9/11, it just might be true. Google has achieved such dominance in the market that it has become a synonym for internet search. For most of us, it has become a part of our daily lives. We take it for granted and perhaps fail to appreciate its larger significance.

Google is a tool that facilitates the accessing and searching of transactive memory that is located in cyberspace. It is helpful to distinguish three classes of transactive memory on the internet. Accessible transactive memory does not require Google: this is information that you cannot recall from your personal memory, but you do remember how to access via the internet. Google, however, is useful for available transactive memory. This is information that you know is on the internet; you might well have visited the site before, but it is not bookmarked and you do not remember how to find it. Then it’s Google to the rescue. Potential transactive memory is truly vast: it is all the information available on the internet, which is a substantial percentage of all human knowledge. Potential transactive memory presents an enormous opportunity for cognitive growth. Google, along with other sites such as delicious.com, is a key tool for accessing potential transactive memory and converting portions of it to available transactive memory, accessible transactive memory, or your own personal biological memory, depending on how well you need to know this information.

In this light, Google is a key tool for a healthy memory and cognitive growth. As we age there is an increasing tendency to rely upon what we know and not to pursue new knowledge. We should pursue new knowledge as long as we live.

1Zak, D. (2010). American Dialect Society picks ‘tweet,’ ‘Google’ as top words for 2009, decade. The Washington Post, January 9, 2010; C01. Also search tags for “Transactive Memory” on delicious.com.

© Douglas Griffith and healthymemory.wordpress.com, 2009. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.