Posts Tagged ‘Roger McNamee’

Get A Life!

April 9, 2019

This is the final post in a series based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Perhaps the best way of thinking about Facebook and related problems is via Nobel laureate Daniel Kahneman’s two-system view of cognition. System 1 is fast and emotional; beliefs are usually the result of System 1 processing. System 2 is slow and is what we commonly regard as thinking.

The typical Facebook user relies on System 1 processing almost exclusively and, in effect, hands his life over to Facebook. The solution is to Get a Life and take your life back from Facebook.

The easiest way to do this is to quit Facebook cold turkey. However, many users have personal reasons for using Facebook. They should take back their lives by minimizing their use of it.

First of all, ignore individual users unless you know who they are. Ignore likes and individual opinions unless you know and can evaluate the individual. Remember what they say about opinions: “they’re like a—h—-s, everybody has one.” The only opinions you should care about come from responsible polls done by well-known pollsters.

You should be able to find useful sources on your own without Facebook. Similarly, you can find journalists and authors on your own without Facebook. Spend time thinking about what you read. Is the article emotional? Is the author knowledgeable?

If you take a suggestion from Facebook, regard that source skeptically.

Try to communicate primarily via email and avoid Facebook as much as possible.

When possible, in-person meetings are to be preferred.

In closing, it needs to be said that Facebook use leads to unhealthy memories. And perhaps, just as in the case of Trump voters, HM predicts an increased incidence of Alzheimer’s and dementia among heavy Facebook users.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

What’s Being Done

April 8, 2019

This is the twelfth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” The remainder of the book, and that remainder is large, discusses what is being done to remedy these problems. So people are concerned. One approach is to break up monopolies, but that approach ignores the basic problem. Facebook is taking certain actions of its own, one of which, encryption, is definitely bad: encryption would simply allow Facebook to hide its crimes.

One idea, which is not likely to happen but has received undeserved attention, is to monetize users’ data so that Facebook would have to pay for its use. Unfortunately, this has likely provided users with hopes of future riches from their Facebook use. Although this data is indeed how Facebook makes its money, it is unlikely to want to share that money with users. Advertisements are pervasive in the world. Although we can try to ignore them in print media, advertisements need to be sat through on television unless one wants to record everything and fast-forward through the ads later.

Moreover, there are users, and HM is one of them, who want ads presented on the basis of online behavior. Shopping online is much more efficient than conventional shopping, and ads based on interests users have shown online provide more useful information. Amazon’s suggestions are frequently very helpful.

The central problem with Facebook is the artificial intelligence and algorithms that bring users of like mind together and foster hate and negative emotions. This increases polarization and the hatred that accompanies it.

Does Facebook need to be transparent and ask whether users want to be sent off to these destinations the algorithms and AI have chosen? Even when explanations are provided, polarization might still be enhanced, as birds of a feather do tend to flock together on their own, though perhaps with less hate and extremism. There are also serious legal and freedom-of-speech problems that need to be addressed.

Tomorrow’s post provides a definitive answer to this problem.

Damaging Effects on Public Discourse

April 7, 2019

This is the eleventh post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” In the MIT Technology Review, professor Zeynep Tufekci explained why the impact of internet platforms is so damaging and hard to fix. “The problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone. It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities and we seek approval from our like-minded peers. We bond with our team by yelling at the fans on the other one. In sociology terms, we strengthen our feeling of ‘in-group’ belonging by increasing our distance from and tension with the ‘out-group’—us versus them. Our cognitive universe isn’t an echo chamber, but our social one is. That is why the various projects for fact-checking claims in the news, while valuable, don’t convince people. Belonging is stronger than facts.” To this HM would add “beliefs are stronger than facts.” Belonging leads to believing what the group believes. As has been written in previous healthymemory blog posts, believing is a System 1 process in Kahneman’s two-process view of cognition. And System 1 processing is largely emotional; it shuts out System 2 thinking and promotes stupidity.

Facebook’s scale presents unique threats to democracy. These threats are both internal and external. Although Zuck’s vision of connecting the world and bringing it together may be laudable in intent, the company’s execution has had much the opposite effect. Facebook needs to learn how to identify emotional contagion and contain it before there is significant harm. If it wants to be viewed as a socially responsible company, it may have to abandon its current policy of openness to all voices, no matter how damaging. Being socially responsible may also require the company to compromise its growth targets. In other words, being socially responsible will adversely affect the bottom line.

Are You in Control?

April 6, 2019

This is the tenth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Facebook wants you to believe that you are in control. But this control is an illusion. Maintaining this illusion is central to every platform’s success, but with Facebook, it is especially disingenuous. Menu choices limit user actions to things that serve Facebook’s interest. Facebook’s design teams exploit what are known as “dark patterns” in order to produce desired outcomes. Wikipedia defines a dark pattern as “a user interface that has been carefully crafted to trick users into doing things.” Facebook tests every pixel to ensure it produces the desired response. For example: Which shade of red best leads people to check their notifications? For how many milliseconds should notification bubbles appear in the bottom left before fading away, to most effectively keep users on the site? What measure of closeness should be used before recommending new friends for you to “add”?
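To make that kind of experimentation concrete, here is a minimal, purely illustrative sketch of an A/B test of notification-badge colors in the spirit of the “which shade of red” example above. The variant colors, click rates, and function names are all hypothetical; nothing here is Facebook’s actual code.

```python
# Purely illustrative sketch (not Facebook's code): a toy A/B test of
# notification-badge colors. Variant colors and click rates are invented.
import hashlib
import random

VARIANTS = {"A": "#FF0000", "B": "#E0245E", "C": "#C62828"}  # candidate badge reds

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into one color variant."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return list(VARIANTS)[int(digest, 16) % len(VARIANTS)]

def simulate_check(user_id: str) -> tuple[str, bool]:
    """Pretend to observe whether the user checked their notifications."""
    variant = assign_variant(user_id)
    made_up_rate = {"A": 0.30, "B": 0.33, "C": 0.31}[variant]  # hypothetical rates
    return variant, random.random() < made_up_rate

if __name__ == "__main__":
    random.seed(1)
    tallies = {v: [0, 0] for v in VARIANTS}  # variant -> [clicks, sessions]
    for i in range(100_000):
        variant, clicked = simulate_check(f"user{i}")
        tallies[variant][0] += clicked
        tallies[variant][1] += 1
    for variant, (clicks, sessions) in tallies.items():
        print(variant, VARIANTS[variant], f"click rate: {clicks / sessions:.3f}")
```

At the scale described in the next paragraph, even a fraction-of-a-percent difference between variants translates into millions of additional notification checks, which is why this kind of testing can be applied to every pixel.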

With two billion users, the cost of testing every possible configuration is small. And Facebook has taken care to make its terms of service and privacy settings hard to find and nearly impossible to understand. Facebook does place a button on the landing page to provide access to the terms of service, but few people click on it. The button is positioned so that hardly anyone even sees it. And those who do see it have learned since the early days of the internet to believe that terms of service are long and incomprehensible, so they don’t press it either.

They also use bottomless bowls. News Feeds are endless. In movies and television, scrolling credits signal to the audience that it is time to move on. They provide a “stopping cue.” Platforms with endless news feeds and autoplay remove that signal to ensure that users maximize their time on site for every visit. They also use autoplay on their videos. Consequently, millions of people are sleep deprived from binging on videos, checking Instagram, or browsing on Facebook.

Notifications exploit one of the weaker elements of human psychology. They exploit an old sales technique, called the “foot in the door” strategy, that lures the prospect with an action that appears to be low cost but sets in motion a process leading to bigger costs. We are not good at forecasting the true cost of engaging with a foot-in-the-door strategy. We behave as though notifications are personal to us, completely missing that they are automatically generated, often by an algorithm tied to an artificial intelligence that has concluded that the notification is just the thing to provoke an action that will serve Facebook’s economic interests.

We humans have a need for approval. Everyone wants to feel approved of by others. We want our posts to be liked. We want people to respond to our texts, emails, tags, and shares. This need for social approval is what made Facebook’s Like button so powerful. By controlling how often a user experiences social approval, as conferred by others, Facebook can get that user to do things that generate billions of dollars in economic value. This makes sense because the currency of Facebook is attention.

Social reciprocity is a twin of social approval. When we do something for someone else, we expect them to respond in kind. Similarly, when a person does something for us, we feel obligated to reciprocate. So when someone follows us, we feel obligated to follow them. If we receive an invitation to connect from a friend, we may feel guilty if we do not reciprocate the gesture and accept it.

Fear of Missing Out (FOMO) is another emotional trigger. This is why people check their smartphones every free moment, perhaps even when they are driving. FOMO also prevents users from deactivating their accounts. And when users do decide to deactivate, the process is difficult, with frequent attempts to keep the user from leaving.

Facebook, along with other platforms, works very hard to grow its user count but operates with little, if any, regard for users as individuals. The customer service department is reserved for advertisers. Users are the product, at best, so there is no one for them to call.

It Gets Even Worse

April 5, 2019

This is the ninth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” This post picks up where the immediately preceding post, “Amplifying the Worst Social Behavior,” stopped. Users sometimes adopt an idea suggested by Facebook or by others on Facebook as their own. For example, if someone is active in a Facebook Group associated with a conspiracy theory and then stops using the platform for a time, Facebook will do something surprising when they return. It might suggest other conspiracy theory Groups to join because they share members with the first conspiracy Group. Because conspiracy theory Groups are highly engaging, they are likely to encourage reengagement with the platform. If you join the Group, the choice appears to be yours, but the reality is that Facebook planted the seed. It did so because conspiracy theories are good for Facebook, not for you.

Research indicates that people who accept one conspiracy theory have a high likelihood of accepting a second one. The same is true of inflammatory disinformation. Roger accepts the fact that Facebook, YouTube, and Twitter have created systems that modify user behavior. Roger writes, “They should have realized that global scale would have an impact on the way people use their products and would raise the stakes for society. They should have anticipated violations of their terms of service and taken steps to prevent them. Once made aware of the interference, they should have cooperated with investigators. I could no longer pretend that Facebook was a victim. I cannot overstate my disappointment. The situation was much worse than I realized.”

Apparently, the people at Facebook live in their own preference bubble. Roger writes, “Convinced of the nobility of their mission, Zuck and his employees reject criticism. They respond to every problem with the same approach that created the problem in the first place: more AI, more code, more short-term fixes. They do not do this because they are bad people. They do this because success has warped their perception of reality. To them, connecting 2.2 billion people is so obviously a good thing, and continued growth so important, that they cannot imagine that the problems that have resulted could be in any way linked to their designs or business decisions. As a result, when confronted with evidence that disinformation and fake news spread over Facebook influenced the Brexit referendum and the election of Putin’s choice in the United States, Facebook took steps that spoke volumes about the company’s world view. They demoted publishers in favor of family, friends, and Groups on the theory that information from those sources would be more trustworthy. The problem is that family, friends, and Groups are the foundational elements of filter and preference bubbles. Whether by design or by accident, they share the very disinformation and fake news that Facebook should suppress.”

Amplifying the Worst Social Behavior

April 4, 2019

This is the eighth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Roger writes, “The competition for attention across the media and technology spectrum rewards the worst social behavior. Extreme views attract more attention, so platforms recommend them. News Feeds with filter bubbles do better at holding attention than News Feeds that don’t have them. If the worst thing that happened with filter bubbles was that they reinforced preexisting beliefs, they would be no worse than many other things in society. Unfortunately, people in a filter bubble become increasingly tribal, isolated, and extreme. They seek out people and ideas that make them comfortable.”

Roger continues, “Social media has enabled personal views that had previously been kept in check by social pressure—white nationalism is an example—to find an outlet.” This leads one to ask whether Trump would have been elected via the Electoral College if it weren’t for social media. Trump’s base consists of Nazis and white supremacists and constitutes more than a third of the citizens. Prior to the election, HM would never have believed that this was the case. Now he believes it and is close to being clinically depressed.

Continuing on, “Before the platforms arrived, extreme views were often moderated because it was hard for adherents to find one another. Expressing extreme views in the real world can lead to social stigma, which also keeps them in check. By enabling anonymity and/or private Groups, the platforms removed the stigma, enabling like-minded people, including extremists, to find one another, communicate, and, eventually, to lose the fear of social stigma.”

Once a person identifies with an extreme position on an internet platform, that person will be subject to both filter bubbles and human nature. There are two types of bubbles. Filter bubbles are imposed by others, whereas a preference bubble is a choice, although the user might be unaware of this choice. By definition, a preference bubble takes users to a bad place, and they may not even be conscious of the change. Both filter bubbles and preference bubbles increase time on site, which is a driver of revenue. Roger notes that in a preference bubble, users create an alternative reality, built around values shared with a tribe, which can focus on politics, religion, or something else. “They stop interacting with people with whom they disagree, reinforcing the power of the bubble. They go to war against any threat to their bubble, which for some users means going to war against democracy and legal norms. They disregard expertise in favor of voices from their tribe. They refuse to accept uncomfortable facts, even ones that are incontrovertible. This is how a large minority of Americans abandoned newspapers in favor of talk radio and websites that peddle conspiracy theories. Filter bubbles and preference bubbles undermine democracy by eliminating the last vestiges of common ground among a huge percentage of Americans. The tribe is all that matters, and anything that advances the tribe is legitimate. You see this effect today among people whose embrace of Donald Trump has required them to abandon beliefs they held deeply only a few years earlier. Once again, this is a problem that internet platforms did not invent. Existing issues in society created a business opportunity that platforms exploited. They created a feedback loop that reinforces and amplifies ideas with a speed and at a scale that are unprecedented.”

Clint Watts, in his book “Messing with the Enemy,” makes the case that in a preference bubble, facts and expertise can be the core of a hostile system, an enemy that must be defeated. “Whoever gets the most likes is in charge; whoever gets the most shares is an expert. Preference bubbles, once they’ve destroyed the core, seek to use their preference to create a core more to their liking, specially selecting information, sources, and experts that support their alternative reality rather than the real physical world.” Roger writes, “The shared values that form the foundation of our democracy proved to be powerless against the preference bubbles that have evolved over the past decade. Facebook does not create preference bubbles, but it is the ideal incubator for them. The algorithms ensure that users who like one piece of disinformation will be fed more disinformation. Fed enough disinformation, users will eventually wind up first in a filter bubble and then in a preference bubble. If you are a bad actor and you want to manipulate people in a preference bubble, all you have to do is infiltrate the tribe, deploy the appropriate dog whistles, and you are good to go. That is what the Russians did in 2016 and what many are doing now.”

The Effects Facebook Has on Users

April 3, 2019

This is the seventh post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Roger writes, “It turns out that connecting 2.2 billion people on a single network does not naturally produce happiness at all. It puts pressure on users, first to present a desirable image, then to command attention in the form of Likes or shares from others. In such an environment, the loudest voices dominate.” This can be intimidating. Consequently, we follow the human tendency to organize into clusters and tribes. This begins with people who share our beliefs. Most often these consist of family, friends, and the Facebook Groups to which we belong. Facebook’s News Feed encourages every user to surround him- or herself with like-minded people. Notionally, Facebook allows us to extend our friends network to include a highly diverse community, but many users stop following people with whom they disagree. It usually feels good when we cut off someone who provokes us, and lots of people do so. Consequently, friends lists become more homogeneous over time. Facebook amplifies this effect with its approach to curating the News Feed. Roger writes, “When content is coming from like-minded family, friends, or Groups, we tend to relax our vigilance, which is one of the reasons why disinformation spreads so effectively on Facebook.”

An unfortunate by-product of giving users what they want is filter bubbles. And unfortunately, there is a high correlation between the presence of filter bubbles and polarization. Roger writes, “I am not suggesting that filter bubbles create polarization, but I believe they have a negative impact on public discourse and politics because filter bubbles isolate the people stuck in them. Filter bubbles exist outside Facebook and Google, but gains in attention for Facebook and Google are increasing the influence of their filter bubbles relative to others.”

Although practically everyone on Facebook has friends and family, many are also members of Groups. Facebook allows Groups on just about anything, including hobbies, entertainment, teams, communities, churches, and celebrities. Many Groups are devoted to politics, and they cross the full spectrum. Groups enable easy targeting by advertisers, so Facebook loves them. And bad actors like them for the same reason. Cass Sunstein, who was the administrator of the White House Office of Information and Regulatory Affairs for the first Obama administration, conducted research indicating that when like-minded people discuss issues, their views tend to get more extreme over time. Jonathan Morgan of Data for Democracy has found that as few as 1 to 2 percent of a group can steer the conversation if they are well-coordinated. Roger writes, “That means a human troll with a small army of digital bots—software robots—can control a large, emotional Group, which is what the Russians did when they persuaded Groups on opposite sides of the same issue—like pro-Muslim groups and anti-Muslim groups—to simultaneously host Facebook events in the same place at the same time, hoping for a confrontation.”

Roger notes that Facebook asserts that users control their experience by picking the friends and sources that populate their News Feed, when in reality an artificial intelligence, algorithms, and menus created by Facebook engineers control every aspect of that experience. Roger continues, “With nearly as many monthly users as there are notional Christians in the world, and nearly as many daily users as there are notional Muslims, Facebook cannot pretend its business model does not have a profound effect. Facebook’s notion that a platform with more than two billion users can and should police itself also seems both naive and self-serving, especially given the now plentiful evidence to the contrary. Even if it were “just a platform,” Facebook has a responsibility for protecting users from harm. Deflection of responsibility has serious consequences.”

Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks

April 2, 2019

This is the sixth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” In 2014, Facebook published a study called “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” The experiment entailed manipulating the balance of positive and negative messages in the News Feeds of nearly seven hundred thousand users to measure the influence of social networks on mood. The internal report claimed the experiment provided evidence that emotions can spread over its platform. Facebook did not get prior informed consent or provide any warning. Facebook made people sad just to see if it could be done. Facebook faced strong criticism for this experiment. Zuck’s right-hand lady, Sheryl Sandberg, said: “This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated. And for that communication we apologize. We never meant to upset you.”

Note that she did not apologize for running a giant psychological experiment on users. Rather, she claimed that experiments like this are normal “for companies.” So she apologized only for the communication. Apparently running experiments on users without prior consent is a standard practice at Facebook.

Filter Bubbles

April 1, 2019

This is the fifth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Adults get locked into filter bubbles. Wikipedia defines a filter bubble as “a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history.”

Filter bubbles are not unique to internet platforms. They can also be found in any journalistic medium that reinforces the preexisting beliefs of its audience while suppressing any stories that might contradict them, such as Fox News. In the context of Facebook, filter bubbles have several elements. In its endless pursuit of engagement, Facebook’s AI and algorithms feed users a steady diet of content similar to what has engaged them most in the past. Usually that is content they “like.” Each click, share, and comment helps Facebook refine its AI. With 2.2 billion people clicking, sharing, and commenting every month—1.47 billion every day—Facebook’s AI knows more about users than the users can possibly imagine. All that data in one place would be a target for bad actors even if it were well protected. But Roger writes that Facebook’s business model is to give the opportunity to exploit that data to just about anyone who is willing to pay for the privilege.
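To make the feedback loop concrete, here is a minimal, purely illustrative sketch (not Facebook’s actual algorithm): a toy feed ranked by a crude predicted-engagement score, where every click on a topic raises that topic’s score and therefore its placement in the next feed. The topics, scores, and update rule are all invented for illustration.

```python
# Purely illustrative sketch (not Facebook's algorithm): a toy feed ranked by
# predicted engagement, where clicks feed back into the user's interest profile.
from collections import Counter

posts = [
    {"id": i, "topic": topic}
    for i, topic in enumerate(
        ["politics", "sports", "cats", "news", "politics",
         "cats", "politics", "sports", "cats", "politics"]
    )
]

interests = Counter({"politics": 1.0})  # the user once liked a political post

def rank_feed(posts, interests):
    """Order posts by a crude predicted-engagement score: prior interest in the topic."""
    return sorted(posts, key=lambda p: interests[p["topic"]], reverse=True)

for round_number in range(3):
    feed = rank_feed(posts, interests)
    top_of_feed = feed[:3]                  # the user only reads the top of the feed
    for post in top_of_feed:
        interests[post["topic"]] += 1.0     # engagement refines the "AI"
    print(f"round {round_number}:", [p["topic"] for p in top_of_feed])
```

After a few rounds the top of the feed contains nothing but the topic the user first engaged with, which is exactly the narrowing the paragraph above describes.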

One can make the case that these platforms compete in a race to the bottom of the brain stem—where AIs present content that appeals to the low-level emotions of the lizard brain, such things as immediate rewards, outrage, and fear. Roger writes, “Short videos perform better than longer ones. Animated GIFs work better than static photos. Sensational headlines work better than calm descriptions of events. Although the space of true things is fixed, the space of falsehoods can expand freely in any direction. False outcompetes true. Inflammatory posts work better at reaching large audiences within Facebook and other platforms.”

Roger continues, “Getting a user outraged, anxious, or afraid is a powerful way to increase engagement. Anxious and fearful users check the site more frequently. Outraged users share more content to let other people know what they should also be outraged about. Best of all from Facebook’s perspective, outraged or fearful users in an emotionally hijacked state become more reactive to further emotionally charged content. It is easy to imagine how inflammatory content would accelerate the heart rate and trigger dopamine hits. Facebook knows so much about each user that they can often tune the News Feed to promote emotional responses. They cannot do this all the time for every user, but they do it far more than users realize. And they do it subtly, in very small increments. On a platform like Facebook, where most users check the site every day, small nudges over long periods of time can eventually produce big changes.”

The Role of Artificial Intelligence

March 31, 2019

This is the fourth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Companies like Facebook and Google use artificial intelligence (AI) to build behavioral prediction engines that anticipate our thoughts and emotions based on patterns found in the vast amount of data they have accumulated about users. Users’ likes, posts, shares, comments, and Groups have taught Facebook’s AI how to monopolize our attention. As a result, Facebook can offer advertisers exceptionally high-quality targeting.

This battle for attention requires constant innovation. In the early days of the internet, the industry learned that users adapt to predictable ad layouts, skipping over them without registering any of the content. There is a tradeoff when it comes to online ads: although it is easy to verify that the right person is seeing an ad, it is much harder to make sure that the person is paying attention to it. The solution to the latter problem is to maximize the time users spend on the platform. Because users devote only a small percentage of their attention to the ads they see, the platforms try to monopolize as much of the users’ attention as possible. So Facebook, like other platforms, adds new content formats and products to stimulate more engagement. Text was enough at the outset. Next came photos, then mobile. Video is the current frontier. Facebook also introduces new products such as Messenger and, soon, dating. To maximize profits, Facebook and other platforms hide the data on the effectiveness of ads.

Platforms prevent traditional auditing practices by providing less-than-industry-standard visibility. Consequently, advertisers say, “I know half my ad spending is wasted; I just don’t know which half.” Nevertheless, platform ads work well enough that advertisers generally spend more every year. Search ads on Google offer the clearest payback; brand ads on other platforms are much harder to measure. But advertisers need to put their message in front of prospective customers, regardless of where they are. When users gravitate from traditional media to the internet, the ad dollars follow them. Platforms do whatever they can to maximize daily users’ time on site.

As is known from psychology and persuasive technology, unpredictable, variable rewards stimulate behavioral addiction (a toy sketch of such a reward schedule appears below). Like buttons, tagging, and notifications trigger social validation loops, so users do not stand a chance. We humans have evolved a common set of responses to certain stimuli that can be exploited by technology. “Fight or flight” is one example. When presented with a visual stimulus such as a vivid color (red is a trigger color), or a vibration against the skin near our pocket that signals a possible enticing reward, the body responds in predictable ways: a faster heartbeat and the release of dopamine. These are meant to be momentary responses that increase the odds of survival in a life-or-death situation. Too much of this kind of stimulation is bad for all humans, but these effects are especially dangerous for children and adolescents. The first consequences include lower sleep quality, increased stress, anxiety, depression, an inability to concentrate, irritability, and insomnia. Some develop a fear of being separated from their phone.
Many users develop problems relating to and interacting with people. Children get hooked on games, texting, Instagram, and Snapchat in ways that change the nature of human experience. Cyberbullying becomes easy over social media because, when technology mediates human relationships, the social cues and feedback loops that might normally cause a bully to experience shunning or disgust from their peers are not present.
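The “unpredictable, variable rewards” mentioned above can be modeled as a simple random-ratio schedule: each check of the app has some fixed chance of paying off, so the payoff arrives after an unpredictable number of checks. Below is a minimal, purely illustrative sketch; the 25% probability and the notion of a “reward” are invented for illustration and are not Facebook’s actual logic.

```python
# Purely illustrative sketch: a random-ratio reward schedule, one common way to
# model the "variable rewards" described above. The probability is hypothetical.
import random

def check_app(p_reward: float = 0.25) -> bool:
    """Each check has a fixed chance of a reward (a like, a tag, a new comment)."""
    return random.random() < p_reward

if __name__ == "__main__":
    random.seed(0)
    checks_since_last_reward = 0
    gaps = []                      # how many checks it took to reach each payoff
    for _ in range(1000):          # a user compulsively checking the app
        checks_since_last_reward += 1
        if check_app():
            gaps.append(checks_since_last_reward)
            checks_since_last_reward = 0
    print(f"{len(gaps)} rewards in 1000 checks; "
          f"a payoff arrived after anywhere from {min(gaps)} to {max(gaps)} checks")
```

Because the payoff never arrives on a predictable schedule, the checking behavior itself is what gets reinforced, which is why every free moment prompts another look at the phone.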

Adults get locked into filter bubbles. Wikipedia defines a filter bubble as “a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see.”

Brexit

March 30, 2019

This is the third post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” The United Kingdom voted to exit the European Union in June 2016. Many posts have been written regarding how Russia used social media, including Facebook, to push voters toward Trump so that he won the Electoral College (but not the popular vote, which his opponent won by more than 3 million votes).

The Brexit vote came as a total shock. Polling data had suggested that “Remain” would win over “Leave” by about four points. Precisely the opposite happened, and no one could explain the huge swing. A possible explanation occurred to Roger. “What if Leave had benefited from Facebook’s architecture? The Remain campaign was expected to win because the UK had a sweet deal with the European Union: it enjoyed all the benefits of membership, while retaining its own currency. London was Europe’s undisputed financial hub, and UK citizens could trade and travel freely across the open borders of the continent. Remain’s “stay the course” message was based on smart economics but lacked emotion. Leave based its campaign on two intensely emotional appeals. It appealed to ethnic nationalism by blaming immigrants for the country’s problems, both real and imaginary. It also promised that Brexit would generate huge savings that would be used to improve the National Health Service, an idea that allowed voters to put an altruistic shine on an otherwise xenophobic proposal.” So here is an example of Facebook exploiting the System 1 processes that were explained in the immediately preceding post.

Roger writes, “The stunning outcome of Brexit triggered a hypothesis: in an election context, Facebook may confer advantages to campaign messages based on fear or anger over those based on neutral or positive emotions. It does this because Facebook’s advertising business model depends on engagement, which can best be triggered through appeals to our most basic emotions. What I did not know at the time is that while joy also works, which is why puppy and cat videos and photos of babies are so popular, not everyone reacts the same way to happy content. Some people get jealous, for example. ‘Lizard brain’ emotions such as fear and anger produce a more uniform reaction and are more viral in a mass audience. When users are riled up, they consume and share more content. Dispassionate users have relatively little value to Facebook, which does everything in its power to activate the lizard brain. Facebook has used surveillance to build giant profiles on every user.”

The objective is to give users what they want, but the algorithms are trained to nudge user attention in directions that Facebook wants. These algorithms choose posts calculated to press emotional buttons, because scaring users or pissing them off increases time on site. Facebook calls it engagement when users pay attention, but the goal is behavior modification that makes advertising more valuable. At the time the book was written, Facebook was the fourth most valuable company in America, despite being only fifteen years old, and its value stems from its mastery of surveillance and behavioral modification.
So who was using Facebook to manipulate the vote? The answer is Russia, just as Russia wanted to elect Trump president. Russia used Ukraine as a proving ground for its disruptive technology on Facebook. Russia wanted to break up the EU, of which Great Britain was a prominent part. The French Minister of Foreign Affairs has found that Russia is responsible for 80% of disinformation activity in Europe. One of Russia’s central goals is to break up alliances.

Zucking

March 29, 2019

This is the second post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Roger writes, “Zuck created Facebook to bring the world together.” What Roger did not know when he met Zuck, but eventually discovered, was that Zuck’s idealism was unbuffered by realism or empathy. Zuck seems to have assumed that everyone would view and use Facebook the way he did, not imagining how easily the platform could be exploited to cause harm. He did not believe in data privacy and did everything he could to maximize disclosure and sharing. Roger writes that Zuck operated the company as if every problem could be solved with more or better code. “He embraced invasive surveillance, careless sharing of private data, and behavior modification in pursuit of unprecedented scale and influence. Surveillance, the sharing of user data, and behavioral modification are the foundation of Facebook’s success. Users are fuel for Facebook’s growth and, in some cases, the victims of it.”

The term “behavioral modification” is used here in a different sense than how it is usually meant. Typically behavioral modification is used to modify or eliminate undesirable behaviors, such as smoking. Although sometimes this involves the use of painful stimuli, there are effective techniques that avoid aversive stimuli.

The behavioral modification involved in Zucking can best be understood in terms of Kahneman’s two-process view of cognition. The two-process view provides a means of understanding both how we can process information so quickly and why cognition fails and is subject to error. There are several two-system views of cognition, all of which share the same basic ideas. Perhaps the most noteworthy is that of Nobel laureate Daniel Kahneman.

System 1 is named Intuition. It is very fast, employs parallel processing, and appears to be automatic and effortless. Its processes are so fast that they are executed, for the most part, outside conscious awareness. Emotions and feelings are also part of System 1. Its learning is associative and slow; for something to become a System 1 process typically requires much repetition and practice. Activities such as walking, driving, and conversation are primarily System 1 processes. They occur rapidly and with little apparent effort. We would not have survived if we could not carry out these types of processes rapidly. But this speed of processing is purchased at a cost: the possibility of errors, biases, and illusions.

System 2 is named Reasoning. It is controlled processing that is slow, serial, and effortful. It is also flexible. This is what we commonly think of as conscious thought. One of the roles of System 2 is to monitor System 1 for processing errors, but System 2 is slow and System 1 is fast, so errors do slip through.

Zuck’s behavioral modification involves System 1 processing almost exclusively. System 1 is largely emotional and involves little, if any, thinking. “Likes” are largely emotional responses: people like something because it is something they agree with and because it evokes a favorable emotional response. Similarly, when someone accesses a site, it is most likely a site that they like and respond to favorably.

Facebook collects the data needed to send users to sites that they like and are interested in. Most of this processing occurs at a nonconscious level, so users are not aware that they are being manipulated. But they are being manipulated, which can lead to poor decisions. Moreover, they are directed to like-minded individuals, so there is minimal chance that they will encounter different opinions and different ideas.

This behavior modification is all beneficial to Facebook. Facebook wants to keep users on Facebook as long as possible, because this results in increased ad revenue. The critical resource here is attention. And Facebook’s procedures are extremely effective at capturing and keeping attention.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Zucked

March 28, 2019

The title of this post is the first part of a title of an important book by Roger McNamee. The remainder of the title is “Waking Up to the Facebook Catastrophe.” Roger McNamee is a longtime tech investor and tech evangelist. He was an early advisor to Facebook founder Mark Zuckerberg. To his friends Zuckerberg is known as “Zuck.” McNamee was an early investor in Facebook and he still owns shares.

The prologue begins with a statement made by Roger to Dan Rose, the head of media partnerships at Facebook, on November 9, 2016: “The Russians used Facebook to tip the election!” One day early in 2016 he had started to see things happening on Facebook that did not look right. He started pulling on that thread and uncovered a catastrophe. In the beginning, he assumed that Facebook was a victim and he just wanted to warn friends. What he learned in the months that followed shocked and disappointed him. He learned that his faith in Facebook had been misplaced.

This book is about how Roger became convinced that even though Facebook provided a compelling experience for most of its users, it was terrible for America and needed to change or be changed, and what Roger tried to do about it. The book covers what Roger knows about the technology that enables internet platforms like Facebook to manipulate attention. He explains how bad actors exploit the design of Facebook and other platforms to harm and even kill innocent people. He explains how democracy has been undermined by the design choices and business decisions of internet platform controllers who deny responsibility for the consequences of their actions. He explains how the culture of these companies causes employees to be indifferent to the negative side effects of their success. At the time the book was written, there was nothing to prevent more of the same.

Roger writes that this is a story about trust. Facebook and Google, as well as other technology platforms, are the beneficiaries of trust and goodwill accumulated over fifty years by earlier generations of technology companies. But they have taken advantage of this trust, using sophisticated techniques to prey on the weakest aspects of human psychology, to gather and exploit private data, and to craft business models that do not protect users from harm. Now users must learn to be skeptical about the products they love, to change their online behavior, to insist that platforms accept responsibility for the impact of their choices, and to push policy makers to regulate the platforms to protect the public interest.

Roger writes, “It is possible that the worst damage from Facebook and the other internet platforms is behind us, but that is not where the smart money will place its bet. The most likely case is that technology and the business model of Facebook and others will continue to undermine democracy, public health, privacy, and innovation until a countervailing power, in the form of government intervention or user protest, forces change.”