Posts Tagged ‘YouTube’

How Can We Keep Technology from Rotting Our Brains?

January 27, 2020

First of all, it is important to understand that it is not technology that is rotting our brains; it is the way we are using technology. Used properly, technology provides an ideal means of enhancing our brains and building healthy memories.

The first action should be to get off social media in general, and Facebook in particular. The dangers of Facebook are well documented in this blog. Entering “Facebook” into the search block at healthymemory.wordpress.com will yield pages of posts about Facebook. The dangers of social media are also well documented in this blog. Besides, Facebook should be paying to use your data. So in addition to the other evils, one might also add theft.

We all got along before Facebook and we will find that our lives are better after Facebook. HM certainly did.

One can develop one’s own interest groups on various topics. Go to the healthy memory blog post “Mindshift Resources.” Unfortunately, fees are usually involved in actually getting a degree. Go to nopaymba.com to learn how to get an MBA-level business education at a fraction of the cost. Laura Pickard explains how to get an MBA for less than 1/100th the cost of a traditional MBA.

Go to Wikipedia to search for topics of interest or just to browse. When you find topics worth pursuing, pursue them. This will involve at least System 2 processing.

You can learn juggling on YouTube. Juggling is one of many activities that is good for developing a healthy memory.

As for GPS, it is recommended that you try navigating without it. Go to a new, safe area, traverse it, and build a mental topographic map. Two activities that benefit a healthy memory can be engaged here: walking and mentally building a topographic map.

Visiting museums is another means of developing mental spatial maps. Museums provide another opportunity for engaging in two activities that build healthy memories: building mental spatial maps and learning the content presented in the museum.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Postmortem

December 18, 2019

The title of this post is identical to the title of a chapter in Messing with the Enemy, an excellent book by Clint Watts. The postmortem on Russia’s influence and meddling in the presidential election of 2016 may never end. Trump was completely unconventional, uninformed, and unlikable in so many ways, and yet he had become the leader of the free world. Fake news entered the American lexicon, and Watts’s pre-election detailing of Russian active measures on the internet became the subject of hot debate. Had fake news swayed the U.S. presidential election?

Social media companies began digging into the data. What they found spelled dangerous trends for democracy. Americans were increasingly getting their news and information from social media instead of mainstream media. Users were not consuming factual content. Fake news, false or misleading stories from outlets of uncertain credibility, was being read far more than content from traditional newsrooms. EndTheFed.com and Political Insider produced four of the five most read false news stories in the three months leading up to the election. One story falsely claimed that Pope Francis had endorsed Donald Trump, and another story falsely claimed that Hillary Clinton’s emails hosted on WikiLeaks certified her as an ISIS supporter. Throughout December, fears of Russian election manipulations grew, and each day brought more inquiries into how Russia had trolled for Trump.

The American electorate remains divided, government operations are severely disrupted, and faith in elected leaders continues to fall. Apparently, the objectives of Russia’s active measures have been achieved. Watts concludes that Americans still don’t grasp the information war Russia perpetrated against the West, why it works, and why it continues.

Watts writes, “The Russians didn’t have to hack election machines; they hacked American minds. The Kremlin didn’t change votes; it won them, helping tear down its less-preferred candidate, Hillary Clinton, to promote one who shares their worldviews, Donald Trump.”

Watts continues, “Americans’ rapid social media consumption of news creates a national vulnerability for foreign influence. Even further, the percentage of American adults fifty and older utilizing social media sites is one of the highest in the world, at 50%. Younger Americans, aged eighteen to thirty-four, sustain a utilization rate of about 80%. Deeper analysis by the Pew Research Center shows that U.S. online news consumers still get their information from news organizations more than from their friends, but they believe the friends they stay in touch with on social media applications provide information that is just as relevant.”

A look at the Columbia Journalism Review’s media map demonstrates how social media encouraged information bubbles for each political leaning. Conservatives strongly centered their consumption around Breitbart and Fox News, while liberals relied on a more diverse spread of left-leaning outlets. For a foreign influence operation like the one the Russians ran against the United States, the highly concentrated right-wing social media landscape is an immediate, ripe target for injecting themes and messages. The American left is diversely spread, making targeted messaging more difficult.

The Internet Research Agency in St. Petersburg, Russia, bought $4,700 in advertising and, through eighteen channels, hosted more than 1,000 videos that received more than 300,000 views.

The Russians created a YouTube page called Williams and Kalvin. The page’s videos showcased two black video bloggers, with African accents, appearing to read a script claiming that Barack Obama created police brutality and calling Hillary Clinton an “old racist bitch.” The Williams and Kalvin page garnered 48,000 fans. Watts writes, “Russian influence operators employed most every platform—Instagram, Tumblr, even PokemonGo—but it was the Kremlin’s manipulation via Twitter that proved the most troubling.”

Watts concludes that U.S. government resources are needed to mount a truly effective effort. Intelligence agencies, Homeland Security, and the State Department need to rally and coordinate. Rex Tillerson was late in using the $80 million Congress had set aside for counterpropaganda resources, and then used only half of the appropriated amount. This is just a start, and a small one at that, of what America needs to do against Russian influence. The last sentence in this chapter reads, “Kislyak was right, and Putin must still wonder, ‘Why hasn’t America punched back?’”

Harmony, Disharmony, and the Power of Secrets

December 15, 2019

The title of this post is identical to the title of a chapter in Messing with the Enemy, an excellent book by Clint Watts. In 2002, CIA director George Tenet created the Harmony database as the intelligence community’s central repository for all information gathered during counterterrorism operations. This database served as a single place for bringing together information for the conduct of the emerging field of DOMINEX (document and media exploitation). At first, the Harmony database assisted soldiers in picking up clues about enemy whereabouts and communications from many different battlefields and helped support the prosecution of alleged terrorists.

A Major Steve saw al-Qaeda’s secrets from a different perspective. He focused on the strategic deliberations of terrorists, their biases and preferences, expense reports, likes and dislikes, and successes and failures, as well as what they thought of one another. In sum, these documents yielded insights into the group’s strategic weaknesses and internal fractures.

Major Steve moved to the Combating Terrorism Center (CTC) at West Point, which offered an interface for the military and government to connect with top experts in the new cultures, regions, languages, and politics challenging effective counterterrorism operations. Major Steve could unlock the Harmony database’s secrets, create an open-source repository for the public, and enlist highly educated military officers stationed at West Point to study and collaborate with top professors around the world. In 2005, the CTC launched the Harmony Program “to contextualize the inner-functioning of al-Qaeda, its associated movement, and other security threats through primary source documents.” In addition to conducting initial research on the materials, the program aimed “to make these sources, which are captured in the course of operations in Iraq, Afghanistan and other theaters, available to other scholars for further study.”

The first study was titled Harmony and Disharmony: Exploiting al-Qaeda’s Organizational Vulnerabilities. The study reviewed the employee contracts, which showed that Arab recruits were paid more than African recruits, and that married volunteers with children received more benefits and vacation than single members. The report noted that ineffective terrorists, rather than being plucked off the battlefield, should be left in the network if they can be reliably observed, even if they present easy targets. The report’s justification for this recommendation was pulled from a 1999 email sent by Ayman al-Zawahiri to a Yemeni cell leader in which he scolded a subordinate, saying, “With all due respect, this is not an accounting. It’s a summary accounting. For example, you didn’t write any date, and many of the items are vague.” Watts writes, “Nearly twenty years later, Zawahiri’s letter offers some insights into why terrorists in the ranks sought to defect to ISIS after bin Laden’s death: he was a stickler of a boss.”

The key recommendation from the report follows: “increase internal dissension within al-Qaeda’s leadership.” Communiqués between al-Qaeda subordinates challenged the direction put out by the group’s leaders and questioned whether orders should be obeyed. One member said that faulty leadership held the group back, asserting that bin Laden had rushed “to move without visions,” and asked Khalid Sheikh Mohammed, mastermind of the 9/11 attacks, to reject bin Laden’s orders.

Another study using the Harmony Database found that al-Qaeda, as a military organization, had never been particularly strong, and its success as a media organization masked deep internal divides between its leaders over strategic direction.

The Russians recognized that transparency movements relied on content, and compromising information seeded to WikiLeaks provided a new method for character assassination. The Russian intelligence services forged ahead, compromising adversaries in cyberspace through the late 1990s and early 2000s. They secretly placed child pornography on the computers of defectors and intelligence officers and leaked sex videos of political opponents on the internet, creating a media feeding frenzy. Outlets like WikiLeaks were a perfect vehicle for widespread character assassination of enemies worldwide, an open-source vehicle for planting information that allowed for plausible deniability.

Watts concludes this chapter as follows: “Many of the great chess masters have been Russian, and their leader, Vladimir Putin, is a lover of judo. Both require strategy, and victory routinely goes to those who employ their adversary’s strengths against them. As Putin famously demonstrated his judo skills on YouTube, Edward Snowden settled into a Kremlin-provided safe house. Julian Assange stowed away in the Ecuadorian embassy. The Kremlin trolls practiced on audiences in Ukraine and Syria, and occasionally heckled me. As for the hackers swirling around the Syrian Electronic Army, some of them went offline, busy working on a new project. And Russia’s cyber team came together for a new mission, with some new methods the world had yet to see and still doesn’t quite comprehend.”

It Gets Even Worse

April 5, 2019

This is the ninth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” This post picks up where the immediately preceding post, “Amplifying the Worse Social Behavior,” stopped. Users sometimes adopt an idea suggested by Facebook or by others on Facebook as their own. For example, if someone is active in a Facebook Group associated with a conspiracy theory and then stops using the platform for a time, Facebook will do something surprising when they return. It might suggest other conspiracy theory Groups to join because they share members with the first conspiracy Group. Because conspiracy theory Groups are highly engaging, they are likely to encourage reengagement with the platform. If you join the Group, the choice appears to be yours, but the reality is that Facebook planted the seed. This is because conspiracy theories are good for Facebook, not for you.

Research indicates that people who accept one conspiracy theory have a high likelihood of accepting a second one. The same is true of inflammatory disinformation. Roger accepts the fact that Facebook, YouTube, and Twitter have created systems that modify user behavior. Roger writes, “They should have realized that global scale would have an impact on the way people use their products and would raise the stakes for society. They should have anticipated violations of their terms of service and taken steps to prevent them. Once made aware of the interference, they should have cooperated with investigators. I could no longer pretend that Facebook was a victim. I cannot overstate my disappointment. The situation was much worse than I realized.”

Apparently, the people at Facebook live in their own preference bubble. Roger writes, “Convinced of the nobility of their mission, Zuck and his employees reject criticism. They respond to every problem with the same approach that created the problem in the first place: more AI, more code, more short-term fixes. They do not do this because they are bad people. They do this because success has warped their perception of reality. To them, connecting 2.2 billion people is so obviously a good thing, and continued growth so important, that they cannot imagine that the problems that have resulted could be in any way linked to their designs or business decisions. As a result, when confronted with evidence that disinformation and fake news spread over Facebook influenced the Brexit referendum and the election of Putin’s choice in the United States, Facebook took steps that spoke volumes about the company’s worldview. They demoted publishers in favor of family, friends, and Groups on the theory that information from those sources would be more trustworthy. The problem is that family, friends, and Groups are the foundational elements of filter and preference bubbles. Whether by design or by accident, they share the very disinformation and fake news that Facebook should suppress.”

Crowdsourcing

January 18, 2019

This is the sixth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” During the terrorist attack on Mumbai, witnesses opened up all the resources of the internet, using Twitter to defend against the attack. When the smoke cleared, the Mumbai attack left several legacies. It was a searing tragedy visited upon hundreds of families. It brought two nuclear powers to the brink of war. It foreshadowed a major technological shift. Hundreds of witnesses—some on-site, some from afar—had generated a volume of information that previously would have taken months of diligent reporting to assemble. By stitching these individual accounts together, the online community had woven seemingly disparate bits of data into a cohesive whole. The authors write, “It was like watching the growing synaptic connections of a giant electric brain.”

This Mumbai operation was a realization of “crowdsourcing,” an idea that had been on the lips of Silicon Valley evangelists for years. It had originally been conceived as a new way to outsource programming jobs, the internet bringing people together to work collectively, more quickly and cheaply than ever before. As social media use skyrocketed, the promise of crowdsourcing had extended to spaces beyond business.

Crowdsourcing is about redistributing power: vesting the many with a degree of influence once reserved for the few. Crowdsourcing might be about raising awareness, or about money (also known as “crowdfunding”). It can kick-start a new business or throw support to people who might have remained little known. It was through crowdsourcing that Bernie Sanders became a fundraising juggernaut in the 2016 presidential election, raking in $218 million online.

For the Syrian civil war and the rise of ISIS, the internet was the “preferred arena for fundraising.” Besides allowing wide geographic reach, it expands the circle of fundraisers, seemingly linking even the smallest donor with their gift on a personal level. As the Economist explained, this was, in fact, one of the key factors that fueled the years-long Syrian civil war. Fighters sourced needed funds by learning “to crowd fund their war by using Instagram, Facebook and YouTube. In exchange for a sense of what the war was really like, the fighters asked for donations via PayPal. In effect, they sold their war online.”

In 2016 a hard-line Iraqi militia took to Instagram to brag about capturing a suspected ISIS fighter. The militia then invited its 75,000 online fans to vote on whether to kill or release him. Eager, violent comments rolled in from around the world, including many from the United States. Two hours later, a member of the militia posted a follow-up selfie; the body of the prisoner lay in a pool of blood behind him. The caption read, “Thanks for the vote.” In the words of Adam Lineman, a blogger and U.S. Army veteran, this represented a bizarre evolution in warfare: “A guy on the toilet in Omaha, Nebraska, could emerge from the bathroom with the blood of some 18-year-old Syrian on his hands.”

Of course, crowdsourcing can be used for good as well as for evil.

Scale of Russian Operation Detailed

December 23, 2018

The title of this post is identical to the title of an article by Craig Timberg and Tony Romm in the 17 Dec ’18 issue of the Washington Post. Subtitles are: EVERY MAJOR SOCIAL MEDIA PLATFORM USED and Report finds Trump support before and after election. The report is the first to analyze the millions of posts provided by major technology firms to the Senate Intelligence Committee.

The research was done by Oxford University’s Computational Propaganda Project and Graphika, a network analysis firm. It provides new details on how Russians worked at the Internet Research Agency (IRA), which U.S. officials have charged with criminal offenses for interfering in the 2016 campaign. The IRA divided Americans into key interest groups for targeted messaging. The report found that these efforts shifted over time, peaking at key political moments, such as presidential debates or party conventions. This report substantiates facts presented in prior healthy memory blog posts.

The data sets used by the researchers were provided by Facebook, Twitter, and Google and covered several years up to mid-2017, when the social media companies cracked down on the known Russian accounts. The report also analyzed data separately provided to House Intelligence Committee members.

The report says, “What is clear is that all of the messaging clearly sought to benefit the Republican Party and specifically Donald Trump. Trump is mentioned most in campaigns targeting conservatives and right-wing voters, where the messaging encouraged these groups to support his campaign. The main groups that could challenge Trump were then provided messaging that sought to confuse, distract and ultimately discourage members from voting.”

The report provides the latest evidence that Russian agents sought to help Trump win the White House. Democrats and Republicans on the panel previously studied the U.S. intelligence community’s 2017 finding that Moscow aimed to assist Trump, and in July, said the investigators had come to the correct conclusion. Nevertheless, some Republicans on Capitol Hill continue to doubt the nature of Russia’s interference in the election.

The Russians aimed energy at activating conservatives on issues such as gun rights and immigration, while sapping the political clout of left-leaning African American voters by undermining their faith in elections and spreading misleading information about how to vote. Many other groups such as Latinos, Muslims, Christians, gay men and women received at least some attention from Russians operating thousands of social media accounts.

The report offered some of the first detailed analyses of the role played by YouTube and Instagram in the Russian campaign, as well as anecdotes about how Russians used other social media platforms—Google+, Tumblr and Pinterest—that had received relatively little scrutiny. They also used email accounts from Yahoo, Microsoft’s Hotmail service, and Google’s Gmail.

While reliant on data provided by technology companies, the authors also highlighted the companies’ “belated and uncoordinated response” to the disinformation campaign and, once it was discovered, their failure to share more with investigators. The authors urged that in the future the companies provide data in “meaningful and constructive” ways.

Facebook provided the Senate with copies of posts from 81 Facebook pages and information on 76 accounts used to purchase ads, but it did not share posts from other accounts run by the IRA. Twitter has made it challenging for outside researchers to collect and analyze data on its platform through its public feed.

Google submitted information in an especially difficult way for researchers to handle, providing content such as YouTube videos but not the related data that would have allowed a full analysis. The authors wrote that the YouTube information was so hard to study that they instead tracked the links to its videos from other sites in hopes of better understanding YouTube’s role in the Russian effort.

The report expressed concern about the overall threat social media poses to political discourse within and among nations, warning that companies once viewed as tools for liberation in the Arab world and elsewhere are now a threat to democracy.

The report also said, “Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement to being a computational tool for social control, manipulated by canny political consultants and available to politicians in democracies and dictatorships alike.”

The report traces the origins of Russian online influence operations to Russian domestic politics in 2009 and says that ambitions shifted to include U.S. politics as early as 2013. The efforts to manipulate Americans grew sharply in 2014 and every year after, as teams of operatives spread their work across more platforms and accounts to target larger swaths of U.S. voters by geography, political interests, race, religion and other factors.

The report found that Facebook was particularly effective at targeting conservatives and African Americans. More than 99% of all engagements—meaning likes, shares and other reactions—came from 20 Facebook pages controlled by the IRA, including “Being Patriotic,” “Heart of Texas,” “Blacktivist” and “Army of Jesus.”

Given that Trump lost the popular vote, it is difficult to believe that he could have carried the Electoral College without this impressive support from the Russians. One can also envisage Ronald Reagan thrashing about in his grave knowing that the Republican presidential candidate was heavily indebted to Russia and that so many Republicans still support Trump.

Social Media Putting Democracy at Risk

February 24, 2018

This blog post is based on an article titled “YouTube excels at recommending videos—but not at deeming hoaxes” by Craig Timberg, Drew Harwell, and Tony Romm in the 23 Feb 2018 issue of the Washington Post. The article begins, “YouTube’s failure to stop the spread of conspiracy theories related to last week’s school shooting in Florida highlights a problem that has long plagued the platform: It is far better at recommending videos that appeal to users than at stanching the flow of lies.”

To be fair, YouTube’s fortunes are based on how well its recommendation algorithm is tuned to the tastes of individual viewers. Consequently, the recommendation algorithm is its major strength. YouTube’s weakness in detecting misinformation was on stark display this week as demonstrably false videos rose to the top of YouTube’s rankings. The article notes that one clip that mixed authentic news images with misleading context earned more than 200,000 views before YouTube yanked it Wednesday for breaching its rules on harassment.

The article states, “These failures this past week—which also happened on Facebook, Twitter, and other social media sites—make it clear that some of the richest, most technically sophisticated companies in the world are losing against people pushing content rife with untruth.”

YouTube apologized for the prominence of these misleading videos, which claimed that survivors featured in news reports were “crisis actors” appearing to grieve for political gain. YouTube removed these videos and said the people who posted them outsmarted the platform’s safeguards by using portions of real news reports about the Parkland, Fla., shooting as the basis for their conspiracy videos and memes that repurposed authentic content.

YouTube made a statement that its algorithm looks at a wide variety of factors when deciding a video’s placement and promotion. The statement said, “While we sometimes make mistakes with what appears in the Trending Tab, we actively work to filter out videos that are misleading, clickbait or sensational.”

It is believed that YouTube is expanding the fields its algorithm scans, including a video’s description, to ensure that clips alleging hoaxes do not appear in the Trending tab. HM recommends that humans be involved with the algorithm scans to achieve man-machine symbiosis. [To learn more about symbiosis, enter “symbiosis” into the search block of the healthy memory blog.] The company has pledged on several occasions to hire thousands more humans to monitor trending videos for deception. It is not known whether this has been done or whether humans are being used in a symbiotic manner.

Google also seems to have fallen victim to falsehoods, as it did after previous mass shootings, via its auto-complete feature. When users type the name of a prominent Parkland student, David Hogg, the word “actor” often appears in the suggestion field, a feature that drives traffic to a subject.

