Posts Tagged ‘Instagram’

Internet: Online Time—Oh, and Other Media, Too

April 14, 2019

The title of this post is the same as the second chapter in iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, by Jean M. Twenge, Ph.D.

iGen-ers sleep with their phones. They put them under their pillows, on the mattress, or at least within arm’s reach of the bed. They check social media websites and watch videos right before they go to bed, and reach for their phones again as soon as they wake up in the morning. So their phone is the last thing they see before they go to sleep, and the first thing they see when they wake up. If they wake up in the middle of the night, they usually look at their phones.

Dr. Twenge notes, “Smartphones are unlike any other previous form of media, infiltrating nearly every minute of our lives, even when we are unconscious with sleep. While we are awake, the phone entertains, communicates, and glamorizes.” She writes, “It seems that teens (and the rest of us) spend a lot of time on phones—not talking but texting, on social media, online, and gaming (together, these are labeled ‘new media’). Sometime around 2011, we arrived at the day when we looked up, maybe from our own phones, and realized that everyone around us had a phone in his or her hands.”

Dr. Twenge reports, “iGen high school seniors spent an average of 2.25 hours a day texting on their cell phones, about 2 hours a day on the Internet, 1.5 hours a day on electronic gaming, and about a half hour on video chat. This sums to a total of about six hours a day with new media. This varies little based on family background; disadvantaged teens spent just as much or more time online as those with more resources. The smartphone era has meant the effective end of the Internet access gap.”
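As a quick check on those figures, a minimal Python sketch (just summing the categories listed in the quote) gives roughly six hours:

# Daily hours of "new media" use reported for iGen high school seniors.
hours = {"texting": 2.25, "internet": 2.0, "gaming": 1.5, "video chat": 0.5}
total = sum(hours.values())
print(f"Total new-media time: {total} hours/day")  # prints 6.25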

Here’s a breakdown of how 12th graders are spending their screen time from Monitoring the Future, 2013-2015:
Texting 28%
Internet 24%
Gaming 18%
TV 24%
Video Chat 5%

Dr. Twenge reports that in seven years (2008 to 2015) social media sites went from being a daily activity for half of teens to almost all of them. In 2015, 87% of 12th-grade girls used social media sites almost every day, compared with 77% of boys.
HM was happy to see that eventually many iGen’ers see through the veneer of chasing likes—but usually only once they are past their teen years.

She writes that “social media sites go into and out of fashion, and by the time you read this book several new ones will probably be on the scene. Among 14-year-olds, Instagram and Snapchat are much more popular than Facebook.” She notes that recently group video chat apps such as Houseparty were catching on with iGen, allowing them to do what they call “live chilling.”

Unfortunately, it appears that books are dead. In the late 1970s, a clear majority of teens read a book or a magazine nearly every day, but by 2015, only 16% did. E-book readers briefly seemed to rescue books: the number who said they read two or more books for pleasure bounced back in the late 2000s, but it sank again as iGen (and smartphones) entered the scene in the 2010s. By 2015, one out of three high school seniors admitted they had not read any books for pleasure in the past year, three times as many as in 1976.

iGen teens are much less likely to read books than their Millennial, GenX, and Boomer predecessors. Dr. Twenge speculates that one reason is that books aren’t fast enough. For a generation raised to click on the next link or scroll to the next page within seconds, books just don’t hold their attention. iGen-ers also show declines in reading magazines and newspapers.

SAT scores have declined since the mid-2000s, especially in writing (a 13-point decline since 2006) and critical reading (a 13-point decline since 2005).

Dr. Twenge raises the fear that iGen and the generations that follow will never learn the patience necessary to delve deeply into a topic, and that the US economy will fall behind as a result.

We Need to Take Tech Addiction Seriously

March 26, 2019

The title of this post is the same as an article by psychologist Doreen Dodgen-Magee in the 19 March 2019 issue of the Washington Post. The World Health Organization has recognized Internet gaming as a diagnosable addiction. Dr. Dodgen-Magee argues that psychologists and other mental-health professionals must begin to acknowledge that technology use has the potential to become addictive and to impact individuals and communities. Sometimes the consequences are dire.

She writes that the research is clear: Americans spend most of their waking hours interacting with screens. Studies from the nonprofit group Common Sense Media indicate that U.S. teens average approximately nine hours per day with digital media, tweens spend six hours, and our youngest, ages zero to eight, spend 2.5 hours daily in front of a screen. According to research by the Nielsen Company, the average adult in the United States spends more than 11 hours a day in the digital world. Dr. Dodgen-Magee claims that when people invest this kind of time in any activity, we must at least start to ask what it means for their mental health.

Both correlational and causal relationships have been established between tech use and various mental-health conditions. Research at the University of Pittsburgh found higher rates of depression and anxiety among young adults who engage with many social media platforms than among those who engage with only two. Jean Twenge found that the psychological development of adolescents is slowing down and that depression, anxiety, and loneliness, which she attributes to tech engagement, are on the rise. Multitasking, a behavior that technology encourages and reinforces, is consistently correlated with poor cognitive and mental-health outcomes. Researchers at the University of Pennsylvania have published the first experimental data linking decreased well-being to Facebook, Snapchat, and Instagram use in young adults. Dr. Dodgen-Magee concludes that our technology use is affecting our psychological functioning.

The author has been examining the interplay between technology and mental health for close to two decades. She finds that while technology can do incredible things for us in nearly every area of life, it is neither all good nor benign.

The author writes that when the mental-health community resists fully exploring the costs associated with constant tech interaction, it leaves those struggling with compulsive or potentially harmful use of their devices few places to turn. She relates that a woman recently scheduled a consultation with her because she was concerned about her inability to focus. The woman was a self-described Type A personality who found herself simultaneously interacting with three or four screens for nearly 20 hours a day, determined to stay on top of every demand. When it came time for her biannual revision of an important procedural manual, she couldn’t focus on that single task long enough to do it effectively. She is not the only individual with this problem.

She writes that, consequently, our attention spans are short. Our ability to focus on one task at a time is impaired. And our boredom tolerance is nil. People now rely on the same devices that drive so much of our anxiety and alienation for both stimulation and soothing. While, for many people, these changes will never move into the domain of addiction, for others they already have. In a recent Common Sense Media poll, 50% of adolescents reported feeling that their use had already become addictive, and 27% of parents reported the same.

She writes, “If Americans were interacting with anything else for 11-plus hours a day, I feel confident we’d be talking more about how that interaction shapes us. Mental-health professionals must begin to educate themselves about the digital pools in which their clients swim and learn about the impact of excessive technology use on human development and functioning. It is too easy for therapists to assume that everyone’s engagement with the digital domain looks just like their own and to go merrily from there. We would serve our clients well by understanding the unique ways in which many platforms encourage addictive patterns and behaviors. We should also create non-shaming environments in which they can candidly explore how their tech use impacts them.

It’s time to put our phones down and begin an informed conversation about how technology is impacting our mental health. Our clients’ health and the well-being of our communities may depend on it.”

Crowdsourcing

January 18, 2019

This is the sixth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “LikeWar: The Weaponization of Social Media.” The terrorist attack on Mumbai opened up all the resources of the internet, with Twitter being used to defend against the attack. When the smoke cleared, the Mumbai attack left several legacies. It was a searing tragedy visited upon hundreds of families. It brought two nuclear powers to the brink of war. It foreshadowed a major technological shift. Hundreds of witnesses—some on-site, some from afar—had generated a volume of information that previously would have taken months of diligent reporting to assemble. By stitching these individual accounts together, the online community had woven seemingly disparate bits of data into a cohesive whole. The authors write, “It was like watching the growing synaptic connections of a giant electric brain.”

This Mumbai operation was a realization of “crowdsourcing,” an idea that had been on the lips of Silicon Valley evangelists for years. It had originally been conceived as a new way to outsource programming jobs, the internet bringing people together to work collectively, more quickly and cheaply than ever before. As social media use skyrocketed, the promise of crowdsourcing had extended far beyond business.

Crowdsourcing is about redistributing power: vesting the many with a degree of influence once reserved for the few. Crowdsourcing might be about raising awareness or about money (also known as “crowdfunding”). It can kick-start a new business or throw support to people who might have remained little known. It was through crowdsourcing that Bernie Sanders became a fundraising juggernaut in the 2016 presidential election, raking in $218 million online.

For the Syrian civil war and the rise of ISIS, the internet was the “preferred arena for fundraising.” Besides allowing wide geographic reach, it expands the circle of fundraisers, seemingly linking even the smallest donor with their gift on a personal level. As the Economist explained, this was, in fact, one of the key factors that fueled the years-long Syrian civil war. Fighters sourced needed funds by learning “to crowdfund their war by using Instagram, Facebook and YouTube. In exchange for a sense of what the war was really like, the fighters asked for donations via PayPal. In effect, they sold their war online.”

In 2016 a hard-line Iraqi militia took to Instagram to brag about capturing a suspected ISIS fighter. The militia then invited its 75,000 online fans to vote on whether to kill or release him. Eager, violent comments rolled in from around the world, including many from the United States. Two hours later, a member of the militia posted a follow-up selfie; the body of the prisoner lay in a pool of blood behind him. The caption read, “Thanks for the vote.” In the words of Adam Lineman, a blogger and U.S. Army veteran, this represented a bizarre evolution in warfare: “A guy on the toilet in Omaha, Nebraska could emerge from the bathroom with the blood of some 18-year-old Syrian on his hands.”

Of course, crowdsourcing can be used for good as well as for evil.

Cyberwar

October 31, 2018

“Kiselev called information war the most important kind of war. At the receiving end, the chairwoman of the Democratic Party wrote of ‘a war, clearly, but waged on a different kind of battlefield.’ The term was to be taken literally. Carl von Clausewitz, the most famous student of war, defined it as ‘an act of force to compel our enemy to do our will.’ What if, as the Russian military doctrine of the 2010s posited, technology made it possible to engage the enemy’s will directly, without the medium of violence? It should be possible, as a Russian military planning document of 2013 proposed, to mobilize the ‘protest potential of the population’ against its own interests, or, as the Izborsk Club specified in 2014, to generate in the United States a ‘destructive paranoid reflection.’ Those are concise and precise descriptions of Trump’s candidacy. The fictional character won, thanks to votes meant as a protest against the system, and thanks to voters who believed paranoid fantasies that simply were not true… The aim of Russian cyberwar was to bring Trump to the Oval Office through what seemed to be normal procedures. Trump did not need to understand this, any more than an electrical grid has to know when it is disconnected. All that matters is that the lights go out.”

“The Russian FSB and Russian military intelligence (the GRU) both took part in the cyberwar against the United States. The dedicated Russian cyberwar center known as the Internet Research Agency was expanded to include an American Department when in June 2015 Trump announced his candidacy. About ninety new employees went to work on-site in St. Petersburg. The Internet Research Agency also engaged about a hundred American political activists who did not know for whom they were working. The Internet Research Agency worked alongside Russian secret services to move Trump into the Oval Office.”

“It was clear in 2016 that Russians were excited about these new possibilities. That February, Putin’s cyber advisor Andrey Krutskikh boasted: ‘We are on the verge of having something in the information arena that will allow us to talk to the Americans as equals.’ In May, an officer of the GRU bragged that his organization was going to take revenge on Hillary Clinton on behalf of Vladimir Putin. In October, a month before the elections, Pervyi Kanal published a long and interesting meditation on the forthcoming collapse of the United States. In June 2017, after Russia’s victory, Putin spoke for himself, saying that he had never denied that Russian volunteers had made cyber war against the United States.”

“In a cyberwar, an ‘attack surface’ is the set of points in a computer program that allow hackers access. If the target of a cyberwar is not a computer program but a society, then the attack surface is something broader: software that allows the attacker contact with the mind of the enemy. For Russia in 2015 and 2016, the American attack surface was the entirety of Facebook, Instagram, Twitter, and Google.”

“In all likelihood, most American voters were exposed to Russian propaganda. It is telling that Facebook shut down 5.8 million fake accounts right before the election of November 2016. These had been used to promote political messages. In 2016, about a million sites on Facebook were using a tool that allowed them to artificially generate tens of millions of ‘likes,’ thereby pushing certain items, often fictions, into the newsfeed of unwitting Americans. One of the most obvious Russian interventions was the 470 Facebook sites placed by Russia’s Internet Research Agency, but purporting to be those of American political organizations or movements. Six of these had 340 million shares each of content on Facebook, which would suggest that all of them taken together had billions of shares. The Russian campaign also included at least 129 event pages, which reached at least 336,300 people. Right before the election, Russia placed three thousand advertisements on Facebook, and promoted them as memes across at least 180 accounts on Instagram. Russia could do so without including any disclaimers about who had paid for the ads, leaving Americans with the impression that foreign propaganda was an American discussion. As researchers began to calculate the extent of American exposure to Russian propaganda, Facebook deleted more data. This suggests that the Russian campaign was embarrassingly effective. Later, the company told investors that as many as sixty million accounts were fake.”

“Americans were not exposed to Russian propaganda randomly, but in accordance with their own susceptibility, as revealed by their practices on the internet. People trust what sounds right, and trust permits manipulation. In one variation, people are led towards even more intense outrage about what they already fear or hate. The theme of Muslim terrorism, which Russia had already exploited in France and Germany, was also developed in the United States. In crucial states such as Michigan and Wisconsin, Russia’s ads were targeted at people who could be aroused by anti-Muslim messages. Throughout the United States, likely Trump voters were exposed to pro-Clinton messages on what purported to be American Muslim sites. Russian pro-Trump propaganda associated refugees with rapists. Trump had done the same when announcing his candidacy.”

“Russian attackers used Twitter’s capacity for massive retransmission. Even in normal times on routine subjects, perhaps 10% of Twitter accounts (a conservative estimate) are bots rather than human beings: that is, computer programs of greater or lesser sophistication, designed to spread certain messages to a target audience. Though bots are less numerous than humans on Twitter, they are more efficient than humans in sending messages. In the weeks before the election, bots accounted for about 20% of the American conversation about politics. An important scholarly study published the day before the polls opened warned that bots could ‘endanger the integrity of the presidential election.’ It cited three main problems: ‘first, influence can be redistributed across suspicious accounts that may be operated with malicious purposes; second, the political conversation can be further polarized; third, spreading misinformation and unverified information can be enhanced.’ After the election, Twitter identified 2,752 accounts as instruments of Russian political influence. Once Twitter started looking, it was able to identify about a million suspicious accounts per day.”

“Bots were initially used for commercial purposes. Twitter has an impressive capacity to influence human behavior by offering deals that seem cheaper or easier than alternatives. Russia took advantage of this. Russian Twitter accounts suppressed the vote by encouraging Americans to ‘text-to-vote,’ which is impossible. The practice was so massive that Twitter, which is very reluctant to intervene in discussions over its platform, finally had to admit its existence in a statement. It seems possible that Russia also digitally suppressed the vote in another way: by making voting impossible in crucial places and times. North Carolina, for example, is a state with a very small Democratic majority, where most Democratic voters are in cities. On Election Day, voting machines in cities ceased to function, thereby reducing the number of votes recorded. The company that produced the machines in question had been hacked by Russian military intelligence. Russia also scanned the electoral websites of at least twenty-one American states, perhaps looking for vulnerabilities, perhaps seeking voter data for influence campaigns. According to the Department of Homeland Security, ‘Russian intelligence obtained and maintained access to elements of multiple U.S. state or local electoral boards.’”

“Having used its Twitter bots to encourage a Leave vote in the Brexit referendum, Russia now turned them loose in the United States. In several hundred cases (at least), the very same bots that worked against the European Union attacked Hillary Clinton. Most of the foreign bot traffic was negative publicity about her. When she fell ill on September 11, 2016, Russian bots massively amplified news of the event, creating a trend on Twitter under the hashtag #HillaryDown. Russian trolls and bots also moved to support Donald Trump directly at crucial points. Russian trolls and bots praised Donald Trump and the Republican National Convention over Twitter. When Trump had to debate Clinton, which was a difficult moment for him, Russian trolls and bots filled the ether with claims that he had won or that the debate was somehow rigged against him. In crucial swing states that Trump won, bot activity intensified in the days before the election. On Election Day itself, bots were firing with the hashtag #WarAgainstDemocrats. After Trump’s victory, at least 1,600 of the same bots that had been working on his behalf went to work against Macron and for Le Pen in France, and against Merkel and for the AfD in Germany. Even at this most basic technical level, the war against the United States was also the war against the European Union.”

“In the United States in 2016, Russia also penetrated email accounts, and then used proxies on Facebook and Twitter to distribute selections that were deemed useful. The hack began when people were sent an email message that asked them to enter their passwords on a linked website. Hackers then used those security credentials to access the person’s email account and steal its contents. Someone with knowledge of the American political system then chose what portions of this material the American public should see, and when.”

The hacking of the Democratic National Committee and the release of its emails through WikiLeaks are well known. The emails that were made public were carefully selected to ensure strife between supporters of Clinton and her rival for the nomination, Bernie Sanders. Their release created division at the moment when the campaign was meant to coalesce. With his millions of Twitter followers, Trump was among the most important distribution channels of the Russian hacking operation. Trump also aided the Russian endeavor by shielding it from scrutiny, denying repeatedly that Russia was intervening in the campaign.
Because Democratic congressional committees lost control of private data, Democratic candidates were harassed as they ran for Congress. After their private data were released, American citizens who had given money to the Democratic Party were also exposed to harassment and threats. All this mattered at the highest levels of politics, since it affected one major political party and not the other. “More fundamentally, it was a foretaste of what modern totalitarianism is like: no one can act in politics without fear, since anything done now can be revealed later, with personal consequences.”

No one who released emails over the internet had anything to say about the relationship of the Trump campaign to Russia. “This was a telling omission, since no American presidential campaign was ever so closely bound to a foreign power. The connections were perfectly clear from the open sources. One success of Russia’s cyberwar was that the seductiveness of the secret and the trivial drew America away from the obvious and the important: that the sovereignty of the United States was under attack.”

Quotes are taken directly from “The Road to Unfreedom: Russia, Europe, America” by Timothy Snyder

The Powerful Influence of Information Friction

June 29, 2018

This is the fourth post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” Dr. Roberts related that in May 2011 she had been following news about a local protest in Inner Mongolia in which an ethnic Mongol herdsman had been killed by a Han Chinese truck driver. In the following days, increasingly large numbers of local Mongols began protesting outside of government buildings, culminating in protests of sufficient scale that the Chinese government imposed martial law. These were the largest protests Inner Mongolia had experienced in twenty years. A few months later, Dr. Roberts arrived in Beijing for the summer. During discussions with a friend, she brought up the Inner Mongolia protest. Her friend could not recollect the event, saying that she had not heard of it. A few minutes later, she remembered that a friend of hers had mentioned something about it, but when she looked for information online, she could not find any, so she assumed that the protest itself could not have been that important.

This is what happened. Bloggers who posted information about the protest online had their posts quickly removed from the Internet by censors. As local media were not reporting on the event, any news of the protest was reported mainly by foreign sources, many of which had been blocked by the Great Firewall. Even for the media, information was difficult to come by, as reporting on the protests on the ground had been banned, and the local Internet had been shut off by the government.

Dr. Roberts noted that information about the protest was not impossible to find on the Internet. She herself had been following the news from Boston and could still follow it even in China: simple use of a Virtual Private Network and some knowledge of which keywords to search for uncovered hundreds of news stories about the protests. But her friend, a well-to-do, politically interested, tech-savvy woman, was busy, and Inner Mongolia is several hundred miles away. So after a cursory search that turned up nothing, she concluded that the news was either unimportant or nonexistent.

Another of her friends was very interested in politics and followed political events closely. She was involved in multiple organizations that advocated for gender equality and was an opinionated feminist. Because of her feminist activism, Dr. Roberts asked her whether she had heard of the five female activists who had been arrested earlier that year in China, including in Beijing, for their involvement in organizing a series of events meant to combat sexual harassment. The arrests of these five women had been covered extensively in the foreign press and had drawn an international outcry. Articles about the activists had appeared in the New York Times and on the BBC. Multiple foreign governments had called for their release. But posts about their detention were highly censored, and the Chinese news media were prohibited from reporting on it. Her friend, who participated in multiple feminist social media groups and had made an effort to read Western news, still had not heard about their imprisonment.

Dr. Roberts kept encountering examples like these, where people living in China exhibited surprising ignorance about Chinese domestic events that had made headlines in the international press. They had not heard that the imprisoned Chinese activist Liu Xiaobo had won the Nobel Peace Prize. They had not heard about major labor protests that had shut down factories or about bombings of local government offices. Despite the possibility of accessing this information, without newspapers, television, and social media blaring these headlines, they were much less likely to come across these stories.

Content filtering is one of the Chinese censorship methods. It involves the selective removal of social media posts written on the platforms of Chinese-owned internet service providers. The government does not target criticism of government policies, but instead removes all posts related to collective action events, activists, criticism of censorship, and pornography. Censorship focuses on social media posts that are geo-located in more restive areas, like Tibet. The primary goal of government censorship seems to be to stop information flow from protest areas to other areas of China. Since large-scale protest is known to be one of the main threats to the Chinese regime, the censorship program works to prevent the spread of information about protests in order to reduce their scale.
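To make the filtering mechanism concrete, here is a minimal illustrative sketch in Python of keyword- and location-based removal. The keywords, the Post data model, and the priority scheme are assumptions made for the example, not details documented in Dr. Roberts’ book:

from dataclasses import dataclass

# Hypothetical keyword list; real censorship systems use far larger, frequently updated lists.
BLOCKED_TOPICS = {"protest", "strike", "rally"}
RESTIVE_REGIONS = {"Tibet"}  # the book names Tibet as an example of a restive area

@dataclass
class Post:
    author: str
    region: str  # geo-location attached to the post
    text: str

def removal_priority(post: Post) -> int:
    """Return 0 (keep), 1 (remove), or 2 (remove first).

    Collective-action content is flagged regardless of whether it praises or
    criticizes the government; posts geo-located in restive regions get priority.
    """
    if not any(keyword in post.text.lower() for keyword in BLOCKED_TOPICS):
        return 0
    return 2 if post.region in RESTIVE_REGIONS else 1

# Example: a collective-action post geo-located in a restive region is prioritized for removal.
print(removal_priority(Post("user1", "Tibet", "Join the protest outside the government office")))  # prints 2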

Despite extensive content filtering, users who were motivated and willing to invest time could overcome information friction and find information about protests. Information is often published online before it is removed by internet companies; there is usually a lag of several hours to a day before content is taken down.

Even with automated and manual methods of removing content, some content is missed. And if the event is reported in the foreign press, Internet users could access information by jumping the Great Firewall using a VPN.

The structural frictions of the Great Firewall are largely effective. Only the most dedicated “jump” the Great Firewall. Those who jump are younger and have more education and resources. VPN users are more knowledgeable about politics and have less trust in government. Controlling for age, having a college degree makes a user 10 percentage points more likely to jump the Great Firewall. Having money also increases the likelihood of jumping. 25% of those who jump the Great Firewall say they can understand English, compared with only 6% of all survey respondents. 12% of those who jump work for a foreign-based venture, compared with only 2% of all survey respondents. And 48% of the jumpers have been abroad, compared with 17% of all respondents.

The government has cracked down on some notable websites. Google began having conflicts with the Chinese government in 2010. Finally, in June 2014, the Chinese government blocked Google outright.

Wikipedia was first blocked in 2004. Pages about particular protests have long been blocked, but the entire Wikipedia website has occasionally been made inaccessible to Chinese IP addresses.

Instagram was blocked from mainland Chinese IP addresses on September 29, 2014, due to its increased popularity among Hong Kong protestors.