Archive for the ‘Transactive Memory’ Category

Habits and Norms

November 12, 2019

This is the eleventh post based on the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” The title of this post is identical to the title of a chapter in the book. The author begins this chapter with the following three statements.

It is easier to establish healthy norms than to break unhealthy habits.
It is easy to establish habits.
It is not easy to establish healthy norms.
This entire book could be summed up by these three sentences.

This is certainly true, so the reader might wonder why so many posts. The reason is that this is a long book, and HM needed to pass on as much of the knowledge and advice in the book as could be done in a series of blog posts. Essentially, it is difficult to break unhealthy habits and to establish healthy norms. It is possible that there are habits that correspond to norms, but in most cases knowledge and tips are required.

Habits feel almost instinctual, and we enact them with little or no thought. Repeated actions that seem to happen almost outside our awareness become the habits that shape our lives. Examples of habituated behaviors are automatically reaching for something in the refrigerator when we’re bored, the feeling that comes over us when we make a mistake, or our emotional and behavioral reactions to an ideology that differs from our own.

The alternative to living habitually is living from intentionally chosen and established norms. “Norms” is a shortened reference to the phrase “normative behaviors/patterns” and refers to intentionally chosen behaviors and thought/feeling responses that both result and emerge from the conscious creation of healthy patterns. Norms often form the beginning of many of our habits, especially habits we develop by acts of our will.

The final paragraph of the chapter follows: “Unless a person has an iron-strong will, an incredibly sturdy sense of self, and an ability to be persistent about healthy norm maintenance, the shiny, hyper stimulating, constantly moving world of digital engagement is perfect for pulling us off true north and creating habits that distract, detract, and diminish our volitional way of being in the world. Because technology is here to stay and our engagement with it is almost always a must, the establishment of norms that enable healthy living, a balance of experiences and relationships in both our digital and embodied spaces, and the development of skills related to focus, delay of gratification, and regulation are key.”

Cultivating an Internal Locus of Control

November 11, 2019

This is the tenth post based on the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” The title of this post is identical to the title of a chapter in the book. Here are three lessons about cultivating an internal locus of control.
Know your stuff.
Start non-shaming conversations about your own and others’ technology use.
Live wild, fiery embodied lives and invite others to do the same.

Here are three ways a caregiver might respond to a child who falls in a park and is injured.

1. The caregiver reacts strongly at one end of a “this is a huge deal” to “this is not a big deal at all” continuum. Both ends of this continuum are the wrong way to respond.
2. The caregiver, whether physically present or not, responds with complacency or absence. This is also wrong.
3. The caregiver responds to the child’s inquisitive glance with an empathic and connected reply calling for an assessment: “How are you? Are you okay? How can I help? Let’s figure out what is happening here.”

This third response invites the child’s own assessment of and communication about the situation. It puts the child in the driver’s seat by enabling him or her to slow down and consider what he or she needs, and then allows for a partnership in addressing the need.

Self-promotion can serve as a precursor to an external locus of control. This might appear ironic, because self-promotion might seem to reflect an internal locus of control. But if the concern is with external approval, then the locus of control is actually external.

Self-knowing awareness is a precursor for an internal locus of control. Here self-loving awareness, or self-love, provides an alternative to self-promotion.

The objective is to move from an external to an internal locus of control. Here are the traits, actions, and capacities involved in a healthy relationship with one’s self.

*Capacity for honest awareness of strengths and weaknesses.
*Knowledge of one’s emotional range and an ability to moderate and regulate one’s emotions.
*Flexibility in recognizing and weighing one’s personal needs against the needs of the communities in which one lives.
*Ability to function independently and interdependently and to foster intimate relationships with others without compromising or disowning important parts of the self.
*General awareness of one’s physiological being and ability to be comfortable in one’s skin.

Here are the ideas provided for Establishing and Maintaining a Healthy Relationship with the Self.

LEARN TO BE ALONE.

USE AT LEAST A SMALL AMOUNT OF ALONE TIME TO CONSIDER YOUR PREFERENCES, STRENGTHS, WEAKNESSES, AND BIASES.

TALK TO YOURSELF OR JOURNAL.

SEEK OUT WISE OTHERS WHO CAN HELP YOU KNOW YOURSELF MORE DEEPLY.

The Fertile Ground of Idle Time

November 10, 2019

This is the ninth post based on the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” The title of this post is identical to the title of a chapter in the book. The author writes, “Idling actually has immense potential to command our attention. When we are in constant intellectual, emotional, or physical motion, we lack the spaciousness needed to come to understand and make sense of the full richness of our humanity. We are all familiar with the experience of feeling hungry or tired and not paying attention. Our stomachs growl or we yawn, yet we mindlessly push forward. We might drink coffee or eat something out of the vending machine, whatever is needed to keep moving through our very full day instead of taking the hunger pains or feelings of fatigue under real and rationed consideration. Our cultural norms reinforce this compensatory pattern by rewarding constant productivity, action, and advancement. As such, we are most commonly validated for having our attention focused outside ourself. Not only are we rewarded for being available to our employers, educators, and social connections twenty-four hours a day, but we are also privy to a never-ending stream of entertainment, education, and information that feels as though it builds, soothes, or stimulates us. Little reason (let alone demand) exists anymore for using our idle time to turn our attention inward.”

The intolerance of stillness results in a deficit of self-soothing abilities. We cannot be still because we can’t soothe ourselves, quiet our thoughts, or regulate our emotions. Rather we stimulate ourselves, distracting ourselves or denying our need for comfort. Self-soothing skills, emotional regulation, critical-thinking capabilities, boredom tolerance, and creativity might all be enhanced by putting ourselves in the uncomfortable new space of stillness.

The author suggests that there is merit in learning to be calmly and fully present in any given moment. Experienced meditators tell us that this type of stillness comes only with great practice, and that a lack of practice leads to feelings of anxiety and agitation when distractions are unavailable. The author writes, “Self-soothing skills, emotional regulation, and critical thinking capabilities might all be enhanced by simply putting ourselves in the uncomfortable new space of stillness. Without doing the intentional work of saving some of our idle time to develop such skills, the opportunities for practice elude us and the malicious cycle of stimulation-distraction-information sets in. Not only does this rob us of our ability to practice tolerating stillness, it also keeps us valuing being informed over learning to be.”

Boredom tolerance correlates positively with measures of creativity, and experiencing intentional boredom paves the way for learning to function in “being” states as opposed to “doing” states. When we are bored, we find out how to stimulate or soothe ourselves. We learn to determine and meet our needs from this place. If we just meet boredom with an impulsive action to distract or engage the self in pursuits outside the self, we will never be fully capable of functioning from a space of purely being who we are. Boredom tolerance and anxiety tolerance are twin requirements for learning to tolerate stillness. To be our healthiest and sturdiest selves requires an ability to be with ourselves in all our states of being; this enables the cultivation of imagination, engagement with complexities of thought, and a familiarity with our feelings. Although this might externally look like standing still, and might even look or feel like laziness, it is in reality much more of an idling in which much internal activity is going on, even if the body is still.

There is a distinction between having time versus making it. Stop using the phrase “I don’t have time to…” and replace it with “I choose not to make time for…”

Technology and the Self

November 9, 2019

This is the eighth post based on the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” The title of this post is identical to the title of a chapter in the book. The author begins, “The ‘self’ is an expansive topic that has been considered throughout history and referred to by names such as ‘soul,’ ‘psyche,’ and ‘essential identity’.” Despite conflicting theories regarding the development of this foundational element of our humanity, some generally held ideas about what it means to live with a cohesive and stable sense of self do exist. At the least, people with such an identity are able to:

*Experience their self as distinct from others.
*Connect to and separate from others in healthy ways (being neither overly dependent nor overly independent).
*Have a general sense of their values and worldview.
*Perform general processes related to being active participants in the world.
*Handle consequences related to their actions in the world.

Dr. doreen dodgen-magee writes of the power of science and technology to usurp the sense of self. She offers the following ideas for creating and using silence.

SET SOME TIMES OF THE DAY FOR TURNING OFF ALL ELECTRONIC SOUNDS

PRACTICE A GUIDED MEDITATION, THEN TRY IT ON YOUR OWN
Here you can read the posts on the relaxation response. There is a guided meditation at MARC.UCLA.edu. It counts even if you try to sit in one place and breathe in silence for just three minutes. Dr. doreen dodgen-magee and HM encourage you to do this frequently.

USE A SINGING BOWL (of the Tibetan variety. HM has one).

Here are some ideas for fighting Fear of Missing Out (FOMO)
TAKE BREAKS FROM THE NEWS AND SOCIAL MEDIA AND COMMIT TO NOT “CATCH UP”

CONSCIOUSLY WORK THROUGH FEELINGS OF BEING LEFT OUT

AFFIRM THE PLACES AND PEOPLE TO WHOM YOU ARE MEANINGFULLY ATTACHED AND INVESTED IN

She encourages the abandonment of a Fixed Mind-set for a Growth Mind-set. There have been numerous healthy memory blog posts on this topic. She provides the following ideas for Enhancing a Growth Mind-set.

TRY NEW THINGS WITHOUT OVEREMPHASIS ON MASTERY
MASTER A USELESS SKILL THAT TAKES TIME TO LEARN

INSTEAD OF JOURNALING, TRY A BRAIN DUMP (STREAM-OF-CONSCIOUSNESS WRITING).
Write down your thoughts on a piece of paper for five to ten minutes straight. Don’t try to construct sentences or bold ideas. Simply write whatever comes to mind. When you are done, rip up the piece of paper or burn it. The process, not the outcome, is the goal.

Here are some ideas she offers for BOREDOM INTOLERANCE

TURN OFF NOTIFICATIONS

SET A PASSWORD to decrease the likelihood of being overly attentive to one’s phone.

LEAVE YOUR PHONE IN THE TRUNK OF YOUR CAR

HOST A BOREDOM PARTY

Here are suggestions she offers for underdeveloped resilience, resilience being the ability to handle difficulties and hardships without developing psychological symptoms.

DO AN INVENTORY OF THE FEELINGS YOU ARE COMFORTABLE WITH AND THOSE THAT MAKE YOU UNCOMFORTABLE

DO A DAILY EXAMEN (a practice found in many religious traditions). Some people call it a “Rose, Bud, Thorn” exercise; others call it the Crappy/Happy exercise. She suggests keeping a small notebook next to the bed. Each night before going to sleep, record what gave you life during that day (“happy”) and what took life away (“crappy”).

Here are suggestions she offers for learning to self-soothe

LEARN TO PRACTICE MINDFUL BREATHING
There are many healthy memory blog posts on mindfulness

FIND “ESCAPE ROUTES”. These are routes to which you can escape and catch your breath and tap into your grounded self.

MAKE A “SELF-SOOTHING” LIST AND REFER TO IT

GO TO A CORNER OR APPLY SOME GENTLE WEIGHT
Heavy or weighted blankets that can be heated and placed on sore muscles are also helpful in communicating to the body that there is space for nothingness and stillness.

Here are some additional ideas she offers for nurturing a more grounded sense of self

CONSIDER A SOCIAL MEDIA FAST
BUILD A VOCABULARY FILLED WITH NONEVALUATIVE, NONCOMPARATIVE LANGUAGE AND EMPATHIC, ENCOURAGING, AND LIFE-AFFIRMING SENTIMENTS.

TRY THE HALT SCAN. This involves stopping throughout the day or when one feels particularly dysregulated and asking oneself if one is hungry, angry, lonely, or tired. These four states of being leave us particularly prone to distracting ourselves or using things other than what we really long for to satiate us. Once identified, we can choose a better action or feeling rather than simply acting unconsciously.

Technology and Relationships

November 8, 2019

This is the seventh post based on the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” The title of this post is identical to the title of a chapter in the book. The first relationship to be discussed is the relationship with oneself. Technology has a profound effect on how we relate to ourselves. If we have been able to develop a stable internal locus of control alongside our tech engagement, then we will be able to build authentic, deep relationships with others. However, if the prevailing nature of technology’s impact on our relationship with ourselves has been to make us self-promoting, self-centric, lacking in empathy, limited in communication skills, and without an accompanying sense of self-knowing awareness of our limitations as well as our strengths, then our relationships with others will be built on a fragile foundation. We need to keep this foundational dynamic in mind as we discuss our relationships with others.

Given the amount of time we spend with screens, it seems plausible to posit that some of our most meaningful relationships exist with our devices (if meaningfulness is, at least in part, determined by investment of time and resources). Over time we develop response patterns to devices that look much like our response patterns to humans. Research has shown that interaction with our devices can stimulate the release of oxytocin, initiating feelings similar to love. Oxytocin is considered the “cuddle hormone.” It is released when a new mother gazes at her nursing baby. Our physiological responses to devices suggest an emotional connection to them not unlike what we experience as physiological responses to connection between humans.

There is a distinction that needs to be made between our social lives and our relational lives. The former refers to the relative amount of time we spend in companionable connection with others, whereas the latter refers to the part of our being we invest in knowing others and being known by them. To be healthy and reliable, these relational forms of knowing need to be predicated on communication that is honest and authentic, happens in a variety of contexts, and occurs over time. Consider what you believe to be the differences and similarities between social networks and relational connections.
*Track the number of responses to social media posts you make in a day and compare that to the number of texts or phone calls you make.
*Consider who you might call if you had an amazing piece of news to share or if you needed help in an emergency.
*Let the difference between the types of connections and relationships you enjoy sink in, and determine where you might make some investments to deepen those that have real potential.

The author writes, “If our relationship with our own self and the authenticity of communication regarding that self is the foundation upon which our relationships are built, then the nature and quality of our communication creates the building blocks of our relationships with others. Research conducted with pairs of close friends found that communication via instant messaging results in significantly lower levels of bonding than face-to-face communication, video chatting, and audio chatting. If this is true for existing close friends, how might it impact the many relationships begun and maintained solely through typed digital messages?”

“Disinhibition” is one of the potential issues with the digital world that diminishes our communication skills. As we spend less time practicing the art of communication, with its subtleties of give and take, we are shifting toward disinhibition, a lack of restraint that manifests in impulsivity, poor risk assessment, and a disregard for social conventions. This shift is most apparent in typed communiques. In his article “The Online Disinhibition Effect,” Rider University psychology professor John Suler describes how digital communication can train us to be less “other aware.” He writes: “In text communication such as email, chat, blogs, and instant messaging, others may know a great deal about who you are. However, they still can’t see or hear you—and you can’t see or hear them. Even with everyone’s identity visible, the opportunity to be physically invisible amplifies the ‘disinhibition effect.’”

Research evaluated whether people preferred to answer questions posed by humans or by “embodied conversational agents” (ECAs), which are virtual people. The results revealed that the research participants preferred speaking with ECAs if the answers might be of a highly sensitive nature or likely to involve negative self-admissions. If the answers were considered less sensitive or more likely to include positive self-admissions, the participants preferred human interviewers. Research participants reportedly appreciated the lack of judgment an ECA would afford.

The need to stand out alongside the constant comparison and competition for attention in our socially networked spaces has the power to subtly impact the way we think about ourselves and others. Excessive exposure to a world with constant judgment, evaluation, commentary, and comparison can make any of us lean toward relationally aggressive ways of encountering others and ourselves.

The author encourages the reader to align their social networks with their embodied, relational ones.
*Do an inventory of your social networks (whether they are via video games, on platforms like Facebook, etc.).
*Consider whether you are engaging with people on your social networks who are there only for you to show off to, or others who lead you to feel “less than.”
*Assess the newsletters and online subscriptions you receive.
Then carefully consider who and what are positive influences in your life that you want to continue connecting with, and who and what might be best to part ways with. This need not be a harsh rejection session but rather a realignment of sorts.

More ideas for creating healthier relationships off- and online

TAKE A TEN-MINUTE PAUSE BEFORE POSTING OR RESPONDING TO POTENTIALLY PROVOCATIVE INFORMATION.

PRACTICE NONJUDGMENTAL AWARENESS AND RESPONSIVENESS
Consider living by the motto, “Be kind to everyone, for theirs is a difficult journey.” See how leading with empathy and openheartedness changes the tendency toward judgment and categorization.

PICK UP THE PHONE OR INITIATE A VIDEO CHAT

LEAVE YOUR PHONE IN THE CAR WHEN MEETING WITH OTHERS, AND DON’T WEAR EARBUDS WHEN OTHERS ARE PRESENT (AT LEAST SOME OF THE TIME).

WAIT IN LINE, AT A MEETING, OR ELSEWHERE WITHOUT INTERACTING WITH YOUR PHONE.

PRACTICE EYE CONTACT.

HANDWRITE A LETTER OR NOTE.

PRACTICE FINDING THE GOOD IN OTHERS, AND PERIODICALLY AFFIRM SOMEONE IN PERSON.

Our Bodies and Brains on Tech

November 7, 2019

This is the sixth post based on the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” The title of this post is identical to the title of a chapter in that book. The title is accurate. Technology affects both our bodies and our brains. Unfortunately, many of these effects are bad.

Fortunately, the author offers tips for decreasing these bad effects. Here are some suggestions for taking action to decrease some bad physical effects:
*Take breaks from screens for movement through the day to help you stay not only healthy, but engaged.
*Get into the habit of walking away from your devices at least every hour to get fresh air and move both your legs and small muscle groups. Just stepping outside for three deep breaths can be helpful.
*Try many different types of physical movement. Doing so will help you stay flexible both in your physiology as well as in your beliefs about your body’s capabilities.
*Associate one of your tech hobbies with a set of basic and easy-to-do-wherever-you-are stretches. Do these stretches every time you engage that tech habit. For example, do a sun salutation or two every time you pick up your game controller or log on to social media.

Negative postural effects are also a problem. The author offers these suggestions:
*Remember to step away from your devices regularly.
*Practice good ergonomics.
*Stretch regularly.
*Engage in flexibility exercises.
*Make sure your screens are level with your eyes when looking straight ahead.
*When using a keyboard, keep your back straight and your arms parallel to the floor and close in at your sides. Also, rotate your wrists occasionally.
*When using small devices, be sure to stand and stretch, shift your weight, and rotate your thumbs and wrists occasionally. Look up and around and intentionally stretch the top of your head toward the sky.
*When using any device, be careful not to round your shoulders or lean your head excessively forward.
*Practice mindful, thoughtful device engagement.

Blue light related to screen use also has negative effects. Here are some tips offered by the author to minimize this negative impact.
*Take breaks from screens throughout the day.
*Make sure screens are not placed in front of windows, forcing your eyes to adjust to both light sources.
*Use lighting at eye level rather than overhead when working with screens indoors.

Technology use also affects the brain. And these effects are large enough that neuromarketing has emerged as a field of study. Neuromarketers use brain-imaging technology along with biometric measures (heart rate, respiration) to determine why consumers make the decisions they do. By studying fMRI scans and other physiological data while individuals interact with technology, the researchers see how activation of particular areas of the brain due to specific technological content exposure can result in specific behaviors, ideas, or feelings in people. By changing the way content is delivered within the digital framework, the researchers can change the way the brain is activated, hence changing the lived experience of the subject. This effort is predicated on the knowledge that activation of certain brain regions will bring about certain responses. Since the brain wires together what fires together, repetitive exposure and responses to technology must be having some impact on the way our brains are wired.

In a 1969 episode of Sesame Street the images were black and white and each sustained camera shot lasted somewhere between six and fifteen seconds. It is reasonable to assume that individuals who are exposed to this kind of pacing in the presentation of screen imagery will develop circuitry accustomed to waiting for up to fifteen seconds for a new stimulus. Doing this over and over would force the brain to develop the ability to focus attention without becoming bored or distracted.

In a 1984 Sesame Street episode the sustained camera shots lasted between three and six seconds, with a few lasting only one and a half seconds. The author notes that a brain exposed to this rapid cycling of stimulation and images doesn’t wire with the same tendency toward focus and boredom tolerance that we explored earlier. Instead, it will anticipate a change of scenery every three to five seconds, wiring for efficiency in handling multiple images in fast succession.

In current episodes of Sesame Street, the author finds no sustained, unmoving camera shots. She concludes the brain is trained to expect constantly changing stimulation. If things don’t change on the screen immediately, our brain is trained to look away to find something novel to attend to. When the preponderance of visual stimuli presented to us follows this pattern over time, we no longer have the neurologically practiced skills of waiting and focus. It is not every day that one can find such a condemning indictment of Sesame Street.

Dopamine is released during video game use, and game developers work to exploit this. When dopamine levels are high, we feel a sense of pleasure. Once we’ve experienced these feelings, it’s hard to settle for less.

Developers are trying to increase users’ screen time. And this can most definitely be harmful. Here are telltale signs that the author offers:
*Moving from incidental use to nearly constant use.
*Needing increasing levels of tech time or stimulation for satisfaction.
*Being jittery or anxious in response to stepping away from technology.
*Lying in order to garner more time/specific content/etc. or to cover up certain forms of use.
*Isolating in order to engage technology.

Here are tips offered by the author for preventing tech addiction and getting help.

Set clear boundaries, communicate them, and enforce them.

Think ahead before adding a technology.

Make sure technology is not your only “sweet spot.”

Introduce high quality, slow moving technologies first, and stick with them as long as possible.

If you feel you’ve moved into use patterns that are hurting you or keeping you from your embodied life, get help.

There is so much information on the dangers of multitasking in the healthy memory blog that anything the author offers on this topic would be repetitive.

She does note the good news of neuroplasticity and doing “deep work.” One of the principal goals of the healthy memory blog is to move past superficial System 1 processing, which is very fast and avoids deep thinking, and to engage in System 2 processing, which is deep thinking. So much learning can be enhanced via technology. There is a virtual infinity of useful knowledge on the web. But people become preoccupied with games, staying in touch, being liked, and other superficial activities. In terms of memory health, it is deeper System 2 processing that provides for a more fulfilling and meaningful life. It also decreases the probability of suffering from dementia. Autopsies have found many cases of people who died with the amyloid plaques and neurofibrillary tangles, which are the defining features of Alzheimer’s, but who never exhibited any behavioral or cognitive symptoms. The explanation for this is that these people had developed a cognitive reserve during their lives through continual learning and critical use of their brains.

Practice Living an Embodied Life

November 5, 2019

 

This is the fourth post based on the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” These are the points the author advances to practice living an embodied life:

*Work to love your body with all its imperfections. Identify things about your body that you appreciate or enjoy. Practice viewing your shortcomings with graciousness and then redirecting your attention to a trait you appreciate. Thank your body for the things it does well.

*Learn to listen to, soothe, and—with warmth and gentleness—care for your body.

*Experience your sexuality and desires in safe ways that are respectful of yourself and others.

*Cultivate your intuitive intra- and interpersonal senses.

*Cultivate your physical senses as discussed in the following sections.

Sound

*Cherish Silence. Maintain an ability to be in and with silence by creating it. Leave the television off for a while. Choose specific times to drive with no radio/digital content. Walk/run/work without earbuds. Let a podcast or two go unheard. Visit places such as libraries and empty places of worship, where silence is the norm. Sit for at least ten minutes and pay attention to the sound of silence. What do you hear? What do you feel? What happens with your other senses as the need for active listening falls away?

*Turn down the volume. Set the volume of your laptop/television/phone a bit lower than normal. Notice how this feels. How does working at listening feel?

*Take earbud breaks. Set aside times when everyone in your home or work setting is earbud free—and maybe even device free for a while. This lets everyone in on what everyone else is doing and makes you aware of how much stimulation is influencing each person in your environment.

*Vary your playlist. Listen to a variety of genres of music/content. Challenge yourself to stretch into new styles. To find compelling elements in what you hear, listen past the point when the newness bothers you. Listen to an entire recording as presented; the artist ordered the tracks for a reason. Notice how this feels.

*Try going lyric free. If you must keep your earbuds employed, listen to lyric-free music when trying to study or work. Experiment with genres such as baroque, jazz, or electronic. What do you notice as different themes within the music emerge? Which forms increase your attention to the tasks you are working on? Which distract you?

Vision/Sight

*Declutter. Pay attention to “visual clutter” in your home, work, or school environment. Notice how it feels to look at cluttered spaces versus spaces free of visual stimulation or clutter. Regardless of your style or temperament, we all need quiet places for our eyes to rest. Ensure that you have places in your home/workspace/classroom for your eyes to land with little stimulation. Practice drawing your attention to these spaces when you are overwhelmed or need a break. Notice how it feels for your eyes to have a place to rest.

*Renew your view. Give yourself new things to look at periodically. Take a new way home, visit a place you’ve never been (even if it’s just a new neighborhood in your city), or take a hike in an unfamiliar setting. Switch out or rearrange the art in your home or office. Pick up a children’s picture book or a photographically illustrated coffee-table book, and set aside time to take it in at a slow pace. Notice what draws your eye and what repels it.

*Find eye feasts and indulge. Provide yourself with visual complexity: art museums, image-rich magazines and journals, pattern-based coloring books, and natural settings with a variety of foliage. Artificial illumination stimulates the visual field in important ways. Make sure that screens aren’t your only source of visual stimulation.

*Think about lighting. Light impacts our sense of visual comfort versus discomfort. As a general principle overhead lighting (that is not highly designed or managed) is hard on the eye and creates shadows on the faces of those with whom we interact. Lighting at face level (e.g., table lamps that are right at face level) is easier on the eye and provides a more comforting environment. Make some changes according to these guidelines and see how the changes make you feel. Notice how the light changes in your environment as the sun goes down, and try to make the change from natural light to artificial forms of light more seamless.

*Power down prior to bedtime. Digital devices emit powerful doses of light that stimulate neurotransmitters and hormones related to wakefulness and stimulation. Try powering down all electronics at least thirty minutes before trying to sleep. Increase this to sixty or ninety minutes over time. Over the course of a two-week trial, notice how your sense of restfulness waxes and wanes as you eliminate screens closer to bedtime and when you are in bed.

Living Outside Our Skin

November 4, 2019

The title of this post is identical to the title of a chapter in the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” This is the third post on this book. The chapter begins, “Humans are sensual animals.” We touch, taste, smell, hear, and see our way through our days. We pay attention to both the message indicators that direct us to stimuli and the physical experiences of these stimuli. Our stomachs rumble, so we look for food. We yawn, realize we’re tired, lie down to go to sleep, or go outside for a renewing breath of fresh air. We smell something out of the ordinary and search for its source. If we are especially mindful and aware, we might realize that we yearn for a vision of beauty or to be touched in a meaningful way. Each of our senses serves a unique function of keeping us healthy and content. Should one or more senses become compromised, we often find the others become heightened to compensate and keep us aware and in tune with ourselves and our surroundings.

As our technology use increases, we should be aware of the risk of falling out of touch with the potency of our senses. We might begin the day by rolling over to grab our phone to catch up on the news or on our social feeds. We might connect with our device before we get out of bed and truly wake up to our embodied self! We surf the web during the day to tell us what to buy. We might use digital monitors to tell us our child’s or pet’s body temperature and heart rate. And we might use wearable technology to track our exercise and heart rate, and stave off boredom by tackling another level of a favorite game while waiting. At day’s end we plop down on the couch and play a video game, or we crawl into bed and watch a movie on our tablet or laptop. The author writes, “As we engage these platforms, we send our minds and bodies the not-so-subtle message that our technologies can entertain, comfort, and know us better than we can entertain, comfort, and know ourselves. This tendency to rely on devices, apps, and technology more consistently than on our own sense of our mental, emotional, and physiological states has far-reaching consequences.”

The author writes, “Our sense of place in the world is similarly impacted. At the core of our consciousness, we no longer truly need to know where we are, as long as we have a device and an internet connection. Directed by our GPS and Google Maps, we go directly from where we are to where we want to go without having to think or wonder. We visit a new part of town or a new state altogether, but have no sense of our larger environs. We don’t notice the landscape because we’re busy following our turn-by-turn directions. We don’t interact with the ‘natives’ of these places, and we find our favorite ‘local’ chain restaurants and stores in every place we visit, so we can frequent the familiar rather than brave the unknown. We rarely stop to attend to what it feels like to be an embodied person in a new space, and we don’t travel consciously into new spaces. Instead we move through them, looking down at our phones the entire time. When we do look up, we snap photos on our phones, relieving our minds from the need to hold the memories.”

There is such a thing as “self-knowing awareness.” If we have developed the ability to scan our physical bodies—paying attention to what our sensory awareness can tell us about what we need and prefer in the way of stimulation, testing, learning, and more, then we can trust our own internal “gut” to inform us how to live most healthfully. However, if we have relied on devices in such a way that the bulk of our stimulation, soothing, learning, and information gathering has come from outside our body, we will feel bereft of knowledge regarding how to live in healthful ways in and out of ourselves. This requires the kind of living we have actively and passively practiced. To inventory and assess what our body might want or need, however, requires a practiced pattern of checking in with ourselves, often in quiet and stilled ways. When we outsource this process to external devices, we miss out on the opportunity to know ourselves deeply and to practice self-regulation.

There is also good reason to be concerned with both the accuracy and validity of these devices. Analysis of clinical sleep studies done at the same time as sleep monitoring with fitness trackers reveals a great degree of variance in the accuracy of sleep assessment via wearable devices. Sleep is a complex activity with various stages and cycles. While movement is one indicator of depth of sleep stage, many other variables contribute to its nature and quality. Unfortunately, with growing frequency, wearers of tracking technologies are relying heavily on nightly generated data to evaluate the quantity and quality of their sleep, and to make adjustments based on the data. Instead of waking and taking time to consider how we feel and how long we slept, we are making assumptions based on data that may not be reliable. The author concludes, “While the tracking is not, in and of itself, bad or negative, if it is used outside of self-assessment or real-time monitoring of actual experienced levels of tiredness or restfulness, we forgo a strong and developed sense of really knowing ourselves. Consider, for example, research reported in the Journal of Clinical Sleep Medicine. Subjects were found to make inferences based upon fitness tracker data that caused them to self-diagnose sleep disturbances that were not clinically founded. In other words, our relying on data can sometimes get us in trouble!”

Overreliance on Technology

November 3, 2019

This is the second post based on the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” Dr. dodgen-magee writes of physicians during their residencies having to be with a patient while at the same time fulfilling the medical system’s need for a thorough and hyper-timely digital record for each appointment. The residents say that computers—installed in such a way as to hover almost directly between the physician and the patient—felt like a constant, tangible presence in the room that demanded the kind of attention that had previously been dedicated solely to the patient. Although providing access to increased amounts of helpful data, they also impacted the fullness of the personal encounter. The author concludes, “When a digital device is engaged during an embodied human encounter, what is each party’s relationship to the device, and how is the intimacy of the encounter affected? If it is affected, what are we doing to address or control for this? The reality into which we are evolving in many settings is one wherein determining how to handle the potentially disruptive powers of technology is a complex and ever-moving task, often largely outside our personal control.”

The author writes about a dynamic at play with technology. We use it a little bit and find it to be “tasty” in ways our embodied lives aren’t. We begin using it to save time here and there, and suddenly we’re investing the moments we’ve saved back into our technology engagement. Getting comfortable with its use, we think that more use might offer us increasing amounts of free time, social connection, learning, and entertainment. Before we know it, we are spending large amounts of our lives in digital spaces. Instead of being a side dish or accompaniment to our embodied life, technology has taken center stage. Although engagement with technology has the capacity to result in increases in creativity, collaboration, socialization, visual reaction time, and visual spatial awareness, there is plenty of potential for less than ideal consequences as well. Disruptions to our physical bodies as well as to our intra- and interpersonal lives are well documented.

She writes that throughout our lifespan, we move across a developmental continuum. We are faced with experiences and opportunities that move us forward, push us backward, or interrupt our journey. Both “positive” (mastering the ability to function as an autonomous self, committing to relationships with both people and communities, finding one’s passion) and “negative” (significant loss or rejection, failures, discovery of limitations) life events hold opportunities for forward or reverse movement. Usually, we move in adaptive ways, making steps forward and backward and rolling with the challenges and obstacles.

The author includes a chart: Potential Disrupter + Character/Personality/Developmental Milestones Reached to This Point = one of the following:
*Confrontation of the disrupter/working through, allowing for forward movement
*Mindlessness in light of the disrupter, causing a sort of spinning of the wheels
*Fight/flight/freeze causing a developmental arrest

The author, who, remember, is a clinical psychologist, writes, “The constant presence of digital devices introduces a third party into human relationships with our most basic selves and the ‘selves’ of others.”

The author concludes this chapter as follows:
“If the goal of living is to continually grow and mature, we must take a long look at our own development and how it is helped or hindered. The very core of ourselves longs to engage. Our minds, guts, and bodies are shaped by the narrow or broad realities to which we expose them. More than ever, we must do this work with intention and by an act of will, or our trajectory will be narrow or limited. To avoid complacency, work through developmental arrests, and become healthy and whole, we must examine the nature of our journey, the ways in which we invest ourselves and our time, and the disrupters that influence both.”

Remember that this author is a clinical psychologist. If you were in counseling or therapy, this is likely the manner in which she would explain the problem to you.

DEVICED!

November 2, 2019

This post is the first in a series of posts based on the book by doreen dodgen-magee titled “DEVICED: Balancing Life and Technology in a Digital World.” More properly she is Dr. doreen dodgen-magee. She has a PsyD and is a psychologist with a private practice in Portland, Oregon. The first part of the book is titled “How Devices are Impacting Us.” Dr. dodgen-magee begins, “In the recent past, the acronym ‘IRL’ has come to stand for the phrase ‘in real life.’ It refers to a person’s non-digitally based, fully analog, three-dimensional, in-one’s-own-and-actual self. Although the acronym isn’t that old, I think it’s time for retirement.” The reason is that our digital and embodied lives come together to create one real, whole life—a real life that includes both. Friends exist in both embodied and digital spaces; they include our clans, classmates, and support groups—even though we may never meet in what used to be called our ‘real lives.’ She writes, “We buy physical items in virtual shops, we learn important lessons and gain actual skills in digital spaces, and we carry in our pockets virtual assistants that often know us better than our embodied friends; when these assistants fail, we feel real frustration. All of life, including that lived in the digital domain, is real.”

Technology provides the ambient background noise of everyday life. For most people, regardless of personal choice, this background noise is inescapable, or escapable only with great effort. By using technology we invest part of our physical lives in digital spaces, making it so that, regardless of which space an action happens in, all our experiences are part of our “real lives.”

Dr. dodgen-magee writes, “I frequently think of the psychological concept of ‘leaking,’ when I consider these realities. Leaking, as I use it here, involves syphoning off just enough internal psychic pressure to make us comfortable staying exactly where we are instead of moving toward newness and growth.”

The goal of the tech industry in most things digital is to offer us both convenience and comfort. In optimal doses they help us and allow us to be productive, content, and available to life. However, if we exist in exclusively convenient and comfortable spaces, we lose appropriate motivation to undertake the kinds of risks and experiences that keep us growing and maturing. Too much convenience and comfort causes us to lose our edge. We stop feeling the nudge to persist and engage meaningfully. We begin to feel entitled and bored in the worst ways, pursuing hedonistic and narcissistic pleasure and validation. But when these conveniences are engaged to allow ourselves time to pursue experiences that will help us grow, benefit person-kind, and expand our horizons, this is good. We need to find the fine-line balance where we have enough convenience and comfort without having too much.

So that is the objective of this book: to find the balance between life and technology in a digital world.

Drain the Shallows

October 22, 2019

This is the ninth post in a series of posts on a book by Cal Newport titled “Deep Work: Rules for Focused Success in a Distracting World.” The fourth rule can best be captured by the following tip: Become Hard to Reach.

As for emails:

Make people who send you e-mail do more work. Have them elaborate on their request or refer them to another source.

Don’t respond. Newport provides the following examples of e-mails that don’t warrant a response:
*It’s ambiguous or otherwise makes it hard for you to generate a reasonable response.
*It’s not a question or proposal that interests you.
*Nothing really good would happen if you did respond and nothing really bad would happen if you didn’t.

So it’s quite simple, and this is why this post is so brief.

Quit Social Media

October 21, 2019

This is the eighth post in a series of posts on a book by Cal Newport titled “Deep Work: Rules for Focused Success in a Distracting World.” His third rule is Quit Social Media. The reason for this should be obvious by now. Social media rob people of their valuable attentional resources, increase the difficulty of trying to focus, and effectively preclude deep thinking.

Abruptly quitting social media might offend some friends and acquaintances. So it is wise to inform them you are quitting and provide your reasons for doing so.

One reason would be “The Any Benefit Approach to Network Tool Selection.” This states that you are justified in using a network tool if you can identify any possible benefit to its use, or anything you might possibly miss out on if you don’t use it.

You replace this with “The Craftsman Approach to Tool Selection: Identify the core factors that determine success and happiness in your professional and personal life. Adopt a tool only if its positive impacts on these factors substantially outweigh its negative impacts.”

Replace the superficial friends on social networks with close and rewarding friendships with a group of people who are important to you:
*Regularly take the time for meaningful connection with those who are most important to you (a long talk, a meal, joint activity).
*Give of yourself to those who are most important to you (making nontrivial sacrifices that improve their lives).

When you quit, explain your reasons for quitting. Some might find your reasons compelling. In this case, propose a support group for quitting. Ironically, it might be impossible to do this without technology, but if possible, try to do so.

Bad for Business, Good for You

October 18, 2019

This is the fourth post in a series of posts on a book by Cal Newport titled “Deep Work: Rules for Focused Success in a Distracting World.” The title of this post is identical to the title of a section in that book. He writes that deep work should be a priority in today’s business climate, yet it is not. Here are the reasons for this paradox: deep work is hard and shallow work is easier; in the absence of clear goals for your job, the visible busyness that surrounds shallow work becomes self-preserving; and our culture has developed a belief that if a behavior is related to “the Internet,” then it’s good, regardless of its impact on our ability to produce valuable things. All of these trends are enabled by the difficulty of directly measuring the value of depth or the cost of ignoring it.

Newport continues, “If you believe in the value of depth, this reality spells bad news for businesses in general, as it’s leading them to miss out on potentially massive increases in value production. But for you, as an individual, good news lurks. The myopia of your peers and employers uncovers a great personal advantage. Assuming the trends outlined here continue, depth will become increasingly rare and therefore increasingly valuable. Having just established that there’s nothing fundamentally flawed about deep work and nothing fundamentally necessary about the distracting behaviors that displace it, you can therefore continue with confidence with the ultimate goal of this book: to systematically develop your personal ability to go deep—and by doing so, reap great rewards.”

The problem here is whether your employers will allow you to go deep. A subsequent post will provide some tips for coping with your employer. But regardless of your job, going deep leads to a healthy memory. It involves heavy amounts of System 2 processing. This builds a cognitive reserve that greatly reduces the probability of suffering the behavioral or cognitive indications of Alzheimer’s or dementia. It should also lead you to a more satisfying personal life.

Obstacles to Deep Thinking

October 17, 2019

This is the third post in a series of posts on a book by Cal Newport titled “Deep Work: Rules for Focused Success in a Distracting World.” There is a curse called the culture of connectivity in both our work and so-called leisure worlds. This culture of connectivity is one in which a person is expected to read and respond to e-mails (and related communications) quickly. One’s workplace plays a role in this expectation, but in one’s personal life, this expectation is self-imposed.

In the business setting, the principle of least resistance holds that, without clear feedback on the impact of various behaviors on the bottom line, we will tend to reward behaviors that are easiest in the moment.

Unfortunately, this principle can also apply in our personal life. Rather than pursuing an activity that is self-enhancing, there is a strong temptation to do something easier, like answering emails or participating in social media.

It is also possible, in both our work and personal lives, to mistake busyness as a proxy for productivity.

This is how Nobel Prize-winning physicist Richard Feynman explained what work habits a professor adopts or abandons: “To do real good physics work, you do need absolute solid lengths of time…it needs a lot of concentration…if you have a job administering anything, you don’t have the time. So I have invented another myth for myself: that I’m irresponsible. I’m actively irresponsible. I tell everyone I don’t do anything. If anyone asks me to be on a committee for admissions, ‘no,’ I tell them. I’m irresponsible.”

The author, Newport, writes, “many knowledge workers want to prove that they’re a productive member of the team and are earning their keep, but they’re not entirely clear what this goal constitutes….many seem to be turning back to the last time when productivity was more universally observable: the industrial age.”

Newport writes, “In the absence of clear indicators of what it means to be productive and valuable in their jobs, many knowledge workers turn back toward an industrial indicator of productivity: doing lots of stuff in a visible manner.” In other words, they are using busyness as a proxy for productivity.

Newport writes about the warning provided by the late communications theorist at New York University Neil Postman. In the early 1990s, as the personal computer revolution first accelerated, Postman argued that our society was sliding into a troubling relationship with technology. He noted that we were no longer discussing the trade-offs surrounding new technologies, balancing the new efficiency against the new problems introduced. If it’s high tech, we begin to instead assume, then it’s good. Case closed.

Postman’s argument has appeared in prior HM posts. His argument is greatly amplified with the explosion in technology that has occurred. People want to get the latest smartphone because it is the latest, without considering whether the new functionality will be worthwhile. Social media is aggressively engaged without considering whether the value of being liked is worth the time being invested. True friends require time and commitment. Are superficial “likes” worth the loss of true friends?

Evgeny Morozov in his book “To Save Everything, Click Here” writes, “It’s this propensity to view ‘the internet’ as a source of wisdom and policy advice that transforms it from a fairly uninteresting set of cables and network routers into a seductive and exciting ideology—perhaps today’s uber-ideology.” In his critique, we’ve made “the internet” synonymous with the revolutionary future of business and government. To make your company more like “the Internet” is to be with the times, and to ignore these trends is to be the proverbial buggy-whip maker in an automotive age. We no longer see Internet tools as products released by for-profit companies, funded by investors hoping to make a return, and run by twenty somethings who are often making things up as they go along. Instead we’re quick to idolize these digital doodads as a signifier of progress and a harbinger of (dare I say it) a brave new world.

Understand that HM is not denigrating the new technology. Many of the posts under the category of Transactive Memory (go to healthymemory.wordpress.com to find it) express the tremendous potential the technology offers for cognitive growth and for collaboration among our fellow humans. Unfortunately, it appears that this potential has in large part been hijacked and used to nefarious ends.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Taking a Brief Break

September 30, 2019

And there is plenty to read. Go to healthymemory.wordpress.com

You can review the categories to find articles of interest or use the search block to search for posts on interesting topics.

Also go to https://centerhealthyminds.org

It is very interesting.

HM shall return.

Renaissance Now

June 21, 2019

This is the eleventh post based on a new book by Douglas Rushkoff titled “TEAM HUMAN.” The title of this post is identical to the title of the twelfth section of the book.

Rushkoff begins, “Built to enhance our essential interrelatedness, our digital networks could have changed everything. And the internet fostered a revolution, indeed. But it wasn’t a renaissance.

Revolutionaries act as if they are destroying the old and starting something new. More often than not, however, these revolutionaries look more like Ferris wheels: the only thing that’s truly revolving is the cast of characters at the top. The structure remains the same. So the digital revolution—however purely conceived—ultimately brought us a new crew of mostly male, white, libertarian technologists, who believed they were uniquely suited to create a set of universal rules for humans. But those rules—the rules of internet startups and venture capitalism—were really just the same old rules as before. And they supported the same sorts of inequalities, institutions, and cultural values.”

On the other hand, a renaissance is a retrieval of the old. Unlike a revolution, it makes no claim on the new. A renaissance is a rebirth of old ideas in a new context. People are becoming aware of the ways in which these networks and the companies behind them have compromised our relationships, our values, and our thinking, and this is opening us to the possibility that something much bigger is going on.

Rushkoff suggests comparing the leaps in art, science, and technology that occurred during the original Renaissance with those we are witnessing today.

Perhaps perspective painting was the most dramatic artistic development during the Renaissance. Artists learned how to render a three-dimensional image onto a flat, two-dimensional canvas. Rushkoff suggests that perhaps the hologram, which lets us reprint a fourth dimension of time on a flat plane, or virtual reality, which lets the viewer experience a picture as an immersive environment, are comparable.

During the Renaissance, European sailors learned to navigate the globe, dispelling the conception of a flat earth and launching an era of territorial conquest. In the twentieth century, we orbited and photographed our planet from space, launching a mindset of ecology and finite resources. The sonnet, a form of poetry that allowed for the first extended metaphors, was invented during the Renaissance. In the twentieth century we got hypertext, which allows anything to become a metaphor for anything else.
The printing press was invented during the Renaissance, which allowed the written word to be distributed to everyone. In the twentieth century we got the computer and the internet, which distribute the power of publishing to everyone.

The original Renaissance brought us from a flat world to one with perspective and depth. Our renaissance potentially brings us from a world of objects to one of connections and patterns. The world can be understood as a fractal, in which each piece reflects the whole. Nothing can be isolated or externalized since it’s already part of a larger system. Rushkoff concludes that the parallels are abundant and that this is our opportunity for a renaissance.

Rushkoff warns that a renaissance sans the retrieval of lost, essential values is just another revolution. He claims that the first individuals and organizations to capitalize on the digital era ignored the underlying values that their innovations could have retrieved. They erroneously assumed they were doing something absolutely new: disrupting existing hierarchies and replacing them with something or someone better, which was usually themselves. He claims that the early founders merely changed the ticker symbols on Wall Street from old tech companies to new tech companies, and the medium to display them from paper tape to LEDs.

Rushkoff writes, “The original Renaissance, for instance, retrieved the values of ancient Greece and Rome. This was reflected not just in the philosophy, aesthetics, and architecture of the period, but in the social agenda. Central currency favored central authorities, nation-states, and colonialism. These values had been lost since the fall of the Roman Empire. The Renaissance retrieved those ideals through its monarchies, economics, colonialism, and applied science.”

He asks what values can be retrieved by our renaissance. He suggests the values that were lost or repressed during the last one: environmentalism, women’s rights, peer-to-peer economics, and localism. He sees the over-rationalized, alienating approach to science being joined by the newly retrieved approaches of holism and connectedness. He sees peer-to-peer networks and crowdfunding replacing the top-down patronage of the Renaissance, retrieving a spirit of mutual aid and community.

Unfortunately, he writes that possibilities for renaissance are lost as our openness to fundamental change creates footholds for those who would exploit us. Innovations are instrumentalized in pursuit of short-term profit, and retrieved values are ignored or forcibly quashed. Without retrieval, all our work and innovations are just research and development for the existing repressive systems. The commercial uses for a technology tend to emerge only after it has been around for a while.

He concludes this section by noting that the relationship between individuals and society is not a zero-sum game. He writes, “Humans, at our best, are capable of embracing seeming paradox. We push through the contradiction and find a dynamic sensibility on the other side. Each of these movements depends on our comfort with what we could call a fractal sensibility, the notion that each tiny part of a system echoes the shape and structure of the whole. Just as the veins within the leaf of a single fern reflect the branches, trees, and structure of an entire forest, the thoughts and intentions of a single individual reflect the consciousness of the whole human organism. The key to experiencing one’s individuality is to perceive the way it is reflected in the whole and, in turn, resonates with something greater than oneself.”

Economics

June 17, 2019

This is the seventh post based on a new book by Douglas Rushkoff titled “TEAM HUMAN.” The title of this post is identical to the title of the seventh section of this book.
Rushkoff writes, “What we now think of as capitalism was born in the late Middle Ages, in the midst of a period of organic economic growth. Soldiers had just returned from the Crusades, having opened up new trade routes and bringing back innovations from foreign lands. One of them, from the Moorish bazaar, was the concept of ‘market money.’”

Prior to this time, European markets operated mostly through the direct exchange of goods, that is, barter. Gold coins were too scarce and valuable to spend on bread, and anyone who did have gold hoarded it. Market money let regular people sell their goods to each other. It was often issued in the morning and then cashed in at the close of trading. Each unit of currency could represent a loaf of bread or a head of lettuce, and would be used by the seller of those items as a way of priming the pump for the day’s trade. The baker could go out early and buy the things he needed, using coupons good for a loaf of bread. Those coupons would slowly make their way back to the baker, who would exchange them for loaves of bread. This was an economy geared for the velocity of money, not the hoarding of capital. It distributed wealth so well that many former peasants rose to become the new merchant middle class. They worked for themselves, fewer days per week, with greater profits, and in better health than Europeans had ever enjoyed and, as Rushkoff notes, would not enjoy again for many centuries.
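
To make the idea of monetary velocity concrete, here is a toy Python sketch. It is not from Rushkoff’s book, and the quantities are invented; it simply contrasts currency that is freely re-spent with currency that is mostly hoarded.

def total_trade(money_supply, spend_fraction, rounds):
    """Total value of goods exchanged when each holder re-spends
    spend_fraction of whatever money they receive, over several hand-offs."""
    trade = 0.0
    in_circulation = float(money_supply)
    for _ in range(rounds):
        spent = in_circulation * spend_fraction
        trade += spent            # every spend is a purchase of real goods
        in_circulation = spent    # the seller now holds it and may re-spend it
    return trade

# 100 units of "market money" that people freely re-spend...
print(round(total_trade(money_supply=100, spend_fraction=0.95, rounds=20)))  # ~1219
# ...versus 100 units of hoarded gold that rarely changes hands.
print(round(total_trade(money_supply=100, spend_fraction=0.10, rounds=20)))  # ~11

The same hundred units of currency support roughly a hundred times more exchange when they keep moving, which is the distinction the post draws between market money and hoarded capital.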

The aristocracy disliked this egalitarian development. When the peasants became self-sufficient, feudal lords lost their ability to extract value from them. Rushkoff notes that these wealthy families hadn’t created value in centuries, so they needed to change the rules of business to stem this rising tide of wealth as well as their own demise.

So the aristocracy came up with two main innovations. The chartered monopoly was the first. It made it illegal for anyone to do business in a sector without an official charter from the king. So if you were not the king’s selected shoemaker, you had to close your business and become an employee of someone who was. Rushkoff writes, “The American Revolution was chiefly a response to such monopoly control by the British East India Company. Colonists were free to grow cotton but forbidden from turning it into fabric or selling it to anyone but the company.” Clearly the colonists were being exploited. The East India Company transported the cotton back to England, where it was made into fabric, then shipped it back to America and sold it to the colonists. This monopoly charter was the progenitor of the modern corporation.

Central currency was the other main innovation. Market money was declared illegal; its use was punishable by death. People who wanted to transact had to borrow money from the central treasury, at interest. This allowed the aristocracy, who had money, to make money simply by lending it. Local markets collapsed. Money, which had been a utility to promote the exchange of goods, instead became a way of extracting value from commerce.
Rushkoff writes, “That growth mandate remains with us today. Corporations must grow in order to pay back their investors. The companies themselves are just the conduits through which the operating system of central currency can execute its extraction routines. With each new round of growth, more money and value is delivered up from the real world of people and resources to those who have the monopoly on capital. That’s why it’s called capitalism.”
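
A back-of-the-envelope sketch helps show why money lent into existence at interest carries a growth mandate. The framing is HM’s, not Rushkoff’s, and the figures are purely hypothetical: borrowers collectively owe more than was ever issued, so the gap can only be closed by still more borrowing.

principal_issued = 1_000_000   # hypothetical amount of money lent into existence
interest_rate = 0.05           # hypothetical 5% annual interest

for year in range(1, 6):
    owed = principal_issued * (1 + interest_rate) ** year
    shortfall = owed - principal_issued
    print(f"Year {year}: {owed:,.0f} owed, but only {principal_issued:,} exists; "
          f"{shortfall:,.0f} must come from new loans, i.e., growth")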

Rushkoff continues, “But corporations are not people. They are abstract, and can scale up infinitely to meet the demands of the debt-based economy. People can only work so hard or consume so much before we reach our limits. We are still part of the organic world, and subject to the laws of nature. Corporations know no such bounds, making them an awful lot like the digital technologies they are developing and inhabiting.”

Continuing further, “The pioneering philosopher of the political economy, Adam Smith, was well aware of the abstract nature of corporations—particularly large ones—and stressed that regulations would be necessary to keep them from destroying the marketplace. He argued that there are three factors of production, which must all be recognized as equally important: the land, on which we grow crops or extract resources; the labor, who till the soil or manufacture the goods; and, finally, the capital—either the money invested or the tools and machines purchased. He worried that in an abstract growth-based economy, the priorities of the capital would quickly overtake the other two, and that this, in turn, would begin to favor the largest corporate players over the local, human-scaled enterprises that fuel any real economy.”

Capital can keep growing, unlike land and humans. Moreover, it has to, because a growth-based economy always requires more money. And capital accomplishes this miracle growth by continually abstracting itself. If investors don’t want to wait three months for a stock to increase in value, they can use a derivative—an abstraction—to purchase the future stock now. Should that not be enough temporal compression, one can purchase a derivative of that derivative, and so on. Today, derivatives trading outpaces trading of real stocks. The New York Stock Exchange was actually purchased by its derivatives exchange in 2013. So the stock exchange, which is itself an abstraction of the real marketplace of goods and services, was purchased by its own abstraction.

In 1960, the CEO of a typical company made about 20 times as much as its average worker. Today, CEOs make 271 times the salary of the average worker. They probably would like to take less and share with their workers, but they don’t know how to give up the wealth safely. Thomas Jefferson once described the paradox of wanting to free his slaves but fearing their retribution if he did: “it’s like holding a wolf by the ear.”

Rushkoff ends this section as follows, “So with the blessings of much of the science industry and its collaborating futurists, corporations press on, accelerating civilization under the false premise that because things are looking better for the wealthiest beneficiaries, they must be better for everyone. Progress is good, they say. Any potential impediment to the frictionless ascent of technological and economic scale, such as the cost of labor, the limits of a particular market, the constraints of the planet, ethical misgivings, or human frailty—must be eliminated. The models would all work if only there weren’t people in the way. That’s why capitalism’s true believers are seeking someone, or better something, to do their bidding with greater intelligence and less empathy than humans.”

Mechanomorphism

June 16, 2019

This is the sixth post based on a new book by Douglas Rushkoff titled “TEAM HUMAN.” The title of this post is identical to the title of the sixth section of this book. Rushkoff begins, “When autonomous technologies appear to be calling all the shots, it’s only logical for humans to conclude that if we can’t beat them, we may as well join them. Whenever people are captivated—be they excited or enslaved—by a new technology, it becomes their new role model, too.”

“In the Industrial Age, as mechanical clocks dictated human time, we began to think of ourselves in very mechanical terms. We described ourselves as living in a ‘clockwork universe,’ in which the human body was one of the machines.” Mechanical metaphors emerged in our language. We needed to grease the wheels, crank up the business, dig deeper, or turn a company into a well-oiled machine.

In the digital age we view our world as computational. Humans are processors; everything is data. Computational metaphors now fill our speech: an idea “does not compute”; someone multitasks so well he is capable of interfacing with more than one person in his network at a time.

Projecting human qualities onto machines is called anthropomorphism, but we are projecting machine qualities onto humans. Seeing a human being as a machine or computer is called mechanomorphism. This is not just treating machines as living humans; it’s treating humans as machines.

When we multitask we are assuming that, just like computers, we can do more than one task at a time. But research has shown, and has been related in healthy memory blog posts, that when we multitask, our performance suffers. Sometimes this multitasking can even kill us, as when we talk, or even worse text, while driving.

It is both curious and interesting that drone pilots, who monitor and neutralize people by remote control from thousands of miles away, experience higher rates of post-traumatic stress disorder than “real” pilots. An explanation for these high rates of distress is that, unlike regular pilots, drone pilots often observe their targets for weeks before killing them. These stress rates remain disproportionately high even for missions in which the pilots had no prior contact with the victims.

Rushkoff writes that a more likely reason for the psychic damage is that these drone pilots are trying to exist in more than one location at a time. They might be in a facility in Nevada operating a lethal weapon system deployed on the other side of the planet. After dropping ordnance and killing a few dozen people, the pilots don’t land their planes, climb out, and return to the mess hall to debrief over beers with their fellow pilots. They just log out, get into their cars, and drive home to the suburbs for dinner with their families. It’s like being two different people in different places in the same day. But none of us is two people or can be in more than one place. Unlike a computer program, which can be copied and run from several different machines simultaneously, human beings have one “instance” of themselves running at a time.
Rushkoff writes, “We may want to be like the machines of our era, but we can never be as good at being digital devices as the digital devices themselves. This is a good thing, and maybe the only way to remember that by aspiring to imitate our machines, we leave something even more important behind: our humanity.”

The smartphone, along with all the other smartphones, creates an environment: a world where anyone can reach us at any time, where people walk down public sidewalks in private bubbles, and where our movements are tracked by GPS and stored in marketing and government databases for future analysis. In turn, these environmental factors promote particular states of mind, such as paranoia about being tracked, a constant state of distraction, and fear of missing out.

The digital media environment impacts us collectively, as an economy and as a society. Investors’ expectations of what a stock’s chart should look like, given the breathtaking pace at which a digital company can reach “scale,” have changed, as have expectations of how a CEO should surrender the long-term health of a company for the short-term growth of shares. Rushkoff notes that the internet’s emphasis on metrics and quantity over depth and quality has engendered a society that values celebrity, sensationalism, and numeric measures of success. The digital media environment expresses itself in the physical environment as well; the production, use, and disposal of digital technologies depletes scarce resources, expends massive amounts of energy, and pollutes vast regions of the planet.

Rushkoff concludes, “Knowing the particular impacts of a media environment on our behaviors doesn’t excuse our complicity, but it helps us understand what we’re up against—which way things are tilted. This enables us to combat their effects, as well as the darker aspects of our own nature that they provoke.”

If one assumes that humanity is a purely mechanistic affair, explicable entirely in the language of data processing, then what is the difference whether human beings or computers are doing that processing? Transhumanists hope to transcend biological existence. Kurzweil’s notion of a singularity in which human consciousness is uploaded into a computer has been written off in previous posts. The argument that these previous posts have made is that biology and silicon are two different media that operate in different ways. Although they can interact, they cannot become one.

Rushkoff concludes, “It’s not that wanting to improve ourselves, even with seemingly invasive technology, is so wrong. It’s that we humans should be making active choices about what it is we want to do to ourselves, rather than letting the machines, or the markets propelling them, decide for us.”

The Digital Media Environment

June 15, 2019

This is the fifth post based on a new book by Douglas Rushkoff titled “TEAM HUMAN.” The title of this post is identical to the title of the fifth section of this book. Rushkoff writes that whoever controls media controls society.

“Each new media revolution appears to offer people a new opportunity to wrest control from an elite few and reestablish the social bonds that media has compromised.” But the people have always remained one entire media revolution behind those who would dominate them.

Rushkoff cites the example of ancient Egypt that was organized under the presumption that the pharaoh could directly hear the words of the gods, as if he were a god himself. On the other hand, the masses could not hear the gods at all; they could only believe.

The invention of text might have led to a literate culture. Instead, text was used just to keep track of possessions and slaves. When writing eventually was used by religion, only the priests could read the texts and understand the Hebrew or Greek in which they were written. The masses could hear the Scriptures being read aloud, and thus hear the putative words of God, but literacy itself remained a capability reserved for the priestly elite.

During the Renaissance when the printing press was invented, the people gained the ability to read, but only the king and his selected allies could produce texts. Similarly, radio and television were controlled by corporations or repressive states. So people could only listen or watch passively.

Rushkoff writes, “The problem with media revolutions is that we too easily lose sight of what is truly revolutionary. By focusing on the shiny new toys and ignoring the human empowerment potentiated by these new media—the political and social capabilities they are retrieving—we end up surrendering them to the powers that be. Then we and our new inventions become mere instruments for some other agenda.”

The early internet enabled new conversations between people who might never have connected in real life. The networks compressed distance between physicists in California, hackers in Holland, philosophers in eastern Europe, and animators in Japan. These early discussion platforms leveraged the fact that unlike TV or the telephone, internet messaging didn’t happen in real time. Users would download net discussions, read them on their own time, offline, and compose a response after an evening of thought and editing. Then they would log back onto the net, upload the contribution, and wait to see what others thought. The internet was a place where people sounded and acted smarter than they do in real life. This was a virtual space where people brought their best selves, and where the high quality of the conversations was so valued that communities governed these spaces the way a farmer’s cooperative protects a common water supply. To gain access to the early internet, users had to digitally sign an agreement not to engage in any commercial activity. Rushkoff writes, “Even the corporate search and social platforms that later came to monopolize the net originally vowed never to allow advertising because it would taint the humanistic cultures they were creating.”

Consider how much better this was, when people actually thought for a time rather than responding immediately. Previously, deliberate System 2 processes were involved. Currently, responses are immediate, emotional System 1 reactions.

Rushkoff writes, “Living in a digitally enforced attention economy means being subjected to a constant assault of automated manipulations. Persuasive technology is a design technology taught and developed at some of America’s leading universities and then implemented on platforms from e-commerce sites and social networks to smartphones and fitness wristbands. The goal is to generate ‘behavioral change’ and ‘habit formation,’ most often without the user’s knowledge or consent. Behavioral design theory holds that people don’t change their behavior because of shifts in their attitudes and opinions. On the contrary, people change their attitudes to match their behaviors. In this model, we are more like machines than thinking, autonomous beings.”

Much of this has been discussed in previous healthy memory posts, especially those based on the book “Zucked.”

Rushkoff writes, “Instead of designing technologies that promote autonomy and help us make informed decisions, the persuasion engineers in charge of our biggest digital companies are hard at work creating interfaces that thwart our thinking and push us into an impulsive response where thoughtful choice—or thought itself—are nearly impossible.” This explains how Russia was able to successfully promote its own choice for President of the United States.

Previous healthy memory blog posts have argued that we are dumber when we are using smartphones and social media. We understand and retain less information. We comprehend with less depth, and make impulsive decisions. We become less capable of distinguishing the real from the fake, the compassionate from the cruel, and the human from the non-human. Rushkoff writes, “Team Human’s real enemies, if we can call them that, are not just the people who are trying to program us into submission, but the algorithms they’ve unleashed to help them do it.”

Rushkoff concludes this section as follows: “Human ideals such as autonomy, social contact, and learning are again written out of the equation, as the algorithms’ programming steers everyone and everything toward instrumental ends. When human beings are in a digital environment they become more like machines; entities composed of digital materials—the algorithms—become more like living entities. They act as if they are our evolutionary successors. No wonder we ape their behavior.”

Figure and Ground

June 14, 2019

This is the fourth post based on a new book by Douglas Rushkoff titled “TEAM HUMAN.” The title of this post is identical to the title of the fourth section of this book. Rushkoff begins, “Human inventions often end up at cross purposes with their original intention—or even at cross purposes with humans, ourselves. Once an idea or an institution gains enough influence it changes the basic landscape. Instead of the invention serving people in some way, people spend their time and resources serving it. The original subject becomes the new object. Or, as we may effectively put it, the figure becomes the ground.”

The figure is that on which we focus; the ground is the background. And the perception of figure or ground can change in different circumstances or cultures. Most westerners, when shown a picture of a cow in a pasture, will see a picture of a cow. On the other hand, most easterners will see a picture of a pasture. Their perceptions are so determined that people who see the figure may be oblivious to major changes in the background, and people who see the ground may not even remember what kind of animal was grazing there.

Rushkoff writes, “Neither perception is better nor worse, so much as incomplete. If the athlete sees herself as the only one that matters, she misses the value of her team—the ground on which she functions. If a company’s “human resources” officer sees the individual employee as nothing more than a gear in the firm, he misses the value and autonomy of the particular person, the figure.”

Consider money. It was originally invented to store value and enable transactions. Money was the medium for the marketplace’s primary function of value exchange. Money was the ground, and the marketplace was the figure. Today, the dynamic is reversed: the acquisition of money itself has become the central goal, and the marketplace just a means of realizing that goal. Money has become the figure, and the marketplace full of people has become the ground.

Rushkoff writes, “Understanding this reversal makes it easier to perceive the absurdity of today’s destructive form of corporate capitalism. Corporations destroy the markets on which they depend, or sell off their most productive divisions to increase the bottom line on their quarterly reports. That’s because the main product of a company is no longer whatever it provides to consumers, but the shares it sells to investors. The figure has become the ground.”

Rushkoff says that the true legacy of the Industrial Age is to get people out of sight, or out of the way, under the pretense of solving problems and making people’s lives easier. As an example, Rushkoff considers Thomas Jefferson’s famous invention, the dumbwaiter. We think of it as a convenience: instead of carrying food and wine from the kitchen up to the dining room, the servants could place items into a small lift and convey them upstairs by pulling on ropes. Food and drink appeared as if by magic. But the purpose of the dumbwaiter had nothing to do with saving effort. Its true purpose was to hide the grotesque crime of slavery.

Rushkoff contends that in the Industrial Age there were many mechanical innovations, but in very few cases did they actually make production more efficient. They simply made human skill less important, so that laborers could be paid less.

Rushkoff contends that today Chinese laborers “finish” smartphones by wiping off any fingerprints with a highly toxic solvent proven to shorten the workers’ lives. That’s how valuable it is for consumers to believe that their devices have been assembled by magic rather than by the fingers of underpaid and poisoned children. Creating the illusion of no human involvement actually costs human lives.

The mass production of goods requires mass marketing, which can be just as dehumanizing. Once products were removed from the people who made them, mass production separated the consumer from the producer, and replaced this human relationship with the brand. So where people once purchased oats from the miller down the block, now consumers go to the store and buy a box shipped from a thousand miles away. The brand image—in this case a Smiling Quaker—substitutes for the real human relationship, and is carefully designed to appeal to us more than a living person would.

When consumer culture was born, media technologies became the main way to persuade people to desire possessions over relationships and social status over social connections. The less fruitful the relationships in a person’s life, the better a target that person was for synthetic ones, thus undoing the social fabric.

Rushkoff writes, “Since the Industrial Age, technology has been used as a way to make humans less valued and essential to labor, business, and culture. This is the legacy that digital technology inherited.”

Rushkoff concludes this section as follows: “…the new culture of contact enabled by digital networks was proving unprofitable and was replaced by an industry-wide ethos of “content is king.” Of course, content was not the message of the net; the social contact was. We were witnessing the first synaptic transmissions of a collective attempting to reach new levels of connectedness and wake itself up. But that higher goal was entirely unprofitable, so conversations between actual humans were relegated to the comments sections of articles or, better, the reviews of products. If people were going to use the networks to communicate, it had better be about a brand. Communities became affinity groups, organized around purchases rather than any sort of mutual aid. Actual “social” media was only allowed to flourish once the contact people made with one another became more valuable as data than the cost in missed shopping or viewing time. Content remained king, even if human beings were now that content.”

Learning to Lie

June 13, 2019

This is the third post based on a new book by Douglas Rushkoff titled “TEAM HUMAN.” The title of this post is identical to the title of the third section of this book. Rushkoff begins this section, “It doesn’t take much to tilt a healthy social landscape toward an individualist or repressive one. A scarcity of resources, a hostile neighboring tribe, a warlord looking for power, an elite seeking to maintain its authority, or a corporation pursuing a monopoly all foster antisocial environments and behavior. Socialization depends on both autonomy and interdependency; emphasizing one at the expense of the other compromises the balance.”

One desocializing strategy emphasizes individualism. The social group is broken down into atomized individuals who fight for their right to fulfillment through professional advancement or personal consumption. This system is often sold as freedom. But these competing individuals never find true autonomy because they lack the social fabric in which to exercise it.

Another path to desocialization emphasizes conformity. People don’t need to compete because they are all the same. Such a system mitigates strident individualism, but it does so through obedience, usually to a supreme ruler or monopoly party. Conformity is not truly social, because people look up for direction rather than to one another. Because there is no variation, mutation, or social fluidity, conformity ends up being just as desocializing as individualism.

Rushkoff concludes that both approaches depend on separating people from one another and undermining our evolved social mechanisms in order to control us. He continues, “Any of our healthy social mechanisms can become vulnerabilities: what hackers would call “exploits” for those who want to manipulate us. For example, when a charity encloses a free “gift” of return address labels along with their solicitation for a donation, they are consciously manipulating our ancient, embedded social bias for reciprocity. The example is trivial, but the pattern is universal. We either succumb to the pressures with the inner knowledge that something is off, or we recognize the ploy, reject the plea, and arm ourselves against such tactics in the future. In either case, the social landscape is eroded. What held us together now breaks us apart.”

Spoken language can be regarded as the first communication technology. Language has many admirable capabilities. But before language, there was no such thing as a lie. Rushkoff writes that the closest thing to lying would have been a behavior such as hiding a piece of fruit, but speech created a way of actively misrepresenting reality to others.

Rushkoff writes that when we look at the earliest examples of the written word, it was used mostly to assert power and control. “For the first five hundred years after its invention in Mesopotamia, writing was used exclusively by her kings and priests to keep track of the grain and labor they controlled. Whenever writing appeared, it was accompanied by war and slavery. For all the benefits of the written word, it is also responsible for replacing an embodied, experiential culture with an abstract administrative one.”

Rushkoff continues, “The Gutenberg printing press extended the reach and accessibility of the written word throughout Europe, and promised a new era of literacy and expression. But the printing presses were tightly controlled by monarchs, who were well aware of what happens when people begin reading one another’s books. Unauthorized presses were destroyed and their owners executed. Instead of promoting a new culture of ideas, the printing press reinforced control from the top.”

Radio also began as a peer-to-peer medium, such as ham radio. But as corporations lobbied to monopolize the spectrum and governments sought to control it, radio devolved from a community space to one dominated by advertising and propaganda.

Hitler used this new medium of radio to make himself appear to be anywhere and everywhere at once. No single voice had ever permeated German society previously, and the sense of personal connection it engendered allowed Hitler to create a new sort of rapport with millions of people. The Chinese installed 70 million loudspeakers to broadcast what they called “Politics on Demand” through the nation. Rwandans used radio as late as 1993 to reveal the location of ethnic enemies so that mobs of loyalists with machetes could massacre them.

Initially television was viewed as a great connector and educator. However, marketing psychologists saw in it a way to mirror a consumer’s mind and insert within it new fantasies and specific products. Programming referred to the programmability not of the channel, but of the viewer.

There have been so many previous healthy memory blog posts on the problems of social media and cybernetic warfare, which can be found under the category of Transactive Memory, that little more will be written on these general topics.

But a few words will be written on memes and memetics. Rushkoff writes, “An increasingly competitive media landscape favors increasingly competitive content. Today, anyone with a smartphone, web page, or social media account can share their ideas. If that idea is compelling, it might be replicated and spread to millions. And so the race is on. Gone are the collaborative urges that characterized embodied social interaction. In their place comes another bastardized Darwinian ideal: a battle for the survival of the fittest meme.”

Rushkoff continues, “The amazing thing is that it doesn’t matter what side of an issue people are on for them to be infected by the meme and provoked to replicate it. ‘Look what this person said’ is reason enough to spread it. In the contentious social media surrounding elections the most racist and sexist memes are reposted less by their advocates than by their outraged opponents. That’s because memes do not compete for dominance by appealing to our intellect, our compassion, or anything to do with our humanity. They compete to trigger our most automatic impulses.”
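
A simple branching-process sketch in Python (HM’s illustration, not Rushkoff’s, with made-up numbers) shows why it doesn’t matter whether the sharers are advocates or outraged opponents: if each person who sees a meme passes it on to an average of a few others, reach grows geometrically either way.

def meme_reach(r_share, hops):
    """Approximate people reached if each viewer shares with r_share others,
    counted over several rounds of sharing (duplicates ignored for simplicity)."""
    reach, newly_exposed = 1, 1
    for _ in range(hops):
        newly_exposed *= r_share
        reach += newly_exposed
    return reach

print(meme_reach(r_share=2, hops=10))   # 2,047 people after 10 hops
print(meme_reach(r_share=3, hops=10))   # 88,573 people after 10 hops

Whether the extra share comes from a fan or from someone posting “look what this person said,” it adds to the sharing rate all the same.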

Rushkoff concludes this section as follows: “…our extension of our social reality into a new medium requires that we make a conscious effort to bring our humanity along with us. We must protect our social human organism from the very things we have created.”

The Incel Problem

June 9, 2019

HM must confess to being asleep at the wheel. Previous posts have written about the new technology resulting in about 1 in 3 American men aged 18 to 34 being unemployed, living at home, and essentially divorced from society. But HM only learned from reading Christine Emba’s column, “Men are in trouble, ‘Incels’ are proof,” in the 8 June 2019 issue of the Washington Post that “incel” is short for “involuntarily celibate.” These are young men who have come to define themselves by their inability to find a sexual or romantic partner. Unfortunately, men who identify themselves as being #ForeverAlone have gathered online in forums such as Reddit to trade their stories of woe.

These communities are self-reinforcing. Members believe that their looks or personal traits have consigned them to lifelong loneliness, and similarly downbeat peers are always willing to add more fuel to that fire. They have gone on to develop elaborate, and elaborately misogynistic, theories to blame others for their plight. These theories are centered on the idea that women are shallow, stupid, and cruel—exclusively choosing only a handful of the most attractive men to be with and disdaining the rest. All men deserve a chance with women, the incels tell themselves, but some men have all the luck while they get left out. If there were a competition for self-fulfilling prophecies, this one would likely win.

Ms. Emba writes, “…the incel subculture has become not just self-reinforcing but self-radicalizing, often with tragic outcomes. At its most horrifying extremes, the self-described incels have taken their anger out on the women they believe are refusing them. At least two mass shooters have left behind manifestos identifying themselves as adhering to incel ideology and explaining their actions as taking revenge on the world that hasn’t given them the women they think they deserve.” It is clear that these incels are on a doomed quest that, at best, will lead to miserable lives and, at worst, will lead to imprisonment or death.

One of the unfortunate results of technology is that human connection in the real world has become rarer, and often feels more difficult than it used to be. Smartphones and gaming have been replacing face-to-face interactions that might force one to confront one’s social difficulties or develop a better understanding of the lives of others.

Incels need to understand that failure and rejection are necessary components of living, and that resilience needs to be developed to successfully cope with life. Interventions need to be developed to confront these individuals with the need to change to a life of interacting face-to-face with fellow humans and to dealing with failure and rejection with resilience. Until an incel realizes the need to change, improvement in his condition is extremely unlikely to occur.

However, once he realizes the need to change, technology could be helpful. Discussion groups could provide advice on how to change and would provide further guidance on the need to change. Such groups could benefit from technology by being self-reinforcing and group reinforcing.

Reasons to Build a Healthy Hippocampus

June 8, 2019

This post is inspired by an article by M.R. O’Connor in the 6 June 2019 issue of the Washington Post titled “Here’s what gets lost when we rely on GPS.” The article cites a study published in Nature Communications in 2017 in which researchers asked participants to navigate a virtual simulation of London’s Soho neighborhood and monitored their brain activity, specifically in the hippocampus, which, as healthy memory blog readers know, is integral to spatial navigation. Amir-Homayoun Javadi, one of the study’s authors, said, “The hippocampus makes an internal map of the environment and this map becomes active when you are engaged in navigating and not using GPS.”

The hippocampus is highly important. It allows us to orient in space and know where we are by creating cognitive maps. It allows us to both store and retrieve personal memories of experience. Neuroscientists believe the hippocampus also gives us the ability to imagine the future. Again, this is something healthy memory blog readers should know: one of the principal purposes of memory is mental time travel, so that we can travel back in time to review our past and think of possible actions we can take in the future.

Research has long shown that the hippocampus changes as a function of learning. Again healthy memory blog readers should remember the study of London taxi drivers who have greater gray-matter volume in the hippocampus due to memorizing the city’s labyrinthine streets. Atrophy in the hippocampus is linked to devastating conditions, such as post-traumatic stress disorder and Alzheimer’s disease. Stress and depression dampen neurogenesis—the growth of new neurons —in the hippocampal circuit.

Javadi said the conclusion he draws from recent research is that “when people use tools such as GPS, they tend to engage less with navigation. Therefore, brain area responsible for navigation is less used, and consequently their brain areas involved in navigation tend to shrink.”

Neuroscientist Veronique Bohbot has found that using spatial-memory strategies for navigation correlates with increased gray matter in the hippocampus at any age. She thinks that interventions focused on improving spatial memory by exercising the hippocampus—paying attention to the spatial relationships of places in our environment—might help offset age-related cognitive impairments or even neurodegenerative diseases.

She continues, “If we are paying attention to our environments, we’re stimulating our hippocampus, and a bigger hippocampus seems to be protective against Alzheimer’s disease. When we get lost, it activates the hippocampus, it gets us completely out of the habit mode. Getting lost is good.” It can be a good thing if done safely.

M.R. O’Connor writes, “Saturated with devices, children today might grow up to see navigation from memory or a paper map as anachronistic as rote memorization or typewriting. But for them especially, independent navigation and the freedom to explore are vital to acquiring spatial knowledge that may improve hippocampal function. Turning off the GPS and teaching them navigational skills could have enormous cognitive benefits later in life.”

M.R. O’Connor concludes the article, “Over the past four years, I’ve spoken with master navigators from different cultures who show me that performing navigation is a powerful form of engagement with the environment that can inspire a greater sense of stewardship. Finding our way on our own—using perception, empirical observation and problem solving skills—forces us to attune ourselves to the outside world. And by turning our attention to the physical landscape that sustains and connects us, we can nourish “topophilia,” a sense of attachment and love for space. You’ll never get that from waiting for a satellite to tell you how to find a shortcut.”

POLITIFACT

June 5, 2019

The problem of misinformation is acute, and currently the best means of addressing it is POLITIFACT (politifact.com). POLITIFACT is a winner of the Pulitzer Prize. It is said that all politicians lie, and that is the truth. It is also likely that practically all humans lie. What one discovers in POLITIFACT is that all politicians also tell the truth, if only rarely. Moreover, there is not a strict dichotomy between true and false. Rather, there are shades of truth and falsehood. That is why POLITIFACT uses a Truth-O-Meter that ranges as follows:

True
Mostly True
Half True
Mostly False
False
Pants-on-Fire, which is a rating that means the item is flamingly false.

You can look up specific issues and see a score card (the breakdown of the ratings) as well as a sampling of the individual ratings of prominent individuals along with their statements. HM found this feature to be especially useful.

Each of these ratings is justified with a prose passage explaining the basis for the rating. So one doesn’t need to accept the rating, but the justification should be read to understand why the rating was given.

One can access different editions. There is a national edition; a PunditFact edition, which addresses various pundits; and a health check edition, which addresses health topics. There is a Facebook-Hoaxes edition, which is especially needed. There are editions specific to states, but not all states are available yet.

Certain individuals merit special editions. Visit the website to see who they are.
There is a Promises heading that has a Trump-o-meter and an Obameter.

There is also a Pants-on-Fire Heading that allows you to examine the most egregious lies.

It is highly recommended to visit this website on a regular basis and spend as little or as much time as you want.

The problem of misinformation is chronic, and POLITIFACT provides the best means of dealing with it.

Stanford Helped Pioneer Artificial Intelligence

May 21, 2019

The title of this post is identical to the first half of the title of an article by Elizabeth Dwoskin in the 19 March 2019 issue of the Washington Post. The second half of the title is “Now it wants humans at the core.” A Stanford University scientist coined the term artificial intelligence (AI), and advancements have continued at the university, including the first autonomous vehicle.

Silicon Valley is facing a reckoning over how technology is changing society. Stanford wants to be at the forefront of a different type of innovation, one that puts humans and ethics at the center of the booming field of AI. The university is launching the Stanford Institute for Human-Centered Artificial Intelligence (HAI). It is intended as a think tank that will be an interdisciplinary hub for policymakers, researchers and students who will go on to build the technologies of the future. The goal is to inculcate in the next generation a more worldly and humane set of values than those that have characterized it so far—and guide politicians to make more sophisticated decisions about the challenging social questions wrought by technology.

Fei-Fei Li, an AI pioneer and former Google vice president who is one of the two directors of the new institute, said, “I could not have envisaged that the discipline I was so interested in would, a decade and a half later, become one of the driving forces of the changes that humanity will undergo. That realization became a tremendous sense of responsibility.”

The goal is to raise more than $1 billion. Its advisory panel is a who’s who of Silicon Valley titans that includes former Google executive chairman Eric Schmidt, LinkedIn co-founder Reid Hoffman, former Yahoo chief executive Marissa Mayer and co-founder Jerry Yang, and the prominent investor Jim Breyer. Bill Gates will keynote its inaugural symposium.

The ills and dangers of AI have become apparent. New statistics emerge about the tide of job loss wrought by the technology, from long-haul truckers to farm workers to dermatologists. Elon Musk called AI “humanity’s existential threat” and compared it to “summoning the demon.”

Serious problems were raised in the series of healthy memory posts based on the book “Zucked.” The healthy memory posts based on the book “LikeWar” raised additional problems. Both these problems could be addressed with IA. Indeed, IA is already being used to address the issues in “LikeWar.” Regarding the problems raised in the book “Zucked,” rather than hoping that Facebook will self-police or trying to legislate against Facebook’s problematic practices, AI could police all these social networks online and flag problematic practices.

It is the position of this blog to advocate that AI be used to enhance human intelligence—that is, intelligence augmentation (IA). This is especially important in areas where human intelligence is woefully lacking. Unfortunately, humans, who are regarded as social animals, have difficulties reconciling conflicting political and religious beliefs. Artificial intelligence could be used here in an intelligence augmentation (IA) role. Given polarized beliefs, dead ends are reached. IA could suggest different ways of framing problematic issues. Lakoff’s ideas, which were promoted in the series of healthy memory blog posts under the rubric “Linguistics and Cognitive Science in the Pursuit of Civil Discourse,” could provide the initial point of departure. Learning would take place, and these ideas would be refined further, resulting in disagreeing parties being surprised at their ultimate agreement.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Alternative Futures 3

May 20, 2019

This is another post motivated by “Machines of Loving Grace: The Quest for Common Ground” by John Markoff. Both AI (Artificial Intelligence) and IA (Intelligent Augmentation) should be used where they are most needed. One of the negative effects of technology has been to increase polarization. It is even being used in warfare and in altering elections that are ostensibly free.

So AI and IA should both be put to work on these problems. HM is only aware of some very limited work in this area. He remembers one project addressing collaboration within the military. Unlike most other occupations, the military wear their rank on their uniforms. So this experiment involved collaboration in which the participants were anonymous; there was no means of assessing relative rank. The project seemed to be going quite well. Then one of the participants started using all caps in his entries. This was the ranking officer, who felt he was being ignored.

One would begin by using IA to address this problem, and IA should be used to the extent possible. However, at some point there might be a need to let AI take over. Perhaps, as in the case of the Forbin Project’s Colossus, it would succeed.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Alternative Futures 2

May 19, 2019

This is another post motivated by “Machines of Loving Grace: The Quest for Common Ground” by John Markoff. In this future, AI, including robots and cyborgs, takes over all labor. This technology is held by its few owners. So the distribution of wealth is even more grossly distorted than it is today, and there are effectively no jobs for individual people to do.

To prevent violent uprisings, guaranteed incomes would need to be provided to all. So people’s basic needs would be provided for, but what would provide meaning to their lives? They could have children, who would have similarly bleak futures. There would likely be problems with drug and substance abuse.

Of course, there could be online games to play and, perhaps, opportunities to gamble. There could be supports for growth mindsets. There could be educational opportunities to pursue online and opportunities for athletic and artistic pursuits. IA (intelligent augmentation) could be life enriching for those who wanted to pursue such lives. It might also be possible to create unneeded jobs where people would pursue activities, using IA, that they thought were meaningful. Even today, many work in research jobs that are designed to address problems but never see any of these projects implemented. HM knows of this from his own personal experience.

The preceding paragraph applies to the advanced world. What about the undeveloped or underdeveloped worlds? Would they be ignored and allowed to suffer and die out? There could be an effort to bring these people up to the level of the developed world, and until this was accomplished it would likely provide additional jobs.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Alternative Futures

May 18, 2019

“Machines of Loving Grace: The Quest for Common Ground” by John Markoff provides an excellent review of the development of artificial intelligence, including the researchers and the funding agencies. And he does examine the differences between AI (Artificial Intelligence) and IA (Intelligent Augmentation). For those interested in technology and in the development and funding of both AI and IA, HM strongly recommends reading Markoff’s book. However, this post and the posts that immediately follow will examine the ramifications of Artificial Intelligence (AI) and Intelligence Augmentation (IA) in alternative futures.

The most nightmarish future is one in which AI becomes so powerful that it takes over. It either eliminates humanity or preserves humans as pets. However, it should be realized that it is possible that a benign future would result from a powerful AI. At the height of the Cold War a movie was released titled “Colossus: The Forbin Project.” The movie takes place when there was a realistic fear that a nuclear war would begin that would destroy all life on earth. Consequently, the United States created the Forbin Project to build Colossus. The purpose of Colossus was to prevent a nuclear war before it began or to conduct a war once it had begun. Shortly after they turn on Colossus, they find it acting strangely. They discover that it is interacting with the Soviet version of Colossus; the Soviets had found a similar need to develop such a system. The two systems communicated with each other and came to the conclusion that these humans are not capable of safely conducting their own affairs. In the movie the Soviets capitulate to the computers and the Americans try to resist, but ultimately fail. So the human species is saved by AI.

Currently there are more countries with missiles and nuclear weapons than there were at the time of this movie. So one might argue that there is even more of a need for such AI today than at the time of the movie. When one considers that the leader of one of these countries lives in his own reality and is prone to strike out whenever he feels threatened or provoked, there is even more of a need for such AI today.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Cyborgs

May 17, 2019

This post is motivated by material in an excellent book by John Markoff titled “Machines of Loving Grace: The Quest for Common Ground.” Cyborg stands for “cybernetic organism,” a term formulated by medical researchers in 1960 who were thinking about intentionally enhancing humans to prepare them for the exploration of space. They foresaw a new kind of creature—half human, half mechanism—capable of surviving in harsh environments.

It seems that even if Kurzweil were capable of uploading his mind into a computer, it would be a frustrating experience unless that computer were a cyborg. It is clear that the brain can issue motor commands to machines, so output would not be a problem. And suppose that Kurzweil successfully uploads his mind to this cyborg. The question remains what the phenomenal experience would be for Kurzweil or any human. Kurzweil’s fundamental concept is that his mind in the computer would give him extraordinary mental powers. He probably could do amazing computational exercises. But would he understand, in a phenomenal sense, what he was doing? He might even be able to write poetry, but would he understand the poetry? And what about his personality? Would he become more humanistic, or would he become mechanical? What about a soul and a sense of morality? What about one’s humanity? Would it be lost?

Would cyborgs be able to breed and produce new cyborgs? Presumably they would be immortal.

This seems like a great topic for science fiction. Unfortunately, HM does not read science fiction. Do any science fiction readers who also read this blog have any recommendations? If so, please supply them in the comments section.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Licklider

May 15, 2019

J.C.R. Licklider is a personal hero of HM and has appeared in previous healthy memory blog posts. When HM was a student and read Licklider’s article “Man-Computer Symbiosis,” he thought that this was the role computers should play in technology: a partnership in which the combination would be greater than the sum of its parts. Licklider, along with Taylor, also wrote “The Computer as a Communication Device” in 1968, which pointed to a future internet.

Unfortunately, the notion of man-computer symbiosis did not catch on. HM was frustrated by the emphasis on using computers to replace humans. True, there are jobs in which it is desirable to have computers play a solo role, but the real potential seemed to lie in creating a symbiotic relationship with computers. Nevertheless, the focus has remained on replacing humans. Late in his career HM wrote and co-authored articles on what he termed neo-symbiosis in an effort to resurrect the idea. Although he failed, he shall keep on trying.

HM was disappointed to learn while reading Markoff’s “Machines of Loving Grace: The Quest for Common Ground” that Licklider, like McCarthy, was confident that the advent of “strong” artificial intelligence, a machine capable of at least matching wits with a human, was likely to arrive relatively soon. He wrote that the period of man-machine “symbiosis” might last less than two decades, although he allowed that the arrival of truly smart machines capable of rivaling human thinking might not happen for a decade, or perhaps fifty years.

Humans must stay involved. Otherwise machines will take over and create knowledge that is inaccessible to humans. As was mentioned in a previous post, developers understand how they develop a neural net, but they are unable to understand how the net solves a given problem. Humans always need to maintain a supervisory role and regard computers as tools for them to use. Remember that Minsky once responded to a question about the significance of the arrival of artificial intelligence by saying, “If we’re lucky, maybe they’ll keep us as pets.”

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Douglas Engelbart

May 14, 2019

This post was motivated by an excellent book by John Markoff titled “Machines of Loving Grace: The Quest for Common Ground.” Wikipedia credits Doug Engelbart with creating the field of human-computer interaction. Doug ran the Augmentation Research Center at SRI International. He invented the computer mouse and contributed to the development of hypertext, networked computers, and precursors to graphical user interfaces. NLS, the oN-Line System developed by the Augmentation Research Center under Engelbart’s guidance with funding primarily from DARPA, demonstrated numerous technologies, most of which are now in widespread use; it included the computer mouse, bitmapped screens, and hypertext. Engelbart is also credited with a law, appropriately named after him, which holds that the intrinsic rate of human performance is exponential.

The following is taken from the Wikipedia article on Doug, “He reasoned that because the complexity of the world’s problems was increasing, and because any effort to improve the world would require the coordination of groups of people, the most effective way to solve problems was to augment human intelligence and develop ways of building collective intelligence. He believed that the computer, which was at the time thought of only as a tool for automation, would be an essential tool for future knowledge workers to solve such problems. He was a committed, vocal proponent of the development and use of computers and computer networks to help cope with the world’s increasingly urgent and complex problems. Engelbart embedded a set of organizing principles in his lab, which he termed “bootstrapping”. His belief was that when human systems and tool systems were aligned, such that workers spent time “improving their tools for improving their tools” it would lead to an accelerating rate of progress.”

Returning to Markoff’s book, Doug stumbled across an article by Vannevar Bush, who proposed a microfiche-based information retrieval system called Memex to manage all the world’s knowledge. Later Doug deduced that such a system could be built on the then newly available computers. He concluded that the time was right to build an interactive system to capture and organize knowledge in a way that would make it possible for small groups of people to create and collaborate more effectively. In effect, he was anticipating the World Wide Web, although it took time, resources, and the work of Tim Berners-Lee to see the full-scale implementation.

According to the Wikipedia article, he retired in 1988 because of a lack of interest in his ideas and of funding to pursue them. One wonders what he could have achieved if others had understood his ideas and provided the funding to support him.

Machines of Loving Grace

May 13, 2019

The title of this post is identical to the title of an excellent book by John Markoff. The subtitle is “The Quest for Common Ground.” The common ground referred to is that between humans and robots. The book covers, in excruciating detail, the development of artificial intelligence from the days of J.C.R. Licklider to 2015.

The book covers two lines of development: one from John McCarthy, which Markoff terms artificial intelligence (AI), and the other from Douglas Engelbart, which Markoff terms intelligence augmentation (IA). The former is concerned with making computers as smart as they can be, and the latter is concerned with using computers to augment human intelligence.

Markoff does not break down AI any further, but it needs to be broken down. AI has been used by psychologists to model human cognition, where the ultimate goal is to develop an understanding of human cognitive processes. This use of AI has been quite informative. In attempting to model problems such as human vision, psychologists realized that they had overlooked critical processes needed to explain perception. So AI can also be regarded as a tool for developing theories of psychological processes.

There are also two types of AI. One is known as GOFAI, “Good Old-Fashioned Artificial Intelligence,” in which computer code is explicitly written to accomplish the task. GOFAI was stymied for a while by the computational complexity it faced. Judea Pearl, the father of the murdered journalist Daniel Pearl, is a superb mathematician and logician. He developed Bayesian networks, which successfully dealt with this problem, and GOFAI proceeded further with this more tractable approach (enter “Pearl” into the search block of the healthy memory blog to learn more about this genius).

The other type is neural nets. Here a network is designed to learn how to accomplish a task. The catch is that the programmers do not know how the net solves the problem; they only know how to design a net that learns to solve it. Nightmare scenarios in which computers take over the world would be the product of neural nets. With GOFAI, by contrast, a misbehaving system could in principle be corrected by changing or deleting lines of code.
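To make the distinction concrete, here is a minimal sketch in Python. It is not taken from Markoff’s book; the spam-filter framing, the function names, and the toy data are invented purely for illustration. The first function is GOFAI-style: the decision rule is spelled out in code that a person can read, trace, and edit. The second trains a tiny perceptron, so the “rule” ends up stored in learned numeric weights that are much harder to inspect.

# A minimal, hypothetical sketch contrasting the two approaches (not from Markoff's book).

def gofai_is_spam(message):
    # GOFAI style: the decision rule is written explicitly and can be read,
    # traced, and edited line by line.
    suspicious_words = {"winner", "free", "prize"}
    return any(word in message.lower() for word in suspicious_words)

def train_perceptron(examples, labels, epochs=20, learning_rate=0.1):
    # Neural-net style (a single perceptron): the programmer specifies only
    # the learning procedure and the training data; the decision rule ends up
    # stored in numeric weights rather than in readable code.
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in zip(examples, labels):
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

# Toy usage with made-up features: [count of suspicious words, count of ordinary words].
examples = [[2, 0], [0, 3], [1, 1], [0, 0]]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam
print(gofai_is_spam("You are a WINNER"))   # True, and we can see exactly why
print(train_perceptron(examples, labels))  # the learned "knowledge" is just these numbers

The point of the sketch is only that the first procedure can be audited and patched line by line, while the second must be retrained or probed from the outside, which is why runaway-neural-net scenarios are harder to reason about.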

Intelligence augmentation (IA) is what HM promotes. Here computer code serves as a mental prosthetic to enhance human knowledge and understanding. IA, unless it were augmenting the intelligence of a mad scientist, would not constitute a threat to humanity.

It is true that AI is required for robots to perform tasks that are difficult, boring, or dangerous. But the goal of an AI system must be understood or undesired consequences might result.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Missing Data

April 25, 2019

There are many changes in the behavior and thinking of iGen’ers. The question is which changes are due to the iPhone and which are due to general changes in society. Dr. Twenge has offered her opinions in “iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood.” Unfortunately, all her data come from the United States. If she had included data from other advanced countries, one would have a better idea of the effects of specific cultural factors.

Income insecurity is a key problem for iGen’ers, and it is obvious why. Just consider the ridiculous cost of college. They also need to worry about medical costs and the cost of medical insurance. The United States is unique in being the only advanced country that does not provide universal, government-funded health insurance. The specific arrangements differ, but the bottom line is that medical costs are not a major concern for residents of the other advanced countries. Moreover, not only are medical costs lower in those countries than in the United States, but the results, the overall health of their populations, are better. College costs are also much, much lower, and in some cases free. It should further be noted that in worldwide surveys of happiness, the United States does not fare well; not surprisingly, it falls behind the other advanced countries.

There is a chapter on politics in the book, but HM did not bother to review it because both Dr. Twenge and the iGen’ers seemed completely oblivious to the problem. Free medical care and free or low-cost college education should be their primary concerns, but they were not mentioned. iGen’ers are not unique in being oblivious to what is happening in the rest of the world; this seems to be a problem for the vast majority of Americans.

There is a word that, once uttered, closes down discussion. That word is “socialism.” It is generally ignored that there is no precise definition of socialism. Communist countries called themselves socialist, but by having a Social Security system, the United States could also be called a socialist country.

The term is used to elicit fear; its goal is to shut down further discussion and thinking. But you need to consider what conditions are like in the advanced countries with free medical care and low-cost college education. One will likely find that many of these countries have more freedom than the United States. That is not to say that they are problem free, although many might appear to be. But they do have their priorities correct, with education and health at the top.

So realize that the cry of “socialism” is intended to engender fear and to shut down further discussion. Basically, they are trying to screw you. Don’t accept it; demand that the United States be comparable on these issues with the rest of the advanced world. This will be difficult. It will likely require tax increases, but tax increases with cost-effective benefits, and a massive reallocation of government expenditures. But the United States needs to have its priorities ordered correctly. Ask why we are treated differently from the citizens of the other advanced countries.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Understanding—and—Saving—iGen

April 24, 2019

The final chapter of iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, by Jean M. Twenge, Ph.D. offers some suggestions for saving iGen.

Not surprisingly, the first is to put down the phone. She recommends that parents put off giving their children a cell phone as long as possible. There really is no reason for an elementary school child to have a cell phone. By middle school, with kids in more activities and more likely to ride a bus, many parents buy phones for their kids’ convenience and safety. Here she recommends providing the child with a phone with limited functions, such as an old-school flip phone without Internet access or a touch screen.

She reminds readers that many tech CEOs strictly regulate their own children’s technology use. Steve Jobs’ children didn’t use the iPad, and he limited how much technology his children used at home. This restriction was common among tech executives, from a cofounder of Twitter to a former editor of Wired magazine. So the people who love technology and make a living from it are cautious about their children using it too much. Adam Alter wrote in his book “Irresistible,” “It seemed as if the people producing tech products were following the cardinal rule of drug dealing: Never get high on your own supply.”

The same goes for social media and electronic device use. They are linked to higher rates of loneliness, unhappiness, depression, and suicide risk, in both correlational and experimental data. Any readers of the healthy memory blog should be well aware of the dangers of social media.

A key rule she provides is that no one, adults included, should sleep within ten feet of a phone.

Dr. Twenge also argues that, given the benefits of in-person social interaction, parents should stop thinking that teens hanging out together are wasting their time. Electronic communication is a poor substitute for the emotional connections and social skills gained in face-to-face communication.

Physical exercise is a natural antidepressant.

In the conclusion she writes, “The devices they hold in their hands have both extended their childhoods and isolated them from true human interaction. As a result, they’re both the physically safest generation and the most mentally fragile. They are more focused on work and more realistic than Millennials, grasping the certainty that they’ll need to fight hard to make it. They’re exquisitely tolerant and have brought a new awareness of equality, mental health, and LGBT rights, leaving behind traditional structures such as religion. iGen’ers have a solid basis for success, with their practical nature and their inherent caution. If they can shake themselves out of the constant clutch of their phones and shrug off the heavy cloak of their fear, they can still fly. And the rest of us will be there, cheering them on.”

Inclusive: LGBT, Gender, and Race Issues in the New Age

April 23, 2019

The title of this post is identical to the title of a chapter in iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, by Jean M. Twenge, Ph.D. Dr. Twenge writes, “From LGBT identities to gender to race, iGen’ers expect equality and are often surprised, even shocked, to still encounter prejudice. At the same time, equality issues are far from resolved, creating divisions within iGen as well as generation gaps that can seem like unbridgeable gulfs. The equality revolution has been breathtaking but incomplete, leaving iGen to come of age after 2017, when issues around LGBT rights, gender, and race were suddenly back in contention.”

Television might have had some effect on iGen’ers’ attitudes on these topics. The oldest iGen’ers were starting preschool when “Will & Grace” (the first sitcom with a gay man as a central character) premiered in 1998 and were in elementary school when shows such as “Queer Eye for the Straight Guy” made being gay not just mainstream but fashionable. iGen teens grew up watching “Glee,” which featured several gay, lesbian, and transgender teen characters, and they saw numerous celebrities come out.

Dr. Twenge writes, “The 2000s and 2010s ushered in a sea change in attitudes toward lesbian, gay, bisexual, and transgender (LGBT) people. These are some of the largest and most rapid generational and time-period differences in existence. Even many conservative Republican iGen’ers now support same-sex marriage. Anthony Liveris, the vice president of the University of Pennsylvania College Republicans, said in 2013, ‘A true conservative should endorse empowering Americans to marry whom they love, not limit them.’ The vast majority of iGen’ers see no reason why two people of the same sex can’t get married.”

One iGen’er said, “My view of LGBT is the same as on other people having sex before marriage: I don’t particularly care. I wouldn’t do it, but it has nothing to do with me, it doesn’t affect me in the slightest, and I have no right to tell other people what to believe…I wouldn’t go to a protest for it or anything, but they can do what they want.”

In spite of these large changes in attitudes, a third of iGen’ers still have issues with same-sex sexuality, and one in four questions same-sex marriage. These young people often struggle to reconcile their inclusive upbringing with their religion’s viewpoint that homosexuality is wrong.

Not only attitudes, but actual behavior has changed. The number of young women who have had sex with at least one other woman has nearly tripled since the early 1990s. More men now report having had a male sexual partner as well.

Olympic decathlon champion Caitlyn Jenner’s transition from male to female in 2015 likely made iGen the first generation to understand what the term transgender means. Transgender is a relatively new term in popular understanding. Perhaps the simplest way to describe iGen’ers’ attitudes toward transgender people is as confused.
Dr. Twenge writes, “Issues around race are particularly salient for iGen’ers, who have been surrounded by racial diversity their entire lives. In 2015, most 12th graders said their high school was at least half another race, double the number in 1980. Three times more said their close friends were of other races.”

So although there has been a vast improvement in attitudes toward race and sexual orientation, problems remain. Particularly in the awarding of scholarships in this time of enormous costs, white students can feel that they lost possible support because it went to a minority student instead. There are still racial incidents on campus, although some of these originate off campus.

There are also microaggressions. Dr. Twenge writes that these are usually defined as unintentionally hurtful things said to people of color. But she notes that aggression is intentional, so the label is a misnomer. Nevertheless, it is possible to commit a microaggression unintentionally; in practice, a microaggression is defined by the receiver.

Moreover, microaggressions are not restricted to race. Telling a woman that she is doing well for a girl is a clear microaggression. But again, it is possible for someone to say this with good intentions.

Unfortunately, racial and cultural sensitivities can impinge upon free speech, which is guaranteed by the Constitution. The Pew Research Center found that 40% of Millennials and iGen’ers agreed that the government should be able to prevent people from making offensive statements about minority groups, compared to only 12% of the Silent generation, 24% of Boomers, and 27% of GenX’ers. Of course, free speech has its limits, but Dr. Twenge notes that more and more statements are deemed racist or sexist and more and more speakers are deemed “extreme.”

Some speakers are being disinvited from speaking, especially on college campuses. President Obama addressed the disinvitation issue by saying, “I think it’s a healthy thing for young people to be engaged and to question authority and to ask why this instead of that, to ask tough questions about social justice…Feel free to disagree with somebody, but don’t try to just shut them up…What I don’t want is a situation in which particular points of view that are presented respectfully and reasonably are shut down.”

As was mentioned in a previous post, this proclivity to avoid disagreement or alternative arguments does not augur well for either education or democracy.

Income Insecurity

April 21, 2019

The title of this post is identical to the first part of a title in iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, by Jean M. Twenge, Ph.D. The remainder of the title of Chapter 7 is “Working to Earn—but Not to Shop.”

Dr. Twenge writes, “iGen’ers are practical, forward looking, and safe, a far cry from the ‘You can be anything’ and ‘Follow your dreams’ Millennials.” iGen’ers make up the majority of traditional-age college graduates and will soon dominate the pool of entry-level talent. Dr. Twenge writes, “Given the key differences between iGen’ers and Millennials, the strategies that recruiters have been using to recruit and retain young employees may no longer work. The same is true for marketing: with a decidedly different psychological profile, selling to iGen’ers varies considerably from selling to Millennials. Businesses and managers need to take note: a new generation is arriving on your doorstep, and its members might not be what you expect.”

Interesting work and friends, the things that many Boomers and GenX’ers like most about their jobs, are not as important to iGen’ers. They just want a job. One iGen’er wrote, “We should all be less interested in jobs that are interesting or encourage creativity because they don’t pay anything. That’s why you see so many people my age 100k in debt working at a Starbucks.”

iGen’ers also think that work should not crowd out the rest of life. There is a declining belief that work will be central to their lives, and they do not want jobs that “take over my life.” Still, 55% of 2015 high school seniors agreed that they are willing to work overtime, up from 22% in 2004. And fewer iGen’ers said they would want to stop working if they had enough money. But iGen’ers have continued the Millennials’ trend toward saying they don’t want to work hard. So iGen’ers know that they may have to work overtime, but they believe that many of the jobs they’d want would require too much effort. They seem to be saying that it’s just too hard to succeed today.

iGen’ers feel pressure to get a college degree. When Dr. Twenge asked her students at San Diego State University how their lives differed from their parents’, most mentioned the necessity of a college degree. Many of their parents were immigrants who had worked at low-level jobs but still had been able to buy houses and provide for their families. Her students tell her that they have to get a college education to get the same things their parents got with a high school diploma or less. One iGen’er said, “My generation is stressed beyond belief because of college. When you graduate from high school, you are pushed to then go into a college, get your masters then have this awesome job. My father’s generation was different. He was born in the 70’s and despite never going to college he has a great paying job. That is not a reality for my generation. You are not even guaranteed a job after going to college. And once we graduate we are in debt up to our ears.”

The wages of Americans with just a high school education declined by 13% between 1990 and 2013, making a college education more crucial for staying middle class. At the same time, college has become more expensive. Due to cutbacks in state funds for education and other factors, college tuition has skyrocketed, forcing many students to take out loans. The average student graduating in 2016 carried $37,173 in debt upon graduation, up from $22,575 in 2005 and $9,727 in 1993.

This unbelievable escalation in college costs presents a clearly understandable obstacle to iGen’ers, but there are alternatives that are not mentioned.
These alternatives are discussed in the healthy memory blog post “Mindshift Resources.” Universities and colleges offer Massive Open Online Courses (MOOCs). These offer an alternative with certain advantages over typical coursework. Often these courses are free; usually payment is required only to receive college credit. However, autodidacts do not necessarily want college credits. There is a website, nopaymba.com, by Laura Pickard, who writes, “I started the No-Pay MBA website as a way of documenting my studies, keeping myself accountable, and providing a resource for other aspiring business students. The resources on this site are for anyone seeking a world-class business education using the free and low-cost tools of the internet. I hope you find them useful!” She explains how she got a business education equivalent to an MBA for less than 1/100th the cost of a traditional MBA. Even without a degree, HM would be impressed by a student who had acquired course knowledge in this manner. Autodidacts are devoted to their area of expertise. They have a true interest; they are probably not doing this as an instrumental act just to get a job.

Many young men apparently have a strong aversion to work. So what are they doing? They are playing video games. Twenty-five percent played video games three or more hours a day, and 10% played at least six hours a day. Video games take up an increasing amount of young men’s time, about eleven hours a week on average in 2015. So the question is: are young men playing video games because they are not working, or are they not working because they are playing video games? The latter might well be the case. Why work when you can live at home and play video games? Technological innovations have made leisure time more enjoyable, and for lower skilled workers with low market wages, it is now more attractive to take leisure.

Dr. Twenge writes, “Some iGen’ers might be staying away from work because they are convinced that what they do matters little in a rigged system.” One iGen’er writes, “If we want to have a successful life, we have to go to college, but college is really expensive and we need to either take out loans, that is just going to make our future more complicated and stressful so we try to get a job, but most well paying jobs you want need experience or an educational background, so we are often stuck in a minimum wage position, with part time hours because our employers don’t want to give us benefits, which means we still have to take out loans.”

Dr. Twenge writes that even with their doubts about themselves and their prospects, iGen’ers are still fairly confident about their eventual standard of living.

Sixty percent of 2015 high school seniors expected to earn more than their parents. Somehow, most iGen’ers think they will make it. HM was also pleased to learn that iGen’ers are less impressed by consumer goods and are less prone to buy them to impress their neighbors.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

More Safety and Less Community

April 20, 2019

We now return to iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, by Jean M. Twenge, Ph.D. The title of this post is the second part of the title of Chapter 6.

The chapter begins with a discussion of a student who has just finished her first year of community college, which she attended while living at home with her parents. She has a part-time job and isn’t taking any classes over the summer. She says, “I need my summer. If I didn’t have it, I’d go crazy.” Like many of her fellow iGen’ers, she doesn’t smoke, doesn’t drink, and has had limited experience with romantic relationships. She doesn’t think these things are safe. She says, “Going out and partying when you’re drunk, you’re in such an altered state of mind, you behave in ways that you never would when sober. There’s drunk driving—and people take advantage of you when you’re drunk. It’s not safe. You’re going to hurt yourself, or someone’s going to hurt you. It’s not my thing.”

Dr. Twenge notes that this iGen’er’s interest in safety extends beyond physical safety to a term she only recently learned from iGen: emotional safety. For example, some iGen’ers believe that high school is too young to have a romantic relationship, especially a sexual one. This iGen’er points to scientific research to back up her conclusions: with the release of oxytocin during sex, you form emotional connections to someone whether you like it or not. She thinks it dangerous to become emotionally reliant on someone, especially at an age when the brain is still developing. She is correct in that the prefrontal cortex, which is responsible for reasoning and executive control, continues to mature until the mid-twenties. There are probably people from earlier generations who wish they had had this knowledge at that age.

Statistics bear out this point. iGen teens are safer drivers. Fewer high school seniors get into car accidents, and fewer get tickets. This is a recent trend, beginning only in the early 2000s for tickets and in the mid-2000s for accidents. As recently as 2002, more than one out of three 12th graders had already gotten a ticket. By 2015 only one in five had.

A 2016 survey asked iGen teens what they wanted most out of a car, comparing them to Millennial young adults who recalled their preferences as teens. The feature iGen wanted much more than Millennials is safety.

iGen teens are also less likely to get into a car driven by someone who’s been drinking; the number who did so was cut in half, from 40% in 1991 to 20% in 2015.

Although iGen’ers tend to eschew alcohol, they are just as likely to use marijuana as Millennials were. The reason is that they tend to believe marijuana is safe. Some iGen’ers believe it is not just safe but beneficial. One iGen’er wrote, “Weed has been proven to provide many health benefits. It helps with pain, cancer, and many other illnesses. It can prevent people from getting addicted to other drugs that are way more harmful.” Nevertheless, iGen’ers remain cautious: even though they are more likely to see marijuana as safe, use hasn’t gone up.

There has also been a decline in fighting and a waning of sexual assault. In 1991, half of 9th graders had been in a physical fight in the last twelve months, but by 2015 only one in four had. The homicide rate among teens and young adults reached a forty-year low in 2014. The number of teens who carry a weapon to school is now only a third of what it was in the early 1990s. From 1992 to 2015 the rate of rape was nearly cut in half in the FBI’s Uniform Crime Reports.

iGen’ers’ risk aversion goes beyond their behavior to a general attitude of avoiding risk and danger. Eighth and tenth graders are now less likely to agree with the statement “I like to test myself every now and then by doing something a little risky.” Nearly half of teens found that appealing in the early 1990s, but by 2015 less than 40% did. They are also less likely to agree that “I get a real kick out of doing things that are a little dangerous.” As recently as 2011, a majority of teens agreed that they got a kick out of danger, but within a few years only a minority shared this view.

For the most part these changes can be regarded as improvements in attitudes and behavior. But Dr. Twenge notes that the flip side of iGen’s interest in safety is the idea that one should be safe not just from car accidents and sexual assaults but also from people who disagree with you. She provides as an example the most recent version of the “safe space,” now understood as a place where people can go to protect themselves from ideas they find offensive. She writes, “In recent years, safe spaces have become popular on college campuses as responses to visits by controversial speakers: if students are upset by a speaker’s message, they can come together in a separate location to console one another.”

A 2015 “Atlantic” piece by Greg Lukianoff and Jonathan Haidt on safe spaces and other campus controversies was titled “The Coddling of the American Mind” and was illustrated with a picture of a confused-looking toddler wearing a shirt that said “College.” Josh Zeitz wrote in “Politico Magazine,” “Yesterday’s student activists wanted to be treated like adults. Today’s want to be treated like children.”

Such an attitude precludes a full education. It also precludes an effective democracy.

The trend among iGen’ers is not to take an interest in education; they attend college because they feel they have to in order to get a better job. Dr. Twenge writes, “Teens’ interest in school took a sudden plunge beginning around 2012, with fewer students saying they found school interesting, enjoyable, or meaningful. The strong push for technology in the classroom seems to have assuaged students’ boredom during the 2000s, but by the 2010s little in the classroom could compete with the allure of the ever-tempting smartphone.”

Insecure: The New Mental Health Crisis

April 16, 2019

The title of this post is the same as the fourth chapter in iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, by Jean M. Twenge, Ph.D. The problems discussed in previous posts are important. The critical question is whether this increase in feelings of loneliness, depression, and anxiety has also been accompanied by changes in diagnosable depression and its most extreme outcome, suicide.

Since 2004 the National Survey on Drug Use and Health (NSDUH), which is conducted by the US Department of Health and Human Services, has screened US teens for clinical-level depression. The project uses trained interviewers to assess a nationally representative sample of more than 17,000 teens (ages 12 to 17) across the country every year. Participants hear questions through headphones and enter their answers directly into a laptop computer, ensuring privacy and confidentiality. The questions rely on the criteria for major depressive disorder documented in the Diagnostic and Statistical Manual (DSM) of the American Psychiatric Association, the gold standard for diagnosing mental health issues. The criteria include experiencing depressed mood, insomnia, fatigue, or markedly diminished pleasure in life every day for at least two weeks. The study is specifically designed to provide a benchmark for rates of mental illness among Americans, regardless of whether they’ve ever sought treatment.

The screening test showed a shocking rise in depression between 2010 and 2015, in which 56% more teens experienced a major depressive episode and 60% more experienced severe impairment.

So more people are experiencing not just symptoms of depression and feelings of anxiety but clinically diagnosable major depression. This is not a small issue, with more than one in nine teens and one in eleven young adults suffering from major depression. This strongly suggests that something is seriously wrong in the lives of American teens.

This increase in major depressive episodes is far steeper among girls, the gender more likely to overuse social media. By 2015, one in five teen girls had experienced a major depressive episode in the last year.

Major depression, especially if it’s severe, is the primary risk factor for suicide. Between 2009 and 2015, the number of high school girls who seriously considered suicide increased 43%. The number of college students who seriously considered suicide jumped 60% between 2011 and 2016.

Dr. Twenge mentions that a contributing factor is a shortfall in needed sleep. Many iGen’ers are so addicted to social media that they find it difficult to put down their phones and go to sleep when they should. More teens now sleep less than seven hours most nights. Sleep experts say that teens should get about nine hours of sleep a night, so a teen who is getting less than seven hours is significantly sleep deprived. Fifty-seven percent more teens were sleep deprived in 2015 than in 1991. Between 2012 and 2016 alone, 22% more teens failed to get seven hours of sleep.

So one way to improve mental health is to get more sleep. Dr. Twenge concludes the chapter as follows: “In other words, there is a simple, free way to improve mental health: put down the phone and do something else.”

In Person No More

April 15, 2019

The title of this post is the same as the third chapter in iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, by Jean M. Twenge, Ph.D. There is a second part to this title, which is “I’m with You, but Only Virtually.”

When Dr. Twenge asked one of her iGen teens what makes his generation different, he didn’t hesitate to answer: “I feel like we don’t party as much. People stay in more often. My generation lost interest in socializing in person—they don’t have physical get-togethers, they just text together, and they can just stay at home.”

College students were asked how many hours a week they spent at parties during their senior year of high school. In 2016, they said two hours a week, only a third of the time GenX students spent at parties in 1987. Perhaps iGen’ers just don’t like partying and prefer simply to hang out. That is not the case either: the number of teens who get together with their friends every day has been cut in half in just fifteen years, with especially steep declines recently.

College students in 2016, compared with college students in the late 1980s, spent four fewer hours a week socializing with their friends and three fewer hours a week partying, so seven hours a week less of in-person social interaction. This severe drop in getting out and getting together with friends occurred right when smartphones became popular and social media use really took off. Time spent with friends in person has been replaced by time spent with friends (and virtual friends) online.

Many malls across the country have closed. In activity after activity, iGen’ers are less social than Millennials, GenX’ers, and Boomers were at the same age. This change in activities outside the home doesn’t mean teens are staying home for wholesome family time; iGen’ers simply spend more leisure time alone. Dr. Twenge writes, “Although we can’t say for sure, it’s a good guess that this alone time is being spent online, on social media, streaming video, and texting. In short, iGen teens are less likely to take part in every single face-to-face social activity measured across four data sets of three different age groups. These fading interactions include everything from small-group or one-on-one activities, such as getting together with friends, to larger group activities such as partying.”

Instead, they are communicating electronically. The internet has taken over. Teens are Instagramming, Snapchatting, and texting with friends more, and seeing them in person less. She concludes, “For iGen’ers, online friendship has replaced offline friendship.”

Unfortunately, these trends are leading to decreases in mental health and happiness. According to Monitoring the Future (2013 to 2015), the activities that decrease happiness among 8th graders are video chat, computer games, texting, social networking websites, and Internet use. Meanwhile, there has been a decrease in the activities that increase happiness: sports or exercise, religious services, print media, and in-person social interaction.

One study asked college students with Facebook pages to complete short surveys on their phones over the course of two weeks—they’d get a text message with a link five times a day and report on their mood and how much they’d used Facebook. The more they used Facebook, the unhappier they later felt. Dr. Twenge concludes that feeling unhappy did not lead to more Facebook use; Facebook use caused unhappiness, but unhappiness did not cause Facebook use.

She reports that another study, of adults, found the same thing: the more people used Facebook, the lower their mental health and life satisfaction at the next assessment. But after they interacted with their friends in person, their mental health and life satisfaction improved.

A third study randomly assigned 1,095 Danish adults either to stop using Facebook for a week or to continue using it. At the end of the week, those who had taken a break from Facebook were happier, less lonely, and less depressed than those who had used Facebook as usual, and the differences were sizable: 36% fewer were lonely, 33% fewer were depressed, and 9% more were happy. Those who stayed off Facebook were also less likely to feel sad, angry, or worried.

The risk of unhappiness due to social media is the highest for the youngest teens. Eighth graders who spent ten or more hours a week on social networking sites were 56% more likely to be unhappy, compared to 39% for 10th graders and 14% for 12th graders.

A commercial for Facebook suggests that social media will help you feel less alone and surround you with friends every moment. Unfortunately, this is not true for the always online iGEN. Teens who visit social networking sites every day are actually more likely to agree “I often feel lonely,” “I often feel left out of things,” and “I often wish I had more good friends.”

Research has also revealed that teens who spend a lot of time looking at their phones aren’t just at a higher risk of depression; they’re also at an alarmingly higher risk for suicide. This is not to suggest that there is an alarming suicide epidemic, but suicide rates will likely increase.

Internet: Online Time—Oh, and Other Media, Too

April 14, 2019

The title of this post is the same as the second chapter in iGEN: Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, by Jean M. Twenge, Ph.D.

iGen-ers sleep with their phones. They put them under their pillows, on the mattress, or at least within arm’s reach of the bed. They check social media websites and watch videos right before they go to bed, and reach for their phones again as soon as they wake up in the morning. So their phone is the last thing they see before they go to sleep, and the first thing they see when they wake up. If they wake up in the middle of the night, they usually look at their phones.

Dr. Twenge notes, “Smartphones are unlike any other previous form of media, infiltrating nearly every minute of our lives, even when we are unconscious with sleep. While we are awake, the phone entertains, communicates, and glamorizes.” She writes, “It seems that teens (and the rest of us) spend a lot of time on phones—not talking but texting, on social media, online, and gaming (together, these are labeled ‘new media’). Sometime around 2011, we arrived at the day when we looked up, maybe from our own phones, and realized that everyone around us had a phone in his or her hands.”

Dr. Twenge reports, “iGen high school seniors spent an average of 2.25 hours a day texting on their cell phones, about 2 hours a day on the Internet, 1.5 hours a day on electronic gaming, and about a half hour on video chat. This sums to a total of about six hours a day with new media. This varies little based on family background; disadvantaged teens spent just as much or more time online as those with more resources. The smartphone era has meant the effective end of the Internet access gap.”

Here’s a breakdown of how 12th graders are spending their screen time from Monitoring the Future, 2013-2015:
Texting 28%
Internet 24%
Gaming 18%
TV 24%
Video Chat 5%

Dr. Twenge reports that in seven years (2008 to 2015) social media sites went from being a daily activity for half of teens to a daily activity for almost all of them. In 2015, 87% of 12th grade girls used social media sites almost every day, compared to 77% of boys.
HM was happy to see that eventually many iGen’ers see through the veneer of chasing likes—but usually only once they are past their teen years.

She writes that “social media sites go into and out of fashion, and by the time you read this book several new ones will probably be on the scene. Among 14-year-olds Instagram and Snapchat are much more popular than Facebook.” She notes that group video chat apps such as Houseparty were recently catching on with iGen, allowing them to do what they call “live chilling.”

Unfortunately, it appears that books are dead. In the late 1970s, a clear majority of teens read a book or a magazine nearly every day, but by 2015, only 16% did. E-book readers briefly seemed to rescue books: the number who said they read two or more books for pleasure bounced back in the late 2000s, but it sank again as iGen (and smartphones) entered the scene in the 2010s. By 2015, one out of three high school seniors admitted they had not read any books for pleasure in the past year, three times as many as in 1976.

iGen teens are much less likely to read books than their Millennial, GenX, and Boomer predecessors. Dr. Twenge speculates that a reason for this is that books aren’t fast enough. For a generation raised to click on the next link or scroll to the next page within seconds, books just don’t hold their attention. There are also declines among iGen’ers in reading magazines and newspapers.

SAT scores have declined since the mid-2000s, especially in writing (a 13-point decline since 2006) and critical reading (a 13-point decline since 2005).

Dr. Twenge raises the fear that iGen and the generations that follow will never learn the patience necessary to delve deeply into a topic, and that the US economy will fall behind as a result.

In No Hurry: Growing Up Slowly

April 13, 2019

The title of this post is identical to the title of the first chapter in iGEN: “Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood” by Jean M. Twenge, Ph.D. Excerpts from this chapter follow.

iGEN teens are less likely to go out without their parents. Dr. Twenge writes that this trend began with Millennials and then accelerated at a rapid clip with iGen’ers. 12th graders in 2015 are going out less often than 8th graders did as recently as 2009. 18-year-olds are now going out less often than 14-year-olds did just six years prior.

Dr. Twenge writes that iGen’ers are less likely to do adult things such as going out without their parents and having sex, and she asks whether this trend of growing up more slowly is a good thing or a bad thing. She uses an approach called life history theory to provide insights. Life history theory states that how fast teens grow up depends on where and when they are raised, so developmental speed is an adaptation to a cultural context.

She writes, “Today’s teens follow a slow life strategy, common in times and places where families have fewer children and cultivate each child longer and more intensely. Life history theory explicitly notes that slow or fast life strategies are not necessarily good or bad; they just are. Nearly all of the generational shifts in this chapter and the rest appear across different demographic groups. The studies we’re drawing from here are nationally representative, meaning the teens reflect the demographics of the United States. Every group is included. Even within specific groups, the trends consistently appear; they are present in working-class homes as well as upper-middle-class ones, among minorities as well as whites, among girls as well as boys, in big cities, suburbs, and small towns, and all across the country. That means they are not isolated to the white, upper-middle-class teens whom journalists often wring their hands over. Youths of every racial group, region, and class are growing up more slowly.”

When HM was a teen, one of the major milestones on the way to adulthood was getting a driver’s license. Nearly all Boomer high school students had their driver’s license by the spring of their senior year; by 2015 only 72% did. So more than one out of four iGen’ers did not have a driver’s license by the time they graduated from high school.

Another GenX memory is being a latchkey kid: they walked home from school and used their key to enter an empty house, because their parents were still at work.

iGen’ers are also less likely to have jobs. In the late 1970s only 22% of high school seniors didn’t work for pay at all during the school year. By the early 2010s, twice as many (44%) didn’t. The number of 8th graders who work for pay has been cut in half.

With fewer teens working, one might think that more would get an allowance to buy the things they want. However, fewer iGen’ers get an allowance; when they need money, they just ask their parents for it. It’s another example of 18-year-olds acting like 15-year-olds: just like children and young adolescents, one out of five iGen high school seniors asks their parents for what they want instead of managing their own cash flow.

A positive fact about iGen’ers is that they are much less likely to drink, especially to binge drink. However, iGen’ers smoke pot more often than the Millennials that preceded them.

Some have concluded that iGen’ers are more responsible. A 2016 Post article trumpeted that “Today’s Teens are Way Better Behaved than You Were.” Dr. Twenge thinks it’s more informative to employ the terms of life history theory: “Teens have adopted a slow life strategy, perhaps due to smaller families and the demands wrought by increasing income inequality. Parents have the time to cultivate each child to succeed in the newly competitive economic environment, which might take twenty-one years when it once took sixteen. The cultural shift toward individualism may also play a role: childhood and adolescence are uniquely self-focused stages, so staying in them longer allows more cultivation of the individual self. With fewer children and more time spent with each, each child is noticed and celebrated. Cultural individualism is connected to slower developmental speeds.”

Perhaps this slower pace of development explains the 2014 emergence of the neologism “adulting,” which means taking care of one’s responsibilities. An Adulting School in Maine offers classes for young adults teaching them how to perform tasks such as managing finances and folding laundry.

Dr. Twenge ends this chapter as follows: “No matter what the reason, teens are growing up more slowly, eschewing adult activities until they are older. This creates a logical question: If teens are working less, spending less time on homework, going out less, and drinking less, what are they doing? For a generation called iGen, the answer is obvious: look no further than the smartphones in their hands.”

To which we turn in the next post.

Regardless of your age, how iGEN are you?

April 12, 2019

This post is taken from iGEN: “Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood” by Jean M. Twenge, Ph.D.

Take this 15-item quiz to find out how “iGEN” you are. Answer each question with a “yes” or “no.”

_____1. In the past 24 hours, did you spend at least an hour total texting on a cell phone?
_____2. Do you have a Snapchat account?
_____3. Do you consider yourself a religious person?
_____4. Did you get your driver’s license by the time you turned 17?
_____5. Do you think same-sex marriage should be legal?
_____6. Did you ever drink alcohol (more than a few sips) by the time you turned 16?
_____7. Did you fight with your parents a lot when you were a teen?
_____8. Were more than one-third of the other students at your high school a different race than you?
_____9. When you were in high school, did you spend nearly every weekend night out with your friends?
_____10. Did you have a job during the school year when you were in high school?
_____11. Do you agree that safe spaces and trigger warnings are good ideas and that efforts should be made to reduce microaggression?
_____12. Are you a political independent?
_____13. Do you support the legalization of marijuana?
_____14. Is having sex without much emotion involved desirable?
_____15. When you were in high school, did you feel left out and lonely fairly often?

SCORING: Give yourself 1 point for answering “yes” to questions 1, 2, 5, 8, 11, 12, 13, 14, and 15. Give yourself 1 point for answering “no” to questions 3, 4, 6, 7, 9, and 10. The higher your score, the more iGEN you are in your behaviors, attitudes, and beliefs.
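For readers who would rather tally the quiz programmatically, here is a minimal Python sketch of the scoring rule just described. The function name and the sample answers are hypothetical; only the question numbers and the yes/no point assignments come from the quiz itself.

# A minimal, hypothetical sketch of the scoring rule described above.
YES_SCORED = {1, 2, 5, 8, 11, 12, 13, 14, 15}  # 1 point for a "yes"
NO_SCORED = {3, 4, 6, 7, 9, 10}                # 1 point for a "no"

def igen_score(answers):
    # answers maps question number (1-15) to "yes" or "no"
    score = 0
    for question, answer in answers.items():
        if question in YES_SCORED and answer == "yes":
            score += 1
        elif question in NO_SCORED and answer == "no":
            score += 1
    return score

# Hypothetical respondent: texts heavily, has Snapchat, is religious, drove at 17.
sample_answers = {1: "yes", 2: "yes", 3: "yes", 4: "yes"}
print(igen_score(sample_answers))  # prints 2: points for questions 1 and 2 only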

iGEN

April 11, 2019

iGEN is the title of a new book by Jean M. Twenge, Ph.D. The subtitle is “Why Today’s Super-Connected Kids are Growing up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood.” iGEN is the smartphone generation. HM is a member of the Boomer generation. Generation X followed the Boomers, beginning around 1964. The Millennials were the generation born in the 1980s and early 1990s. Around 2012, Dr. Twenge noticed large, abrupt shifts in teens’ behavior and emotional states.

The iGEN generation was born in 1995 and later. They grew up with cell phones, had an Instagram page before they started high school, and could not remember a time before the internet. The oldest members of iGEN were early adolescents when the iPhone was introduced in 2007 and high school students when the iPad was introduced in 2010. The i in the names of these devices stands for Internet, which was commercialized in 1995, so this generation is named after the iPhone. According to a fall 2015 marketing survey, two out of three US teens owned an iPhone. A 17-year-old interviewed in American Girls said, “You have to have an iPhone. It’s like Apple has a monopoly on adolescence.”

iGEN is the first generation for whom internet access has been constantly available, right there in their hands. Whether their smartphone is a Samsung and their tablet a Kindle, these young people are all iGen’ers. Even lower-income teens from disadvantaged backgrounds spend just as much time online as those with more resources. The average teen checks her phone more than eighty times a day.

Dr. Twenge writes, “technology is not the only change shaping this generation. The i in iGEN represents the individualism its members take for granted, a broad trend that grounds their bedrock sense of equality as well as their reaction to traditional social rules. It captures the income inequality that is creating a deep insecurity among iGen’ers, who worry about doing the right things to become financially successful, to become a ‘have’ rather than a ‘have not.’ Due to these influences and many others, iGEN is distinct from every previous generation in how its members spend their time, how they behave, and their attitudes toward religion, sexuality, and politics. They socialize in completely new ways, reject once sacred social taboos, and want different things from their lives and careers. They are obsessed with safety and fearful of their economic futures, and they have no patience for inequality based on gender, race, or sexual orientation. They are at the forefront of the worst mental health crisis in decades, with rates of teen depression and suicide skyrocketing since 2011. Contrary to the prevalent idea that children are growing up faster than previous generations did, iGen’ers are growing up more slowly: 18-year-olds now act like 15-year-olds used to, and 13-year-olds like 10-year-olds. Teens are physically safer than ever, yet they are more mentally vulnerable.”

Dr. Twenge draws on four large, nationally representative surveys of 11 million Americans conducted since the 1960s and identifies ten important trends shaping iGen’ers:

The extension of childhood into adolescence.

The amount of time they are really spending on their phones—and what that has replaced.

The decline in in-person social interaction.

The sharp rise in mental health issues.

The decline in religion.

The interest in safety and the decline in civic involvement.

New attitudes towards work.

New attitudes toward sex, relationships, and children.

Acceptance, equality and free speech debates.

Independent political views.

Not all these changes are the result of new technology. It is interesting to consider which changes, and to what extent, are the result of new technology, and what is responsible for the others.

Future posts on these issues will follow.

Get A Life!

April 9, 2019

This is the final post of a series of posts based on an important book by Roger McNamee titled: “Zucked: Waking up to the Facebook Catastrophe.” Perhaps the best way of thinking about Facebook and related problems is via Nobel Laureate Daniel Kahneman’s Two System View of Cognition. System 1 is fast and emotional. Beliefs are usually the result of System 1 processing. System 2 is slow, and is what we commonly regard as thinking.

The typical Facebook user is using System 1 processing almost exclusively. He is handing his life over to Facebook. The solution is to Get a Life and take your life back from Facebook.

The easiest way to do this is to quit Facebook cold turkey. However, many users have personal reasons for using Facebook. They should take back their lives by minimizing their use of Facebook.

First of all, ignore individual users unless you know who they are. Ignore likes and individual opinions unless you know and can evaluate the individual. Remember what they say about opinions: “they’re like a—h—-s, everybody has one.” The only opinions you should care about are from responsible polls done by well-known pollsters.

You should be able to find useful sources on your own without Facebook. Similarly you can find journalists and authors on your own without Facebook. Spend time and think about what you read. Is the article emotional? Is the author knowledgeable?

If you take a suggestion from Facebook, regard that source skeptically.

Try to communicate primarily via email and avoid Facebook as much as possible.

When possible, in person meetings are to be preferred.

In closing, it needs to be said that Facebook use leads to unhealthy memories. And, just as in the case of Trump voters, HM predicts an increased incidence of Alzheimer’s and dementia among heavy Facebook users.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

What’s Being Done

April 8, 2019

This is the twelfth post based on an important book by Roger McNamee titled: “Zucked: Waking up to the Facebook Catastrophe.” The remainder of the book, and that remainder is large, discusses what is being done to remedy these problems. So people are concerned. One approach is to break up monopolies. But that approach ignores the basic problem. Facebook is taking certain actions, one of which, encryption, is definitely bad. Encryption would simply allow Facebook to hide its crimes.

One idea, which is not likely but has received undeserved attention, is to monetize users’ data so that Facebook would have to pay for its use. Unfortunately, this has likely provided users with hopes of future riches for their Facebook use. Although this is indeed how Facebook makes its money, it is unlikely to want to share it with users. Advertisements are pervasive in the world. Although we can try to ignore them in print media, advertisements need to be sat through on television unless one wants to record everything and fast-forward through the ads later.

Moreover, there are users, and HM is one of them, who want ads presented on the basis of online behavior. Shopping online is much more efficient than conventional shopping, and ads based on interests users have shown online provide more useful information. Amazon’s suggestions are frequently very helpful.

The central problem with Facebook is the artificial intelligence and algorithms that bring users of like mind together and foster hate and negative emotions. This increases polarization and the hatred that accompanies it.

Does Facebook need to be transparent and ask whether users want to be sent off to the destinations the algorithms and AI have chosen? Even when explanations are provided, polarization might still be enhanced, as birds of a feather do tend to flock together on their own, but perhaps with less hate and extremism. There are serious legal and freedom-of-speech problems that need to be addressed.

Tomorrow’s post provides a definitive answer to this problem.

Damaging Effects on Public Discourse

April 7, 2019

This is the eleventh post based on an important book by Roger McNamee titled: “Zucked: Waking up to the Facebook Catastrophe.” In the MIT Technology Review, Professor Zeynep Tufekci explained why the impact of internet platforms is so damaging and hard to fix. “The problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone. It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities and we seek approval from our like-minded peers. We bond with our team by yelling at the fans on the other one. In sociology terms, we strengthen our feeling of ‘in-group’ belonging by increasing our distance from and tension with the ‘out-group’—us versus them. Our cognitive universe isn’t an echo chamber, but our social one is. That is why the various projects for fact-checking claims in the news, while valuable, don’t convince people. Belonging is stronger than facts.” To this HM would add “beliefs are stronger than facts.” Belonging leads to believing what the group believes. As has been written in previous healthymemory blog posts, believing is a System One Process in Kahneman’s Two-process view of cognition. And System One processing is largely emotional. It shuts out System Two thinking and promotes stupidity.

Facebook’s scale presents unique threats for democracy. These threats are both internal and external. Although Zuck’s vision of connecting the world and bringing it together may be laudable in intent, the company’s execution has had much the opposite effect. Facebook needs to learn how to identify emotional contagion and contain it before there is significant harm. If it wants to be viewed as a socially responsible company, it may have to abandon its current policy of openness to all voices, no matter how damaging. Being socially responsible may also require the company to compromise its growth targets. In other words, being socially responsible will adversely affect the bottom line.

Are you in Control?

April 6, 2019

This is the tenth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Facebook wants you to believe that you are in control. But this control is an illusion. Maintaining this illusion is central to every platform’s success, but with Facebook, it is especially disingenuous. Menu choices limit user actions to things that serve Facebook’s interest. Facebook’s design teams exploit what are known as “dark patterns” in order to produce desired outcomes. Wikipedia defines a dark pattern as “a user interface that has been carefully crafted to trick users into doing things.” Facebook tests every pixel to ensure it produces the desired response. For example: Which shade of red best leads people to check their notifications? For how many milliseconds should notification bubbles appear in the bottom left before fading away to most effectively keep users on site? What measure of closeness should be used to recommend new friends for you to “add”?
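
The pixel-level testing described above is, at bottom, ordinary A/B testing: show two design variants to large samples and keep whichever produces the higher click-through rate. The sketch below uses invented numbers and a standard two-proportion z-test; it is only an illustration of the idea, not Facebook’s actual tooling.

```python
# Illustrative A/B comparison of two design variants (e.g., two shades of red).
# The impression and click counts are invented.
from math import sqrt

def two_proportion_z(clicks_a, shown_a, clicks_b, shown_b):
    """z statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
    pooled = (clicks_a + clicks_b) / (shown_a + shown_b)
    se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
    return (p_a - p_b) / se

# Variant A: 5,200 clicks from 100,000 impressions; Variant B: 5,000 from 100,000.
z = two_proportion_z(5200, 100_000, 5000, 100_000)
print(f"z = {z:.2f}")  # |z| > 1.96 would count as significant at the usual 5% level
```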

With two billion users, the cost of testing every possible configuration is small. And Facebook has taken care to make its terms of service and privacy settings hard to find and nearly impossible to understand. Facebook does place a button on the landing page to provide access to the terms of service, but few people click on it. The button is positioned so that hardly anyone even sees it. And those who do see it have learned since the early days of the internet that terms of service are long and incomprehensible, so they don’t press it either.

They also use bottomless bowls. News Feeds are endless. In movies and television, scrolling credits signal to the audience that it is time to move on. They provide a “stopping cue.” Platforms with endless news feeds and autoplay remove that signal to ensure that users maximize their time on site for every visit. They also use autoplay on their videos. Consequently, millions of people are sleep deprived from binging on videos, checking Instagram, or browsing on Facebook.

Notifications exploit one of the weaker elements of human psychology. They exploit an old sales technique called the “foot in the door” strategy, which lures the prospect with an action that appears to be low cost but sets in motion a process leading to bigger costs. We are not good at forecasting the true cost of engaging with a foot-in-the-door strategy. We behave as though notifications are personal to us, completely missing that they are automatically generated, often by an algorithm tied to an artificial intelligence that has concluded that the notification is just the thing to provoke an action that will serve Facebook’s economic interests.

We humans have a need for approval. Everyone wants to feel approved of by others. We want our posts to be liked. We want people to respond to our texts, emails, tags, and shares. This need for social approval is what made Facebook’s Like button so powerful. By controlling how often a user experiences social approval, as evaluated by others, Facebook can get that user to do things that generate billions of dollars in economic value. This makes sense because the currency of Facebook is attention.

Social reciprocity is a twin of social approval. When we do something for someone else, we expect them to respond in kind. Similarly, when a person does something for us, we feel obligated to reciprocate. So when someone follows us, we feel obligated to follow them. If we receive an invitation to connect from a friend, we may feel guilty if we do not reciprocate the gesture and accept it.

Fear of Missing Out (FOMO) is another emotional trigger. This is why people check their smartphone every free moment, perhaps even when they are driving. FOMO also prevents users from deactivating their accounts. And when users do decide to deactivate, the process is difficult, with frequent attempts to keep the user from deactivating.

Facebook, along with the other platforms, works very hard to grow its user count but operates with little, if any, regard for users as individuals. The customer service department is reserved for advertisers. Users are the product, at best, so there is no one for them to call.

It Gets Even Worse

April 5, 2019

This is the ninth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” This post picks up where the immediately preceding post, “Amplifying the Worst Social Behavior,” stopped. Users sometimes adopt an idea suggested by Facebook or by others on Facebook as their own. For example, if someone is active in a Facebook Group associated with a conspiracy theory and then stops using the platform for a time, Facebook will do something surprising when they return. It might suggest other conspiracy theory Groups to join because they share members with the first conspiracy Group. Because conspiracy theory Groups are highly engaging, they are likely to encourage reengagement with the platform. If you join the Group, the choice appears to be yours, but the reality is that Facebook planted the seed. This is because conspiracy theories are good for Facebook, not for you.

Research indicates that people who accept one conspiracy theory have a high likelihood of accepting a second one. The same is true of inflammatory disinformation. Roger accepts the fact that Facebook, YouTube, and Twitter have created systems that modify user behavior. Roger writes, “They should have realized that global scale would have an impact on the way people use their products and would raise the stakes for society. They should have anticipated violations of their terms of service and taken steps to prevent them. Once made aware of the interference, they should have cooperated with investigators. I could no longer pretend that Facebook was a victim. I cannot overstate my disappointment. The situation was much worse than I realized.”

Apparently, the people at Facebook live in their own preference bubble. Roger writes, “Convinced of the nobility of their mission, Zuck and his employees reject criticism. They respond to every problem with the same approach that created the problem in the first place: more AI, more code, more short-term fixes. They do not do this because they are bad people. They do this because success has warped their perception of reality. To them, connecting 2.2 billion people is so obviously a good thing, and continued growth so important, that they cannot imagine that the problems that have resulted could be in any way linked to their designs or business decisions. As a result, when confronted with evidence that disinformation and fake news spread over Facebook influenced the Brexit referendum and the election of Putin’s choice in the United States, Facebook took steps that spoke volumes about the company’s world view. They demoted publishers in favor of family, friends, and Groups on the theory that information from those sources would be more trustworthy. The problem is that family, friends, and Groups are the foundational elements of filter and preference bubbles. Whether by design or by accident, they share the very disinformation and fake news that Facebook should suppress.”

Amplifying the Worst Social Behavior

April 4, 2019

This is the eighth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Roger writes, “The competition for attention across the media and technology spectrum rewards the worst social behavior. Extreme views attract more attention, so platforms recommend them. News Feeds with filter bubbles do better at holding attention than News Feeds that don’t have them. If the worst thing that happened with filter bubbles was that they reinforced preexisting beliefs, they would be no worse than many other things in society. Unfortunately, people in a filter bubble become increasingly tribal, isolated, and extreme. They seek out people and ideas that make them comfortable.”

Roger continues, “Social media has enabled personal views that had previously been kept in check by social pressure—white nationalism is an example—to find an outlet.” This leads one to ask whether Trump would have been elected via the Electoral College if it weren’t for social media. Trump’s base consists of Nazis and white supremacists and constitutes more than a third of the citizens. Prior to the election, HM would never have believed that this was the case. Now he believes, and is close to being clinically depressed.

Continuing on, “Before the platforms arrived, extreme views were often moderated because it was hard for adherents to find one another. Expressing extreme views in the real world can lead to social stigma, which also keeps them in check. By enabling anonymity and/or private Groups, the platforms removed the stigma, enabling like-minded people, including extremists, to find one another, communicate, and, eventually, to lose the fear of social stigma.”

Once a person identifies with an extreme position on an internet platform, that person will be subject to both filter bubbles and human nature. There are two types of bubbles. Filter bubbles are imposed by others, whereas a preference bubble is a choice, although the user might be unaware of this choice. By definition, a preference bubble takes users to a bad place, and they may not even be conscious of the change. Both filter bubbles and preference bubbles increase time on site, which is a driver of revenue. Roger notes that in a preference bubble, users create an alternative reality, built around values shared with a tribe, which can focus on politics, religion, or something else. “They stop interacting with people with whom they disagree, reinforcing the power of the bubble. They go to war against any threat to their bubble, which for some users means going to war against democracy and legal norms. They disregard expertise in favor of voices from their tribe. They refuse to accept uncomfortable facts, even ones that are incontrovertible. This is how a large minority of Americans abandoned newspapers in favor of talk radio and websites that peddle conspiracy theories. Filter bubbles and preference bubbles undermine democracy by eliminating the last vestiges of common ground among a huge percentage of Americans. The tribe is all that matters, and anything that advances the tribe is legitimate. You see this effect today among people whose embrace of Donald Trump has required them to abandon beliefs they held deeply only a few years earlier. Once again, this is a problem that internet platforms did not invent. Existing issues in society created a business opportunity that platforms exploited. They created a feedback loop that reinforces and amplifies ideas with a speed and at a scale that are unprecedented.”

Clint Watts, in his book “Messing with the Enemy,” makes the case that in a preference bubble, facts and expertise can be the core of a hostile system, an enemy that must be defeated. “Whoever gets the most likes is in charge; whoever gets the most shares is an expert. Preference bubbles, once they’ve destroyed the core, seek to use their preference to create a core more to their liking, specially selecting information, sources, and experts that support their alternative reality rather than the real physical world.” Roger writes, “The shared values that form the foundation of our democracy proved to be powerless against the preference bubbles that have evolved over the past decade. Facebook does not create preference bubbles, but it is the ideal incubator for them. The algorithms ensure that users who like one piece of disinformation will be fed more disinformation. Fed enough disinformation, users will eventually wind up first in a filter bubble and then in a preference bubble. If you are a bad actor and you want to manipulate people in a preference bubble, all you have to do is infiltrate the tribe, deploy the appropriate dog whistles, and you are good to go. That is what the Russians did in 2016 and what many are doing now.”

The Effects Facebook Has on Users

April 3, 2019

This is the seventh post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Roger writes, “It turns out that connecting 2.2 billion people on a single network does not naturally produce happiness at all. It puts pressure on users, first to present a desirable image, then to command attention in the form of Likes or shares from others. In such an environment, the loudest voices dominate.” This can be intimidating. Consequently, we follow the human tendency to organize into clusters and tribes. This begins with people who share our beliefs. Most often this consists of family, friends, and Facebook Groups to which we belong. Facebook’s News Feed encourages every user to surround him- or herself with like-minded people. Notionally, Facebook allows us to extend our friends network to include a highly diverse community, but many users stop following people with whom they disagree. Usually it feels good when we cut off someone who provokes us, and lots of people do so. Consequently, friends lists become more homogeneous over time. Facebook amplifies this effect with its approach to curating the News Feed. Roger writes, “When content is coming from like-minded family, friends, or Groups, we tend to relax our vigilance, which is one of the reasons why disinformation spreads so effectively on Facebook.”

An unfortunate by-product of giving users what they want is filter bubbles. And unfortunately, there is a high correlation between the presence of filter bubbles and polarization. Roger writes, “I am not suggesting that filter bubbles create polarization, but I believe they have a negative impact on public discourse and politics because filter bubbles isolate the people stuck in them. Filter bubbles exist outside Facebook and Google, but gains in attention for Facebook and Google are increasing the influence of their filter bubbles relative to others.”

Although practically everyone on Facebook has friends and family, many also are members of Groups. Facebook allows Groups on just about anything, including hobbies, entertainment, teams, communities, churches, and celebrities. Many Groups are devoted to politics, and they cross the full spectrum. Groups enable easy targeting by advertisers, so Facebook loves them. And bad actors like them for the same reason. Cass Sunstein, who was the administrator of the White House Office of Information and Regulatory Affairs for the first Obama administration, conducted research indicating that when like-minded people discuss issues, their views tend to get more extreme over time. Jonathan Morgan of Data for Democracy has found that as few as 1 to 2 percent of a group can steer the conversation if they are well-coordinated. Roger writes, “That means a human troll with a small army of digital bots—software robots—can control a large, emotional Group, which is what the Russians did when they persuaded Groups on opposite sides of the same issue—like pro-Muslim groups and anti-Muslim groups—to simultaneously host Facebook events in the same place at the same time, hoping for a confrontation.”
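
To make the 1-to-2-percent claim concrete, here is a toy, voter-style simulation rather than Morgan’s actual method: a small committed faction never changes its view, everyone else copies the view of a randomly chosen member each round, and the committed few gradually pull the whole group their way. All numbers are invented.

```python
# Toy simulation: a committed 2% never changes its view; everyone else copies
# the view of a randomly chosen member each round. Purely illustrative.
import random

random.seed(1)
N = 500                      # group size
committed = set(range(10))   # 2% of the group, permanently holding view 1
views = [1 if i in committed else 0 for i in range(N)]

for round_no in range(1, 301):
    new_views = views[:]
    for i in range(N):
        if i in committed:
            continue                 # coordinated members never budge
        j = random.randrange(N)      # copy a randomly chosen member's view
        new_views[i] = views[j]
    views = new_views
    if round_no % 100 == 0:
        share = sum(views) / N
        print(f"round {round_no}: {share:.0%} now hold the committed minority's view")
```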

Roger notes that Facebook asserts that users control their experience by picking the friends and sources that populate their News Feed, when in reality an artificial intelligence, algorithms, and menus created by Facebook engineers control every aspect of that experience. Roger continues, “With nearly as many monthly users as there are notional Christians in the world, and nearly as many daily users as there are notional Muslims, Facebook cannot pretend its business model does not have a profound effect. Facebook’s notion that a platform with more than two billion users can and should police itself also seems both naive and self-serving, especially given the now plentiful evidence to the contrary. Even if it were “just a platform,” Facebook has a responsibility for protecting users from harm. Deflection of responsibility has serious consequences.”

Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks

April 2, 2019

This is the sixth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” In 2014, Facebook published a study called “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks.” The experiment entailed manipulating the balance of positive and negative messages in the News Feeds of nearly seven hundred thousand users to measure the influence of social networks on mood. The internal report claimed the experiment provided evidence that emotions can spread over its platform. Facebook did not get prior informed consent or provide any warning. Facebook made people sad just to see if it could be done. Facebook was faced with strong criticism for this experiment. Zuck’s right-hand lady, Sheryl Sandberg, said: “This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated. And for that communication we apologize. We never meant to upset you.”

Note that she did not apologize for running a giant psychological experiment on users. Rather, she claimed that experiments like this are normal “for companies.” So she apologized only for the communication. Apparently running experiments on users without prior consent is a standard practice at Facebook.

Filter Bubbles

April 1, 2019

This is the fifth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Adults get locked into filter bubbles. Wikipedia defines filter bubbles as “a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history.”

Filter bubbles are not unique to internet platforms. They can also be found in any journalistic medium that reinforces the preexisting beliefs of its audience while suppressing any stories that might contradict them, such as Fox News. In the context of Facebook, filter bubbles have several elements. In its endless pursuit of engagement, Facebook’s AI and algorithms feed users a steady diet of content similar to what has engaged us most in the past. Usually that is content that we “like.” Each click, share, and comment helps Facebook refine its AI. With 2.2 billion people clicking, sharing, and commenting every month—1.47 billion every day—Facebook’s AI knows more about users than the users can possibly imagine. All that data in one place is a target for bad actors, even if it were well protected. But Roger writes that Facebook’s business model is to give the opportunity to exploit that data to just about anyone who is willing to pay for the privilege.
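
None of Facebook’s actual ranking code is public, so the following is only a sketch of the general mechanism the chapter describes: a feed that scores items by how much the user has engaged with similar content will, run repeatedly, narrow what the user sees. The topics, headlines, and weights are invented.

```python
# Sketch of an engagement-driven feed: items on topics the user has engaged
# with most are ranked first, so the feed narrows over time. Invented data.
from collections import Counter

def rank_feed(candidates, engagement):
    """Order (topic, headline) pairs by past engagement with the topic."""
    return sorted(candidates, key=lambda item: engagement[item[0]], reverse=True)

engagement = Counter({"outrage-politics": 7, "cute-animals": 2, "science": 1})
candidates = [
    ("science", "New exoplanet found"),
    ("outrage-politics", "You won't BELIEVE what they said"),
    ("cute-animals", "Puppy learns to surf"),
    ("outrage-politics", "The other side is at it again"),
]

for topic, headline in rank_feed(candidates, engagement):
    print(topic, "->", headline)
# Each click on the top items increases the engagement count for that topic,
# which pushes similar content even higher next time: the filter bubble.
```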

One can make the case that these platforms compete in a race to the bottom of the brain stem—where AIs present content that appeals to the low-level emotions of the lizard brain, such things as immediate rewards, outrage, and fear. Roger writes, “Short videos perform better than longer ones. Animated GIFs work better than static photos. Sensational headlines work better than calm descriptions of events. Although the space of true things is fixed, the space of falsehoods can expand freely in any direction. False outcompetes true. Inflammatory posts work better at reaching large audiences within Facebook and other platforms.”

Roger continues, “Getting a user outraged, anxious, or afraid is a powerful way to increase engagement. Anxious and fearful users check the site more frequently. Outraged users share more content to let other people know what they should also be outraged about. Best of all from Facebook’s perspective, outraged or fearful users in an emotionally hijacked state become more reactive to further emotionally charged content. It is easy to imagine how inflammatory content would accelerate the heart rate and trigger dopamine hits. Facebook knows so much about each user that they can often tune News Feed to promote emotional responses. They cannot do this all the time for every user, but they do it far more than users realize. And they do it subtly, in very small increments. On a platform like Facebook, where most users check the site every day, small nudges over long periods of time can eventually produce big changes.”

The Role of Artificial Intelligence

March 31, 2019

This is the fourth post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Companies like Facebook and Google use artificial intelligence (AI) to build behavioral prediction engines that anticipate our thoughts and emotions based on patterns found in the vast amount of data they have accumulated about users. Users’ likes, posts, shares, comments, and Groups have taught Facebook’s AI how to monopolize our attention. As a result, Facebook can offer advertisers exceptionally high-quality targeting.
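
As a rough illustration of what a “behavioral prediction engine” means in practice, and not Facebook’s actual system, the sketch below scores each user’s predicted chance of clicking an ad from a few engagement features and targets the highest scorers. The features and weights are invented; a real system would learn them from data.

```python
# Sketch of behavioral ad targeting: predict each user's chance of clicking,
# then show the ad to the highest-scoring users. All values are invented.
from math import exp

WEIGHTS = {"likes_per_day": 0.08, "minutes_on_site": 0.02, "groups_joined": 0.15}
BIAS = -3.0

def predicted_click_probability(user):
    score = BIAS + sum(WEIGHTS[f] * user[f] for f in WEIGHTS)
    return 1 / (1 + exp(-score))   # logistic link squashes the score into (0, 1)

users = {
    "user_a": {"likes_per_day": 40, "minutes_on_site": 90, "groups_joined": 6},
    "user_b": {"likes_per_day": 3,  "minutes_on_site": 15, "groups_joined": 0},
}

scores = {name: predicted_click_probability(u) for name, u in users.items()}
for name, p in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: predicted click probability {p:.2f}")
```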

This battle for attention requires constant innovation. In the early days of the internet, the industry learned that users adapt to predictable ad layouts, skipping over them without registering any of the content. There’s a tradeoff when it comes to online ads. Although it is easy to see that the right person is seeing the ad, it is much harder to make sure that the person is paying attention to the ad. The solution to the latter problem is to maximize the time users spend on the platform. Because users devote only a small percentage of their attention to the ads they see, the platforms try to monopolize as much of the users’ attention as possible. So Facebook, as well as other platforms, adds new content formats and products to stimulate more engagement. Text was enough at the outset. Next came photos, then mobile. Video is the current frontier. Facebook also introduces new products such as Messenger and, soon, dating. To maximize profits, Facebook and other platforms hide the data on the effectiveness of ads.

Platforms prevent traditional auditing practices by providing less-than-industry-standard visibility. Consequently advertisers say, “I know half my ad spending is wasted; I just don’t know which half.” Nevertheless, platform ads work well enough that advertisers generally spend more every year. Search ads on Google offer the clearest payback, but brand ads on other platforms are much harder to measure. But advertisers need to put their message in front of prospective customers, regardless of where they are. When users gravitate from traditional media to the internet, the ad dollars follow them. Platforms do whatever they can to maximize daily users’ time on site.

As is known from psychology and persuasive technology, unpredictable, variable rewards stimulate behavioral addiction. Like buttons, tagging, and notifications trigger social validation loops. So users do not stand a chance. We humans have evolved a common set of responses to certain stimuli that can be exploited by technology. “Fight or flight” is one example. When presented with a visual stimulus such as a vivid color (red is a trigger color), or a vibration against the skin near our pocket that signals a possibly enticing reward, the body responds in predictable ways, such as a faster heartbeat and the release of dopamine. These are meant to be momentary responses that increase the odds of survival in a life-or-death situation. Too much of this kind of stimulation is bad for all humans, but these effects are especially dangerous for children and adolescents. The first consequences include lower sleep quality, increased stress, anxiety, depression, an inability to concentrate, irritability, and insomnia. Some develop a fear of being separated from their phone.
Many users develop problems relating to and interacting with people. Children get hooked on games, texting, Instagram, and Snapchat in ways that change the nature of human experience. Cyberbullying becomes easy over social media because when technology mediates human relationships, the social cues and feedback loops that might normally cause a bully to experience shunning or disgust by their peers are not present.

Adults get locked into filter bubbles. Wikipedia defines filter bubbles as “a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see.”

Brexit

March 30, 2019

This is the third post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” The United Kingdom voted to exit the European Union in June 2016. Many posts have been written regarding how Russia used social media, including Facebook, to push Trump in the voting so that he won the Electoral College (but not the popular vote, which his opponent won by more than 3 million votes).

The Brexit vote came as a total shock. Polling data had suggested that “Remain” would win over “Leave” by about four points. Precisely the opposite happened, and no one could explain the huge swing. A possible explanation occurred to Roger. “What if Leave had benefited from Facebook’s architecture? The Remain campaign was expected to win because the UK had a sweet deal with the European Union: it enjoyed all the benefits of membership, while retaining its own currency. London was Europe’s undisputed financial hub, and UK citizens could trade and travel freely across the open borders of the continent. Remain’s “stay the course” message was based on smart economics but lacked emotion. Leave based its campaign on two intensely emotional appeals. It appealed to ethnic nationalism by blaming immigrants for the country’s problems, both real and imaginary. It also promised that Brexit would generate huge savings that would be used to improve the National Health Service, an idea that allowed voters to put an altruistic shine on an otherwise xenophobic proposal.” So here is an example of Facebook exploiting the System 1 processes that were explained in the immediately preceding post.

Roger writes, “The stunning outcome of Brexit triggered a hypothesis: in an election context, Facebook may confer advantages to campaign messages based on fear or anger over those based on neutral or positive emotions. It does this because Facebook’s advertising business model depends on engagement, which can best be triggered through appeals to our most basic emotions. What I did not know at the time is that while joy also works (which is why puppy and cat videos and photos of babies are so popular), not everyone reacts the same way to happy content. Some people get jealous, for example. ‘Lizard brain’ emotions such as fear and anger produce a more uniform reaction and are more viral in a mass audience. When users are riled up, they consume and share more content. Dispassionate users have relatively little value to Facebook, which does everything in its power to activate the lizard brain. Facebook has used surveillance to build giant profiles on every user.”

The objective is to give users what they want, but the algorithms are trained to nudge user attention in directions that Facebook wants. These algorithms choose posts calculated to press emotional buttons because scaring users or pissing them off increases time on site. Facebook calls it engagement when users pay attention, but the goal is behavior modification that makes advertising more valuable. At the time the book was written, Facebook was the fourth most valuable company in America, despite being only fifteen years old, and its value stems from its mastery of surveillance and behavioral modification.
So who was using Facebook to manipulate the vote? The answer is Russia, just as it wanted to elect Trump president. Russia used Ukraine as a proving ground for its disruptive technology on Facebook. Russia wanted to break up the EU, of which Great Britain was a prominent part. The French Minister of Foreign Affairs has found that Russia is responsible for 80% of disinformation activity in Europe. One of Russia’s central goals is to break up alliances.

Zucking

March 29, 2019

This is the second post based on an important book by Roger McNamee titled “Zucked: Waking Up to the Facebook Catastrophe.” Roger writes, “Zuck created Facebook to bring the world together.” What Roger did not know when he met Zuck, but eventually discovered, was that Zuck’s idealism was unbuffered by realism or empathy. Zuck seems to have assumed that everyone would view and use Facebook the way he did, not imagining how easily the platform could be exploited to cause harm. He did not believe in data privacy and did everything he could to maximize disclosure and sharing. Roger writes that Zuck operated the company as if every problem could be solved with more or better code. “He embraced invasive surveillance, careless sharing of private data, and behavior modification in pursuit of unprecedented scale and influence. Surveillance, the sharing of user data, and behavioral modification are the foundation of Facebook’s success. Users are fuel for Facebook’s growth and, in some cases, the victims of it.”

The term “behavioral modification” is used here in a different sense than how it is usually meant. Typically behavioral modification is used to modify or eliminate undesirable behaviors, such as smoking. Although sometimes this involves the use of painful stimuli, there are effective techniques that avoid aversive stimuli.

The behavioral modification involved in Zucking can best be understood in terms of Kahneman’s two-process view of cognition. The two-process view of cognition provides a means of understanding both how we can process information so quickly and why cognition fails and is subject to error. There are several two-system views of cognition, all of which share the same basic ideas. Perhaps the most noteworthy is that of Nobel Laureate Daniel Kahneman.

System 1 is named Intuition. System 1 is very fast, employs parallel processing, and appears to be automatic and effortless. Its processes are so fast that they are executed, for the most part, outside conscious awareness. Emotions and feelings are also part of System 1. Learning is associative and slow. For something to become a System 1 process typically requires much repetition and practice. Activities such as walking, driving, and conversation are primarily System 1 processes. They occur rapidly and with little apparent effort. We would not have survived if we could not do these types of processes rapidly. But this speed of processing is purchased at a cost: the possibility of errors, biases, and illusions.

System 2 is named Reasoning. It is controlled processing that is slow, serial, and effortful. It is also flexible. This is what we commonly think of as conscious thought. One of the roles of System 2 is to monitor System 1 for processing errors, but System 2 is slow and System 1 is fast, so errors slip through.

Zuck’s behavioral modification involves System 1 processing almost exclusively. System 1 is largely emotional and involves little, if any, thinking. “Likes” are largely emotional responses. People like something because it is something they agree with and it evokes a favorable emotional response. Similarly, when someone accesses a site, it is most likely a site that they like and to which they respond favorably.

Facebook collects the data to send users to sites that they like and are interested in. Most of this processing occurs at a nonconscious level, so users are not aware that they are being manipulated. But they are being manipulated, which can lead to poor decisions. Moreover, they are directed to like-minded individuals, so there is minimal chance that they will encounter different opinions and different ideas.

The behavior that is being modified is all beneficial to Facebook. Facebook wants to keep users on Facebook as long as possible. This results in increased ad revenues for Facebook. The critical resource here is attention. And Facebook’s procedures are extremely effective at capturing and keeping attention.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Zucked

March 28, 2019

The title of this post is the first part of a title of an important book by Roger McNamee. The remainder of the title is “Waking Up to the Facebook Catastrophe.” Roger McNamee is a longtime tech investor and tech evangelist. He was an early advisor to Facebook founder Mark Zuckerberg. To his friends Zuckerberg is known as “Zuck.” McNamee was an early investor in Facebook and he still owns shares.

The prologue begins with a statement made by Roger to Dan Rose, the head of media partnerships at Facebook on November 9, 2016, “The Russians used Facebook to tip the election!” One day early in 2016 he started to see things happening on Facebook that did not look right. He started pulling on that thread and uncovered a catastrophe. In the beginning, he assumed that Facebook was a victim and he just wanted to warn friends. What he learned in the months that followed shocked and disappointed him. He learned that his faith in Facebook had been misplaced.

This book is about how Roger became convinced that even though Facebook provided a compelling experience for most of its users, it was terrible for America and needed to change or be changed, and what Roger tried to do about it. This book will cover what Roger knows about the technology that enables internet platforms like Facebook to manipulate attention. He explains how bad actors exploit the design of Facebook and other platforms to harm and even kill innocent people. He explains how democracy has been undermined because of the design choices and business decisions by controllers of internet platforms that deny responsibility for the consequences of their actions. He explains how the culture of these companies causes employees to be indifferent to the negative side effects of their success. At the time the book was written, there was nothing to prevent more of the same.

Roger writes that this is a story about trust. Facebook and Google, as well as other technology platforms, are the beneficiaries of trust and goodwill accumulated over fifty years of earlier generations of technology companies. But they have taken advantage of this trust, using sophisticated techniques to prey on the weakest aspects of human psychology, to gather and exploit private data, and to craft business models that do not protect users from harm. Now users must learn to be skeptical about the products they love, to change their online behavior, to insist that platforms accept responsibility for the impact of their choices, and to push policy makers to regulate the platforms to protect the public interest.

Roger writes, “It is possible that the worst damage from Facebook and the other internet platforms is behind us, but that is not where the smart money will place its bet. The most likely case is that technology and the business model of Facebook and others will continue to undermine democracy, public health, privacy, and innovations until a countervailing power, in the form of government intervention or user protest, forces change.”

Free Exchange | Replacebook

March 27, 2019

The title of this post is identical to the title of a piece in the Finance & Economics section of the 16 February 2019 issue of “The Economist.” The article notes, “There has never been such an agglomeration of humanity as Facebook. Some 2.3bn people, 30% of the world’s population, engage with the network each month.” It describes an experiment in which researchers kicked a sample of people off Facebook and observed the results.

In January, Hunt Allcott, of New York University, and Luca Braghieri, Sarah Eichmeyer, and Matthew Gentzkow, of Stanford University, published results of the largest such experiment yet. They recruited several thousand Facebookers and sorted them into control and treatment groups. Members of the treatment group were asked to deactivate their Facebook profiles for four weeks in late 2018. The researchers checked up on their volunteers to make sure they stayed off the social network, and then studied the results.

On average, those booted off enjoyed an additional hour of free time. They tended not to redistribute their liberated minutes to other websites and social networks, but instead watched more television and spent time with friends and family. They consumed much less news, and were consequently less aware of events but also less polarized in their views about them than those still on the network. Leaving Facebook boosted self-reported happiness and reduced feelings of depression and anxiety.

Several weeks after the deactivation period, those who had been off Facebook spent 23% less time on it than those who never left, and 5% of the forced leavers had yet to turn their accounts back on. And the amount of money subjects were willing to accept to shut off their accounts for another four weeks was 13% lower after the month off than it had been before.
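
For readers curious how such a randomized comparison is evaluated, here is a sketch with made-up numbers rather than the study’s actual data: the estimated effect is simply the difference between the treatment and control group means, with a rough standard error to judge whether the difference could be chance.

```python
# Sketch of a control-vs-treatment comparison with invented happiness scores;
# the real study's data and methods are more elaborate.
from statistics import mean, stdev
from math import sqrt

control   = [6.1, 5.8, 6.4, 5.9, 6.0, 6.2, 5.7, 6.3]  # stayed on Facebook
treatment = [6.6, 6.4, 6.9, 6.2, 6.8, 6.5, 6.3, 6.7]  # deactivated for four weeks

effect = mean(treatment) - mean(control)
se = sqrt(stdev(control) ** 2 / len(control) + stdev(treatment) ** 2 / len(treatment))

print(f"estimated effect of leaving Facebook: {effect:+.2f} points "
      f"(rough standard error {se:.2f})")
```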

In previous posts HM has made the point that our attentional resources are limited, and that they should not be wasted. HM has also recommended quitting Facebook and similar accounts. Of course, this is a personal question regarding how each of us uses our attentional resources. The key point is to be cognizant that our precious attentional resources are limited and to spend them wisely and not waste them.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Social Prosthetic Systems

March 22, 2019

This is the sixth post in a series of posts based on a book by Stephen Kosslyn and G. Wayne Miller titled “Top Brain, Bottom Brain.” The subtitle is “Harnessing the Power of the Four Cognitive Modes.” When we don’t have the ability or skill to do something we need to do, we should turn to someone (or something) else for help. Sometimes there is a reluctance to ask for help. The authors’ recommendation is to overcome any reluctance to ask for help. Then the question is to whom, exactly, we should reach out. The answers can be found in the principles of what the authors call social prosthetic systems, a name coined by drawing an analogy to physical prosthetic systems. Should we lose a leg, we would rely on a prosthesis to walk. The prosthesis makes up for the shortcoming, allowing one either to accomplish a task, or to accomplish it better, or to achieve an objective. Whenever we use a calculator we are using a cognitive prosthesis.

The authors note that the Internet has evolved into what can be called the mother of all cognitive prostheses—the place many of us turn, typically via Google and other search engines, to find facts, directions, images, translations, calendars, and more. We store personal data and cherished memories in the cloud, from which they can easily be retrieved. James Gleick, author of “The Information: A History, a Theory, a Flood,” calls the many billions of pages that constitute the Internet “the global prosthetic brain.”

The authors note that this statement is not quite correct. The Internet is a vast memory, but less useful as a tool of reasoning—especially when emotion is involved. Despite its informational power, the Internet is of limited use when we need wise advice to help us navigate a thorny situation. The main cognitive prosthesis we rely on for such help is not software or machines, but other people: individuals who can help us extend our intelligence and discover and regulate our emotions. In the lingo of the healthy memory blog, these social prosthetic systems are part of Transactive Memory. There is an entire category of posts labeled Transactive Memory. Transactive Memory is memory that cannot be accessed directly from our brains. Paper, technology, and our fellow humans constitute Transactive Memory. So these Social Prosthetic Systems are part of Transactive Memory.

As the senior author defined it in his first paper on the idea, social prosthetic systems are “human relationships that extend one’s emotional or cognitive capacities. In such systems, other people serve as prosthetic devices, filling in shortcomings in an individual’s cognitive or emotional abilities.” The authors note that with the possible exception of a committed hermit, every person belongs to one or more of these systems.

The authors present an example of our being in an emotionally fraught situation—on the verge of breaking up with a spouse or partner. “Your partner complains that you work too much, and you feel trapped between the requirements of your job and your desire to maintain the relationship. You would probably not want to seek the counsel of someone who typically operates in Stimulator or Adaptor mode. A person operating in Stimulator Mode might simply offer a knee-jerk reaction, perhaps giving you the first idea that springs to mind (“Maybe you just need to explain why your job is so important to you”) and a person operating in Adaptor Mode might try to minimize the issue (“Life has its ups and downs—if you wait awhile this will probably get better”). So that would leave you with the choice of counsel from someone who typically operates in Mover Mode or Perceiver Mode. And that choice would depend in part on your goals for the outcome. If you wanted strategic help on how to handle the situation, the theory suggests the person in Mover Mode would be the most appropriate (perhaps suggesting ways to achieve more work/life balance by avoiding work on weekends). But if you wanted reflection on how you were actually feeling, and on what you wanted and needed, the person who typically operates in Perceiver Mode might be more helpful (listening as you try to sort out why you feel so torn). Putting this together, you might want to seek counsel from two separate people to garner the benefits of both kinds of input. Thus informed, you could more wisely make decisions.”

Good Advice from the Danes

March 15, 2019

This post is based on an article in the Washington Post by Marie Helweg-Larsen titled (in the electronic version) “Angry? Worried? Stressed Out? Just say ‘pyt.’” Danes are regarded as being among the happiest people in the world. The article notes that they also happen to have a lot of cool words for ways to be happy.

One is “hygge,” which is often mistranslated to mean “cozy,” but it really describes the process of creating intimacy. But the word “pyt” was recently voted the most popular word by the Danes. Pyt does not have an exact English translation. It’s more a cultural concept about cultivating healthy thoughts to deal with stress.

Pyt sounds something like “pid.” It is usually expressed as an interjection in reaction to a daily hassle, frustration, or mistake. It most closely translates to the English sayings “Don’t worry about it,” “stuff happens,” or “oh, well.”

If you break a glass in the kitchen, you would just shrug and say, “pyt.” If you see a parking ticket lodged under your windshield wiper and become hot with anger, just shake your head and murmur, “pyt.”

Its benefit comes from accepting and resetting. It provides a reminder to step back and refocus rather than overreact. Instead of assigning blame, it’s a way to let go and move on.

The author, who is a Danish psychologist, writes, “You might say “pyt” in response to something you did—“pyt, that was a dumb thing to say”—or to support another person—“pyt with that, don’t fret about your co-worker’s insensitivity.”

Pyt can reduce stress because it is a sincere attempt to encourage yourself and others not to get bogged down by minor daily frustrations. One Danish business leader has suggested that knowing when to say “pyt” at work can lead to more job satisfaction.

The author notes that there’s a rich strain of psychological research devoted to understanding how we interpret and react to other people’s actions.

Study after study shows that we are happier and live longer when we have fewer daily hassles. And in some cases, what constitutes a hassle might be tied to how we interpret what’s happening around us.

Pyt can also help people avoid the tendency to blame others. Say you’re late to an appointment and there’s a person in front of you who’s driving slowly. This can feel irrationally personal.

However, research shows that we get angrier when we explain someone’s behavior by pointing to their incompetence, intentionality, or poor character.

If you say “pyt,” you’re deciding that it’s not worth letting someone else’s actions, which are out of your control, bother you; it’s “water off a duck’s back.” You can also use other strategies, such as thinking about situational constraints—maybe the driver was ill—or considering whether this will be an issue in two hours, two days, or two weeks.

Of course, ‘pyt’ should not be said in response to being seriously wronged. Nor should it be used when you ought to take responsibility, nor should it be used as an excuse for inaction.

Danes who teach positive psychology have also written about how applying pyt to too many aspects of life isn’t healthy, especially when they concern core needs or values.

Other activities, such as walking in nature, doing yoga or meditation, exercising, keeping a journal, or engaging in creative work, can also facilitate letting go.

And you can also get a pyt button. Danish teachers use pyt buttons to teach students how to let go. Teachers find that it can help children cope with smaller frustrations such as losing a game, or losing a pencil. It teaches children that everything can’t be perfect.

These are important skills. Research shows that perfectionism is related to worry and depression, whereas self-compassion and social support can help prevent perfectionism from leading to negative outcomes.

The pyt button has become popular recently among Danish adults. They can either make one at home or buy one that, when pressed, says “pyt pyt pyt” and “breathe deeply, it will all be okay” in Danish.

Enter “pyt button” in your browser search box to find where to get your own pyt button.

Another factor contributing to the Danes being among the happiest people in the world is that they have government-provided healthcare. Moreover, the costs of this healthcare are less than US costs, and the care that the Danes receive is better than in the US. Of course, this is true of every advanced country other than the US. It is as if these other countries are wearing shoes and the US is still barefoot.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The Social Brain Online

March 5, 2019

The title of this post is the same as the title of a chapter in Daniel Goleman’s book “The Brain and Emotional Intelligence: New Insights.” Here the question is how do social brains interact when we’re sitting looking at a video monitor instead of directly at another person? There has been a major clue about the problem ever since the beginning of the internet, when it was just scientists emailing on what was called ARPAnet. The problem was, and still is, flaming. Goleman writes, “Flaming happens when someone is a little upset—or very upset—and with their amygdala in firm control, furiously types out a message and hits “send” before thinking about it—and that hijack hits the other person in their inbox. Now the more technical term for flaming is cyber-disinhibition, because we realize that the disconnect between the social brain and the video monitor releases the amygdala from the usual management by the more reasonable prefrontal areas.”

Online the social brain has no feedback loop: unless you are in a live, face-to-face teleconference, the social circuitry has no input. It doesn’t know how the other person is reacting so it can’t guide our response—do this, don’t do that—as it does automatically and instantly in face-to-face interactions. Instead of acting as a social radar, the social brain says nothing—and that unleashes the amygdala to flame and cause a hijack.

A phone call gives these circuits ample emotional cues from tone of voice to understand the emotional nuance of what you say. But email lacks all these inputs.

One reason personal connection is so important for online communication has to do with the social brain/video monitor interface. When we’re at our keyboard and we think a message is positive, and we hit send, what we don’t realize at the neural level is that all the nonverbal cues (facial expression, tone of voice, gesture, and so on) stay with us. There’s a negativity bias to email: when the sender thinks the email was positive, the receiver tends to see it as neutral. When the sender thinks it’s neutral, the receiver tends to interpret it as somewhat negative. The big exception is when you know the person well; that bond overcomes the negativity bias.

Clay Shirky, who studies social networks and the web at New York University, gives the example of a local bank security team that had to operate 24 hours a day. In order for the team to operate well, it was critical that it use what he calls a banyan tree model, where key members of each group get together and meet key members of every other group, so that in an emergency they can contact each other and get a clear sense of how to evaluate the message a group is sending. If someone in the receiving group knows the sender well, or has a contact there whom he can ask about the person who sent the message, then the receiving group can better gauge how much to rely on it.

Goleman says that one enormous upside of the web is what you might call brain 2.0. As Shirky points out, the potential for social networking to multiply our intellectual capital is enormous. It's sort of a super-brain, the extended brain on the web. In the healthy memory blog, this is termed transactive memory.

Goleman writes that the term group IQ refers to the sum total of the best talents of each person on a team, or in a group, contributed at full force. What Goleman does not say is that the group can be more than the sum of its parts due to beneficial interactions within the group. He does note that one factor that makes the actual group IQ less than its potential is a lack of interpersonal harmony in the group. Vanessa Druskat of the University of New Hampshire has studied what she calls group EQ—things like being able to surface and resolve conflicts within the group, high levels of trust, and mutual understanding. Not surprisingly, her research shows that groups with the highest collective emotional intelligence outperform the others. Goleman notes that when you apply this to groups working together online, one core operating principle is that the more channels that come into the social brain, the more easily attuned you can be. So, when you video-conference, you have visual, body, and voice cues. Even if it's a conference call, the voice is extraordinarily rich in emotional cues. In any case, if you're working together just through text, it's best when you know the other person well, or at least have some sense of them in order to have a context for reading their messages, so you can overcome the negativity bias. Best of all is leaving your office or cubicle and getting together to talk with the person.

What Do We Know, What Can We Do?

January 24, 2019

This is the twelfth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." Having raised an enormous number of problems, the authors fortunately also propose possible solutions.

The military is already training and experimenting for the new environment. The Joint Readiness Training Center at Fort Polk, Louisiana is a continuously operating field laboratory. The laboratory is good not only for training, but also for simulations to respond to different situations, so that possible solutions can be evaluated in a simulation prior to actual conflict. The Army needs to understand how to train for this war. Fort Polk has a brand-new simulation for this task: the SMEIR (Social Media Environment and Internet Replication). SMEIR simulates the blogs, news outlets, and social media accounts that intertwine to form a virtual battlefield.

The authors also claim that LikeWar has rules, and they have tried to articulate them:

"First, for all the sense of flux, the modern information environment is becoming stable. The internet is now the preeminent communications medium in the world; it will remain so for the foreseeable future. Through social media the web will grow bigger in size, scope, and membership, but its essential form and centrality to the information ecosystem will not change."

“Second, the internet is a battlefield. It is a platform for achieving the goals of whichever actor manipulates it most effectively. Its weaponization, and the conflicts that erupt on it, define both what happens on the internet and what we take away from it.”

“Third, this battlefield changes how we must think about information itself. If something happens, we must assume that there’s likely a digital record of it that will surface seconds or years from now. But an event only carries power if people also believe that it happened. So a manufactured event can have real power, while a demonstrably true event can be rendered irrelevant. What determines the outcome isn’t mastery of the “facts,” but rather a back-and-forth battle of psychological, political, and algorithmic manipulation.”

"Fourth, war and politics have never been so intertwined. In cyberspace, the means by which the political or military aspects of this competition are won are essentially identical. Consequently, politics has taken on elements of information warfare, while violent conflict is increasingly influenced by the tug-of-war for online opinion. This also means that the engineers of Silicon Valley, quite unintentionally, have turned into global power brokers. Their most minute decisions shape the battlefield on which both war and politics are increasingly decided."

"Fifth, we're all part of the battle. We are surrounded by countless information struggles—some apparent, some invisible—all of which seek to alter our perceptions of the world. Whatever we notice, whatever we "like," whatever we share, becomes the next salvo. In this new war of wars, taking place on the network of networks, there is no neutral ground."

For governments, the first and most important step is to take this new battleground seriously. The authors write, "Today, a significant part of the American political culture is willfully denying the new threats to its cohesion. In some cases, it is colluding with them."

"Too often, efforts to battle back against online dangers emanating from actors at home and abroad have been stymied by elements within the U.S. government. Indeed, at the time we write this in 2018, the Trump White House has not held a single cabinet-level meeting on how to address the challenges outlined in this book, while its State Department refused to increase efforts to counter online terrorist propaganda and Russian disinformation, even as Congress allocated nearly $80 million for the purpose."

"Similarly, the American election system remains remarkably vulnerable, not merely to hacking of the voting booth, but also to the foreign manipulation of U.S. voters' political dialogue and beliefs. Ironically, although the United States has contributed millions of dollars to help nations like Ukraine safeguard their citizens against these new threats, political paralysis has prevented the U.S. government from taking meaningful steps to inoculate its own population. Until this is reframed as a nonpartisan issue—akin to something as basic as health education—the United States will remain at grave risk."

The Conflicts That Drive the Web and the World

January 23, 2019

This is the eleventh post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." The title of this post is identical to the subtitle of the chapter titled "Likewar." In 1990 two political scientists at the RAND Corporation, the Pentagon's think tank, started to explore the security implications of the internet. John Arquilla and David Ronfeldt made their findings public in a revolutionary 1993 article titled "Cyberwar Is Coming!" They wrote that "information is becoming a strategic resource that may prove as valuable in the post-industrial era as capital and labor have been in the industrial age." They argued that future conflicts would not be won by physical forces, but by the availability and manipulation of information. They warned of "cyberwar," battles in which computer hackers might remotely target economies and disable military capabilities.

They went further and predicted that cyberwar would be accompanied by netwar. They explained: It means trying to disrupt, damage, or modify what a target population "knows" or thinks it knows about itself and the world around it. A netwar may focus on public or elite opinion, or both. It may involve public diplomacy measures, propaganda and psychological campaigns, political and cultural subversion, deception of or interference with the local media…In other words, netwar represents a new entry on the spectrum of conflict that spans economic, political, and social as well as military forms of 'war.'

Early netwar became the province of far-left activists and democratic protesters, beginning with the 1994 Zapatista uprising in Mexico and culminating in the 2011 Arab Spring. In time, terrorists and far-right extremists also began to gravitate toward netwar tactics. The balance shifted for disenchanted activists when dictators learned to use the internet to strengthen their regimes. For the authors, the moment came when they saw how ISIS militants used the internet not just to sow terror across the globe, but to win battles in the field. For Putin's government it came when the Russian military reorganized itself to strike back at what it perceived as a Western information offensive. For many in American politics and Silicon Valley, it came when the Russian effort poisoned the networks with a flood of disinformation, bots, and hate.

In 2011, DARPA’s research division launched the new Social Media in Strategic Communications program to study online sentiment analysis and manipulation. About the same time, the U.S. military’s Central Command began overseeing Operation Earnest Voice to fight jihadists across the Middle East by distorting Arabic social media conversations. One part of this initiative was the development of an “online persona management service,” which is essentially sockpuppet software, “to allow one U.S. serviceman or woman to control up to 10 separate identities based all over the world.” Beginning in 2014, the U.S. State Department poured vast amounts of resources into countering violent extremism (CVE) efforts, building an array of online organizations that sought to counter ISIS by launching information offensives of their own.

The authors say that as national militaries have reoriented themselves to fight global information conflicts, the domestic politics of these countries have also morphed to resemble netwars. The authors write, "Online, there's little difference in the information tactics required to 'win' either a violent conflict or a peaceful campaign. Often, their battles are not just indistinguishable but also directly linked in their activities (such as the alignment of Russian sockpuppets and alt-right activists). The realms of war and politics have begun to merge."

Memes and memetic warfare also emerged. Pepe the Frog was a green, dumb internet meme. In 2015, Pepe was adopted as the banner of Trump's vociferous online army. By 2016, he'd also become a symbol of a resurgent tide of white nationalism, declared a hate symbol by the Anti-Defamation League. Trump tweeted a picture of himself as an anthropomorphized Pepe. Pepe was ascendant by 2017. Trump supporters launched a crowdfunding campaign to erect a Pepe billboard "somewhere in the American Midwest." On Twitter, Russia's UK embassy used a smug Pepe to taunt the British government in the midst of a diplomatic argument.

Pepe formed an ideological bridge between trolling and the next-generation white nationalist, alt-right movement that had lined up behind Trump. The authors note that Third Reich phrases like "blood and soil," filtered through Pepe memes, fit surprisingly well with Trump's America First, anti-immigration, anti-Islamic campaign platform. The wink and nod of a cartoon frog allowed a rich, but easily deniable, symbolism.

Pepe transformed again when Trump won. Pepe became representative of a successful, hard-fought campaign—one that now controlled all the levers of government. On Inauguration Day in Washington, DC, buttons and printouts of Pepe were visible in the crowd. Online vendors began selling a hat printed in the same style as those worn by military veterans of Vietnam, Korea, and WW II. It proudly pronounced its wearer as a “Meme War Veteran.”

The problem with memes is that by hijacking or chance, a meme can come to contain vastly different ideas than those that inspired it, even as it retains all its old reach and influence. And once a meme has been so redefined, it becomes nearly impossible to reclaim. Making something go viral is hard; co-opting or poisoning something that's already viral can be remarkably easy. U.S. Marine Corps Major Michael Prosser published a thesis titled "Memetics—A Growth Industry in US Military Operations." Prosser's work kicked off a tiny DARPA-funded industry devoted to "military memetics."

The New Wars for Attention and Power

January 22, 2019

This is the tenth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." The title of this post is identical to the subtitle of the chapter titled "Win the Net, Win the Day."

In a 1974 RAND Corporation report that became one of the foundational studies of terrorism, Brian Jenkins declared, "Terrorism is theater." The difference between the effectiveness of the Islamic State and that of terror groups in the past was not the brains of ISIS; it was the medium they were using. Mobile internet access could be found everywhere; smartphones were available in any bazaar. Advanced video and image editing tools were just one illegal download away, and an entire generation was well acquainted with their use. For those who weren't, there were free online classes offered by a group called Jihadi Design. It promised to take ISIS supporters "from zero to professionalism" in just a few sessions. The most dramatic change from earlier terrorism was that distributing a global message was as easy as pressing "send," with the dispersal facilitated by a network of super-spreaders beyond any one state's control.

ISIS networked its propaganda, pushing out a staggering volume of online messages. In 2016 Charlie Winter counted nearly fifty different ISIS media hubs, each based in a different region with a different target audience, but all threaded through the internet. These hubs were able to generate over a thousand "official" ISIS releases, ranging from statements to online videos, in just a one-month period.

They spun a tale in narratives. Human minds are wired to seek and create narratives. Every moment of the day, our brains are analyzing new events and fitting them into the thousands of different narratives already stowed in our memories. In 1944 psychologists Fritz Heider and Marianne Simmel produced a short film that showed three geometric figures (two triangles and a circle) bouncing off each other at random. They screened the film to a group of research subjects and asked them to interpret the shapes' actions. All but one of the subjects described these abstract objects as living beings; most saw them as representations of humans. In the shapes' random movements they saw motives, emotions, and complex personal histories: the circle was "worried," one triangle was "innocent," and the other was "blinded by rage." Even in crude animation, all but one observer saw a story of high drama.

The first rule in building effective narratives is simplicity. In 2000, the average attention span of an internet user was measured at twelve seconds. By 2015 it had shrunk to eight seconds. During the 2016 election, Carnegie Mellon University researchers studied and ranked the complexity of the candidates' language (using the Flesch-Kincaid score). They found that Trump's vocabulary measured at the lowest level of all the candidates, comprehensible to someone with a fifth-grade education. This phenomenon is consistent with a larger historic pattern. Starting with George Washington's first inaugural address, which was one of the most complex overall, American presidents communicated at a college level only when newspapers dominated mass communication. But each time a new technology took hold, complexity dropped. The authors write, "To put it another way: the more accessible the technology, the simpler a winning voice becomes." It may be Sad! But it is True!
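
For readers curious what the Flesch-Kincaid score actually measures, here is a minimal sketch in Python of the standard grade-level formula. The syllable counter is a crude vowel-group heuristic and the sample sentences are invented for illustration, so the exact numbers are only approximate.

import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels (at least one per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# A short, punchy style yields a low estimated grade level...
print(round(flesch_kincaid_grade("We are going to win. Believe me. It will be great."), 1))
# ...while long sentences built from long words yield a much higher one.
print(round(flesch_kincaid_grade(
    "Notwithstanding considerable deliberation, the administration's comprehensive "
    "recommendations necessitate extraordinarily complicated legislative negotiation."), 1))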

The second rule of narrative is resonance. Nearly all effective narratives conform to what social scientists call "frames." Frames are products of specific languages and cultures that feel instantly and deeply familiar. To learn more about frames, enter "frames" into the search block of the healthy memory blog.

The third and final rule of narrative is novelty. Just as narrative frames help build resonance, they also serve to make things predictable. However, too much predictability can be boring, especially in an age of microscopic attention spans and unlimited entertainment. Moreover, there seems to be no lower limit on the quality of a narrative: some messages far exceed the limits of credibility, yet they are believed and spread.

Additional guidelines are "pull the heartstrings" and "feed the fury." The final guideline is inundation: drown the web, run the world.

The Unreality Machine

January 21, 2019

This is the ninth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." There was a gold rush in Veles, Macedonia. Teenage boys there worked in "media." More specifically, American social media. The average U.S. internet user is virtually a walking bag of cash, worth four times the advertising dollars of anyone else in the world. And the U.S. internet user is very gullible. The following is from the book: "In a town with 25% unemployment and an annual income of under $5,000, these young men had discovered a way to monetize their boredom and decent English-language skills." They set up catchy websites peddling fad diets and weird health tips. They relied on Facebook "shares" to drive traffic. Each click gave them a small slice of the pie from ads running along the side. Some of the best of them were pulling in tens of thousands of dollars a month.

Competition swelled, but fortunately the American political scene soon brought them a virtually inexhaustible source of clicks and resulting fast cash: the 2016 presidential election. Now back to the text: "The Macedonians were awed by Americans' insatiable thirst for political stories. Even a sloppy, clearly plagiarized jumble of text and ads could rack up hundreds of thousands of 'shares.' The number of U.S. politics-related websites operated out of Veles swelled into the hundreds."

One of the successful entrepreneurs estimated that in six months, his network of fifty websites attracted some 40 million page views, driven there by social media. This made him about $60,000. This 18-year-old then expanded his media empire. He outsourced the writing to three 15-year-olds, paying each $10 a day. He was far from the most successful of the Veles entrepreneurs. Some became millionaires. One rebranded himself as a "clickbait coach," running a school where he taught dozens of others how to copy his success.

These viral news stories weren't just exaggerations or products of political spin; they were flat-out lies. Sometimes the topic was supposed proof that Obama had been born in Kenya or that he was planning a military coup. Another report warned that Oprah Winfrey had told her audience that "some white people have to die."

The following is from the book: "Of the top twenty best-performing fake stories spread during the election, seventeen were unrepentantly pro-Trump. Indeed, the single most popular news story of the entire election was 'Pope Francis Shocks World, Endorses Donald Trump for President.' Social media provided an environment in which lies created by anyone, from anywhere, could spread everywhere, making the liars plenty of cash along the way."

In 1995 MIT media professor Nicholas Negroponte prophesied an interface agent that would read every newswire and newspaper and catch every TV and radio broadcast on the planet, and then construct a personalized summary. He called this the "Daily Me."

Harvard law professor Cass Sunstein argued that the opposite might actually be true. Rather than expanding their horizons, people were just using the endless web to seek out information with which they already agreed. He called this the "Daily We."

A few years later, with the creation of Facebook, the "Daily We," an algorithmically created newsfeed, became a fully functioning reality.
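
To make the idea concrete, here is a toy sketch (hypothetical, not from the book) of a feed ranker in Python that scores candidate stories by how much they overlap with topics a user has already liked. Left to run on its own, such a ranker surfaces more of what the user already agrees with, which is the mechanism behind the "Daily We" and the homophily discussed below.

def rank_feed(candidate_items, liked_topics):
    # candidate_items: list of (headline, set of topics)
    # liked_topics: dict mapping topic -> number of past likes
    def score(item):
        _, topics = item
        return sum(liked_topics.get(t, 0) for t in topics)
    return sorted(candidate_items, key=score, reverse=True)

liked = {"immigration": 5, "gun rights": 3}  # past engagement
candidates = [
    ("Local budget passes", {"city council"}),
    ("Border crisis deepens", {"immigration"}),
    ("New gun law proposed", {"gun rights", "immigration"}),
]
for headline, _ in rank_feed(candidates, liked):
    print(headline)
# The two stories matching prior likes float to the top; the unrelated one sinks.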

For example, flat-earthers had little hope of gaining traction in a post-Christopher Columbus, pre-internet world. This wasn't just because of the silliness of their views, but also because they couldn't easily find others who shared them. The World Wide Web has given the flat-earth belief a dramatic comeback. Proponents now have an active community and an aggressive marketing scheme.

This phenomenon is called "homophily," meaning "love of the same." Homophily is what makes us humans social creatures able to congregate in like-minded groups. It explains the growth of civilizations and cultures. It is also the reason an internet falsehood, once it begins to spread, can rarely be stopped.

Unfortunately, falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth. It becomes a deluge. The authors write, "Ground zero for the deluge, however, was in politics. The 2016 U.S. presidential election released a flood of falsehoods that dwarfed all previous hoaxes and lies in history. It was an online ecosystem so vast that the nightclubbing, moneymaking, lie-spinning Macedonians occupied only one tiny corner." There were thousands of fake websites, populated by millions of baldly false stories, each then shared across people's personal networks. In the final three months of the 2016 election, more of these fake political headlines were shared on Facebook than real ones. Meanwhile, in a study of 22 million tweets, the Oxford Internet Institute concluded that Twitter users, too, shared more "disinformation, polarizing and conspiratorial content" than actual news. The Oxford team called this problem "junk news."

Censorship, Disinformation, and the Burial of Truth

January 20, 2019

This is the eighth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." Initially, the notion that the internet would provide a basis for truth and independence seemed to be supported. The Arab Spring was promoted on the internet. The authors write, "Social media had illuminated the shadowy crimes through which dictators had long clung to power, and offered up a powerful new means of grassroots mobilization."

Unfortunately, this did not last. Not only did the activists fail to sustain their movement, but they noticed that the government began to catch up. Tech-illiterate bureaucrats were replaced by a new generation of enforcers who understood the internet almost as well as the protestors. They invaded online sanctuaries and used the very same channels to spread propaganda. And these tactics worked. The much-celebrated revolutions fizzled. In Libya and Syria, digital activists turned their talents to waging internecine civil wars. In Egypt, the baby named Facebook would grow up in a country that quickly turned back to authoritarian government.

The internet remains under the control of only a few thousand internet service providers (ISPs). These firms run the backbone, or "pipes," of the internet. Only a few ISPs supply almost all of the world's mobile data. Because two-thirds of all ISPs reside in the United States, the average number across the rest of the world is relatively small. The authors note that "Many of these ISPs hardly qualify as 'businesses' at all. Rather, they are state-sanctioned monopolies or crony sanctuaries directed by the whim of local officials." Although the internet cannot be destroyed, regimes can control when the internet goes on or off and what goes on it.

Governments can control internet access and target particular areas of the country. India, the world's largest democracy, cut mobile connections for a week in an area where violent protests had broken out. Bahrain instituted an internet curfew that affected only a handful of villages where antigovernment protests were brewing. When Bahrainis began to speak out against the shutdown, authorities narrowed their focus further, cutting access all the way down to specific internet users and IP addresses.

The Islamic Republic of Iran has poured billions of dollars into its National Internet Project. It is intended as a web replacement, leaving only a few closely monitored connections between Iran and the outside world. Iranian officials describe it as creating a "clean" internet for the country's citizens, insulated from the "unclean" web that the rest of us use.

Outside the absolute-authoritarian state of North Korea (whose entire internet is a closed network of about 30 websites), the goal isn't so much to stop the signal as it is to weaken it. Although extensive research and special equipment can circumvent government controls, the empowering parts of the internet are no longer for the masses.

Although the book discusses China, that discussion will not be included here as there are separate posts on the book “Censored: Distraction and Diversion Inside China’s Great Firewall” by Margaret E. Roberts.

The Russian government hires people to create chaos on the internet. They are tempted by easy work and good money: writing more than 200 blog posts and comments a day, assuming fake identities, hijacking conversations, and spreading lies. This is an ongoing war of global censorship by means of disinformation.

Russia’s large media networks are in the hands of oligarchs, whose finances are deeply intertwined with those of the state. The Kremlin makes its positions known through press releases and private conversations, the contents of which are then dutifully reported to the Russian people, no matter how much spin it takes to make them credible.

Valery Gerasimov has been mentioned in previous healthy memory blog posts. He channeled Clausewitz in a speech, reprinted in a Russian military newspaper, stating that "the role of nonmilitary means of achieving political and strategic goals has grown. In many cases, they have exceeded the power of the force of weapons in their effectiveness." This is known as the Gerasimov Doctrine, which has been enshrined in the nation's military strategy.

Individuals working at the Internet Research Agency assume a series of fake identities known as "sockpuppets." The authors write, "The job was writing hundreds of social media posts per day, with the goal of hijacking conversations and spreading lies, all to the benefit of the Russian government." For this work people are paid the equivalent of $1,500 per month. (Those who worked on the "Facebook desk" targeting foreign audiences received double the pay of those targeting domestic audiences.)

The following is taken directly from the text:

"The hard work of a sockpuppet takes three forms, best illustrated by how they operated during the 2016 U.S. election. One is to pose as the organizer of a trusted group. @Ten_GOP called itself the "unofficial Twitter account of Tennessee Republicans" and was followed by over 136,000 people (ten times as many as the official Tennessee Republican Party account). Its 3,107 messages were retweeted 1,213,506 times. Each retweet then spread to millions more users, especially when it was retweeted by prominent Trump campaign figures like Donald Trump Jr., Kellyanne Conway, and Michael Flynn. On Election Day 2016, it was the seventh most retweeted account across all of Twitter. Indeed, Flynn followed at least five such documented accounts, sharing Russian propaganda with his 100,000 followers at least twenty-five times.

The second sockpuppet tactic is to pose as a trusted news source. With a cover photo image of the U.S. Constitution, @partynews presented itself as a hub for conservative fans of the Tea Party to track the latest headlines. For months, the Russian front pushed out anti-immigrant and pro-Trump messages and was followed and echoed by some 22,000 people, including Trump's controversial advisor Sebastian Gorka.

Finally, sockpuppets pass as seemingly trustworthy individuals: a grandmother, a blue-collar worker from the Midwest, a decorated veteran, providing their own heartfelt take on current events (and who to vote for). Another former employee of the Internet Research Agency, Alan Baskayev, admitted that it could be exhausting to manage so many identities. "First you had to be a redneck from Kentucky, then you had to be some white guy from Minnesota who worked all his life, paid taxes and now lives in poverty; and in 15 minutes you have to write something in the slang of [African] Americans from New York."

There have been many other posts about Russian interference in Trump’s election. Trump lost the popular vote, and it is clear that he would not have won the Electoral College had it not been for Russia. Clearly, Putin owns Trump.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Flynn

January 19, 2019

This is the seventh post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." A former director of the U.S. Defense Intelligence Agency (DIA) said, "The exponential explosion of publicly available information is changing the global intelligence system…It's changing how we tool, how we organize, how we institutionalize—everything we do." This is how he explained to the authors how the people who once owned and collected secrets—professional spies—were adjusting to this world without secrets.

U.S. intelligence agencies collected open source intelligence (OSINT) on a massive scale through much of the Cold War. The U.S. embassy in Moscow maintained subscriptions to over a thousand Soviet journals and magazines, while the Foreign Broadcast Monitoring Service (FBIS) stretched across 19 regional bureaus, monitoring more than 3,500 publications in 55 languages, as well as nearly a thousand hours of television each week. Eventually FBIS was undone by the sheer volume of OSINT the internet produced. In 1993, FBIS was creating 17,000 reports a month; by 2004 that number had risen to 50,000. In 2005 FBIS was shuttered. The former director of DIA said, "Publicly available information is now probably the greatest means of intelligence that we could bring to bear. Whether you're a CEO, a commander in chief, or a military commander, if you don't have a social media component…you're going to fail."

Michael Thomas Flynn was made the director of intelligence for the task force that deployed to Afghanistan. Then he assumed the same role for the Joint Special Operations Command (JSOC), the secretive organization of elite units like the bin Laden-killing navy SEAL team. He made the commandos into "net fishermen" who eschewed individual nodes and focused instead on taking down the entire network, hitting it before it could react and reconstitute itself. JSOC got better as Flynn's methods evolved, capturing or killing dozens of terrorists in a single operation, gathering up intelligence, and then blasting off to hit another target before the night was done. The authors write, "Eventually, the shattered remnants of AQI would flee Iraq for Syria, where they would ironically later reorganize themselves as the core of ISIS."

Eventually the Peter Principle prevailed. The Peter Principle is that people rise in an organization until they reach their level of incompetence. The directorship of DIA was that level for Flynn. Flynn was forced to retire after 33 years of service. Flynn didn't take his dismissal well. He became a professional critic of the Obama administration, which brought him to the attention of Donald Trump. He used his personal Twitter account to push out messages of hate ("Fear of Muslims is RATIONAL"). He put out one wild conspiracy theory after another. His postings alleged that Obama wasn't just a secret Muslim, but a "jihadi" who "laundered" money for terrorists, and that if Hillary Clinton won the election she would help erect a one-world government to outlaw Christianity (notwithstanding that Hillary Clinton was and is a Christian). He also claimed that Hillary was involved in "Sex Crimes w Children." This resulted in someone going into a pizzeria, the supposed locus of these sex crimes with children, and shooting it up. Flynn was later charged with lying to the FBI about his contacts with a Russian official, a charge based on a recorded phone conversation. This was a singularly dumb mistake for a former intelligence officer.

Crowdsourcing

January 18, 2019

This is the sixth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." During the terrorist attack on Mumbai, ordinary people used Twitter and the other resources of the internet to track and respond to the attack. When the smoke cleared, the Mumbai attack left several legacies. It was a searing tragedy visited upon hundreds of families. It brought two nuclear powers to the brink of war. It also foreshadowed a major technological shift. Hundreds of witnesses—some on-site, some from afar—had generated a volume of information that previously would have taken months of diligent reporting to assemble. By stitching these individual accounts together, the online community had woven seemingly disparate bits of data into a cohesive whole. The authors write, "It was like watching the growing synaptic connections of a giant electric brain."

This Mumbai operation was a realization of "crowdsourcing," an idea that had been on the lips of Silicon Valley evangelists for years. It had originally been conceived as a new way to outsource programming jobs, the internet bringing people together to work collectively, more quickly and cheaply than ever before. As social media use skyrocketed, the promise of crowdsourcing extended to spaces far beyond business.

Crowdsourcing is about redistributing power: vesting the many with a degree of influence once reserved for the few. Crowdsourcing might be about raising awareness, or about money (also known as "crowdfunding"). It can kick-start a new business or throw support to people who might otherwise have remained little known. It was through crowdsourcing that Bernie Sanders became a fundraising juggernaut in the 2016 presidential election, raking in $218 million online.

For the Syrian civil war and the rise of ISIS, the internet was the "preferred arena for fundraising." Besides allowing wide geographic reach, it expands the circle of fundraisers, seemingly linking even the smallest donor with their gift on a personal level. As the "Economist" explained, this was, in fact, one of the key factors that fueled the years-long Syrian civil war. Fighters sourced needed funds by learning "to crowdfund their war by using Instagram, Facebook and YouTube. In exchange for a sense of what the war was really like, the fighters asked for donations via PayPal. In effect, they sold their war online."

In 2016 a hard-line Iraqi militia took to Instagram to brag about capturing a suspected ISIS fighter. The militia then invited its 75,000 online fans to vote on whether to kill or release him. Eager, violent comments rolled in from around the world, including many from the United States. Two hours later, a member of the militia posted a follow-up selfie; the body of the prisoner lay in a pool of blood behind him. The caption read, “Thanks for the vote.” In the words of Adam Lineman, a blogger and U.S. Army veteran, this represented a bizarre evolution in warfare: “A guy on the toilet in Omaha, Nebraska could emerge from the bathroom with the blood of some 18-year-old Syrian on his hands.”

Of course, crowdsourcing can be used for good as well as for evil.

Sharing

January 17, 2019

This is the fifth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." The authors trace the explosion of sharing to Facebook rolling out a design update that included a small text box asking the simple question: "What's on your mind?" Since then, the "status update" has allowed people to use social media to share anything and everything about their lives they want to, from musings and geotagged photos to live video and augmented-reality stickers.

The authors continue, “The result is that we are now our own worst mythological monster—not just watchers but chronic over-sharers. We post on everything from events small (your grocery list) to momentous (the birth of a child, which one of us actually live-tweeted). The exemplar of this is the “selfie,” a picture taken of yourself and shared as widely as possible online. At the current pace, the average American millennial will take around 26,000 selfies in their lifetime. Fighter pilots take selfies during combat missions. Refugees take selfies to celebrate making it to safety. In 2016, one victim of an airplane hijacking scored the ultimate millennial coup: taking a selfie with his hijacker.”

Not only are these postings revelatory of our personal experiences, but they also convey the weightiest issues of public policy. The first sitting world leader to use social media was Canadian prime minister Stephen Harper in 2008, followed by U.S. President Barack Obama. A decade later, the leaders of 178 countries had joined in, including former Iranian president Mahmoud Ahmadinejad, who had banned Twitter during a brutal crackdown but has since changed his mind on the morality, and utility, of social media. He debuted online with a friendly English-language video as he stood next to the Iranian flag. He tweeted, "Let's all love each other."

Not just world leaders, but agencies at every level and in every type of government now share their own news, from some 4,000 national embassies to the fifth-grade student council of the Upper Greenwood Lake Elementary School. When the U.S. military's Central Command expanded Operation Inherent Resolve against ISIS in 2016, Twitter users could follow along directly via the hashtag #TALKOIR.

Nothing actually disappears online. The data builds and builds and could reemerge at any moment. Law professor Jeffrey Rosen said that the social media revolution has essentially marked “the end of forgetting.”

The massive accumulation of all this information leads to revelations of its own. Perhaps the clearest example of this phenomenon is the first president to have used social media before running for office. Being both a television celebrity and a social media addict, Donald Trump entered politics with a vast digital trail behind him. The Internet Archive has a fully perusable, downloadable collection of more than a thousand hours of Trump-related video, and his Twitter account has generated around 40,000 messages. Never has a president shared so much of himself—not just words but even neuroses and particular psychological tics—for all the world to see. Trump is a man—the most powerful in the world—whose very essence has been imprinted on the internet. Knowing this, one wonders how such a man could be elected President by the Electoral College.

Tom Nichols, a professor at the U.S. Naval War College who worked with the intelligence community during the Cold War, explained the unprecedented value of this vault of information: "It's something you never want the enemy to know. And yet it's all out there…It's also a window into how the President processes information—or how he doesn't process information he doesn't like. Solid gold info." Reportedly Russian intelligence services came to the same conclusion, using Trump's Twitter account as the basis on which to build a psychological profile of him.

The World Wide Web Goes Mobile

January 16, 2019

This is the fourth post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled “Likewar: The Weaponization of Social Media.” On January 9, 2007, Apple cofounder and CEO Steve Jobs introduced the world to the iPhone. Its list of features: a touchscreen; handheld integration of movies, television, and music; a high quality camera; plus major advances in call reception and voicemail. The most radical innovation was a speedy, next-generation browser that could shrink and reshuffle websites, making the entire internet mobile-friendly.

The next year Apple officially opened its App Store. Now anything was possible as long as it was channeled through a central marketplace. Developers eagerly launched their own internet-enabled games and utilities, built atop the iPhone's sturdy hardware (there are about 2.5 million such apps today). With the launch of Google's Android operating system and the competing Google Play Store that same year, smartphones ceased to be the niche of tech enthusiasts, and the underlying business of the internet soon changed.

There were some 2 billion mobile broadband subscriptions worldwide by 2013. By 2020, that number is expected to reach 8 billion. In the United States, where three-quarters of Americans own a smartphone, these devices have long since replaced televisions as the most commonly used piece of technology.

The following is taken directly from the text: "The smartphone combined with social media to clear the last major hurdle in the race started thousands of years ago. Previously, even if internet services worked perfectly, users faced a choice. They could be in real life but away from the internet. Or they could tend to their digital lives in quiet isolation, with only a computer screen to keep them company. Now, with an internet-capable device in their pocket, it became possible for people to maintain both identities simultaneously. Any thought spoken aloud could be just as easily shared in a quick post. A snapshot of a breathtaking sunset or plate of food (especially food) could fly thousands of miles away before darkness had fallen or the meal was over. With the advent of mobile livestreaming, online and offline observers could watch the same event unfold in parallel."

Twitter was one of the earliest beneficiaries of the smartphone. Silicon Valley veterans who were hardcore free speech advocates founded the company in 2006. They envisioned a platform with millions of public voices spinning the story of their lives in 140-character bursts. This reflected the new sense that it was the network, rather than the content on it, that mattered.

Twitter grew along with smartphone use. In 2007, its users were sending 5,000 tweets per day. By 2010, that number was up to 50 million; by 2015, 500 million. Better web technology offered users the chance to embed hyperlinks, images, and video in their updates.

The most prominent Twitter user is Donald Trump, who likened it to "owning your own newspaper." What he liked most about it was that it featured one perfect voice: his own. It appears to be his primary means of communication. It also highlights the risks inherent in using Twitter impulsively.

An Early Example of the Weaponization of the Internet

January 15, 2019

This is the third post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." In early 1994 a force of 4,000 disenfranchised workers and farmers rose up in Mexico's poor southern state of Chiapas. They called themselves the Zapatista National Liberation Army (EZLN). They occupied a few towns and vowed to march on Mexico City. This did not impress the government. Twelve thousand soldiers were deployed, backed by tanks and air strikes, in a swift and merciless offensive. The EZLN quickly retreated to the jungle. The rebellion teetered on the brink of destruction. But twelve days after it began, the government declared a sudden halt to combat. This was a real head-scratcher, particularly for students of war.

But there was nothing conventional about this conflict. Members of the EZLN had been talking online. They spread their manifesto to like-minded leftists in other countries, declared solidarity with international labor movements protesting free trade (their revolution had begun the day the North American Free Trade Agreement (NAFTA) went into effect), established contact with organizations like the Red Cross, and urged every journalist they could find to come and observe the cruelty of the Mexican military firsthand. They turned en masse to the new and largely untested power of the internet.

It worked. Their revolution was joined in solidarity by tens of thousands of liberal activists in more than 130 countries, organizing in 15 different languages. Global pressure to end the small war in Chiapas built quickly on the Mexican government. And it seemed to come from every direction, all at once. Mexico relented.

But this new offensive did not stop after the shooting had ceased. The war became a bloodless political struggle, sustained by the support of a global network of enthusiasts and admirers, most of whom had never heard of Chiapas before the call to action went out. In the years that followed, this network would push and cajole the Mexican government into reforms the local fighters had been unable to obtain on their own. The Mexican foreign minister, Jose Angel Gurria, lamented in 1995, "The shots lasted ten days, but ever since the war has been a war of ink, of written word, a war on the internet."

There were signs everywhere that the internet's relentless pace of innovation was changing the social and political fabric of the real world. The webcam was invented; eBay and Amazon launched; online dating was born; even the first internet-abetted scandals and crimes appeared, one of which resulted in a presidential impeachment stemming from a rumor first reported online. In 1996, Manuel Castells, one of the world's foremost sociologists, made a bold prediction: "The internet's integration of print, radio, and audiovisual modalities into a single system promises an impact on society comparable to that of the alphabet."

The authors note that the most forward-thinking of these internet visionaries was not an academic. In 1999, musician David Bowie sat for an interview with the BBC. Instead of promoting his albums, he waxed philosophical about technology's future. He explained that the internet would not just bring people together; it would also tear them apart. When the interviewer questioned his certainty about the internet's powers, Bowie said that he didn't think we had even seen the tip of the iceberg. "I think the potential of what the internet is going to do to society, both good and bad, is unimaginable. I think we're actually on the cusp of something exhilarating and terrifying…It's going to crush our ideas of what mediums are all about."

Could Sputnik be Responsible for the Internet?

January 14, 2019

This is the second post in a series of posts on a book by P.W. Singer and Emerson T. Brooking titled "Likewar: The Weaponization of Social Media." Probably most readers are wondering what Sputnik is, or was. Sputnik was the first space satellite to orbit the earth. It was launched by the Soviet Union. The United States was desperately trying to launch such a satellite, but had yet to do so. A young HM appeared as part of a team of elementary school presenters on educational TV that made a presentation on Sputnik and on the plans of the United States to launch such a satellite. The young version of HM explained the plans for the rocket to launch a satellite. Unfortunately, the model briefed by HM failed repeatedly, and a different rocket was needed for the successful launch.

The successful launch of Sputnik created panic in the United States about how far we were behind the Russians. Money was poured into scientific and engineering research and into the education of young scientists and engineers. HM personally benefited from this generosity as it furthered his undergraduate and graduate education.

Licklider and Taylor, the authors of the seminal paper "The Computer as a Communication Device," were employees of the Pentagon's Defense Advanced Research Projects Agency (DARPA). An internetted communications system was important for the U.S. military because it would banish its greatest nightmare: the prospect of the Soviet Union being able to decapitate U.S. command and control with a single nuclear strike. But the selling point for the scientists working for DARPA was that linking up computers would be a useful way to share what was at the time incredibly rare and costly computer time. A network could spread the load and make it easier on everyone. So a project was funded to transform the Intergalactic Computer Network into reality. It was called ARPANET.

It is interesting to speculate what would have been developed in the absence of the Soviet threat. It is difficult to think that this would have been done by private industry.
Perhaps it is a poor commentary on homo sapiens, but it seems that many, if not most, technological advances have been developed primarily for warfare and defense.

It is also ironic to think that technology developed to thwart the Soviet Union would be used by Russia to interfere in American elections to ensure that their chosen candidate for President was elected.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

LikeWar: The Weaponization of Social Media

January 13, 2019

The title of this post is identical to the title of a book by P.W. Singer and Emerson T. Brooking. Many of the immediately following posts will be based on or motivated by this book. The authors have been both exhaustive and creative in their offering. Since it is exhaustive only a sampling of the many important points can be included. Emphasis will be placed on the creative parts.

The very concept that led to the development of the internet was a paper written by two psychologists, J.C.R. Licklider and Robert W. Taylor, titled "The Computer as a Communication Device." Back in those days computers were large mainframes used for data processing. Licklider wrote another paper titled "Man-Computer Symbiosis." The idea here was that both computers and humans could benefit from the interaction between the two, a true symbiotic interaction. Unfortunately, this concept has been largely overlooked. Concentration was on replacing humans, who were regarded as slow and error prone, with computers. Today the fear is of the jobs lost to artificial intelligence. Attention needs to be focused on the interaction between humans and computers as advocated by Licklider.

But the notion of the computer as a communication device did catch on. More will be written on that in the following post.

The authors also bring Clausewitz into the discussion. Clausewitz was a military strategist famous for his saying that war is politics pursued by other means. More specifically he wrote that war is "the continuation of political intercourse with the addition of other means." The two are intertwined, he explained. "War in itself does not suspend political intercourse or change it into something entirely different. In essentials that intercourse continues, irrespective of the means it employs." War is political. And politics will always be at the heart of human conflict, the two inherently mixed. "The main lines along which military events progress, and to which they are restricted, are political lines that continue throughout the war into the subsequent peace."

If only we could learn what Clausewitz would think of today's world. Nuclear warfare was never realistic. Mutual Assured Destruction, with the apt acronym MAD, was never feasible. Conflicts need to be resolved, not the disagreeing parties dissolved. Today's technology allows for the disruption of financial systems, power grids, and the very foundations of modern society. Would Clausewitz think that conventional warfare has become obsolete? There might be small skirmishes, but would standing militaries go all out to destroy each other? Having a technological interface rather than face-to-face human interactions seems to allow for more hostile and disruptive interactions. Have politics become weaponized? Is that what the title of Singer and Brooking's book implies?

The authors write that their research has taken them around the world and into the infinite reaches of the internet. Yet they continually found themselves circling back to five core principles, which form the foundation of the book.
First, the internet has left adolescence.

Second, the internet has become a battlefield.

Third, this battlefield changes how conflicts are fought.

Fourth, this battle changes what “war” means.

Fifth, and finally, we’re all part of this war.

Here are the final two paragraphs of the first chapter.

“The modern internet is not just a network but an ecosystem of nearly 4 billion souls, each with their own thoughts and aspirations, each capable of imprinting a tiny piece of themselves on the vast digital commons. They are the targets not of a single information war but of thousands and potentially millions of them. Those who can manipulate this swirling tide, to steer its direction and flow, can accomplish incredible good. They can free people, expose crimes, save lives, and seed far-reaching reforms. But they can also accomplish astonishing evil. They can foment violence, stoke hate, sow falsehoods, incite wars, and even erode the pillar of democracy itself.

Which side succeeds depends, in large part, on how much the rest of us learn to recognize this new warfare for what it is. Our goal in “LikeWar” is to explain exactly what’s going on and to prepare us all for what comes next.”

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Scale of Russian Operation Detailed

December 23, 2018

The title of this post is identical to the title of an article by Craig Timberg and Tony Romm in the 17 Dec '18 issue of the Washington Post. Subtitles are: EVERY MAJOR SOCIAL MEDIA PLATFORM USED and Report finds Trump support before and after election. The report is the first to analyze the millions of posts provided by major technology firms to the Senate Intelligence Committee.

The research was done by Oxford University's Computational Propaganda Project and Graphika, a network analysis firm. It provides new details on how Russians worked at the Internet Research Agency (IRA), which U.S. officials have charged with criminal offenses for interfering in the 2016 campaign. The IRA divided Americans into key interest groups for targeted messaging. The report found that these efforts shifted over time, peaking at key political moments, such as presidential debates or party conventions. This report substantiates facts presented in prior healthy memory blog posts.

The data sets used by the researchers were provided by Facebook, Twitter, and Google and covered several years up to mid-2017, when the social media companies cracked down on the known Russian accounts. The report also analyzed data separately provided to House Intelligence Committee members.

The report says, “What is clear is that all of the messaging clearly sought to benefit the Republican Party and specifically Donald Trump. Trump is mentioned most in campaigns targeting conservatives and right-wing voters, where the messaging encouraged these groups to support his campaign. The main groups that could challenge Trump were then provided messaging that sought to confuse, distract and ultimately discourage members from voting.”

The report provides the latest evidence that Russian agents sought to help Trump win the White House. Democrats and Republicans on the panel previously studied the U.S. intelligence community’s 2017 finding that Moscow aimed to assist Trump, and in July, said the investigators had come to the correct conclusion. Nevertheless, some Republicans on Capitol Hill continue to doubt the nature of Russia’s interference in the election.

The Russians aimed energy at activating conservatives on issues such as gun rights and immigration, while sapping the political clout of left-leaning African American voters by undermining their faith in elections and spreading misleading information about how to vote. Many other groups, such as Latinos, Muslims, Christians, and gay men and women, received at least some attention from Russians operating thousands of social media accounts.

The report offered some of the first detailed analyses of the role played by YouTube and Instagram in the Russian campaign, as well as anecdotes about how Russians used other social media platforms—Google+, Tumblr and Pinterest—that had received relatively little scrutiny. They also used email accounts from Yahoo, Microsoft’s Hotmail service, and Google’s Gmail.

While reliant on data provided by technology companies, the authors also highlighted the companies’ “belated and uncoordinated response” to the disinformation campaign and, once it was discovered, their failure to share more with investigators. The authors urged that in the future the companies provide data in “meaningful and constructive” ways.

Facebook provided the Senate with copies of posts from 81 Facebook pages and information on 76 accounts used to purchase ads, but it did not share posts from other accounts run by the IRA. Twitter has made it challenging for outside researchers to collect and analyze data on its platform through its public feed.

Google submitted information in an especially difficult way for researchers to handle, providing content such as YouTube videos but not the related data that would have allowed a full analysis. The researchers wrote that the YouTube information was so hard to study that they instead tracked the links to its videos from other sites in hopes of better understanding YouTube’s role in the Russian effort.

The report expressed concern about the overall threat social media poses to political discourse within and among nations, warning that companies once viewed as tools for liberation in the Arab world and elsewhere are now a threat to democracy.

The report also said, “Social media have gone from being the natural infrastructure for sharing collective grievances and coordinating civic engagement to being a computational tool for social control, manipulated by canny political consultants and available to politicians in democracies and dictatorships alike.”

The report traces the origins of Russian online influence operations to Russian domestic politics in 2009 and says that ambitions shifted to include U.S. politics as early as 2013. The efforts to manipulate Americans grew sharply in 2014 and every year after, as teams of operatives spread their work across more platforms and accounts to target larger swaths of U.S. voters by geography, political interests, race, religion and other factors.

The report found that Facebook was particularly effective at targeting conservatives and African Americans. More than 99% of all engagements—meaning likes, shares and other reactions—came from 20 Facebook pages controlled by the IRA, including “Being Patriotic,” “Heart of Texas,” “Blacktivist” and “Army of Jesus.”

Given that Trump lost the popular vote, it is difficult to believe that he could have carried the Electoral College without this impressive support from the Russians. One can also envisage Ronald Reagan thrashing about in his grave knowing that the Republican presidential candidate was heavily indebted to Russia and that so many Republicans still support Trump.
© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Memory Special: Is Technology Making Your Memory Worse?

November 28, 2018

The title of this post is identical to the title of a Feature article by Helen Thomson in the 27 Oct 2018 issue of the “New Scientist.” Previous healthymemory blog posts have implied that the answer to this question is yes. Thomson writes, “Outsourcing memories, for instance to pad and paper, is nothing new, but it has become easier than ever to do using external devices, leading some to wonder whether our memories are suffering as a result.”

Taking pictures has become more of an obsession given the capabilities of smartphones to take and store high-quality photos. You might think that taking pictures and sharing stories helps you to preserve memories of events, but the opposite is true. When Diana Tamir and her colleagues at Princeton University sent people out on tours, those encouraged to take pictures actually had a poorer memory of the tour at a later date. Prof. Tamir said, “Creating a hard copy of an experience through media leaves only a diminished copy in our own heads.” People who rely on a satellite navigation system to get around are also worse at working out where they have been than those who use maps.

The expectation of information being at our fingertips seems to have an effect. When we think of something that can be accessed later, regardless of whether we will be tested on it, we have lower rates of recall of the information itself and enhanced recall instead for where to access it. Sam Gilbert of University College London says, “These kinds of studies suggest that technology is changing our memories. We increasingly don’t need to remember content, but instead, where to find it.”

Unfortunately, relying too heavily on devices can mess with our appreciation of how good our memory actually is. We are constantly making judgements about whether something is worth storing in mind. Will I remember this tomorrow? Does it need to be written down? Should I set a reminder? Meta-memory refers to our ability to understand and use our memory. Technology seems to screw it up.

People who can access the internet to help them answer general knowledge questions, such as “How does a zip work?”, overestimate how much information they think they have remembered, as well as their knowledge of unrelated topics after the test, compared with people who answered questions without going online. You lose touch with what came from you and what came from the machine. This exacerbates the part of the Dunning-Kruger phenomenon in which we think we know much more than we actually know. Gilbert says, “These are subtle biases that may not matter too much if you continue to have access to external resources. But if those resources disappear—in an exam, in an emergency, in a technological catastrophe—we may underestimate how much we would struggle without them. Having an accurate insight into how good your memory actually is, is just as important as having a good memory in the first place.”

Hypertext

October 25, 2018

HM was disappointed in Wolf’s “READER COME HOME” as hypertext was not addressed except in passing in a note to a journal article titled, “Why Don’t We Read Hypertext Novels?” HM sees enormous potential in hypertext. In scientific reading, links can be provided to the references and notes in the text. Unfortunately, the financing of academic and professional texts and journals makes the seamless operation of this capability difficult. Professional organizations and publishers need to recognize that their primary job, and this is certainly true of professional organizations, is to disseminate information about their disciplines. There is a demand here for hypertext and for moving freely among different texts. It is hoped that this demand will eventually be met once means of remuneration and compensation are identified.

HM would be interested to read “Why Don’t We Read Hypertext Novels?” One reason might be that there are so few, if any, of them. But there is a need here, unless authors feel compelled to shove everything they’ve written down the throats of their readers. There could be links providing more information on characters and background. There could be digressive passages that a reader might want to have the option of reading or skipping. If passages are not interesting to certain readers, they either skim them or give up on the book.

From an author’s perspective, hypertext offers the option to expand views and to write one document for different levels of readers. A single text could serve beginning, intermediate, and advanced learners, providing a coherent path through the material.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Concluding Letters

October 24, 2018

What struck HM about the concluding letters (chapters in Wolf’s parlance) in “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf was that they were not unique to digital media. They applied generally to reading and education.

One letter was titled “Between Two and Five Years: When Language and Thought Take Flight Together.” The most important point in this chapter is to read to your children. This goes beyond reading to the building of intimacy and rapport with your children. And it will fill them with the wonder of books, be that in print or digital. Some of HM’s favorite childhood memories are of his mother reading to him. She read many items, some of which were “Peter Pan,” “Tom Sawyer,” and sports books by Clair Bee featuring Chip Hilton. It was a wonder that these abstract characters on a page yielded such interesting and entertaining stories, stories that stimulated the mind to create images of them. So when reading was the subject at school, HM was a highly motivated student.

Another letter is titled “The Science and Poetry in Learning (and Teaching) to Read.” True, there are necessary adaptations for digital material, some yet to be identified, and these are important subjects. Moreover, science is involved in addressing the questions raised by digital media. Relevant section titles are “Investment in Early, Ongoing Assessment of Students,” “Investment in Our Teachers,” and “Investment in the Teaching of Reading Across the School Years.”

Another letter is titled “Building a Biliterate Brain.” “Biliterate” here refers to being literate in both conventional and digital media. But this is what the entire text addresses, and it should not be thought that everything is known about conventional media. True, the ignorance is greater on the digital side, and the genius lies in combining the two so that there is a synergy between them. Research is needed. There needs to be professional training and development, and it is important that there be equal access regardless of the financial resources of the schools.

The final letter is titled “Reader, Come Home,” which again extols the virtues of reading and thinking.

The Raising of Children in a Digital Age

October 23, 2018

The title of this post is identical to the title of a letter in “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf. Wolf refers to her chapters as letters. Wolf writes: “The tough questions raised in the previous letters come to roost like chickens on a fence in the raising of our children. They require of us a developmental version of the issues summarized to this point: Will time-consuming, cognitively demanding deep-reading processes atrophy or be gradually lost within a culture whose principal mediums advantage speed, immediacy, high levels of stimulation, multitasking and large amounts of information?”

She continues, “Loss, however, in this question implies the existence of a well-formed, fully elaborated circuitry. The reality is that each new reader—that is, each child—must build a wholly new reading circuit. Our children can form a very simple circuit for learning to read and acquire a basic level of decoding, or they can go on to develop highly elaborated reading circuits that add more and more sophisticated intellectual processes over time.”

These not-yet-formed reading circuits present unique challenges and a complex set of questions: First, will the early-developing cognitive components of the reading circuit be altered by digital media before, while, and after children learn to read? What will happen to the development of their attention, memory, and background knowledge—processes known to be affected in adults by multitasking, rapidity, and distraction? Second, if they are affected, will such changes alter the makeup of the resulting expert reading circuit and/or the motivation to form and sustain deep reading capacities? Finally, what can we do to address the potential negative effects of varied digital media on reading without losing their immensely positive contributions to children and to society?

The digital world grabs children. A 2015 RAND study reported that the average amount of time spent by three-to-five-year-old children on digital devices was four hours a day, with 75% of children from zero to eight years old having access to digital devices. This figure is up from 52% only two years earlier. The use of digital devices increased by 117% in just one year. Our evolutionary reflex, the novelty bias, pulls our attention immediately toward anything new. The neuroscientist Daniel Levitin says, “Humans will work just as hard to obtain a novel experience as we will to get a meal or a mate…In multitasking, we unknowingly enter an addiction loop as the brain’s novelty centers become rewarded for processing tiny new stimuli, to the detriment of our prefrontal cortex, which wants to stay on task and gain the rewards of sustained effort and attention. We need to train ourselves to go for the long reward and forgo the short one.”

Levitin claims that children can become so accustomed to a continuous stream of competitors for their attention that their brains are, for all practical purposes, being bathed in hormones such as cortisol and adrenaline, the hormones more commonly associated with fight, flight, and stress. This is happening to children of three or four, and sometimes even two and younger—they are at first passively receiving, and then, ever so gradually, requiring, the levels of stimulation of much older children on a regular basis.

The Stanford University neuroscientist Russell Poldrack and his team have found that some digitally raised youth can multitask if they have been trained sufficiently on one of the tasks. Unfortunately, not enough information is reported to evaluate this claim, other than to leave it open and look to further research to see how these skills can develop.

Wolf raises legitimate concerns. Much research is needed. But the hope is that damaging effects can be eliminated or minimized. Perhaps even certain types of training with certain types of individuals can be done to minimize the costs of multitasking.

Digital Media and the Loss of Quality Information

October 22, 2018

To put matters in perspective before proceeding, it is useful to remember that Socrates saw dangers in the written word. He believed that knowledge needed to be resident in the brain and not on physical matter. He thought that the written word would result in going to hell in a handbasket (to be clear, he did not say this, but he did see it as a definite potential danger). So this new digital world has much to offer, but also has dangers, and we need to avoid these dangers.

Frank Schirrmacher placed the origins of the conflict within our species’ need to be instantly aware of every new stimulus, what some call our novelty bias. Hypervigilance toward the environment has definite survival value. It is virtually certain that this reflex saved many of our prehistoric ancestors from threats signaled by the barely visible tracks of deadly tigers or the soft susurrus of venomous snakes in the underbrush. Unfortunately, experts in “persuasion design” principles know very well how to exploit these tendencies.

Wolf writes, “As Schirrmacher described it, the problem is that contemporary environments bombard us constantly with new sensory stimuli, as we split our attention across multiple digital devices most of our days and, as often as not, nights shortened by our attention to them. A recent study by Time, Inc. of the media habits of people in their twenties indicated that they switched media sources twenty-seven times an hour. On average they now check their cell phones between 150 and 190 times a day. As a society we’re continuously distracted by our environment, and our very wiring aids and abets this. We do not see or hear with the same quality of attention, because we see and hear too much, become habituated, and then seek still more.”

Enter “The Distracted Mind” into the search block of the healthy memory blog to find many more relevant posts on this topic. There are clearly two distinct components to this problem: Staying plugged in and the volume and quality of information.

Unfortunately, Wolf does not directly address the topic of being plugged in, but this problem needs to be addressed first before significant progress can be made on the second. Being constantly plugged in precludes one from making any progress on this problem. There are simply too many disruptions and distractions. So one either unplugs cold turkey and remains that way, plugging in only to communicate, or strictly limits the time one is plugged in. Clearly there are social implications here, so one needs to explain to one’s friends and acquaintances why one is doing this and try to persuade them to join you for their own benefit.

Next one can deal with the volume of communications. Wolf notes that the average amount of communication consumed by each of us is about 34 gigabytes a day. Moreover, this is characterized by one spasmodic burst after another. Barack Obama has said he is worried that for many of our young, information has become “a distraction, a diversion, a form of entertainment, rather than a tool of empowerment, rather than a means of emancipation.”

The literature professor Mark Edmundson writes, “Swimming in entertainment, my students have been sealed off from the chance to call everything they’ve valued into question, to look at new ways of life…For them, education is knowing and lordly spectatorship, never the Socratic dialogue about how one ought to live one’s life.”

Wolf writes, “What do we do with the cognitive overload from multiple gigabytes of information from multiple devices? First, we simplify. Second, we process the information as rapidly as possible; more precisely, we read more in briefer bursts. Third, we triage. We stealthily begin the insidious trade-off between our need to know and our need to save and gain time. Sometimes we outsource our intelligence to the information outlets that offer the fastest, simplest, most digestible distillations of information we no longer want to think about ourselves.”

This post is based in part on “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf. She does discuss how she managed to discipline herself and break these bad habits, although she doesn’t mention the importance of the first necessary act, unplugging oneself.

Then one needs to decide that technology is a tool one should use to benefit oneself rather than letting technology drive one’s life. Realize that we humans have finite attentional resources and prioritize what sources and types of technology should be used to pursue specific goals. These will change over time, as will goals, but one should always have goals, perhaps as simple as learning something about x. If that is rewarding, one can pursue it further, move off to related areas, or to completely new areas. The objective should always be to use technology, not be used by technology, for personal fulfillment.

This post will close with a quote from Susan Sontag:
“To be a moral human being is to pay, be obliged to pay, certain kinds of attention…The nature of moral judgments depends on our capacity for paying attention, a capacity that has its limits, but whose limits can be stretched.”

And one from Hermann Hesse’s essay “The Magic of the Book”:
“Among the many worlds which man did not receive as a gift of nature, but which he created with his own spirit, the world of books is the greatest. Every child, scrawling his first letters on his slate and attempting to read for the first time, in so doing, enters an artificial and most complicated world: to know the laws and rules of this world completely and to practice them perfectly, no single human life is long enough. Without words, without writing, and without books there would be no history, there could be no concept of humanity.”

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Is Deep Reading Endangered by Technology?

October 21, 2018

This post is based on “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf. MIT scholar Sherry Turkle described a study by Sara Konrath and her research group at Stanford University that showed a 40% decline in empathy in young people over the last two decades. The most precipitous decline occurred in the last ten years. Turkle attributes the loss of empathy largely to their inability to navigate the online world without losing track of their real-time, face-to-face relationships. Turkle thinks that our technologies place us at a remove, which changes not only who we are as individuals but also who we are with one another. Wolf writes, “The act of taking on the perspective and feelings of others is one of the most profound, insufficiently heralded contributions of the deep-reading process.”

Barack Obama described novelist Marilynne Robinson as a “specialist in empathy.” Obama visited Robinson during his presidency. During their wide-ranging discussion, Robinson lamented what she saw as a political drift among many people in the United States toward seeing those different from themselves as the “sinister other.” She characterized this as “as dangerous a development as there could be in terms of whether we continue to be a democracy.” Whether writing about humanism’s decline or fear’s capacity to diminish the very values its proponents purport to defend, Ms. Robinson conceptualized the power of books to help us understand the perspective of others as an antidote to the fears and prejudices many people harbor, often unknowingly. Within this context Obama told Robinson that the most important things he had learned about being a citizen came from novels. “It has to do with empathy. It has to do with being comfortable with the notion that the world is complicated and full of grays but there’s still truth there to be found, and that you have to strive for that and work for that. And that it’s possible to connect with someone else even though they’re very different from you.”

It is insightful to recognize that the polarization being experienced is due in large part to missing empathy, which to some degree, perhaps a large degree, is due to digital screen technology. Although technology has been blamed for much, part of the problem here is not just the display mode of the information, but also the type of content. Quality fiction builds empathy. Even technical reading can build empathy provided the content can be related to the feelings and thinking of others. And some social research does summarize the feelings and thinking of others.

Wolf writes, “There are many things that would be lost if we slowly lose the cognitive patience to immerse ourselves in the worlds created by books and the lives and feelings of the “friends” who inhabit them. And although it is a wonderful thing that movies and film can do some of this, too, there is a difference in the quality of immersion that is made possible by entering the articulated thoughts of others. What will happen to young readers who never meet and begin to understand the thought and feelings of someone totally different? What will happen to older readers who begin to lose touch with that feeling of empathy for people outside their ken or kin? It is a formula for unwitting ignorance, fear and misunderstanding, that can lead to the belligerent forms of intolerance that are the opposite of America’s original goals for its citizens of many cultures.”

Deep reading involves more than empathy. Wolf writes, “The consistent strengthening of the connections among our analogical, inferential, empathic, and background knowledge processes generalizes well beyond reading. When we learn to connect these processes over and over in our reading, it becomes easier to apply them to our own lives, teasing apart our motives and intentions and understanding with ever greater perspicacity and, perhaps, wisdom, why others think and feel the way they do. Not only is it the basis for the compassionate side of empathy, but it also contributes to strategic thinking.

Just as Obama noted, however, these strengthened processes do not come without work and practice, nor do they remain static if unused. From start to finish, the basic neurological principle—“Use it or lose it”—is true for each deep-reading process. More important still, this principle holds for the whole plastic reading-brain circuit. Only if we continuously work to develop and use our complex analogical and inferential skills will the neural networks underlying them sustain our capacity to be thoughtful, critical analysts of knowledge, rather than passive consumers of information.”

Mark Edmundson asks in his book “Why Read?”, “What exactly is critical thinking?” He explains that it includes the power to examine and potentially debunk personal beliefs and convictions. Then he asks, “What good is this power of critical thought if you do not yourself believe something and are not open to having this belief modified? What’s called critical thought generally takes place from no set position at all.”

Edmundson articulates two connected, insufficiently discussed threats to critical thinking. The first threat comes when any powerful framework for understanding our world (such as a political or religious view) becomes so impenetrable to change and so rigidly adhered to that it obfuscates any divergent type of thought, even when the latter is evidence-based or morally based.

The second effect that Edmundson observes is the total absence of any developed personal belief system in many of our young people, who either do not know enough about past systems of thought (for example, Freud, Darwin, or Chomsky) or who are too impatient to examine and learn from them. As a result, their ability to learn the kind of critical thinking necessary for deeper understanding can become stunted. Intellectual rudderlessness and adherence to a way of thought that allows no question are threats to critical thinking in us all.

It is also important to be aware that deep reading is a generative process. Here is a quote from Jonah Lehrer: “An insight is a fleeting glimpse of the brain’s huge store of unknown knowledge. The cortex is sharing one of its secrets.”

Wolf writes, “Insight is the culmination of the multiple modes of exploration we have brought to bear on what we have read thus far: the information harvested from the text; the connections to our best thoughts and feelings; the critical conclusions gained; and then the uncharted leap into a cognitive space where we may upon occasion glimpse whole new thoughts. The formation of the reading-brain circuit is a unique epigenetic achievement in the intellectual history of our species. Within this circuit, deep reading significantly changes what we perceive, what we feel, and what we know and in so doing alters, informs, and elaborates the circuit itself.”

Based on brain imaging and recording, neuroscience informs us that creativity is not localized to any single region of the brain. There is no neat map of what occurs when we have our most creative bursts of thinking. Instead, it appears that we activate multiple regions of the brain, particularly the prefrontal cortex and the anterior cingulate gyrus.

Print vs. Screen or Digital Media

October 20, 2018

What is most bothersome about “READER COME HOME: The Reading Brain in the Digital World” by Maryanne Wolf is the way she contrasts print media with the new screen or digital media. Readers might mistakenly think that the solution to this problem is to use print media and eschew screen or digital media. The reality is that in the future this might be impossible, as conventional print media might be found only in museums or special libraries. But what is key to understand is that unfortunate habits tend to develop when using screen/digital media. Moreover, these unfortunate habits are the result of a feeling of needing to be plugged in with digital media. It is these habits, namely skimming, superficial processing, and multitasking, that are the true culprits here.

These same practices can be found using print matter, and they are not always bad. When reading the newspaper, in either print or digital form, HM’s attention is dictated by his interests. Initially he is skimming, but when he finds something interesting he focuses his attention and reads deeply. If it turns out that he already knows the material, or that the material is a bunch of crap, he resumes skimming. This is the reason he does not like televised news, since it includes material he would like to ignore or skip over. HM finds it annoying that the phrase “Breaking News” is frequently heard. Frankly, he would prefer “Already considered and processed news.” Unless there is a natural catastrophe or some imminent danger, there is no reason the news can’t wait for further context under which it can be processed.

Frankly, HM would never have been able to complete his Ph.D. had he not developed this ability. His work is interdisciplinary, so he must read in different areas. He skims until he finds relevant material. Then he focuses and quizzes himself to assure he is acquiring the relevant material. Sometimes this might be a matter of bookmarking it with the goal of returning when there would be sufficient time to process the material. Even if the topic is one with which he is familiar, he will assess whether there is anything new that requires his attention. There is simply too much material and too little time. Strategies need to be employed. The risk from current technology is that the technology is driving the process rather than the individual using the technology effectively.

We are not victims of technology unless we passively allow ourselves to become victims of technology. Students need to be taught how to use the technology and what practices need to be abandoned. One of these is being continually plugged in, but there are also social issues that need to be addressed.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

READER COME HOME

October 18, 2018

The title of this post is the same as the title of an important book by Maryanne Wolf. The subtitle is “The Reading Brain in the Digital World.” Any new technology offers benefits, but it may also contain dangers. There definitely are benefits from moving the printed world into the digital world. But there are also dangers, some of which are already quite evident. One danger is the feeling that one always needs to be plugged in. There is even an acronym for this: FOMO (Fear of Missing Out). But there are costs to being continually plugged in. One is superficial processing. One of the best examples of this is that of the plugged-in woman who was asked what she thought of OBAMACARE. She said that she thought it was terrible and was definitely against it. However, when she was asked what she thought of the Affordable Care Act, she said that she liked it and was definitely in favor of it. Of course, the two are the same.

This lady was exhibiting an effect that has a name, the Dunning-Kruger effect. Practically all of us think we know more than we do. Ironically, people who are quite knowledgeable about a topic are aware of their limitations and frequently qualify their responses. So, in brief, the less you know the more you think you know, but the more you know, the less you think you know. Moreover, this effect is greatly amplified in the digital age.

There is a distinction between what is available in our memories and what is accessible in our memories. Very often we are unable to remember something, but we do know that it is present in memory. So this information is available, but not accessible. There is an analogous effect in the cyber world. We can find information on the internet, but we need to look it up. It is not available in our personal memory. Unfortunately, being able to look something up on the internet is not identical to having the information available in our personal memories so that we can extemporaneously talk about the topic. We daily encounter the problem of whether we need to remember some information or whether it would be sufficient to look it up. We do not truly understand something until it is available in our personal memories. The engineer Ray Kurzweil is planning on extending his life long enough so that he can be uploaded to a computer, thus achieving a singularity with technology. Although he is a brilliant engineer, he is woefully ignorant of psychology and neuroscience. Digital and neural codes differ and the processing systems differ, so the conversion is impossible. However, even if it were possible, understanding requires deep cognitive and biological processing. True understanding does not come cheaply.

Technology can be misused and it can be very tempting to misuse technology. However, there are serious costs. Maryanne Wolf discusses the pitfalls and the benefits of technology. It should be understood that we are not victims of technology. Rather we need to use technology not only so that we are not victims, but also so we use technology synergistically.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

An Ambiguous State of Affairs

September 18, 2018

The title of this post is identical to the title of a section of a chapter in an insightful book by Antonio Damasio titled “The Strange Order of Things: Life, Feeling, and the Making of Cultures.” The title of this chapter is “On the Human Condition Now.”

Damasio writes, “This could be the best of times to be alive because we are awash in spectacular scientific discoveries and in technical brilliance that make life even more comfortable and convenient; because the amount of available knowledge and the ease of access to that knowledge are at an all-time high and so is human interconnectedness at a planetary scale, as measured by actual travel, electronic communication, and international agreements for all sorts of cooperation, in science, the arts, and trade; because the ability to diagnose, manage, and even cure diseases continues to expand and longevity continues to extend so remarkably that human beings born after the year 2000 are likely to live, hopefully well, to an average of at least a hundred. Soon we will be driven around by robotic cars, saving us effort and lives because, at some point, we should have few fatal accidents.”

Unfortunately, for the past four or five decades, Damasio notes, the general public of the most advanced societies has accepted with little or no resistance a gradually deformed treatment of news and public affairs designed to fit the entertainment model of commercial television and radio. Damasio writes, “Although a viable society must care for the way its governance promotes the welfare of citizens, the notion that one should pause for some minutes of each day and make an effort to learn about the difficulties and successes of governments and citizenry is not just old-fashioned; it has nearly vanished. As for the notion that we should learn about such matters seriously and with respect, it is by now an alien concept. Radio and television transform every governance issue into “a story,” and it is the “form” and entertainment value of the story that count, more than its factual content.”

The internet makes large amounts of information readily available to the public. It also provides means for deliberation and discussion. Unfortunately, it also provides for the generation of false news, the creation of alternative realities, and the building of conspiracy theories. This blog has repeatedly invoked Daniel Kahneman’s Two Process View of cognition to assist in understanding the problem.
System 1 is named Intuition. System 1 is very fast, employs parallel processing, and appears to be automatic and effortless. System 1 processes are so fast that they are executed, for the most part, outside conscious awareness. Emotions and feelings are also part of System 1. Learning is associative and slow. For something to become a System 1 process requires much repetition and practice. Activities such as walking, driving, and conversation are primarily System 1 processes. They occur rapidly and with little apparent effort. We would not have survived if we could not do these types of processes rapidly. But this speed of processing is purchased at a cost, the possibility of errors, biases, and illusions.
System 2 is named Reasoning. It is controlled processing that is slow, serial, and effortful. It is also flexible. This is what we commonly think of as conscious thought. One of the roles of System 2 is to monitor System 1 for processing errors, but System 2 is slow and System 1 is fast, so errors do slip through.

To achieve coherent understanding, System 2 processing is required. However, System 1 processing is common on the internet. The content is primarily emotional. Facts are irrelevant and the concept of objective truth is becoming irrelevant. The Russians were able to use the internet to enable their choice for US President, Trump, to win.

Because System 2 processing is more effortful, no matter how smart and well informed we are, we naturally tend to resist changing our beliefs, in spite of the availability of contrary evidence. Research done at Damasio’s institute shows that the resistance to change is associated with a conflicting relationship between brain systems related to emotivity and reason. The resistance to change is associated with the engagement of systems responsible for producing anger. We construct some sort of natural refuge to defend ourselves against contradictory information.

Damasio writes, “The new world of communication is a blessing for the citizens of the world trained to think critically and knowledgeable about history. But what about citizens who have been seduced by the world of life as entertainment and commerce? They have been educated, in good part, by a world in which negative emotional provocation is the rule rather than the exception and where the best solutions for a problem have to do primarily with short-term interests.”

Fascism Is on the March Again: Blame the Internet

August 11, 2018

The title of this post is identical to the title of an article by Timothy Snyder in the Outlook Section of the 27 May 2018 issue of the Washington Post. The hope was that the internet would connect people and spread liberty around the world. The opposite appears to have happened. According to Freedom House, every year since 2005 has seen a retreat in democracy and an advance of authoritarianism. The year 2017, when the Internet reached more than half the world’s population, was marked by Freedom House as particularly disastrous. Young people who came of age with the Internet care less about democracy and are more sympathetic to authoritarianism than any other generation.

Moreover, the Internet has become a weapon of choice for those who wish to spread authoritarianism. Russia’s president and its leading propagandist both cite a fascist philosopher, Ivan Ilyin, who believed that factuality was meaningless. In 2016 Russian Twitter bots spread messages designed to discourage some Americans from voting and encourage others to vote for Russia’s preferred presidential candidate, Donald Trump. Britain was substantially influenced by bots from beyond its borders. In contrast, Germany’s democratic parties have agreed not to use bots during political campaigns. The only party to resist the idea was the extreme-right Alternative für Deutschland, which was helped by Russia’s bots in last year’s elections.

Mr. Snyder writes, “Modern democracy relies upon the notion of a “public space” where, even if we can no longer see all our fellow citizens and verify facts together, we have institutions such as science and journalism that can provide ongoing references for discussion and policy. The Internet breaks the line between the public and private by encouraging us to confuse our private desires with the actual state of affairs. This is a constant human tendency. But in assuming that the Internet would make us more rather than less rational, we have missed the obvious danger: that we can now allow our browsers to lead us into a world where everything we would like to believe is true.”

The explanation that the healthy memory blog offers is Nobel Laureate Daniel Kahneman’s Two System View of Cognition. System 1, intuition, is our normal mode of processing and requires little or no attention. System 2, commonly referred to as thinking, requires our attention. One of the roles of System 2 is to monitor System 1. When we encounter something contradictory to what we believe, the brain sets off a distinct signal. It is easiest to ignore this signal and to continue our System 1 processing. To engage System 2 requires our attentional resources to attempt to resolve the discrepancy and to seek further understanding. The Internet is a superhighway for System 1 processing, with few willing to take the off ramps to System 2 to learn new or different ways of thinking.

Mr. Snyder writes, “Democracy depends upon a certain idea of truth: not the babel of our impulses, but an independent reality visible to all citizens. This must be a goal; it can never be fully achieved. Authoritarianism arises when this goal is openly abandoned, and people conflate the truth with what they want to hear. Then begins a politics of spectacle, where the best liars with the biggest megaphones win. Trump understands this very well. As a businessman he failed, but as a politician he succeeded because he understood how to beckon desire. By deliberately speaking unreality with modern technology, the daily tweet, he outrages some and elates others, eroding the very notion of a common world of facts.

“To be sure, Fascism 2.0 differs from the original. Traditional fascists wanted to conquer both territories and selves; the Internet will settle for your soul. The racist oligarchies that are emerging behind the Internet today want you on the couch, outraged or elated, it doesn’t matter which, so long as you are dissipated at the end of the day. They want society to be polarized, believing in virtual enemies that are inside the gate, rather than actually marching or acting in the physical world. Polarization directs Americans at other Americans, or rather at the Internet caricatures of other Americans, rather than at fundamental problems such as wealth inequality or foreign interference in democratic elections. The Internet creates a sense of “us and them” inside the country and an experience that feels like politics but involves no actual policy.”

To be sure, Trump is a Fascist. His so-called “base” consists of Nazis and white supremacists. His playbook is straight from Joseph Goebbels, with the “big lie” and the repetition of that “big lie.”

VR Headset Helps People Who Are Legally Blind to See

August 9, 2018

The title of this post is identical to the title of an article by Catherine de Lange in the News section of the 4 August 2018 issue of the New Scientist. Although this virtual reality headset does not cure the physical cause of blindness, the device does let people with severe macular degeneration resume activities like reading and gardening—tasks they previously found impossible.

Macular degeneration is a common, age-related condition. It affects about 11 million people in the US and around 600,000 people in the UK. Damage to blood vessels causes the central part of the retina, the macula, to degrade. This leaves people with a blind spot in the center of their vision, and can make those with the condition legally blind. Bob Massof at Johns Hopkins University says, “You can still see with your periphery, but it is difficult or impossible to recognize people, to read, to perform everyday activities.”

This new system is called IrisVision. It uses virtual reality (VR) to make the most of peripheral vision. The user puts on a VR headset that holds a Samsung Galaxy phone. It records the person’s surroundings and displays them in real time, so that the user can magnify the image as many times as they need for their peripheral vision to become clear. Doing so also helps to reduce or eliminate their blind spot.

Tomi Perski at IrisVision, who also has severe macular degeneration, says, “Everything around the blind spot looks, say, 10 times bigger, so the relative size of the blind spot looks so much smaller that the brain can’t perceive it anymore.” When he first started using the device it was an emotional experience. He says, “I sensed that I could see again and the tears started coming.”

Perski says, “If I were to look at my wife—and I’m standing 4 or 5 feet away—my blind spot is so large I couldn’t see her head at all.” But when he uses IrisVision the magnification causes the blind spot to be relatively much smaller, so that it no longer covers his wife’s whole head, just a small part of her face. He says, “If I just move that blind spot I can see her whole face and her expression and everything.”
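To make the geometry concrete, here is a minimal illustrative sketch in Python. It is not IrisVision’s actual algorithm, and the angles and magnifications are assumed numbers chosen only to mirror Perski’s example: a blind spot of roughly fixed angular size hides a smaller and smaller fraction of an object as the image of that object is magnified.

# Illustrative sketch only -- not IrisVision's actual algorithm.
# A central scotoma (blind spot) covers a roughly fixed visual angle, so
# magnifying the camera image by a factor m makes an object span m times
# more visual angle and leaves a smaller fraction of it hidden.

def visible_fraction(object_angle_deg, scotoma_angle_deg, magnification):
    """Fraction of an object's magnified angular extent that is NOT
    covered by a central blind spot of fixed angular size."""
    magnified = object_angle_deg * magnification
    hidden = min(scotoma_angle_deg, magnified)
    return 1.0 - hidden / magnified

# Assumed numbers: a face at conversational distance spans about 5 degrees;
# a large central scotoma spans about 20 degrees.
for m in (1, 5, 10):
    print(f"{m}x magnification: {visible_fraction(5.0, 20.0, m):.0%} of the face visible")
# Prints 0% at 1x, 20% at 5x, and 60% at 10x: the magnification does not shrink
# the blind spot itself, it shrinks how much of the scene the blind spot can hide.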

The software automatically focuses on what the person is looking at, enabling them to go from reading a book on their lap to looking at the distance without adjusting the magnification or zoom manually. Colors are given a boost because many people with macular degeneration have trouble distinguishing them (the cones are largely in the macular region), and users can place the magnification bubble over anything they want to see in even more detail, for example to read small print.

In a trial, 30 people used the system for two weeks, filling out questionnaires on their ability to complete daily activities before and after the period. David Rhew at Samsung Electronics Americas says, “They can now read, they can watch TV, they can interact with people, they can do gardening. They can do stuff that for years was not even a consideration.”
According to Rhew, the vision of participants was all but restored with the headset. Rhew says, “The baseline rate of vision in the individuals came in at 20/400, which is legally blind, and with the use of this technology it improved to 20/30, which is pretty close to 20/20 vision.” 20/40 is usually the standard that lets people drive without glasses; 20/30 is even better. This is not to say they can drive with this device, but rather to indicate the quality of the vision.

The results have been presented at the Association for Research in Vision and Ophthalmology annual meeting.

The headset is now being used in 80 ophthalmology centers around the US, and the next step is to adapt the software to work for other vision disorders.

The system costs $2500, which includes a Samsung Gear VR headset and a Galaxy S7 or S8 smartphone customized with the software.

Trump and the 2018 Election

July 27, 2018

At the joint press conference with Trump and Putin, Putin said that he wanted Trump to win and that he helped Trump win. The record (both video and print) of this conference that the White House has published, which is supposed to be an accurate public record, omits these comments by Putin. And Trump is arguing that Russia is going to help the Democrats in 2018.

In case you’re wondering how Trump manages to do this, you must realize that Trump lives in a self-created reality that changes as a function of what he wants and what is convenient at the moment. Objective reality does not exist for Trump.

The obvious question is, how can Trump’s base not notice that Trump is not in touch with reality? The answer is that they are exclusive System 1 processors (see the many posts on Kahneman) who believe everything he says.

The immediately preceding post predicted a possible Constitutional Crisis resulting from disputed election results. The situation reminds HM of the response Benjamin Franklin gave to someone who asked what the outcome of the Constitutional Convention was. Franklin answered, “a republic, if you can keep it.” HM is becoming increasingly doubtful that we shall be able to keep it. What is needed is for Republicans to return to Republican values rather than serving as Trump’s unthinking lackeys.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Surprise, Maryland

July 26, 2018

The title of this post is identical to the title of an article in the 23 July 2018 issue of the Washington Post. The subtitle is “Your election contractor has ties to Russia. And other states also remain vulnerable to vote tampering.” Senior officials have revealed that an Internet technology company with which the state contracts to hold electronic voting information is connected to a Russian oligarch who is “very close” to Russian President Vladimir Putin. Maryland leaders did not know about the connection until the FBI told them.

Maryland is not a slacker on election security; it is regarded as being ahead of the curve relative to other states. So if even motivated states can be surprised, what about the real laggards?

Maryland’s exposure began when it chose a company to keep electronic information on voter registration, election results, and other extremely sensitive data. Later this company was purchased by a firm run by a Russian millionaire and heavily invested in by a Kremlin-connected Russian billionaire. Currently the state does not have any sense that these Russian links have had any impact on the conduct of its elections, and it is scrambling to shore up its data handling before November’s voting. But the fact that the ownership change’s implications could have gone unnoticed by state officials is cause enough for concern. The quality of contractors that states employ to handle a variety of election-related tasks is just one of many concerns election-security experts have identified since Russia’s manipulation of the 2016 U.S. presidential election.

Maryland has pushed to upgrade its election infrastructure. It rented new voting machines in advance of the 2016 vote to ensure that they leave a paper trail. State election officials note that they hire an independent auditor to conduct a parallel count based on those paper records, with automatic recounts if there is a substantial discrepancy between the two tallies. Observers note that the state could still do better, for example by conducting manual post-election audits as well as electronic ones. But Maryland is still far more responsible than many others.

Recently Politico’s Eric Geller surveyed 40 states about how they would spend new federal election-security funding Congress recently approved. Here are some depressing results: “only 13 states said they intend to use the federal dollars to buy new voting machines. At least 22 said they have no plans to replace their machines before the election—including all five states that rely solely on electronic voting devices, which cybersecurity experts consider a top vulnerability. In addition, almost no states conduct robust statistics-based post-election audits to look for evidence of tampering after the fact. And fewer than one-third of states and territories have requested a key type of security review from the Department of Homeland Security.”

Moreover, Congress seems uninterested in offering any more financial help, despite states’ glaring needs. Republican federal lawmakers last week nixed a $380 million election-security measure.
So do not waste your time watching voter predictions and wondering whether there will be a “blue wave” to save the country from Trump. Russian election interference is guaranteed, and Trump, understandably, is taking no action. So if there is no blue wave, Democrats will cry interference. If there is a “blue wave,” Trump will claim interference, even though such interference by Russia would make no sense; indeed, Trump has already made this assertion. Mixed results and widespread dissatisfaction are the likely result. And perhaps a Constitutional Crisis.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Wait, There’s More

July 25, 2018

More information has emerged on Russian interference in the 2016 Presidential Election. This post is based on an article titled “Burst of tweets from Russian operatives in October 2016 generates suspicion” by Craig Timberg and Shane Harris in the 21 July 2018 issue of the Washington Post. The article begins, “On the eve of one of the newsiest days of the 2016 presidential election, Russian operatives sent off tweets at a rate of about a dozen each minute. By the time they finished, more than 18,000 had been sent through cyberspace toward unwitting American voters, making it the busiest day by far in a disinformation operation whose aftermath is still roiling U.S. politics.”

Clemson University researchers have collected 3 million Russian tweets. The reason for this burst of activity on 6 Oct. 2016 is a mystery that has generated intriguing theories but no definitive explanation. The theories attempt to make sense of how such a heavy flow of Russian disinformation might be related to what came immediately after on Oct. 7. This was the day when Wikileaks began releasing embarrassing emails that Russian intelligence operatives had stolen from the campaign chairman for Hillary Clinton, revealing sensitive internal conversations that would stir controversy.

Complicating this analysis is the number of other noteworthy events on that day. That day is best remembered for the Washington Post’s publication of a recording of Donald Trump speaking crudely about women. Also on that day U.S. intelligence officials first made public their growing concerns about Russian meddling in the presidential election, following reports about the hacking of prominent Americans and intrusions into election systems in several U.S. states.

Two questions arise: Could the Russian disinformation teams have gotten advance notice of the Wikileaks release, sending the operatives into overdrive to shape public reactions to the news? And what do the operatives’ actions that day reveal about Russia’s strategy and tactics as Americans head into another crucial election in just a few months?

The Clemson University researchers have assembled the largest trove of Russian disinformation tweets available so far. The database includes tweets between February 2014 and May 2018, all from accounts that Twitter has identified as part of the disinformation campaign waged by the Internet Research Agency, based in St. Petersburg, Russia, and owned by a Putin associate.

The new data offer still more evidence of the coordinated nature of Russia’s attempt to manipulate the American election. The Clemson researchers dubbed it “state-sponsored agenda building.”

Overall the tweets reveal a highly adaptive operation that interacted tens of millions of times with authentic Twitter users—many of whom retweeted the Russian accounts—and frequently shifted tactics in response to public events, such as Hillary Clinton’s stumble at a Sept. 11 memorial.

The Russians working for the Internet Research Agency are often called “trolls” for their efforts to secretly manipulate online conversations. These trolls picked up their average pace of tweeting after Trump’s election. This was especially true for the more than 600 accounts targeting the conservative voters who were part of his electoral base, a surge the researchers suspect was an effort to shape the political agenda during the transition period by energizing core supporters.

For sheer curiosity, nothing in the Clemson dataset rivals Oct. 6. The remarkable combination of news events the following day has led several analysts, including the Clemson researchers, to suspect there likely was a connection to the coming Wikileaks release. Other researchers dispute the conclusion that there was a connection.

However, last week’s indictment of Russian intelligence officers by Special Counsel Robert Mueller III made clear that the hack of Clinton campaign chairman John Podesta’s emails and their distribution through Wikileaks was a meticulous operation. Tipping off the Internet Research Agency, the St. Petersburg troll factory owned by an associate of President Vladimir Putin, might have been part of an overarching plan of execution, said several people familiar with the Clemson findings about the activity of the Russian trolls.

Clint Watts, a former FBI agent and an expert on the Russian troll armies and how they respond to news as well as to upcoming events, like debates or candidate appearances, says that they tend to ramp up when they know something’s coming.

Although Watts did not participate in the Clemson research, his instincts fit with those of researchers Darren L. Linvill and Patrick L. Warren, who point to the odd consistency of the storm of tweets. More than on any other day, the trolls on Oct. 6 focused their energies on a left-leaning audience, with more than 70% of the tweets targeting Clinton’s natural constituency of liberals, environmentalists and African Americans. Linvill and Warren have written a paper on their research, now undergoing peer review, identifying 230 accounts they categorized as “Left Trolls” because they sought to infiltrate left-wing conversation on Twitter.

These Left Trolls did so in a way designed to damage Clinton, who is portrayed as corrupt, in poor health, dishonest and insensitive to the needs of working-class voters and various minority groups. In contrast, the Left Trolls celebrated Sen. Bernie Sanders and his insurgent primary campaign against Clinton and, in the general election, Green Party candidate Jill Stein.

For example, less than two weeks before election day, the Left Troll account @Blacktivist tweeted, “NO LIVES MATTER TO HILLARY CLINTON. ONLY VOTES MATTER TO HILLARY CLINTON.”

Ninety-three of the Left Troll accounts were active on Oct. 6 and 7, each with an average following of 1,760 other Twitter accounts. Taken together, their messages could have directly reached Twitter accounts 20 million times on those two days, and reached millions of others through retweets, according to the Clemson researchers.
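
The article does not spell out how such a reach figure is computed, but the back-of-the-envelope arithmetic is simple: potential direct impressions are roughly the number of accounts times the tweets per account times the followers per account. Here is a minimal sketch; the follower average comes from the article, while the per-account tweet count is purely an illustrative assumption.

```python
# Rough sketch of a potential-reach estimate. The follower average comes
# from the article; the per-account tweet count is a hypothetical value
# chosen only to show the order of magnitude, not the Clemson data.

accounts = 93               # Left Troll accounts active on Oct. 6 and 7
avg_followers = 1760        # average following per account (from the article)
tweets_per_account = 122    # assumed tweets per account over the two days

# Every tweet can land in the timeline of each follower of the account,
# so potential direct impressions = accounts * tweets * followers.
potential_impressions = accounts * tweets_per_account * avg_followers
print(f"Potential direct impressions: {potential_impressions:,}")
# Roughly 20 million, the order of magnitude the researchers report;
# retweets by authentic users would add to this figure.
```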

Podesta’s emails made public candid, unflattering comments about Sanders and fueled allegations that Clinton had triumphed over him because of her connection to the Democratic party establishment. The Left Trolls on Oct. 6 appeared to be stirring up conversation among Twitter users potentially interested in such arguments, according to the Clemson researchers.

Warren, an associate professor of economics, said, “We think that they were trying to activate and energize the left wing of the Democratic Party, the Bernie wing basically, before the Wikileaks release that implicated Hillary in stealing the Democratic primary.”

U.S. officials with knowledge of information that the government has gathered on the Russian operation said they had yet to establish a clear connection between Wikileaks and the troll accounts that would prove the two were coordinating around the release of campaign emails. The officials spoke on the condition of anonymity to share assessments not approved for official release.

But some clues have emerged that may point to coordination. It now appears that WikiLeaks intended to publish the Podesta emails closer to the election, and that some external event compelled the group to publish sooner than planned, the officials said.

One U.S. official said, “There is definitely a command and control structure behind the IRA’s use of social media, pushing narratives and leading people towards certain conclusions.”

Warren and Linvill found that Russian disinformation tweets generated significant conversation among other Twitter users. Between September and November 2016, references to the Internet Research Agency accounts showed up in the tweets of others 4.7 million times.

The patterns of tweets also show how a single team of trolls worked on different types of accounts depending on shifting priorities, one hour playing the part of an immigrant-bashing conservative, the next an African American concerned about police brutality, and the next an avid participant in “hashtag games” in which Twitter users riff on particular questions such as “#WasteAMillionIn3Words.” The answer on 11 July 2015 from IRA account @LoraGreen was, “Donate to #Hillary.”

Linvill said, “Day to day they seem to be operating as a business just allocating resources. It’s definitely one organization. It’s not one fat guy sitting in his house.”

Warren and Linvill collected their set of Internet Research Agency tweets using a social media analysis tool called Social Studio that catalogs tweets in a searchable format. The researchers collected all of the available tweets from 3,841 accounts that Twitter has identified as having been controlled by the Internet Research Agency, whose officials and affiliated companies have been charged with several crimes related to the 2016 election.
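
The article does not describe the researchers’ code, and Social Studio’s interface is not shown here. As a purely illustrative sketch, the core data-collection step amounts to filtering a large tweet archive down to the accounts Twitter flagged; the file names and column names below are assumptions.

```python
# Hypothetical sketch of the data-collection step: keep only the tweets
# posted by accounts that Twitter attributed to the Internet Research
# Agency. File and column names are illustrative assumptions.
import pandas as pd

tweets = pd.read_csv("tweet_archive.csv")  # assumed columns: handle, date, text
ira_handles = set(
    pd.read_csv("twitter_identified_ira_accounts.csv")["handle"].str.lower()
)

ira_tweets = tweets[tweets["handle"].str.lower().isin(ira_handles)].copy()
ira_tweets["date"] = pd.to_datetime(ira_tweets["date"])

# Restrict to the February 2014 through May 2018 window of the database.
in_window = (ira_tweets["date"] >= "2014-02-01") & (ira_tweets["date"] <= "2018-05-31")
ira_tweets = ira_tweets[in_window]

print(f"{ira_tweets['handle'].nunique()} accounts, {len(ira_tweets)} tweets")
```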

The Clemson researchers sorted the Internet Research Agency accounts into five categories, the largest two being “Right Troll” and “Left Troll.” The others focused on retweeting news stories from around the country, participating in hashtag games, or spreading a false news story about a salmonella outbreak in turkeys around the Thanksgiving season of 2015.

The largest and most active group overall were the Right Trolls, which typically had little profile information but featured photos the researchers described as “young, attractive women.” They collectively had nearly a million followers, the researchers said.

The Right Trolls pounced on the Sept. 11 stumble by Clinton to tweet at a frenetic pace for several days. They experimented with a variety of related hashtags such as #HillarySickAtGroundZero, #ClintonCollapse and #ZombieHillary before eventually focusing on #HillarysHealth and #SickHillary, tweeting these hundreds of times.

This theme flowed into several more days of intensive tweeting about a series of bombings in the New York area that injured dozens of people, stoking fears of terrorism.

When one group of accounts was tweeting at a rapid pace, others often slacked off or stopped entirely, underscoring the Clemson researchers’ conclusion that a single team was taking turns operating various accounts. The trolls also likely used some forms of automation to manage multiple accounts simultaneously and tweet with a speed impractical for humans, according to the researchers.
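
The article does not say how the researchers reached this conclusion, but one simple, hypothetical way to look for the “taking turns” pattern described above is to compare hourly tweet volumes across the account categories and check whether they are negatively correlated, as in the sketch below (it assumes the ira_tweets table from the earlier sketch plus a category column).

```python
# Illustrative sketch, not the Clemson researchers' actual method: if one
# team operates several account groups in shifts, the groups' hourly tweet
# volumes should tend to be negatively correlated.
import pandas as pd

def hourly_volume(df: pd.DataFrame) -> pd.DataFrame:
    """Count tweets per category per hour (assumes 'date' and 'category' columns)."""
    return (
        df.groupby([pd.Grouper(key="date", freq="H"), "category"])
          .size()
          .unstack(fill_value=0)
    )

def shift_work_signal(df: pd.DataFrame) -> pd.DataFrame:
    """Pairwise correlations of hourly volumes; strongly negative values are
    consistent with groups taking turns rather than tweeting independently."""
    return hourly_volume(df).corr()

# Example usage with the hypothetical ira_tweets table from the earlier sketch:
# print(shift_work_signal(ira_tweets))
```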

What Should Be Done

July 24, 2018

The first part of this post is taken from the Afterword of “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age,” by David E. Sanger.

“The first is that our cyber capabilities are no longer unique. Russia and China have nearly matched America’s cyber skills; Iran and North Korea will likely do so soon, if they haven’t already. We have to adjust to that reality. Those countries will no sooner abandon their cyber arsenals than they will abandon their nuclear arsenals or ambitions. The clock cannot be turned back. So it is time for arms control.”

“Second, we need a playbook for responding to attacks, and we need to demonstrate a willingness to use it. It is one thing to convene a ‘Cyber Action Group’ as Obama did fairly often, and have them debate when there is enough evidence and enough concert to recommend to the president a ‘proportional response.’ It is another thing to respond quickly and effectively when such an attack occurs.”

“Third, we must develop our abilities to attribute attacks and make calling out any adversary the standard response to cyber aggression. The Trump administration, in its first eighteen months, began doing just this: it named North Korea as the culprit in WannaCry and Russia as the creators of NotPetya. It needs to do that more often, and faster.”

“Fourth, we need to rethink the wisdom of reflexive secrecy around our cyber capabilities. Certainly, some secrecy about how our cyberweapons work is necessary—though by now, after Snowden and Shadow Brokers, there is not much mystery left. America’s adversaries have a pretty complete picture of how the United States breaks into the darkest of cyberspace.”

“Fifth, the world tends to move ahead with setting these norms of behavior even if governments are not yet ready. Classic arms-control treaties won’t work: they take years to negotiate and more to ratify. With the blistering pace of technological change in cyber, they would be outdated before they ever went into effect. The best hope is to reach a consensus on principles that begins with minimizing the danger to ordinary civilians, the fundamental political goal of most rules of warfare. There are several ways to accomplish that goal, all of them with significant drawbacks. But the most intriguing, to my mind, has emerged under the rubric of a “Digital Geneva Convention,” in which companies—not countries—take the lead in the short term. But countries must then step up their games too.”

There is much more in this book than could be covered in these healthymemory posts. The primary objective was to raise awareness of this new threat, this new type of warfare, and how ill-prepared we are to respond to it and to fight it. You are encouraged to buy this book and read it for yourself. If this book is relevant to your employment, have your employer buy this book.

It is important to understand that Russia made war on us by attacking our election, and that it will continue to do so. Currently we have a president who refuses to believe that we have been attacked. Moreover, it is possible that this president colluded with the enemy in this attack. Were he innocent, he would simply let the investigation take its course. His continuing denials, cries of witch hunt, and attacks on the intelligence agencies and the Justice Department are unconscionable. This has been further exacerbated by Republicans aiding in this effort to undermine our democracy.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The 2016 Election—Part Three

July 22, 2018

This post is based on David E. Sanger’s “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age.” Once the GRU, via Guccifer 2.0, DCLeaks, and WikiLeaks, began distributing the hacked emails, each revelation of the DNC’s infighting or Hillary Clinton’s talks at fund raisers became big news. The content of the leaks overwhelmed the bigger, more important question of whether everyone—starting with the news organizations reporting the contents of the emails—was doing Putin’s bidding. When in early August John Brennan, the CIA Director, began sending intelligence reports over to the White House in sealed envelopes, the administration was preoccupied with the possibility that a far larger plot was under way. The officials feared that the DNC was only an opening shot, or a distraction. Reports were trickling in that constant “probes” of election systems in Arizona and Illinois had been traced back to Russian hackers. Two questions were: Was Putin’s bigger plan to hack the votes on November 8? And how easy would that be to pull off?

Brennan’s intelligence reports of Putin’s intentions and orders made the CIA declare with “high confidence” that the DNC hack was the work of the Russian government at a time when the NSA and other intelligence agencies still harbored doubts. The sources described a coordinated campaign ordered by Putin himself, the ultimate modern-day cyber assault—subtle, deniable, launched on many fronts, incongruously directed from behind the walls of the Kremlin. The CIA concluded that Putin didn’t think Trump could win the election. Putin, like everyone else, was betting that his nemesis Clinton would prevail. He was hoping to weaken her by fueling a post-election-day narrative that she had stolen the election by vote tampering.

Brennan argued that Putin and his aides had two goals: “Their first objective was to undermine the credibility and integrity of the US electoral process. They were trying to damage Hillary Clinton. They thought she would be elected and they wanted her bloodied by the time she was going to be inaugurated;” but Putin was hedging his bets by also trying to promote the prospects of Mr. Trump.

[Excuse the interruption of this discussion to consider where we stand today. Both Putin and Trump want to undermine the credibility and integrity of the US electoral process. Trump has been added because he is doing nothing to keep the Russians from interfering again. Much is written about the possibility of a “Blue Wave” being swept into power in the mid-term elections. Hacking into the electoral process again with no preventive measures would impede any such Blue Wave. Trump fears a Blue Wave as it might lead to his impeachment. This is one of his “Remain President and Keep Out of Jail” cards. Others will be discussed in later posts.]

Returning to the book, at this time Trump began warning about election machine tampering. He appeared with Sean Hannity on Fox News promoting his claim of fraudulent voting. He also complained about needing to scrub the voting rolls and to make it as difficult as possible for non-Trump voters to vote. Moreover, he used this as his excuse for losing the popular vote.

At this time Russian propaganda was in full force via the Russian TV network and Breitbart News, Steve Bannon’s mouthpiece.

Haines, a member of Obama’s team, had not realized that two-thirds of American adults get their news through social media. Haines said, “So while we knew something about Russian efforts to manipulate social media, I think it is fair to say that we did not recognize the extent of the vulnerability.”

Brennan was alarmed at the election risk from the Russians. He assembled a task force of CIA, NSA, and FBI experts to sort through the evidence. And as his sense of alarm increased, he decided that he needed to personally brief the Senate and House leadership about the Russian infiltrations. One by one he got to these leaders, who had security clearances, so he could paint a clear picture of Russia’s efforts.

As soon as the session with twelve congressional leaders led by Mitch McConnell began, it went bad. It devolved into a partisan debate. McConnell did not believe what he was being told. He chastised the intelligence officials for buying into what he claimed was Obama administration spin. Comey tried to make the point that Russia had engaged in this kind of activity before, but this time it was on a far broader scale. The argument made no difference. It became clear that McConnell would not sign on to any statement blaming the Russians.

It should be remembered that when Obama was elected, McConnell swore he would do everything in his power to keep Obama from being reelected. McConnell is a blatant racist and 100% politician. The country is much worse for it. For McConnell, professionals interested in determining the truth do not exist. All that exists is what is politically expedient for him.

There was much discussion regarding what to do about Russia. DNI Clapper warned that if the Russians truly wanted to escalate, they had an easy path. Their implants were already deep inside the American electric grid. The most efficient way to turn Election Day into a chaotic finger-pointing mess would be to plunge key cities into darkness, even for just a few hours.

Another issue was that the NSA’s tools had been compromised. With its implants in foreign systems exposed, the NSA temporarily went dark. At a time when the White House and Pentagon were demanding more options on Russia and a stepped-up campaign against ISIS, the NSA was building new tools because its old ones had been blown.

The 2016 Election—Part Two

July 21, 2018

This post is based on David E. Sanger’s “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age.” In March 2016 “Fancy Bear,” a Russian group associated with the GRU (Russian military intelligence), broke into the computers of the Democratic Congressional Campaign Committee before moving into the DNC networks as well. “Fancy Bear” was busy sorting through Podesta’s email trove. The mystery was what the Russians planned to do with the information they had stolen. The entire computer infrastructure at the DNC needed to be replaced. Otherwise it would not be known for sure where the Russians had buried implants in the system.

The DNC leadership began meeting with senior FBI officials in mid-June. In mid-June, the DNC leadership decided to give the story of the hack to the Washington Post. Both the Washington Post and the New York Times ran it, but it was buried in the political back pages. Unlike the physical Watergate break-in, the significance of a cyber break-in had yet to be appreciated.

The day after the Post and the Times ran their stories, a persona with the screen name Guccifer 2.0 burst onto the web, claiming that he—not some Russian group—had hacked the DNC. His awkward English, a hallmark of the Russian effort, made it clear he was not a native speaker. He contended he was just a very talented hacker, writing:

Worldwide known cyber security company CrowdStrike announced that the Democratic National Committee (DNC) servers had been hacked by “sophisticated” hacker groups.

I’m very pleased the company appreciated my skills so highly)))
But in fact, it was easy, very easy.

Guccifer may have been the first one who penetrated Hillary Clinton’s and other Democrats’ mail servers. But he certainly wasn’t the last. No wonder any other hacker could easily get access to the DNC’s servers.

Shame on CrowdStrike: Do you think I’ve been in the DNC’s networks for almost a year and saved only 2 documents? Do you really believe it?

He wrote that thousands of files and emails were now in the hands of WikiLeaks. He predicted that they would publish them soon.

Sanger writes, “There was only one explanation for the purpose of releasing the DNC documents: to accelerate the discord between the Clinton camp and the Bernie Sanders camp, and to embarrass the Democratic leadership. That was when the phrase “weaponizing” information began to take off. It was hardly a new idea. The web just allowed it to spread faster than past generations had ever known.”

Sanger continues, “The digital break-in at the DNC was strange enough, but Trump’s insistence that there was no way it could be definitively traced to the Russians was even stranger. Yet Trump kept declaring he admired Putin’s “strength,” as if strength was the sole qualifying characteristic of a good national leader…He never criticized Putin’s moves against Ukraine, his annexation of Crimea, or his support of Bashar al-Assad in Syria.”

The GRU-linked emails weren’t producing as much news as they had hoped, so the next level of the plan kicked in: activating WikiLeaks. The first WikiLeaks dump was massive: 44,000 emails, more than 17,000 attachments. The deluge started right before the Democratic National Convention.

Many of these documents created discord in the convention. The party’s chair, Wasserman Schultz, had to resign just ahead of the convention over which she was to preside. In the midst of the convention Sanger and his colleague Nicole Perlroth wrote: “An unusual question is capturing the attention of cyber specialists, Russia experts and Democratic Party leaders in Philadelphia: Is Vladimir V. Putin trying to meddle in the American Presidential Election?”

A preliminary highly classified CIA assessment circulating in the White House concluded with “high confidence” that the Russian government was behind the theft of emails and documents from the Democratic National Committee. This was the first time the government began to signal that a larger plot was under way.

Still the White House remained silent. Eric Schmitt and Sanger wrote, “The CIA evidence leaves President Obama and his national security aides with a difficult diplomatic decision: whether to publicly accuse the government of Vladimir V. Putin of engineering the hacking.”

Trump wrote on Twitter, “The new joke in town is that Russia leaked the disastrous DNC emails, which never should have been written (stupid), because Putin likes me.”

Sanger writes, “Soon it would not be a joke.”

The 2016 Election—Part One

July 20, 2018

This post is based on David E. Sanger’s “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age.” In the middle of 2015 the Democratic National Committee asked Richard Clarke to assess the political organization’s digital vulnerabilities. He was amazed at what his team discovered. The DNC—despite its Watergate history, despite the well-publicized Chinese and Russian intrusions into the Obama campaign computers in 2008 and 2012—was securing its data with the kind of minimal techniques one would expect to find at a chain of dry cleaners. The way spam was filtered wasn’t even as sophisticated as what Google’s Gmail provides; it certainly wasn’t prepared for a sophisticated attack. And the DNC barely trained its employees to spot “spear phishing” emails of the kind that fooled the Ukrainian power operators into clicking on a link that then stole whatever passwords they entered. It lacked any capability for detecting suspicious activity in the network, such as the dumping of data to a distant server. Sanger writes, “It was 2015, and the committee was still thinking like it was 1972.”

So Clarke’s team came up with a list of urgent steps the DNC needed to take to protect itself. The DNC said they were too expensive. Clarke recalled, “They said all their money had to go into the presidential race.” Sanger writes, “Of the many disastrous misjudgments the Democrats made in the 2016 elections, this one may rank as the worst.” A senior FBI official told Sanger, “These DNC guys were like Bambi walking in the woods, surrounded by hunters. They had zero chance of surviving an attack. Zero.”

When an intelligence report from the National Security Agency about a suspicious Russian intrusion into the computer networks at the DNC was tossed onto Special Agent Adrian Hawkins’s desk at the end of the summer of 2015, it did not strike him or his superiors at the FBI as a four-alarm fire. When Hawkins eventually called the DNC switchboard, hoping to alert its computer-security team to the FBI’s evidence of Russian hacking, he discovered that the DNC didn’t have a computer-security team. In November 2015 Hawkins contacted the DNC again and explained that the situation was worsening. This second warning still did not set off alarms.

Anyone looking for a motive for Putin to poke into the election machinery of the United States does not have to look far: revenge. Putin had won his election, but only after essentially assuring the outcome, and evidence of the vote-rigging appeared in videos that went viral.

Clinton, who was Secretary of State, called out Russia for its antidemocratic behavior. Putin took the declaration personally. The sight of actual protesters, shouting his name, seemed to shake the man known for his unchanging countenance. He saw this as an opportunity. He declared that the protests were foreign-inspired. At a large meeting he was hosting, he accused Clinton of being behind “foreign money” aimed at undercutting the Russian state. Putin quickly put down the 2011 protests and made sure that there was no repetition in the aftermath of later elections. His mix of personal grievance at Clinton and general grievance at what he viewed as American hypocrisy never went away. It festered.

Yevgeny Prigozhin developed a large project for Putin: a propaganda center called the Internet Research Agency (IRA). It was housed in a squat four-story building in Saint Petersburg. From that building, tens of thousands of tweets, Facebook posts, and advertisements were generated in hopes of triggering chaos in the United States, and, at the end of the process, helping Donald Trump, a man who liked oligarchs, enter the Oval Office.

This creation of the IRA marked a profound transition in how the Internet could be put to use. Sanger writes, “For a decade it was regarded as a great force for democracy: as people of different cultures communicated, the best ideas would rise to the top and autocrats would be undercut. The IRA was based on the opposite thought: social media could just as easily incite disagreements, fray social bonds, and drive people apart. While the first great blush of attention garnered by the IRA would come because of its work surrounding the 2016 election, its real impact went deeper—in pulling at the threads that bound together a society that lived more and more of its daily life in the digital space. Its ultimate effect was mostly psychological.”

Sanger continues, “There was an added benefit: The IRA could actually degrade social media’s organizational power through weaponizing it. The ease with which its “news writers” impersonated real Americans—or real Europeans, or anyone else—meant that over time, people would lose trust in the entire platform. For Putin, who looked at social media’s role in fomenting rebellion in the Middle East and organizing opposition to Russia in Ukraine, the notion of calling into question just who was on the other end of a Tweet or Facebook post—of making revolutionaries think twice before reaching for their smartphones to organize—would be a delightful by-product. It gave him two ways to undermine his adversaries for the price of one.”

The IRA moved on to advertising. Between June 2015 and August 2017 the agency and groups linked to it spent thousands of dollars on Facebook ads each month, a fraction of the cost of an evening of television advertising on a local American television station. In this period Putin’s trolls reached up to 126 million Facebook users, while on Twitter they made 288 million impressions. Bear in mind that there are about 200 million registered voters in the US and only 139 million voted in 2016.

Here are some examples of the Facebook posts: a doctored picture of Clinton shaking hands with Osama bin Laden, and a comic depicting Satan arm-wrestling Jesus. The Satan figure says “If I win, Clinton wins.” The Jesus figure responds, “Not if I can help it.”

The IRA dispatched two of their experts, a data analyst and a high-ranking member of the troll farm. They spent three weeks touring purple states. They did rudimentary research and developed an understanding of swing states (something that doesn’t exist in Russia). This allowed the Russians to develop an election-meddling strategy and the IRA to target specific populations within these states that might be vulnerable to influence by social media campaigns operated by trolls across the Atlantic.

Russian hackers also broke into the State Department’s unclassified email system, and they might also have gotten into some “classified” systems. They also managed to break into the White House system. In the end, the Americans won the cyber battle in the State and White House systems, though they did not fully understand how it was part of an escalation of a very long war.

The Russians also broke into Clinton’s election office in Brooklyn. Podesta fell prey to a phishing attempt. When he changed his password, the Russians obtained access to sixty thousand emails going back a decade.

WannaCry & NotPetya

July 19, 2018

This post is based on “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age,” by David E. Sanger. The North Koreans got software stolen from the NSA by the Shadow Brokers group. So, the NSA lost its weapons and the North Koreans shot them back.

The North Korean hackers married the NSA’s tool to a new form of ransomware, which locks computers and makes their data inaccessible—unless the user pays for an electronic key. The attack was spread via a phishing email similar to the one used by Russian hackers in the attacks on the Democratic National Committee and other targets in 2016. It contained an encrypted, compressed file that evaded most virus-detection software. Once it burst alive inside a computer or network, users received a demand for $300 to unlock their data. It is not known how many paid, but those who did never got the key—if there ever was one—to unlock their documents and databases.

WannaCry, like the Russian attackers on the Ukraine power grid, was among a new generation of attacks that put civilians in the crosshairs. Jared Cohen, a former State Department official said, “If you’re wondering why you’re getting hacked—or attempted hacked—with greater frequency, it is because you are getting hit with the digital equivalent of shrapnel in an escalating state-against-state war, way out there in cyberspace.”

WannaCry shut down the computer systems of several major British hospital systems, diverting ambulances and delaying non-emergency surgeries. Banks and transportation systems across dozens of countries were affected. WannaCry hit seventy-four countries. After Britain, the hardest hit was Russia (Russia’s Interior Ministry was among the most prominent victims). The Ukraine and Taiwan were also hit.

It was not until December 2017, three years to the day after Obama accused North Korea of the Sony attacks, that the United States and Britain formally declared that Kim Jong-un’s government was responsible for WannaCry. President Trump’s homeland security adviser Thomas Bossert said he was “comfortable” asserting that the hackers were “directed by the government of North Korea,” but said that conclusion came from looking at “not only the operational infrastructure, but also the tradecraft and the routine and the behaviors that we’ve seen demonstrated in past attacks. And so you have to apply some gumshoe work here, and not just some code analysis.”

The gumshoe work stopped short of reporting how the Shadow Brokers allowed the North Koreans to get their hands on tools developed for the American cyber arsenal. Describing how the NSA enabled North Korean hackers was either too sensitive, too embarrassing, or both. Bossert was honest about the fact that having identified the North Koreans, he couldn’t do much else to them. “President Trump has used just about every lever you can use, short of starving the people of North Korea to death, to change their behavior,” Bossert acknowledged. “And so we don’t have a lot of room left here.”

The Ukraine was the victim of multiple cyberattacks. One of the worst was NotPetya, the nickname given by Kaspersky Lab, which is itself suspected by the US government of providing back doors to the Russian government via its profitable security products. This cyberattack on the Ukrainians seemed targeted at virtually every business in the country, both large and small—from the television stations to the software houses to any mom-and-pop shops that used credit cards. Throughout the country computer users saw the same broken-English message pop onto their screens. It announced that everything on the hard drives of their computers had been encrypted: “Oops, your important files have been encrypted…Perhaps you are busy looking to recover your files, but don’t waste your time.” Then the false claim was made that if $300 was paid in bitcoin, the files would be restored.

NotPetya was similar to WannaCry. In early 2018 the Trump administration said that NotPetya was the work of the Russians. It was clear that the Russians had learned from the North Koreans. They made sure that no patch of Microsoft software would slow the spread of their code, and no “kill switch” could be activated. NotPetya struck two thousand targets around the world, in more than 65 countries. Maersk, the Danish shipping company, was among the worst hit. They reported losing $300 million in revenues and had to replace four thousand servers and thousands of computers.

The Shadow Brokers

July 18, 2018

This is the fourth post based on David E. Sanger’s “THE PERFECT WEAPON: War, Sabotage, & Fear in the Cyber Age.” Within the NSA a group developed special tools for Tailored Access Operations (TAO). These tools were used to break into the computer networks of Russia, China, and Iran, among others. These tools were posted by a group that called itself the Shadow Brokers. NSA’s cyber warriors knew that the code being posted was malware they had written. It was the code that allowed the NSA to place implants in foreign systems, where they could lurk unseen for years—unless the target knew what the malware looked like. The Shadow Brokers were offering a product catalog.

Inside the NSA, this breach was regarded as being much more damaging than what Snowden had done. The Shadow Brokers had their hands on the actual code, the cyberweapons themselves. These had cost tens of millions of dollars to create, implant, and exploit. Now they were posted for all to see—and for every other cyber player, from North Korea to Iran, to turn to their own uses.

“The initial dump was followed by many more, wrapped in taunts, broken English, a good deal of profanity, and a lot of references to the chaos of American politics.” The Shadow Brokers promised a ‘monthly dump service’ of stolen tools and left hints, perhaps misdirection, that Russian hackers were behind it all. One missive read, “Russian security peoples is becoming Russian hackers at nights, but only full moons.”

This post raised the following questions. Was this the work of the Russians, and if so, was it the GRU trolling the NSA the way it was trolling the Democrats? Did the GRU’s hackers break into the TAO’s digital safe, or did they turn an insider, or maybe several? And was this hack related to another loss of cyber tools, from the CIA’s Center for Cyber Intelligence, which had been appearing for several months on the WikiLeaks site under the name “Vault 7”? Most importantly, was there an implicit message in the publication of these tools, the threat that if Obama came after the Russians too hard for the election hack, more of the NSA’s code would become public?

The FBI and Brennan reported a continued decrease in Russian “probes” of the state election system. No one knew how to interpret the fact. It was possible that the Russians already had their implants in the systems they had targeted. One senior aide said, “It wouldn’t have made sense to begin sanctions” just when the Russians were backing away.

Michael Hayden, formerly of the CIA and NSA, said that this was “the most successful covert operation in history.”

From Russia, With Love

July 17, 2018

The title of this post is identical to the title of the Prologue from “The Perfect Weapon: War, Sabotage, & Fear in the Cyber Age.” Andy Ozment was in charge of the National Cybersecurity & Communications Integration Center, located in Arlington, VA. He had a queasy feeling as the lights went out the day before Christmas Eve, 2015. The screens at his center indicated that something more nefarious than a winter storm or a blown-up substation had triggered the sudden darkness across a remote corner of the embattled former Soviet republic. The event had the markings of a sophisticated cyberattack, remote-controlled from someplace far from Ukraine.

This was less than two years since Putin had annexed Crimea and declared it would once again be part of Mother Russia. Putin had his troops trade in their uniforms for civilian clothing, and they became known as the “little green men.” These men with their tanks were sowing chaos in the Russian-speaking southeast of Ukraine and doing what they could to destabilize a new, pro-Western government in Kiev, the capital.

Ozment realized that the middle of the holidays was the ideal time for a Russian cyberattack against the Ukrainians. The electric utility providers were operating with skeleton crews. To Putin’s patriotic hackers, Ukraine was a playground and testing ground. Ozment told his staff that this was a prelude to what might well happen in the United States. He regularly reminded his staff that in the world of cyber conflict, attackers came in five distinct varieties: “vandals, burglars, thugs, spies, and saboteurs.” He said he was not worried about the thugs, vandals, and burglars. It was the spies, and particularly the saboteurs, who kept him up at night.

In the old days, analysts could know who launched the missiles, where they came from, and how to retaliate. This clarity created a framework for deterrence. Unfortunately, in the digital age, deterrence stops at the keyboard. The chaos of the modern Internet plays out in an incomprehensible jumble. There are innocent service outages and outrageous attacks, but it is almost impossible to see where any given attack came from. Spoofing the system comes naturally to hackers, and masking their location is pretty simple. Even in the case of a big attack, it would take weeks, or months, before a formal intelligence “attribution” would emerge from American intelligence agencies, and even then there might be no certainty about who instigated the attack. So this is nothing like the nuclear age. Analysts can warn the president about what is happening, but they cannot specify, in real time and with certainty, where an attack is coming from or against whom to retaliate.

In the Ukraine the attackers systematically disconnected circuits, deleted backup systems, and shut down substations, all by remote control. The hackers planted a cheap program—malware named “KillDisk”—to wipe out the systems that would otherwise allow the operators to regain control. Then the hackers delivered the finishing touch: they disconnected the backup electrical system in the control room, so that not only were the operators now helpless, but they were sitting in darkness.

For two decades experts had warned that hackers might switch off a nation’s power grid, the first step in taking down an entire country.

Sanger writes, “while Ozment struggled to understand the implications of the cyber attack unfolding half a world away in Ukraine, the Russians were already deep into a three-pronged cyberattack on the very ground beneath his feet. The first phase had targeted American nuclear power plants as well as water and electric systems, with the insertion of malicious code that would give Russia the opportunity to sabotage the plants or shut them off at will. The second was focused on the Democratic National Committee, an early victim of a series of escalating attacks ordered, American intelligence agencies later concluded, by Vladimir V. Putin himself. And the third was aimed at the heart of American innovation, Silicon Valley. For a decade the executives of Facebook, Apple and Google were convinced that the technology that made them billions of dollars would hasten the spread of democracy around the world. Putin was out to disprove that thesis and show that he could use the same tools to break democracy and enhance his own power.”

Trump and North Korea

July 16, 2018

The situation between Trump and North Korea provides a salient, if not the most salient, example of the issues explored in THE PERFECT WEAPON. Trump has mistakenly declared that the threat of a nuclear armed North Korea is over.

Trump has met with Kim Jong-un. This was a major victory for Kim in that North Korea has desired a face-to-face meeting with the American President for a long time. The meeting was one of personal pleasure for Trump. His profuse praise of Kim Jong-un was honest, as Kim is one of the most ruthless, if not the hands-down most ruthless, of dictators. Clearly Kim is someone that Trump personally admires and would like to emulate.

The earlier name calling was just a ploy to provoke Kim. It’s a good thing that he did not provoke Kim, as Kim can, with a simple command, fire upon and destroy a large portion of Seoul. This is the dilemma that has precluded taking any military action against North Korea. Actually, the capability of hitting the United States with missiles armed with nuclear warheads changes the situation very little from what it was before Kim developed this capability. Its primary role is that of prestige: North Korea is now in the nuclear club.

Kim realizes that if he ever hit the United States with nuclear weapons, there would be a massive nuclear retaliation by the United States against North Korea.

Regardless of what it says, North Korea is not going to relinquish its nuclear arsenal. They’ve played this negotiation game in the past, and they never follow through on their promises. The danger is that when Trump realizes that he has been played, he will threaten the “bloody nose” that he has threatened North Korea with in the past. Should he do this, Kim would likely use his cyberwarfare options. He could disrupt financial operations, the electrical grid, and communications, and effectively bring the United States to its knees. Even if Trump exercised his nuclear option, that would likely not deter the North Koreans. Many of North Korea’s servers and its operators reside outside North Korea. Moreover, it is likely that the Chinese would come to North Korea’s aid as they did during the Korean War. America would be living for a substantial amount of time in the dark ages.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

THE PERFECT WEAPON

July 15, 2018

The title of this post is identical to the title of a book by David E. Sanger. The subtitle is “War, Sabotage, & Fear in the Cyber Age.” The following is from the Preface:

“Cyberweapons are so cheap to develop and so easy to hide that they have proven irresistible. And American officials are discovering that in a world in which almost everything is connected—phones, cars, electrical grids, and satellites—everything can be disrupted, if not destroyed. For seventy years, the thinking inside the Pentagon was that only nations with nuclear weapons could threaten America’s existence. Now that assumption is in doubt.

In almost every classified Pentagon scenario for how a future confrontation with Russia and China, even Iran and North Korea, might play out, the adversary’s first strike against the United States would include a cyber barrage aimed at civilians. It would fry power grids, stop trains, silence cell phones, and overwhelm the Internet. In the worst case scenarios, food and water would begin to run out; hospitals would turn people away. Separated from their electronics, and thus their connections, Americans would panic, or turn against one another.”

General Valery Gerasimov is an armor officer who, after combat in the Second Chechen War, served as the commander of the Leningrad and then the Moscow military districts. Writing in 2013 Gerasimov pointed to the “blurring [of] the lines between the state of war and the state of peace” and—after noting the Arab Awakening—observed that “a perfectly thriving state can, in a matter of months and even days, be transformed into an arena of fierce armed conflict…and sink into a web of chaos.” Gerasimov continued, “The role of nonmilitary means of achieving political and strategic goals has grown,” and the trend now was “the broad use of political, economic, informational, humanitarian, and other nonmilitary measures—applied in coordination with the protest potential of the population.” He saw large clashes of men and metal as a thing of the past. He called for “long distance, contactless actions against the enemy” and included in his arsenal “informational actions, devices, and means.” He concluded, “The information space opens wide asymmetrical possibilities for reducing the fighting potential of the enemy,” and so new “models of operations and military conduct” were needed.

Putin appointed Gerasimov chief of the general staff in late 2012. Fifteen months later there was evidence of his doctrine in action with the Russian annexation of Crimea and occupation of parts of the Donbas in eastern Ukraine. It should be clear from Gerasimov’s doctrine, and from Putin’s appointing him chief of the general staff, that the nature of warfare has radically changed. This needs to be kept in mind when there is talk of modernizing our strategic nuclear weapons. Mutual Assured Destruction, with the appropriate acronym MAD, was never a viable means of traditional warfare. It was and still is a viable means of psychological warfare, but it needs to remain at the psychological level.

Returning to the preface, “After a decade of hearings in Congress, there is still little agreement on whether and when cyberstrikes constitute an act of war, an act of terrorism, mere espionage, or cyber-enabled vandalism.” Here HM recommends adopting Gerasimov and Putin’s new definition of warfare.

Returning to the preface, “But figuring out a proportionate yet effective response has now stymied three American presidents. The problem is made harder by the fact that America’s offensive cyber prowess has so outpaced our defense that officials hesitate to strike back.”

James A. Clapper, a former director of national intelligence, said that was our problem with the Russians. There were plenty of ideas about how to get back at Putin: unplug Russia from the world’s financial system; reveal Putin’s links to the oligarchs; make some of his own money—and there was plenty hidden around the world—disappear. The question Clapper was asking was, “What happens next (after a cyber attack)?” And the United States can’t figure out how to counter Russian attacks without incurring a great risk of escalation.

Sanger writes, “As of this writing, in early 2018, the best estimates suggest there have been upward of two hundred known state-on-state cyberattacks—a figure that describes only those made public.”

This is the first of many posts on this book.

Microsoft Calls for Regulation of Facial Recognition

July 14, 2018

The title of this post is the same as the title of an article by Drew Harwell in the 12 July 2018 issue of the Washington Post. Readers of the healthy memory blog should know that there have been many posts demanding data on the accuracy of facial recognition software, including a party responsible for assessing its accuracy. As has been mentioned in many posts, the accuracy of facial recognition software on television, especially on police shows, is misleading. And the ramifications of erroneous classifications can be serious.

The article begins, “Microsoft is calling for government regulation on facial-recognition software, one of its key technologies, saying such artificial intelligence is too important and potentially dangerous for tech giants to police themselves. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike. The only way to regulate this broad use is for the government to do so.”

There’s been a torrent of public criticism aimed at Microsoft, Amazon and other tech giants over their development and distribution of the powerful identification and surveillance technology—including from their own employees.

Last month Microsoft faced widespread calls to cancel its contract with Immigration and Customs Enforcement, which uses a set of Microsoft cloud-computing tools that also include facial recognition. In a letter to chief executive Satya Nadella, Microsoft workers said they “refuse to be complicit” and called on the company to “put children and families above profits.” The company said its work with Immigration and Customs Enforcement is limited to mail, messaging and office work.

This is a rare call for greater regulation from a tech industry that has often bristled at Washington involvement in its work. The expressed fear is that government rules could hamper new technologies or destroy their competitive edge. That fear is unfounded if the government does the testing of new technologies. This does not hamper new technologies; rather, it protects the public from using inappropriate products.

Face recognition is used extensively in China for government surveillance. The technology needs to be open to greater public scrutiny and oversight. Allowing tech companies to set their own rules is an inadequate substitute for decision making by the public and its representatives.

Microsoft is moving more deliberately with facial-recognition consulting and contracting work and has turned down customers calling for deployments of facial-recognition technology in areas where it has concluded that there are greater human-rights risks.

Regulators also should consider whether police or government use of face recognition should require independent oversight; what legal measures could prevent AI from being used for racial profiling; and whether companies should be forced to post notices that facial-recognition technology is being used in public places.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Conclusions

July 1, 2018

This is the sixth post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” Although this is an outstanding work by Dr. Roberts, the conclusions could have been better. Consequently, HM is providing his conclusions from this work. It is divided into two parts. The first part deals with implications for authoritarian governments. The second part deals with implications for democracies.

Authoritarian Governments

Mao Tse Tung initially used a heavy-handed approach to the control of information. Although he managed to maintain control of the regime, it was an economic and social disaster. Beginning with Deng Xiaoping, policies of reform and opening were introduced. These evolved slowly and serially. The dictator’s dilemmas were discussed in the first post, “Censored.” One dilemma is that the government would like to enforce constraints on public speech, but repression could backfire against the government. Censorship could be seen as a signal that the authority is trying to conceal something and is not in fact acting as an agent for citizens. Another dilemma is that even if the dictator would like to censor, by censoring the autocrat has more difficulty collecting precious information about the public’s view of government. The third dilemma is that censorship can have economic consequences that are costly for authoritarian governments that retain legitimacy from economic growth.

China has apparently handled these three dilemmas via porous censorship. As China has a highly effective authoritarian government, it appears that porous censorship is highly effective. One could argue that China has provided a handbook for authoritarian governments, explaining how to maintain power, have a growing economy, and have a fairly satisfied public. It is still an open question how long this authoritarian government can maintain control. Although many Chinese are wealthy, and some are extremely wealthy, the majority of the country is poor. Although, in general, the standard of living has improved for virtually everyone, the amount of improvement differs widely. China has emerged as one of the leading powers in the world.

The question is whether China is satisfied with being an economic power, or whether it also wants to be a military power. It is devoting a serious amount of money to its military forces and has built its first aircraft carrier. Other countries in the area, along with the United States, are justly concerned with China’s growing military power, especially its navy and air force. China has made it clear that it wants to dominate the South China Sea. There is also the possibility that when the Chinese think the time is right, they will invade Taiwan. It is clear that the United States does not want another land war in Asia. But US naval forces would be stretched very thin, and the loss of a couple of supercarriers could result in a very short war.

Democracies

One can argue that democracy is already plagued with flooding. There is just way too much stuff on the internet. One could also argue that this is just too much of a good thing, but one would be wrong. Placing good information on the internet requires effort. Apart from entertainment, objective truth needs to be a requirement for the internet. Unfortunately, there are entities and individuals, such as the current president of the United States and the alt-right, that do not care about objective truth. So it is easy to post stuff on the internet that has no basis in objective reality. It is easy to spin conspiracy theories and all sorts of nonsense. So there is a problem on the production side. Information based on objective truth takes time to produce. Eliminating this goal of objective truth and letting the mind run wild provides the means of producing virtually endless amounts of nonsense, at least some of which is harmful.

But there is also effort on the receiving side. Concern with objective truth requires the use of what Kahneman terms System 2 processing, which is more commonly known as thinking. This requires both time and mental effort. However, a disregard for objective truth, such as what is produced by the alt-right, requires only believing, not thinking. It involves System 1 processing, which is also where our emotions sit.

Given that objective truth requires System 2 processing both for its production and its reception, and that a disregard for objective truth, such as that illustrated in alt-right products and conspiracy theories, requires only System 1 processing with emotional and gut feelings, the latter will likely overwhelm the former. This could spell the death of democracy. If so, the Chinese have provided an effective handbook for managing authoritarian governments.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Information Flooding

June 30, 2018

This is the fifth post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” Dr. Roberts writes, “Information flooding is the least identifiable form of censorship of all the mechanisms described in this book. Particularly with the expansion of the Internet, the government can hide its identity and post online propaganda pretending to be unrelated to the government. Coordinated efforts to spread information online reverberate throughout social media because citizens are more likely to come across them and share them. Such coordinated efforts can distract from ongoing events that might be unfavorable to the government and can de-prioritize other news and perspectives.”

We might expect that coordinated government propaganda efforts would be meant to persuade or cajole support from citizens on topics about which citizens criticize the government. However, the evidence presented in this chapter indicates that governments would rather not use propaganda to draw attention to any information that could shed a negative light on their performance. Instead, governments use coordinated information to draw attention away from negative events toward more positive news that fits their own overarching narrative, or to create positive feelings about the government among citizens. This type of flooding is even more difficult to detect, and it dilutes the information environment to decrease the proportion of information that reflects badly on the government.

Information flooding can be subtle. In other cases it can be quite glaring. On August 3, 2014, a 6.5 magnitude earthquake hit Yunnan province in China. The earthquake killed hundreds and injured thousands of people, destroying thousands of homes in the process. School buildings toppled and trapped children, reminiscent of the 2008 Sichuan earthquake, which killed 70,000 people. The government was heavily criticized for shoddy construction of government buildings. Emergency workers rushed to the scene to try to rescue survivors.

Eight hours after the earthquake struck, the Chinese official media began posting coordinated stories. These stories were not about the earthquake, but about controversial Internet personality Guo Meimei. Guo had reached Internet celebrity status three years earlier, in 2011, when she repeatedly posted pictures of herself dressed in expensive clothing and in front of expensive cars on Sina Weibo, attributing her lavish lifestyle to her job at the Red Cross in China. Although Guo did not work at the Red Cross, her boyfriend, Wang Jun, was on the board of the Red Cross Bo-ai Asset Management Ltd., a company that coordinated charity events for the Red Cross. The expensive items that Guo had posed with on social media in 2011 were allegedly gifts from Wang. This attracted millions of commentators on social media. The scandal highlighted issues with corruption of charities in China, and donations to the Red Cross plummeted.

By 2014, when the earthquake hit, the Guo Meimei scandal was old news, long forgotten in the fast pace of the Internet. On July 10, 2014, Chinese officials had arrested Guo on allegations of gambling on the World Cup. At midnight on August 4, 2014, Xinhua, out of the blue, posted a long, detailed account of a confession made by Guo Meimei that included admissions of gambling and engaging in prostitution. On the same day, many other major media outlets followed suit, including coverage by CCTV, the Global Times, Caijing, Southern Weekend, Beijing Daily, and Nanjing Daily. Obviously this was not an enormous coincidence. Rather, it was well-coordinated information flooding.

Coordination of information to produce such flooding is central to the information strategies of the Chinese propaganda system. The Chinese government is in the perfect position to coordinate because it has the resources and infrastructure to do so. The institution of propaganda in China is built in a way that makes coordination easy. The Propaganda Department is one of the most extensive bureaucracies within the Chinese Communist Party, infiltrating every level of government. It is managed and led directly from the top levels of the CCP.

China has a Fifty Cent Party that provides highly coordinated cheerleading. Current conceptions of online propaganda in China posit that the Fifty Cent Party is primarily tasked with countering anti-government rhetoric online. Social media users are accused of being Fifty Cent Party members when they defend government positions in heated online debates about policy or when they attack those with anti-government views. Scholars and pundits have viewed Fifty Cent Party members as attackers aimed at denouncing or undermining pro-West, anti-China opinion. For the most part, Fifty Cent Party members have been seen in the same light as traditional propaganda. They intend to persuade rather than to censor.

Instead of attacking, the largest portion of Fifty Cent Party posts in the leaked email archive were aimed at cheerleading for citizens and China—patriotism, encouragement or motivation of citizens, inspirational quotes or slogans, gratefulness, or celebrations of historical figures, China, or cultural events. That most of the posts seem intended to make people feel good about their lives, rather than to draw attention to anti-government threads on the Internet, is consistent with recent indications from Chinese propaganda officials that propagandists attempt to promote “positivity.” The Chinese Communist Party has recently focused on encouraging art, TV shows, social media posts, and music to create “positive energy” to distract from increasingly negative commercial news.

The Powerful Influence of Information Friction

June 29, 2018

This is the fourth post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” Dr. Roberts related that in May 2011 she had been following news about a local protest in Inner Mongolia in which an ethnic Mongol herdsman had been killed by a Han Chinese truck driver. In the following days increasingly large numbers of local Mongols began protesting outside of government buildings, culminating in protests of sufficient scale that the Chinese government imposed martial law. These were the largest protests that Inner Mongolia had experienced in twenty years. A few months later Dr. Roberts arrived in Beijing for the summer. During discussions with a friend she brought up the Inner Mongolia protest. Her friend could not recollect the event, saying that she had not heard of it. A few minutes later, she remembered that a friend of hers had mentioned something about it. But when she looked for information online, she could not find any, so she assumed that the protest itself could not have been that important.

This is what happened. Bloggers who posted information about the protest online had their posts quickly removed from the Internet by censors. As local media were not reporting on the event, any news of the protest was reported mainly by foreign sources, many of which had been blocked by the Great Firewall. Even for the media, information was difficult to come by, as reporting on the protests on the ground had been banned, and the local Internet had been shut off by the government.

Dr. Roberts noted that information about the protest was not impossible to find on the Internet. She had been able to follow news of the protests from Boston and even from within China. The simple use of a Virtual Private Network and some knowledge of which keywords to search for had uncovered hundreds of news stories about the protests. But her friend, a well-to-do, politically interested, tech-savvy woman, was busy, and Inner Mongolia is several hundred miles away. So after a cursory search that turned up nothing, she concluded that the news was either unimportant or non-existent.

Another of her friends was very interested in politics and followed political events closely. She was involved in multiple organizations that advocated for gender equality and was an opinionated feminist. Because of her feminist activism, Dr. Roberts asked her whether she had heard of the five female activists who had been arrested earlier that year in China, including in Beijing, for their involvement in organizing a series of events meant to combat sexual harassment. The arrests of these five women had been covered extensively in the foreign press and had drawn an international outcry. Articles about the activists had appeared in the New York Times and on the BBC. Multiple foreign governments had called for their release. But posts about their detention were highly censored and the Chinese news media were prohibited from reporting on it. Her friend, who participated in multiple feminist social media groups and had made an effort to read Western news, still had not heard about their imprisonment.

Dr. Roberts kept encountering examples like these, where people living in China exhibited surprising ignorance about Chinese domestic events that had made headlines in the international press. They had not heard that the imprisoned Chinese activist Liu Xiaobo had won the Nobel Peace Prize. They had not heard about major labor protests that had shut down factories or about bombings of local government offices. Despite the possibility of accessing this information, without newspapers, television, and social media blaring these headlines, they were much less likely to come across these stories.

Content filtering is one of the Chinese censorship methods. This involves the selective removal of social media posts in China that are written on the platforms of Chinese-owned internet service providers. The government does not target criticism of government policies, but instead removes all posts related to collective action events, activists, criticism of censorship, and pornography. Censorship focuses on social media posts that are geo-located in more restive areas, like Tibet. The primary goal of government censorship seems to be to stop information flow from protest areas to other areas of China. Since large-scale protest is known to be one of the main threats to the Chinese regime, the Chinese censorship program is preventing the spread of information about protests in order to reduce their scale.

Despite extensive content filtering, if users were motivated and willing to invest time in finding information about protests, they could overcome information friction to find such information. Information is often published online before it is removed by Internet companies. There usually is a lag of several hours to a day before content is removed from the Internet.

Even with automated and manual methods of removing content, some content is missed. And if the event is reported in the foreign press, Internet users could access information by jumping the Great Firewall using a VPN.

The structural frictions of the Great Firewall are largely effective. Only the most dedicated “jump” the Great Firewall. Those who jump the Great Firewall are younger and have more education and resources. VPN users are more knowledgeable about politics and have less trust in government. Controlling for age, having a college degree makes a user 10 percentage points more likely to jump the Great Firewall. Having money is another factor that increases the likelihood of jumping the Great Firewall. 25% of those who jump the Great Firewall say they can understand English, as compared with only 6% of all survey respondents. 12% of those who jump work for a foreign-based venture, compared to only 2% of all survey respondents. 48% of the jumpers have been abroad, compared with 17% of all respondents.

The government has cracked down on some notable websites. Google began having conflicts with the Chinese government in 2010. Finally, in June 2014, the Chinese government blocked Google outright.

Wikipedia was first blocked in 2004. Particular pages have long been blocked, but the entire Wikipedia website has occasionally been made inaccessible to Chinese IP addresses.

Instagram was blocked from mainland Chinese IP addresses on September 29, 2014, due to its increased popularity among Hong Kong protesters.

Censorship of the Chinese Internet

June 28, 2018

This is the third post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.” The arrival of the web in 1995, following the Tiananmen crackdown, complicated the government’s ability to control the gatekeepers of information as channels of information transitioned from a “one to many” model, where a few media companies transferred information to many people, to a “many to many” model, where everyday people could contribute to media online and easily share news and opinions with each other. If the government had been intent on complete control over the information environment, one would expect it to have tried to slow the expansion of the Internet within the country. Instead, the government actively pursued it. The Chinese government aggressively expanded Internet access throughout the country and encouraged online enterprises, as the CCP saw these as linked to economic growth and development.

As it pursued greater connectivity, the government simultaneously developed methods of online information control that allowed it to channel information online. The government issued regulations for the Internet in 1994, stipulating that the Internet could not be used to hurt the interests of the state. Immediately, the state began developing laws and technology that allowed it more control over information online, including filtering, registration of online websites, and capabilities for government surveillance.

The institutions that now implement information control in China for both news media and the Internet are aimed at targeting large-scale media platforms and important producers of information in both traditional and online media, making it more difficult for the average consumer to come across information that the Chinese government finds objectionable. The CCP also maintains control over key information channels to be able to generate and spread favorable content to citizens. The CCP’s control over these information providers gives it the flexibility to make censorship restrictions more difficult to penetrate during particular periods and to loosen constraints during others. This censorship system acts as a tax on information on the Internet, allowing the government to have it both ways: because information remains possible to access, those who care enough (such as entrepreneurs, academics, and those with international business connections) will bypass control and find the information they need, while for the masses the impatience that accompanies surfing the web makes the control effective even though it is porous.

In 2013 President Xi Jinping upgraded the State Internet Information Office to create a new, separate administration for regulating Internet content and cyberspace. This office was called the Cyberspace Administration of China (CAC), which was run by the Central Cybersecurity and Informatization Leading Small Group and personally chaired by Xi Jinping. Xi was worried that there were too many bureaucracies in control of regulating the Internet, so he formed the CAC to streamline Internet control. The CAC sought to more strictly enforce censorship online. This included shutting down websites that did not comply with censorship regulations and increasing the prevalence of the government’s perspective online by digitizing propaganda. This showed the importance the Xi administration placed on managing content on the internet.

These institutions use a variety of laws and regulations to control information in their respective purviews. These laws tend to be relatively ambiguous, giving the state maximal flexibility in their enforcement. Censorship disallows a wide range of political discourse, including anything that “harms the interests of the nation,” “spreads rumors or disturbs social order,” “insults or defames third parties,” or “jeopardizes the nation’s unity.” Due to widespread discussion of protest events and criticism of the government online, the government cannot possibly arrest all those who violate a generous interpretation of this law. These institutions keep a close watch particularly on high-profile journalists, activists, and bloggers, developing relationships with these key players to control content and arresting those they view as dangerous. These activities are facilitated by surveillance tools that require users to register for social media with their real names and require Internet providers to keep records of users’ activities. Since Xi Jinping became president in 2012, additional laws and regulations have been written to prevent “hacking and Internet-based terrorism.”

The government can not only order traditional media to print particular articles and stories, it also retains flooding power on the Internet. The Chinese government hires thousands of online commentators to write pseudonymously at its direction. This is the Fifty Cent Party, an army of Internet commentators who work at the instruction of the government to influence public opinion during sensitive periods. These propagandists are largely instructed to promote positive feelings, patriotism, and a positive outlook on governance. They are unleashed during particularly sensitive periods as a form of distraction. This is in line with President Xi’s own statements that public opinion guidance should promote positive thinking and “positive energy.” They also sometimes defame activists or counter government criticism.

Since the government focuses control on gatekeepers of information rather than on individuals, from the perspective of an ordinary citizen in China the information control system poses few explicit constraints. For those who are aware of censorship and are motivated to circumvent it, censorship poses an inconvenience rather than a complete constraint on their freedom. While minimizing the perception of control, the government is able to wield significant influence over which information citizens will come across.

Modern History of Information Control in China

June 27, 2018

This is the second post based on Margaret E. Roberts’ “Censored: Distraction and Diversion Inside China’s Great Firewall.”

Censorship under Mao (1949-1976)
Under Mao the Chinese government exercised authority in all areas of citizens’ lives. The Party regarded information control as a central component of political control, and Party dogma, ideology, and doctrine pervaded every part of daily routine. Propaganda teams were placed in workplaces and schools to carry out work and education in the spirit of party ideology and to implement mass mobilization campaigns. Ordinary citizens were encouraged to engage in self-criticism—publicly admitting and promising to rectify “backward” thoughts.

Under Mao the introduction of “thought work” into everyday life meant that fear played a primary role in controlling information, and each citizen was aware of political control over speech and fearful of the consequences of stepping over the line. Everyday speech could land citizens in jail or worse.

During this period China, closed off from the Western world in an information environment completely controlled by the state, had among the most “complete” control of information a country could muster, akin to today’s North Korea.

Even with ideological uniformity and totalitarian control based on repression, both the Communist Party and the Chinese people paid a high price for highly observable forms of censorship that control citizens through brainwashing and deterrence. Citizens’ and officials’ awareness of political control stifled the government’s ability to gather information on the performance of policies, contributing to severe problems of economic planning and governance. The Great Leap Forward, in which about thirty million people died of starvation in the late 1950s, has been partially attributed to local officials’ fear of reporting actual levels of grain production to the center, which led them to report inflated numbers. Even after the Great Leap Forward, the inability of the Chinese bureaucracy to extract true economic reports from local officials and citizens led to greater economic instability and failed economic policies and plans.

This extensive control also imposed explicit constraints on economic growth. Large amounts of trade with other countries were not possible without loosening restrictions on the exchange of information with foreigners. Innovation and entrepreneurship require risk-taking, creativity, and access to the latest technology, which are all difficult under high levels of fear that encourage risk-aversion. Millions of people were given class labels that made them second-class citizens or were imprisoned in Chinese gulags that prevented them from participating in the economy. Frequently, those who were persecuted had high levels of education and skills that the Chinese economy desperately needed. The planned economy, in concert with high levels of fear, stifled economic productivity and kept the vast majority of Chinese citizens in poverty.

Even in a totalitarian society with little contact with the outside world, government ideological control over the everyday lives of citizens decreased the government’s legitimacy and sowed seeds of popular discontent. Mao’s goal of ideological purity led him to encourage the Cultural Revolution, a decade-long period of chaos in China based on the premise of weeding out ideologically incorrect portions of society. In the process it killed millions of people and completely disrupted social order. The chaos of the Cultural Revolution, combined with resentment toward the extreme ideological left in the Chinese political system that had spawned it, created openings for dissent. In 1974, a poster written in Guangzhou under a pseudonym called explicitly for reform. Similar protests followed. During the first Tiananmen incident in 1976, thousands of people turned out to protest the ideological left. Several years later, in the Democracy movement of 1978 and 1979, protesters explicitly called for democracy and human rights, including free speech.

Censorship Reform Before 1989
In 1978, when Deng Xiaoping gained power, he initiated policies of reform and opening that were in part a reaction to the intense dissatisfaction of Chinese citizens with the Cultural Revolution and the prying hand of the government in their personal affairs. A hallmark of Deng’s transition to a market economy, which began in 1978, was the government’s retreat from the private lives of citizens and from the control of the media. Leaders within Deng’s government realized the trade-offs between individual control and the entrepreneurship, creativity, and competition required by the market, and they decreased the government’s emphasis on the ideological correctness of typical citizens in China. In the late 1970s and early 1980s, the Chinese Communist Party (CCP) rehabilitated those who had been political victims during the Cultural Revolution. Class labels were removed and political prisoners were released, thus enabling more than twenty million additional people to participate in the economy, many of whom had high levels of education. It has been noted that the “omnipresent fear” that had been common in the Mao era lessened, and personal relationships again became primarily private and economic. At first citizens began to criticize the government and express dissatisfaction privately, but later more publicly.

Not only did the government retreat from the private lives of individuals to stimulate the economy and address dissatisfaction, but it also loosened its control over the media in order to reduce its own economic burden in the information industry. As other aspects of the Chinese economy privatized, the government began to commercialize the news media to respond to citizens’ demands for entertainment and economic, international, and political news. This proved to be extremely lucrative for Chinese media companies. The lessened control also allowed Chinese media to compete with the new onslaught of international information that began to pour in as international trade and interactions increased, and Chinese media companies were able to innovate to retain market share in an increasingly competitive information environment.

In the 1980s there was an increasing decentralization of the economy from the central Party planning system to the localities. As the government began to decentralize its control, it began to rely on the media to ensure that local officials were acting in the interest of the Party. Watchdog media could help keep local businesses, officials, and local courts in check. Investigative journalism serves citizens by exposing the defective aspects of its own system. Freer media in a decentralized state can serve the government’s own interest as much as it can serve the interests of citizens.

The CCP did take significant steps toward relaxing control over the flow of information in the 1980s and loosening enforcement over speech, particularly compared with the Maoist era. By 1982, the Chinese constitution guaranteed free speech and expression for all Chinese citizens, including freedom of the press, assembly, and demonstration. Commercialization of Chinese newspapers began in 1979 with the first advertisement, and gradually the press began making more profit from the sales of advertising and less from government subsidies. Radio and television, which had previously been controlled by the central and provincial levels of government, expanded rapidly to local levels of government and were also commercialized.

In April 1989 the death of Hu Yaobang sparked the pro-democracy protests centered in Beijing’s Tiananmen Square. These protests spread all over China, culminating in an internal CCP crisis and a large-scale violent crackdown on protesters on June 4, 1989, that was condemned internationally.

Not surprisingly, this June 4 crisis marked a turning point in government strategy with respect to the media and the press. There was widespread consensus among Party elites after the crackdown that the loosening of media restrictions had aggravated the student demonstrations. During the months of protests, reformers within the Party had allowed and even encouraged newspapers to discuss the protests. In the immediate aftermath of the crackdown on the protesters and the clearing of the square on June 4, 1989, censorship ramped up quickly. This large-scale crackdown on journalists, activists, and academics reintroduced widespread fear into the private lives of influential individuals, particularly among those who had been involved in the protest events. China was returning to the model of media serving the Party and expressing enthusiasm for government policies.

Post-Tiananmen: Control Minimizing the Perception of Control

Although the belief among government officials that free media had contributed to unrest prevented the CCP from returning to the extent of press freedom before Tiananmen Square, Deng did not return to the version of pre-reform information control that relied on fear-based control of individuals’ everyday lives, and he quickly reversed the post-Tiananmen crackdown on speech. Instead, government policy evolved toward a censorship strategy that attempted to minimize the perception of information control among ordinary citizens while still playing a central role in prioritizing information for the public. The government strengthened mechanisms of friction and flooding while for the most part staying out of the private lives of citizens. A few years after Tiananmen Square, the CCP returned to an apparent loosening of control, and commercialization of the media resumed in the mid-1990s. After Deng’s “Southern Tour” in 1992, meant to reemphasize the economy, broader discussions and criticisms of the state were again allowed, even publicly and even about democracy.

Even though the government did not return to Maoist-era censorship, it tightened its grip on the media, officials, journalists, and technology in a way that allowed targeted control: by managing the gatekeepers of information, the government could de-prioritize information unfavorable to itself and expand its own production of information to compete with independent sources. The government strengthened institutional control over the media. The CCP created stricter licensing requirements to control the types of organizations that could report news. It also required that journalists apply for press cards, which required training in government ideology. In spite of extensive commercialization that created the perception among readers that news was driven by demand rather than supply, the government retained control over the existence, content, and personnel decisions of newspapers throughout the country, allowing the government to effectively, if not always explicitly, control publishing.

The government proactively changed its propaganda strategies after Tiananmen Square, adapting Western theories of advertising and persuasion and linking thought work with entertainment to make it more easily understood by the public. The CCP decided to instruct newspapers to follow Xinhua’s lead on important events and international news, much as it had done with the People’s Daily during the 1960s. In the 1990s, the Party also renewed its emphasis on “patriotic education” in schools around the country, ensuring that the government’s interpretations of events were the first interpretations of politics that students learned.

Censored

June 26, 2018

The title of this post is identical to the title of an important and highly relevant book by Margaret E. Roberts. The subtitle is “Distraction and Diversion Inside China’s Great Firewall.” This book is of special interest to HM. A number of summers back, HM was privileged to participate in a month-long workshop on the effect of new technology on two countries: China and Iraq. The workshop included intelligence professionals, technology professionals, linguists, and experts on these specific topics. Why they were interested in a psychologist like HM was not clear to him, although it was a most stimulating month, and HM hopes he was able to make some contributions.

This book makes clear the sophisticated means that China uses to control information in the country. These were vaguely understood from the workshop, but Dr. Roberts brings them into clear view.

“China has four million websites, with nearly 700 million Internet users, 1.2 billion mobile phone users, and 600 million WeChat and Weibo users, and it generates 30 billion pieces of information every day. It is not possible to apply censorship to this enormous amount of data. Thus censorship is not the correct word choice. But no censorship does not mean no management.” So said Lu Wei, Director of the State Internet Information Office, China, in December 2015. As the former “gatekeeper of the Chinese Internet,” Lu Wei stresses in his epigraph that the thirty billion pieces of information generated each day by Chinese citizens quite simply cannot be censored.

So China has developed what is termed “porous” censorship. Dr. Roberts writes, “…most censorship methods implemented by the Chinese government act not as a ban but as a tax on information, forcing users to pay money or spend more time if they want to access the censored material. For example, when the government ‘kicked out’ Google from China in 2010, it did so simply by throttling the search engine so it loaded only 75% of the time.” So if you wanted to use Google, you just needed to be more patient. China’s most notorious censorship intervention, which blocked a variety of foreign websites from Chinese users, could be circumvented by downloading a Virtual Private Network (VPN). Chinese social media users circumvent keyword censoring of social media posts by substituting similar words that go undetected for the words that the government blocks. This makes content accessible as long as you spend more time searching. Newspapers are often instructed by censors to put stories on the back pages of the newspaper, where access is just a few more flips of the page away. This technique is termed “friction” because it seriously slows, but does not eliminate, access to the information. Porous censorship is neither unique to China nor to the modern time period. Iran has been known simply to throttle information accessibility and make it slower during elections.
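To make the “tax on information” idea concrete, here is a minimal, hypothetical sketch in Python. It is not from Dr. Roberts’ book; the 75% load rate comes from the Google example quoted above, while the number of users and the patience distribution are invented for illustration. The point is only that a page that is merely throttled, never banned, still loses most of its impatient readers.

```python
import random

def loads_until_success(success_prob, patience, rng):
    """Return (attempts_made, reached_page) for one user who gives up
    after `patience` failed page loads."""
    for attempt in range(1, patience + 1):
        if rng.random() < success_prob:
            return attempt, True
    return patience, False

def simulate(success_prob=0.75, n_users=100_000, seed=0):
    rng = random.Random(seed)
    reached = 0
    total_loads = 0
    for _ in range(n_users):
        # Made-up patience distribution: most users tolerate only a failure or two.
        patience = rng.choice([1, 1, 2, 2, 3, 5, 10])
        loads, ok = loads_until_success(success_prob, patience, rng)
        total_loads += loads
        reached += ok
    print(f"share of users who ever see the page: {reached / n_users:.1%}")
    print(f"average loads per user (the 'tax'):   {total_loads / n_users:.2f}")

if __name__ == "__main__":
    simulate()
```

Under these made-up assumptions, roughly one in ten simulated users never reaches the throttled page at all, and every successful reader pays a time tax in extra page loads, which is the essence of porous censorship.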

The Russian government also uses armies of online bots and commentators to flood opposition hashtags, and make it more difficult, but not impossible, for people to find information on protests or opposition leaders. This technique is termed “flooding.” Essentially users are flooded and drown in information.

Conventional wisdom is that these porous censorship strategies are futile for governments because citizens learn quickly to circumvent censorship that is not complete or enforced. Conventional wisdom is wrong. Many governments that have the capacity to enforce censorship more forcefully choose not to do so. Using censorship that taxes, rather than prohibits, information in China and in other countries around the world is a design choice, not an operational flaw.

The trade-offs between the benefits and costs of repression and censorship are often referred to as “the dictator’s dilemma.” One form of the dictator’s dilemma is when the government would like to enforce constraints on public speech, but repression could backfire against the government. Censorship could be seen as a signal that the authority is trying to conceal something and is not in fact acting as an agent for citizens.

Another form of the “dictator’s dilemma” is that even if the dictator would like to censor, by censoring the autocrat has more difficulty collecting precious information about the public’s view of the government. Fear of punishment scares the public into silence, and this creates long-term information collection problems for governments, which have an interest in identifying and solving problems of governance that could undermine their legitimacy. Greater transparency facilitates central government monitoring of local officials, ensuring that localities are carrying out central directives and not mistreating citizens. Allowing citizens to express grievances online also allows the government to predict and prevent the organization of protests.

What could perhaps be considered a third “dictator’s dilemma” is that censorship can have economic consequences that are costly for authoritarian governments that retain legitimacy from economic growth. Communications technologies facilitate markets, create greater efficiencies, lead to innovation, and attract foreign direct investment. Censorship is expensive—government enforcement or oversight of the media can be a drag on firms and requires government infrastructure. Economic stagnation and crises can contribute to the instability of governments. Censorship can exacerbate crises by slowing the spread of information that protects citizens. When censorship contributes to crises and economic stagnation, it can have disastrous long-term political costs for governments.

So “porous” censorship is much more efficient than heavy-handed control of virtually all information through inducing fear in users.

Putin’s Peaks

June 25, 2018

The title of this post is identical to the title of an article by Dmitry Kobak, Sergey Shpilkin, and Maxim S. Pshenichnikov in the June 2018 issue of “Significance.” “Significance” is a joint publication of the Royal Statistical Society and the American Statistical Association. The subtitle of the article is “Russian election data revisited.”

The article states that the Kremlin wanted a golden 70-70 win, meaning 70% of the vote with a turnout of 70%, to give it a clear mandate and provide it with a riposte to Western leaders who criticize Russia as an autocracy. What was actually achieved was a seemingly respectable turnout of 67.5%, with Putin securing 76.7% of the vote. But there have been criticisms of the election process, and doubts have been cast over the validity of the outcome. For instance, Golos, an election monitoring organization, has documented incidents of ballot stuffing at various polling stations, and multiple other violations both before and during the election (bit.ly/2HawRD3). At least since the mid-2000s, presidential and parliamentary elections in Russia have been accused of being fraudulent. From the Russian perspective, the two most important numbers that describe an election outcome are the turnout percentage and the leader’s result percentage. Although these percentages are not reported in the data sets from individual polling stations, they can be calculated from the information provided officially.

The authors (and others, including HM) have argued that, due to human attraction to round numbers, large-scale attempts to manipulate reported turnout or the leader’s results would likely show up as frequent whole (integer) percentages in the election data. A previous “Significance” article gave the hypothetical example of a polling station with 1755 registered voters. Here election officials decide to forge the results and report a turnout of 85%. 85% was chosen because it is a round number, which is more appealing than, say, 83.27%. To achieve a falsified turnout of 85%, this polling station needs to report 1755 x 0.85 ≈ 1492 ballots cast. Other polling stations making similar attempts at fraud may also choose 85% as their target value, so that when we look at the turnout percentages for all polling stations, we see a noticeable spike in the number of stations with a turnout of exactly 85%. In a previous article these integer peaks were found in elections from 2004 to 2012.

Since then two new elections have been held in Russia: the 2016 parliamentary and the 2018 presidential elections. As with previous elections, sharp periodic peaks are clearly visible at integer values (91%, 92%, and 93%) and at round integer values (80%, 85%, and 90%) rather than at fractional values (such as 91.3%).

The authors ran Monte Carlo simulations of election results using a binomial distribution of ballots at every polling station. The simulations strongly confirmed the hypothesis that results were being rounded to the benefit of the government. The authors note that the integer peaks in the election data do not originate uniformly across all parts of Russia; they are mostly localized in the same administrative regions, providing additional evidence that these are not natural phenomena. Specific peaks can sometimes be traced to a particular city, or even an electoral constituency within a city, where turnout and/or the leader’s results are nearly identical at a large number of polling stations. The most prominent example from the last two elections was the city of Saratov in 2016. Its polling stations are the sole contributor to the sharp turnout peak at 64.3% and the leader’s result peak at 62.2%. These peaks are not integer and so are not counted towards the anomalies. Curiously, their product—the fraction of the leader’s votes with respect to the total number of registered voters—is a perfectly round 40%: 0.643 x 0.622 ≈ 0.400.
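The article does not include code, but the logic of the check can be sketched. What follows is a hedged illustration, not the authors’ actual implementation: simulate the ballots cast at each polling station as a binomial draw at that station’s own rate, count how many stations land within a small tolerance of a whole-number turnout percentage purely by chance, and compare that reference distribution with the count observed in the official data. All of the numbers below (station counts, rate distribution, the 0.05 tolerance) are invented for illustration.

```python
import numpy as np

def count_integer_stations(turnout_pct, tol=0.05):
    """Count polling stations whose turnout percentage falls within `tol`
    of a whole number (e.g. 84.96%-85.04% for the 85% bin)."""
    return int(np.sum(np.abs(turnout_pct - np.round(turnout_pct)) < tol))

def monte_carlo_reference(registered, rates, n_sims=1000, seed=0):
    """How many 'integer' stations arise by chance if the ballots cast at
    each station are a binomial draw at that station's own true rate."""
    rng = np.random.default_rng(seed)
    counts = np.empty(n_sims, dtype=int)
    for i in range(n_sims):
        ballots = rng.binomial(registered, rates)
        counts[i] = count_integer_stations(100.0 * ballots / registered)
    return counts

# Illustrative fake data; a real analysis would use official station-level results.
rng = np.random.default_rng(1)
registered = rng.integers(500, 3000, size=50_000)
rates = np.clip(rng.normal(0.65, 0.10, size=50_000), 0.05, 0.99)

reference = monte_carlo_reference(registered, rates)
print(f"integer stations expected by chance: {reference.mean():.0f} ± {reference.std():.0f}")
# If the count in the observed data were many standard deviations above this
# reference, that would flag the kind of integer peaks described above.
```

An analysis like the one described in the article would also repeat the check for the leader’s result percentage and look at where the anomalous stations are located.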

One could regard these discrepancies as relatively innocuous, assuming that the reported results otherwise accurately reflect the underlying true vote. But the suspicion is that the results are significantly modified to get close to the golden 70-70.

It will be interesting to see whether this integer bias persists in future voting summaries. It is disappointing to see such a “rookie” flaw in a country noted for phony elections.

Russia’s newly developed strength is in influencing elections via technology. It has been discussed in previous healthy memory blog posts how Russia developed this new type of warfare. It began in homeland Russia. It was developed further in Russian-speaking countries and in Ukraine. And it has now been exported to Europe, where it is credited by some for the Brexit result, and to the United States, where it is credited by some (former DNI Clapper and HM, at least) for Trump’s victory.

Moreover, Russia is perfecting this new form of warfare and is promising its continuance. There is much talk of the upcoming midterm elections in the United States, yet nary a word about Russian interference. Trump is not taking any actions to safeguard these elections, which is perfectly understandable as Russian interference benefits the invertebrates supporting him. Even if the Russians are not entirely successful in benefitting Trump, just a small amount of interference could call into question the validity of the elections.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Possible Outcomes

May 22, 2018

This is the final post in this series. Unfortunately, Hayden does not come to any real conclusions at the end of “The Assault on Intelligence: American Security in an Age of Lies.” He just rambles on and on. From a career intelligence professional, one could expect better. He has made a career of dealing with large amounts of data of varying credibility and coming to conclusions, or at least to different possible outcomes weighted differently. But here he didn’t. So please tolerate HM’s offerings.

The president has already tweeted that the entire Department of Justice is the deep state. He has also told a New York Times reporter, “I have an absolute right to do what I want to do with the Justice Department.” Two conclusions can be drawn here.
Trump is woefully ignorant of the Constitution and of what he can do.
Russia’s new way of conducting warfare has been highly successful.

Should the Democrats win back the House and the Senate, Trump can be impeached and removed from office. However, this is a goal that is difficult to achieve, and likely impossible given Russian interference, which has been promised and which Trump is going to do nothing to prevent.

Mueller can finish his report and provide it to Congress. It is likely that Republicans would not be impressed by compelling evidence of obstruction of justice.

But what about conspiring with Russia to win the election? The United States has spent large amounts on defense. But to what end if the Russians have effectively captured the White House? Trump worships Putin and would gladly serve as his lap dog.

And suppose it is discovered that Trump owes large amounts of money to Russia and that Putin effectively owns him?

What happens in these latter two cases rests solely with the Republicans. Too many Republicans have been influenced by Russia’s new form of warfare and are doing everything they can to subvert Mueller’s work. They have already produced a biased report that excludes Democratic input and exonerates the president.

Similarly, if Trump fires Mueller and tries to close down the investigation, the question is how will Republicans respond to this constitutional crisis? If they’re complacent and do nothing, our democracy effectively goes down the drain. Trump is likely to declare himself President for life, and Russia would effectively occupy the Oval Office.

The Russians are generations ahead of the United States in this form of warfare. If this were an old-fashioned shooting war, all Americans would be enraged and the country would be up in arms. But the type of highly effective warfare to which the Russians have advanced involves the human mind. Some US citizens are losing interest in Mueller’s investigation and are tired of it lasting so long. They seem not to care that they would be losing the White House to the Russians. All this requires thinking, that is, System 2 processing. System 1 processing, which is feeling, believing, not thinking, and being oblivious of the truth, is so much easier.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Trump, Russia, and Truth (Cont.)

May 21, 2018

This post is a continuation of the post of the same title, based on the book by Michael Hayden titled “The Assault on Intelligence: American Security in an Age of Lies.” This is the third post in the series.

Garry Kasparov, the Soviet chess champion turned Russian dissident, outlined the progression of Putin’s attacks. They were developed and honed first in Russia, and then with Russian-speaking people nearby, before expanding to Europe and the U.S. These same Russian information operations have been used to undercut democratic processes in the United States and Europe, and to erode confidence in institutions like NATO and the European Union.

Hayden notes, “Committed to the path of cyber dominance for ourselves, we seemed to lack the doctrinal vision to fully understand what the Russians were up to with their more full-spectrum information dominance. Even now, many commentators refer to what the Russians did to the American electoral process as a cyber attack, but the actual cyber portion of that was fairly straightforward.”

Hayden writes, “Evidence mounted. The faux personae created at the Russian bot farm—the Saint Petersburg-based Internet Research Agency—were routinely represented by stock photos taken from the internet, and the themes they pushed were consistently pro-Russian. There was occasional truth to their posting, but clear manipulation as well, and they all seemed to push in unison.

The Russians knew their demographic. The most common English words in their faux Twitter profiles were “God,” “military,” “Trump,” “family,” “country,” “conservative,” “Christian,” “America,” and “Constitution.” The most commonly used hashtags were #nuclear, #media, #Trump, and #Benghazi…all surefire dog whistles certain to create trending.”

It was easy for analysts to use smart algorithms to determine whether something was trending because of genuine human interaction or simply because it was being pushed by the Russian botnet. Analysts could see that the bots ebbed and flowed based upon the needs of the moment. Analysts tried to call attention to this, but American intelligence did not seem to be interested.
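Neither Hayden nor the analysts he cites spell out these algorithms, so the following is only a plausible sketch of the kind of signal they might compute: whether activity on a hashtag is concentrated in a small set of accounts posting at suspiciously regular intervals. The feature choices, the top 1% cutoff, and any thresholds a reader might apply to the outputs are assumptions for illustration, not anything from the book.

```python
from collections import Counter
from statistics import mean, pstdev

def coordination_signals(posts):
    """posts: list of (account_id, unix_timestamp) pairs for one hashtag.
    Returns two rough signals:
      * top_share - fraction of all posts produced by the top 1% of accounts
      * gap_cv    - coefficient of variation of the gaps between consecutive
                    posts (unusually low values suggest scheduled, bot-like posting)
    """
    counts = Counter(account for account, _ in posts)
    top_n = max(1, len(counts) // 100)
    top_share = sum(c for _, c in counts.most_common(top_n)) / len(posts)

    times = sorted(t for _, t in posts)
    gaps = [b - a for a, b in zip(times, times[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return top_share, float("inf")
    gap_cv = pstdev(gaps) / mean(gaps)
    return top_share, gap_cv
```

Genuinely viral topics tend to show many distinct accounts and irregular, bursty timing; a coordinated botnet push tends to show the opposite on both measures, which is consistent with the ebb and flow the analysts observed.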

Analyst Clint Watts characterized 2014 as a year of capability development for the Russians and pointed to a bot-generated petition movement calling for the return of Alaska to Russia that got more than forty thousand supporters while helping the Russians build their cadre and perfect their tactics. With that success in hand, in 2015 the Russians started a real push toward the American audience by grabbing any divisive social issue they could identify. They were particularly attracted to issues generated from organic American content, issues that had their origin in the American community. Almost by definition, issues with a U.S. provenance could be portrayed as genuine concerns to America, and they were already preloaded in the patois of the American political dialogue, which included U.S.-based conspiracy theorists.

Hayden writes, “And Twitter as a gateway is easier to manipulate than other platforms since on Twitter we voluntarily break down into like-minded tribes, easily identified by our likes and by whom we follow. Watts says that the Russians don’t have to “bubble” us—that is, create a monolithic information space friendly for their messaging. We have already done that to ourselves since, he says, social media is as gerrymandered as any set of state electoral districts in the country. Targeting can become so precise that he considers social media “a smart bomb delivery system.” In Senate testimony, Watts noted that with tailored news feeds, a feature rather than a bug for those getting their news online, voters see “only stories and opinions suiting their preferences and biases—ripe conditions for Russian disinformation campaigns.”

Charlie Sykes believes “many Trump voters get virtually all their information from inside the bubble…Conservative media has become a safe space for people who want to be told they don’t have to believe anything that is uncomfortable or negative…The details are less important than the fact that you’re being persecuted, you’re being victimized by people you loathe.”

What we have here is an ideal environment for System 1 processors. They can feed their emotions and beliefs without ever seeing any contradicting information that would cause them to think and invoke System 2 processing.

Republican Max Boot railed against the Fox network as “Trump TV,” Trump’s own version of RT, and against its prime-time ratings czar Sean Hannity as “the president’s de facto minister of information.” Hayden says that there are what he calls genuine heroes on the Fox network, like Shepard Smith, Chris Wallace, Charles Krauthammer, Bret Baier, Dana Perino, and Steve Hayes, but for the most part he agrees with Boot. Hannity gave a platform to WikiLeaks’ Julian Assange shortly before Trump’s inauguration, traveling to London to interview him at the Ecuadorian embassy, where Assange had taken refuge from authorities following a Swedish rape allegation.

Hayden writes, “When the institutions of the American government refuse to kowtow to the president’s transient whim, he sets out to devalue and delegitimize them in a way rarely, if ever, seen before in our history. A free (but admittedly imperfect) press is “fake news,” unless, of course, it is Fox; the FBI is in “tatters,” led by a “nut job” director and conducting a “witch hunt”; the Department of Justice, and particularly the attorney general, is weak, and so forth.”

It is clear that Trump has experience only with “family” business, where personal loyalty reigns supreme. He has no experience with government and is apparently ignorant of the separation of the three branches of government: legislative, judicial, and executive. The judicial and legislative branches are to be independent of the executive.

Apparently the White House lawyer, Ty Cobb, asked Trump whether he was guilty. Obviously, Trump said he was innocent, so Cobb told Trump to cooperate with Mueller and that would establish his innocence quickly and he could devote full time to his presidential duties.

Obviously, he is not innocent. On television he told Lester Holt that the reason he fired Comey was that Comey would not back off the Russia investigation. In other words, he has already been caught obstructing justice.

During the campaign he requested Hillary’s emails from the Russians. So he was conspiring with the Russians and this conspiracy was successful as he did indeed get the emails.

There are also questions regarding why he is so reluctant to take any action against Russia. One answer is that it is clearly in Trump’s interest for the Russians to interfere in the midterm elections, as he is concerned that the Democrats could regain control of both the House and the Senate, which would virtually guarantee that he would be impeached.

A related question regards his finances. Why has he never released his tax forms? There are outstanding debts that are not accounted for, and he seems to be flush with cash, but from where? The most parsimonious answer to this question is that he is in debt to Putin. In other words, Putin owns him.

We do not know what evidence Mueller has, but it appears that it is very large.

And Trump is behaving like a guilty person. Of course he denies his guilt and proclaims his innocence vehemently, but this only makes him appear guilty. He is viciously attacking the government and the constitution to discredit them, since he will not be able to prove his innocence. And the Russians have and will continue to provide the means for helping him try to discredit the justice system, the intelligence community, and the press.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Trump, Russia, and Truth

May 20, 2018

The title of this post is identical to the title of a chapter in “The Assault on Intelligence: American Security in an Age of Lies.” This book is by Michael V. Hayden who has served as the directors of both the National Security Agency (NSA) and the Central Intelligence Agency (CIA). This is the second post in the series.

In 2017 a detailed story in “Wired” magazine about how Russia was subverting U.S. democracy cited a European study that found that rather than trying to change minds, the Russian goal was simply “to destroy and undermine confidence in Western media.” The Russians found a powerful ally in Trump, who attacked American institutions with as much ferocity as did Russian propaganda, as when he identified the press as the “enemy of the American people.” The attack on the media rarely argued facts. James Poniewozik of the New York Times wrote in a 2017 tweet that Trump didn’t try to argue the facts of a case—“just that there is no truth, so you should just follow your gut & your tribe.”

Wired also pointed out the convergence between the themes of the Russian media/web blitz and the Trump campaign: Clinton’s emails, Clinton’s health, rigged elections, Bernie Sanders, and so forth. And then there was an echo chamber between Russian news and American right-wing outlets, epitomized by the claim that DNC staffer Seth Rich was somehow connected to the theft of DNC emails and the dumping of them on WikiLeaks—that it was an inside job and not connected to Russia at all.

Hayden writes, “Trump seemed the perfect candidate for the Russians’ purpose, and that was ultimately our choice, not theirs. But the central fact to be faced and understood here is that Russians have gotten very good indeed at invading and often dominating the American information space. For me, that story goes back twenty years. I arrived in San Antonio, TX, in January 1996 to take command of what was then called the Air Intelligence Agency. As I’ve written elsewhere, Air Force Intelligence was on the cutting edge of thinking about the new cyber warfare, and I owed special thanks to my staff there for teaching me so much about this new battle space.”

“The initial question they asked was whether we were in the cyber business or the information dominance business? Did we want to master cyber networks as a tool of war or influence or were we more ambitious, with an intent to shape how adversaries or even societies received and processed all information? As we now have a Cyber Command and not an information dominance command, you can figure how all this turned out. We opted for cyber; Russia opted for information dominance.”

The Russian most interested in that capacity was General Valery Gerasimov, an armor officer who, after combat in the Second Chechen War, served as the commander of the Leningrad and then the Moscow military districts. Writing in 2013, Gerasimov pointed to the “blurring [of] the lines between the state of war and the state of peace” and—after noting the Arab Awakening—observed that “a perfectly thriving state can, in a matter of months and even days, be transformed into an arena of fierce armed conflict…and sink into a web of chaos.”

Gerasimov continued, “The role of nonmilitary means of achieving political and strategic goals has grown,” and the trend now was “the broad use of political, economic, informational, humanitarian, and other nonmilitary measures—applied in coordination with the protest potential of the population.” He saw large clashes of men and metal as a “thing of the past.” He called for “long distance, contactless actions against the enemy” and included in his arsenal “informational actions, devices, and means.” He concluded, “The information space opens wide asymmetrical possibilities for reducing the fighting potential of the enemy,” and so new “models of operations and military conduct” were needed.

Putin appointed Gerasimov chief of the general staff in late 2012. Fifteen months later there was evidence of his doctrine in action with the Russian annexation of Crimea and occupation of parts of the Donbas in eastern Ukraine.

Hayden writes, “In eastern Ukraine, Russia promoted the fiction of a spontaneous rebellion by local Russian speakers against a neofascist regime in Kiev, aided only by Russian volunteers, a story line played out in clever, high-quality broadcasts from news services like RT and Sputnik coupled with relentless trolling on social media.” [At this time HM was able to view these RT telecasts at work. They were the best-done propaganda pieces he’s ever seen, because they did not appear to be propaganda, but rather high-quality, objective newscasts.]

Hayden concludes, “With no bands, banners, or insignia, Russia had altered borders within Europe—by force—but with an informational canopy so dense as to make the aggression opaque.”

The Assault on Intelligence

May 19, 2018

Michael V. Hayden has served as the director of both the National Security Agency (NSA) and the Central Intelligence Agency (CIA). His latest book is "The Assault on Intelligence: American Security in an Age of Lies." Actually, this title is modest. The underlying reality is that the assault it describes is an attack on American democracy.

In 2016 the Oxford Dictionaries word of the year was "post-truth," a condition where objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. A. C. Grayling characterized the emerging post-truth world as "over-valuing opinion and preference at the expense of proof and data." Oxford Dictionaries president Casper Grathwohl predicted that the term could become "one of the defining words of our time." Change "could become" to "has become," and, unfortunately, you have an accurate characterization of today's reality.

Kahneman's two-system view of cognition is fitting here. This is a concept that should be familiar to healthy memory blog readers. System 1, called Intuition, is the most common mode of our cognitive processing. Normal conversation and the performance of skilled tasks are System 1 processes. Emotional processing is also done in System 1. System 2, named Reasoning, is controlled processing that is slow, serial, and effortful. It is also flexible. This is what we commonly think of as conscious thought. One of the roles of System 2 is to monitor System 1 for processing errors, but System 2 is slow and System 1 is fast, so errors do slip through.

Post-truth processing is exclusively System 1. It involves neither proof nor accurate data, and it is frequently emotional. That is the post-truth world. One of the most disturbing facts in Hayden's book is that Trump does not care about objective truth. Truth is whatever he feels at a particular time. The possibility that Trump might have a delusional disorder, in which he is incapable of distinguishing fact from fiction, has been mentioned in previous healthy memory blog posts. That was proposed as a possible reason for the enormous number of lies he tells. But it is equally possible that he has no interest in objective truth. As far as he is concerned, objective truth does not exist.

Tom Nichols writes in his 2017 book "The Death of Expertise": "The United States is now a country obsessed with the worship of its own ignorance…Google-fueled, Wikipedia-based, blog sodden…[with] an insistence that strongly held opinions are indistinguishable from facts." Nichols also writes about the Dunning-Kruger effect, which should also be familiar to healthy memory blog readers. The Dunning-Kruger effect describes the phenomenon of people thinking they know much more about a topic than they actually do, in contrast to the knowledgeable individual who is painfully aware of how much he still doesn't know about the topic in question.

Trump is an ideal example of the Dunning-Kruger effect. Mention any topic and Trump will claim that he knows more about it than anyone else. He knows more about fighting wars than his generals. He knows more about debt than anyone else (from personal experience, this might be true). He told potential voters that he was the only one who knew how to solve all their problems, without explaining how he knew or what his approach was. In point of fact, the only things he knows, and is unfortunately an expert at, are how to con and cheat people.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

What Can Be Done?

May 18, 2018

Many problems have been discussed in Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. First of all, people need to be made aware of these problems. Businesses, companies, and agencies should be willing, to the extent possible, to unweaponize these weapons of math destruction. If they are unwilling, laws should be enacted.

Dr. O'Neil thinks that data scientists should take a Hippocratic Oath, one that focuses on the possible misuses and misinterpretations of their models. Following the market crash of 2008, two financial engineers, Emanuel Derman and Paul Wilmott, drew up such an oath:

I will remember that I didn’t make the world, and it doesn’t satisfy my equations.

Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.

I will never sacrifice reality for elegance without explaining why I have done so.

Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.

I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.

The Electoral College Needs to Go

May 17, 2018

This post is based on Cathy O'Neil's informative book, "Weapons of Math Destruction." The penultimate chapter in the book shows how weapons of math destruction are ruining our elections. It is only recently that Facebook and Cambridge Analytica have been found to employ users' data for nefarious purposes, yet Dr. O'Neil's book was published in 2016. To summarize the chapter, weapons of math destruction are distorting if not destroying our elections. Actually, the most informative and most important part of the chapter is found in a footnote at the end:

“At the federal level, this problem could be greatly alleviated by abolishing the Electoral College system. It’s the winner-take-all mathematics from state to state that delivers so much power to a relative handful of voters. It’s as if in politics, as in economics, we have a privileged 1 percent. And the money from the financial 1 percent underwrites the micro targeting to secure the votes of the political 1 percent. Without the Electoral College, by contrast, every vote would be worth exactly the same. That would be a step toward democracy. “
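
To make the winner-take-all arithmetic concrete, here is a minimal sketch in Python using entirely hypothetical states and vote totals; it illustrates the footnote's point and is not an analysis of any real election.

# Toy illustration of winner-take-all electoral math (hypothetical numbers).
# Each state maps to (electoral_votes, votes_for_A, votes_for_B).
states = {
    "Big State":     (20, 4_000_000, 6_000_000),  # B wins here by a landslide
    "Swing State 1": (10, 1_020_000, 1_000_000),  # A wins narrowly
    "Swing State 2": (10, 1_010_000, 1_000_000),  # A wins narrowly
    "Swing State 3": (10, 1_005_000, 1_000_000),  # A wins narrowly
}

popular = {"A": 0, "B": 0}
electoral = {"A": 0, "B": 0}
for ev, a_votes, b_votes in states.values():
    popular["A"] += a_votes
    popular["B"] += b_votes
    winner = "A" if a_votes > b_votes else "B"
    electoral[winner] += ev          # winner takes every electoral vote

print("Popular vote:  ", popular)    # B leads by nearly 2 million votes
print("Electoral vote:", electoral)  # A wins 30 to 20

The narrow margins in the contested states are the "political 1 percent" the footnote describes: a relative handful of voters there decides the entire outcome, which is exactly where the micro-targeting money gets spent.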

Readers of the healthy memory blog should realize that the Electoral College is an injustice that has been addressed in previous healthy memory blog posts (13 to be exact). Twice in recent memory the Electoral College, not the popular vote, produced presidents, with adverse effects. One result was a war in Iraq that was justified by nonexistent weapons of mass destruction. And most recently, the most ill-suited person for the presidency became president, contrary to the popular vote.

The justification for the Electoral College was the fear that ill-informed voters might elect someone who was unsuitable for the office. If there ever was a candidate unsuitable for the office, that candidate was Donald Trump. It was the duty of the Electoral College to deny him the presidency, a duty it failed to perform. So the Electoral College needs to be disbanded and never reassembled.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Broken Windows Policing

May 16, 2018

This post is based on Cathy O'Neil's informative book, "Weapons of Math Destruction." The title of this post should be familiar to anyone who has viewed the Blue Bloods television series, which advanced broken windows policing as justification for the policies its characters pursued to prevent serious crimes. The justification of this policy has been an article of faith since 1982, when a criminologist named George Kelling teamed up with a public policy expert, James Q. Wilson, to write an article in the "Atlantic Monthly" on so-called broken-windows policing. According to Dr. O'Neil, "The idea was that low-level crimes and misdemeanors created an atmosphere of disorder in a neighborhood. This scared law-abiding citizens away. The dark and empty streets they left behind were breeding grounds for serious crimes. The antidote was for society to resist the spread of disorder. This included fixing broken windows, cleaning up graffiti-covered subway cars, and taking steps to discourage nuisance crimes. This thinking led in the 1990s to zero-tolerance campaigns, most famously in New York City. Cops would arrest people for jumping subway turnstiles. They'd apprehend people caught sharing a single joint and rumble them around the city in a paddy wagon for hours before eventually booking them."

There were dramatic drops in violent crime. The zero-tolerance campaign was credited with reducing violent crime. Others disagreed, citing the fallacy of "post hoc, ergo propter hoc" (after this, therefore because of this) and other possibilities, ranging from the falling rates of crack cocaine addiction to the booming 1990s economy. Regardless, the zero-tolerance movement gained broad support, and the criminal justice system sent millions of mostly young minority males to prison, many of them for minor offenses.

Dr. O’Neil continues, “But zero tolerance actually had very little to do with Kelling and Wilson’s “broken-windows” thesis. Their case focused on what appeared to be a successful policing initiative in Newark, New Jersey. Cops who walked the beat there, according to the program, were supposed to be highly tolerant. Their job was to adjust to the neighborhood’s own standards of order and to help uphold them. Standards varied from one part of the city to another. In one neighborhood it might mean that drunks had to keep their bottles in bags and avoid major streets but that side streets were okay. Addicts could sit on stoops but not lie down. The idea was only to make sure the standards didn’t fall. The cops, in this scheme, were helping a neighborhood maintain its own order but not imposing their own.”

On the basis of this and other data, Dr. O'Neil comes to the conclusion "that we criminalize poverty, believing all the while that our tools are not only scientific, but fair." Dr. O'Neil asks, "What if police looked for different kinds of crimes? That may sound counterintuitive, because most of us, including the police, view crime as a pyramid. At the top is homicide. It's followed by rape and assault, which are more common, then shoplifting, petty fraud, and even parking violations, which happen all the time. Minimizing violent crime, most would agree, is and should be a central part of a police force's mission."

Dr. O’Neil asks an interesting question. What if we looked at the crimes carried out by the rich? “In the 2000s, the kings of finance threw themselves a lavish party. They lied, they bet billions against their own customers, they committed fraud and paid off rating agencies. Enormous crimes were committed there, and the result devastated the global economy for the best part of five years. Millions of people lost their homes, jobs, and health care.”

She continues, "We have every reason to believe that more such crimes are reoccurring in finance right now. If we've learned anything, it's that the driving goal of the finance world is to make a huge profit, the bigger the better, and that anything resembling self-regulation is worthless. Thanks largely to the industry's wealth and powerful lobbies, finance is underpoliced."

Two Especially Troubling Problems

May 15, 2018

One of these problems is found in the chapter "Propaganda Machine: Online Advertising" in Dr. Cathy O'Neil's book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy." Advertising is legitimate, but predatory advertising certainly is not. In predatory advertising, weapons of math destruction are used to identify likely subjects to be exploited. Not all, but some for-profit colleges were built and grew through weapons of math destruction. People who were identified as being in need of education or training were preyed upon and sold expensive online courses that were not likely to pay off in jobs or any sort of advancement.

HM learned a new word reading Dr. Cathy O'Neil's book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy." That word was clopening. This is when an employee works late one night to close the store or cafe and then returns a few hours later, before dawn, to open it. Having the same employee closing and opening, or clopening, can make logistical sense for a company, but it leads to sleep-deprived workers and crazy schedules. Weapons of math destruction can identify optimal schedules for the company, but they also need to take into account the welfare of the employee. Scheduling can place the employee's health in jeopardy along with the employee's family life.
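
As a sketch of what taking the employee's welfare into account could look like, here is a minimal hypothetical check, not from the book, that flags clopening by enforcing a minimum rest gap between the end of one shift and the start of the next. The shifts and the eleven-hour threshold are invented for illustration.

from datetime import datetime, timedelta

MIN_REST = timedelta(hours=11)   # hypothetical minimum rest period between shifts

# Hypothetical schedule: (shift_end, next_shift_start) pairs for one employee.
shifts = [
    (datetime(2018, 5, 14, 23, 0), datetime(2018, 5, 15, 5, 30)),   # a clopening
    (datetime(2018, 5, 15, 14, 0), datetime(2018, 5, 16, 9, 0)),    # adequate rest
]

for end, next_start in shifts:
    rest = next_start - end
    if rest < MIN_REST:
        print(f"Clopening flagged: only {rest} of rest between {end} and {next_start}")

A constraint like this costs the scheduler a little efficiency, which is exactly why it will not appear unless the law or the employer requires it.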

Laws are clearly needed here. As for the predatory advertisers marketing on-line courses, they should be closed down and fined. Unfortunately, the Consumer Financial Protection Bureau that was policing this problem has been shut down. Companies and businesses need to be held responsible for the health and welfare of their employees.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The General Problem of Proxies

May 14, 2018

This general problem of proxies is fairly ubiquitous, as outlined in Dr. Cathy O'Neil's book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy." Remember that proxies are variables used in place of the actual variables for which data are unavailable. The chapter "Ineligible to Serve" addresses problems proxies can create in getting a job. Once on the job, proxies can make it more difficult to keep that job. This is described in the chapter "Sweating Bullets: On the Job." Proxies also cause problems in getting credit, which is described in the chapter "Collateral Damage: Landing Credit." Similarly, proxies present problems in getting insurance, described in the chapter "No Safe Zone: Getting Insurance."

So the effects of weapons of math destruction are ubiquitous. People need to be aware of when they might be getting screwed by these weapons. So "Weapons of Math Destruction" needs to be widely read.

Indeed, there are reasons why these weapons are being used, but care must be taken to reduce or eliminate the destruction. It is not only the individuals being evaluated who need to be aware, but also the businesses and agencies using them. They should be aware of their models' shortcomings and the need to eliminate those shortcomings when possible. These models need to be made transparent, so the proxies can be identified and the possibility of misclassifications can be addressed.

There is also a chapter titled “The Targeted Citizen,” but since that topic is so much in the news about Facebook and the interference of Russia in the presidential election, that will not be addressed here.

Ranking Colleges

May 13, 2018

This post is based on Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”.

In 1983 the newsmagazine "U.S. News & World Report" decided it would evaluate 1,800 colleges and universities throughout the United States and rank them for excellence. Had they honestly considered whether they could accurately do this, they could have saved the country and the country's colleges and universities from anxiety and confusion. But they were not honest and proceeded to build the magazine's reputation and fortune.

How could one do this? One could conduct a national survey and have individuals rate the schools in terms of prestige. This could be done validly. But to rate them in terms of excellence? How is excellence defined? Would it be the satisfaction of recent graduates? Would it be the satisfaction of graduates further down the course of life?

The healthy memory blog has made the point in previous posts that what a student wants to learn and what career the student wants to pursue should be the primary factors in choosing a college. All colleges, even the most prestigious ones, differ in what they have to offer. And what about the cost-effectiveness of colleges? This is probably the most important factor for the majority of students. One can pay through the nose to attend a prestigious college, but what is the benefit for the cost incurred?

The magazine picked proxies that seemed to correlate with success. They looked at SAT scores, student-teacher ratios, and acceptance rates. They analyzed the percentage of incoming freshmen who made it to the sophomore year and the percentage of those who graduated. They calculated the percentage of living alumni who contributed money to their alma mater, surmising that if they gave a college money there was a good chance they appreciated the education there. Three-quarters of the ranking would be produced by an algorithm, an opinion formalized in code, that incorporated these proxies. For the other quarter, they would factor in the subjective views of college officials throughout the country.
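
A minimal sketch of what such an "opinion formalized in code" might look like follows; the weights and the sample college's numbers are invented for illustration and are not U.S. News's actual formula.

# Hypothetical composite score built from proxies (all weights are made up).
def ranking_score(college):
    algorithmic = (
        0.30 * college["sat_percentile"] +           # SAT scores
        0.20 * (100 - college["acceptance_rate"]) +  # lower acceptance treated as "better"
        0.15 * college["retention_rate"] +           # freshmen returning as sophomores
        0.20 * college["graduation_rate"] +
        0.15 * college["alumni_giving_rate"]
    )
    # Three-quarters algorithm, one-quarter subjective reputation survey.
    return 0.75 * algorithmic + 0.25 * college["reputation_survey"]

college = {
    "sat_percentile": 85, "acceptance_rate": 30, "retention_rate": 92,
    "graduation_rate": 88, "alumni_giving_rate": 25, "reputation_survey": 80,
}
print(round(ranking_score(college), 1))   # prints 76.0 for this invented college

Nothing in such a score is validated against any external measure of educational excellence; it simply rewards whatever the chosen proxies happen to reward.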

HM regards this procedure pretty much as ad hoc selection with no external validation. However, Dr. O'Neil is more charitable, writing, "U.S. News's first data-driven ranking came out in 1988, and the results seemed sensible. However, as the rankings grew into a national standard, a vicious feedback loop materialized. The trouble was that the rankings were self-reinforcing." So if a college was rated poorly in "U.S. News," its reputation would suffer, and conditions would deteriorate. Top students would avoid it, as would top professors. Alumni would howl and cut back on contributions. The ranking would go down further. Dr. O'Neil concludes that the ranking was destiny.

Everyone was acting foolishly. In fact, this was a jury-rigged methodology that provided a proxy estimate of a school's prestige. "U.S. News" should have discontinued the survey. Universities should have disclaimed the methodology and the ratings. Instead, they played the game and took actions just to improve their ratings. Read the book to learn the gory details.

Dr. O’Neil notes that when you create a model from proxies, it is far simpler to game it. This is because proxies are easier to manipulate than the complicated reality they represent. This is a common problem with big data and weapons of math destruction.
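 
A tiny hypothetical example of that gaming: a school that solicits extra applications it intends to reject moves the acceptance-rate proxy, and therefore its score, while nothing about the education it offers changes. The weights below are made up.

# Gaming a proxy (invented weights): only the acceptance-rate proxy is manipulated.
def score(acceptance_rate, graduation_rate):
    return 0.5 * (100 - acceptance_rate) + 0.5 * graduation_rate

print(score(acceptance_rate=40, graduation_rate=85))  # 72.5 before gaming
# Solicit thousands of extra applications from students who will be rejected:
print(score(acceptance_rate=20, graduation_rate=85))  # 82.5 after gaming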

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Finance and Big Data

May 12, 2018

This post is based on Dr. Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. Dr. O’Neil was originally applying her mathematical knowledge and skills in finance. In 2008 there was a catastrophic market crash. Although weapons of math destruction did not solely cause the financial crash, they definitely contributed to it. So Dr. O’Neil moved from finance to Big Data where her skills were readily transferable.

She writes, "In fact, I saw all kinds of parallels between finance and Big Data. Both industries gobble up the same pool of talent, much of it from elite universities like MIT, Princeton, or Stanford. These new hires are ravenous for success and have been focused on external metrics—like SAT scores and college admissions—their entire lives. Whether in finance or tech, the message they've received is that they will be rich, that they will run the world. Their productivity indicates that they're on the right track, and it translates into dollars. This leads to the fallacious conclusion that whatever they're doing to bring in more money is good. It 'adds value.' Otherwise, why would the market reward it?"

She continues, "In both of these industries, the real world, with all of its messiness, sits apart. The inclination is to replace people with data trails, turning them into more effective shoppers, voters, or workers to optimize some objective. This is easy to do, and to justify, when success comes back as an anonymous score and when the people affected remain every bit as abstract as the numbers dancing across the screen."

She worried about the separation between technical models and real people and about the moral repercussions of the separation. She saw the same pattern emerging in Big Data that she’d witnessed in finance: a false sense of security was leading to widespread use of imperfect models, self-serving definitions of success, and the growing feedback loops.

She continued working in Big Data. She writes that her journey to disillusionment was more or less complete, and the misuse of mathematics was accelerating. She started a blog on this problem, and in spite of almost daily blogging she barely kept up with all the ways she was hearing of people being manipulated, controlled, and intimidated by algorithms. It began with teachers working under inappropriate value-added models (read the book to learn about this), then the LSI-R risk model, and continued from there. She quit her job to investigate the issue full time, which led to this book.

Three Kinds of Models

May 11, 2018

This post is based on Dr. Cathy O'Neil's book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy." Many of us likely develop predictive models but remain unaware that we are doing so. So Dr. O'Neil describes an internal intuitive model she uses in planning family meals. She has a model of everyone's appetite. She knows that one of her sons loves chicken (but hates hamburgers), while another will eat only pasta (with extra grated parmesan cheese). She also has to take into account that people's appetites vary from day to day, so a change can catch her internal model by surprise. In addition to the information she has about her family, she knows the ingredients she has on hand or knows are available, plus her own energy, time, and ambition. The output is how and what she decides to cook. She evaluates the success of a meal by how satisfied her family seems at the end of it, how much they've eaten, and how healthy the food was. Seeing how well it is received and how much of it is enjoyed allows her to update her model for the next time she cooks. These updates and adjustments make it what is called a "dynamic model."
Her model is a good model as long as she restricts it to her family. The technical term for this limitation is that it doesn’t scale. It will not work with larger or different families.
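
As a sketch, the "dynamic" part of such a model can be reduced to a predict-observe-adjust loop. The appetite numbers and the update rule below are invented purely for illustration.

# Toy dynamic model of one eater's appetite (arbitrary units).
predicted_appetite = 3.0
learning_rate = 0.3              # how strongly one observation shifts the model

for actually_eaten in [2.0, 2.5, 4.0]:            # observed over three meals
    error = actually_eaten - predicted_appetite
    predicted_appetite += learning_rate * error    # update from feedback
    print(round(predicted_appetite, 2))

The healthy feature is that feedback step; the weapons of math destruction discussed below typically lack it.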

Examples of the best models are those used by professional baseball teams. There are an enormous number of variables that can be used to predict a team's performance. Moreover, these models allow the prediction of the performance of the team when different players are added or subtracted. The measure these models are designed to predict is the number of wins. Wins provide the feedback variable that is used to test and improve the models.
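
A minimal sketch, with made-up statistics and coefficients rather than any team's actual model, of a win-prediction model that real wins can keep honest:

# Hypothetical roster stats and a toy linear model predicting season wins.
def predicted_wins(on_base_pct, runs_allowed_per_game):
    return 81 + 400 * (on_base_pct - 0.320) - 15 * (runs_allowed_per_game - 4.5)

roster = {"on_base_pct": 0.335, "runs_allowed_per_game": 4.2}
print(predicted_wins(**roster))   # the model's forecast: 91.5 wins

# Because actual wins arrive every season, coefficients like these can be refit
# whenever forecasts drift from reality. That feedback is what WMDs lack.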

Recidivism models are used to predict the likelihood that a prisoner, after being released from prison, will return to criminal behavior and end up back in jail. One of the more popular models is the Level of Service Inventory-Revised (LSI-R). It includes a lengthy questionnaire for the prisoner to fill out. One of the questions, "How many prior convictions have you had?", is highly relevant to the risk of recidivism. Others are also clearly related. For example, "What part did others play in the offense? What part did drugs and alcohol play?"

Other questions are more problematic. For example, there is a question about the first time they were ever involved with the police. For a white subject, the only incident to report might be the one that brought him to prison. However, young black males are likely to have been stopped by police dozens of times, even when they've done nothing wrong. A 2013 study by the New York Civil Liberties Union found that while black and Latino males between the ages of fourteen and twenty-four make up only 4.7% of the city's population, they accounted for 40.6% of the stop-and-frisk checks by police. More than 90% of those stopped were innocent. Some of the others might have been drinking underage or carrying a joint. And unlike most rich kids, they got in trouble for it. So if early "involvement" with police signals recidivism, poor people and racial minorities look far riskier.

Although statistical systems like the LSI-R are effective in gauging recidivism risk, or at least more accurate than a judge's random guess, we find ourselves descending into a pernicious WMD feedback loop. A person who scores as "high risk" is likely to be unemployed and to come from a neighborhood where many of his friends and family have had run-ins with the law. Dr. O'Neil writes, "Thanks in part to the resulting high score on the evaluation, he gets a longer sentence, locking him away for more years in a prison where he's surrounded by criminals, which raises the likelihood that he'll return to prison. If he commits another crime, the recidivism model can claim another success. But in fact the model contributes to a toxic situation and helps to sustain it. That's a signature quality of a WMD."
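
A minimal sketch, with invented weights that are not the actual LSI-R scoring, of how a proxy such as prior police stops can drive the scores of two people with identical conviction histories far apart:

# Hypothetical risk score: same prior convictions, different exposure to
# stop-and-frisk style policing. All weights are invented for illustration.
def risk_score(prior_convictions, prior_police_stops, unemployed):
    return 10 * prior_convictions + 2 * prior_police_stops + 5 * int(unemployed)

person_a = risk_score(prior_convictions=1, prior_police_stops=1,  unemployed=False)
person_b = risk_score(prior_convictions=1, prior_police_stops=12, unemployed=True)
print(person_a, person_b)   # 12 vs 39: same conviction history, very different "risk"

If the higher score then produces a longer sentence, which itself raises the chance of reoffending, the model appears to be validated by the very outcome it helped cause.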

This risk and the value of the LSI-R could be tested. There could be two groups. A control group would be administered the standard questionnaire. Another group would be administered a modified version of the questionnaire that did not include responses that would tip off the race of the individual. The participants could be tracked over time. If the modified version of the questionnaire actually resulted in a lower rate of recidivism, then the original questionnaire could be identified as harmful, not only to the respondent but also to society, since it would be increasing recidivism rather than reducing it.
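
Here is a sketch of the comparison HM proposes, assuming prisoners could be randomly assigned to the two questionnaire versions and tracked over several years; the recidivism rates below are invented for illustration.

import random

random.seed(0)

# Hypothetical three-year recidivism outcomes (True = returned to prison).
# Group A took the full questionnaire; Group B took a version with the
# race-revealing items (e.g., early police contact) removed.
group_a = [random.random() < 0.45 for _ in range(1000)]   # assumed 45% rate
group_b = [random.random() < 0.38 for _ in range(1000)]   # assumed 38% rate

rate_a = sum(group_a) / len(group_a)
rate_b = sum(group_b) / len(group_b)
print(f"Full questionnaire:     {rate_a:.1%} recidivism")
print(f"Modified questionnaire: {rate_b:.1%} recidivism")
# A real study would add a significance test and control for sentence length,
# since the score itself influences how long people are incarcerated.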

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Weapons of Math Destruction

May 10, 2018

The title of this post is identical to the title of a book by Dr. Cathy O'Neil. The subtitle is "How Big Data Increases Inequality and Threatens Democracy." Dr. O'Neil is a mathematician. She left her academic position to work as a quant (a quantitative expert) for D. E. Shaw, a leading hedge fund. Initially she was excited by working in the global economy. But the economy's crash in the autumn of 2008 caused her to reevaluate what she was doing.

She writes, "The crash made it all too clear that mathematics, once my refuge, was not only deeply entangled in the world's problems, but also fueling many of them. The housing crisis, the collapse of major financial institutions, the rise of unemployment—all had been aided and abetted by mathematicians wielding magic formulas. What's more, thanks to the extraordinary powers that I love so much, math was able to combine with technology to multiply the chaos and misfortune, adding efficiency and scale to a system that I now recognized as flawed."

She writes that the crisis should have caused all to take a step back and try to figure out how math had been misused and how a similar catastrophe in the future could be prevented. She writes, “But instead, in the wake of the crisis, new mathematical techniques were hotter than ever and expanding into still more domains. They churned 24/7 through petabytes of information, much of it scraped from social media, or e-commerce websites. And increasingly they focused not on the movements of global financial markets but on human beings, on us. Mathematicians and statisticians were studying our desires, movements, and spending power. They were predicting our trustworthiness and calculating our potential as students, workers, lovers, criminals.”

These math-powered applications were based on choices made by fallible human beings. Although some choices were made with the best intentions, many of the models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Dr. O’Neil came up with a name for these harmful kinds of models: Weapons of Math Destruction, or WMDs for short.

She notes that statistical systems require feedback: something to tell them when they're off track. The example she provides is that if amazon.com, through a faulty correlation, started recommending lawn care books to teenage girls, clicks would plummet, and the algorithm would be tweaked until it got it right. However, without feedback, a statistical engine can continue spinning out faulty and damaging analysis while never learning from its mistakes. Such models end up defining their own reality and using it to justify their results. She writes that this type of model is self-perpetuating, highly destructive, and very common.
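
A minimal sketch of that kind of feedback correction, with made-up click-through numbers and a crude demotion rule rather than Amazon's actual system:

# Toy recommender correction: demote a recommendation whose click-through
# rate collapses for a given audience segment. All numbers are invented.
recommendation_weight = {"lawn_care_books": 1.0}
baseline_ctr = 0.05

observed_ctr = {"teenage_girls": 0.002}        # clicks plummet for this segment

for segment, ctr in observed_ctr.items():
    if ctr < 0.2 * baseline_ctr:               # crude "far below baseline" test
        recommendation_weight["lawn_care_books"] *= 0.1
        print(f"Demoting lawn_care_books for {segment}: CTR {ctr:.3f}")

print(recommendation_weight)

A WMD, by contrast, never receives the observed click-through step: a teacher scored as bad or a prisoner scored as high risk sends no signal back that could correct the model.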

This book focuses on the damage inflicted by WMDs and the injustice they perpetuate. It discusses harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job.

Responsible Tech is Google’s Likely Update

May 9, 2018

The title of this post is identical to the title of an article by Elizabeth Dwoskin and Hayley Tsukayama in the 8 May 2018 issue of the Washington Post. At its annual developer conference, scheduled to kick off today in its hometown of Mountain View, CA, Google is set to announce a new set of controls for its Android operating system, oriented around helping individuals and families manage the time they spend on mobile devices. Google's chief executive, Sundar Pichai, is expected to emphasize the theme of responsibility in his keynote address.

Pichai is trying to address the increased public skepticism and scrutiny of the technology industry regarding the negative consequences of how its products are used by billions of people. Some of this criticism concerns the addictive nature of many devices and programs. In January two groups of Apple shareholders asked the company to design products to combat phone addiction in children. Apple chief executive Tim Cook has said he would keep the children in his life away from social networks, and Steve Jobs placed strict limitations on his children's screen time. Even Facebook admitted that consuming Facebook passively tends to put people in a worse mood, according to both its internal research and academic reports. Facebook chief executive Mark Zuckerberg has said that his company didn't take a broad enough view of its responsibility to society, in areas such as Russian interference and the protection of people's data. HM thinks that this statement should qualify as the understatement of the year.

Google appears to be ahead of its competitors with respect to family controls. Google offers Family Link, a suite of tools that allows parents to regulate how much time their children can spend on apps and to remotely lock their child's device. Family Link gives parents weekly reports on children's app usage and offers controls to approve the apps kids download.

Google has also overhauled Google News. The new layout shows how several outlets are covering the same story from different angles. It will also make it easier to subscribe to news organizations directly from its app store.

HM visited Google's campus at Mountain View on one of the trips provided by a month-long workshop he attended. It looks more like a university campus than a technology business. Different people explained what they were working on, and we ate at the Google cafeteria. This cafeteria is large, offers a wide variety of delicious food, and is open 24 hours so staff can snack or dine for free any time they want.

The most talented programmer with whom HM was privileged to work left us for an offer at Google. She felt the move was needed to further develop her already excellent programming skills.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Data is Needed on Facial Recognition Accuracy

May 8, 2018

This post is inspired by an article titled "Over fakes, Facebook's still seeing double" by Drew Harwell in the 5 May 2018 issue of the Washington Post. In December Facebook offered a solution to its worsening problem with fake accounts: new facial-recognition technology to spot when a phony profile tries to use someone else's photo. The company is now encouraging its users to agree to expanded use of their facial data, saying they won't be protected from imposters without it. The Post article notes that Katie Greenmail and other Facebook users who consented to that technology in recent months have been plagued by a horde of identity thieves.

After the Post presented Facebook with a list of numerous fake accounts, the company revealed that its system is much less effective than previously advertised: the tool looks only for imposters within a user's circle of friends and friends of friends, not the site's 2 billion-user network, where the vast majority of doppelgänger accounts are probably born.

Before any entity uses facial recognition software, it should be compelled to test the software and describe in detail the sample the software was developed on, including that sample's size and composition, as well as the software's performance with respect to correct identifications, incorrect identifications, and cases where no classification is made. Facebook needed to do this testing and present the results. And Facebook users needed to demand these results from testing before using face recognition. How many times do users need to be burned by Facebook before they terminate interactions with the application?
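
As a sketch of the kind of reporting being asked for, here is a minimal tally over a labeled test set; the counts are invented, and a real report would also break them down by the demographic composition of the sample and by whether the person is actually in the gallery.

# Hypothetical evaluation of a face-recognition system on a labeled test set.
results = {
    "correct_identification":   880,   # matched to the right identity
    "incorrect_identification":  45,   # matched to the wrong identity
    "no_classification":         75,   # system declined to return a match
}

total = sum(results.values())
for outcome, count in results.items():
    print(f"{outcome:>26}: {count:4d}  ({count / total:.1%})")

Without numbers like these, a claim that the tool protects users from imposters cannot be checked.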

The way facial recognition is used on police shows on television seems like magic. A photo is taken at night with a cellphone and is tested against a database that yields the identity of the individual and his criminal record. These systems seem to act with perfection. HM has yet to see a show in which someone in a database is incorrectly identified, and that individual arrested by the police, interrogated, and charged. That must happen. But how often and under what circumstances? It seems likely that someone with a criminal record is likely to be in the database, and it is possible that the individual whose photo was taken is not in the database. If there is no true match, will the system make the best match that it can and turn a person who merely happens to be in the database into a suspect in the crime?

The public, and especially defense lawyers, need to have quality data on how well these recognition systems perform.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.