The Digital Media Environment

June 15, 2019

This is the fifth post based on a new book by Douglas Rushkoff titled “TEAM HUMAN.” The title of this post is identical to the title of the fifth section of that book. Rushkoff writes that whoever controls media controls society.

“Each new media revolution appears to offer people a new opportunity to wrest control from an elite few and reestablish the social bonds that media has compromised.” But the people have always remained one entire media revolution behind those who would dominate them.

Rushkoff cites the example of ancient Egypt, which was organized under the presumption that the pharaoh could directly hear the words of the gods, as if he were a god himself. The masses, on the other hand, could not hear the gods at all; they could only believe.

The invention of text might have led to a literate culture. Instead, text was used merely to keep track of possessions and slaves. When writing was eventually adopted by religion, only the priests could read the texts and understand the Hebrew or Greek in which they were written. The masses could hear the Scriptures read aloud, and thus the putative words of God, but literacy itself remained a capability the priests reserved for the elite.

During the Renaissance, when the printing press was invented, the people gained the ability to read, but only the king and his chosen allies could produce texts. Similarly, radio and television were controlled by corporations or repressive states, so people could only listen or watch passively.

Rushkoff writes, “The problem with media revolutions is that we too easily lose sight of what is truly revolutionary. By focusing on the shiny new toys and ignoring the human empowerment potentiated by these new media—the political and social capabilities they are retrieving—we end up surrendering them to the powers that be. Then we and our new inventions become mere instruments for some other agenda.”

The early internet enabled new conversations between people who might never have connected in real life. The networks compressed the distance between physicists in California, hackers in Holland, philosophers in eastern Europe, and animators in Japan. These early discussion platforms leveraged the fact that, unlike TV or the telephone, internet messaging didn’t happen in real time. Users would download net discussions, read them on their own time, offline, and compose a response after an evening of thought and editing. Then they would log back onto the net, upload their contribution, and wait to see what others thought.

The internet was a place where people sounded and acted smarter than they did in real life. This was a virtual space where people brought their best selves, and where the high quality of the conversations was so valued that communities governed these spaces the way a farmers’ cooperative protects a common water supply. To gain access to the early internet, users had to digitally sign an agreement not to engage in any commercial activity. Rushkoff writes, “Even the corporate search and social platforms that later came to monopolize the net originally vowed never to allow advertising because it would taint the humanistic cultures they were creating.”

Consider how much better this was, when people actually thought for a time rather than responding immediately. In Daniel Kahneman’s terms, those deliberate exchanges engaged slow, reflective System 2 processes. Today’s responses are immediate, driven by fast, emotional System 1 processes.

Rushkoff writes, “Living in a digitally enforced attention economy means being subjected to a constant assault of automated manipulations. Persuasive technology is a design technology taught and developed at some of America’s leading universities and then implemented on platforms from e-commerce sites and social networks to smartphones and fitness wristbands. The goal is to generate ‘behavioral change’ and ‘habit formation,’ most often without the user’s knowledge or consent. Behavioral design theory holds that people don’t change their behavior because of shifts in their attitudes and opinions. On the contrary, people change their attitudes to match their behaviors. In this model, we are more like machines than thinking, autonomous beings.”

Much of this has been discussed in previous healthy memory posts, especially those based on the book “Zucked.”

Rushkoff writes, “Instead of designing technologies that promote autonomy and help us make informed decisions, the persuasion engineers in charge of our biggest digital companies are hard at work creating interfaces that thwart our thinking and push us into an impulsive response where thoughtful choice—or thought itself—are nearly impossible.” This helps explain how Russia was able to successfully promote its preferred choice for President of the United States.

Previous healthy memory blog posts have argued that we are dumber when we are using smartphones and social media. We understand and retain less information. We comprehend with less depth, and we make impulsive decisions. We become less capable of distinguishing the real from the fake, the compassionate from the cruel, and the human from the non-human. Rushkoff writes, “Team Human’s real enemies, if we can call them that, are not just the people who are trying to program us into submission, but the algorithms they’ve unleashed to help them do it.”

Rushkoff concludes this section as follows: “Human ideals such as autonomy, social contact, and learning are again written out of the equation, as the algorithms’ programming steers everyone and everything toward instrumental ends. When human beings are in a digital environment they become more like machines, while entities composed of digital materials—the algorithms—become more like living entities. They act as if they are our evolutionary successors. No wonder we ape their behavior.”