
Machines of Loving Grace

May 13, 2019

The title of this post is identical to the title of an excellent book by John Markoff. The subtitle is “The Quest for Common Ground.” The common ground referred to is that between humans and robots. The book covers, in excruciating detail, the development of artificial intelligence from the days of J.C.R. Licklider to 2015.

The book traces two lines of development: one from John McCarthy, which Markoff terms Artificial Intelligence (AI), and the other from Douglas Engelbart, which Markoff terms Intelligence Augmentation (IA). The former is concerned with making computers as smart as they can be; the latter is concerned with using computers to augment human intelligence.

Markoff does not break down AI any further, but it needs to be broken down. AI has also been used by psychologists to model human cognition, where the ultimate goal is to develop an understanding of human cognitive processes. Here AI has been quite informative. In attempting to model problems such as human vision, psychologists realized that they had overlooked some critical processes that were needed to explain perception. So AI should also be regarded as a tool for developing theories of psychological processes.

There are also two types of AI. One is known as GOFAI, "Good Old-Fashioned Artificial Intelligence," where computer code is written to accomplish the task. GOFAI was stymied for a while by the computational complexity it faced. Judea Pearl, the father of the murdered journalist Daniel Pearl, is a superb mathematician and logician. He developed Bayesian networks, which successfully dealt with this problem, and GOFAI proceeded further with this expedited approach (enter "Pearl" into the search box of the healthy memory blog to learn more about this genius).
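As a toy illustration of the kind of structure Pearl introduced (my own sketch, not anything from Markoff's book), here is the classic three-node rain/sprinkler/wet-grass Bayesian network, with inference done by brute-force enumeration:

```python
# Toy Bayesian network (the classic rain/sprinkler example) with
# inference by enumeration. A sketch only: Pearl's contribution was
# algorithms that avoid enumerating every variable combination.
from itertools import product

# P(Rain), P(Sprinkler | Rain), P(WetGrass | Sprinkler, Rain)
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # given Rain=True
               False: {True: 0.4, False: 0.6}}    # given Rain=False
P_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment of the three variables."""
    p = P_rain[rain] * P_sprinkler[rain][sprinkler]
    p_wet = P_wet[(sprinkler, rain)]
    return p * (p_wet if wet else 1 - p_wet)

def p_rain_given_wet():
    """P(Rain=True | WetGrass=True), summing out the sprinkler."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den
```

Enumeration like this blows up exponentially as variables are added; Pearl's belief-propagation algorithms exploit the network's structure to keep inference tractable, which is the computational advance referred to above.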

The other type is neural nets, which are designed to learn how to accomplish a task. The problem with neural nets is that the programmers do not know how the problem is solved; they only know how to design a neural net that learns to solve it. Nightmare scenarios in which computers take over the world would be the product of neural nets. With GOFAI, problem behaviors could be removed by deleting the offending lines of code.

Augmented intelligence (IA) is what HM promotes. Here computer code serves as a mental prosthetic to enhance human knowledge and understanding. IA, unless it were the intelligence of a mad scientist, would not constitute a threat to humanity.

It is true that AI is required for robots to perform tasks that are difficult, boring, or dangerous. But the goal of an AI system must be understood or undesired consequences might result.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

LikeWar: The Weaponization of Social Media

January 13, 2019

The title of this post is identical to the title of a book by P.W. Singer and Emerson T. Brooking. Many of the immediately following posts will be based on or motivated by this book. The authors have been both exhaustive and creative in their offering. Since it is exhaustive only a sampling of the many important points can be included. Emphasis will be placed on the creative parts.

The very concept that led to the development of the internet appeared in a paper written by two psychologists, J.C.R. Licklider and Robert W. Taylor, titled "The Computer as a Communication Device." Back in those days computers were large mainframes used for data processing. Licklider wrote another paper titled "Man-Computer Symbiosis." The idea here was that both computers and humans could benefit from the interaction between the two, a true symbiotic interaction. Unfortunately, this concept has been largely overlooked. Concentration was instead on replacing humans, who were regarded as slow and error prone, with computers. Today the fear is of jobs lost to artificial intelligence. Attention needs to be focused on the interaction between humans and computers as advocated by Licklider.

But the notion of the computer as a communication device did catch on. More will be written on that in the following post.

The authors also bring Clausewitz into the discussion. Clausewitz was a military strategist famous for his saying that war is politics pursued by other means. More specifically he wrote that war is "the continuation of political intercourse with the addition of other means." The two are intertwined, he explained. "War in itself does not suspend political intercourse or change it into something entirely different. In essentials that intercourse continues, irrespective of the means it employs." War is political. And politics will always be at the heart of human conflict, the two inherently mixed. "The main lines along which military events progress, and to which they are restricted, are political lines that continue throughout the war into the subsequent peace."

If only we could know what Clausewitz would think of today's world. Nuclear warfare was never realistic. Mutual Assured Destruction, with its meaningful acronym (MAD), was never feasible. Conflicts need to be resolved, not settled by the dissolution of the disagreeing parties. Today's technology allows for the disruption of financial systems, power grids, the very foundations of modern society. Would Clausewitz think that conventional warfare has become obsolete? There might be small skirmishes, but would standing militaries go all out to destroy each other? Having a technological interface rather than face-to-face human interaction seems to allow for more hostile and disruptive interactions. Have politics become weaponized? Is that what the title of Singer and Brooking's book implies?

The authors write that their research has taken them around the world and into the infinite reaches of the internet. Yet they continually found themselves circling back to five core principles, which form the foundation of the book.
First, the internet has left adolescence.

Second, the internet has become a battlefield.

Third, this battlefield changes how conflicts are fought.

Fourth, this battle changes what “war” means.

Fifth, and finally, we’re all part of this war.

Here are the final two paragraphs of the first chapter.

“The modern internet is not just a network but an ecosystem of nearly 4 billion souls, each with their own thoughts and aspirations, each capable of imprinting a tiny piece of themselves on the vast digital commons. They are the targets not of a single information war but of thousands and potentially millions of them. Those who can manipulate this swirling tide, to steer its direction and flow, can accomplish incredible good. They can free people, expose crimes, save lives, and seed far-reaching reforms. But they can also accomplish astonishing evil. They can foment violence, stoke hate, sow falsehoods, incite wars, and even erode the pillar of democracy itself.

Which side succeeds depends, in large part, on how much the rest of us learn to recognize this new warfare for what it is. Our goal in “LikeWar” is to explain exactly what’s going on and to prepare us all for what comes next.”


Robots Will Be More Useful If They are Made to Lack Confidence

July 17, 2017

The title of this post is identical to the title of an article by Matt Reynolds in the News & Technology section of the 10 June 2017 issue of the New Scientist. The article begins, "CONFIDENCE in your abilities is usually a good thing—as long as you know when it's time to ask for help." Reynolds notes that as we build ever smarter software, we may want to apply the same mindset to machines.

Dylan Hadfield-Menell says that overconfident AIs can cause all kinds of problems. So he and his colleagues designed a mathematical model of an interaction between humans and computers called the "off-switch game." In this theoretical setup, robots are given a task to do and humans are free to switch them off whenever they like. The robot can also choose to disable its switch so the person cannot turn it off.

Robots given a high level of “confidence” that they were doing something useful would never let the human turn them off, because they tried to maximize the time spent doing their task. Not surprisingly, a robot with low confidence would always let a human switch it off, even if it was doing a good job.
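This trade-off can be captured in a minimal sketch (my own construction, not Hadfield-Menell's actual model): suppose the robot believes the utility of its action is normally distributed, with the spread of that belief playing the role of inverse "confidence." Leaving the off switch enabled is worth more than acting unilaterally whenever the uncertainty is appreciable:

```python
from statistics import NormalDist

def value_of_deferring(mu, sigma):
    """E[max(u, 0)] for u ~ Normal(mu, sigma): the human lets the robot
    proceed when the action's utility is positive, switches it off otherwise."""
    if sigma == 0:
        return max(mu, 0.0)
    d = NormalDist(mu, sigma)
    # Closed form: mu * P(u > 0) + sigma^2 * (density at 0)
    return mu * (1 - d.cdf(0.0)) + sigma**2 * d.pdf(0.0)

def robot_choice(mu, sigma):
    act = mu                                     # disable the switch, act
    defer = value_of_deferring(mu, sigma)        # leave the switch enabled
    return "defer" if defer > act else "act"
```

A robot nearly certain its action is good (tiny sigma) gains nothing from human oversight and disables the switch, while an uncertain robot always prefers to defer, matching the behaviors described above.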

Obviously, calibrating the level of confidence is important. It is unlikely that humans would ever grant a robot a level of confidence that would not allow them to shut it down. A problem here is that we humans tend to be overconfident and unaware of how much we do not know. This human shortcoming is well documented in a book by Steven Sloman and Philip Fernbach titled "The Knowledge Illusion: Why We Never Think Alone." Remember that transactive memory is information held by our fellow human beings and by technology ranging from paper to the internet. We eventually learn the best sources of information among our fellow humans and human organizations, and we likewise need to learn where to find, and how much confidence to place in, information stored in technology, which includes AI robots. Just as we can choose the wrong friends and sources of information, we can have the same problem with robots and external intelligence.

So the title is wrong: robots may not be more useful if they are made to lack confidence. They should have a calibrated level of confidence, just as we humans should have calibrated levels of confidence depending on the task and how skilled we are. Achieving appropriate levels of confidence between humans and machines is a good example of the man-computer symbiosis J.C.R. Licklider expounded in his classic paper "Man-Computer Symbiosis."
