Posts Tagged ‘Judea Pearl’

Machines of Loving Grace

May 13, 2019

The title of this post is identical to the title of an excellent book by John Markoff. The subtitle is “The Quest for Common Ground.” The common ground referred to is that between humans and robots. The book covers, in excruciating detail, the development of artificial intelligence from the days of J.C.R. Licklider to 2015.

The book traces two lines of development: one stemming from John McCarthy, which Markoff terms Artificial Intelligence (AI), and the other from Douglas Engelbart, which Markoff terms Intelligence Augmentation (IA). The former is concerned with making computers as smart as they can be; the latter with using computers to augment human intelligence.

Markoff does not break AI down any further, but it needs to be. Psychologists have used AI to model human cognition, where the ultimate goal is to develop an understanding of human cognitive processes. This use of AI has been quite informative: in attempting to model problems such as human vision, psychologists realized that they had overlooked critical processes needed to explain perception. So AI should also be regarded as a tool for developing theories of psychological processes.

There are also two types of AI. One is known as GOFAI, “Good Old Fashioned Artificial Intelligence,” in which computer code is written explicitly to accomplish the task. GOFAI was stymied for a while by the computational complexity it faced. Judea Pearl, the father of the murdered journalist Daniel Pearl, is a superb mathematician and logician. He developed Bayesian networks, which successfully dealt with this problem, and GOFAI proceeded further with this expedited approach (enter “Pearl” into the search box of the healthy memory blog to learn more about this genius).
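To make the idea concrete, here is a minimal sketch in Python of the textbook rain/sprinkler Bayesian network (the probabilities are made up for illustration). Pearl’s actual contribution was efficient belief-propagation algorithms for such networks; the brute-force enumeration below only illustrates what a network like this computes.

```python
# A minimal Bayesian network (the classic rain/sprinkler example),
# evaluated by brute-force enumeration of the joint distribution.
# Illustrative probabilities only; not Pearl's own algorithms or software.
from itertools import product

P_rain = {True: 0.2, False: 0.8}
# The sprinkler is less likely to be on when it rains.
P_sprinkler_given_rain = {True:  {True: 0.01, False: 0.99},
                          False: {True: 0.40, False: 0.60}}
# Probability the grass is wet, given (sprinkler, rain).
P_wet_given = {(True, True): 0.99, (True, False): 0.90,
               (False, True): 0.80, (False, False): 0.05}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment of the three variables."""
    p = P_rain[rain] * P_sprinkler_given_rain[rain][sprinkler]
    p_wet = P_wet_given[(sprinkler, rain)]
    return p * (p_wet if wet else 1 - p_wet)

# P(Rain = true | GrassWet = true), summing out the sprinkler.
numerator = sum(joint(True, s, True) for s in (True, False))
evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | wet grass) = {numerator / evidence:.3f}")
```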

The other type is neural nets. Here a neural net is designed to learn how to accomplish a task. The problem with neural nets is that the programmers do not know how the net solves the problem; they only know how to design a net that learns to solve it. Nightmare scenarios in which computers take over the world would be the product of neural nets. With GOFAI, a misbehaving program could be fixed by inspecting and deleting lines of code.
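As a toy illustration of this opacity (a hypothetical example, not how any production system is built), the sketch below trains a tiny network on the XOR function. The trained weights get the answers right, but reading them tells you nothing about how the task is solved, whereas a GOFAI program for the same task would be a few legible lines of logic.

```python
# A tiny neural network (2 inputs, 4 hidden units, 1 output) trained on XOR
# with plain gradient descent. The learned weights solve the task, but
# nothing in them reads like an explanation of "how".
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output
    d_out = out - y                 # cross-entropy gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print("predictions:", out.round(2).ravel())   # should end up close to 0, 1, 1, 0
print("learned weights:")
print(W1); print(W2)                          # correct, but not an explanation
```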

Augmenting intelligence (IA) is what HM promotes. Here computer code serves as a mental prosthetic to enhance human knowledge and understanding. IA, unless it were the intelligence augmentation of a mad scientist, would not constitute a threat to humanity.

It is true that AI is required for robots to perform tasks that are difficult, boring, or dangerous. But the goal of an AI system must be understood, or undesired consequences might result.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Thinking 2.0

March 9, 2016

This post was inspired by an article in the February 26, 2016 edition of the “New Scientist” written by Michael Brooks. The title of the article is “A new kind of logic: How to upgrade the way we think.” There are many healthymemory blog posts about the limitations of our cognitive processes. First of all, our attentional capacity is quite limited and requires selection. Our working memory capacity is around five or fewer items. There are healthy memory blog posts on cognitive misers and cognitive spendthrifts. Thought requires cognitive effort that we are often reluctant to expend, making us cognitive misers. And there are limits to the amount of cognitive effort we can expend. Cognitive effort spent unwisely can be costly.

Let me elaborate on the last statement with some personal anecdotes. Ohio State was on the quarter system when I attended, and my initial goal was to begin college in the summer quarter right after graduation and to attend quarters consecutively so that I would graduate within three years. Matters went fairly well until my second quarter, when I earned the only “D” of my life. Although I did get one “A,” it was in a course for which I had already read the textbook in high school. I recovered and continued to attend consecutive quarters, but only part time during the summer. I was in the honors program and managed to graduate in 3.5 years with a Bachelor of Arts with Distinction in Psychology. I tried going directly into graduate studies, but found that I had already expended my remaining cognitive capital. So I entered the Army to give my mind a rest.

When I returned and began graduate school, I was a cognitive spendthrift who wanted to learn as much as I could in my field. However, I found that I could not work long hours. If I did, my brain turned to mush and I was on the verge of drooling. So I found it profitable to end my cognitive spendthrift days and marshal my cognitive resources. It worked, and I earned my doctorate in psychology from the University of Utah.

Michael Brooks argues that we are stuck in Thinking 1.0. He mentions that our conventional economic models bear no resemblance to the real world. We’ve had unpredicted financial crises because of incorrect rational economic models. This point has been made many times in the healthy memory blog. Behavioral economics should address these shortcomings, but it is still in an early stage of development.

John Ioannidis’s article “Why Most Published Research Findings Are False” has convinced statisticians and epidemiologists that more than half of scientific papers reach flawed conclusions, especially in medical science, neuroscience, and psychology.

Currently we do have big data, machine learning, neural nets, and, of course, the Jeopardy champion Watson. Although these systems provide answers, they do not provide explanations as to how they arrived at the answers. And there are statistical relations in which it is difficult to determine causality, that is, what causes what.

Michael Brooks argues that Thinking 2.0 is needed. Quantum logic makes the distinction between cause and effect (one thing influencing another) and common cause (two things responding to the same cause). The University of Pittsburgh opened the Center for Causal Discovery (www.ccd.pitt.edu) in 2014.

Judea Pearl, a computer scientist and philosopher at UCLA (and the father of the tragically slain journalist Daniel Pearl), says, “You simply cannot grasp causal relationships with statistical language.” Judea Pearl has done some outstanding mathematics and has developed software that has made intractable AI problems tractable and that provides a means of distinguishing cause and effect. Unlike neural nets, machine learning, and Watson, it provides the logic, 2.0 logic I believe, behind its conclusions or actions.
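A small simulation helps show why (made-up linear models, not Pearl’s software). Two worlds, one where a common cause drives both X and Y and one where X directly causes Y, produce nearly identical observational correlations; only an intervention, Pearl’s do-operator, tells them apart.

```python
# Illustration of why correlation alone cannot separate "X causes Y" from
# "a common cause Z drives both X and Y". Hypothetical linear models:
# the two worlds look alike observationally but come apart under do(X = x).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# World A: common cause. Z -> X and Z -> Y, no arrow from X to Y.
z = rng.normal(size=n)
x_a = z + 0.3 * rng.normal(size=n)
y_a = z + 0.3 * rng.normal(size=n)

# World B: direct cause. X -> Y.
x_b = rng.normal(size=n)
y_b = x_b + 0.3 * rng.normal(size=n)

print("observed corr, common cause:", np.corrcoef(x_a, y_a)[0, 1].round(2))
print("observed corr, direct cause:", np.corrcoef(x_b, y_b)[0, 1].round(2))

# Intervention do(X = 2): set X by fiat, then regenerate Y from its own mechanism.
x_do = np.full(n, 2.0)
y_a_do = z + 0.3 * rng.normal(size=n)       # Y ignores the forced X
y_b_do = x_do + 0.3 * rng.normal(size=n)    # Y still tracks the forced X

print("E[Y | do(X=2)], common cause:", y_a_do.mean().round(2))  # stays near 0
print("E[Y | do(X=2)], direct cause:", y_b_do.mean().round(2))  # shifts to ~2
```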

It is clear that Thinking 2.0 will require computers. But let us hope that humans will understand their output and be able to develop narratives from it. If we just get answers from machine oracles, will we still be thinking in 2.0?

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.