Posts Tagged ‘machine learning’

Thinking 2.0

March 9, 2016

This post was inspired by an article in the February 26, 2016 edition of the New Scientist written by Michael Brooks.  The title of the article is "A new kind of logic: How to upgrade the way we think."  There are many healthymemory blog posts about the limitations of our cognitive processes.  First of all, our attentional capacity is quite limited and requires selection.  Our working memory capacity is around five or fewer items.  There are healthy memory blog posts on cognitive misers and cognitive spendthrifts.  Thought requires cognitive effort that we are often reluctant to spend, making us cognitive misers.  And there are limits to the amount of cognitive effort we can expend.  Cognitive effort spent unwisely can be costly.

Let me elaborate on the last statement with some personal anecdotes.  Ohio State was on the quarter system when I attended, and my initial goal was to begin college in the summer quarter right after graduation and to attend quarters consecutively so that I would graduate within three years.  Matters went fairly well until my second quarter, when I earned the only "D" in my life.  Although I did get one "A," it was in a course for which I had already read the textbook in high school.  I replaced that "D" and continued to attend consecutive quarters, but only part time during the summer.  I was in the honors program and managed to graduate in 3.5 years with a Bachelor of Arts with Distinction in Psychology.  I tried going directly into graduate studies, but found that I had already expended my remaining cognitive capital.  So I entered the Army to give my mind a rest.

When I returned and began graduate school, I was a cognitive spendthrift who wanted to learn as much as I could in my field.  However, I found that I could not work long hours.  If I did, my brain turned to mush and I was on the verge of drooling.  So I found it profitable to end my cognitive spendthrift days and marshal my cognitive resources.  It worked, and I earned my doctorate in psychology from the University of Utah.

Michael Brooks argues that we are stuck in Thinking 1.0.  He mentions that our conventional economic models bear no resemblance to the real world.  We've had unpredicted financial crises because of incorrect rational economic models.  This point has been made many times in the healthy memory blog.  Behavioral economics should address these shortcomings, but it is still in an early stage of development.

Ioannidis's article has convinced statisticians and epidemiologists that more than half of scientific papers reach flawed conclusions, especially in medical science, neuroscience, and psychology.

Currently we do have big data, machine learning, neural nets, and, of course, the Jeopardy champion Watson.  Although these systems provide answers, they do not provide explanations of how they arrived at those answers.  And there are statistical relations in which it is difficult to determine causality, that is, what causes what.

Michael Brooks argues that Thinking 2.0 is needed.  Quantum logic makes the distinction between cause and effect (one thing influencing another) and common cause (two things responding to the same cause).  The University of Pittsburgh opened the Center for Causal Discovery in 2014.
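To see why a common cause is so easy to mistake for cause and effect, here is a small simulation (my own illustration, not from the Brooks article): a hidden variable Z drives both X and Y, so X and Y end up strongly correlated even though neither has any causal influence on the other.

```python
import random

random.seed(42)
n = 10_000

# Hidden common cause Z drives both X and Y.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]  # X responds to Z
y = [zi + random.gauss(0, 0.5) for zi in z]  # Y responds to Z

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Strong correlation, yet X never touches Y and Y never touches X.
print(round(corr(x, y), 2))
```

From the data alone, "X causes Y," "Y causes X," and "Z causes both" all look the same, which is exactly the gap Thinking 2.0 is meant to close.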

Judea Pearl, a computer scientist and philosopher at UCLA (and the father of the tragically slain journalist Daniel Pearl), says, "You simply cannot grasp causal relationships with statistical language."  Judea Pearl has done some outstanding mathematics and has developed software that has made intractable AI programs tractable and has provided for distinguishing cause and effect.  Unlike neural nets, machine learning, and Watson, it provides the logic, Thinking 2.0 logic I believe, behind its conclusions or actions.

It is clear that Thinking 2.0 will require computers.  But let us hope that humans will understand and be able to develop narratives from their output.  If we just get answers from machine oracles, will we still be thinking in 2.0?

© Douglas Griffith and, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.

Limits to Human Understanding

May 20, 2014

This blog post was motivated by an article in the New Scientist, "Higher State of Mind" by Douglas Heaven.  It raised the question of limits to human understanding, a topic of longstanding interest to me.  The article reviews two paths Artificial Intelligence has taken.  One approach involved rule-based programming.  Typically the objective here was to model human information processing, with the goal of having the computer "think" like a human.  This approach proved quite valuable in the development of cognitive science, as it identified problems that needed to be addressed in the development of theories and models of human information processing.  Unfortunately, it was not very successful in solving complex computational problems.
The second approach eschewed the modeling of the human and focused on developing computational solutions to difficult problems.  Machines were programmed to learn and to compute statistical correlations and inferences by studying patterns in vast amounts of data.  Neural nets were developed that successfully solved a large variety of complex computational problems.  However, although the developers of these neural nets could describe the nets they themselves had programmed, they could not understand how the nets reached their conclusions.  The nets can solve a problem, yet we are unable to truly understand the solution.  So there are areas of expertise where machines can be said not only to know more than we do, but to know more than we are capable of understanding.  In other words, what we can understand may be constrained by our biological limitations.
So, what does the future hold for us?  There is an optimistic scenario and a pessimistic scenario.  According to Kurzweil, a singularity will be achieved by transcending biology, and we shall augment ourselves with genetic alterations, nanotechnology, and machine learning.  He sees a time when we shall become immortal.  In fact, he thinks that this singularity is close enough that he is doing everything to extend his life so that he shall achieve this immortality.  This notion of a singularity was first introduced in the fifties by the mathematician John von Neumann.
A pessimistic scenario has been sketched out by Bill Joy.  I find his name a bit ironic.  He has written a piece titled "Why the Future Doesn't Need Us," in which he argues that technology might be making us an endangered species.
So these are two extremes.  A somewhat less extreme scenario was outlined in the movie Colossus: The Forbin Project, which was based on the novel Colossus by Dennis Feltham Jones.  The story takes place during the Cold War confrontation between the United States and the Soviet Union.  The United States has built a complex, sophisticated computer, Colossus, to manage the country's defenses in the event of a nuclear war.  Shortly after Colossus becomes operational, it establishes contact with a similar computer built by the Soviet Union.  The two systems agree that humans are not intelligent enough to manage their own affairs, so they eventually take control of the world.
So what does the future hold for us?  Who knows?

© Douglas Griffith and, 2014. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.