Posts Tagged ‘Kurzweil’

Mechanomorphism

June 16, 2019

This is the sixth post based on a new book by Douglas Rushkoff titled “TEAM HUMAN.” The title of this post is identical to the title of the sixth section of this book. Rushkoff begins, “When autonomous technologies appear to be calling all the shots, it’s only logical for humans to conclude that if we can’t beat them, we may as well join them. Whenever people are captivated—be they excited or enslaved—by a new technology, it becomes their new role model, too.”

“In the Industrial Age, as mechanical clocks dictated human time, we began to think of ourselves in very mechanical terms. We described ourselves as living in a ‘clockwork universe,’ in which the human body was one of the machines.” Mechanical metaphors emerged in our language. We needed to grease the wheels, crank up the business, dig deeper, or turn a company into a well-oiled machine.

In the digital age we view our world as computational. Humans are processors; everything is data. Computational metaphors have crept into our language in turn: that doesn’t compute; he multitasks so well he’s capable of interfacing with more than one person in his network at a time.

Projecting human qualities onto machines is called anthropomorphism, but we are projecting machine qualities onto humans. Seeing a human being as a machine or computer is called mechanomorphism. This is not just treating machines as living humans; it’s treating humans as machines.

When we multitask we are assuming that, just like computers, we can do more than one task at a time. But research has shown, and has been related in healthy memory blog posts, that when we multitask, our performance suffers. Sometimes this multitasking can even be fatal, as when we talk, or worse, text, while we are driving.

It is both curious and interesting that drone pilots, who monitor and neutralize people by remote control from thousands of miles away, experience higher rates of post-traumatic stress disorder than “real” pilots. An explanation for these high rates of distress is that, unlike regular pilots, drone pilots often observe their targets for weeks before killing them. These stress rates remain disproportionately high even for missions in which the pilots had no prior contact with the victims.

Rushkoff writes that a more likely reason for the psychic damage is that these drone pilots are trying to exist in more than one location at a time. They might be in a facility in Nevada operating a lethal weapon system deployed on the other side of the planet. After dropping ordnance and killing a few dozen people, the pilots don’t land their planes, climb out, and return to the mess hall to debrief over beers with their fellow pilots. They just log out, get into their cars, and drive home to the suburbs for dinner with their families. It’s like being two different people in different places in the same day. But none of us is two people, nor can we be in more than one place. Unlike a computer program, which can be copied and run from several different machines simultaneously, human beings have only one “instance” of themselves running at a time.

Rushkoff writes, “We may want to be like the machines of our era, but we can never be as good at being digital devices as the digital devices themselves. This is a good thing, and maybe the only way to remember that by aspiring to imitate our machines, we leave something even more important behind: our humanity.”

The smartphone, along with all the other smartphones, creates an environment: a world where anyone can reach us at any time, where people walk down public sidewalks in private bubbles, and where our movements are tracked by GPS and stored in marketing and government databases for future analysis. In turn, these environmental factors promote particular states of mind, such as paranoia about being tracked, a constant state of distraction, and fear of missing out.

The digital media environment impacts us collectively, as an economy and as a society. The breathtaking pace at which a digital company can reach “scale” has changed investors’ expectations of what a stock’s chart should look like, as well as how readily a CEO will surrender the long-term health of a company for the short-term growth of its shares. Rushkoff notes that the internet’s emphasis on metrics and quantity over depth and quality has engendered a society that values celebrity, sensationalism, and numeric measures of success. The digital media environment expresses itself in the physical environment as well; the production, use, and disposal of digital technologies deplete scarce resources, expend massive amounts of energy, and pollute vast regions of the planet.

Rushkoff concludes, “Knowing the particular impacts of a media environment on our behaviors doesn’t excuse our complicity, but it helps us understand what we’re up against—which way things are tilted. This enables us to combat their effects, as well as the darker aspects of our own nature that they provoke.”

If one assumes that humanity is a purely mechanistic affair, explicable entirely in the language of data processing, then what difference does it make whether human beings or computers are doing that processing? Transhumanists hope to transcend biological existence. Kurzweil’s notion of a singularity in which human consciousness is uploaded into a computer has been written off in previous posts. The argument those previous posts made is that biology and silicon are two different media that operate in different ways. Although they can interact, they cannot become one.

Rushkoff concludes, “It’s not that wanting to improve ourselves, even with seemingly invasive technology, is so wrong. It’s that we humans should be making active choices about what it is we want to do to ourselves, rather than letting the machines, or the markets propelling them, decide for us.”

Cyborgs

May 17, 2019

This post is motivated by material in an excellent book by John Markoff titled “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots.” Cyborg stands for “cybernetic organism,” a term formulated by medical researchers in 1960 who were thinking about intentionally enhancing humans to prepare them for the exploration of space. They foresaw a new kind of creature—half human, half mechanism—capable of surviving in harsh environments.

It seems that even if Kurzweil were capable of uploading his mind into a computer, it would be a frustrating experience unless that computer were part of a cyborg. It is clear that the brain can issue motor commands to machines, so output would not be a problem. And suppose that Kurzweil successfully uploads his mind to this cyborg. The question remains what the phenomenal experience would be for Kurzweil or any human. Kurzweil’s fundamental assumption is that his mind in the computer would give him extraordinary mental powers. He probably could do amazing computational exercises. But would he understand, in a phenomenal sense, what he was doing? He might even be able to write poetry, but would he understand the poetry? And what about his personality? Would he become more humanistic, or would he become mechanical? What about a soul and a sense of morality? What about one’s humanity? Would it be lost?

Would cyborgs be able to breed and produce new cyborgs? Presumably they would be immortal.

This seems like a great topic for science fiction. Unfortunately, HM does not read science fiction. Do any science fiction readers who also read this blog have any recommendations? If so, please supply them in the comments section.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Singularitarians

May 16, 2019

This is another post using “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots” by John Markoff as a point of departure. Perhaps the logical result of combining Artificial Intelligence (AI) with Intelligence Augmentation (IA) is a singularity, the merging of the two. Kurzweil has written a book, “How to Create a Mind: The Secret of Human Thought Revealed.” HM would like to see a review of this book by a psychologist. As a psychologist, he thinks we have much more to learn before we can even consider attempting to build a mind. Yet apparently Kurzweil, an engineer, is convinced that he can. Moreover, he thinks he can upload his brain/mind into this machine. The following is taken from Wikipedia:

• The Singularity is an extremely disruptive, world-altering event that forever changes the course of human history. The extermination of humanity by violent machines is unlikely (though not impossible) because sharp distinctions between man and machine will no longer exist thanks to the existence of cybernetically enhanced humans and uploaded humans.

Kurzweil is taking measures (diets, drugs, etc.) to ensure that he will be able to upload himself into the machine and achieve eternal life.

Presumably, his intention is to upload his brain into the machine. What he forgets is that he is a biological organism. His memory is biologically based on chemical changes that take time to occur. In other words, his mind uploaded to a computer would be nothing but buzzing noise. Consider how fast a computer printout occurs. Then consider how long it takes not just to read, but to assimilate the meaning of the information. Consider the paltry few seconds it takes to download a book to an iPad. Then consider how long it takes not just to read the book, but to assimilate the material in the book, relate it to old knowledge, and update current knowledge.
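
To make these time scales concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers for book length, connection speed, and reading speed are illustrative assumptions, not measurements:

    # Back-of-the-envelope comparison: downloading a book versus assimilating it.
    # All figures below are illustrative assumptions, not measurements.
    book_words = 80_000          # a typical trade book (assumption)
    bytes_per_word = 6           # rough average, including spaces (assumption)
    download_mbps = 50           # a common broadband speed (assumption)
    reading_wpm = 250            # an average adult reading speed (assumption)

    book_bits = book_words * bytes_per_word * 8
    download_seconds = book_bits / (download_mbps * 1_000_000)
    reading_hours = book_words / reading_wpm / 60

    print(f"Download time: {download_seconds:.2f} seconds")
    print(f"Reading time:  {reading_hours:.1f} hours (assimilation takes far longer)")

Under these assumptions the download finishes in a fraction of a second, merely reading the book takes more than five hours, and assimilating it takes far longer still.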

Kurzweil inadvertently presents the best case for a liberal education, one that includes courses in psychology, biology, and neurochemistry.

© Douglas Griffith and healthymemory.wordpress.com, 2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

READER, COME HOME

October 18, 2018

The title of this post is the same as the title of an important book by Maryanne Wolf. The subtitle is “The Reading Brain in a Digital World.” Any new technology offers benefits, but it may also contain dangers. There definitely are benefits from moving the printed word into the digital world. But there are also dangers, some of which are already quite evident. One danger is the feeling that one always needs to be plugged in. There is even an acronym for this: FOMO (Fear of Missing Out). But there are costs to being continually plugged in. One is superficial processing. One of the best examples is the plugged-in woman who was asked what she thought of OBAMACARE. She said that she thought it was terrible and was definitely against it. However, when she was asked what she thought of the Affordable Care Act, she said that she liked it and was definitely in favor of it. Of course, the two are the same.

This lady was exhibiting an effect that has a name, the Dunning-Kruger effect. Practically all of us think we know more than we do. Ironically, people who are quite knowledgeable about a topic are aware of their limitations and frequently qualify their responses. So, in brief, the less you know, the more you think you know; the more you know, the less you think you know. Moreover, this effect is greatly amplified in the digital age.

There is a distinction between what is available in our memories and what is accessible in our memories. Very often we are unable to remember something, but we do know that it is present in memory. So this information is available, but not accessible. There is an analogous effect in the cyber world. We can find information on the internet, but we need to look it up. It is not available in our personal memory. Unfortunately, being able to look something up on the internet is not identical to having the information available in our personal memories so that we can extemporaneously talk about the topic. We daily encounter the problem of whether we need to remember some information or whether it would be sufficient to look it up. We do not truly understand something until it is available in our personal memories.

The engineer Kurzweil is planning on extending his life long enough so that he can be uploaded to a computer, thus achieving a singularity with technology. Although he is a brilliant engineer, he is woefully ignorant of psychology and neuroscience. Digital and neural codes differ, and the processing systems differ, so the conversion is impossible. However, even if it were possible, understanding requires deep cognitive and biological processing. True understanding does not come cheaply.

Technology can be misused, and it can be very tempting to misuse it. However, there are serious costs. Maryanne Wolf discusses both the pitfalls and the benefits of technology. It should be understood that we are not victims of technology. Rather, we need to use technology deliberately, not merely to avoid becoming victims, but to use it synergistically.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Brain Implant Boosts Human Memory by Mimicking How We Learn

November 18, 2017

The title of this post is identical to the title of a news piece by Jessica Hamzelou in the 18 November 2017 issue of the New Scientist. The piece reports that “electrical shocks that simulate the patterns seen in the brain when you are learning have enhanced human memory for the first time, boosting performance on tests by up to 30%.” Dong Song of the University of Southern California says, “We are writing the neural code to enhance memory function. This has never been done before.”

The device mimics brain signals associated with learning and memory, stimulating similar patterns of brain activity in the hippocampus via electrodes. This device was implanted in 20 volunteers who were already having electrodes placed in their brains to treat epilepsy.

The first stage was to collect data on patterns of activity in the brain while the volunteers were doing a memory test. The test involved trying to remember which unusual, blobby shapes they had been shown 5 to 10 seconds before. This test measures short-term memory. People normally score around 80% on this task.

The volunteers also did a more difficult version of the test, in which they had to remember images they had seen between 10 and 40 minutes before. This measures working memory.

Then the team used this data to work out the patterns of brain activity associated with each person’s best memory performances. The group then made the device electrically stimulate similar brain activity in the volunteers while they did more tests.

A third of the time, the device stimulated the participants’ brains in a way the team thought would be helpful. Another third of the time, it stimulated the brain with random patterns of electricity. For the remaining third of the time, it didn’t stimulate the brain at all.

Memory performance improved by about 15% on the short-term memory test and around 25% on the working-memory test when the correct stimulation pattern was used, compared to no stimulation at all. Some volunteers improved by 30%. Random stimulation worsened performance. Song says, “It is the first time a device like this has been found to enhance an aspect of human cognition.”
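
To make the comparison concrete, here is a minimal sketch in Python of how improvement over the no-stimulation baseline might be computed. The per-condition scores are invented for illustration; the article reports only the aggregate results:

    # Hypothetical fractions correct for one volunteer under each condition.
    # These values are made up; only the structure of the comparison is real.
    scores = {
        "patterned stimulation": [0.92, 0.90, 0.94],
        "random stimulation":    [0.72, 0.70, 0.74],
        "no stimulation":        [0.80, 0.78, 0.82],
    }

    def mean(values):
        return sum(values) / len(values)

    baseline = mean(scores["no stimulation"])
    for condition, values in scores.items():
        change = (mean(values) - baseline) / baseline * 100
        print(f"{condition}: {change:+.1f}% relative to no stimulation")

With these made-up numbers the patterned condition comes out ahead of the baseline and the random condition falls below it, mirroring the pattern of results the article describes.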

Chris Bird of the University of Sussex, UK, thinks that such a device may be useful for treating medical conditions. However, the prosthesis wouldn’t be able to replace the hippocampus entirely. He says, “The hippocampus is quite a large structure and they are only recording from a very small area.”

Now the team is working on ways to enhance other brain functions. Song says, “The approach is very general. If you can improve the input/output of one brain region, you could apply it to other brain regions.” Good candidates for this are skills localized to particular parts of the brain, such as sensing the outside world, vision, and how we move. Enhancing these might improve a person’s hand-eye coordination. However, cognitive functions like intelligence involve many brain regions working together, so they wouldn’t make good targets.

There are individuals like Kurzweil who think that their brains can be uploaded to silicon, or that direct connections can run between computers and the brain. What these individuals are ignoring is that communications must be in the language of the brain. The research presented here shows what must be done to communicate and exchange information with the brain.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

An AI Armageddon

July 27, 2017

This post is inspired by an article by Cleve R. Wootson, Jr. in the July 24, 2017 Washington Post titled “What is technology leader Musk’s great fear? An AI Armageddon.”

Before addressing an AI Armageddon, Musk speaks of his company Neuralink, which would devise ways to connect the human brain to computers. He said that an internet-connected brain plug would allow someone to learn something as fast as it takes to download a book. Every time HM downloads a book to his iPad he wonders, if only… However, HM knows some psychology and neuroscience, topics of which Musk and Kurzweil have little understanding. Kurzweil is taking steps to prolong his life until his brain can be uploaded to silicon. What these brilliant men do not understand is that silicon and protoplasm require different memory systems. They are fundamentally incompatible. There is now promising research in which recordings are made from rats’ hippocampi while they are learning to perform specific tasks. The researchers will then try to play these recordings into the hippocampi of different rats and see how well those rats can perform the tasks learned by the first rats. This type of research, which stays in the biological domain, can provide the basis for developing brain aids for people suffering from dementia or who have had brain injuries. The key here is that it stays in the biological domain.

This biological-silicon interface needs to be addressed. It would be determined that this transfer of information would not be instantaneous; it would be quite time consuming. And even if that were solved, both the brain and the human being are quite complicated, and there needs to be time for consolidation and other processes. Even then there is the brain-mind distinction. Readers of this blog should know that the mind is not contained within the brain, but rather the brain is contained within the mind.

Now that that’s taken care of, let’s move on to Armageddon. Many wise men have warned us of this danger. Previous healthy memory posts, “More on Revising Beliefs” being one of them, reviewed the movie “Colossus: The Forbin Project.” The movie takes place during the height of the Cold War, when there was a realistic fear that a nuclear war would begin that would destroy all life on earth. Consequently, the United States launched the Forbin Project to build Colossus. The purpose of Colossus was to prevent a nuclear war before it began or to conduct a war once it had begun. Shortly after they turn on Colossus, they find it acting strangely. They discover that it is interacting with the Soviet version of Colossus; the Soviets had found a similar need to develop such a system. The two systems communicate with each other and come to the conclusion that humans are not capable of safely conducting their own affairs. In the movie the Soviets capitulate to the computers, and the Americans try to resist but ultimately fail.

So here is an example of beneficent AI: one that prevents humanity from destroying itself. But this is a singular case of beneficent AI. The tendency is to fear AI and predict either the demise of humanity or a horrendous existence. But consider that perhaps this fear is based on our projecting our own nature onto silicon. Consider that our nature may be a function of biology, and that absent biology, the motives we fear would not exist.

One benefit of technology is that the risks of nuclear warfare seem to have been reduced. Modern warfare is conducted by technology. So the Russians do not threaten us with weapons; rather, they used technology and tried to influence the election by hacking into our systems. This much is known by the intelligence community. The Russians conducted warfare on the United States and tried to have their candidate, Donald Trump, elected. Whether they succeeded in electing Donald Trump cannot be known, in spite of claims that he still would have been elected. But regardless of whether their hacking campaign produced the result, they definitely got the candidate they wanted.

Remember the pictures of Trump in the Oval Office with his Russian buddies (only Russians were allowed in the Oval Office). He’s grinning from ear to ear, boasting about how he fired his FBI Director and providing them with classified intelligence that compromised an ally. Then he tries to establish a secure means of communication with the Russians using their own systems. He complains about the Russian investigation, especially the parts that involve his personal finances. Why is he fearful? If he is innocent, he will be cleared, and the best thing would be to facilitate the investigation rather than try to obstruct and invalidate it. Time will tell.

How could a country like the United States elect an uncouth, mercurial character who is a brazen liar and who could not pass an elementary exam on civics? Perhaps we are ready for an intervention of benign AI.

© Douglas Griffith and healthymemory.wordpress.com, 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

More on Revising Beliefs

August 10, 2015

This is the third post in a series of posts on Nilsson’s book, Understanding Beliefs.  Nils J. Nilsson, a true genius who is one of the founders of artificial intelligence, recommends the scientific method, as the scientific method is the primary reason underlying the progress humans have made in the past several centuries.  We know from previous healthy memory blog posts that beliefs are difficult to change.  Yet we inhabit an environment in which there is ongoing dynamic change.  Moreover, modern technology accelerates the amount of information that is being processed and the amount of change that occurs.

The immediately preceding healthy memory post, “Revising Beliefs,” expressed extreme skepticism that there was sufficient sophistication among the public to implement the scientific method on a large scale in the political arena. Suppose this is indeed the case.  Suppose the world will be characterized by increasing polarization so that little or no progress can be made.  What is a possible remedy?

Here I wish that Nils J. Nilsson would write a second book on how technology, transactive memory in the lingo of the healthy memory blog, might be used to address this problem. During the Cold War there was a movie, Colossus: The Forbin Project. At that time there was a realistic fear that a nuclear exchange could occur between the United States and the Soviet Union that would obliterate life on earth. In the movie the United States has built a complex, sophisticated computer, Colossus, to manage the country’s defenses in the event of a nuclear war. Shortly after Colossus becomes operational it establishes contact with a similar computer built by the Soviet Union. These two systems agree that humans are not intelligent enough to manage their own affairs, so they eventually take control of the world.

Perhaps we are not intelligent enough to govern ourselves and need to turn the job over to computers. Kurzweil has us becoming one with silicon in his Singularity, so we would be as intelligent as computers. Suppose, however, that computers were infected with human frailties. In Bill Joy’s “Why the Future Doesn’t Need Us,” we are eliminated by intelligent machines. But perhaps he is projecting human desires onto computers. Perhaps they would not be motivated to dominate, but rather to assist. Or perhaps this feature could be incorporated by AI developers offering such a solution to a country, or a world, locked in gridlock.

So here is my plea to AI researchers and Sci-fi authors.  Please take this concept and run with it.

© Douglas Griffith and healthymemory.wordpress.com, 2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Limits to Human Understanding

May 20, 2014

This blog post was motivated by an article in the New Scientist, “Higher State of Mind” by Douglas Heaven. It raised the question of limits to human understanding, a topic of longstanding interest to me. The article reviews two paths Artificial Intelligence has taken. One approach involved rule-based programming. Typically the objective here was to model human information processing with the goal of having the computer “think” like a human. This approach proved quite valuable in the development of cognitive science, as it identified problems that needed to be addressed in the development of theories and models of human information processing. Unfortunately, it was not very successful at solving complex computational problems.
The second approach eschewed modeling the human and focused on developing computational solutions to difficult problems. Machines were programmed to learn and to compute statistical correlations and inferences by studying patterns in vast amounts of data. Neural nets were developed that successfully solved a large variety of complex computational problems. However, although the developers of these neural nets could describe the networks they themselves had programmed, they could not understand how the networks reached their conclusions. The nets can solve a problem, but we are unable to truly understand how they solve it. So, there are areas of expertise where machines can be said not only to know more than we do, but to know more than we are capable of understanding. In other words, what we can understand may be constrained by our biological limitations.
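
A tiny illustration of this opacity (a hypothetical sketch, not something from the article): the following Python/numpy script trains a small network on the XOR problem and then prints its learned weights. The weights solve the problem, but inspecting them yields nothing a human would recognize as an explanation.

    import numpy as np

    # A minimal two-layer network trained on XOR, shown only to illustrate that
    # learned weights "work" without constituting a human-readable explanation.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 0.5

    for _ in range(20_000):
        h = sigmoid(X @ W1 + b1)             # hidden activations
        out = sigmoid(h @ W2 + b2)           # network output
        d_out = (out - y) * out * (1 - out)  # output-layer gradient
        d_h = (d_out @ W2.T) * h * (1 - h)   # hidden-layer gradient
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print("Predictions:", out.round(2).ravel())    # should be close to [0, 1, 1, 0]
    print("Hidden-layer weights:\n", W1.round(2))  # correct, but not an explanation
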
So, what does the future hold for us? There is an optimistic scenario and a pessimistic scenario. According to Kurzweil, a singularity will be achieved by transcending biology: we shall augment ourselves with genetic alterations, nanotechnology, and machine learning. He sees a time when we shall become immortal. In fact, he thinks that this singularity is close enough that he is doing everything he can to extend his life so that he shall achieve this immortality. The notion of a singularity was first introduced in the fifties by the mathematician John von Neumann.
A pessimistic scenario has been sketched out by Bill Joy. I find his name a bit ironic. He has written a piece titled “Why the Future Doesn’t Need Us,” in which he argues that technology might be making us an endangered species.
So these are two extremes. A somewhat less extreme scenario was outlined in the movie Colossus: The Forbin Project, which was based on a novel by Dennis Feltham Jones, “Colossus.” The story takes place during the Cold War, with its confrontation between the United States and the Soviet Union. The United States has built a complex, sophisticated computer, Colossus, to manage the country’s defenses in the event of a nuclear war. Shortly after Colossus becomes operational, it establishes contact with a similar computer built by the Soviet Union. These two systems agree that humans are not intelligent enough to manage their own affairs, so they eventually take over control of the world.
So what does the future hold for us?  Who knows?

© Douglas Griffith and healthymemory.wordpress.com, 2014. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

How Much Information Is There and What Does It Mean?

September 27, 2012

A recent article by Martin Hilbert, titled “How Much Information Is There in the Information Society?”, was published in the Big Data special issue of the publication Significance: statistics making sense. Hilbert, together with his collaborator Priscila Lopez, tackled the task of estimating the world’s technological capacity to store, communicate, and compute information over the period from 1986 to 2007/2012. The complete collection of these studies can be accessed free of charge at

http://martinhilbert.net/WorldInfoCapacity.html

In 1949 the father of information theory, Claude E. Shannon, estimated that the largest information stockpile he could think of was the Library of Congress, with about 12,500 megabytes (a megabyte is 10^6 bytes). The current estimate of the storage needed for the Library of Congress has grown to a terabyte (10^12 bytes). During the two decades of their study, the amount of information quadrupled from 432 exabytes (10^18) to 1.9 zettabytes (10^21). For our personal and business computation we are familiar with gigabytes (10^9). Next are terabytes (10^12), then petabytes (10^15), the aforementioned exabytes, and zettabytes. Yottabytes (10^24) await us in the future.
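
For readers who find the prefixes hard to keep straight, here is a minimal Python sketch laying out the scale, using the figures quoted above:

    # The storage prefixes mentioned above, each 1,000 times larger than the last.
    prefixes = {
        "megabyte (MB)":  10**6,
        "gigabyte (GB)":  10**9,
        "terabyte (TB)":  10**12,
        "petabyte (PB)":  10**15,
        "exabyte (EB)":   10**18,
        "zettabyte (ZB)": 10**21,
        "yottabyte (YB)": 10**24,
    }
    for name, size in prefixes.items():
        print(f"1 {name:<15} = 10^{len(str(size)) - 1} bytes")

    # The growth Hilbert and Lopez report: 432 exabytes to 1.9 zettabytes.
    start, end = 432 * 10**18, 1.9 * 10**21
    print(f"Growth factor: {end / start:.1f}x (roughly a quadrupling)")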

Although these are measures of information in the technical sense, I prefer to think of them as data. I think of information in technical transactive memory as data. When it is perceived by a human it becomes information. When it is further processed by the human information processing system, it becomes knowledge. Suppose we all disappeared and the machines kept remembering and processing. What would that be? Perhaps sometime in the future machines will become intelligent enough to function on their own. There is a movie, Colossus: The Forbin Project, in which intelligent machines take over the world because they have concluded that humans are not intelligent enough to govern. Then there is Ray Kurzweil’s concept of the Singularity, when humans and technology become one. However, coming back to reality, I think there would just be machines storing and processing information absent true knowledge. We need to use technology to help us cope with all these data, and fortunately, according to Hilbert, computation has grown at a faster rate than storage.

Hilbert makes some interesting comparisons between the technical processing and storage of information and the biological processing and storage of information. In 2007, the DNA of the 60 trillion cells of one single human body would have stored more information than all of our technological devices together. He notes that in both cases information is highly redundant. One hundred human brains can roughly execute as many nerve pulses as our general purpose computers can execute instructions per second. Hilbert asks why we currently spend 3.5 trillion dollars per year on our information and communication technology but less than 50 dollars per year on the education of many children in Africa. I think what he is proposing is that we not lose sight of human potential. Although our brains and DNA have phenomenal processing and storage capacities, we only have access to a very small percentage of this information in our conscious awareness. The healthymemory blog makes a distinction among potential transactive memory, available transactive memory, and accessible transactive memory. Potential transactive memory is all the information about which Hilbert writes, as well as the information held by our fellow humans. Available transactive memory is the information we are able to find. And accessible transactive memory is the information we are able to access readily. The goal is for accessible transactive memory to grow into knowledge, understanding, and insight, as it is in these final stages that its true value is realized.

© Douglas Griffith and healthymemory.wordpress.com, 2012. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.