History of NLP Series #3: The Theoretical Foundations
[This article is adapted from the International Society of Neuro-Semantics (ISNS). Originally written by Dr. L. Michael Hall, co-developer of Neuro-Semantics NLP, in Meta Reflections #35, July 26, 2010.]
NLP began with the surprising effectiveness of certain linguistic patterns that Richard found first in Perls and then in Satir. John got involved when Bandler, then a fourth-year student, wanted to teach a class on Gestalt. The surprise that language itself, in the hands of two men with no background in psychology or psychotherapy, could facilitate pretty incredible therapeutic change sent them on a wild chase to find out what was going on. So, using the tools they had available, they began modeling the language of magic.
That’s what they called it, The Structure of Magic. Why? Because what seemed like “magic,” what seemed “magical,” was not really magic: it had a structure, and that structure could be identified. So what were the theoretical foundations of NLP at the beginning? Mostly and primarily Transformational Grammar. That was Grinder’s contribution to it all.
In fact, my read on things is that John had been looking for some way to use Transformational Grammar (TG) for some years. After all, it was his specialty. He wrote his dissertation on it. He even co-wrote a book in the early 1970s with Suzette Haden Elgin, A Guide to Transformational Grammar (1973). And in Whispering in the Wind, John wrote,
“We have stated that Transformational Grammar was the single most pervasive influence on NLP.” (p. 92)
So in terms of theory, NLP began with all of the premises and assumptions of the Cognitive Psychology models, which are inherent in TG. This explodes the myth that Bandler and Grinder propagated in the early NLP books: that NLP is a model and has no theory. Well, excuse me, but if “TG was the single most pervasive influence on NLP,” and TG was the work of Noam Chomsky, who along with George Miller is credited as a founder of the Cognitive Revolution in Psychology in 1956, then NLP does have a theory. It does come from a discipline (actually several disciplines), and so it does have premises and presuppositions.
Because Grinder and Bandler had their heads down, buried in the specifics of Perls’ and Satir’s language patterns and processes for facilitating growth, they were blind to the larger context: that Perls and Satir were leaders in the Human Potential Movement, carrying out the original vision of Maslow and Rogers.
So yes, NLP has a theory. And that theory involves the premises that any intelligent reader can find in Cognitive Psychology (Chomsky, Miller, etc.), in General Semantics, in Humanistic Psychology (Maslow, Rogers, May, etc.), in Gestalt, Family Systems, Bateson, etc. And as I have noted in numerous articles and books, Bandler and Grinder snuck in the theory and hid it in the form of the “NLP Presuppositions” (User’s Manual of the Brain, Volume I). In other words, if you want to find the theory, you need look no further than the list of NLP Presuppositions.
The map is not the territory. People operate from their maps of reality, not reality itself. You cannot not communicate. The meaning of your communication is the response you get. People are not broken; they work perfectly well given their representations and strategies. Behind every behavior is a positive intention. Etc.
And these are the ideas and premises that arose originally from Maslow and Rogers and that you can find scattered throughout the writings of Perls, Satir, and others of the Human Potential Movement as well as in Cognitive Psychology.
Now earlier this year, I have been in conversation with some people from the Grinder camp of NLP, and several recommended that I go back and re-read what Grinder wrote about the history of NLP—at least as he remembers it, or after he’s run the Change History Pattern on himself (!). So I did. And in doing so I now understand why Grinder does not understand or like Meta-States: he no longer likes the original NLP! In fact, in Whispering in the Wind (2001) he rejects a lot of what the rest of us call NLP. I did not fully pick up on this when I originally read the book.
For example, in that book he argues against accepting many of the NLP Presuppositions: “There is no need to subscribe to the so-called presuppositions of NLP in order to benefit from an effective application of the patterns to some problem or challenge. Normally these presuppositions include statements such as: having choice is better than not having choice. All resources necessary to make changes are already available at the unconscious level.” (2001, p. 201)
“If the so-called presuppositions of NLP are to be taken seriously this decidedly odd collection of different logical types and levels are badly in need of revision and reorganization. I believe that Robert Dilts played a strong role in their compilation. … Unfortunately, presuppositions, like beliefs, are ultimately filters that reduce the ongoing experiences of their possessors. We personally do not find any value in the enumeration of such rationalizations (the so-called presuppositions of NLP).” (202)
Even some of the presuppositions which Grinder himself introduced, he no longer accepts. For example, he no longer accepts the law of requisite variety.
“I accept responsibility for importing this law of requisite variety — here argued to be inappropriate for NLP practice.” (309)
Rather than base NLP on these premises and make them conscious, Grinder prefers to rest it upon something much more vague and nondescript: “the unconscious mind.” This, for him, is the chief flaw of what he calls “the Classic Code”:
“There are important decisions and it is unfortunate in the extreme that the classic code assigns the responsibility for these decisions to the client’s conscious mind— precisely the part of the client least competent to make such decisions.” (214)
“This makes the work shallow and unecological as the conscious mind is notoriously weak in its ability to appreciate what the function of a consciously undesired piece of behavior might be in the larger system of the person’s experience. The critique we offer is that such classic code patterns are flawed. They fail to provide for any systematic framing or access to the enormous potential of the unconscious.” (215)
“The unconscious is superior in its competency for accessing the long term and global effects of some particular change with respect to consequences. Consciousness with its limitation of 7 ± 2 chunks of information is ill-equipped to make such evaluations.” (218)
So does that mean that “the unconscious mind” is more competent to make decisions for us? Does that mean the unconscious mind doesn’t make mistakes (like allergies, false memories, auto-immune diseases, etc.)? And didn’t Grinder, quoting Freud, also postulate that there is no time in the unconscious mind? Then how does the “unconscious” now have such competency for accessing the long-term and global effects of the consequences of a change? All of that strikes me as especially convoluted.
Of course, many other problems are also created when a dichotomy is set up between the parts of the mind that are conscious and those that are not. Rather than solving problems, this only creates more problems.
Personally, I prefer the original NLP model that equally trusted (and distrusted) both aspects of our mind: what is conscious and what is outside consciousness. I like the original design of NLP: to discover how to “run your own brain” and take charge of your states. I like the original NLP that made its theory explicit, even as it hid that theory in the form of the NLP Presuppositions.