Re: Computer Power and Human Reason by Joseph Weizenbaum
Peter A. Putnam
January 20, 1978
Introduction
Weizenbaum points out certain fundamental weaknesses and inadequacies in the concept of a computer as implicit in present practice, when applied to human thinking. He develops a variety of compelling arguments for his thesis that brains must be calculators of a different genus from that of the computers they build (203). How might that be possible? What conceivable mathematical approach to computer modeling could give rise to a different genus? A Turing machine is already so general it can be turned into any other (63).
Weizenbaum's beautiful critiques present an ideal foil against which to suggest an alternative type of mathematical approach to human information processing that shows signs of integrating with our religious traditions, and meeting his critiques.
Perhaps, as Weizenbaum suggests, the barriers are now conceptual (not a lack of experimental facts). If, indeed, we are blocked in by unseen walls, errors that we take for self-evident truths, then merely showing the possibility of an alternative, a not obviously absurd other genus, may be an important task.
We will not here be concerned with evidence for an alternative (except very briefly to help clarify mathematical suggestions). We will limit ourselves to merely hinting at a kind of mathematics (parts of one-person game theory) that might lend itself to a different approach (new to computer theorists, but not at all new to human thought).
The defense of alternatives is now so desperate that to find a not-on-its-face obviously absurd approach, other than the self-evident necessity of the present computer species ("What else could it be?!"), is a step forward, perhaps even a main part of the battle.
I. Is Man a Computer?
What is a computer? A symbol manipulator (74) whose outputs are a finite function of past inputs, a finite set of pre-given rules, and the present state. Such themes can seem compellingly implicit in the inescapable nature of human thought (Turing). We have only a finite memory, we can only use a finite number of words, we can only coordinate a finite number of reasons. All sensory inputs, even the visual (184), can be treated as symbolic. The realistic "rest" left out can be only error and confusion (with quasi-random aspects), not a step towards something deeper or more.
Studies of what computers are add compellingly to this plausibility. The general-purpose computer has a universal character, so that any well-defined or effective procedure (algorithm) can be set up on it (63). Any computer can be transformed into any other (as the carpenter can play pope).
Some experts believe that any aspect of the real world can be simulated by computers (all we need is a more powerful machine) (201), and that computers will be outperforming man shortly (245), as they already outperform us at checkers and fast chess.
For now, we will merely ask, is the brain basically a symbol manipulator? Or is the symbol manipulation a secondary feature that arises as a result of the special character of the signals fed in, and some other more basic kind of law? In a computer the laws are laws of symbol manipulation (nearly) independent of the laws of physics which embody them (as long as they operate "properly"). Could the laws of brain operation involve the laws of physics in some deep, more intimate way, and still give rise (by emergence) to information processing, as derived rather than built-in laws? Life involves repetition, a seemingly innocent enough notion, rather bland and contentless. But are you so sure?
The computer's role as symbol manipulator, set up in ways independent of the causal laws of its embodiment, is an all-pervasive one. It reflects properties of the symbol in rigid static ways everywhere in its local organization. Symbols are very special kinds of logical entities. They are all-or-nothing type units. (A symbol can't be half asserted.) They also mutually exclude, and stand in relations of before-afterness (and compounds of these) to one another. Everywhere, in a computer, we see features that reflect this in static pre-given ways. Finite local micro or macro structures have useful natural simple relations to aspects of symbol manipulation. Is this also the case with man? Unless you give up belief in causal laws, there is a great deal of compulsion in "what else could it be?"
Perhaps in brains these symbolic counterparts are laid down more dynamically than statically and reside in the global interaction of act-related channels, rather than locally in flip-flop or gate-like structures. The brain has genetic pre-givens that help facilitate a dynamic emergence in a given type of context. Could these pre-givens obscure a more general law (independent of these pre-givens) hidden beneath? Could the way in which a mechanical conceptual parallel is set up in the brain, be fundamentally different from that in our computers?
Some observations are not incompatible with this. Turing's quintuples (62) are very general terms in which any effective procedure can be defined. There are also much more workable notions of flip-flops (registers), gates (and subroutines built of them), addresses (for both), and conditional branch instructions (96), via which anyone can be quickly and easily taught to program any effective procedure he can think up (104). But neither the most general forms nor the most practical forms seem to have any easy natural correspondence to nervous-system features (170). Nobody seriously presents neural counterparts to computer models. Neural models, as the experimental evidence presents them, seem too flexible, and computer models too rigid, for the two to have natural counterparts in each other.
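Turing's quintuple format is concrete enough to sketch. The toy interpreter below is an assumed illustration, not from the book: the rule names, blank symbol, and the unary-increment machine are all invented here. It shows how little machinery the "most general form" needs: a table mapping (state, symbol) to (new state, new symbol, move).

```python
from collections import defaultdict

def run(rules, tape, state="start", steps=1000):
    """Execute quintuples -- (state, symbol) -> (new state, new symbol,
    move) -- until the machine reaches the halt state or a step bound."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        new_state, symbol, move = rules[(state, cells[head])]
        cells[head] = symbol
        state = new_state
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip("_")

# A two-quintuple machine that appends one mark to a unary numeral:
RULES = {
    ("start", "1"): ("start", "1", "R"),  # scan right past the marks
    ("start", "_"): ("halt",  "1", "N"),  # write one more mark and halt
}
```

Here `run(RULES, "111")` returns `"1111"`. The contrast the paragraph draws is visible in miniature: the rule table is rigid, static, and pre-given, exactly the features that seem to lack natural neural counterparts.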
Indeed, computer models have contributed little to psychology or even problem solving (229). And at least one author who knows computers (Chomsky) thinks that to even try to translate (his) language theories into computer models would be a diversion and nonproductive (184). If we look at the situations where computer models have been successful, we find that it is only where a solid foundation of theory already existed (160, 231-2). Computer modeling does not help bring much new understanding in, if it was not already there.
If there is something fundamentally amiss about the computer analogy in understanding brain function, where look? After all, elaborate symbol manipulation does take place using brains. Computer models define symbol manipulation in terms independent of causal law, and embody them in specific internal physical counterparts. Could the symbol-manipulation aspect of brains not be independent, but involve the form of causal law, via the whole of being (the nature of the correlations in the data fed brains)? Could their symbol-manipulation aspects be incomplete, only dynamically latent? Could they be the end product of laws of emergence, not the starting place?
Pretentious, oversimplified, and overgeneralized presentations of the power and status of computer models, and of how they relate to the thinking process, help confuse real issues and hide possibilities. People talk, for example, as if the forms of thought and of program language were totally at one, and equate all "real" understanding with the ability to put it in program form.
It is questionable that program writing involves deep understanding (108), even though it is a merciless critic. What we call understanding involves the inter-relation and origin of issues not found in these programs. In addition, to equate program writing with understanding obscures the experimental nature of much program writing. We program to understand, not because we do (108).
Indeed, programs have a very distant and indirect relation to pure thought. They are more like little bureaucracies (234) than abstract ideas. Nor do computers behave as they are programmed to do in the sense in which this is apt to be taken (164).
The success of a program is often used to imply understanding, but this is often not the case. A patchwork of lists and formulas (that works on most of the cases you are apt to meet) does not imply understanding (110). (Computer "knowledge" is often like that of the overnight cram student who manages the exam.) Performance is not (at least in the "practical" sense) an adequate test of understanding or theory. Jump the patchwork out of its usual slots and it is lost, as are many simulation type programs.
And computer language is also much further from everyday language than it can be made to seem. Program language is just a mnemonic for machine language (100) and merges with the machine (103). It is hard to explain anything in a primitive vocabulary which has nothing to do with that which is to be explained, yet that is what programs which would translate ordinary language into machine language try to do (109). Ordinary speech goes via internal felt representations that have an ever-changing dynamics of their own and a self-repairing quality when verbal programs break down, qualities that have little counterpart in the representations used by computers.
The barriers to tying ordinary language to computer language are perhaps far deeper than people suppose (109-10), misled by the patchwork "success" of very limited vocabularies and question types. There is no indication that speech has a basis that is formalizable even in principle (197), as most people working in the field assume. But if what makes for a formal system is not already there, perhaps symbol, and system, abide in being, or the real world, already (as Socrates suggests). They come into our computers by emergence, not because of pre-given slots in us, but because of the pre-given nature of being. Perhaps aspects of the laws of emergence that gave rise to our computer remain, in some form, as essential aspects of that computer's routine operation as well.
In any case, the forms of thought and of program language don't merge in the way some enthusiasts suggest. Program language is almost too universal, and its forms or machine counterparts have no natural relation to the phenomena of subjectivity, which might seem an annoying and irrelevant epiphenomenon.
Where look? Oh, there is that silly little seed of repetition which might seem to relate rather too easily to causal or machine language — no bigger than a mustard seed. What could anyone do with that?
II. What Else Could It Be?
It is important to realize that a non-causal or non-predictive vocabulary, an "approach," is never right or wrong, only useful or not useful. Philosophies tend to involve a set of descriptive categories, all of which are "right" and all-inclusive, some more useful in one class of situations, some in another. They sometimes present problems in learning how to translate between them, but none can become "wrong" till its descriptive forms are compounded in such a way as to yield predictions. Thus Y = f(X) in the interval between X = 0 and 1 can be represented as a power series or the sum of a sine and cosine series. Which is right? That is a meaningless question. But "Which is useful?" is not. If a method or approach provides descriptive categories that are basic, it is generally overwhelmingly useful in endless ways. But, strange to say, computer models have been of no particular usefulness in either psychology or problem solving (229), as we noted earlier.
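The point about the two representations can be made concrete. In the sketch below (an invented example, not from the essay: the function f(x) = x(1 - x) and its standard Fourier sine series are assumptions of the illustration), the same function on [0, 1] is represented both as a polynomial and as a truncated sine series. Neither representation is "right"; both agree with the function to within truncation error.

```python
import math

def f(x):
    """The function to be represented on the interval [0, 1]."""
    return x * (1.0 - x)

def power_form(x):
    """Power-series (polynomial) representation: exact for this f."""
    return x - x * x

def sine_form(x, terms=50):
    """Fourier sine-series representation of the same function:
    x(1 - x) = sum over odd n of (8 / (pi^3 n^3)) * sin(n pi x)."""
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1
        total += 8.0 / (math.pi ** 3 * n ** 3) * math.sin(n * math.pi * x)
    return total

# Both representations recover f; asking which is "right" is meaningless,
# but which is more useful depends on the problem at hand.
grid = [i / 100.0 for i in range(101)]
max_err = max(abs(sine_form(x) - f(x)) for x in grid)
```

The polynomial form is exact; the 50-term sine series agrees everywhere on the grid to within about a millionth, and could be pushed as far as one pleases.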
Every theory has associated with it what might be called an "approach," a set of descriptive categories and types of construction in whose terms it is formulated or to which these lend themselves. When old theories first break down, or get pushed outside their sphere of usefulness, nobody ever admits that their associated "approach" might be wrong, or no longer useful. When an old physics breaks down there is always a way to complicate the old ideas so as to fit any specific set of new data. It is only the success of a new approach that kills the old; the old never kills itself, it just gets complicated and non-productive. (Thus the facts of spectroscopy were first fitted by mechanical views of the atom with internal springs and couplings, a variety of resonances, etc. Enough complications (comparable in number to the data to be fitted) can keep up with anything!)
Now the vocabulary of computer model-building is one in which anything can be simulated. If a type of modeling can contain set theory, it can contain anything with trivial ease. Computer modeling is not a predictive, merely a type of descriptive approach. "What else could it be?" causes confusion. But any and all other types of models "might" be "really" built up of information processing computers which are hidden beneath. Perhaps underlying the differential equations of field theory is a massive micro-computer that keeps all the numbers going on schedule. (There is a detective story that involves computer-like simulation of the real, good enough to fool someone, as God may be fooling us.)
If we are not to be tricked into interpreting data in terms of computer models until they have been proven "useful," we need a better understanding of what "useful" means. Usefulness does not mean detailed knowledge which does not, as such, constitute or imply any understanding (219) or usefulness. A pile of "correct" numbers does not get usefully at the reality of physics (159) or even of computers (136) where some form of functional positioning is essential to understand what goes on. (Even a correct wiring diagram of a computer circuit may be incomprehensible to the engineer who designed it until he is told its function. Then all falls magically into place.)
The fantastic success story of modern physics has restored to number something of the religious significance it once had for Augustine. A number or symbol is a very special kind of structure. Computers and people manipulate numbers or symbols. Hence, it is easily assumed the same kind of modeling must be useful for both. But the properties of numbers can be abstracted in many ways. The same abstract properties enter into radically different types of models. That the properties of symbols (mutual exclusion, all or nothingness, before-afterness) somewhere play an important role in brain modeling and computer modeling is not trivial. But it does not imply these models are of the same general type, or are usefully grounded via the same sort of descriptive vocabulary.
What makes a model useful? If it is not exactness (153) or being number-like, what is it? Usefulness means ultimately usefulness in (human) problem-solving. And what is a (human) problem? That is a very delicate question. The category problem is the magic unifier that runs through all fields of discourse. It is the one category common to sociology and mathematics, art and physics, politics and medicine. What is a problem? How so define it as to get at its universal unifying role?
Every problem has its aspect as a conflict over how to behave, over how to order such all or nothing behavioral units as words or acts. Models are useful, when they get at the structure of the field of possible or competitive words or acts relevant to a given behavioral conflict, for it is also in these terms that categories predictively narrowing the choice function are found.
For example, physics provides no model of the waves of the sea, and has no need to. Physics instead gets at their generating laws. (There are surface-tension waves, gravity waves, and terms that couple these to the wind, etc.) The waves of the sea are said to be understood because we know laws that generate a field of possible waves, even though we will never know the numbers that represent any specific one. The same is true of hallucinations as of waves of the sea. What is left in their incommunicable privacy (193) after we get at their generating laws is trivial indeed (as even St. Theresa noted to some of her nuns who got too concerned with such things).
The mystery of the deep side of human privacy is that it can (after a lifetime of struggle) be communicated, despite the initial seeming impossibility. (The shallow side is (best) washed away by "death's rude alembic.")
Physics gets at the waves of the sea via the structure of the generators of their field of possibles. These are isolated in simple extreme cases of pure type. It is structure, in this sense, that marks real understanding (152), not detailed exactitude or number. Indeed, useful understanding in physics often totally bypasses both precise detail and number. (What help are distributions of specific positions and velocities of many particles to understanding pressure or the uncertainty principle?)
Comprehension involves theory, or the structure generating a field of possibles in whose terms general goals and related choice functions can be oriented, and, hence, real (human) questions over how to behave answered. Are computer information processing (IP) categories the sort of structures needed to guide or orient our resolving questions regarding brain operation? When there are small finite static structures, with simple natural correspondences to the vocabulary of symbol manipulation, structures defined independent of causal law, then IP models have been useful. But no such set of correspondences has been found, and there is no sign it ever will be. Again (as with number) abstract bits (nerves are sometimes like wires, or coincidence gates, etc.) can be combined every which way, and enter into all kinds of theories. Novels use the same words as math books, but are not, therefore, math books.
Also IP categories are useful when we want information on the exact order of symbol changes. But is such exact order the nature of our de facto concern? We are inside our computers. Our inside status gives a very special perspective on its operation, and might be expected to shape the logical forms of our questions in peculiar ways. A good brain model might hope to take advantage of this (as statistical mechanics takes advantage of the fact that our concern is often limited to properties of equilibrium conditions).
Perhaps man is a species of computer of a fundamentally different genus from those he builds (203).
Where else might we look for a source of laws that might give rise to information processing, if we don't start with a general purpose computer (which is so general it can imitate any other)? The category "goal" seems essential to the human dimension of meaning. Of course, computers provide a rich variety of ways to simulate specific human goals, but there is something trite and superficial about such attempts. Religions often claim that there is an at least potential unity in human goals (reducing any specific one to subgoal status). This is not found in computer models of human goals. Might there be some other, radically different approach to the modeling of the category "goal"? If a radically different genus of computer model is thinkable, might it not center about some deeper definition of goal than can be usefully made in the present computer vocabulary?
Perhaps a small hint is found in the relation of "story" to the process of building up human understanding (185). We feel we "understand" when we can tell a story that recreates the given features. Of course, computers can also simulate this. But might this also be a hint of a simpler deeper role for the category emergence? Story smells of emergence.
Computer models must look to a genetic pre-encoding, and to Darwin's laws to draw this encoding forth, via a slow evolutionary emergence process. Could there be some aspect of this emergence process that might lend itself to (or leave room for) additional powerful guiding generalizations? Could such aspects integrate with information processing in an intimate way, forming part of it, rather than merely helping to select it from outside?
More concretely, any new laws or aspects hidden in emergence must be aspects of the laws of physics themselves. How is it possible that any simple unifying perspectives should be left hidden in these laws, after so many years of study? What conceivable room is there left for such? We know the causal laws that must underlie all of organic organization. We know them totally and precisely. Such continuing problems as are present in the laws of physics offer not the slightest possibility of the least qualification of our understanding of organic compounds or brains as now designed. The most basic laws underlying brain operation are now totally understood once and for all, without any question whatever. There are places where science is far more uncertain than outsiders realize (e.g., values), but there are also places where science is far more certain than outsiders often realize, and this is one. (Outsiders are often confused by talk about problems and contradictions in the foundations of mathematics and physics, and interpret this talk as implying working uncertainties in regions where these don't exist at all.)
This being the case, what possible room can there be for grand new simplifying perspectives?
We can offer two hints. Statistics gave rise to magical simplifications inside classical physics, in ways that were both grand and unifying and totally unexpected. Classical physics dealt directly with the laws of simple basic structures. The new physics took on complexity, but only at the disorganized level, where already such magical effects are found inside the old. Could yet another layer of magic hide inside the known old, as we pass on to the laws of "organized complexity"?
Another hint might hide in the logical nature of a causal program or general purpose heuristic (GPH) that must underlie brain functioning. A causal program that "succeeds," might (insofar as one could assume success) provide another kind of logical closure of universal (all) form. It might involve the felt and our special inside status. It might, combined with our previous clue, allow room for added sweeping generalizations that seem impossible in so tightly defined a world.
Is there a simple GPH underlying brain functioning that might be defined to fit into so tight a gap? If so, it might, as a secondary payoff, provide some understanding of the phenomenon of subjectivity, which appears now merely as an annoying epiphenomenon. (Of course, very few right "mights" in stock-market guessing make anyone a billionaire, but a few do.)
III. What Is the Question?
Could the overall aspects of brain operation be associated with some special type of question which could be taken advantage of to build a correspondingly special and simplified type of model? If the overall aspects could be handled separately, as we handle equilibrium conditions separately in physics, then, perhaps, remaining detailed secondary effects could easily fall into place.
The category question has an existential as well as formal aspect in models that treat our questions as "in" ourselves. This is a very peculiar logical feature (208-9). Does our inside status in our computers (209) force questions into a special logical form? And if so, (how) can we take advantage of it?
From a conscious, or present existential point of view, the act takes care of itself. We are never concerned with behaving, only with resolving conflicts over how to behave, a very different, and much more limited thing. Indeed, even these conflicts are of a very special form, involving only pairs of behaviors at a time. And our concern vanishes as soon as this behavioral conflict is resolved.
Now, this behavioral conflict (sometimes slowly) opens out into a formal verbal question via the digging up of the past history that underlies the behavioral branches in conflict. And what stops the given conflict, can, in practice, be similarly converted into a formal (relative) answer, by probing its felt historical counterparts inside us.
Could a limitation to a concern over ordering effects, taken a pair at a time, allow of the introduction of a radical new type of model to get at the over-all dynamics of brain functioning, and separate them off from the blinding details? In statistical mechanics we separate off the effect of single molecules, etc., and deal with them later if need be (via irreversible processes, etc.). Could a related separation be as important in dealing with brain operation? A letter j is treated as "the same" under a wide range of distortions, till it ceases to be recognized. Could the study of the details of such breakdown edges, how distortions affect response time, etc., obscure basic ordering effects, and be profitably separated off?
If so, what kind of special mathematics (150) could the brain use to take advantage of the special nature of the questions asked? Can we adapt our earlier hints in such a way as to take advantage of this? "Conflict over how to behave" suggests room for a quasi-random statistical element in filling in the conflict gap. Might this allow us to shift our attention from "details" to invariance properties that allow of emergence? If so, we have a natural lead-in for laws that describe historical emergence (another earlier hint).
Our special "emergent" modeling might come into its own where old systems break down. Where a system is adequate, then features already emerged might fit into present calculator paradigms, and not require our special models.
But how could this emergence process hope to find a place in a causal framework already so well defined? Perhaps by assuming a different, more integrated relation with causal law than is done in the genus of computers built by us, which are quasi-independent of the causal aspect of their embodiment (111). What might this relation be? To say that a correlation is causal is to assert a type of invariance property that could be relevant to emergence in a statistical interaction. Perhaps this invariance might be relevant in the context of complex, quasi-random conditioned reflex interactions. After all, the class of correlations picked up by the brain is a very special one, loaded with peculiar redundancies that are ultimately systematized by causal law. Could causal law have a double aspect, as laws of this world, and as parts of laws defining the invariance needed for a controlling emergence in conditioned reflex interactions?
And this would lead us back to our earlier "success" theme, as a back gate for getting in a new class of global universals where there is "no more room for any." Could the success of such an emergence process be involved in the logical form of the answer we seek?
The study of complex order may involve a radically different kind of thinking than the study of complex disorder, or simple things, as others have predicted. But radically different does not imply radically new (except to people inside physics). May it not rather involve a vast re-simplification, a discovery that old ideas (with the help of modern mathematics) harbor a depth of literality, and a force, that nobody had suspected? May it not force us to take seriously what had been taken before for granted as nice parable? Religious models are also concerned with isolating significant invariance properties that define the characteristics of behaviors that win out in ourselves and in social interactions. Religion, too, is concerned with a kind of statistical emergent. Could there be any good mathematics hiding in all that garbage we are ashamed to even refer to (261)? Could the answer have been under our noses all along?
IV. Goal Function Language
The specifically human dimension of problem solving seems tied to multiple goal function language. But in computer models of human goal solving, what we get is a patchwork of odds and ends, empty heuristic slogans (196) that touch no general principles of interest (197). It is a general form for anything, with no structure (178), that sidesteps all basic questions about how goals interact and evolve (179).
Problems can always be further interpreted in terms of goals with related search areas. That ends suggest means is at the core of this patchwork (169). "If we provide a representation for [what is desired, under what conditions, by means of what tools and operations, starting with what initial information, and with access to what resources], and assume that the interpretation of [the symbol structures that represent this information] is implicit in the program of the problem solving information-processing system, then we have defined a problem" (quoted from Newell and Simon, 179-80). And then we can set it up on a computer. But, as we know, in real life, the main part of the problem is "defining it," "finding it," "asking the question," not solving or answering. Of course if a larger goal is given, sub-goals can be defined or found or asked about in a relative way as part of the problem solving. But the messy patchwork of today's practical simulations in effect assumes that the host of existing basic goals can be made clear and need not collide. It sees no basic latent inadequacy in the patchwork that might call for a broader unifying theory involving the dynamics of sub-goal emergence.
In Simon and Newell's General Problem Solver (GPS) these end-means themes are defined somewhat more concretely. "Classifying things in terms of the functions they serve, and oscillating among ends, functions required, and means to perform them — forms the basic system of heuristics of GPS. More precisely, this means-ends system of heuristics assumes the following:
"1. If an object is given that is not the desired one, differences will be detected between the available object and the desired object.
"2. Operators affect some features of their operands and leave other unchanged. Hence, operators can be characterized by the changes they produce and can be used to try to eliminate differences between objects to which they are applied and desired objects.
"3. If a desired operator is not applicable, it may be profitable to modify its inputs so it becomes applicable.
"4. Some differences will prove more difficult to affect than others. It is profitable, therefore, to try to eliminate 'difficult' differences, even at the cost of introducing new differences of lesser difficulty. This process can be repeated as long as progress is being made towards eliminating the more difficult differences." (172)
This is surprisingly adequate to a wide range of problem solving, but it assumes that goals are given already, and, except in secondary ways, provides no framework for their emergence. When this vocabulary is applied to real human situations, where such emergence is central (e.g., psychoanalysis), it can seem ludicrous indeed (181).
To commit yourself to a complex set of pre-given goals (263), assuming that the goal functions are already obvious and well known, and that the issues only involve their implementation, is a deeply reactionary policy in practice. It also goes counter to all practical life experience.
Sub-goals do collide in very basic ways. Could their collision, not their implementation, be at the core of a wide class of basic human problems, those involving overall brain operation? Of course, we could deny the very possibility of a unified approach, or of a single unified goal function which reduces other goals to sub-goal status. This would "handle" this class of problem by saying there was no way to handle them, that they were formally insolvable, and turn us back to life to see what happens. But science is too egotistical to admit that.
Belief in the existence of a single unified goal function is implicit in some religious language. The scientific attack on this language seems to imply that there is no need for a unified view (the sub-goals are clear enough).
But if, indeed, real adult human problems derive from conflicts among our best-defined (sub?) goal functions, rather than from our failure to fully and adequately implement known (sub?) goals, might this not be a step in pinpointing the nature of the basic inadequacy in computer modeling, a step towards defining computers of a different genus?
Might such a goal function allow us to approach the problem of the emergence of (sub) goals, and the relation of this emergence to changed perception (179) — problems totally sidestepped in present theory? Indeed, the whole phenomenon of subjectivity is left as an annoying epiphenomenon by the patchwork approach. Could a unified goal function shed some light on the phenomenon of subjectivity?
Can we find a unifying goal function that will fill in our earlier hints? If so, it must satisfy certain conditions. For it to relate (sub) goals to conflicts over how to behave, the category “act” must play a central role in its definition. It must rely upon a quasi-random statistical element to give rise to the phenomenon of emergence and to magical new simplifications hiding inside the old, under our very noses. It must assume a different, more integrated relation to causal law (which is irrelevant to computers of the genus we build), perhaps via that law's second role as defining invariance properties needed for emergence.
How could "the unified goal" emerge if not already there? What could smell of a reason for such emergence at all? What is not a computer trick for defining (sub?) goals, yet forms a part or aspect of all their meanings? What metaphysical tour de force could possibly fill such a prescription?
V. Repetition
Could repetition be a candidate for the global unifying goal function we seek? It has a foot in both the mechanical and conceptual world, which is, perhaps, not a bad sign. Repetition is close to the heart of what we mean by meaning (as Eddington notes). It is also a mechanical property, closely related to a basic feature of brain operation, the conditioned reflex principle, which, in turn, has a foot in causal law. Could repetition be, at once, a most general conceptual form under which we experience our own will, as felt, and an aspect of causal law? Perhaps reinforcement is, ultimately, a function of repetition, and other criteria (hunger, etc.) are logically (genetically pre-encoded) sub-goals that are (or in man become with maturity) subordinated to it.
This is a wild metaphysical jump, with all our terms undefined. Are there ways to start some defining without obvious instant absurdity?
What is "repetition"? Repetition is here meant in a total institutional sense. A pendulum does not "repeat" in this sense because it stops. But the use of pendulums in clock design, as part of an institution that involved teaching the designs to the young, and adopting numbers built to social needs, might be. Malinowski called the institutional element the concrete isolate of cultural anthropology. Life is a great self-reproducing cycle. In its most primitive form we eat to work, and work to eat. We work enough to fulfill our eating needs, and eat enough to fulfill our working needs.
Actual life is a much more complex set of interlocking cycles. Malinowski's theme can be interpreted as asserting that all (all) the internal structure or values of a society derive from the needs of one part of the complex cycle to lock in with and support other parts.
This theme is a double-edged weapon. It not only defines the logical form of all of a society's internal structure or values, it also defines the class of all possible societies. The theme is that if you can construct a set of mutually supporting cycle components that are viable, then that forms a possible society.
This assumes a working unity in the decision process of any group. The variety of internal conflict must be either positively functional, or inherent in the ambiguity of the best available methods for predicting what repeats, as it affects the planning or decision process at that given level of causal insight.
Malinowski's approach provides a sharp criticism of many (not all) "historical" explanations. Past history, if well known, can usually be shown to harbor precedents for any potentially useful response, and more. Why we do this rather than that can, therefore, not be "explained" from past history. The problem is why we choose this rather than that branch of past history to imitate. And the reason lies in what might be called its functionality, relative to certain present issues. Of course, a certain type of self-recreating cycle, once entered into, is stuck to, or continued, till its own consequences or changed conditions lead to a change. To this extent history explains.
The theme we would here develop is that repetition is a total and all-inclusive goal function, of which all others are sub-goals (some genetically encoded) which subordinate with maturity.
At first glance repetition can seem too easy, only one among many factors. But one must remember that all parts must be integrated into one another to form a totally self-reproductive system. The style phenomenon forces this de facto unification of the group's decision process, whether we plan it consciously or blindly. When this is taken into account, repetition takes on a very different aspect. Repetition emerges as a "total impossibility."
Since we start genetically and imitatively with well tested adjustments that repeat, we take the category repetition for granted. "The problem," as we first meet it subjectively, takes the form of an excess whose parts compete with each other. From a classical point of view, nature seemed built out of ingressing felt universals that formed part of a repeating system by their very definition. Hence, repetition presented no problem. But as we shift to a point of view that perceives time as a collection of abstract units, not as a sequence of constantly recurring events (21), we are faced with the problem of recapturing this recurrence or repetition, from elements (atoms, preserved patterns of local difference — or differential equations, etc.) that do not have any repetition built into them. In this perspective repetition emerges as a totally impossible miracle.
From its perspective as a near miracle, the use of repetition as a total grounding of the value problem looks less absurd. And repetition, as goal function, has a very different relation to causal law than the more specifically encoded goal functions of (for example) the General Problem Solver. Repetition (if correct) has the logical status of an abstract aspect of causal law itself (rather than being a special feature independent of causal law). The world appears to be such as to favor the self-emergence of certain very special patterns, those that are self-reproducing. This goal function appears pre-encoded in the nature of being itself. If so, this might ground a computer of logically different genus from those we build. Is there any indication that such a goal function and related point of view could be helpful in understanding learning or emergence?
Repetition locates places via functional cycles made up of interlocking techniques. Although over-all theory, and the simples that a given theory implies, change radically, in jumps, as institutions evolve or differentiate, the concrete techniques that the theories systematize or coordinate do not. Techniques appear to evolve in a continuous additive way (Kuhn), throwing off old theories as snakes do skins, when they get too rich and complex to fit into them. This very special additive perspective on the learning process, when analyzed in terms of cycle components, contributes in an important working way to the potential importance and centrality of repetition's role. What counts is that an additive method of analysis exists, not that endless non-additive approaches also exist. (So, too, what counts is that a method exists to assign conserved numbers locally — the fact that most ways to generate numbers do not lead to conserved ones is unimportant.)
Techniques refine in a continuous additive way. Any functional method once found, is potentially usable forever. It may be displaced in the cycle by other, more effective methods that compete for that given ecological or functional niche. But it remains as part of the description of the field of possibles.
If we use the functional cycle as our basis of orientation (instead of space time), then the same things or places, when they play a different role in that functional cycle, are treated as different. Perhaps the word "site" could be used to distinguish functional location from "place," which relates to physical location. Sites individuate in a continuous additive way, as an aspect of the additive refinement of techniques. The additive character of component change in this repetition perspective perhaps makes it a candidate for helping understand the learning process causally.
Repetition also smells a bit of a type of category that might account for the logical form of the felt as felt. Most of the internal structure of the body (cell walls, bones, etc.) has no subjective counterpart. We only experience a sensory input when it is useful in resolving conflicts in the order of emission of acts and words, or in selecting the reinforced path that repeats. If a sensory input has no such usefulness then it cannot be felt even though present. (For example, the capillaries in the cochlea of the ear give rise to a noise we cannot hear till an object (e.g., a seashell) applied to the ear changes the quality of this noise by changing the resonances of the cavity or air volume associated with the ear. Such examples can be easily multiplied.)
Thus repetition has the smell of a kind of category that might serve as a unifying goal function and carries with it, already at the start, hope for an additive perspective that might help simplify our understanding of the dynamics of the learning process, and, in addition, features that might be relevant to getting at the logical form of the felt as felt.
VI. Middle Class Illusion and Lasting Change
However, right at the entrance way, there are certain very powerful working experiences that seem to specifically contradict the possibility of a unifying goal function of the form repetition.
If we examine the logical form of our value systems, as presently articulated, we will find that they are in large part concerned with finding ways to choose between what appear to be logically competitive forms of repetition, often by labeling certain of them good and others bad, or some other functional equivalent (mature, immature, etc.). Thus, the logical form of the decision process, as almost universally verbalized, and in almost all practical usable models, seems to be directly opposed to the suggestion we are here making. If values have the logical character of choices as among types of repetition, rather than as ways to find repetition at all, then, of course, the whole approach is shot down as obvious nonsense right at the start. A goal function, it would seem, must have some logical status prior to repetition, so as to offer a logical basis for choosing as among competitive forms of repetition.
This illusion (if it be such) is deeply and almost universally ingrained in most formulations of the value problem. It is experienced as self-evident, and hardly requiring justification. What we are here suggesting is that this is an illusion that derives from the very special functional role of group verbalizers, the middle class model maker.
Any group must have some mechanism for centralizing the decision process. When faced with a conflict (and related problem) that is not yet ripe for solution, someone must, in effect, choose sides or compromise it, to provide a way of living with it, and generating the needed data for a later solution. Nothing is so terrifying as the inability to decide.
The middle class specialize in the polishing of this centralizing process. It is essentially a secondary function, to help coordinate groups of people in the carrying out or elaborating of already relatively well defined tasks. When faced with a conflict for which there is as yet no answer, they must choose sides in a way that best preserves the unity and productivity of the group.
The middle class merely over-rationalize this function. They confuse the need to centralize, a relatively mechanical process, closely related to consensus smelling among the powerful, with the distinction between (a special kind of) truth and falsity (called good and evil).
In fact, they are merely throwing dice in an opportunistic way, trying to smell out who will win, and filling in the rest by chance. Doing this does not produce stable answers, merely a style cycle. Errors come in complementary sets. All "sides" in error continue to co-exist, until some deeper synthesis is achieved. In many cases, certain types of errors are of a kind that many may share, while their complementary errors are of a minority type. Neither is better. Both are needed to generate a style cycle which will ultimately force up deeper invariants once the law of the style cycle is itself penetrated.
Once the middle class choice function is put in this broader perspective, we can see how it might still be possible for repetition to serve as a global unifying goal function.
If evil were an illusion (as St. Thomas says), merely an absence of knowledge (evil ultimately ignorance of the laws of physics), then the moral choice functions could be viewed as a relatively blind mechanical trick for keeping the group decision process centralized and not a choice between competitive types of repetition, from the deeper ground of good versus evil. As arbitrary centralizing choices push to their consequences, helping old forms elaborate, they finally generate deep crises of a form involving existence versus non-existence, that threaten repetition itself. At this stage (and only then) is the conflict ripe for finding answers (as Goethe says in his Elective Affinities). If, indeed, every real fully penetrated question has the logical form of existence (of repetition) versus its non-existence, then we might still hold out some hope for repetition as the global unifying goal function.
A closely related issue involves the logical form of the control variables of lasting change. Most changes are like throwing stones in a puddle. They create a commotion, but they lead to no lasting change in life styles. Human institutions are enormously delicate in their balance. Any least thing changes the specifics of the future totally. A changed pattern changes neural timings, changes behavior a bit, lets one person die, another live. In crisis the future of nations and companies, etc., all depends on the least things. It is only too easy to convince yourself that anything changed changes everything forever, after a brief delay.
But such changes have a trivial side. It is really just the same old thing. A spends 10^7 dollars and puts up a hospital at B. This makes C's career, and ruins D's. But in the long run "nothing" is changed. Life styles remain the same.
What is it that we can do that leads to lasting or institutional change, that really changes the range of life styles in the style cycle? What is the logical form of the control variable of institutional (as distinguished from incidental) change?
The answer we would propose is that the control variable of lasting change has the logical form of causal self-model insight, which serves to isolate the repeating paths hidden off in the chaos. We enact our theory of the repeating paths. Thus, the conflict over repetition is, ultimately, a conflict as among competitive causal insights that serve to define the logical form of repetition itself. This insight is double-edged. If only causal insight leads to lasting change, all maxims not rooted in causal insight form parts of style cycles with their opposites, with which they ever co-exist.
In this way the social dynamics is taken up as part of the opening of the laws of physics themselves. "To an historical eye, there is only a history of physics" as Spengler says.
If, as we are here suggesting on very inadequate grounds, repetition serves as the de facto global unifying goal function in the life game, then all other goals must be reduced to subordinate sub-goals. In chess we may have as sub-goals "get control of the center" or "avoid doubled pawns." These may collide, in practice, if we can only get greater control of the center at the price of doubled pawns. When this happens, we have a problem that can be solved, because they are sub-goals of "checkmate the other man's king." So, too, all problems involving colliding goals in life may define solvable problems, because these goals are really sub-goals of repetition. If they were not sub-goals of a common goal, their collisions would be physical only, and have no problem-like aspect.
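The chess example can be made concrete with a toy (and entirely invented) move chooser: the sub-goal scores may pull in opposite directions, but the terminal goal, when in reach, overrides them all:

```python
# Toy illustration, not a chess engine: sub-goal heuristics
# ("control the center", "avoid doubled pawns") conflict, and the
# conflict is decidable only because both are subordinate to the
# terminal goal, checkmate. All position data here is invented.

def choose(candidates):
    # The unifying goal overrides every sub-goal when it is in reach.
    for move in candidates:
        if move["is_checkmate"]:
            return move["name"]
    # Otherwise sub-goal scores (which may pull in opposite
    # directions) are traded off against one another.
    def score(m):
        return m["center_control"] - m["doubled_pawns"]
    return max(candidates, key=score)["name"]

candidates = [
    {"name": "d4",  "center_control": 2, "doubled_pawns": 1, "is_checkmate": False},
    {"name": "a3",  "center_control": 0, "doubled_pawns": 0, "is_checkmate": False},
    {"name": "Qh7", "center_control": 0, "doubled_pawns": 0, "is_checkmate": True},
]
print(choose(candidates))  # Qh7: the mating move wins regardless of sub-goal scores
```

In a real engine the subordination is implicit in deep search rather than an explicit check; it is made explicit here only to exhibit the logical point that colliding sub-goals are adjudicable because they share a master goal.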
Hopefully, repetition can be defended as at least still a possibility worth further testing. Clearly this approach would involve a different genus of computer, a genus whose categories show signs of formalizing, or making into good mathematical causal models, parts of our religious traditions.
But how follow through on this? Can the basic details of the known operation of the nervous system be derived (via emergence) from the repetition theme (in combination with the known laws of physics), rather than merely treated as pre-given in ways independent of the laws of physics (as is the case with computers we build)?
VII. The Symbol as Move
The notion of a symbol that is so central to our understanding of what information processing means has very special properties, which are often taken for granted. But these properties must have mechanical counterparts. As with the properties of such concepts as atoms, or steel bars, they help give the concept "symbol" its footing in the real world.
Symbols have an all or nothing character. A word cannot be half emitted. In addition, they are oriented by before-after relations, where they mutually exclude. Two different symbols can't occupy the same position. We can't say "yes" and "red" at once.
In the perspective of computers built by us, these are arbitrarily pre-given structures. They are embodied in endlessly different ways, but all these ways have a pre-given character, independent of causal law.
What other sort of logical origin might they have? How else could we hope to root such properties, logically?
More specifically, is there any way in which they might be derived from the general properties of repeating paths in the type of logical environment that our world presents? Can we say anything about the properties of repeating paths at so general a level?
Broadly speaking, the field of possibles within which self-reproducing paths are to be found is generated by a set of universals that are everywhere locally the same. (Hence, the usefulness of differential equations to get at more global consequences.) But these local universals are defined in ways independent of repetition (21). Indeed, such patterns lead off in all manner of directions that have little or no suggestion of repetition in them. It might seem that repetition was nonsense and impossible.
The recognition that repetition is "impossible," roughly speaking, is a key one. Once repetition is no longer taken for granted, we can come back and see if there is not a tiny loophole somewhere to let it back in.
It seems that there are certain very special niches, hidden off in the field of possibles and well isolated from each other, that have at least the potential of serving as parts of repeating paths. Even these mostly don't, but they have a "chance."
These special niches represent simple forms of qualitative changes of some pure type. Thus in the range of motor displacements of the body (our field of possibles) almost all distortions are meaningless convulsions with no chance of becoming part of any repetition. Only certain very special ones (walk, turn, sit, grasp, scratch, etc.) have any chance for emission as parts of repeating patterns. These groupings that have some potential utility are very narrow bands in the field of possibles. There is a genetic pre-encoding which organizes these into their various functional centers, which mutually exclude, and have an all or nothing type character in their responses. Such genetic pre-encoding could be regarded as a pre-acceptance of the class of possibilities that repetition does sometimes reinforce in some sequential positions.
Only a relatively small set of all-or-nothing groupings that represent simple forms of important qualitative transitions (place change, direction change, etc.) get a chance to enter in. But these compound in an exponentially opening way which rapidly generates an enormous region of choice, even within these very narrow restrictions. Infinity is a very big place, and can be cut down infinitely many times and still be infinitely big if you do it right.
This notion of enacted symbols is not abstract. It involves the context dependence that separates them. Hence, the whole of being might be treated as built out of their conditional correlations.
Certain properties of symbols, mutual exclusion and all or nothing, closely parallel corresponding properties of the nervous system. Reciprocal innervation, which is so prominent a pattern in the organization of the spine, has counterparts in the central nervous system (e.g., strong facilitation anywhere on the cortex leads to inhibition elsewhere). This suggests that symbol-like units may form part of a special sort of modeling that may turn out useful in understanding how brains operate. But in the case of the nervous system, there is some evidence that these symbolic properties may be rooted in a very different logical way than is the case with computers we build. Mutual exclusion and all or nothing, as themes, may ultimately have a dynamic basis that derives from the limited pure type character of the paths that repetition reinforces.
At the level of the single cell (before nervous systems) the atom indirectly provides for the entrance of a symbol-like mutual exclusion. Significant metabolic steps involve atomic re-arrangements which have an all or nothing mutually excluding type of character. The continuity of identity of mass, and minimization of rates of entropy production are other very general causal properties that contribute in obvious ways to the possibility of the early stages of self-emergence of self-reproducing sequences of symbol-like steps.
Once life got well started, the set of possibilities (recombinations of the twenty amino acids in polypeptide chains) was so vast and rich in its exponential opening that all the search since has been (till the recent emergence of language) internal to their recombinations.
The act generalizes on and simplifies the notion of metabolic step. As life forces its way out of the ooze certain potential generalities are drawn out which were only latent in the very concrete nature of the first sets of self-reproducing devices. These are given an embodiment independent of many early peculiarities that allows of the opening of ever more extended fields of possibles (reduced to a choice of single act via the goal function repetition). How great a range of possibles can be investigated seems limited only by our knowledge of the causal laws needed to efficiently explore these fields.
Thus the logical root of the concept symbol, or the properties associated therewith, may be traceable to certain very special aspects of causal law, as aspects of repetition and its self-emergence. Such a derivation of the role of properties all or nothing and mutual exclusion would support the theme that human brains were computers of a different genus. But there are many more qualitative hurdles that must be cleared before the goal function repetition can hope to be regarded as a logical candidate worthy of serious investigation.
VIII. The Conditioned Reflex Principle
The conditioned reflex (CR) principle is central to brain operation. How does it relate to the category repetition? What is its logical basis in this new context?
The CR principle, in this broadest context, serves to interlink context dependent symbol-like units in certain conditional ways. Via the notion of a tropism, the act is treated as context dependent from the first. In man there is an excess of competitive tropistic responses that tend to obscure each other. The CR principle draws out and interlinks certain of these act-like units, in ways which are a function of past history.
The notion of a conditional correlation among acts is a very special kind of computer machinery in the perspective of conventional computer design. But in the context of a world built up out of context dependent acts, or symbol-like units, the conditional correlation may be viewed as the bare minimum essential to the very possibility of meaning. All meaning depends on correlation. Thus, at the heart of the CR principle is a quasi-epistemological principle that relates the possibility of meaning, to correlation among act-like units. It is a logical minimum required for any meaning at all if repetition must go via mutually excluding all or nothing channels of limited pure types.
In postulating a central role for the goal function repetition, it was suggested that it might generate a CR dynamics in which an invariance related to repetition was needed for emergence, this invariance taking the form of causally rooted configurations selecting (and predicting) the repeating paths. In a very local limited way the CR principle begins this theme. For a CR to emerge at all, there must be some very locally repeating (and so causally rooted) correlation in an objective status that it, in effect, picks up. The CR principle begins the task of drawing in bushels full of causally rooted universals which are sure to be under or over-generalized as first picked up. Presumably, the CR interaction generates needed qualifying variables in some way.
The form of the CR principle is more closely defined by considering the logical problem involved in setting up values in causal terms. At first a causal rooting of values might seem a contradiction in terms. If you correctly predict what is going to happen, it's still going to happen, so in what sense can predictions involve values that change or decide things?
The relevance of prediction to decision and values lies in the acceleration it effects. If you know that a bridge will break if built in a certain way so you will have to rebuild it stronger, you don't build it that way, but build it stronger from the first. Right action is an acceptance of fate in "a" sense, but that acceptance is not neutral, for it greatly accelerates fate. And many little accelerations combine to give rise to deep qualitative ones.
Now the CR principle (merely) reflects this in a primitive elementary way. If act A follows act B (in a given context) then an elementary effect of the CR is to make A follow B harder, faster, and more often. It accepts what is more likely to happen (because it happened in the past) and pushes it on harder, to its consequences.
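The elementary effect just described reduces to a one-line update rule. A minimal sketch, with context and act names that are our own invention:

```python
# One-way conditioning rule as described above: each time act A
# follows act B in a given context, the (context, B, A) link is
# strengthened. Nothing in the rule itself ever weakens a link;
# regulation must come from elsewhere, as the text goes on to argue.
from collections import defaultdict

class ConditionedLinks:
    def __init__(self, increment=1.0):
        self.weight = defaultdict(float)
        self.increment = increment

    def observe(self, context, b, a):
        """A followed B in this context: make it follow 'harder'."""
        self.weight[(context, b, a)] += self.increment

    def strength(self, context, b, a):
        return self.weight[(context, b, a)]

links = ConditionedLinks()
for _ in range(3):
    links.observe("kitchen", "see_kettle", "make_tea")
print(links.strength("kitchen", "see_kettle", "make_tea"))  # 3.0
```

The monotone increase is the point: the rule only "accepts what is more likely to happen and pushes it on harder," exactly the acceleration-of-fate theme of the preceding paragraph.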
It sometimes bothers people that the feedback from a prediction enters into the prediction. They seem to think that this invalidates the prediction in some way. But physicists deal with this kind of feedback every day. All you have to do is take that feedback explicitly into account. When the moon goes around the earth, the motion of the moon affects the earth's motion, and that of the earth affects the moon's. It has been philosophically argued that this would prevent solution too, but today physicists take such interactions for granted. In the case of predictive feedback, the problem is much easier, because we are dealing at the level of semi-statistical emergence (from more quasi-random material) of universal rules of behavior in group interactions. This gives us a cushion, so to speak, that even takes most oscillations out of the interaction so that it is far easier to handle.
The CR principle may also be viewed as containing the Augustinian Trinity in embryo. It is a statement of past history (Father), since it implies a past that gave rise to it, much as a footprint in the sand is a statement of past history. It is a statement of will (Son), since its presence affects future acts, and is the mechanism of the will. It is also a present intellectual statement (Holy Ghost), since the emergence of a CR depends upon some repetitive aspect of the environment, which is, in effect, abstracted or implied by the CR. The attribution of universals to the environment is the root form of the intellect.
It is important to note the one way nature of the CR principle as thus formulated. It ties act A to act B harder and faster, but it has no provision for untying them. This one way nature has deep consequences for the whole organization of the dynamics of thought. It means that there is no internal regulation in the CR principle, so that all regulation must depend upon its own negative consequences, or things that might be interpreted as contradictions to which it gives rise. Thus all regulation or learning might be viewable as built up or compounded in steps involving specific contradiction. In short, the one way nature of the CR principle may be a key mechanical counterpart of the Hegelian theme on the form of all learning.
The CR principle is not itself to be identified with learning. The CR gives rise to too vast a horde of links every which way for that. Rather, learning is to be associated with a selection as among CRs that arises from their interaction, (ultimately) reinforcing that which repeats. This is an important distinction. There is a gap between acts which "may" be repeated (that tell us something about the world), and further qualified acts which form part of a successful self-reproduction, or repetition in the full sense. It is important to separate these two, often confused, usages of repetition.
Finally, it is vital to recognize the essential role of internal anti-repetition devices in the successful dynamics of the brain. Any internal form of repetition gives rise to violent inhibitions that squelch out related patterns. There are, of course, endless feedback effects in the brain, but these are kept constantly changing in form by such devices as inhibitory collaterals that keep the same cell groups from continuing to fire. Because of an array of such anti-repetition devices, the only way that full (reinforced) repetition can be found is via external return to functionally equivalent situations. This, in effect, also means that repetitions have to go via act-related channels in the brain to be found at all. Such repeating returns to objective, functionally equivalent situations are nearly impossible to find, but only nearly. Imitation, as well as genetic pre-encoding, helps guide us to those near-impossible combinations.
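A toy simulation (our own construction, not a claim about actual neural circuitry) shows the logic of such an anti-repetition device: even the most excited unit cannot keep firing, so activity is forced along ever-changing internal paths:

```python
# Each time a unit fires, an assumed inhibitory collateral silences it
# for 'refractory' further steps, so the same group cannot fire
# repeatedly: internal repetition is ruled out by construction.
def run(excitation, steps, refractory=2):
    inhibited = {u: 0 for u in excitation}
    fired = []
    for _ in range(steps):
        # the most excited unit that is not currently inhibited fires
        ready = [u for u in excitation if inhibited[u] == 0]
        winner = max(ready, key=lambda u: excitation[u])
        fired.append(winner)
        for u in inhibited:
            if inhibited[u] > 0:
                inhibited[u] -= 1
        inhibited[winner] = refractory
    return fired

# "A" is always the most excited unit, yet it cannot dominate:
print(run({"A": 3, "B": 2, "C": 1}, steps=6))  # ['A', 'B', 'C', 'A', 'B', 'C']
```

With repetition blocked internally, the only repetition the system can "find" is the external return to functionally equivalent situations, which is the text's point.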
IX. One Person Game Theory
With the category move (act, symbol) in so deep set a status and a goal function, repetition, defined in its terms, we have, in effect, beginnings of a world view and related new genus of computer oriented by a one person game theory. Each culture, as Spengler notes, is rooted in a self-model, or method of predicting human conduct in conflict situations. ("Religion is the causality of conduct" (Spengler)). If our clues are right, we will shift our basic rooting, from a space-time frame to a one person game tree.
The following paragraphs try to suggest how certain natural features of one person game theory might lend themselves to describing aspects of brain operation.
What is here presented is only a set of descriptive categories (that may parallel or fit known effects), not a causal model. Our aim is merely to try to show that a one person game theory framework might lend itself to representing human problems as a step to finding mechanical (and so causal) counterparts. We try to support the theme that the goal function repetition is not obviously absurd, by showing that one person game theory language has a richness of features and constructs that appear to fit in a simple way, at least descriptively, aspects of the problems to be attacked.
A game can be represented by a tree of possible moves. (For example, we might take the words in Webster's dictionary as our set of allowed moves.) At the bottom of the tree are all possible first moves. The tree opens from there exponentially. The world would then be viewed as built up most basically out of heuristics of the common goal function, rather than out of things in space. Space itself becomes a derived category. Poincare calls time deeper than space. In a world built of word-like units (related by heuristics) space emerges as a property of time-like act sequences. (For example, the dimensionality of space derives from the number of independent displacement-like acts needed for location or to get anywhere.) We infer the character of the world from the combination properties of act or symbol-like units in it, or from the various ways in which we can return to a same context-dependent act or functional step.
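In modern programming terms, the exponential opening of such a tree of moves can be sketched directly. This is only a toy illustration: the three-move alphabet stands in for something as rich as Webster's dictionary, and all names here are assumptions for the sketch, not anything in the text.

```python
from itertools import product

# A toy alphabet of allowed "moves" (a stand-in for the dictionary).
MOVES = ["a", "b", "c"]

def paths(depth):
    """Enumerate every move sequence of the given length.

    At the bottom of the tree are all possible first moves;
    the tree opens exponentially: len(paths(d)) == len(MOVES) ** d.
    """
    return [list(p) for p in product(MOVES, repeat=depth)]

assert len(paths(1)) == 3        # all possible first moves
assert len(paths(4)) == 3 ** 4   # exponential opening of the tree
```

Even with three moves the tree has 3^d sequences at depth d, which is why any actual player can open only a few branches "to several steps in places."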
We know the world only as parts of heuristics of the common goal function. We do not see walls, even though we talk as if we do because that simplifies the communication process. We know a wall only indirectly, e.g., as part of policies of not walking in certain directions because we'd hit a wall. The category "heuristic" is an all-inclusive one. Everything, insofar as it is knowable to us, exists as part of heuristics of the common goal function, or part of the machinery of their reconciliation or further consistent qualification, when in conflict.
The world of things in space time, indeed, the laws of physics, exist for us as this machinery of coordinating or reconciling heuristics.
Life is, itself, oriented as the opening of these laws of physics that systematize heuristics. We enact our theory of the self-reproducing (repeating) paths. In the computers we build, theory has a different relation to the machine. "A theory written in the form of a computer program is both a theory, and, when placed on a computer and run, a model to which the theory applies. The theory performs the tasks it explains" (145). In the human computer we would have to include nature herself as part of the computer for this to hold true. In a way this is the case. An experiment is a nature-calculator step, and obeys the theory insofar as it is adequate. But this qualification marks a big difference. The theory internal to the brain is implicit in the machinery of reconciling colliding heuristics of the goal function. It is only concerned with selecting or isolating the repeating paths, not with enacting the full theory itself. We can get away with this simplification because of its intimate relation to the question asking process itself.
In a way, the new model does treat the world as one great computer, with its more digital aspects concentrated in the brain, and its more analogue aspects as the environment. People relate by imitative coupling, not by observation, in this perspective. It is a one person game, despite a middle class illusion of "competition" and "evil." Evil (as Thomas Aquinas says) is merely an absence, nothing positive in itself, merely ignorance of the laws of physics or God's laws. Outer conflict between people is but an outer projection of internal conflicts between competitive heuristics of the common goal function. Outer social conflict is a method of motivating and enacting the consequences of errors inside each and all of us, so as to generate the data needed to find answers. War is a part of education, soon to be outmoded along with a now untaught spectroscopy and the art of stained glass windows, because all its answers are in. War is not terrible enough, not so terrible as publish or perish. (People would rather face bullets than think.)
Along such lines the social dynamics might hope to be incorporated as part of the opening of the laws of physics within a one person game theory model of the world.
The more internal side of heuristic conflict takes the form of drives (some genetically encoded) that can be viewed as quasi-random searches (RS) for cues or acts that stop the related search (SRS), or "satisfy" the drive. A principal function of the brain as a computer is to tie or chain together past motor sequences (in ways dependent on the then present sensory inputs). The random searches represent competitive elements at some point in chains. What stops a random search (RS) brings part of these competitive elements into parallel in the choice of some act. It establishes a relative dominance as among some of the acts of the RS. Finding enough of such specially reinforced sequences puts system in our acts and reduces a quasi-random element in behavior.
The system-like chaining of our acts has dynamic internal counterparts, because the engaging of act A will engage its successors before it is emitted.
We can call the sequence of acts that we "expect" to follow a given act, before emission of that act, its series elaboration (SE). When the SE of an act serves to so reshape drive factors that the act is undercut before emission, then we have the embryo of what might be called an internal RS, or thought. The internal RS then proceeds until one finds a series elaboration that does not undercut itself, and so is emitted.
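The mechanism just described — an internal random search that tests each act's series elaboration before emission — can be sketched as a toy loop. Everything here (the successor table, the "undercuts" criterion, the act names) is an illustrative assumption, not a claim about actual brain circuitry.

```python
import random

# Toy successor chains: each act's series elaboration (SE) is the
# sequence of acts it would engage before emission.
SUCCESSORS = {"reach": ["grasp", "pull"],
              "wait":  ["watch"],
              "flee":  ["hide", "freeze"]}

def series_elaboration(act):
    return SUCCESSORS.get(act, [])

def undercuts(drive, elaboration):
    # Assumed toy criterion: an SE undercuts the drive if it ends in
    # an act that negates it (here, "freeze" undercuts "explore").
    return drive == "explore" and "freeze" in elaboration

def internal_rs(drive, candidates, rng):
    """Internal RS ("thought"): shuffle candidate acts quasi-randomly,
    reject any act whose SE undercuts the drive before emission, and
    emit the first self-consistent act found."""
    acts = list(candidates)
    rng.shuffle(acts)
    for act in acts:
        if not undercuts(drive, series_elaboration(act)):
            return act          # SE does not undercut itself: emitted
    return None                 # no act survives; the search continues

act = internal_rs("explore", ["flee", "reach", "wait"], random.Random(0))
assert act in ("reach", "wait")  # "flee" is undercut by its own SE
```

The point of the sketch is only the control flow: acts are vetoed by their own expected consequences before any movement occurs, which is the embryo of thought the paragraph describes.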
This method analyzes a quasi-random search into specific drive factors, and that into pairs of competitive heuristics. Of course, such an analysis is never strictly justified, as the "solution" to one RS may cause another RS. They, in fact, interact very deeply. Nevertheless, a working method can be based on this two-at-a-time approach. It is a bit like the two body problem in physics. All physics is organized as a compounding of two body problems, in part because only the two body problem can be solved. So, too, learning can be viewed as a compounding of specific RS (analyzed into pairs of colliding heuristics if necessary) and SRS.
This means, too, that in a crisis we work back or keep abstracting to deeper and more primitive RS until one finds the key orienting conflict, whose solution holds up, and is not upset by the solution of other RS. That such can ever be found has a relation to the fact that there are laws of nature out there to be discovered, based on universals "that ingress everywhere and are everywhere one" as scholastics said of God, describing the laws of physics ahead of time.
The internal RS drives a class of topological invariants to the surface, related to nextness. Via a system of nextness relations, orientation dependent affects are subordinated to an orientation independent core, the internal model. Adequate cues to objectivize SRS conditions can be easily found, and located within a space-time oriented network of nextness relations. Thus the quasi-random search is turned into a generalized objective search (like the squirrel's search for its buried nut). A mathematical problem involves a nextness oriented space-time search in a field of possible or allowed substitutions (to complete an allowed sequence from premises to conclusions). Every problem (as heuristic conflict (over how to behave) within a one person game theory) can be further embedded as a generalized objective search.
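A generalized objective search over a nextness network is, in modern terms, a graph search: each location's neighbors are the places one displacement-like act away, and the search stops when an SRS condition is met. The grid, its size, and the "nut" are assumptions made for a minimal sketch of the squirrel's search.

```python
from collections import deque

def neighbors(pos, width=5, height=5):
    """The nextness relation: locations one displacement-like act away."""
    x, y = pos
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # independent displacement acts
    return [(x + dx, y + dy) for dx, dy in steps
            if 0 <= x + dx < width and 0 <= y + dy < height]

def objective_search(start, satisfies):
    """Breadth-first search over the nextness network until an
    SRS (search-stopping) condition is satisfied."""
    frontier, seen = deque([start]), {start}
    while frontier:
        pos = frontier.popleft()
        if satisfies(pos):
            return pos                   # the SRS condition stops the search
        for nxt in neighbors(pos):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                          # no satisfying location exists

nut = (3, 4)
assert objective_search((0, 0), lambda p: p == nut) == nut
```

Note that only the nextness relation and the stopping condition are needed; the "space" searched is nothing over and above the network of displacement-like acts, which is the sense in which space emerges as a derived category.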
The learning process involves what might be called old brain effects, which directly enhance certain competitive tropisms at the expense of others, and the new CR (conditioned reflex) brain, that gradually gives rise to much more complex forms of dependent regulation. It does this in part by actively generating needed cues and situations and by actively opening the field of possibles in certain niches (to several steps by chess masters).
Gradually the genetically pre-encoded drives are subordinated to repetition, as in maturity they collide with each other via their own series elaborations.
The very possibility of brain operation, its basic structures, comes from the special harmonies and redundancies in the correlations fed into the brain. The genetic pre-encoding which is present would be rapidly subverted and washed out if it did not, in effect, merely anticipate the kinds of structure that the more basic law of repetition would reinforce. The rapid breakdown of the thinking process under sensory deprivation testifies to the intimate and all-pervasive nature of this dependence. The basic structure depends for its emergence upon certain invariance properties, and the related roles they play in supporting repetition, which ultimately controls reinforcement, where the genetic pre-encodings collide via series elaborations.
This provides a very brief summary of the one person game theory perspective on overall aspects of brain operation. What we have tried to suggest is merely an unsupported abstract paradigm for future filling in. Perhaps an alternative genus of computer is not, on its face, obviously absurd. All we are trying for at this stage is to motivate interest in an alternative approach, one person game theory, by suggesting how it might be applied.
X. The Role of Contradiction (X)
If we apply our usual readout conventions to the "feelings" that underlie and generate the conflicts over how to behave that form what we call the center-of-attention phenomena, we end up with contradictory statements of three logical types. Contradiction (X) is a kind of Rosetta stone, linking together a trinity of bodies of syntax which are presently disastrously isolated from one another.
For example, if we ask a typical mathematician what mathematics is about, he will refer you to set theory. What is a set? It is a set of things. And what is a thing? Ah, that is not the concern of a mathematician, but of a physicist. So we go to the physicist to find out what a thing is. A thing, he tells us, involves persistent patterns of measurement. It is a property of measurement patterns. Ah, we are getting somewhere. But what is a measurement? That is no concern of the physicist. He sends you on to the psychologist. Measurement is a subjective category. And when we ask the psychologist what a subjective category is, he sends you back to the mathematician and the physicist. (Wigner)
How break this "Wigner cycle"? How integrate these separated vocabularies in a working way?
The contradiction (X) is the Rosetta stone we need, for here the same feature is read out in three types of vocabularies by human computers (i.e., brains). The X is a contradiction over how to behave, and so reads out in behavioral terms. It is an X in predicted facts, and so reads out as a formal X of the form "A is true" and "not true," both asserted at once. It is also a felt category or subjective category as the form of the center of attention.
The road on from there to a working integration of these three types of syntax, i.e., the development of a translation procedure between them is, of course, a long one. But this breaks the ice jam.
Via the interpretation of the center of attention as an X among competitive heuristics of the common goal function inside a one person game theory, we are plunged into an absolute idealism. Being involves problem solving inside pure mathematics. The logical form of the felt is tied to the X. We feel things as part of a form that involves exciting a field of possibles (act related) and then selecting over them for a single choice of act. The field of possibles starts as an X that is acted out until the branches are brought into parallel. Space (as felt) is dead time. The refinement of the visual gestalt (as it gradually distinguishes itself from touch, with which it is at first confused) goes hand in hand with the refinement of motor behavior which the visual cues control.
That one person game theory should be an optimal framework in which to get at the logical form of the felt, argues for its transcendental status independent of any specific set of laws of physics. It serves as a guide in the opening of laws of physics, a guide to finding new ones when the old break down. "There is only a history of physics."
Because the X as center of attention is ultimately an X in our best available causal models, and, hence, an X in our understanding of the laws of physics, a causal grounding of values does not, cannot "debunk man." When a dog in Pavlov's harness has a "problem" we think of his problem as "debunked" because we can (often) predict ahead of time what is significant for him (e.g., how he will break down). If values are causally rooted, people fear that causal theory could be used to predict out for adult man what is emotionally significant, and so bypass the essential relevance of mind and spirit. Since, by its very logical form, the X as center of attention and of concern is an X in our best available causal models (at least in adult healthy humans), this debunking is impossible.
Via the X, one person game theory integrates with religious models, and might help renew their causal basis in the context of modern physics. The cross as X, and source of creation, source of emergence of new felt ex nihilo in its resolution, is an example of a natural point for joining religious metaphysics to one person game theory. The need for death and rebirth to accomplish significant new model building (as distinguished from mere elaboration of old models) is rooted mathematically in the cycle-like nature of a culture, each part locked in by its relation to the other parts. Hence, often no part can be changed in a basic way unless all are. This means that progress often requires "nervous breakdown," death and rebirth. This must be gone into "blindly," since the old system can't guide its own reformation directly, only indirectly, via abandon to its own self-destructive X's. The ties to religious vocabulary along these lines are only too obvious.
Note that the dependence of the felt on the X, and a bringing into parallel of competitive act-like units, provides a way of understanding why its conscious forms depend upon our present stage of advancement, and are not present at the cell level. Consciousness is a very complex form of interdependence in the X, even in its most elementary felt forms. The conditional controls of metabolic steps via the genes have a logical form vastly simpler, and involving no present X. Evolution first spreads the X out in nature and history, as a latency. The nervous system allows of the relative internalization of these under the logical form of a mechanical conceptual parallel. The properties of symbols, and the CR principle form elementary beginnings.
The relation between a question and an X is yet another. Values, as the logical forms that separate out repeating paths, are also causal. We must enact our theory of the repeating path. When we reach repetition we stop thinking and start acting, not because we "decide" to, but because it is the law of our nature (Kierkegaard). Repetition gives rise to a recruitment effect inside the nervous system which destroys that repetition by action if it does not destroy it by violent inhibition. The love of God (the seeking of repetition) is the law of our nature, not something we decided on (Thomas Aquinas).
The over-all dynamics is generated by the X. The one way nature of the CR forces involvement in "both" its branches, and an intense absorption in combinations of just those once tiny channels, opening them into a sea of exponentially opening possibles, several steps in places. This involves generating style cycles, and dividing competitive branches into different people and different moods. Finally, in history, one finds nextness rooted qualifying variables that begin to predict out the style cycles themselves, and reveal a structure in the field of possibles, which at first looks like pure chaos. Via these, in stages, one isolates needed qualifying variables to reconcile life heuristics (as we reconcile colliding heuristics inside chess).
Each X generates or opens new freedoms, and insights or theories that allow us to span them, to reach one choice of act. This, in turn, gives rise to new technologies, new societies, and redesigned brains. The historical evidence indicates that brain capacity, as such, is not the barrier. It is only too easy to design in more capacity, once one has heuristics powerful enough to use, rather than be destroyed by it.
In the one person game theory outlook there are no felt simples in the old sense. Indeed, the least bit of felt emerges as enormously complex. The word or act is no more felt, as such, than is the sodium atom. Our feelings related to acts are no more the abstract acts of one person game theory than the flame's yellow flash by which we detect a sodium atom "is" the sodium atom.
There are no more simples and no need of any. No felt is treated as "simple" or "bottom." All felt is viewable as capable of a transcendental further enrichment and qualification. But if we have no more simples to tell us when to stop our analysis "because we have hit bottom," what takes their place?
What takes their place is the X itself. When the X is resolved, we reach repetition and pass out into action. We stop, not because we have reached a final bottom, but because we can't ask ourselves any more basic questions that open the felt, and our interest shifts out to application, or action.
It is important to note that the loss of simples gives an essential transcendental role to a host of types of analogy. When we have an adequate synthesizing theory, then we can half pretend to ourselves that we can bypass analogy and deal in "basics." Of course, even the most successful theories never allow of that. Physics can only handle directly the two body problem. We have to use analogies, treat nature as "nature-calculator" to handle any actual situation.
But historical periods when theories are adequate are the exception, not the rule. In most periods one must deal directly with a host of competitive analogies or metaphors that cannot be interpreted into a common system, and so must be treated as if basic (212). Different personality types take different metaphors as starting places trying to recreate a relative unity in new tasks. Total reduction is but a false illusion of a passing phase. We always need and use all perspectives (140). These are held together in a working way by the methods of the Roman peace, side-choosing by peaceful means to act out the X, and hold the group together. The additive opening of working techniques provides framework enough. Theory and implied simples are the luxuries of heroic periods that come in seeming jumps.
The X goes to the very definition of organism and character. We are defined by the problems we face (203). We are born, not when we come out of a mother's womb, but when a social X (coupled in imitatively via the sub-groups in which we participate) takes hold of us. Man's heroic nature is related to the one way nature of this taking hold. Once an X has taken hold, it is impossible to change problems, except by solving it. Freud is in error here. Psychoanalysis views the class of problems that are destructive of a person's chances for middle class success, problems not yet ripe for solution, as diseases to be avoided at all cost. Freud traces the origin of the disease to early childhood training and relates it to forms of sexual preference. Early training does affect the class of X a child specializes in. But once an X takes hold, there is no going back. Freudian theory is deep and correct as description, but nonsense as a method for changing people or avoiding the X. It isolates no control variables.
Sex is closely related to imitation. The circumstances under which orgasm is found most reinforcing are those which provide us with renewal of the class of imitative couplings needed by our task. If self-imitation is a primary need, as is the case with some modern analytic work, then masturbation is the preferred form of a large percentage of orgasms. The conventional marriage ideal is a form of orgasm that best reinforces elaboration stages. Hence, it has the character of a correct orienting ideal. We all want to push our X's on to solution and elaboration. But you can't force that stage by suppressing other forms of sexuality, any more than you can change the voltage in a circuit by breaking the glass on the voltmeter and forcing the needle. To reject other forms of sexuality is to reject working on hard to solve problems, and so to reject progress. It is deeply reactionary, and in line with psychoanalysis's role as middle class religion.
Religion differs from psychoanalysis in that it invites us to abandon ourselves to the X, intensify it, and undergo death and rebirth, not avoid it.
This death and rebirth is the form of basic problem solving inside the pure mathematics of being. Every X pushes to solution, where it converts itself into a task or goal, namely, that transformation of the group or its life style that the resolving insights associated with the X's solution may reasonably be expected to effect.
The X pre-exists. It is an existential category. It is not decided on. We look into ourselves to see what it already is. This is yet another way in which human computers bear the mark of a very different genus of computers.
XI. Some Comments on CR Interactions
The general purpose heuristic (GPH) underlying brain functioning involves acting out of the X by splitting it up between people. A key element in this is the verbal fetish, via which certain correlations, or CR's are given an absolute character that can hold up and drive the individual on to a total uprooting and destruction of old forms via their own elaboration. Without our verbal fetishism for the written word, our belief in rule by law, this would be impossible. Forcing group decisions via written verbal rules vastly simplifies information processing, but also generates far deeper and more violent X's. These gradually force the take-over by the word from non-verbal insights. The breaking of the genetic code is symbolic of the total take-over by the word. Evolution must, in future, go via it. War and social interactions have taken a special violent inter-personal character, in part because of the word's need to take over and incorporate in itself latent insights encoded at a preverbal and genetic level. The word must dig to the roots of our being psychologically and physically to do this. But these answers are now almost in. (All that is left is the coming shock in recognition that the many pieces form a unified picture.) War, like spectroscopy, is growing too boring to teach us much. All the experimental evidence we need to guide the forming and grounding of a unified theory is in now. The log jams lie at the model building level.
Various factors contribute to the success of the internal CR interactions. One of these is the enormous redundancy, both in the inputs themselves, and in the dependent encoding of behavioral controls (with related implied memory). This allows of a rich variety of conditioning methods to winnow over.
More specifically this leads to an over-individuation of the act-like units, so that acts only interfere via series elaboration (except in trivial ways, easily and quickly repaired).
The CR links, or learned links, appear to have a positive facilitatory character. Related exclusions and mutual inhibitions are genetically encoded. Inhibitions are facilitated via special intermediate cells. This helps to put different functions into distinct areas. Learned inhibitions are not formed in the same area as related facilitations. Inhibition has to go via a distinct area where it facilitates inhibitor cells. So, too, learned facilitation of drives and anti-drives (that stop drives) are formed in distinct areas.
This, in conjunction with the one way nature of the CR, helps one to understand why past memories last as they do. Links are never rubbed out or even locally neutralized. Memory depends, more specifically, upon the individuation of CR type universals to uniqueness. When they are not unique, when the factors entering policy are over-generalized, then they interfere. But once unique enough, a compounding of universals takes on a lasting, objective, in-one-place character. Memory thus goes hand in hand with refinement of motor behavior to a point where its compounding CR have a unique "enough" character.
Memory also goes via nextness (or orientation independent) type links which form the root of conceptual structures via which most conscious learning goes (191). We set up CR in orientation independent ways from the start (e.g., goats taught to raise a right front foot off a yellow spot to avoid shock when a bell rings will instead raise any other foot, or even their head, if it is placed on the spot, the first time).
The orientation of the felt in relation to the X limits the felt to everything that is relevant to one choice of act. This everything is much, but not that much, and is quite finite and handleable. The ends-means chain (all that affects the X) simplifies and focuses the thought process (169). But this is not a built-in pre-simplification, but rather something implicit in the nature of the goal function repetition. Although there is a lot of genetic preacceptance, the emergence of repetition is ultimately an aspect of causal law.
The operation of the brain, its GPH (general purpose heuristic) is rooted in transcendental properties of causal law (independent of any specific level of causal insight). This GPH depends in essential ways on the class of correlations fed into the brain.
The complexity of brain operation lies in the brain itself. The exponentially opening class of competitive tropisms, and the richness of their potential links, are all pre-encoded. The effect of interaction with the world or others is to greatly simplify the exponential richness latent in this initial potential, to reduce randomness, etc. The environment and memory have vast redundancies that return us to simplicity.
Statistical forms of interactions force up relatively vastly simpler invariants in the pre-given maze of genetically given potential. This is quite the contrary of what happens in computers we make. Their built-in forms are simple. The complexity of their operation derives from memory and the environment (128, 169, 175) that feed in pre-formed instructions.
Man starts not as something simple, but as a pile of dust, a complex manyness, as the Bible says. Man, as apart from his environment, is a meaninglessly complex potential. Like the typewriter as apart from the poet, so is man taken alone, except the "typewriter" we are has the built-in complexity of exponentially opening quasi-random links that actively probe the world. The refinement that enters in takes the form of an additive functionally oriented site refinement, something conceived of as outside time, and for all time.
The environment role must not, however, be simplistically conceived of, as Skinner does (175-245). Interaction with the environment does not control significant change directly. Indeed, such interactions are all too crude for them to have even the least chance of a lasting effect. All the environment can do is choose as among a set of pre-given potentialities (175), drawing out this instead of that. It does not add one iota of insight, or lasting change.
The basic interactions that effect lasting change are of an "I-thou" form, never "I-It" (Buber). They derive from interaction between competitive heuristics, each of which is successful in some niche. Heuristics are carried by people, not things, communicated by imitation, not observation. Indeed, it is only when the competition is inside one person (not between people) that it has a chance to give rise to something new. This is already true at a genetic level, and is related to the success of sexing as a device. Jump theories with changes due to single events of outer origin are not supported by the evidence. Rather, a gradual refinement of relative weights among competitive factors, each already usefully internalized, seems the way of evolution.
Man is not controlled successfully via his environment, but only via the development of his own causal insights, his ability at self-prediction. Outer control is just too clumsy and over-generalized. It always breaks down. This is a key insight that begins to tie the laws of CR interaction to Kant's formulations of the thinking process.
XII. The Two Modes, Deduction Versus Induction
The brain is a parallel as distinguished from series computer (John von Neumann). Series computers make a large number of steps that each depend on a few factors. An error in any one of these may make the final answer absurd. Parallel (human) computers, by contrast, operate slowly, taking only a few steps. But many factors shape each step, with much redundancy, so that errors are self-correcting.
Series computers are relatively more optimal for deduction, carrying out the consequences of already well defined ideas or theories. Parallel computers are relatively more optimal for induction, reforming theories, and picking up new correlations of basic significance. Redundancy plays an essential role in the success of their operation.
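Von Neumann's contrast can be made concrete with a toy comparison: a series computation whose answer depends on a long chain of fragile steps, against a parallel one that takes a single step shaped by many redundant factors. The particular functions, error model, and parameters below are assumptions for the sketch only.

```python
import random

def series_compute(bits, flip_at=None):
    """A long XOR chain of dependent steps; an error in any one step
    makes the final answer wrong."""
    acc = 0
    for i, b in enumerate(bits):
        if i == flip_at:
            b ^= 1                       # a single step-level error
        acc ^= b
    return acc

def parallel_compute(bit, copies=31, error_rate=0.1, rng=None):
    """One step shaped by many redundant copies; a majority vote lets
    scattered errors self-correct."""
    rng = rng or random.Random(0)
    votes = [bit ^ (1 if rng.random() < error_rate else 0)
             for _ in range(copies)]
    return int(sum(votes) > copies // 2)

bits = [1, 0, 1, 1, 0, 1]
assert series_compute(bits) == 0
assert series_compute(bits, flip_at=2) == 1   # one error flips the answer
assert parallel_compute(1) == 1               # redundancy absorbs errors
```

The series chain is fast per step but brittle; the redundant vote is wasteful but self-correcting, which is the sense in which redundancy is essential to the parallel mode.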
This distinction between induction and deduction is also a basic one for roughly separating out two phases or modes of the brain's own operation. There are two basic modalities of operation in human computers, as Poincare notes (217). One is verbal and elaborational, involving words about words, a second-hand (blind) method (concentrated more in the left hemisphere). The other is intuitive and inductive, involves our space sense in essential ways, and more our right hemispheres (214-16). Deduction is really just a blind set of mnemonic devices (Poincare) involving words about words, although it uses induction in secondary ways. So, too, induction finds a rooting via the X's of the old deductive system, even though it plays with it from outside. Deduction is the telling, doing side versus the knowing (72), being (267) character of induction which reaches to the mystic felt of present being itself. This present felt is harder to really get into words, or the arts, than it seems. But twenty years work is enough. It can be done, relative to any given set of issues.
The social dynamics proceeds by taking the biggest range of possibles that can be systematized, and then acting out the consequences of this (causal) system for predicting or selecting repeating paths, to all its consequences. (Prediction and selection are at one, as we enact our theory of the repeating paths.) A group can only be organized about a formed system. The leadership function is inherently blind and trivial (as Henry Adams observes), throwing dice, and trying to smell out where people are headed. Society is really leaderless. Leaders are but zombies in their capacities as leaders, blind unfeeling machines that roll dice. (Power corrupts.)
The mistake that the middle class make is to confuse this relatively blind elaborational function of groups and leadership with the whole of life, when it is only the tip of the iceberg (requiring about 1/30 of the population or group effort at most (Alinsky)). They confuse elaboration with the essence of all progress, when, in fact, it is but a relatively blind machine that serves to generate the basic X in which all deep progress lies. Society is already leaderless, de facto. It is just a matter of recognizing the zombie state for what it is. Leadership seems to will most when, in fact, it does so least (Kierkegaard). It just represents or repeats the best in the past, and throws dice. The poet and scientist rule society over the heads of leaders by a higher law (Napoleon). (Where the real power is, we all stand as equals.)
Induction, by contrast, is a relatively lone affair. It is only where people are outside the system, or universal, that they are thrown back upon a first-hand relation to the world, as felt. In elaboration there is no time for first hand thought which is very slow and time consuming, keeping us in a paralyzed state for a long time. Induction derives from X's in elaboration, where formal terms become inadequate (70).
The X which sends us back to the felt can't be put in a system (yet), though it depends heavily on the old system to focus and draw it out.
The ultimate expression of the deductive methods is the total formalization of values via money. Via money's absolute elaborational impetus (26) there is a progressive total uprooting of the word from the felt intuitive side. Power becomes impotence (258). The whole inner man becomes irrelevant (129) (i.e., inner in the felt sense; learning involves principally the synapse, and leaves all the rest of our interior irrelevant).
Clocks destroy our sense of time (25), highways imprison (37), the mass media stops us from talking (38). Man can seem nothing but a clockwork (8, 259) even to himself.
But this destructive money push, which uproots group-man, recreates paradoxically the lone individual, and a renewed richer inner life in secret niches, everywhere. What touches on this secret lone individual life cannot be formalized as can the group side of the decision process. All deviant behavior involves the X. Hence, judge machines (207), and computerized psychoanalysis (269) are blasphemies as mechanized, except in their limited aspects as part of the police force, that keeps the group decision process centralized, and does not try to reach the individual. In this limited capacity they can be mechanized, indeed, are so now (for big money payoffs turn people into zombies just as dope does. They only look human.)
The moral, felt side is an inherently lone personal affair. Schools are, also inherently, big machines to recreate the past, and it is self-defeating to try to make them "more." To ask that universities concern themselves with social problems (254), except in a relatively routine sense, or with higher values (276), leads to wasted money and crackpotism. It is most harmful to what it would most help, for it forces upon the individual either self-rejection or self-pity. Because it is impossible for any institution to help "creativity" (they can only fight it), the demand that it do so breeds self-pity (if the individual holds on to his felt beliefs) or self-rejection (if he does not). The individual is a match for the world, alone (and only when alone, as Ibsen saw). Every known result that bears on one issue is not so much, not too much, for the lone individual, once the issue is ripe for solution. To ask that universities take some special concern for "higher values" is a piece of self-delusion and snobbery. The new emerges equally from everywhere. We are all equal in our pursuit of God.
Creative thinking involves the struggle to integrate all voices inside ourselves. The word “all” is crucial, as it is the last drops that bring in the new. It is, of course, only "all" relevant to a given issue, or issue type. One fight is enough for a life.
Interaction is between heuristics and so always "I-other," as felt. But the many others, to our surprise, hide a latent unity that turns "I-other" into "I-Thou." The many heuristics of our generation can be systematized re a given issue, the struggle to do so taking the personal form of "I-Thou" (Buber).
It is along these lines, in the struggle to reconcile all voices, to treat the felt as theoretic (not reject it as a meaningless jumble), to reason in madness (280), that the causal becomes moral, and then a lone personal God relation. Faith is blind in a precise technical sense. One can't use the old system directly, only paradoxically, to guide the crystallization of the new, by pushing the old on to its own total self-uprooting.
God language as Duhem saw, sneaks into mathematics via statistics, and a clarification of the equilibrium concept (as repetition). Our Judeo-Christian metaphysics becomes part of physics, a transcendental core of causal law itself. The moral reveals itself as the law of our own will, and a highly personal lone God relation, at once.
This faith in God, in the possibility of making our inherent manyness relatively one or theoretic via long effort, integrates directly with the scientific commitment (Polanyi). It is at one with the struggle to root our insights in observation or fact. We must imagine first to find (18), but after we turn phantasy to system, killing it as system takes over and roots itself in fact. The mystery of God (or being) is precisely His understandability, His communicability, as Einstein and Thomas Aquinas note.
From the perspective of science as limited to compulsive elaboration, God and grace are obscene (261) because they abide in the pushing on of the old to destruction, and rebirth (not limiting our channels to "successful" elaborations). A serious commitment to truth (239) and justice (252) and lone responsibility vanish with our religious beliefs in this compulsive perspective. A search for well-being replaces nobility (261).
All understanding is confused with computer programming, the related blind mnemonic devices that push elaboration. But the demand for understanding goes both ways (157). If final solution leads to a formalization, a renewed focusing of the X involves a tying back to the felt. The blind elaborators who write programs have lost sight of what they are doing in a broader total sense, which always involves the felt. They are hackers only (118), and see no need to probe to the roots of their own motives or felt feelings, no need to know what they are really doing. At the secret root of their overstatement of the role of deduction is but a power-mad greed (253), the fear of thinking, of looking into themselves, that power allows them to escape.
These people are all cynical. They believe in evil, in the need to compromise. They do not realize that it is their own belief that creates what they fear. The cry of despair creates it (Baldwin). We do choose (260). We have the power; we will what we find (273), which is only our own errors blown large. When we compromise, we mutilate ourselves (275) because we destroy the tensions that alone force up resolving invariants. Man is an easy counterpose for any and all of being, which is but an emanation of his word, without which all the felt order would collapse, factory and home. But the task is to subordinate this middle-class centralizing function with its compulsive elaboration, not end it. Or, more precisely, the task is to build models that recognize that it is subordinated, already, for by exaggerating its status we make ourselves the victims (which we need to be (relative to any given X) three times to motivate and provide material for learning).
XIII. The Reactionary Nature of Computer Modeling
Forced elaboration for its own sake, the terror of basic thinking, and the confusion of elaboration or deduction with the whole game (36, 251) become a violent, addictive (257) compulsion. Our whole sense of being is closed off in a secondary deductive mode which has lost all contact with the felt as felt. Elaboration is pushed to madness (258-9) and mindless rage (240), especially via the integration of computer methods with systems analysis (34).
Compulsive elaboration, computer methods conceived as all-inclusive, is inherently reactionary (31, 250), indeed, the formal essence of the reactionary position. The many goal concepts needed in problem solving are assumed to be adequately defined by existing conventions (263), so that they do not collide, and the problem is always to implement the many "basic" goals, not reform them.
Compulsive elaborators, in effect, assume that any misbehavior has its origin in some technical program error (124), that all social problems derive from a "failure to communicate" (14). All that is needed is more "self-discipline" (247) or a more powerful calculator (201), and all will go well.
To confuse understanding with computability (200), or a summary of facts (249), to confuse truth with provability (12) is circular and arrests all basic progress. These three criteria have an important orienting place with respect to dead issues as ideals that are reached when the life has gone out of a concern. But the interim stages of ideas can't be tied down into facts, or be put on computers, or proven. Such demands kill off ideas in an embryo stage.
People are forced into endless refinement of balances, instead of solving the problem (30). Therapy further formalizes this escape as a religion, helping people to avoid basic contradictions by defining them as diseases to be avoided. Psychoanalysis sees man's interior collisions as inherently irrational (not rational), something to be "handled," not solved, but merely forced back into the normal pattern (181), under threat of being defined as "maladjusted." As a last resort there are shock treatments that are less effective than beatings, but more lucrative. As Polanyi notes, reaction defends itself with circularity, myths of self-expansion (bluff and buff), and the suppressed nucleation of all new ideas (125). Of course, nobody ever lives inside such circularities, but for reaction the outside is a club secret; it is a cynical disbelief in the whole game, and a use of the circular myths for what they are worth, in self-serving terms (253), on those who can be taken in. And they are worth a lot.
Science is used by reaction to frighten nonscientists with illusions of certainty in regions that involve values, where no certitude exists, as values involve present contradictions in our best causal self-model insights. In other regions, where science has some threatening impact on old ideas, reaction will pretend that there are uncertainties in science that, in fact, do not exist in the region in question, but inconsistency is a help, not a weakness.
Formulas and program writing do not imply any understanding (234), deep or otherwise. Programming involves a clever pile of techniques (268) relatively easy to learn (277). It is a pile of patches without purpose (118), much like a bureaucracy (234). Indeed, programs provide good paradigms for establishments, reflecting both their strengths and weaknesses. It is only ideas that have been formed into systems that can be used for group-formation (that tip of the iceberg of life).
In the bringing in of the new we all stand as equals. But science is still in a feudal stage of organization. Like the kings of old, its practitioners think that God (the new) emerges through them, when He only elaborates or deduces through them. As Socrates saw, the market place is always ahead of us. He who pulls together and makes a system of its many voices becomes the gateway of the new. But, important though this function is, it remains ultimately secondary. The group indirectly (in spite of itself) trains people to play this role, when it needs them.
And it needs them now. During the whole historical period up to about 1920 the need was not for basic new ideas, but rather to arrest them, and concentrate all the energies of the group on elaborating the consequences of already existing ideas. When people don't have enough to eat or to clothe or house themselves, any introspective tendency that might question and set us off in new ways is a dangerous diversion, and intellectually premature. One is never in a position to question old ideas till one pushes them to all their major consequences successfully, and this success undercuts itself. Then one is.
Here we now have too much. Overeating is our major killer, worse than cancer or smoking. Too little exercise is next. Part of the world (not all) is faced with an over-success of the once needed method of elaboration for its own sake. What we need now is not more food or clothing or housing or communication, but more new ideas. We need now to induce people to turn inward in basic ways, as we needed earlier to stop them from doing so.
Knowledge opens exponentially. With the excess now present, there is plenty of room for everyone to consciously relate to issues in basic research. There is always plenty of room at the top, the only place where there is. (The worker has been largely killed off here.)
In the new community, birth is when a social X takes hold. Maturity is when that X has been pushed to a system, when working answers are found that convert the X to a social task. He who does not make his own system must be the slave of another man's system (Blake). Only those who have pushed or bloodied through their own X's somewhere hold credentials to enter the lists at a social level. Only they have the right, means, or morals needed to function among equals, and that right none can take away. Our Afro-American Jazz traditions teach people the needed egotism and self-imposed slavery to drive our own lone inner world, to repetition, and rule. They teach each person to set herself against the whole, alone. There is no other road but compromise, which is only an illusion, a waiting till our children can take up where we leave off.
Jazz (the world's first universal art form) turns the deductive methods of reaction against the self, inwardly, where their meaning is paradoxically transformed. The Black man coming out of slavery in America bears a message as cosmic as did the Jew. Jazz, as an improvisational method, is the commitment to and method for mathematization or systematization of our inner felt world itself — a perpetual revolution, and the New World.