This text is reproduced solely for the limited academic use of students in Webster University BUSN 6150.

Numbers in brackets indicate the start of a page in the original text.



Roszak, Theodore. The Cult of Information: The Folklore of Computers and the True Art of Thinking. New York: Pantheon Books, 1986. ix-xii, 3-20, 87-115, 210-220.



The little boy in the fairy tale who blurted out the embarrassing truth that the emperor was wearing no clothes did not necessarily mean to say that the emperor deserved no respect at all. The poor man may have had any number of redeeming qualities. In his vanity, he had simply succumbed to the appeal of an impossible grandeur. His worst failing was that he allowed a few opportunistic culprits to play upon his gullibility and that of his subjects.

This critique of the computers in our lives, and especially in our schools, has much the same limited scope. It is not part of my purpose to dismiss the computer as worthless or malevolent. I would hardly be in a position to draw that conclusion. The manuscript for this book was typed on a word processor; at numerous points, the research for the text made extensive use of electronic data bases. I approach this study with a healthy respect for the many helpful things computers can do, and not from a position of doctrinaire technophobia. I do, however, want to suggest that the computer, like the too-susceptible emperor, has been overdressed in fabulous claims. Further, I believe these claims have been deliberately propagated by elements in our society that are making some of the most morally questionable uses of computer power. The glowing promises with which they have surrounded that power need to be challenged if the computer is not to be delivered into the wrong hands.

As this should make clear, my interest in these pages is not in the technology of computers, but in their folklore: the images of power, the illusions of well-being, the fantasies and wishful thinking that have grown up around the machine. Primarily, my target is the concept to which the technology has become inextricably linked in the public mind: information. Information has taken on the quality of that impalpable, invisible, but plaudit-winning silk from which the emperor’s ethereal gown was supposedly spun. The word has received [x] ambitious, global definitions that make it all good things to all people. Words that come to mean everything may finally mean nothing; yet their very emptiness may allow them to be filled with a mesmerizing glamour. The loose but exuberant talk we hear on all sides these days about “the information economy,” “the information society,” is coming to have exactly that function. These often-repeated catchphrases and clichés are the mumbo jumbo of a widespread public cult. Like all cults, this one also has the intention of enlisting mindless allegiance and acquiescence. People who have no clear idea what they mean by information or why they should want so much of it are nonetheless prepared to believe that we live in an Information Age, which makes every computer around us what the relics of the True Cross were in the Age of Faith: emblems of salvation.

Information has had a remarkable rags-to-riches career in the public vocabulary over the past forty years. It was surely among the least likely candidates to achieve the exalted status of a god-word, but so it has become, and not by accident. Beginning with its esoteric redefinition by the information theorists during World War II, it has come to be connected with a historic transition in our economic life, one which unites major corporate interests, the government, the scientific establishment, and at last draws in the persuasive rhetoric of advertisers and merchandisers. If only as a unifying theme that holds so many powerful social forces together, the concept would be worth critical attention. But the Information Age has now entered the educational curriculum in an aggressive and particularly insidious way which could distort the meaning of thought itself. That is the special concern of this study.

Two distinct elements come together in the computer: the ability to store information in vast amounts, and the ability to process that information in obedience to strict logical procedures. Each of these will be taken up in turn in Chapters 5 and 6 and explored for its relationship to thought. There we will see how the cult of information fixes upon one or the other of these elements (sometimes both) and construes its intellectual value. Because the ability to store data somewhat corresponds to what we call memory in human beings, and because the ability to follow logical procedures somewhat corresponds to what we call reasoning in human beings, many members of the cult have concluded that what computers do somewhat corresponds to what we call thinking. It takes no great effort to persuade [xi] the general public of that conclusion since computers process data very fast in small spaces well below the level of visibility; they do not look like other machines when they are at work. They seem to be running along as smoothly and silently as the brain does when it remembers and reasons and thinks.

On the other hand, those who design and build computers know exactly how the machines are working down in the hidden depths of their semiconductors. Computers can be taken apart, scrutinized, and put back together. Their activities can be tracked, analyzed, measured, and thus clearly understood—which is far from possible with the brain. This gives rise to the tempting assumption on the part of the builders and designers that computers can tell us something about brains, indeed, that the computer can serve as a model of the mind, which then comes to be seen as some manner of information processing machine, and possibly not as good at the job as the machine.

The burden of my argument is to insist that there is a vital distinction between what machines do when they process information and what minds do when they think. At a time when computers are being intruded massively upon the schools, that distinction needs to be kept plainly in view by teachers and students alike. But thanks to the cult-like mystique that has come to surround the computer, the line that divides mind from machine is being blurred. Accordingly, the powers of reason and imagination which the schools exist to celebrate and strengthen are in danger of being diluted with low-grade mechanical counterfeits.

If we wish to reclaim the true art of thinking from this crippling confusion, we must begin by cutting our way through an undergrowth of advertising hype, media fictions, and commercial propaganda. But having done that much to clear the ground, we come upon the hard philosophical core of the cult of information, which is as much the creation of the academies and laboratories as of the marketplace. Gifted minds in the field of computer science have joined the cult for reasons of power and profit. Because the hucksters have enlisted so many scientists in their cause, there are tough intellectual questions as well as political interests that need to be examined if we are to understand the full influence of the computer in our society. In a very real sense, the powers and purposes of the human mind are at issue. If the educators are also finally swept into the cult, we may see the rising generation of students seriously hampered in [xii] its capacity to think through the social and ethical questions that confront us as we pass through the latest stage of the ongoing industrial revolution.

The so-called information economy may not be what its major boosters would have us believe. It is not the futuristic utopia so long prefigured by science fiction. It is, however, a significant and exciting transition in our industrial history. No technology has ever unfolded its potentialities as swiftly as computers and telecommunications are doing. It is understandable that those of us who are witnessing this whirlwind transformation should find ourselves dizzied by the rush of innovation, the sudden influx of new technical powers. But we have seen too many technologies of the past go wrong to let our critical attention be misdirected by the computer enthusiasts. Information technology has the obvious capacity to concentrate political power, to create new forms of social obfuscation and domination. The less prepared we feel to question the uses to which it is put, the more certain we are to suffer those liabilities.

Ultimately, this book is as much about the art of thinking as it is about the politics and technology of information. There is an obvious humanist agenda running through the critique. I work from the assumption that the mind—and not only in the form of human intelligence—is as close to being a wonder of nature as any miracle revered by the religions of the world. To reflect on the powers of the mind, to probe its secrets, these are among the time-honored pursuits of philosophy. It is quite another matter, however, to teach children and tell the public that the secrets have all been revealed and the powers harnessed—and to offer a collection of semiconductors in a metal box as proof. Measured against that claim, even the most ingenious computer is bound to look ludicrously inadequate in the eyes of thoughtful people—more of a joke than an achievement. As critical as this book may be at many points in challenging the status of the computer in our society, it includes among its purposes that of saving this remarkable invention from the inordinate claims that its enthusiasts are making for it. Unburdened of vainglorious ambition, dressed in more modest but palpable working clothes, the computer, like the emperor in the fairy tale, may yet become a reasonably valuable public servant.







When I was growing up in the years just before World War II, information was nothing to get excited about. As an intellectual category, it held a humble and marginal status. Few people would have conceived of it as the subject of a “theory” or a “science”; it was not associated with an advanced technology that lent it glamour as well as extravagant financial value. Probably the most common public use of the word was as part of the phrase “Information, please.” That was how you asked the operator for telephone numbers before we had 411 to dial. There was also, through the 1930s and 1940s, a popular radio program by that name which challenged listeners to stump a panel of experts by sending in unlikely questions about assorted trivia. Who was the shortest president of the United States? What grand opera contains the longest duet? What mammal reproduces by laying eggs?

That was the way most people thought about information in those days: disjointed matters of fact that came in discrete little bundles. Sometimes what was in the bundles was surprising, sometimes amusing, sometimes helpful. Most often it took the form of a number, name, date, place, event, or measurement that answered a specific question beginning with who, what, when, where, how much. Such matters got talked about in ordinary words; they did not require esoteric mathematical formulations or a special technical [4] vocabulary. Occasionally information might be urgently important—like knowing where to press to stop the bleeding—but it was not regarded as something for which there was an insatiable public need. Certainly nobody would have credited it with the status it has acquired in our day—that of a billion-dollar industrial commodity that we should want to see produced in limitless quantities.

Of course, everybody knew there were certain businesses and professions which needed to keep lots of files filled with information. There were the accountants, the lawyers, the engineers. The standard white collar occupations—banking, insurance, brokerage houses, real estate—were characterized by rooms filled with olive-drab filing cabinets and patrolled by busy platoons of file clerks. Above all, there was the government, which, as census taker, tax collector, law enforcer, had always been the record keeper par excellence since the earliest days of civilization. Steadily, since the beginning of the nineteenth century, the governments of the industrially advanced societies had found themselves being drawn into ever more administrative responsibilities, until the task of minding official data threatened to become an end in itself. Duties like supervising the economy, keeping track of the work force, handing out the dole, allocating jobs, revenues, resources, were taking up more and more of the attention of political leadership in the urban industrial nations. For some early social scientists like Max Weber, this expanding paper reality of social statistics represented one of the worst vices of modern society: the bureaucratization of life, the conversion of experience into numerical abstractions.

By and large, the data processing responsibility of all these professions, public and private, was more bemoaned than celebrated. It was seen as a dispiriting necessity that could be left to low-status, usually poorly skilled office help. The familiar image of the office worker that we find in the stories of Dickens and Gogol is that of pale, pinch-faced scribes shuffling through overflowing ledgers, soulless statisticians and actuaries totaling up endless columns of figures, undernourished office clerks digging through dusty files to find an elusive memo. These were the people at the bottom of the bureaucratic ant heap. Herman Melville caught something of the general perception of these unfortunates in his famous tale of “Bartleby the Scrivener,” the neat and efficient clerk whose relentlessly dispiriting toil finally turns him into a zombie.

[5] The image of data keepers got no brighter even when their occupation passed beyond the pen and pencil stage and finally entered the machine age. It was in order to save time and office space for the government and the white-collar industries that business machines came into existence during the early years of this century. The key punch, the comptometer, the collator, the addressograph—all these were information processors. But nobody would have seen them as anything more than ingenious sorting and counting contraptions, of about as much intellectual interest as the air brake or the dry cell battery. Their inventors are hardly remembered; the companies that manufactured them were of no great weight in our industrial economy; those who operated them remained low-level clerical help. For the most part, the data minders of the economy were “office girls” who might have been trained in high school or at business college and who toiled at their monotonous jobs without hope of promotion. If anything, the work they did was still usually seen by more humanistic sensibilities as a sorry example of the ongoing massification of modern life.

In Elmer Rice’s bitter Broadway satire The Adding Machine (1923), the protagonist is an office clerk aptly named Mr. Zero. He is a pathetic nonentity, a “poor, spineless, brainless boob” who is lost in a wasteland of filing cabinets. At the end of the play, he is offered a “superb, super-hyper-adding machine”; it is the most spectacular business machine that can be imagined. Even so, the play finishes by identifying Mr. Zero as a form of life lower and less useful than a serf. He is “a slave to a contraption of steel and iron,” and the work he does is portrayed as the epitome of dehumanization. At the hands of Mr. Zero and his kind, people are reduced to statistical phantoms; yet those who perform the deed possess neither power nor status. They are themselves mere ciphers in the system.

In my youth, I had a taste of this dingy subservience. In the early 1950s, I worked as a file clerk for a major insurance company whose windowless basement was a honeycombed cavern of coffin-black filing cases and bound records shelved to the ceiling. Along with a score of lads fresh out of high school, I ran interoffice mailers and bulging sheaves of memos around the building from department to department. We were treated like so many slaves. From time to time our supervisor, trying to boost our flagging morale, would remind us that we were the life’s blood of the company. Without us, even [6] the top executives could not make a move. But we knew we were the lowest of the low. The work was a fatiguing bore, and we got paid the flat minimum wage. None of us stuck with the job longer than we had to.


The best known relic of Mr. Zero’s era, the paleolithic period of the early business machines, was the Hollerith punchcard, which dates back to the 1890s. Eventually, it would become an emblem of human alienation in an increasingly bureaucratized world. Somewhere in the early 1960s, its familiar injunction would be elaborated into a popular appeal for human understanding: “I am a human being. Do not fold, spindle, or mutilate.”

But by the time that plea was voiced, the punchcard was all but obsolete, replaced by far superior means of tracking data. At the hands of innovative firms like Sperry-Rand, Control Data, and Digital Equipment Corporation (IBM was actually quite laggardly in the field until the early 1960s), the business machine was undergoing an unexpected and rapid evolution. Spurred along by military necessity during World War II and afterward by the needs of the Census Bureau, it was maturing in the direction of becoming an electrical filing device that assigned a numerical address to the data it held and could then perform a variety of rapid calculations and transformations with those data. And that, in its most rudimentary form, is a computer: a device that remembers what it counts, counts what it remembers, and retrieves whatever it has filed away at the touch of a button. The woeful young women who once tended the cumbersome key punch in the back office would surely have been amazed to know that someday there would be “information scientists” who regarded their clanking and clacking machines as the distant ancestors of a form of mechanized intelligence possibly superior to the human mind.

The word computer entered the public vocabulary in the 1950s, when the most advanced models of the device were still room-sized dinosaurs that burned enough electricity to present a serious cooling problem. The first computer to enjoy a significant [7] reputation was UNIVAC, the brainchild of John Mauchly and J. Presper Eckert, with important contributions from the famous mathematician John von Neumann.[1] UNIVAC was the first commercially produced stored-program computer; it was based on military research done at the University of Pennsylvania during the war. Its later development was helped along by contracts from the National Bureau of Standards and Prudential Insurance; finally it was bought by Remington Rand in the 1950s for a variety of data services. But UNIVAC’s public debut was little more than a media gimmick. The machine was loaned to CBS television to make polling predictions in the 1952 elections. This number-crunching behemoth (it contained 5,000 vacuum tubes, but used a new, compact magnetic tape system rather than punchcards to store data) was programmed to analyze voting statistics for CBS in key districts and to compare them with early returns on election night. From these comparisons, UNIVAC quickly projected which candidate was most likely to win.

There is an amusing anecdote about UNIVAC’s introduction to the American public that evening. At CBS election headquarters, the esoteric machine, which the anxious electronic engineers were coddling like a spoiled child, was regarded as a mere sideshow attraction. So when UNIVAC, drawing upon a mere 5-7 percent of the popular vote, began projecting a landslide for Dwight Eisenhower, the CBS experts refused to report its prediction. The worried technicians then agreed to adjust the machine to keep it in line with the network pundits. Still UNIVAC insisted on an Eisenhower sweep, even in the solid Democratic South. Finally, when its predictions proved accurate, the experts conceded, publicly confessing that UNIVAC had indeed outguessed them and that the machine’s apparent inconsistencies that night were due to human interference. UNIVAC had predicted an electoral vote for Eisenhower of 438; he finished with 442, within 1 percent of UNIVAC’s startling prediction. This was an impressive display of what an advanced data processor could do, so impressive that for a short period the brand name UNIVAC bid fair to displace the generic name computer.

White-collar work was one of the last occupations to enter the machine age. Well after the mines, the factories, the farms had been mechanized, office workers were still scribbling away with pen and pencil, hand-filing their papers in cabinets and loose-leaf binders. Even the typewriter (which appeared in the 1880s and did so much to bring a new generation of women workers into the offices) was a [8] low-level manual tool, the technological equivalent of the long-defunct hand-loom. Until well into the twentieth century, one looks in vain in magazines for advertisements that feature any sort of data processing equipment, let alone for books and articles celebrating their inventors and manufacturers. Compare this with the situation today, when the slickest, most futuristic ads in print and on television are those touting computers for the office, and you have a striking measure of how information has risen in status. The technology of the humble data keepers has finally outmatched the rolling mills, the dynamos, the railroads.

“Today,” a leading telecommunications firm announces in an imposing full-page advertisement, “information is the most valuable commodity in business. Any business.” In times past, one would have thought of information as more of a lubricant that helped get commodities produced, or perhaps the upshot of a service like a doctor’s diagnosis or a lawyer’s legal opinion. And its value would not be constant (let alone universally or invariably supreme) but would vary with its accuracy and applications. But these days information is freely called product, resource, capital, currency. There is no limit to how high the rhetoric may be aimed. In a 1984 TV spot commercial, Frank Herbert, author of Dune, a work which at once invokes the vistas of science fiction, intones a small hymn to technological progress for Pacific Telephone’s Infosystems. “The real revolution of the Information Age,” he announces, “will not be one of hardware, but of the human spirit. It will be the chance to be more than human.” Seemingly, a promise of godlike possibilities at hand. The product he is pitching is simply another electronic office system, one of several on the market. Yet, as the extravagant language suggests, the transition to the computer has come to be seen as more than a matter of new machines replacing old. The new machines have the look of something like an evolutionary leap forward in the history of industrialism. They are a new species of technology, one which has seemed from its first appearance to be flirting with the mysteries of the mind itself.


In my own life, there was a book that did more than UNIVAC to revise my understanding of information and the machinery that manipulated it. In 1950 the mathematician Norbert Wiener wrote a pioneering and widely read study called The Human Use of Human Beings, a popularized version of his classic 1948 work Cybernetics.[2] For the general reading public, this engaging and provocative little book landmarked the appearance and high promise of “cybernation,” the word Wiener had coined for the new automated technology in which he could discern the lineaments of a second industrial revolution. In the pages of his study, the computer was still an exotic device without a fixed name or clear image; he quaintly refers to it as “an ultra-rapid computing machine.” But even in its then primitive state, that machine figured importantly in what was for Wiener one of the key aspects of cybernation: “feedback,” the ability of a machine to use the results of its own performance as self-regulating information and so to adjust itself as part of an ongoing process.

Wiener saw feedback as far more than a clever mechanical trick; he regarded it as an essential characteristic of mind and of life. All living things practice some form of feedback as they adapt to their environment; here then was a new generation of machines reaching out toward the status of a sentient animal, and so promising to take over kinds of work that only human intelligence had so far been able to master. And not only work, but certain kinds of play as well. Wiener was much impressed by the research then under way to build chess-playing machines; this served as further evidence that machines would soon be able to process data in ways that approach the complexity of human intelligence. “To live effectively,” he concluded, “is to live with adequate information. Thus, communication and control belong to the essence of man’s inner life, even as they belong to his life in society.”
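The feedback principle Wiener described can be sketched in a few lines of modern code. The thermostat below is an editorial illustration, not an example from Wiener's own writing: the loop measures the result of its past action (the room temperature), compares it with a target, and feeds the error back into its next adjustment.

```python
def thermostat(target, temp, steps=20, gain=0.3):
    """A minimal feedback loop: each cycle, the controller observes
    the outcome of its own previous action and corrects itself in
    proportion to the remaining error."""
    history = []
    for _ in range(steps):
        error = target - temp      # measured result of past performance
        temp += gain * error       # self-regulating adjustment
        history.append(round(temp, 2))
    return history

# Starting ten degrees below the setpoint, the readings converge
# toward the target without any outside intervention.
readings = thermostat(target=20.0, temp=10.0)
```

Each pass shrinks the error by a constant factor, which is why the process settles rather than oscillating: the machine's output becomes its own input, the essence of what Wiener meant by self-regulation.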

Wiener was claiming nothing less than that, in perfecting feedback and the means of rapid data manipulation, the science of cybernetics was gaining a deeper understanding of life itself as being, at its core, the processing of information. “It is my thesis,” he wrote, “that the physical functioning of the living individual and the oper-[10] ation of some of the new communications machines are precisely parallel in their analogous attempts to control entropy through feedback.”

Some five years after Wiener’s book was published, a new field of study based on his thesis announced its presence in the universities, an intellectual hybrid of philosophy, linguistics, mathematics, and electrical engineering. It was called artificial intelligence, or AI. The key assumption of AI was clear from the outset; in the words of two of the discipline’s founding fathers, Allen Newell and Herbert Simon, “the programmed computer and human problem solver are both species belonging to the genus ‘Information Processing System.’”[3]

A few years further along (1958), and Newell and Simon were pitching their hopes sky high:

There are now in the world machines that think, that learn and create. Moreover, their ability to do these things is going to increase rapidly until—in the visible future—the range of problems they can handle will be co-extensive with the range to which the human mind has been applied.[4]

At the time they made the prediction, computers were still struggling to play a creditable game of checkers. But Simon was certain “that within ten years a digital computer will be the world’s chess champion.”[5]

Wiener himself may or may not have agreed with the glowing predictions that flowed from the new study of artificial intelligence, but he surely did not endorse its optimism. On the contrary, he regarded information technology as a threat to short-term social stability, and possibly as a permanent disaster. Having invented cybernetics, he intended to function as its conscience. The Human Use of Human Beings, as the phrase itself suggests, was written to raise public discussion of the new technology to a higher level of ethical awareness. Automated machines, Wiener observed, would take over not only assembly line routine, but office routine as well. Cybernetic machinery “plays no favorites between manual labor and white collar labor.” If left wholly in the control of short-sighted, profit-maximizing industrialists, it might well “produce an unemployment situation, in comparison with which . . . even the depression of the thirties will seem a pleasant joke.”

[11] Two years after Wiener issued that warning, the first cybernetic anti-utopia was written. In Player Piano, Kurt Vonnegut, Jr., who had been working in the public relations department of General Electric, one of the companies most aggressively interested in automation, imagines a world of intelligent machines where there is “production with almost no manpower.” Even the barbers have been displaced by haircutting machines. The result is a technocratic despotism wholly controlled by information technicians and corporate managers. The book raises the issue whether technology should be allowed to do all that it can do, especially when its powers extend to the crafts and skills which give purpose to people’s lives. The machines are slaves, Vonnegut’s rebellious engineer-hero insists. True, they make life easier in many ways; but they also compete with people. And “anybody that competes with slaves becomes a slave.” As Vonnegut observes, “Norbert Wiener, a mathematician, said all that way back in the nineteen-forties.”


In the same year Wiener produced his study Cybernetics, Claude Shannon of Bell Laboratories published his ground-breaking paper, “A Mathematical Theory of Communication,” which established the discipline of information theory, the science of messages. Shannon’s work is universally honored as one of the major intellectual achievements of the century. It is also the work most responsible for revolutionizing the way scientists and technicians have come to wield the word information in our time. In the past, the word had always denoted a sensible statement that conveyed a recognizable, verbal meaning, usually what we would call a fact. But now, Shannon gave the word a special technical definition that divorced it from its common-sense usage. In his theory, information is no longer connected with the semantic content of statements. Rather, information comes to be a purely quantitative measure of communicative exchanges, especially as these take place through some mechanical channel which requires the message to be encoded and then decoded, say, into electronic impulses. Most people would have assumed that information had to do with what happened in the understanding of a [12] speaker and a listener in the course of a conversation. Shannon, working out of Bell Labs, was much more interested in what might be happening in the telephone wire that ran between speaker and listener. In his paper, the fundamental concepts of information theory—noise, redundancy, entropy—are rounded up into a systematic mathematical presentation. Here, too, the “bit,” the binary digit basic to all data processing, first appears to take its place as the quantum of information, a neatly measurable unit by which the transmitting capacity of all communications technology can be evaluated.
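Shannon's purely quantitative sense of "information" can be made concrete with a short calculation. The sketch below is an editorial illustration of his standard entropy formula, H = -Σ p log₂ p, which measures the average number of bits a source produces per symbol, entirely without regard to what the symbols mean.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy: the average 'information' per symbol, in bits.
    Note that nothing here refers to meaning; only to the statistical
    unpredictability of the source."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin toss carries exactly 1 bit per outcome.
fair = entropy_bits([0.5, 0.5])

# A heavily biased source is more predictable, so each outcome
# carries less information, roughly 0.47 bits.
biased = entropy_bits([0.9, 0.1])
```

On this measure, a page of gibberish can carry more "information" than a page of lucid prose, precisely the divorce from common-sense meaning that the surrounding passage describes.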

One can see how useful such a calculus of communications traffic is for electrical engineers dealing with the problem of channeling signals over phone wires or from space satellites, and wanting to do so with the greatest possible economy and clarity. But from the outset, Shannon was beset by the understandable confusion that arose between his restricted use of “information” and the conventional meaning of the word. From his point of view, even gibberish might be “information” if somebody cared to transmit it. After all, a message translated into a secret code would appear to be gibberish to anyone who did not know the code; but it would be well worth sending by anyone who did. The early information scientists easily fell into thinking this way about messages and their transmissions; many of them had served as cryptographers during the war. Still this was an odd and jarring way to employ the word, and Shannon had to admit as much. Once, when he was explaining his work to a group of prominent scientists who challenged his eccentric definition, he replied, “I think perhaps the word ‘information’ is causing more trouble . . . than it is worth, except that it is difficult to find another word that is anywhere near right. It should be kept solidly in mind that [information] is only a measure of the difficulty in transmitting the sequences produced by some information source.”[6]

For a time, Shannon considered dropping the word and using another—like communications theory. With a name like that, the new field would have had more distance from the need for meaningful content which we associate with information. For example, a disease can be “communicated”—a transmission of great consequence but without intelligent content. At one point, John von Neumann suggested—not very helpfully—that Shannon use the word entropy. But information became the word, a choice which Fritz [13] Machlup has called “infelicitous, misleading, and disserviceable”—the beginning of the term’s history as “an all-purpose weasel-word.”[7]

What we have here is an example of something that has happened many times before in the history of science. A word that has a long-standing, common-sense meaning is lifted from the public vocabulary and then skewed toward a new, perhaps highly esoteric definition by the scientists. The result can be a great deal of unfortunate confusion, even among the scientists themselves, who may then forget what the word meant before they appropriated it. The way physicists use the words motion, time, gravity, simultaneity has only a tenuous connection with commonplace, everyday experience. The word order in thermodynamics has a specialized application that at certain points diverges markedly from its normal meaning. Perhaps the most notorious example of such confusion involves the word intelligence as it has been reshaped by the psychologists. Among the IQ testers, “intelligence” is whatever certain highly eccentric academic tests measure. The result is a neat, numerical score: high scores mean high intelligence, low scores mean low intelligence. But neither the tests nor the scores may have any relationship to what we regard as real intelligence (or its absence) as we judge things in the midst of life.

In much the same way, in its new technical sense, information has come to denote whatever can be coded for transmission through a channel that connects a source with a receiver, regardless of semantic content. For Shannon’s purposes, all the following are “information”:

E = mc²

Jesus saves.

Thou shalt not kill.

I think, therefore I am.

Phillies 8, Dodgers 5

‘Twas brillig and the slithy toves did gyre and gimble in the wabe.

[14] And indeed, these are no more or less meaningful than any string of haphazard bits (x!9#44jGH?566MRK) I might be willing to pay to have telexed across the continent.

As the mathematician Warren Weaver once put it, explaining “the strange way in which, in this theory, the word ‘information’ is used . . . . It is surprising but true that, from the present viewpoint, two messages, one heavily loaded with meaning and the other pure nonsense, can be equivalent as regards information.”[8]
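Weaver’s point can be made concrete with a small sketch (my own illustration, not from the book, and the sample strings are arbitrary). Treating each message purely as a sequence of characters, in Shannon’s engineering sense, a proverb and a run of gibberish of the same length cost exactly the same number of bits to transmit; the empirical entropy of their character distributions, likewise, registers statistical variety, never meaning:

```python
import math
from collections import Counter

def bits_to_transmit(message: str) -> int:
    # In Shannon's engineering sense a message is just a payload:
    # 8 bits per encoded byte, regardless of what it "means".
    return len(message.encode("utf-8")) * 8

def entropy_per_char(message: str) -> float:
    # Empirical Shannon entropy of the character distribution,
    # H = -sum(p * log2(p)), measured in bits per character.
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two 20-character messages: one meaningful, one deliberate nonsense.
meaningful = "Thou shalt not kill."
nonsense = "x!9#44jGH?566MRK hdq"

for msg in (meaningful, nonsense):
    print(f"{msg!r}: {bits_to_transmit(msg)} bits, "
          f"{entropy_per_char(msg):.2f} bits/char")
```

Both strings come to 160 bits; the theory’s measure is entirely indifferent to the fact that one is a moral injunction and the other is noise.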

One might expect that anyone reading through the list of items above would immediately note that each stands on a markedly different intellectual level. One statement is a moral injunction; one is a mathematical formulation; one is a minor point of fact; one is a theological teaching; and the last is deliberate (though charming) nonsense. But once they have all been transformed into electrical bits, and once the technicians have got us into the habit of labeling them all information, these vital differences—which it would, for example, be rather important to draw out for children as part of their education—cannot help but be obscured.

To be sure, Shannon’s work is highly technical and therefore largely inaccessible to the general public; nevertheless, its influence has been enormous. As information theory has come to be widely applied in our high tech economy, it has had a twofold impact upon our popular culture.

First of all, once “information” had been divorced from its conventional meaning, the word was up for grabs. Following the lead of the information theorists, scientists and technicians felt licensed to make ever broader and looser use of the word. It could soon be applied to any transmitted signal that could be metaphorically construed as a “message”—for example, the firing of a nerve impulse. To use the term so liberally is to lay aside all concern for the quality or character of what is being communicated. The result has been a progressive blurring of intellectual distinctions. Just as it is irrelevant to a physicist (from the viewpoint of the purely physical phenomenon) whether we are measuring the fall of a stone or the fall of a human body, so, for the information theorist, it does not matter whether we are transmitting a fact, a judgment, a shallow cliché, a deep teaching, a sublime truth, or a nasty obscenity. All are “information.” The word comes to have vast generality, but at a price; the meaning of things communicated comes to be leveled, and so too the value.

[15] The effect is similar to that which the mathematical theory of games had upon people’s thinking in the 1950s and 1960s. From the viewpoint of games theorists, chess, poker, business investments, arguments between parents and children, collective bargaining, thermonuclear war came to be seen as “games”—in the sense that certain general strategies could be applied to all of them. This was a valuable insight into many forms of competition and negotiation, but it was gained at great cost. Around the theory of games, a literature and discourse of military strategy grew up whose authors felt licensed to discuss the annihilation of the human race as casually as one might discuss a hand of cards. For, after all, these were simply different kinds of “games.” On balance, the result of this intellectual sleight-of-hand was a lamentable bamboozling of the public, who came to see arguments made in this esoteric terminology (all decked out with many numbers) as intimidatingly authoritative.

Secondly, information theory worked. In its own field of application, it provided the electrical engineers with a powerful tool that contributed significantly to rapid innovation. With UNIVAC, the original vacuum tube computer had reached the limit of its development, and still the machines were too big and slow to carry out truly sophisticated programs. In the course of the 1950s and 1960s, however, these limitations were overcome by the development of the transistor and integrated circuit. These highly miniaturized semiconductors allowed the computer to be compacted and its processing functions to be vastly accelerated. At the same time, thanks again to Shannon’s work, the computer was finding its way into the world’s burgeoning telecommunications network so that it could extend its power beyond local, on-site use. This permitted computers to communicate with one another over great distances, and eventually, with the deployment of space satellites, to remain instantaneously in touch around the world. While the computer was shrinking physically to desk-top size, it was taking on a new, disembodied, electronic “size” that dwarfed all previous technology in the scope of its power. In our own day, these two developments—miniaturization and telecommunications outreach—have allowed even the most modest personal computer to link into information networks that span the planet, giving it, in the view of some enthusiasts, the dimensions of a global brain.

Achievements of this astonishing order were bound to shift our understanding of information away from people (as sources or re- [16] ceivers) toward the exciting new techniques of communication. This is because the main concern of those who use information theory is with apparatus, not content. For that matter, the theory does not even require a human source or receiver on either side of the apparatus. The source might just as well be a ballistic missile registering its trajectory on radar; the receiver might just as well be a computer programmed to trigger a retaliatory strike. Such a situation fulfills all the mathematical requirements of the theory.

Thanks to the high success of information theory, we live in a time when the technology of human communications has advanced at blinding speed; but what people have to say to one another by way of that technology shows no comparable development. Still, in the presence of so ingenious a technology, it is easy to conclude that because we have the ability to transmit more electronic bits more rapidly to more people than ever before, we are making real cultural progress—and that the essence of that progress is information technology.


Between them, Wiener and Shannon radically re-conceptualized the meaning of information, lending the term a new mathematical precision without which the computer might never have developed much beyond the power of UNIVAC. But their professional work was too esoteric to find an audience outside the world of the logicians and technicians. For the general public, the intriguing image which Wiener had raised in The Human Use of Human Beings—that of information as the basis of life—found its most dramatic support from another, unexpected quarter: biology—or rather, the new biology, where the most highly publicized scientific revolution since Darwin was taking place.

In 1953, microbiologists James Watson and Francis Crick announced that they had solved the master problem of modern biology. They had broken the “genetic code” hidden deep within the molecular structure of DNA. The very use of the word code in this context was significant. For one thing, it immediately seemed to link the [17] discoveries of the biologists to those of the new information theorists, whose work had much to do with the “encoding” of information. The word also carried with it the thrill of an espionage story and, in fact, harked back to the original use made of the computer in England: to break the German secret code during World War II. No sooner had Watson and Crick published their breakthrough than the DNA molecule came to be universally seen as something like a tiny cybernetic apparatus that stored and processed microscopic bits of chemically encoded data. Supposedly, these coded messages controlled discrete physical processes in the replication of living things. Soon, the entire code of the double helix would be unscrambled and its message might be read off bit by bit like the memory store of a computer. As John Pfeiffer of MIT described the function of DNA in a 1960 television documentary on CBS, “The program’s patterns of chemical bases may be compared to patterns of holes or magnetic spots on paper tapes fed into electronic computers.”[9] The DNA “program” has not turned out to be quite that simple, but in the first flush of discovery, it seemed that Wiener’s proposition had been confirmed: cybernetics and biology had found a common ground.

Since its inception, the new biology has been so tightly entwined with the language and imagery of information science that it is almost impossible to imagine the field developing at all without the aid of the computer paradigm. One biologist identifies “the theoretical tool” that unlocked the chemistry of life as

the new sciences associated with the development of computers. Theories of “control,” “feedback,” and “information transfer” were collated in 1948 by the American engineer and mathematician Norbert Wiener under the name of “cybernetics.” . . . Biochemists seized on these new concepts in order to probe the ways in which the cell controlled and regulated its own metabolism.

The job of the cyberneticist, he explains,

is the study of information transfer: the converting of information from one form to another—the human voice into radio waves and back into sound once more, or a complex [18] mathematical equation into a set of punched holes on a tape, to be fed into a computer and then into a set of traces on reels of magnetic tape in the computer’s “memory store.” . . . To him, protein synthesis is just such another case. The mechanism for ensuring the exact replication of a protein chain by a new cell is that of transferring the information about the protein structure from the parent to the daughter cell.[10]

One is left to wonder: could the revolution in biology have occurred if the model of the computer had not been conveniently at hand waiting to be adopted? This would not be the first time a technological metaphor served to launch a scientific breakthrough. In the seventeenth century, at the very beginning of modern science, astronomers and physicists appropriated the model of the clock to explain the mechanics of the solar system and soon taught their society to see the entire universe as a clockwork instrument.

However much the new biology may have borrowed from the preexisting cybernetic model, it repaid the debt many-fold by lending information a mystique it could not have acquired in any other way. In effect, it became the secret of life. From a data-computing mechanism as tiny as the DNA molecule, all the subtle complexity of life on earth had evolved. As John Pfeiffer confidently put it, “This is automation at the molecular level.” Here was an astonishing demonstration of how much could be pieced together out of mere particles of data. It was as if God Himself, formerly the great watchmaker in the sky, had been updated into the cosmic computer programmer. Within another decade, by the early 1960s, it became commonplace for people to speak not only of their genes but of their minds and private psyches as being “programmed.” If it was not yet the case, as Wiener had predicted, that cybernetic machines would become more like people, certainly people were coming to see themselves more and more as a kind of machine: a biocomputer.

Ironically, as the new biology has grown a bit older, it has changed in ways that no longer make the simple cybernetic model seem quite so persuasive. In the early days, the genetic code looked as if it would be a lot easier to crack than has turned out to be the case. Initially, it was assumed that the message of the genes might be read off as so many fixed, linear sequences of nucleotide bases, much like the digital bit-string in a computer. More recently, as problems [19] of developmental regulation have gained prominence in the field, genes have become a lot trickier to interpret. The mysterious process of “transposition” has begun to draw attention. The work of Barbara McClintock, among others, suggests that genes may actually pick themselves up and move about in the genome, almost purposefully changing their meaning as they alter their position in response to some larger context.[11] So far the biologists have no model to use for that context, but neither computers nor cybernated systems would seem to serve. Maybe the context is something like an “idea” about the whole organism and its relationship to the environment. If this is so, then the cybernetic model that did so much to launch the new biology might be totally misleading. For there are no computer programs that behave this way. If they did, it would amount to saying that they had a mind of their own—and that way lies science fiction, not serviceable technology. Yet, for lack of a better choice, the data processing image lingers on, rendering biology in the late twentieth century more mechanistic than physics.

Every historical period has its god-word. There was an Age of Faith, an Age of Reason, an Age of Discovery. Our time has been nominated to be the Age of Information. If the name takes, the fortuitous connection between cybernation and the new biology will have to be credited with lending information much of the vogue it has come to enjoy. Perhaps there is another reason for the increasing popularity and generality of the word, one that tells us something important about an era that is willing to accept such a seemingly characterless designation. Unlike “faith” or “reason” or “discovery,” information is touched with a comfortably secure, noncommittal connotation. There is neither drama nor high purpose to it. It is bland to the core and, for that very reason, nicely invulnerable. Information smacks of safe neutrality; it is the simple, helpful heaping up of unassailable facts. In that innocent guise, it is the perfect starting point for a technocratic political agenda that wants as little exposure for its objectives as possible. After all, what can anyone say against information?

But in contemporary America, even a god-word does not enter the popular consciousness in a decisive way until it can somehow be bought and sold in the marketplace. Only then can it be coveted as a possession, paid for, taken home, and owned. More importantly, only then does it qualify to receive the attention of the advertisers [20] who have the power to turn it from an interest into a want, from a want into a need. In the course of the 1950s, information had come to be identified with the secret of life. By the 1970s, it had achieved an even more exalted status. It had become a commodity—and indeed, as we have seen, “the most valuable commodity in business. Any business.”







In raising these questions about the place of the computer in our schools, it is not my purpose to question the value of information in and of itself. For better or worse, our technological civilization needs its data the way the Romans needed their roads and the Egyptians of the Old Kingdom needed the Nile flood. To a significant degree, I share that need. As a writer and teacher, I must be part of the 5 to 10 percent of our society which has a steady professional appetite for reliable, up-to-date information. I have long since learned to value the services of a good reference library equipped with a well-connected computer.

Nor do I want to deny that the computer is a superior means of storing and retrieving data. There is nothing sacred about the typed or printed page when it comes to keeping records; if there is a faster way to find facts and manipulate them, we are lucky to have it. Just as the computer displaced the slide rule as a calculating device, it has every right to oust the archive, the filing cabinet, the reference book, if it can prove itself cheaper and more efficient.

But I do want to insist that information, even when it moves at the speed of light, is no more than it has ever been: discrete little bundles of fact, sometimes useful, sometimes trivial, and never the substance of thought. I offer this modest, common-sense notion of information in deliberate contradiction to the computer enthusiasts [88] and information theorists who have suggested far more extravagant definitions. In the course of this chapter and the next, as this critique unfolds, it will be my purpose to challenge these ambitious efforts to extend the meaning of information to nearly global proportions. That project, I believe, can only end by distorting the natural order of intellectual priorities. And insofar as educators acquiesce in that distortion and agree to invest more of their limited resources in information technology, they may be undermining their students’ ability to think significantly.

That is the great mischief done by the data merchants, the futurologists, and those in the schools who believe that computer literacy is the educational wave of the future: they lose sight of the paramount truth that the mind thinks with ideas, not with information. Information may helpfully illustrate or decorate an idea; it may, where it works under the guidance of a contrasting idea, help to call other ideas into question. But information does not create ideas; by itself, it does not validate or invalidate them. An idea can only be generated, revised, or unseated by another idea. A culture survives by the power, plasticity, and fertility of its ideas. Ideas come first, because ideas define, contain, and eventually produce information. The principal task of education, therefore, is to teach young minds how to deal with ideas: how to evaluate them, extend them, adapt them to new uses. This can be done with the use of very little information, perhaps none at all. It certainly does not require data processing machinery of any kind. An excess of information may actually crowd out ideas, leaving the mind (young minds especially) distracted by sterile, disconnected facts, lost among shapeless heaps of data.

It may help at this point to take some time for fundamentals.

The relationship of ideas to information is what we call a generalization. Generalizing might be seen as the basic action of intelligence; it takes two forms. First, when confronted with a vast shapeless welter of facts (whether in the form of personal perceptions or secondhand reports), the mind seeks for a sensible, connecting pattern. Second, when confronted with very few facts, the mind seeks to create a pattern by enlarging upon the little it has and pointing it in the direction of a conclusion. The result in either case is some general statement which is not in the particulars, but has been imposed upon them by the imagination. Perhaps, after more facts are gathered, the pattern falls apart or yields to another, more [89] convincing possibility. Learning to let go of an inadequate idea in favor of a better one is part of a good education in ideas.

Generalizations may take place at many levels. At the lowest level, they are formulated among many densely packed and obvious facts. These are cautious generalizations, perhaps even approaching the dull certainty of a truism. At another level, where the information grows thinner and more scattered, the facts less sharp and certain, we have riskier generalizations which take on the nature of a guess or hunch. In science, where hunches must be given formal rigor, this is where we find theories and hypotheses about the physical world, ideas that are on trial, awaiting more evidence to strengthen, modify, or subvert them. This is also the level at which we find the sort of hazardous generalizations we may regard as either brilliant insights or reckless prejudices, depending upon our critical response: sweeping statements perhaps asserted as unassailable truths, but based upon very few instances.

Generalizations exist, then, along a spectrum of information that stretches from abundance to near absence. As we pass along that spectrum, moving away from a secure surplus of facts, ideas tend to grow more unstable, therefore more daring, therefore more controversial. When I observe that women have been the homemakers and child-minders in human society, I make a safe but uninteresting generalization that embraces a great many data about social systems past and present. But suppose I go on to say, “And whenever women leave the home and forsake their primary function as housewives, morals decline and society crumbles.” Now I may be hard pressed to give more than a few questionable examples of the conclusion I offer. It is a risky generalization, a weak idea.

In Rorschach psychological testing, the subject is presented with a meaningless arrangement of blots or marks on a page. There may be many marks or there may be few, but in either case they suggest no sensible image. Then, after one has gazed at them for a while, the marks may suddenly take on a form which becomes absolutely clear. But where is this image? Not in the marks, obviously. The eye, searching for a sensible pattern, has projected it into the material; it has imposed a meaning upon the meaningless. Similarly in Gestalt psychology, one may be confronted with a specially contrived perceptual image: an ambiguous arrangement of marks which seems at first to be one thing but then shifts to become another. Which is the “true” image? The eye is free to choose between them, for they are [90] both truly there. In both cases—the Rorschach blots and the Gestalt figure—the pattern is in the eye of the beholder; the sensory material simply elicits it. The relationship of ideas to facts is much like this. The facts are the scattered, possibly ambiguous marks; the mind orders them one way or another by conforming them to a pattern of its own invention. Ideas are integrating patterns which satisfy the mind when it asks the question, What does this mean? What is this all about?

But, of course, an answer that satisfies me may not satisfy you. We may see different patterns in the same collection of facts. And then we disagree and seek to persuade one another that one or the other of these patterns is superior, meaning that it does more justice to the facts at hand. The argument may focus on this fact or that, so that we will seem to be disagreeing about particular facts—as to whether they really are facts, or as to their relative importance. But even then, we are probably disagreeing about ideas. For as I shall suggest further on, facts are themselves the creations of ideas.

Those who would grant information a high intellectual priority often like to assume that facts, all by themselves, can jar and unseat ideas. But that is rarely the case, except perhaps in certain turbulent periods when the general idea of “being skeptical” and “questioning authority” is in the air and attaches itself to any dissenting, new item that comes along. Otherwise, in the absence of a well-formulated, intellectually attractive, new idea, it is remarkable how much in the way of dissonance and contradiction a dominant idea can absorb. There are classic cases of this even in the sciences. The Ptolemaic cosmology that prevailed in ancient times and during the Middle Ages had been compromised by countless contradictory observations over many generations. Still, it was an internally coherent, intellectually pleasing idea; therefore, keen minds stood by the familiar old system. Where there seemed to be any conflict, they simply adjusted and elaborated the idea, or restructured the observations in order to make them fit. If observations could not be made to fit, they might be allowed to stand along the cultural sidelines as curiosities, exceptions, freaks of nature. It was not until a highly imaginative constellation of ideas about celestial and terrestrial dynamics, replete with new concepts of gravitation, inertia, momentum, and matter, was created that the old system was retired. Through the eighteenth and nineteenth centuries, similar strategies of adjustment were used to [91] save other inherited scientific ideas in the fields of chemistry, geology, and biology. None of these gave way until whole new paradigms were invented to replace them, sometimes with relatively few facts initially to support them. The minds that clung to the old concepts were not necessarily being stubborn or benighted; they simply needed a better idea to take hold of.


If there is an art of thinking which we would teach the young, it has much to do with showing how the mind may move along the spectrum of information, discriminating solid generalizations from hunches, hypotheses from reckless prejudices. But for our purposes here, I want to move to the far end of the spectrum, to that extreme point where the facts, growing thinner and thinner, finally vanish altogether. What do we find once we step beyond that point into the zone where facts are wholly absent?

There we discover the riskiest ideas of all. Yet they may also be the richest and most fruitful. For there we find what might be called the master ideas—the great moral, religious, and metaphysical teachings which are the foundations of culture. Most of the ideas that occupy our thinking from moment to moment are not master ideas; they are more modest generalizations. But from this point forward I will be emphasizing master ideas because they are always there in some form at the foundation of the mind, molding our thoughts below the level of awareness. I want to focus upon them because they bear a peculiarly revealing relationship to information, which is our main subject of discussion. Master ideas are based on no information whatever. I will be using them, therefore, to emphasize the radical difference between ideas and data which the cult of information has done so much to obscure.

Let us take one of the master ideas of our society as an example:

All men are created equal.

The power of this familiar idea will not be lost on any of us. From it, generations of legal and philosophical controversy have arisen, political movements and revolutions have taken their course. [92] It is an idea that has shaped our culture in ways that touch each of us intimately; it is part, perhaps the most important part, of our personal identity.

But where did this idea come from? Obviously not from some body of facts. Those who created the idea possessed no more information about the world than their ancestors, who would, doubtless, have been shocked by such a pronouncement. They possessed far less information about the world than we in the late twentieth century may feel is necessary to support such a sweeping, universal statement about human nature. Nevertheless, those who shed their blood over the generations to defend that assertion (or to oppose it) did not do so on the basis of any data presented to them. The idea has no relationship whatever to information. One would be hard pressed even to imagine a line of research that might prove or disprove it. Indeed, where such research has been attempted (for example by inveterate IQ theorists), the result, as their critics are always quick to point out, is a hopeless distraction from the real meaning of the idea, which has nothing to do with measurements or findings, facts or figures of any kind. The idea of human equality is a statement about the essential worth of people in the eyes of their fellows. At a certain juncture in history, this idea arose in the minds of a few morally impassioned thinkers as a defiantly compassionate response to conditions of gross injustice that could no longer be accepted as tolerable. It spread from the few to the many; finding the same insurgent response in the multitude, it soon became the battle cry of an era. So it is with all master ideas. They are born, not from data, but from absolute conviction that catches fire in the mind of one, of a few, then of many as the ideas spread to other lives where enough of the same experience can be found waiting to be ignited.

Here are some more ideas, some of them master ideas, each of which, though condensed in form, has been the theme of countless variations in the philosophy, religious belief, literature, and jurisprudence of human society:

Jesus died for our sins.

The Tao that can be named is not the true Tao.

Man is a rational animal.

Man is a fallen creature.

[93] Man is the measure of all things.

The mind is a blank sheet of paper.

The mind is governed by unconscious instincts.

The mind is a collection of inherited archetypes.

God is love.

God is dead.

Life is a pilgrimage.

Life is a miracle.

Life is a meaningless absurdity.

At the heart of every culture we find a core of ideas like these, some old, some new, some rising to prominence, some declining into obsolescence. Because those I list here in terse formulations are verbal ideas, they might easily be mistaken for intended statements of fact. They have the same linguistic form as a point of information, like “George Washington was the first president of the United States.” But of course they are not facts, any more than a painting by Rembrandt is a fact, or a sonata by Beethoven, or a dance by Martha Graham. For these too are ideas; they are integrating patterns meant to declare the meaning of things as human beings have discovered it by way of revelation, sudden insight, or the slow growth of wisdom over a lifetime. Where do these patterns come from? The imagination creates them from experience. Just as ideas order information, they also order the wild flux of experience as it streams through us in the course of life.

This is the point Fritz Machlup makes when he observes a striking difference between “information” and “knowledge.” (He is using “knowledge” here in exactly the same way I am using “idea”—as an integrating pattern.) “Information,” he tells us, “is acquired by being told, whereas knowledge can be acquired by thinking.”

Any kind of experience—accidental impressions, observations, and even “inner experience” not induced by stimuli received from the environment—may initiate cognitive processes leading to changes in a person’s knowledge. Thus, new knowledge can be acquired without new information being [94] received. (That this statement refers to subjective knowledge goes without saying; but there is no such thing as objective knowledge that was not previously somebody’s subjective knowledge.) [12]

Ideas, then—and especially master ideas—give order to experience. They may do this in deep or shallow ways; they may do it nobly or savagely. Not all ideas are humane; some, which bid to become master ideas and may succeed, are dangerous, vile, destructive. Hitler’s Mein Kampf is a book filled with toxic ideas that were born of vengefulness and resentment. Yet they became, for a brief interval, the master ideas of one troubled society. No one who ever read that book and hated it did so because they thought the author had gotten some of his facts wrong; no one who ever read it and loved it cared about the accuracy of its information. The appeal of the book, whether accepted or rejected, was pitched at a different level of the mind.

Here are some more ideas that, at least in my view, are just as toxic:

Society is the war of each against all.

Self-interest is the only reliable human motivation.

Let justice be done though the heavens fall.

The only good Indian is a dead Indian.

Nice guys finish last.

The end justifies the means.

My country right or wrong.

It is precisely because some ideas—many ideas—are brutal and deadly that we need to learn how to deal with them adroitly. An idea takes us into people’s minds, ushers us through their experience. Understanding an idea means understanding the lives of those who created and championed it. It means knowing their peculiar sources of inspiration, their limits, their vulnerabilities and blind spots. What our schools must offer the young is an education that lets them make that journey through another mind in the light of other ideas, including some that they have fashioned for themselves from their own [95] experience. The mind that owns few ideas is apt to be crabbed and narrow, ungenerous and defensive in its judgments. “Nothing is more dangerous than an idea,” Émile Chartier once said, “when it is the only one we have.”

On the other hand, the mind that is gifted with many ideas is equipped to make its evaluations more gracefully. It is open and welcoming to its own experience, yet capable of comparing that experience discriminately with the lives of others, and so choosing its convictions with care and courtesy.


One of the major liabilities of the data processing model of thought is the way in which it coarsens subtle distinctions in the anatomy of the mind. The model may do this legitimately in order to simplify for analytical purposes; all scientific models do that. But there is always the danger—and many computer scientists have run afoul of it—that the model will become reified and be taken seriously. When that happens on the part of experts who should know better, it can actually falsify what we know (or should know) about the way our own minds work.

Take, for example, the significant interplay between experience, memory, and ideas, which is the basis of all thought. I have been using the word experience here to refer to the stream of life as it molds the personality from moment to moment. I use the word as I believe most artists would use it; more specifically, it is experience as it would be reflected in the literary technique called stream of consciousness.

Experience in this sense is the raw material from which moral, metaphysical, and religious ideas are fashioned by the mind in search of meaning. This may seem like an imprecise definition, especially to those of an empiricist inclination. In the empiricist tradition “experience” has come to be the equivalent of information. It is the sensory data which we collect in neat, well-packaged portions to test propositions about the world in a strictly logical way. When the empiricist philosophers of the seventeenth and eighteenth centuries defined experience in this way, they were in search of a form of [96] knowledge that would serve as an alternative to statements that were meant to be accepted on the basis of authority, hearsay, tradition, revelation, or pure introspective reasoning. Experience was intended to be that kind of knowledge which was firsthand and personally tested. It was also meant to be available for inspection by others through their experience. Hence, it was public knowledge and, as such, free of obfuscation or manipulation. This, so the empiricists argued, was really the only kind of knowledge worth having. Unless all the rest could be verified by experience, it probably did not deserve to be regarded as knowledge at all.

But experience of the kind the empiricists were after is actually of a very special, highly contrived variety. Modeled upon laboratory experimentation or well-documented, professional research, it exists almost nowhere except in the world of science—or possibly as evidence in a court of law. We do not normally collect much experience of this sort. Rather, we ordinarily take in the flow of events as life presents it—unplanned, unstructured, fragmentary, dissonant. The turbulent stream passes into memory where it settles out into things vividly remembered, half remembered, mixed, mingled, compounded. From this compost of remembered events, we somehow cultivate our private garden of certainties and convictions, our rough rules-of-thumb, our likes and dislikes, our tastes and intuitions and articles of faith.

Memory is the key factor here; it is the register of experience where the flux of daily life is shaped into the signposts and standards of conduct. Computers, we are told, also have “memories,” in which they store information. But computer memory is no more like human memory than the teeth of a saw are like human teeth; these are loose metaphors that embrace more differences than similarities. It is not the least of its liabilities that the cult of information obscures this distinction, to the point of suggesting that computer memory is superior because it remembers so much more. This is precisely to misinterpret what experience is and how it generates ideas. Computers “remember” things in the form of discrete entries: the input of quantities, graphics, words, etc. Each item is separable, perhaps designated by a unique address or file name, and all of it subject to total recall. Unless the machine malfunctions, it can regurgitate everything it has stored exactly as it was entered, whether a single number or a lengthy document. That is what we expect of the machine.
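The kind of machine “memory” described here, discrete entries filed under unique addresses and subject to total recall, can be sketched in a few lines of modern code; the addresses and entries below are invented purely for illustration:

```python
# A computer "remembers" as discrete, addressable entries: each item is
# stored under a unique key and can be recalled exactly as it was entered.
# (The keys and contents are hypothetical, for illustration only.)
memory = {}

def store(address, item):
    """File an item under a unique address."""
    memory[address] = item

def recall(address):
    """Total recall: the item comes back verbatim, or not at all."""
    return memory[address]

store("entry-001", "a single number: 42")
store("entry-002", "a lengthy document, word for word")

# Unless the machine malfunctions, retrieval is exact and complete.
assert recall("entry-001") == "a single number: 42"
assert recall("entry-002") == "a lengthy document, word for word"
```

The point of the sketch is the contrast drawn in the paragraphs that follow: nothing in such a store edits, compacts, represses, or forgets.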

Human memory, on the other hand, is the invisible psychic adhesive [97] that holds our identity together from moment to moment. This makes it a radically different phenomenon from computer memory. For one thing, it is fluid rather than granular, more like a wave than a particle. Like a wave, it spreads through the mind, puddling up here and there in odd personal associations that may be of the most inexplicable kind. It flows not only through the mind, but through the emotions, the senses, the body. We remember things as no computer can—in our muscles and reflexes: how to swim, play an instrument, use a tool. These stored experiences lodge below the level of awareness and articulation so that there is no way to tell someone how we drive a car or paint a picture. We don’t actually “know” ourselves. In an old bit of folk wisdom, the daughter asks her mother how she bakes such a good apple pie. The mother, stymied, replies: “First I wash my hands. Then I put on a clean apron. Then I go into the kitchen and bake a good apple pie.”

Moreover, where we deal with remembered experience, there is rarely total recall. Experiences may be there, deeply buried in our brain and organism, but they are mostly beyond recollection. Our memory is rigorously selective, always ready to focus on what matters to us. It edits and compacts experience, represses and forgets and it does this in ways we may never fully understand. As we live through each present moment, something immediately before us may connect with experiences that call up vivid sensory associations, pains, pleasures; these in turn may make us laugh, they may leave us sad, they may bring us to the point of nausea or deep trauma. Some of what we have experienced and stored away in memory may derive from our speechless childhood; some may be phantoms of prenatal recollection. Much is drawn from private fantasies never reported to anyone, hardly admitted to ourselves.

We may say that we remember what “interests” us; but we may also perversely conceal or recompose things that are too threatening to face. The recollections we retain are mysteriously selected, enigmatically patterned in memory. There are hot bright spots filled with rich and potent associations; there are shadowed corners which may only emerge vividly in dreams or hallucinations; there are odd, quirky zones that delight to fill up with seemingly useless, chaotic remnants—things we remember without knowing why, even items (insistent song lyrics, irritating advertising jingles) we would just as soon erase if we could . . . but we can’t. If we could draw a full anatomy of memory in all its elusive variety, we would have the [98] secret of human nature itself. The shape of memory is quite simply the shape of our lives; it is the self-portrait we paint from all we have experienced. It is not the computer scientist but a literary artist like Vladimir Nabokov who can tell us most about the strange dynamics of experience. He writes:

A passerby whistles a tune at the exact moment that you notice the reflection of a branch in a puddle which in its turn and simultaneously recalls a combination of damp leaves and excited birds in some old garden, and the old friend, long dead, suddenly steps out of the past, smiling and closing his dripping umbrella. The whole thing lasts one radiant second and the motion of impressions and images is so swift that you cannot check the exact laws which attend their recognition, formation, and fusion.... It is like a jigsaw puzzle that instantly comes together in your brain with the brain itself unable to observe how and why the pieces fit, and you experience a shuddering sensation of wild magic.[13]

Experience, as Nabokov describes it here, is more like a stew than a filing system. The ingredients of a lifetime mix and mingle to produce unanticipated flavors. Sometimes a single piquant component—a moment of joy, a great sorrow, a remembered triumph or defeat—overpowers all the rest. In time, this stew boils down to a rich residue of feelings, general impressions, habits, expectations. Then, in just the right circumstance—but who can say what this will be?—that residue bubbles up into a well-formed insight about life which we may speak or paint or dance or play out for the world to know. And this becomes, whether articulately or as an unspoken existential gesture, an idea. Certainly, this has much to do with the climate of opinion in which we find ourselves, the traditions we share, the autobiographical momentum of our lives. But how these will combine in any given mind at any given moment and what they will produce is wholly beyond prediction. The stew of personal experience is too thick, too filled with unidentifiable elements mixed in obscure proportions. What emerges from the concoction can be genuinely astonishing. Which is only to observe what all culture tells us about ourselves: that we are capable of true originality. History teems with such marvelous examples of invention and startling conversion. [99] Paul of Tarsus struck blind on the road to Damascus rises from the trauma to become the disciple of a savior he had never met and whose followers he had persecuted; Tolstoy, falling into an episode of suicidal depression, disowns his literary masterworks and strives to become an ascetic hermit; Gandhi, driven from the white-only compartment of a South African train, renounces his promising legal career to don a loincloth and become the crusading mahatma of his people. This is experience at work, mysteriously shaping new ideas about life in the depths of the soul.

So too all of us, as we bear witness to the emerging convictions of others, confront what they say and do with the full force of our experience. If there is a confirming resonance within us, it may be because our lives have overlapped those we encounter. But it may also be that the power of the encounter in itself—then and there in a single moment—shatters the convictions of a lifetime, and we have the sense of beginning anew, of being reborn. For there are such instances of people being unmade and remade by charismatic confrontation and the pressures of crisis. It may even be the case that these gifts of originality and sudden conversion play a crucial evolutionary role in the growth of culture. Perhaps this volatility of mind is what saves human society from the changeless rigidity of the other social animals, the ants, the bees, the beasts of the pack and the herd. We are gifted as a species with a crowning tangle of electrochemical cells which has become an idea-maker. So spontaneously does this brain of ours make ideas and play with ideas that we cannot say much more about them than that they are there, shaping our perceptions, opening up possibilities. From moment to moment, human beings find new things to think and do and be: ideas that erupt seemingly from out of nowhere. We are remarkably plastic and adaptable animals, and the range of our cultural creativity seems unlimited. It would be a great loss if, by cheapening our conception of experience, memory, and insight, the cult of information blunted these creative powers.

There are computer scientists who seem well on their way toward doing that, however. They believe they can simulate our originality on the computer by working out programs that include a randomizing element. (The Logo program for poetry which we reviewed in the previous chapter is an example of this.) Because this makes the output of the program unpredictable, it has been identified as “creative.” [100] But there is all the difference in the world between such contrived randomness and true originality. Again, the data processing model works to obscure the distinction. In the human mind, an original idea has a living meaning; it connects with experience and produces conviction. What the computer produces is “originality” at about the level of a muscular spasm; it is unpredictable, but hardly meaningful.
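The sort of program at issue can be sketched in modern terms. This is not the Logo original, and the word lists are invented; it simply shows how a randomizing element yields unpredictable but meaningless output:

```python
import random

# Contrived randomness posing as "creativity": grammatical lines assembled
# by chance from fixed word lists (all invented for this illustration).
NOUNS = ["moon", "river", "machine", "sorrow"]
VERBS = ["whispers", "devours", "remembers", "dissolves"]
ADJECTIVES = ["silent", "electric", "ancient", "hollow"]

def random_line(rng):
    """Assemble one well-formed but meaningless line at random."""
    return f"the {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)} {rng.choice(VERBS)}"

def random_poem(seed, lines=3):
    # Seeded, so any given "poem" can be reproduced on demand.
    rng = random.Random(seed)
    return "\n".join(random_line(rng) for _ in range(lines))

print(random_poem(seed=1))
```

Each seed yields a different, unpredictable poem; the unpredictability is real, but nothing in the program connects the words to experience or conviction.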

Of course, there are other forms of experience that come to us more neatly packaged and labeled: things learned by rote or memorized verbatim, precise instructions, procedures, names, addresses, facts, figures, directions. What such experiences leave behind is much like what fills computer memory: information in the proper sense of the term. Our psychological vocabulary does not clearly distinguish these different levels and textures of memory; we have simply the one word for the remembrance of things past. We remember a phone number; we remember an episode of traumatic suffering that changed our lives. To sweep these different orders of experience under the rubric information can only contribute to cheapening the quality of life.

“The heart has its reasons,” Pascal tells us, “which reason cannot know.” I would take this to mean that the minds of people are filled with ideas which well up from deep springs of mixed and muddled experience. Yet these ideas, hazy, ambiguous, contradictory as they may be, can be, for better or worse, the stuff of strong conviction. In a debate that involves such “reasons,” information is rarely of much use. Instead, we must test and sample in the light of our own convictions, seeking the experience that underlies the idea. We must do what I dare say you are doing now as you read these words, which are convictions of mine presented for your consideration. You pause, you reflect, probing to discover what my moral and philosophical loyalties might be. As you try to get the feel of the ideas I offer, you cast about in your recollections to see if you can find there an echo of the experiences I draw upon. You may loiter more over nuances and shades of meaning than over matters of fact. Here and there you may detect distant implications or hidden assumptions that you may or may not care to endorse. Possibly you sense that some of your fondest values are challenged and you hasten to defend them.

There is no telling how this critical rumination will turn out, but one thing should be obvious: none of this is “data processing.” It is the give and take of dialogue between two minds, each drawing upon [101] its own experience. It is the play of ideas, and all the information in all the data bases in the world will not decide the issues that may stand disputed between us.


Once they focus on the matter, many people will find the primacy of ideas so obvious that they may wonder why it has to be raised as a bone of contention at all. How have the computer scientists managed to subordinate ideas to data so persuasively? This is an intriguing historical question to which we might do well to give some attention.

Earlier in this chapter, I made reference to the empiricist school of philosophy and the way in which it has chosen to reinterpret the meaning of experience. Let us return for a moment to the impact of empiricism upon Western philosophy, for it plays a significant role in the cult of information.

Some four centuries ago, in that turbulent transitional zone that leads from the Renaissance to the modern period, the realm of knowledge in the Western world was a relatively small island of certainty surrounded by a sea of accepted mystery. At its far, unfathomable reaches, that sea merged with the mind of God, the contents of which might only be approached by an act of faith. On the island, the major bodies of thought were the scriptures, the works of the Church fathers, a handful of surviving Greek and Roman masters, and possibly a small select group of Jewish and Arab thinkers. Over several centuries of the medieval period, these sources had been worked up, often by way of brilliant elaborations, into an august repertory of knowledge that was held to answer all the questions the human mind could expect to have answered.

In such a culture, there is no such category as “information”; facts count for very little where whatever can be known is already known and has been assimilated to well-known truths. Instead of information there is confabulation: constant, sometimes inspired play with familiar ideas that are extended, combined, reshaped. By the latter part of the sixteenth century, this intellectual style was becoming more and more incompatible with the social and economic dynamism of Western society. For one thing—a dramatic thing—[102] new worlds were being discovered, whole continents and cultures that were unaccounted for by any existing authority. These were discoveries. And if there could be geographical discoveries, then why not new worlds of the mind as well? Francis Bacon used just that comparison to justify his restless quest for a “New Philosophy.” He, Descartes, Galileo, Giordano Bruno were among the first to match their culture’s expansive passion for physical discovery with a corresponding intellectual daring.

These seminal minds of the seventeenth century hit upon an exciting cultural project. Their proposition was this: Let us devise a kind of inquiry which will have the power to discover new things about the world—about its forces, and structures, and phenomena. This will be a way of thinking that will be the equivalent of the great voyages of discovery that have found new worlds across the seas. This style of inquiry, they decided, should involve rigorous, well-targeted interrogation of nature by close observation and experimentation. It should be undertaken in a spirit of total objectivity, avoiding all assumptions and presuppositions. It should simply try to see things as they really are. The result of this new method will be a growing body of solid, reliable facts, usually measurements, which have heretofore been overlooked. Then, if an observer sets about scrupulously collecting these facts, they will eventually speak for themselves, shaping themselves into great truths as vast in their scope as the size of the whole universe.

We can now recognize this method (the novum organum, as Bacon called it) as the distant beginning of the modern scientific world view. No one can fail to appreciate its historical contribution; but we also have enough historical perspective to know how very misconceived that method was. In its narrow focus on facts, it left out of account the crucial importance of theoretical imagination, hypothesis, speculation, and inspired guesswork—without which science would not have had its revolutionary impact. Looking back from our vantage point, we can clearly see theoretical imagination at work in the minds of Galileo, Newton, Kepler, Boyle, Hooke, contours of thought which were there but which they were too close to notice. We have learned that great scientific breakthroughs are never assembled piecemeal from lint-picking research. At times, limited, fine-grained investigation may succeed in raising important doubts about a scientific theory; but it must at least have that theory before it as a target or a baseline. Without some master idea that [103] serves that function, one would not know where to begin looking for facts. Science is structured inquiry, and the structures that guide its progress are ideas.

There was, however, a good reason why the founding fathers of modern science should have erred in the direction of overvaluing facts at the expense of ideas. In Galileo’s day, the dominant ideas about nature were derived from a few sacrosanct authorities—either Christian theology or Aristotle. In order to free themselves from this increasingly restrictive heritage of tired, old ideas, these daring minds were moved to call ideas themselves into question. So they recommended a new point of departure, one which seemed innocuously neutral and therefore strategically inoffensive to the cultural authorities of the day: they would concentrate their attention on the clear-cut indisputable facts of common experience—the weights and sizes and temperatures of things. Facts first, they insisted. Ideas later. And this proved to be a persuasive approach. It brought to light any number of terrestrial and astronomical novelties that could not be adequately explained by Aristotle, the Bible, the Church fathers—or perhaps had never been noticed by them at all. If the mission of the early empiricists is viewed in its historical context, it can be recognized as a clever philosophical gambit whose purpose was to break down ethnocentric barriers and ecclesiastical authority. In this, it finally succeeded. By encouraging a bold skepticism about all inherited ideas, it liberated the restricted intellectual energies of Western society. Its connection with the birth of modern science will always endow it with a special status.

The trouble is, the very success of the empiricists has helped to embed a certain fiercely reductionistic conception of knowledge in our culture, one that drastically undervalues the role of the imagination in the creation of ideas, and of ideas in the creation of knowledge, even in the sciences. In our time, minds loyal to the empiricist love of fact have seized upon the computer as a model of the mind at work storing up data, shuffling them about, producing knowledge, and potentially doing it better than its human original. Those who see the world more or less in this way represent one pole in an argument which had already been joined in the days of Plato, Aristotle, and Democritus. Which is more “real,” things or the ideas we have of things? Does knowledge begin in the senses or in the mind?

It is hardly my intention to try to adjudicate that argument here. I only wish to emphasize that the data processing model of the mind [104] is not some purely objective “finding” of contemporary science. It grows from a definite philosophical commitment; it represents one side in an ancient debate, still with us and still unsettled. The empiricist side of that debate deserves to be respected for the rich contribution it has made to our philosophical heritage. We would not want to do without it. But I have found it interesting, whenever I am in the company of those who hold a rigorously empirical position, to remind them of a paradox: their viewpoint is itself an idea. It is an idea about ideas . . . and about knowledge, experience, and truth. As such, it is not based on fact or information, because it is this very idea which defines information in the first place. There is ultimately no way around ideas, then. They are what the mind thinks with, even when it is attacking the primacy of ideas.

For that matter, the computer is also an idea, just as all machines are. It is an idea about number, and classification, and relationship—all realized in the form of a physical invention. The proposition that the mind thinks like a computer is an idea about the mind, one that many philosophers have taken up and debated. And like every idea, this idea also can be gotten outside of, looked at from a distance, and called into question. The mind, unlike any computer anyone has ever imagined building, is gifted with the power of irrepressible self-transcendence. It is the greatest of all escape artists, constantly eluding its own efforts at self-comprehension. It can form ideas about its own ideas, including its ideas about itself. But having done that, it has already occupied new ground; in its next effort to understand its own nature, it will have to reach out still further. This inability of the mind to capture its own nature is precisely what makes it impossible to invent a machine that will be the mind’s equal, let alone its successor. The computer can only be one more idea in the imagination of its creator. Our very capacity to make jokes about computers, to spoof and mock them, arises from our intellectual distance from them. If there is anything that frustrates the technician’s talent, it is open-ended potentiality.


From the viewpoint of the strict, doctrinaire empiricism which lingers on in the cult of information, the facts speak for themselves. Accumulate enough of them, and they will conveniently take the shape of knowledge. But how do we recognize a fact when we see one? Presumably, a fact is not a mental figment or an illusion; it is some small, compact particle of truth. But to collect such particles in the first place, we have to know what to look for. There has to be the idea of a fact.

The empiricists were right to believe that facts and ideas are significantly connected, but they inverted the relationship. Ideas create information, not the other way around. Every fact grows from an idea; it is the answer to a question we could not ask in the first place if an idea had not been invented which isolated some portion of the world, made it important, focused our attention, and stimulated inquiry.

Sometimes an idea becomes so commonplace, so much a part of the cultural consensus, that it sinks out of awareness, becoming an invisible thread in the fabric of thought. Then we ask and answer questions, collecting information without reflecting upon the underlying idea that makes this possible. The idea becomes as subliminal as the grammar that governs our language each time we speak.

Take an example. The time of day, the date. These are among the simplest, least ambiguous facts. We may be right or wrong about them, but we know they are subject to a straightforward true or false decision. It is either 2:15 p.m. in our time zone, or it is not. It is either March 10 or it is not. This is information at its most irreducible level.

Yet behind these simple facts, there lies an immensely rich idea: the idea of time as a regular and cyclical rhythm of the cosmos. Somewhere in the distant past, a human mind invented this elegant concept, perhaps out of some rhapsodic or poetic contemplation of the bewilderingly congested universe. That mind decided that the seemingly shapeless flow of time can be ordered in circles, the circles can be divided into equal intervals, the intervals can be counted. From this insight, imposed by the imagination on the flux of experience, [106] we derive the clock and the calendar, the minutes, days, months, seasons we can now deal with as simple facts.

Most of our master ideas about nature and human nature, logic and value eventually become so nearly subliminal that we rarely reflect upon them as human inventions, artifacts of the mind. We take them for granted as part of the cultural heritage. We live off the top of these ideas, harvesting facts from their surface. Similarly, historical facts exist as the outcroppings of buried interpretive or mythic insights which make sense of, give order to the jumbled folk memory of the past. We pick up a reference book or log on to a data base and ask for some simple information. When was the Declaration of Independence signed and who signed it? Facts. But behind those facts there lies a major cultural paradigm. We date the past (not all societies do) because we inherit a Judeo-Christian view of the world which tells us that the world was created in time and that it is getting somewhere in the process of history. We commemorate the names of people who “made history” because (along other lines) we inherit a dynamic, human-centered vision of life which convinces us that the efforts of people are important, and this leads us to believe that worthwhile things can be accomplished by human action.

When we ask for such simple points of historical information, all this stands behind the facts we get back as an answer. We ask and we answer the questions within encompassing ideas about history which have become as familiar to us as the air we breathe. But they are nonetheless human creations, each capable of being questioned, doubted, altered. The dramatic turning points in culture happen at just that point—where new idea rises up against old idea and judgment must be made.

What happens, then, when we blur the distinction between ideas and information and teach children that information processing is the basis of thought? Or when we set about building an “information economy” which spends more and more of its resources accumulating and processing facts? For one thing, we bury even deeper the substructures of ideas on which information stands, placing them further from critical reflection. For example, we begin to pay more attention to “economic indicators”—which are always convenient, simple-looking numbers—than to the assumptions about work, wealth, and well-being which underlie economic policy. Indeed, our orthodox economic science is awash in a flood of statistical figments [107] that serve mainly to obfuscate basic questions of value, purpose, and justice. What contribution has the computer made to this situation? It has raised the flood level, pouring out misleading and distracting information from every government agency and corporate boardroom. But even more ironically, the hard focus on information which the computer encourages must in time have the effect of crowding out new ideas, which are the intellectual source that generates facts.

In the long run, no ideas, no information.







Up to this point, we have been concentrating on the computer’s capacity to store and retrieve seemingly limitless amounts of data. This is one of the machine’s most impressive and useful powers; it is the feature that stands foremost in the minds of those who hail the advent of the Information Age. They are emphasizing the computer’s ability to access data bases, of which there are already thousands in existence, and to channel this wealth of material into people’s homes and workplaces.

But when we speak of the computer as a “data processor,” it is easy to overlook the fact that these two words refer to two separate functions that have been united in the machine. The computer stores data, but it can also process these data—meaning it can manipulate them in various ways for purposes of comparison, contrast, classification, deduction. The data may be numbers which are being run through mathematical processes; but they may also be names, addresses, medical records, personnel files, technical instructions which are also being run through a program to be sorted, ordered, filtered, or placed in some designated sequence. Thus, when a computer is ordered to run a spreadsheet scenario for a business, it draws upon all the data it holds for that business (inventory, overhead, earnings, seasonal performance, etc.), but it also massages the data, shaping them as the program instructs. Even a simple mailing list may [109] reorganize the material in its data bank in response to a program designed, for example, to segregate names by zip code in order to upscale the subscription list of a magazine or to edit out names on the basis of credit rating, ethnicity, age, etc.
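The two functions distinguished here can be made concrete with a brief sketch (my illustration, not Roszak's): the machine *stores* a set of records, and a program then *processes* them—sorting by zip code, filtering by credit rating. All names and field values below are hypothetical.

```python
# A minimal sketch of "data processing" as the text describes it: the
# same stored records are reordered and filtered by a program's rules.
# All names and field values here are hypothetical illustrations.

subscribers = [
    {"name": "C. Patron", "zip": "60614", "credit": "good"},
    {"name": "A. Reader", "zip": "10001", "credit": "good"},
    {"name": "B. Browser", "zip": "10001", "credit": "poor"},
]

# "Segregate names by zip code": reorder the stored records.
by_zip = sorted(subscribers, key=lambda s: s["zip"])

# "Edit out names on the basis of credit rating": filter the records.
approved = [s for s in subscribers if s["credit"] == "good"]

print([s["name"] for s in by_zip])    # records grouped by zip code
print([s["name"] for s in approved])  # poor credit ratings edited out
```

The point of the sketch is Roszak's own: nothing in these operations involves judgment—only mechanical comparison and selection according to rules laid down in advance.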

These two operations have become so integrated in the performance of most computers that they are rarely thought of any longer as separate functions. Yet they are, and each may be given a separate evaluation. Storing data connects the computer with the job of record keeping; it dates back to the ledgers and filing cabinets which electronic data banks are now replacing. In this capacity, the computer mimics the faculty of memory. Processing data, on the other hand, represents a different line of technological descent. Here the computer dates back to the adding machine, and in this capacity, it mimics the power of human reason. For many computer enthusiasts, this second line of development is the real significance of the machine. They value its ability to work through lengthy logical-mathematical procedures with blinding speed and absolute precision. For them, this is the computer’s closest approximation to the human mind.

In Chapter 4, we touched upon Seymour Papert’s Logo computer curriculum. Logo is an example of a computer application that has very little to do with data; the value Papert sees in it is its capacity to teach “procedural thinking,” and so to discipline the rational faculties in a way that a mathematician would find important. He believes children should be taught to “think like a computer” because he believes computers have something of the capacity to think like human beings and so can help children learn that mental skill.
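What Papert means by “procedural thinking” can be suggested with a sketch. Logo itself uses commands like FORWARD and RIGHT; the tiny text-only turtle below is a hypothetical Python stand-in, showing how a whole task (draw a square) is decomposed into a named procedure built from simpler, fully specified steps.

```python
# A hypothetical, text-only stand-in for Papert's Logo turtle. The point
# is the decomposition of a task into an explicit, rule-bound procedure.

class Turtle:
    # Unit direction vectors for the four compass headings: E, S, W, N.
    DIRS = [(1, 0), (0, -1), (-1, 0), (0, 1)]

    def __init__(self):
        self.x, self.y, self.facing = 0, 0, 0  # at the origin, facing east

    def forward(self, dist):
        dx, dy = self.DIRS[self.facing]
        self.x += dx * dist
        self.y += dy * dist

    def right(self):
        self.facing = (self.facing + 1) % 4  # a quarter turn clockwise

def square(t, side):
    # The classic Logo lesson, REPEAT 4 [FORWARD :side RIGHT 90],
    # expressed as a procedure composed from simpler primitives.
    for _ in range(4):
        t.forward(side)
        t.right()

t = Turtle()
square(t, 10)
print((t.x, t.y))  # the procedure returns the turtle to its start: (0, 0)
```

The pedagogical claim is that the child who writes `square` has learned to break a global intention into a sequence of unambiguous steps—exactly the style of thought Roszak goes on to examine critically.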

I have argued that those who celebrate the computer as an information keeper and provider tend to underrate, if not ignore, the value of ideas, assuming, as many strict empiricists have, that information somehow compiles itself automatically into knowledge without the active intervention of theoretical imagination. Yet, ironically enough, the second line of technological descent that flows into the computer—that which has to do with procedural thinking—derives from a very different philosophical tradition, one which is intimately connected with the power of pure reason. Along this rationalist line of descent, the computer draws upon a class of ideas which has proved to be uniquely persuasive and long-lived, even though it has no connection whatever with data or with human experience of any kind. These are mathematical ideas: ideas discovered in the light of [110] unaided reason, fashioned from the logical structure of the mind itself.

In the history of philosophy, it is mathematics that has again and again been used as an example of a priori knowledge, knowledge which supposedly has no connection with sensory experience, with the data of observation and measurement. As Bertrand Russell observes:

Mathematics is . . . the chief source of the belief in eternal and exact truth, as well as in a supersensible intelligible world. Geometry deals with exact circles, but no sensible object is exactly circular; however carefully we may use our compasses, there will be some imperfections and irregularities. This suggests the view that all exact reasoning applies to the ideal as opposed to sensible objects; it is natural to go further, and to argue that thought is nobler than sense, and the objects of thought more real than those of sense perception.[14]

The classic formulation of this idea about mathematical ideas is that of Plato, for whom geometry served as the model of all reliable knowledge. Plato assumed that geometrical ideas are born into the mind as our one sure foundation for thought. In the darkness and confusion of life, we have the certainty of mathematics to guide us. In his famous Allegory of the Cave, Plato portrays the human race as a population of wretched slaves confined by their physical mortality to a tenebrous dungeon where they can see nothing but a blurred show of animated shadows; they know nothing that is not impermanent and illusory. In their squalid prison, there is but one distant glimmer of illuminating sunlight. Only the true philosopher discerns it; it is the power of pure reason, which gives us, especially in the form of mathematics, a knowledge of eternal verities, the pure forms that transcend the flux of time and the frailty of the flesh.

Over the centuries, in a variety of ways, philosophers have taken issue with Plato’s theory of knowledge and the mystique which it lends to mathematics. Still, for all the criticism, there remains a haunting quality to mathematical ideas, a trust in the clarity of numbers and of mathematical logic that lingers on in modern science and which survives in cybernetics and information theory. Plato’s mysticism may have been banished from these new sciences, but the spell [111] of geometrical certainty remains. For, ironically enough, the machine that gives the cult of information its greatest strength is grounded in a body of ideas—mathematical ideas—which has nothing to do with information, and which might conceivably be seen as the best proof we have to offer of the primacy of ideas.

As computers have grown “smarter” (meaning faster, more capacious, more intricate in their programming) over the past two decades, computer scientists have often exhibited some uneasiness with the name of their machine. As most recent textbooks in computer science hasten to tell students in the first chapter, the computer is no longer merely a computing instrument; it has transcended its lowly origins to become a form of artificial intelligence in the broadest sense. Thus, Margaret Boden observes,

It is essential to realize that a computer is not a mere “number cruncher,” or super-calculating arithmetic machine, although this is how computers are commonly regarded by people having no familiarity with artificial intelligence. Computers do not crunch numbers; they manipulate symbols.... Digital computers, originally developed with mathematical problems in mind, are in fact general purpose symbol manipulating machines....

The terms “computer” and “computation” are themselves unfortunate, in view of their misleading arithmetical connotations. The definition of artificial intelligence previously cited—“the study of intelligence as computation”—does not imply that intelligence is really counting. Intelligence may be defined as the ability creatively to manipulate symbols, or process information, given the requirements of the task in hand. [15]

It is certainly true that computers have evolved a long way from being super adding machines. But it is also true that, in large measure, the reputation which computer science and computerized forms of “intelligence” have acquired in our popular culture borrows heavily upon the age-old mystique of mathematics. Insofar as computer scientists believe that computers are “machines who think” and that they may someday think better than people, it is because of the machine’s historic connection with what the scientists and technicians have always taken to be the clearest, most productive kind of thinking: [112] mathematics. The promise that many enthusiasts see in the computer is precisely that it will, in time, produce a form of intelligence which will apply the exactitude of mathematics to every other field of culture. The computer’s repertory of symbols may no longer be limited to numbers; nevertheless, the hope remains that its more sophisticated programs will be able to manipulate symbols with the logical rigor of mathematical reasoning. Fritz Machlup makes the point that the word computation has taken on a vastly extended usage, now covering whatever computers can do as symbol manipulators. This leads to a good deal of public confusion. When, for example, a cognitive scientist speaks of artificial intelligence programs, and people “read a sentence or clause to the effect that ‘mental processes are computational processes,’ they are most likely to think of processes of numerical computation—but would be wrong.”[16]

It is, however, this very error which works to enhance the prestige of the computer by making it all too easy to believe that whatever runs through a computer thereby acquires the ironclad certainty of pure mathematics. Though they would blush to associate themselves with Plato’s mysticism, many opportunistic figures in computer science and especially artificial intelligence have exploited that error for all it is worth in the confused public mind.

It is curious how, at times in the most unpredictable way, something of the old Platonic spirit surfaces in the world of computer science. Plato was convinced that it was the corruption of the flesh that separates us from the highest forms of knowledge. So he recommended the study of geometry as a sort of purgation of the senses that would elevate the mind above the body’s mortality. We can see exactly this same alliance of the ascetic and the mathematical in the following passage from Robert Jastrow’s study of “mind in the universe”:

When the brain sciences reach this point, a bold scientist will be able to tap the contents of his mind and transfer them into the metallic lattices of a computer. Because mind is the essence of being, it can be said that this scientist has entered the computer, and that he now dwells in it.

At last the human brain, ensconced in a computer, has been liberated from the weakness of the mortal flesh.... It is in control of its own destiny. The machine is its body; it is the machine’s mind....

[113] It seems to me that this must be the mature form of intelligent life in the Universe. Housed in indestructible lattices of silicon, and no longer constrained in the span of its years by the life and death cycle of a biological organism, such a kind of life could live forever.[17]

In this disembodied form, Jastrow imagines that the computer will transform us into “a race of immortals.”


The mathematical model of absolute certainty is one of the undying hopes of our species. As tough-minded as most scientists might be (or wish to appear to be) in their response to the old mathematical magic, that Platonic dream survives, and no place more vividly than in the cult of information. Data—the speed and quantity of their processing—may be what the cult most often emphasizes in its celebration of the computer. But quite as important as the data is the mathematical precision with which the computer’s programs manipulate the information fed into them. This is what computer scientists mean by the term effective procedure. We are told that a computer can do anything for which an “effective procedure” is given. The phrase means “a set of rules (the program) unambiguously specifying certain processes, which processes can be carried out by a machine built in such a way as to accept those rules as instructions determining its operations.”[18] The search for such a procedure would be pure whimsey were it not for the fact that there is one field of thought which offers us a model of just such strict logicality: mathematics, the field that produced the computer in the first place. When limited to the realm of what can be treated with such logical rigor, the computer functions at its full strength. But the further we stray from that realm, the more rapidly its powers fade.
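A standard textbook illustration of an “effective procedure”—my example here, not Roszak's—is Euclid's algorithm for the greatest common divisor: a finite set of unambiguous rules that a machine can carry out without judgment or interpretation, which is exactly the quality the definition above demands.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: every step is fixed entirely by the rules,
    which is just what makes it an 'effective procedure'."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Within this realm of strict logicality the computer is, as the text says, at its full strength; the algorithm terminates, and its every step is determined in advance.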

Unfortunately, not all computer scientists are willing to concede that point. They forget—and help the public forget—that mathematical ideas are of a very special kind. They are formal ideas, meaning they are built from axioms by unambiguously specifiable rules. They can be analyzed into parts, and the parts are ultimately logical [114] principles and postulates that lend themselves to mechanical manipulation. The value of mathematical ideas lies precisely in this analytical clarity and non-ambiguity. Within their field of proper application, they have the power to confer logical transparency; they strip away ambiguity to reveal the skeletal structure that connects parts, stages, and procedures. They can be programmed. This is because, by an astonishing exercise of the human imagination, mathematical systems have been developed outside the real world of daily experience, which is more often than not blurred, fuzzy, and infinitely complex.
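The claim that formal ideas are “built from axioms by unambiguously specifiable rules” and therefore “can be programmed” can be illustrated with a toy string-rewriting system—a hypothetical example of my own, not drawn from the text—in which every derivation step is a blind, mechanical substitution.

```python
# A toy formal system (hypothetical): one axiom, two rewriting rules.
# Deriving a "theorem" is pure mechanical symbol manipulation.

AXIOM = "I"

def rule_append(s):       # rule 1: any string may take a trailing O
    return s + "O"

def rule_contract(s):     # rule 2: the first "OO" may be rewritten as "I"
    return s.replace("OO", "I", 1)

RULES = [rule_append, rule_contract]

def derive(rule_indices, start=AXIOM):
    """Apply the listed rules in order -- no judgment, only substitution."""
    s = start
    for i in rule_indices:
        s = RULES[i](s)
    return s

print(derive([0, 0, 1]))  # I -> IO -> IOO -> II
```

The system knows nothing of what its symbols mean; the non-ambiguity of its rules is the whole of its “intelligence”—which is Roszak's point about where such formal power ends.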

Since there are areas of the real world that appear to approximate formal order, there are portions of mathematics that can be applied to that world in order to isolate its more measurable and rule-abiding elements. Where that happens, we have the realms of theoretical and applied science. And so here too computers can be highly useful in channeling large amounts of information through scientific and technical programs. But even here we should bear in mind that there are underlying ideas of a nonmathematical kind (we might call them insights or, perhaps, articles of faith) that govern all scientific thought. Take our basic conviction that there is a rational order to nature, a pattern which the mind can grasp. This is the most fundamental of scientific ideas. But what is it based upon? It is a hunch or a desperate hope worked up perhaps from fleeting perceptions of symmetries or regularities in nature, recurring rhythms and cycles—all of which are continually dissolving in the “blooming, buzzing confusion” of daily life. But working with that idea as a kind of filter, we screen out the exceptions and distractions and find deeper regularities which begin to look like an order of things. But what kind of order? Our science has chosen to look for the order of numbers. We work from Galileo’s potent idea that “the great book of nature is written in the language of mathematics.” But we might have chosen another kind of order. There is the order of music (thus the astronomer Kepler spent most of his life searching for the harmony of the spheres); there is the order of architecture and of drama; there is the order of a story (a myth) told over and over; there is the order of a god’s behavior, where we watch for reward and punishment, wrath and mercy. Which order is the most important? That choice, too, is an idea, selected from among all the possibilities.

Very nearly the whole of modern science has been generated out of a small collection of metaphysical, even aesthetic ideas such as:

[115] The universe consists of matter in motion. (Descartes)

Nature is governed by universal laws. (Newton)

Knowledge is power. (Bacon)

None of these ideas is a conclusion arrived at by scientific research; none of them is the result of processing information. Rather, they are premises that make scientific research possible and lead to the discovery of confirming data. Once again, these are master ideas about the world, and like all master ideas, they transcend information. They arise from another dimension of the mind, from a capacity for insight that is perhaps akin to the power of artistic and religious inspiration.

There is no question but that in the area of mathematical and scientific ideas, the computer supplements the mind significantly. It can carry out calculations at blinding speed; it can make hypothetical projections; it can offer amazingly flexible graphic representations; it can produce complex simulations that stretch the imagination. This is quite a lot for a machine to offer. Yet it may be that even in the sciences, the computer’s proficiency as an information processor has its liabilities. At least one leading scientist has raised a provocative warning about the use of computers in astronomy. Sir Bernard Lovell writes:

I fear that literal-minded, narrowly-focused computerized research is proving antithetical to the free exercise of that happy faculty known as serendipity.... Would the existence of radio galaxies, quasars, pulsars and the microwave background ever have been revealed if their discovery had depended on the computerized radio observations of today? . . . The computers act as very narrow filters of information; they must be oriented to specific observations. In other words, they have to be programmed for the kinds of results that the observer expects. Does this mean, then, that computers are anti-serendipitous? And if they are, should we not be troubled that they may be obscuring from our understanding further major features of the universe?[19]





Reflections on the True Art of Thinking


On the night of November 20, 1619, Rene Descartes, then an aspiring philosopher still in his early twenties, had a series of three dreams which changed the course of his life and of modern thought. He reports that in his sleep, the Angel of Truth appeared to him and, in a blinding revelation like a flash of lightning, revealed a secret which would “lay the foundations of a new method of understanding and a new and marvelous science.” In the light of what the angel had told him, Descartes fervently set to work on an ambitious treatise called “Rules for the Direction of the Mind.” The objective of his “new and marvelous science” was nothing less than to describe how the mind works. For Descartes, who was to invent analytical geometry, there was no question but that the model for this task was to be found in mathematics. There would be axioms (“clear and distinct ideas” that none could doubt) and, connecting the axioms in logical progressions, a finite number of simple, utterly sensible rules that were equally self-evident. The result would be an expanding body of knowledge.

Descartes never finished his treatise; the project was abandoned after the eighteenth rule—perhaps because it proved more difficult than he had anticipated. He did, however, eventually do justice to [211] the angel’s inspiration in the famous Discourse on Method, which is often taken to be the founding document of modern philosophy.[20] Descartes’ project was the first of many similar attempts in the modern world to codify the laws of thought; almost all of them follow his lead in using mathematics as their model. In our day, the fields of artificial intelligence and cognitive science can be seen as part of this tradition, but now united with technology and centering upon a physical mechanism—the computer—which supposedly embodies these laws.

The epistemological systems that have been developed since the time of Descartes have often been ingenious. They surely illuminate many aspects of the mind. But all of them are marked by the same curious fact. They leave out the Angel of Truth—as indeed Descartes himself did. For he never returned to the source of his inspiration. His writings spare no time for the role of dreams, revelations, insights as the wellsprings of thought. Instead, he gave all his attention to formal, logical procedures that supposedly begin with zero, from a position of radical doubt. This is a fateful oversight by the father of modern philosophy; it leaves out of account that aspect of thinking which makes it more an art than a science, let alone a technology: the moment of inspiration, the mysterious origin of ideas. No doubt Descartes himself would have been hard pressed to say by what door of the mind the angel had managed to enter his thoughts. Can any of us say where such flashes of intuition come from? They seem to arise unbidden from unconscious sources. We do not stitch them together piece by piece; rather, they arrive all at once and whole. If there are any rules we can follow for the generation of ideas, it may simply be to keep the mind open and receptive on all sides, to remain hospitable to the strange, the peripheral, the blurred and fleeting that might otherwise pass unnoticed. We may not know how the mind creates or receives ideas, but without them—and especially what I have called the master ideas which embody great reserves of collective experience—our culture would be unimaginably meager. It is difficult to see how the mind could work at all if it did not have such grand conceptions as truth, goodness, beauty to light its way.

At the same time that Descartes was drafting his rules of thought, the English philosopher Francis Bacon was also in search of a radical new method of understanding. Bacon, who was a mathematical illiterate, preferred to stress the importance of observation and the [212] accumulation of facts. He too was a man with a revolutionary vision—the intention of placing all learning on a new foundation of solid fact derived from the experimental “vexing” of nature. Before the seventeenth century was finished, these two philosophical currents—the Rationalism of Descartes, the Empiricism of Bacon—had formed a working alliance to produce the intellectual enterprise we call science: observation subjected to the discipline of an impersonal method designed to have all the logical rigor of mathematics. As Bacon once put it, if one has the right method, then “the mind itself” will “be guided at every step, and the business be done as if by machinery.”

Since the days of Descartes and Bacon, science has grown robustly. Its methods have been debated, revised, and sharpened as they have thrust into new fields of study; the facts it has discovered mount by the day. But the angel who has fired the minds of great scientists with a vision of truth as bold as that of Descartes has rarely been given her due credit, and least of all by the computer scientists who seem convinced that they have at last invented Bacon’s mental “machinery” and that it can match the achievements of its human original without the benefit of unaccountable revelations.

The gap that has so often been left by philosophers between the origin of ideas and the subsequent mechanics of thought—between the angel’s word and the analytical processes that follow—simply reflects the difference between what the mind can and cannot understand about itself. We can self-consciously connect idea with idea, comparing and contrasting as we go, plotting out the course of a deductive sequence. But when we try to get behind the ideas to grasp the elusive interplay of experience, memory, insight that bubbles up into consciousness as a whole thought, we are apt to come away from the effort dizzy and confounded—as if we had tried to read a message that was traveling past us at blinding speed. Thinking up ideas is so spontaneous—one might almost say so instinctive—an action, that it defies capture and analysis. We cannot slow the mind down sufficiently to see the thing happening step by step. Picking our thoughts apart at this primitive, preconscious level is rather like one of those deliberately baffling exercises the Zen Buddhist masters use to dazzle the mind so that it may experience the inutterable void. When it comes to understanding where the mind gets its ideas, perhaps the best we can do is to say, with Descartes, “An angel told me.” But then is there any need to go farther than this? Mentality is [213] the gift of our human nature. We may use it, enjoy it, extend and elaborate it without being able to explain it.

In any case, the fact that the origin of ideas is radically elusive does not mean we are licensed to ignore the importance of ideas and begin with whatever we can explain as if that were the whole answer to the age-old epistemological question with which philosophers have struggled for centuries. Yet that, I believe, is what the computer scientists do when they seek to use the computer to explain cognition and intelligence.

The information processing model of thought, which has been the principal bone of contention in these pages, poses a certain striking paradox. On the basis of that model, we are told that thinking reduces to a matter of shuffling data through a few simple, formal procedures. Yet, when we seek to think in this “simple” way, it proves to be very demanding—as if we were forcing the mind to work against the grain. Take any commonplace routine of daily life—a minimal act of intelligence—and try to specify all its components in a logically tight sequence. Making breakfast, putting on one’s clothes, going shopping. As we have seen in an earlier chapter, these common-sense projects have defied the best efforts of cognitive scientists to program them. Or take a more extraordinary (meaning less routine) activity: choosing a vocation in life, writing a play, a novel, a poem, or—as in Descartes’s case—revolutionizing the foundations of thought. In each of these exercises, what we have first and foremost in mind is the whole, global project. We will to do it, and then—somehow, seemingly without thinking about it—we work through the matter step by step, improvising a countless series of subroutines that contribute to the project. Where something doesn’t work or goes wrong, we adjust within the terms of the project. We understand projects: whole activities. They may be misconceived activities, but they are nevertheless the ends that must come before the means. When we get round to the means, we remain perfectly aware that these are subordinate matters. The surest way any project in life goes wrong is when we fixate on those subordinate matters and lose sight of the whole. Then we become like the proverbial centipede who, when he was asked to explain how he coordinated all his parts, discovered he was paralyzed.

What I am suggesting is that, in little things and big, the mind works more by way of gestalts than by algorithmic procedures. This is because our life as a whole is made up of a hierarchy of projects, [214] some trivial and repetitive, some special and spectacular. The mind is naturally a spinner of projects, meaning it sets goals, choosing them from among all the things we might be doing with our lives. Pondering choices, making projects—these are the mind’s first order of activity. This is so obvious, so basic, that perhaps we are only prompted to reflect upon it when a different idea about thinking is presented, such as that thought is connecting data points in formal sequences.

Now, of course, the mind takes things in as it goes along. We do register data. But we register information in highly selective ways within the terms of a project which, among other things, tells us which facts to pay attention to, which to ignore, which deserve the highest and which the lowest value. Thinking means—most significantly—forming projects and reflecting upon the values that the project involves. Many projects are simply given by the physical conditions of life: finding food, clothing the body, sheltering from the elements, securing help in time of danger. But all of us at least hope we will have the opportunity in life to function at a higher level than this, that we will spend as much of our time as possible beyond the level of necessity, pursuing what John Maynard Keynes once called “the art of life itself.” Forming projects of this kind is the higher calling that comes with our human nature. Teaching the young how to honor and enjoy that gift is the whole meaning of education. That is surely not what we are doing when we load them down with information, or make them feel that collecting information is the main business of the mind. Nor do we teach them the art of life when we ask them “to think like a machine.” Machines do not invent projects; they are invented by human beings to pursue projects. What Seymour Papert calls “procedural thinking” surely has its role to play in life; but its role is at the level of working out the route for a trip by the close study of a road map. It is an activity that comes into play only after we have chosen to make a journey and have selected a destination.

The substance of education in the early years is the learning of what I have called master ideas, the moral and metaphysical paradigms which lie at the heart of every culture. To choose a classic model in the history of Western pedagogy: in the ancient world, the Homeric epics (read or recited) were the texts from which children learned the values of their civilization. They learned from adventure tales and heroic exemplars which they could imitate by endless play [215] in the roadways and fields. Every healthy culture puts its children through such a Homeric interlude when epic images, fairy tales, chansons de geste, Bible stories, fables, and legends summon the growing mind to high purpose. That interlude lays the foundations of thought. The “texts” need not be exclusively literary. They can be rituals—as in many tribal societies, where the myths are embodied in festive ceremonies. Or they may be works of art, like the stained glass windows and statuary of medieval churches. Master ideas may be taught in many modes. In our society, television and the movies are among the most powerful means of instruction, often to the point of eclipsing the lackluster materials presented in school. Unhappily, these major media are for the most part in the hands of commercial opportunists for whom nobility of purpose is usually nowhere in sight. At best, a few tawdry images of heroism and villainy may seep through to feed the hungry young mind. The rudiments of epic conduct can be found in a movie like Star Wars, but the imagery has been produced at a mediocre aesthetic and intellectual level, with more concern for “effects” than for character. At such hands, archetypes become stereotypes, and the great deeds done are skewed with an eye to merchandising as much of the work as possible.

Those cultures are blessed which can call upon Homer, or Biblical tales, or the Mahabharata to educate the young. Though the children’s grasp of such literature may be simple and playful, they are in touch with material of high seriousness. From the heroic examples before them, they learn that growing up means making projects with full responsibility for one’s choices. In short, taking charge of one’s life in the presence of a noble standard. Young minds reach out for this guidance; they exercise their powers of imagination in working up fantasies of great quests, great battles, great deeds of cunning, daring, passion, sacrifice. They craft their identities to the patterns of gods and goddesses, kings and queens, warriors, hunters, saints, ideal types of mother and father, friend and neighbor. And perhaps some among them aspire to become the bards and artists of the new generation who will carry forward the ideals of their culture. Education begins with giving the mind images—not data points or machines—to think with.

There is a problem, however, about teaching children their culture’s heroic values. Left in the hands of parents and teachers, but especially of the Church and the state where these institutions become dominant, ideals easily become forms of indoctrination, idols [216] of the tribe that can tyrannize the young mind. Heroism becomes chauvinism; high bright images become binding conventions. Master ideas are cheapened when they are placed in the keeping of small, timid minds that have grown away from their own childish exuberance.

In the hands of great artists like Homer, images never lose the redeeming complexity of real life. The heroes keep just enough of their human frailties to stay close to the flesh and blood. Achilles, the greatest warrior of them all, is nevertheless as vain and spoiled as a child, a tragically flawed figure. Odysseus can be more than a bit of a scoundrel, his “many devices” weakening toward simple piracy. It is the fullness of personality in these heroes that leaves their admirers balanced between adulation and uncertainty. The ideal has more than one side; the mind is nagged with the thought “yes, but . . . .” Where such truth to life is lost, the images become shallow; they can then be used to manipulate rather than inspire.

The Greeks, who raised their children on a diet of Homeric themes, also produced Socrates, the philosophical gadfly whose mission was to sting his city into thoughtfulness. “Know thyself,” Socrates insisted to his students. But where else can self-knowledge begin but with the questioning of ancestral values, prescribed identities?

Here is the other significant use of ideas: to produce critical contrast and so to spark the mind to life. Homer offers towering examples of courage. Ah, but what is true courage? Socrates asks, offering other, conflicting images, some of which defy Homer. At once, idea is pitted against idea, and the students must make up their own minds, judge, and choose. Societies rarely honor their Socratic spirits. Athens, irritated beyond tolerance by his insistent criticism, sent its greatest philosopher to his death. Still, no educational theory that lacks such a Socratic counterpoint can hope to free the young to think new thoughts, to become new people, and so to renew the culture.

In a time when our schools are filling up with advanced educational technology, it may seem almost perverse to go in search of educational ideals in ancient and primitive societies that had little else to teach with than word of mouth. But it may take that strong a contrast to stimulate a properly critical view of the computer’s role in educating the young. At least it reminds us that all societies, modern and traditional, have had to decide what to teach their children [217] before they could ask how to teach them. Content before means, the message before the medium.

The schooling of the young has always been a mixture of basic skills (whether literacy and ciphering or hunting and harvesting) and high ideals. Even if our society were to decide that computer literacy (let us hope in some well-considered sense of that much-confused term) should be included among the skills we teach in the schools, that would leave us with the ideals of life still to be taught. Most educators surely recognize that fact, treating the computer as primarily a means of instruction. What they may overlook is the way in which the computer brings with it a hidden curriculum that impinges upon the ideals they would teach. For this is indeed a powerful teaching tool, a smart machine that brings with it certain deep assumptions about the nature of mentality. Embodied in the machine there is an idea of what the mind is and how it works. The idea is there because scientists who purport to understand cognition and intelligence have put it there. No other teaching tool has ever brought intellectual luggage of so consequential a kind with it. A conception of mind—even if it is no better than a caricature—easily carries over into a prescription for character and value. When we grant anyone the power to teach us how to think, we may also be granting them the chance to teach us what to think, where to begin thinking, where to stop. At some level that underlies the texts and tests and lesson plans, education is an anatomy of the mind, its structure, its limits, its powers and proper application.

The subliminal lesson that is being taught whenever the computer is used (unless a careful effort is made to offset that effect) is the data processing model of the mind. This model, as we have seen, connects with a major transition in our economic life, one that brings us to a new stage of high tech industrialism, the so-called Information Age with its service-oriented economy. Behind that transition, powerful corporate interests are at work shaping a new social order. The government (especially the military) as a prime customer and user of information technology is allied to the corporations in building that order. Intertwined with both, a significant, well-financed segment of the technical and scientific community—the specialists in artificial intelligence and cognitive science—has lent the computer model of the mind the sanction of a deep metaphysical proposition. All these forces, aided by the persuasive skills of the advertisers, have fixed upon the computer as an educational instrument; the machine [218] brings that formidable constellation of social interests to the classrooms and the campus. The more room and status it is given there by educators, the greater the influence those interests will have.

Yet these are the interests that are making the most questionable use of the computer. At their hands, this promising technology—itself a manifestation of prodigious human imagination and inventiveness—is being degraded into a means of surveillance and control, of financial and managerial centralization, of manipulating public opinion, of making war. The presence of personal computers in millions of homes, especially when they are used as little more than trivial amusements, does not in any meaningful way offset the power the machine brings to those who use it for these purposes.

Introducing students to the computer at an early age, creating the impression that their little exercises in programming and game playing are somehow giving them control over a powerful technology, can be a treacherous deception. It is not teaching them to think in some scientifically sound way; it is persuading them to acquiesce. It is accustoming them to the presence of computers in every walk of life, and thus making them dependent on the machine’s supposed necessity and superiority. Under these circumstances, the best approach to computer literacy might be to stress the limitations and abuses of the machine, showing the students how little they need it to develop their autonomous powers of thought.

There may even be a sound ecological justification for such a curriculum. It can remind children of their connection with the lively world of nature that lies beyond the industrial environment of machines and cities. Sherry Turkle observes that, in times past, children learned their human nature in large measure by comparing themselves to the animals. Now, increasingly, “computers with their interactivity, their psychology, with whatever fragments of intelligence they have . . . bid to take this place.”[21] Yet it may mean far more at this juncture in history for children once again to find their kinship with the animals, every one of which, in its own inarticulate way, displays greater powers of mind than any computer can even mimic well. It would indeed be a loss if children failed to see in the nesting birds and the hunting cat an intelligence as well as a dignity that belongs to the line of evolutionary advance from which their own mind emerges. It is not the least educational virtue of the traditional lore and legends that so much of it belongs to the pre-industrial era, when the realities of the nonhuman world were more vividly present. [219] How much ecological sense does it make to rush to close off what remains of that experience for children by thrusting still another mechanical device upon them?

There is a crucial early interval in the growth of young minds when they need the nourishment of value-bearing images and ideas, the sort of Homeric themes that open the adventure of life for them. They can wait indefinitely to learn as much as most schools will ever teach them about computers. The skills of unquestionable value which the technology makes available—word processing, rapid computation, data base searching—can certainly be saved for the later high school or even college years. But once young minds have missed the fairy tales, the epic stories, the myths and legends, it is difficult to go back and recapture them with that fertile sense of naive wonder that belongs to childhood. Similarly, if the taste for Socratic inquiry is not enlivened somewhere in the adolescent years, the growing mind may form habits of acquiescence that make it difficult to get out from under the dead hand of parental dominance and social authority.

As things now stand, there is a strong consensus abroad that our schools are doing a poor to mediocre job of laying these intellectual foundations. The reasons for the malaise of the schools are many. Teachers are often overworked and under-appreciated; many students come to them bored, rebellious, distracted, or demoralized. Some of the children in our inner cities are too disadvantaged and harassed by necessity to summon up an educative sense of wonder; others may have been turned prematurely cynical by the corrupted values of commercialism and cheap celebrity; many, even the fortunate and affluent, may be haunted by the pervasive fear of thermonuclear extinction that blights all our lives. The schools share and reflect all these troubles; perhaps, at times, the troubles overwhelm the best efforts of the best teachers, driving them back to a narrow focus on basic skills, job training, and competitive grading. But it is at least worth something to know where the big problems lie and to know there is no quick technological fix for them. Computers, even when we reach the point of having one on every desk for every student, will provide no cure for ills that are social and political in nature.

It may seem that the position I take here about the educational limits of the computer amounts, in the end, to a humanist’s conservative appeal on behalf of the arts and letters. It is that. Scientists and [220] technicians, whose professional interests tend to make them computer enthusiasts, may therefore see little room for their values in the sort of pedagogy I recommend. But as the story of Descartes’s angel should remind us, science and technology at their highest creative level are no less connected with ideas, with imagination, with vision. They draw upon all the same resources of the mind, both the Homeric and the Socratic, as the arts and letters. We do not go far wrong from the viewpoint of any discipline by the general cultivation of the mind. The master ideas belong to every field of thought. It would surely be a sad mistake to intrude some small number of pedestrian computer skills upon the education of the young in ways that blocked out the inventive powers that created this astonishing technology in the first place. And what do we gain from any point of view by convincing children that their minds are inferior to a machine that dumbly mimics a mere fraction of their native talents?

In the education of the young, humanists and scientists share a common cause in resisting any theory that cheapens thought. That is what the data processing model does by closing itself to that quality of the mind which so many philosophers, prophets, and artists have dared to regard as godlike: its inexhaustible potentiality. In their search for “effective procedures” that can be universally applied to all aspects of culture, experts in artificial intelligence and cognitive science are forced to insist that there is nothing more to thought than a conventional mechanistic analysis will discover: data points shuffled through a small repertory of algorithms. In contrast, my argument in these pages has been that the mind thinks, not with data, but with ideas whose creation and elaboration cannot be reduced to a set of predictable rules. When we usher children into the realm of ideas, we bring them the gift of intellectual adventure. They begin to sense the dimensions of thought and the possibilities of original insight. Whether they take the form of words, images, numbers, gestures, ideas unfold. They reveal rooms within rooms within rooms, a constant opening out into larger, unexpected worlds of speculation.

The art of thinking is grounded in the mind’s astonishing capacity to create beyond what it intends, beyond what it can foresee. We cannot begin to shape that capacity toward humane ends and to guard it from demonic misuse until we have first experienced the true size of the mind.

[1] For a survey of the early history of the computer industry, see Joel Shurkin, Engines of the Mind (New York: Norton, 1984). Shurkin details the first use of UNIVAC at CBS in 1952 (pp. 250-253).

[2] Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society (Boston: Houghton Mifflin, 1950). A much-revised paperback edition appeared from Doubleday Anchor Books in 1954.

[3] Newell and Simon, quoted in Joseph Weizenbaum, Computer Power and Human Reason (San Francisco: W. H. Freeman, 1976), p. 169.

[4] Ibid., p. 138.

[5] Simon, quoted in John Pfeiffer, The Thinking Machine (New York: Lippincott, 1962), p. 174.

[6] Warren Weaver, “The Mathematics of Communication,” Scientific American, July 1949, p. 12.

[7] Fritz Machlup, “Semantic Quirks in Studies of Information,” in The Study of Information, ed. Fritz Machlup and Una Mansfield (New York: Wiley, 1983), pp. 653, 658. Machlup’s prologue and epilogue to this anthology are incisive surveys of the many strange meanings the word information has acquired since Shannon’s work was published.

[8] Weaver, “The Mathematics of Communication,” p. 12.

[9] Pfeiffer, The Thinking Machine, p. 186. The book is based on Pfeiffer’s television documentary.

[10] Steven Rose, The Chemistry of Life (Baltimore: Penguin Books, 1970), pp. 17, 162.

[11] For Barbara McClintock’s work, see Evelyn Fox Keller, A Feeling for the Organism (New York: W. H. Freeman, 1983).

[12] Machlup and Mansfield, The Study of Information, p. 644.

[13] Vladimir Nabokov, “The Art of Literature and Common Sense,” Lectures on Literature (New York: Harcourt Brace Jovanovich, 1980).

[14] Bertrand Russell, A History of Western Philosophy (New York: Clarion Books, 1945), p. 37.

[15] Boden, Artificial Intelligence and Natural Man, pp. 15, 16-17.

[16] Machlup and Mansfield, The Study of Information, p. 671.

[17] Robert Jastrow, The Enchanted Loom: Mind in the Universe (New York: Simon & Schuster, 1984), pp. 166-167.

[18] Boden, Artificial Intelligence and Natural Man, pp. 6-7.

[19] Science Digest, June 1984, p. 94.

[20] Jacques Maritain offers a lengthy analysis of Descartes’s fateful dream in The Dream of Descartes (New York: Philosophical Library, 1944).

[21] Turkle, The Second Self, p. 313.