by Neil Postman

Theuth, my paragon of inventors, the discoverer of an art is not the best judge of the good or harm which will accrue to those who practice it. So it is in this; you, who are the father of writing, have out of fondness for your off-spring attributed to it quite the opposite of its real function. Those who acquire it will cease to exercise their memory and become forgetful; they will rely on writing to bring things to their remembrance by external signs instead of by their own internal resources. What you have discovered is a receipt for recollection, not for memory. And as for wisdom, your pupils will have the reputation for it without the reality; they will receive a quantity of information without proper instruction, and in consequence be thought very knowledgeable when they are for the most part quite ignorant. And because they are filled with the conceit of wisdom instead of real wisdom they will be a burden to society.

Phaedrus by Plato

These are the kinds of questions that technological change brings to mind when one grasps, as Thamus did, that technological competition ignites total war, which means it is not possible to contain the effects of a new technology to a limited sphere of human activity. If this metaphor puts the matter too brutally, we may try a gentler, kinder one: Technological change is neither additive nor subtractive. It is ecological.

In addition to this, and more important, it is not always clear, at least in the early stages of a technology’s intrusion into a culture, who will gain most by it and who will lose most. This is because the changes wrought by technology are subtle if not downright mysterious, one might even say wildly unpredictable. Among the most unpredictable are those that might be labeled ideological. This is the sort of change Thamus had in mind when he warned that writers will come to rely on external signs instead of their own internal resources, and that they will receive quantities of information without proper instruction. He meant that new technologies change what we mean by “knowing” and “truth”; they alter those deeply embedded habits of thought which give to a culture its sense of what the world is like—a sense of what is the natural order of things, of what is reasonable, of what is necessary, of what is inevitable, of what is real. Since such changes are expressed in changed meanings of old words, I will hold off until later discussing the massive ideological transformation now occurring in the United States. Here, I should like to give only one example of how technology creates new conceptions of what is real and, in the process, undermines older conceptions. I refer to the seemingly harmless practice of assigning marks or grades to the answers students give on examinations. This procedure seems so natural to most of us that we are hardly aware of its significance. We may even find it difficult to imagine that the number or letter is a tool or, if you will, a technology; still less that, when we use such a technology to judge someone’s behavior, we have done something peculiar. In point of fact, the first instance of grading students’ papers occurred at Cambridge University in 1792 at the suggestion of a tutor named William Farish. No one knows much about William Farish; not more than a handful have ever heard of him. 
And yet his idea that a quantitative value should be assigned to human thoughts was a major step toward constructing a mathematical concept of reality. If a number can be given to the quality of a thought, then a number can be given to the qualities of mercy, love, hate, beauty, creativity, intelligence, even sanity itself. When Galileo said that the language of nature is written in mathematics, he did not mean to include human feeling or accomplishment or insight. But most of us are now inclined to make these inclusions. Our psychologists, sociologists, and educators find it quite impossible to do their work without numbers. They believe that without numbers they cannot acquire or express authentic knowledge.

I shall not argue here that this is a stupid or dangerous idea, only that it is peculiar. What is even more peculiar is that so many of us do not find the idea peculiar. To say that someone should be doing better work because he has an IQ of 134, or that someone is a 7.2 on a sensitivity scale, or that this man’s essay on the rise of capitalism is an A− and that man’s is a C+ would have sounded like gibberish to Galileo or Shakespeare or Thomas Jefferson. If it makes sense to us, that is because our minds have been conditioned by the technology of numbers so that we see the world differently than they did. Our understanding of what is real is different. Which is another way of saying that embedded in every tool is an ideological bias, a predisposition to construct the world as one thing rather than another, to value one thing over another, to amplify one sense or skill or attitude more loudly than another.

This is what Marshall McLuhan meant by his famous aphorism “The medium is the message.” This is what Marx meant when he said, “Technology discloses man’s mode of dealing with nature” and creates the “conditions of intercourse” by which we relate to each other. It is what Wittgenstein meant when, in referring to our most fundamental technology, he said that language is not merely a vehicle of thought but also the driver. And it is what Thamus wished the inventor Theuth to see. This is, in short, an ancient and persistent piece of wisdom, perhaps most simply expressed in the old adage that, to a man with a hammer, everything looks like a nail. Without being too literal, we may extend the truism: To a man with a pencil, everything looks like a list. To a man with a camera, everything looks like an image. To a man with a computer, everything looks like data. And to a man with a grade sheet, everything looks like a number.

New technologies alter the structure of our interests: the things we think about. They alter the character of our symbols: the things we think with. And they alter the nature of community: the arena in which thoughts develop.

By connecting technological conditions to symbolic life and psychic habits, Marx was doing nothing unusual. Before him, scholars found it useful to invent taxonomies of culture based on the technological character of an age. And they do it still, for the practice is something of a persistent scholarly industry. We think at once of the best-known classification: the Stone Age, the Bronze Age, the Iron Age, the Steel Age. We speak easily of the Industrial Revolution, a term popularized by Arnold Toynbee, and, more recently, of the Post-Industrial Revolution, so named by Daniel Bell. Oswald Spengler wrote of the Age of Machine Technics, and C. S. Peirce called the nineteenth century the Railway Age. Lewis Mumford, looking at matters from a longer perspective, gave us the Eotechnic, the Paleotechnic, and the Neotechnic Ages. With equally telescopic perspective, José Ortega y Gasset wrote of three stages in the development of technology: the age of technology of chance, the age of technology of the artisan, the age of technology of the technician. Walter Ong has written about Oral cultures, Chirographic cultures, Typographic cultures, and Electronic cultures. McLuhan himself introduced the phrase “the Age of Gutenberg” (which, he believed, is now replaced by the Age of Electronic Communication).

I find it necessary, for the purpose of clarifying our present situation and indicating what dangers lie ahead, to create still another taxonomy. Cultures may be classified into three types: tool-using cultures, technocracies, and technopolies.

I mean that the world we live in is very nearly incomprehensible to most of us. There is almost no fact, whether actual or imagined, that will surprise us for very long, since we have no comprehensive and consistent picture of the world that would make the fact appear as an unacceptable contradiction. We believe because there is no reason not to believe.

Social institutions sometimes do their work simply by denying people access to information, but principally by directing how much weight and, therefore, value one must give to information.

One way of defining Technopoly, then, is to say it is what happens to society when the defenses against information glut have broken down. It is what happens when institutional life becomes inadequate to cope with too much information. It is what happens when a culture, overcome by information generated by technology, tries to employ technology itself as a means of providing clear direction and humane purpose.

I will start by making reference to a famous correspondence between Sigmund Freud and Albert Einstein. Freud once sent a copy of one of his books to Einstein, asking for his evaluation of it. Einstein replied that he thought the book exemplary but was not qualified to judge its scientific merit. To which Freud replied somewhat testily that, if Einstein could say nothing of its scientific merit, he, Freud, could not imagine how the book could be judged exemplary; it was science or it was nothing. Well, of course, Freud was wrong. His work is exemplary—indeed, monumental—but scarcely anyone believes today that Freud was doing science, any more than educated people believe that Marx was doing science, or Max Weber or Lewis Mumford or Bruno Bettelheim or Carl Jung or Margaret Mead or Arnold Toynbee. What these people were doing—and what Stanley Milgram was doing—is documenting the behavior and feelings of people as they confront problems posed by their culture. Their work is a form of storytelling. Science itself is, of course, a form of storytelling too, but its assumptions and procedures are so different from those of social research that it is extremely misleading to give the same name to each. In fact, the stories of social researchers are much closer in structure and purpose to what is called imaginative literature; that is to say, both a social researcher and a novelist give unique interpretations to a set of human events and support their interpretations with examples in various forms. Their interpretations cannot be proved or disproved but will draw their appeal from the power of their language, the depth of their explanations, the relevance of their examples, and the credibility of their themes. And all of this has, in both cases, an identifiable moral purpose. The words “true” and “false” do not apply here in the sense that they are used in mathematics or science. For there is nothing universally and irrevocably true or false about these interpretations. There are no critical tests to confirm or falsify them. There are no natural laws from which they are derived. They are bound by time, by situation, and above all by the cultural prejudices of the researcher or writer.

A novelist—for example, D. H. Lawrence—tells a story about the sexual life of a woman—Lady Chatterley—and from it we may learn things about the secrets of some people, and wonder if Lady Chatterley’s secrets are not more common than we had thought. Lawrence did not claim to be a scientist, but he looked carefully and deeply at the people he knew and concluded that there is more hypocrisy in heaven and earth than is dreamt of in some of our philosophies. Alfred Kinsey was also interested in the sexual lives of women, and so he and his assistants interviewed thousands of them in an effort to find out what they believed their sexual conduct was like. Each woman told her story, although it was a story carefully structured by Kinsey’s questions. Some of them told everything they were permitted to tell, some only a little, and some probably lied. But when all their tales were put together, a collective story emerged about a certain time and place. It was a story more abstract than D. H. Lawrence’s, largely told in the language of statistics and, of course, without much psychological insight. But it was a story nonetheless.

There are two reasons why the case of management is instructive. First, as suggested by Galbraith, management, like the zero, statistics, IQ measurement, grading papers, or polling, functions as does any technology. It is not made up of mechanical parts, of course. It is made up of procedures and rules designed to standardize behavior. We may call any such system of procedures and rules a technique; and there is nothing to fear from techniques, unless, like so much of our machinery, they become autonomous. There’s the rub. In a Technopoly, we tend to believe that only through the autonomy of techniques (and machinery) can we achieve our goals. This idea is all the more dangerous because no one can reasonably object to the rational use of techniques to achieve human purposes. Indeed, I am not disputing that the technique known as management may be the best way for modern business to conduct its affairs. We are technical creatures, and through our predilection for and our ability to create techniques we achieve high levels of clarity and efficiency. As I said earlier, language itself is a kind of technique—an invisible technology—and through it we achieve more than clarity and efficiency. We achieve humanity—or inhumanity. The question with language, as with any other technique or machine, is and always has been, who is to be the master? Will we control it, or will it control us? The argument, in short, is not with technique. The argument is with the triumph of technique, with techniques that become sanctified and rule out the possibilities of other ones. Technique, like any other technology, tends to function independently of the system it serves. It becomes autonomous, in the manner of a robot that no longer obeys its master.

Students will always be “smarter” when answering a multiple-choice test than when answering a “fill-in” test, even when the subject matter is the same. A question, even of the simplest kind, is not and can never be unbiased. […] My purpose is to say that the structure of any question is as devoid of neutrality as its content. The form of a question may ease our way or pose obstacles.

Anyone who practices the art of cultural criticism must endure being asked, What is the solution to the problems you describe? Critics almost never appreciate this question, since, in most cases, they are entirely satisfied with themselves for having posed the problems and, in any event, are rarely skilled in formulating practical suggestions about anything. This is why they became cultural critics.