22.

Digital and Analog Texts

John Lavagnino

We have all been asked in the past few decades to consider the advances offered in our lives by digital sound and image recording. Some must have felt that listening, seeing, and reading did not always seem so different in the digital world as compared with the analog one, and that a change going all the way down to the most fundamental aspects of representation ought to be more noticeable. Is this distinction between two modes of representation something fundamental that matters for how we read and interpret? Much of the talk of digital and analog comes from work on cybernetics and cognitive science; are these fundamental categories of thought that shape our experiences?

My approach to these questions is through the history of their use in twentieth-century engineering, cybernetics, and cognitive science. It is a history of metaphorical transfers, in which conceptual tools from one field are reused in others, and along with the spur to thought comes the potential for error. As I will show, the origin of the digital/analog distinction is in the practicalities of building machinery rather than in the fundamental nature of things. And the significance of the distinction also depends on the context. In the realm of engineering, the concepts of digital and analog are exact descriptions of how machinery is designed to operate; in studies of the body and brain, some functions may fit one category or the other, but the whole picture does not contain the same kind of sharp definitions and boundaries; and in the realm of cultural productions, some common practices may be usefully related to these concepts, but many other practices do not abide by those boundaries.

The concepts of digital and analog are helpful to readers and interpreters of texts insofar as they can help us describe some practices in the use of texts more precisely. But these concepts do not control the way texts work, nor do they exhaust the range of things texts can do. Systems are digital or analog because they were designed to act that way; the concepts are less pertinent for application to a system without overt design, explicit boundaries, or rules for interpretation.

Digital and Analog Systems

The common habit is to refer to data as being digital or analog; but it is only as a property of whole systems that the terms are meaningful. A body of data in either form means nothing outside the system that is engineered to perform operations on it. References to digital and analog data suggest that this is an essential property of such data, when it is instead a way to describe how the systems using such data are built to work.

John Haugeland's definitions of digital and analog devices remain the best, and are those I will follow. His principal point about digital devices is that they are based on data defined so that it can be copied exactly: such a system can read, write, and transfer data with perfect preservation of exactly the same content. That is achieved by defining a closed set of distinct tokens that may appear, and requiring all data to be a sequence of such tokens; in most present-day computer systems these basic tokens are bits that may be either 0 or 1 and nothing else, but a system may be based on many more values. (Some early computer systems were intrinsically decimal, for example; and a three-valued notation is more efficient for storage than binary [Hayes 2001].) A copy of digital data is indistinguishable from the original, and within digital computer systems such copying happens with great frequency: the reliability of copying in such systems is not a potential property but one that is exercised constantly. Analog systems involve continuous ranges, not discrete ones: there could always be more gradations between two selected values on a scale. Analog data cannot be copied perfectly, so the criterion in building an analog system is not making perfect copying happen every time, but reducing the accumulation of error and minor changes. The nth-generation-copy problem is a problem of an analog system; a digital system must be designed so that it doesn't have that problem in the slightest degree.
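Haugeland's contrast can be made concrete in a small simulation; this is a sketch of my own devising, not his, and the noise level and number of generations are arbitrary. Each analog copy adds a little noise that no later copy can remove, while each digital copy passes through the same noisy channel but is pushed back to the nearest token, so the hundredth generation is identical to the first:

```python
import random

def analog_copy(signal, noise=0.01):
    """An analog copy adds a little noise; errors accumulate with each generation."""
    return [s + random.gauss(0, noise) for s in signal]

def digital_copy(bits, noise=0.01):
    """A digital copy passes through the same noisy channel, but every
    value is pushed back to the nearest token (0 or 1), so the copy is
    indistinguishable from the original."""
    return [1 if b + random.gauss(0, noise) > 0.5 else 0 for b in bits]

original = [0, 1, 1, 0, 1]
analog = [float(b) for b in original]
digital = list(original)
for generation in range(100):
    analog = analog_copy(analog)
    digital = digital_copy(digital)

print(digital == original)  # True: the hundredth copy is exact
print(analog)               # the values have drifted away from 0.0 and 1.0
```

The point is not the particular numbers but the asymmetry: the analog errors accumulate across generations, while the digital system re-creates its tokens perfectly at every step.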

Analog and digital systems embody different sorts of rules for interpretation of data, but they are both built on rules, and the data they process is defined in terms of those rules. Removed from the systems and from the rules for interpretation, the data may also change from analog to digital or vice versa. A famous example due to Nelson Goodman is the clock dial: this looks like an analog system, because you can read off the time to any degree of precision in measuring the position of the hands. In actual practice, we often treat this as a digital device, and only read it to full minutes. The clock might have a digital or an analog mechanism, which affects whether the information is really there to be measured at arbitrary precision. But in any case, this stage of interpretation of something outside the system is an occasion when information often changes from digital to analog or vice versa.
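Reading the dial to full minutes is an act of quantization, and can be sketched in a couple of lines (taking the continuous hand angle as input is my assumption; six degrees per minute follows from the geometry of the dial):

```python
def read_clock_digitally(minute_hand_angle):
    """Collapse a continuous hand position (in degrees) into one of
    sixty discrete minute tokens: an analog display read digitally."""
    return round(minute_hand_angle / 6) % 60  # the dial has 6 degrees per minute

print(read_clock_digitally(91.7))  # -> 15; precision beyond the minute is discarded
```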

A system of either type cannot regard the data as stable and unproblematic; both digital and analog systems must be engineered to preserve the data. We have some familiarity with that effort from daily life: practical experience with operations such as photocopying gives us the intuition that an analog copy may be better or worse, and that some care can improve the result. In analog systems, the crucial need is to avoid introducing noise as data moves about within the system. In computer systems the provisions for keeping data stable are internal and rarely visible to users, but they are there all the same. In discussions of computing it's common to read that switches are either on or off and therefore digital bits naturally have two values, 0 or 1; this analogy seems to go back to an essay by Alan Turing, but Turing's point that it is a simplified picture is usually omitted (1950: 439). The simplification is flatly wrong about how present-day computers actually work: they use high and low voltages, not open or closed circuits, and their circuitry at many points works to push signals toward the high or low extremes, away from the intermediate range where one value could be mistaken for another. (Hillis 1998 is one of the rare popular accounts of computer hardware that touches on these provisions.) Peter Gutmann's account of the problems of erasing hard disk drives helps to illustrate the work behind the scenes that produces those perfect bits, a complex but ultimately very reliable process — and also the possibility of studying the history of what a drive has stored if you analyze it using analog equipment:

… truly deleting data from magnetic media is very difficult. The problem lies in the fact that when data is written to the medium, the write head sets the polarity of most, but not all, of the magnetic domains. This is partially due to the inability of the writing device to write in exactly the same location each time, and partially due to the variations in media sensitivity and field strength over time and among devices…

In conventional terms, when a one is written to disk the media records a one, and when a zero is written the media records a zero. However the actual effect is closer to obtaining a 0.95 when a zero is overwritten with a one, and a 1.05 when a one is overwritten with a one. Normal disk circuitry is set up so that both these values are read as ones, but using specialised circuitry it is possible to work out what previous "layers" contained. The recovery of at least one or two layers of overwritten data isn't too hard to perform by reading the signal from the analog head electronics with a high-quality digital sampling oscilloscope, downloading the sampled waveform to a PC, and analysing it in software to recover the previously recorded signal…

Deviations in the position of the drive head from the original track may leave significant portions of the previous data along the track edge relatively untouched. Regions where the old and new data coincide create continuous magnetization between the two. However, if the new transition is out of phase with the previous one, a few microns of erase band with no definite magnetization are created at the juncture of the old and new tracks…

When all the above factors are combined it turns out that each track contains an image of everything ever written to it, but that the contribution from each "layer" gets progressively smaller the further back it was made. Intelligence organisations have a lot of expertise in recovering these palimpsestuous images.
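The arithmetic of Gutmann's example can be put into a toy model. The 0.95 and 1.05 read-back values are his; the zero-side values, the thresholds, and the assumption of neat symmetry are mine, and real drives are far messier:

```python
def recover_layers(reads):
    """Toy model of Gutmann's example: an overwritten zero leaves the
    new one reading ~0.95, an overwritten one leaves it reading ~1.05.
    Normal circuitry sees only the current bit; a finer threshold
    within each band exposes the previous 'layer'. (The values on the
    zero side are assumed symmetric for the sketch.)"""
    current, previous = [], []
    for r in reads:
        c = 1 if r > 0.5 else 0       # what normal disk circuitry reports
        current.append(c)
        # the residual above or below the ideal level betrays
        # what was stored before the last write
        previous.append(1 if r > (1.0 if c else 0.0) else 0)
    return current, previous

print(recover_layers([1.05, 0.95, 0.05, -0.05]))
# -> ([1, 1, 0, 0], [1, 0, 1, 0])
```

Each further layer back would require distinguishing ever smaller residuals, which is why the contribution from each "layer" gets progressively smaller the further back it was made.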

The difference between digital and analog systems, then, is best understood as an engineering difference in how we choose to build systems, and not as resulting from an intrinsic property of data that somehow precedes those systems. Outside such systems, the status of data is less definite, because there is no longer the same specification about what is and is not significant. John Haugeland remarked: "digital, like accurate, economical, or heavy-duty, is a mundane engineering notion, root and branch. It only makes sense as a practical means to cope with the vagaries and vicissitudes, the noise and drift, of earthly existence" (1981: 217). It has become common in popular usage to talk about the analog as the category that covers everything that isn't digital; but in fact most things are neither. The images, sounds, smells, and so on that we live among have mostly not been reduced to information yet. Both analog and digital systems require some stage of choice and reduction to turn phenomena from the world into data that can be processed. The most common error about digital systems is to think that the data is effortlessly stable and unchanging; the most common error about analog systems is to think they're natural and simple, not really encoded.

Minds and Bodies

Both approaches were embodied in machinery long before the rise of digital computers: the abacus is digital, the slide rule is analog. As technology, then, the digital is not something that only arose in the twentieth century. But it was in the later twentieth century that digital computers came to provide an almost inescapable model for thinking about the mind. (On the longer history of such technological analogies, see Wiener 1948; Marshall 1977; Bolter 1984.) How the digital computer came to eclipse the analog computer in such thinking, even though careful study suggested that both kinds of processing were probably among those used in the brain, is one key element of that story, and one that illuminates some of the cultural associations of our current ideas of the digital and analog.

Although the use of both approaches and an awareness of their differences go back a long way, the standard distinction between the two, their names and their pairing, all came in the mid-twentieth century, when substantial advances were made in machinery for both kinds of systems, and there was a serious choice between building one kind of system or the other for many communications and computational applications. (The best available account of that moment is in Mindell 2002.) That was also the moment when the new fields of cybernetics and information theory arose, and seemed to offer remarkable insights into human bodies and minds. Cybernetics in particular proposed understandings of systems that responded to the environment, without strong distinctions between machinery and living beings. In that context, it was a natural question to ask whether human minds and bodies were digital or analog. The thinking of that era is worth revisiting because its perspective is so different from today's assumptions — and so suggests other paths of thought; this earlier perspective has the merit of pointing out complexities that are lost when we focus primarily on the digital.

Cybernetics, today, is a field that lingers on in small pockets of activity but does not matter. But in its day, roughly from 1943 to 1970, it was a highly interdisciplinary field, which encompassed topics now split up among medicine, psychology, engineering, mathematics, and computing, and one of its most persuasive claims to importance was its ability to bring so much together productively. Today its achievements (quite substantial ones) have been merged back into the subject matter of those separate disciplines, and in a broader cultural context its nature and ideas have been mostly forgotten — so that (to give one example) a recent editor of Roland Barthes's work assumes that the term "homeostat" was invented by Barthes, though in Barthes's writings of the 1950s its source in cybernetics is clearly indicated (Barthes 2002: 83).

Although cybernetics included much that is now classified under the heading of cognitive science, it did not give cognition primacy as the object of study. Definitions of cybernetics usually focused on the idea of "control systems," which could respond to the environment and achieve results beyond what it was normally thought machines could do; as Norbert Wiener wrote in his influential book Cybernetics; or, Control and Communication in the Animal and the Machine in 1948:

The machines of which we are now speaking are not the dream of the sensationalist, nor the hope of some future time. They already exist as thermostats, automatic gyrocompass ship-steering systems, self-propelled missiles — especially such as seek their target — anti-aircraft fire-control systems, automatically controlled oil-cracking stills, ultra-rapid computing machines, and the like.

(1948: 55)

The workings of the body were as important a subject as anything cognitive. The interest more recently in "cybernetic organisms" reflects the perspective of cybernetics, which did not distinguish sharply between biological and mechanical systems, and also did not emphasize thought. The excitement associated with the field goes back to earlier discoveries that showed how (for example) the body maintained its temperature by means that were natural but also not cognitive: we don't think about how to keep our body temperature stable, but it is also not a supernatural effect. Previous doubts that the body could be described as a machine had been based on too limited an understanding of the possibilities of machinery; cybernetics showed that a "control system" could achieve a great deal without doing anything that looked like thinking. Wiener's book, which at times has whole pages of mathematical equations, also includes a cultural history of the idea of building "a working simulacrum of a living organism" (1948: 51), leading up to the way cybernetics proposed to do it.

To sum up: the many automata of the present age are coupled to the outside world both for the reception of impressions and for the performance of actions. They contain sense-organs, effectors, and the equivalent of a nervous system to integrate the transfer of information from the one to the other. They lend themselves very well to description in physiological terms. It is scarcely a miracle that they can be subsumed under one theory with the mechanisms of psychology.

(1948: 55)

The word "cybernetic" today is most often just a synonym for "computational"; but in the early thinking of the field computation was only one subject, and being effectively "coupled to the outside world" seemed more important. Peter Galison has traced the role played by Wiener's war work in the development of his thinking — in particular, he had actually been involved in building "anti-aircraft fire-control systems." They could be thought of "in physiological terms" by analogy with reflexive actions of the body, not with thought or calculation. Galison mounts a critique of the dehumanizing tendency of cybernetic thought, but that doesn't seem to be the only way people took it at the time. Theodore Sturgeon's science-fiction novel More than Human (1953) is very clearly influenced by cybernetic thinking (one scene involves "a couple of war-surplus servo-mechanisms rigged to simulate radar-gun directors" that happen to appear as part of a carnival's shooting-gallery attraction [1953: 169]), and it follows cybernetics in assigning conscious thought a role that does not direct everything else. The novel describes the genesis of a being made up of several people with different abilities: "a complex organism which is composed of Baby, a computer; Bonnie and Beanie, teleports; Janie, telekineticist; and myself, telepath and central control" (1953: 142). Computing was quite distinct from "central control."

Cybernetics did include a chapter entitled "Computing Machines and the Nervous System," specifically concerned with how you might build a computer to do what the brain does; though it spends as much time on memory and reflex action as on calculation. Wiener talks about digital and analog representation, but the stress is on the engineering question of what would work best in building a computer, and he argues for binary digital computers for reasons broadly similar to those behind their use today. The neuron, he observes, seems to be broadly digital in its action, since it either fires or does not fire; but he leaves mostly open the question of just what sort of representation the brain uses, and is instead trying to consider very generally how aspects of the brain could be realized in machinery. The importance of the digital-or-analog question here is practical: if you are going to do a lot of computation and data storage, digital has more advantages; but analog representation for other purposes is not ruled out. (In contexts where not very much information needed to be preserved, and detection and reaction were more important, the issue might be quite marginal. W. Ross Ashby's Introduction to Cybernetics has only the barest mention of digital and analog representation.)

The writings of John von Neumann and Gregory Bateson are the two principal sources for the idea that the choice of digital or analog representation is not merely an engineering choice for computer builders, but had real consequences for thought; and that it was a choice that is built into the design of the brain. Von Neumann's views appeared in a posthumously published book, The Computer and the Brain, which in its published form is closer to an outline than to a fully elaborated text. (The review by von Neumann's associate A. H. Taub is a very helpful supplement.) The title characterizes the contents very exactly: Part I describes how computers (both digital and analog) work; Part II describes how the brain works, so far as that was known at the time, and then tries to conclude whether it's a digital or an analog system. Like Wiener he sees the neuron as mainly digital because it either fires or it doesn't; but he too sees that the question is actually more complicated, because that activity is influenced by many factors in the brain which aren't on/off signals. This leads to a conclusion that is often cited as the book's main point: that the brain uses a mixture of digital and analog representation (Taub 1960: 68–9). But a substantial part of the book follows that passage, and here von Neumann develops a quite different conclusion: that the brain works in a way fundamentally different from both analog and digital computers — it is statistical. Nothing like the precision of mechanical computers in storage or computation is available given the brain's hardware, yet it achieves a high degree of reliability. And von Neumann concludes that the logic used within the brain must be different from that of mathematics, though there might be some connection: a striking conclusion from a mathematician strongly associated with the development of digital computers. But in the end, von Neumann's account of computers is present to highlight what's different about the brain, not to serve as the complete foundation for understanding it.

Gregory Bateson, like von Neumann, was active in cybernetics circles from the 1940s onward; but unlike Wiener and von Neumann he did not work in the physical sciences, but instead in anthropology and psychology. His Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology (1972) collects work that in some cases dates back to the 1940s. It is perhaps best known today as the apparent source of the definition of information as "a difference which makes a difference" (1972: 315, 459). The breadth of concerns surpasses even that of cybernetics: few other books combine analyses of schizophrenia with accounts of communicating with dolphins. But looking for fundamental principles that operated very broadly was a habit of Bateson's:

I picked up a vague mystical feeling that we must look for the same sort of processes in all fields of natural phenomena — that we might expect to find the same sort of laws at work in the structure of a crystal as in the structure of society, or that the segmentation of an earthworm might really be comparable to the process by which basalt pillars are formed.

(1972: 74)

Bateson's criticism of much work in psychology in his day was that it lacked an adequate conceptual basis; so he sought to find fundamental principles and mechanisms on which to build in his work. The difference between digital and analog representation seemed to him one such fundamental distinction: he argued that the two modes entailed different communicative possibilities with different psychological consequences. Analog communication, in his view, did not support the kinds of logical operations that digital communication facilitated, and in particular negation and yes/no choices were not possible; that led to a connection with Freudian concepts that also did not work in accordance with traditional logic (as in "The Antithetical Meaning of Primal Words," for example), and to new theories of his own (of the "double bind," for example).

Bateson's work mirrors one feature of cybernetics in its advantages and disadvantages: he seeks to talk about whole systems rather than about artificially isolated parts; but it is then difficult to find places to start in analysis. The analog/digital pairing is important in his work because of the resulting ideas about different cognitive and communicative possibilities. The distinction had some connection with the difference between people and animals: nonverbal communication was necessarily analog, in Bateson's view, and so could not have the digital features he saw in verbal communication. But, like von Neumann, he concluded that digital and analog representation were both used in thinking — though von Neumann's account was based on a consideration of brain physiology, and Bateson's on the nature of different forms of communication. In both cases, the answers to the question track rather closely the answers current in the 1940s and 1950s to questions about how you'd engineer practical control systems using the available technology. The accounts by Wiener and von Neumann are only clearer than Bateson's about how thought is being seen in the light of current technology, because of their deeper understanding of that technology and fuller awareness of how open the choice still was.

In all of these accounts, then, the mind is not just a digital computer, and its embodiment is a major concern in developing a scientific account. Of course, subsequent technological history saw digital computing grow far faster than analog computing, and begin to take over applications that had originally been handled using analog technology. We also find, more and more, an equation of the analog with the physical and of the digital with the cognitive, an alignment that was not there in the 1940s and 1950s; popular ideas about the digital/analog pairing in general are narrower in their field of application. While work continues on all the topics that cybernetics addressed, attempts to bring them all together as cybernetics did are now much rarer, and as a result there is (for example) a good deal of work in cognitive science and artificial intelligence that is about ways to make computers think, without reference to how the body works, and with the assumption that only digital techniques are necessary. This approach results, of course, from the remarkable development in the speed and capacity of digital computers: this line of work is attractive as a research program, even if it remains clear that the brain doesn't actually work that way.

One stage along that path may be seen in work from the 1970s by the social anthropologist Edmund Leach, who absorbed influences from cybernetics and structuralism, among other things. In Wiener the choice of binary representation was an engineering decision; but the connection with structuralist thinking was probably inevitable:

In practice, most structuralist analysis invokes the kind of binary algebra which would be appropriate to the understanding of the workings of a brain designed like a digital computer. This should not be taken to imply that structuralists imagine that human brains really work just like a digital computer. It is rather that since many of the products of human brains can be shown to have characteristics which appear also in the output of man-made computers it seems reasonable to attribute computer-like qualities to human brains. This in no way precludes the probability that human brains have many other important qualities which we have not yet been able to discern.

(1972: 333)

Leach's move is by now a common one: to state that the compelling analogy with digital computing is of course partial, but then to go on to assume that it is total. This essay was a contribution to a symposium on nonverbal communication, and unlike most of the other contributors Leach saw nonverbal communication as very similar to verbal communication:

The "yes"/"no" opposition of speech codes and the contrasted signals of non-speech codes are related to each other as metaphors; the structuralist hypothesis is that they are all alike transforms of a common algebraic opposition which must be assumed to occur as "an abstract element of the human mind" in a deep level structure. The relation between this abstraction and the physical organisation of human brain tissue is a matter for later investigation.

Incidentally it is of interest that the internationally accepted symbolism of mathematics represents this binary opposition either as 0/1 or as −/+. The second of these couplets is really very similar to the first since it consists of a base couplet −/− with the addition of a vertical stroke to the second half of the couplet. But if a student of primitive art, who could free himself from our assumptions about the notations of arithmetic, were to encounter paired symbols 0/1 he would immediately conclude that the opposition represented "vagina"/"penis" and was metonymic of female/male. A structuralist might argue that this fits very well with his assumptions about the deep structure algebra of human thought.

(1972: 334)

As we have seen, Bateson's view of nonverbal communication was that it involved a kind of logic that was basically different from the digital and the verbal; but Leach doesn't respond to his views (though he cites one of his books on the subject), or to those of the other speakers at the symposium.

Like cybernetics, structuralism has come and gone. But just as anything binary looked like the right answer to Leach, today anything digital has the same appeal for many. Thus it is now common to read scrupulous and thoroughly researched books on cognitive science — such as Daniel C. Dennett's Consciousness Explained (1991) and Steven Pinker's How the Mind Works (1997) — in which considerations of brain anatomy are only a secondary matter. Once the point is established that there is a good case for seeing the brain as performing computations, though in ways different from conventional digital computers, there is a shift of emphasis to the computational level alone: with the idea that (as both writers put it) most of what matters can be seen as happening in a layer of "software" whose nature doesn't depend very strongly on the brain "hardware" whose details are yet to be established. Though the non-digital features of the brain are fully acknowledged, everything shows a tendency to revert to a digital mode. Some are highly critical of the digital emphasis of this kind of approach: Jerry Fodor, for example, commented in 1983: "If someone — a [Hubert] Dreyfus, for example — were to ask us why we should even suppose that the digital computer is a plausible mechanism for the simulation of global cognitive processes, the answering silence would be deafening" (129). It is nevertheless a strong program for research: it has produced good work, it makes it possible to draw on the power of digital computers as research tools, and it does not require waiting around for a resolution of the many outstanding questions about brain anatomy.

But it doesn't prove that all thinking is digital, or that the situation of the program on the digital computer, involved from start to finish in processing digital data, is a complete model for the mind. The imbalance is far greater in popular culture, where the equation of the digital with thought and a disembodied computational world, and of the analog with the physical world, is inescapable (Rodowick 2001). In the era of cybernetics, the body was both digital and analog, and other things too; not only was the anatomy of the brain significant in a consideration of the mind, but the anatomy of the body offered useful analogies. Now we find ourselves needing to make an effort to defend the idea that the body has something to do with thinking, so strong is the idea of the division.

The Nature of Texts

In the mid-1960s, Nelson Goodman used the ideas of digital and analog representation to help develop a distinction between artworks expressed through "notations" — systems such as writing or musical scoring for specifying a work, with the digital property of being copyable — and art objects such as paintings that could be imitated but not exactly copied. The alphabet, on this view, is digital: every A is assumed to be the same as every other A, even if differences in printing or display make some particular instances look slightly different. There is a fixed set of discrete letters: you can't make up new ones, and there is no possibility of another letter halfway between A and B. A text is readily broken down into its component letters, and is readily copied to create something just as good as the original. But a painting is not based on juxtaposing elements from a fixed set, and may not be easy to decompose into discrete parts. Like von Neumann and Bateson, Goodman found that the modes could be mixed: "A scale model of a campus, with green papier-mâché for grass, pink cardboard for brick, plastic film for glass, etc., is analog with respect to spatial dimensions but digital with respect to materials" (1968: 173).
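Computer text makes Goodman's point directly visible. In this minimal sketch (the sample sentence is arbitrary), two copies of a text are simply equal, and the text decomposes without residue into a closed set of letter tokens:

```python
# Two copies of "the same text" as character sequences: a notation
# in Goodman's sense.
copy_1 = "Call me Ishmael."
copy_2 = "Call me Ishmael."
print(copy_1 == copy_2)  # True: the copy is just as good as the original

# The text decomposes without residue into tokens from a fixed set;
# there is no letter halfway between A and B.
print(sorted(set(copy_1)))  # [' ', '.', 'C', 'I', 'a', 'e', 'h', 'l', 'm', 's']
```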

The idea that text is in this sense a digital medium, a perfect-copy medium, is now widespread. And this view also fits many of our everyday practices in working with texts: we assume that the multiple copies of a printed book are all the same, and in an argument at any level about the merits of a recent novel it would not be persuasive if I objected, "But you didn't read my copy of it." In the world of art history, though, that kind of argument does have force: it is assumed that you need to see originals of paintings, and that someone who only saw a copy was seeing something significantly different. In discussions of texts we also assume that it is possible to quote from a text and get it right; you could make a mistake, but in principle it can be done and in frequent practice it is done. The digital view of text offers an explanation of what's behind these standard practices.

The digital view of text also suggests reasons why it works so well in digital computer systems: why text does not in general pose as many technical difficulties as graphics, and is at the heart of some outstandingly successful applications, most notably digital publishing and full-text searching. Written language is pre-analyzed into letters, and because those letters can be treated as digital data it is straightforward to make digital texts, and then to search them. The limitations of full-text searching are familiar: above all, the problem that it's only particular word forms that are easy to search, and not meanings. But that is still far ahead of the primitive state of searching digital images: because there is no easy way to decompose digital images into anything like an alphabet of visual components, even the most limited kind of image searching is a huge technical problem. And while we have many tools for working with digital texts and images, it is for texts that we have tools like spelling correctors that are able to get somewhere near the meaning, without any special preparation of the data. Image manipulation cannot do that unless the image has been created in a very deliberate manner, to keep its components separate from the beginning (as with the "layering" possible in some programs).
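Both points can be sketched with nothing beyond the Python standard library; the miniature corpus and vocabulary here are invented purely for illustration:

```python
import difflib

# Invented miniature corpus.
documents = {
    "doc1": "The abacus is digital, the slide rule is analog.",
    "doc2": "Cybernetics studied control systems in animals and machines.",
}

def full_text_search(term):
    """Match the literal word form: easy precisely because the text is
    pre-analyzed into discrete characters. Meaning stays out of reach;
    a search for 'calculation' would find neither document."""
    return [name for name, text in documents.items()
            if term.lower() in text.lower()]

print(full_text_search("analog"))  # -> ['doc1']

# A spelling corrector gets somewhere near intent by comparing a
# mistyped form against a known vocabulary of letter sequences.
vocabulary = ["digital", "analog", "cybernetics", "abacus"]
print(difflib.get_close_matches("cybernetcs", vocabulary, n=1))
# -> ['cybernetics']
```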

But digital computers and written texts are digital in different ways. Digital computer systems are engineered to fit that definition: data in them is digital because it is created and maintained that way. Goodman's definition of a "notation" is an exact description of such data. But the same description is an idealized view of our practice with texts, one that helps explain many of its features but is wrong about others. A digital system has rules about what constitutes the data and how the individual tokens are to be recognized — and nothing else is to be considered significant. With texts we are free to see any other features as significant: to decide that things often considered as mere bearers of content (the paper, the design, the shapes of the letters) do matter. Such decisions are not merely arbitrary whims, as we internalize many assumptions about how particular sorts of texts should be presented. Jonathan Gibson has demonstrated the importance of blank space in seventeenth-century English letters: the higher the status of the addressee, the more blank space there was supposed to be on the page. And most readers of the present chapter would find their response to it affected by reading a text written in green crayon on yellow cardboard, rather than in a printed Blackwell Companion. Examples are not difficult to multiply once you try to examine implicit assumptions about how texts are appropriately presented and not merely worded. Though many of our practices assume that text is only the sequence of alphabetic letters, is only the "content," the actual system involves many other meaningful features — features which as readers we find it difficult to ignore. These issues about copying are ones that may well seem to lack urgency, because along with knowledge of the alphabet we've absorbed so much about the manner of presentation that we don't normally need to think about it: the yellow cardboard is not something you even consider in thinking about academic writing. But a description of what reading actually is must take these issues into account.

The digital trait of having an exact and reliable technique for copying does not in practice apply to texts on paper: it is not hard to explain how to copy a contemporary printed text letter-for-letter, but getting it letter-perfect is in fact difficult for a text of any length, unless you use imaging methods that essentially bypass the digitality of the alphabet. But the bigger problem is knowing what matters besides the letters: To what level of detail do we copy? How exactly do we need to match the typefaces? Does it need to be the same kind of paper? Do the line breaks need to match? Some of these features are ones that could be encoded as digital data, some as analog; but the core problem is that there is not a clearly bounded set of information we could identify as significant, any more than with a painting. Once we attend to the whole object presented to our senses every feature is potentially significant.

Texts created in digital form to some extent avoid these problems. They're digital and so can be exactly copied. Languages such as PostScript can specify what is to be displayed with great specificity (though the more specific such languages are, the more complex they are: PostScript is vastly larger than HTML). But actual presentations still wind up being variable, because machinery is variable: a reader still sees the text on a particular screen or paper, and those interfaces are features that still matter for the reception of what's written.

It is not, of course, absurd to say to readers: look at what I've said, not at the way it's written down! It remains true that many practices in working with texts fit the digital view, in which only the sequence of letters is significant, and this view of their work is often what authors intend. The experience of using the World Wide Web has bolstered that view, as we now often see work of scholarly value presented in a typographically unattractive manner that we ought to disregard. But that bracketing is a step we have to choose to take: our normal mode of reading just isn't to read the letters alone and abstract them in the digital way; we normally take in much more than that. We may think of ourselves as working with text as computers do, but it is not a way of working that comes naturally.

The constitution of digital and analog data is a function of the whole system in which it is embedded, rather than of data as an independent entity; the problem with texts is that the system does not have the built-in limits of a computer system, because we as readers don't always follow the digital rules. Just as it is appealing and even useful to think about the mind in digital-computer terms, it is also appealing and useful to think about text that way: so long as we remember that it is a partial view. You are not supposed to be paying attention to the typography of this chapter in reading it and extracting its informational content; as a piece of scholarly writing it is not supposed to have an aesthetic dimension at all. But, as Gérard Genette argued, there are ample empirical and theoretical reasons to think that any text has the potential to be regarded aesthetically, even this one; some writing by its form (such as poetry) makes a direct claim to literary status, but readers choose to regard other writing as literary despite its nonliterary genre. And beyond this, there does not seem to be a human operation of reading that works the way digital reading has to: by making a copy of the data as defined within the system, and nothing more. That bracketing may to a degree become habitual for us, to the extent that it seems like the proper way to read, but it never becomes quite complete. Reading remains a practice that is not reducible to information or to digital data.

References

Ashby, W. Ross (1956). An Introduction to Cybernetics. London: Chapman and Hall.

Barthes, Roland (2002). Comment vivre ensemble: Simulations romanesques de quelques espaces quotidiens. Notes de cours et de séminaires au Collège de France, 1976–1977. Paris: Seuil.

Bateson, Gregory (1972). Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. San Francisco: Chandler.

Bolter, Jay David (1984). Turing's Man: Western Culture in the Computer Age. Chapel Hill: University of North Carolina Press.

Dennett, Daniel C. (1991). Consciousness Explained. Boston: Little, Brown.

Fodor, Jerry A. (1983). The Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA: MIT Press.

Freud, Sigmund (1957). "The Antithetical Meaning of Primal Words." In James Strachey (Ed.). The Standard Edition of the Complete Psychological Works of Sigmund Freud, Volume XI (Alan Tyson, Trans.). London: Hogarth Press and the Institute of Psycho-Analysis, pp. 155–61 (Original work published 1910).

Galison, Peter (1994). "The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision." Critical Inquiry 21: 228–66.

Genette, Gérard (1991). Fiction et diction. Paris: Seuil.

Gibson, Jonathan (1997). "Significant Space in Manuscript Letters." The Seventeenth Century 12: 1–9.

Goodman, Nelson (1968). Languages of Art: An Approach to a Theory of Symbols. Indianapolis: Bobbs-Merrill.

Gutmann, Peter (1996). "Secure Deletion of Data from Magnetic and Solid-state Memory." Sixth USENIX Security Symposium Proceedings, San Jose, California, July 22–25, 1996. <http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html>.

Haugeland, John (1981). "Analog and Analog." Philosophical Topics 12: 213–25.

Hayes, Brian (2001). "Third Base." American Scientist 89: 490–4.

Hillis, W. Daniel (1998). The Pattern on the Stone: The Simple Ideas that Make Computers Work. New York: Basic Books.

Leach, Edmund (1972). "The Influence of Cultural Context on Non-verbal Communication in Man." In R. A. Hinde (Ed.). Non-Verbal Communication. Cambridge: Cambridge University Press, pp. 315–44.

Marshall, John C. (1977). "Minds, Machines and Metaphors." Social Studies of Science 7: 475–88.

Mindell, David A. (2002). Between Human and Machine: Feedback, Control, and Computing before Cybernetics. Baltimore: Johns Hopkins University Press.

Pinker, Steven (1997). How the Mind Works. New York: Norton.

Rodowick, D. N. (2001). "Dr. Strange Media; or, How I Learned to Stop Worrying and Love Film Theory." PMLA 116: 1396–404.

Sturgeon, Theodore (1953). More than Human. New York: Farrar.

Taub, A. H. (1960). "Review of The Computer and the Brain." Isis 51: 94–6.

Turing, Alan (1950). "Computing Machinery and Intelligence." Mind, NS 59: 433–60.

von Neumann, John (1958). The Computer and the Brain. New Haven: Yale University Press.

Wiener, Norbert (1948). Cybernetics; or, Control and Communication in the Animal and the Machine. New York: Wiley.