8(3), August 1991, pages 21-37

Ambiguity and Confusion in Word-Processing Research

Joel Nydahl

Despite a "300% increase in journal submissions" (LeBlanc, 1988, p. 29) on word processing as a writing tool--and there is no reason to believe that the submission rate has substantially decreased since LeBlanc's 1988 report--teachers still do not know how good the case for word processing in the writing class really is. Ironically, in fact, writing teachers are likely to be more confused than ever: the more that is published and presented on the effectiveness of word processing in the writing class, the less seems to be known. In contrast to the optimistic self-reports and enthusiastic anecdotal evidence widely disseminated in teachers' lounges and at writing conferences for half a decade, most recent studies of the efficacy of the "electronic pen" (Selfe, 1985) present, at best, an ambiguous picture of the pen's utility.

Would it be a wonder if teachers who once had untested faith now question the miracles supposedly wrought in the classes of others and hesitate to use this no-longer-new tool as a teaching aid? Surely, before going to the trouble of learning software programs, rethinking pedagogy and procedure, and revising syllabi, teachers want to know whether it is certain, only probable, or unlikely--their worst fear--that they will be better teachers and their students better writers if the new technology is put at the center of writing instruction. Even the latest survey of studies in word processing and writing is no help in assuaging doubts about whether students revise more and write better when they use word processing: "We find conflicting results when we examine two variables: revision and quality. Slightly more studies found an increase in revision as found no increase in revision, and fewer studies found improvement in quality as found no improvement" (Hawisher, 1989, p. 52).

A Brief History of Research in Word Processing and Writing

Skepticism about word processing in the writing class is not new. What is new, however, is the degree of informed skepticism about the utility of word processing, as opposed to earlier knee-jerk reluctance to accept its place in the liberal arts curriculum. During the late 1970s and early 1980s, many composition teachers had philosophical reservations about embracing the new tool. Trained in typically conservative English departments, instructors were suspicious of the role of technology in language instruction (Cross & Curey, 1984). Reacting viscerally, they simply felt fear and loathing when contemplating "processed words." [1]

While such reluctance continued in many English departments, unfounded prejudices began to give way in other English departments in the early 1980s, perhaps in part because of seminal studies like Daiute and Taylor's (1981) "Computers and the Improvement of Writing" and Gould's "Composing Letters with Computer-Based Text Editors" (Haas & Hayes, 1986, p. 23).

Gradually, the pendulum of opinion began to swing in the other direction. By 1983, Richard Collier could accurately point out that many early "skeptics [had] turned converts, and the formerly unconcerned [had] started to take wary notice" of computer-assisted instruction (p. 149). In 1984, Cross and Curey concluded that "the climate [was] perhaps an ideal one for the empiric[al] examination of the effect the computer has on the writing process" (p. 1).

Instead of empirical studies, however, there followed, according to Jeanette Harris (1985), a period of "[r]ather extravagant, and largely unsubstantiated, claims" (p. 323) about the instructional potential of word processing as a writing and teaching tool. We were simply "told that word processing not only reduce[d] the number of errors in our students' writing but also encourage[d] them to experiment, increase[d] the amount of revising they [did], enable[d] them to perceive writing as a process, and [gave] them a sense of audience" (p. 323). Most studies carried out during this period of naive enthusiasm dealt with word-processing systems used in loose conjunction with computer-assisted instruction (CAI) programs or with word processing on computers that functioned simply as glorified typewriters.

Research informed by an understanding of the word-processing system as a powerful tool with its own effects on writers and writing appeared only gradually. In 1985, Lillian Bridwell and Ann Duin found only one published study (Collier's 1983 essay) dealing exclusively with word processing and student writers (p. 117). The only other pre-1985 study that focused strictly on the effects of word processing seems to be the 1984 conference presentation by Cross and Curey. By 1985, however (the same year in which Bridwell and Duin noted a dearth of studies), Cynthia Selfe sensed the imminence of the "state of empirical inquiry" that Cross and Curey (1984) had predicted:

English teachers and researchers are taking a closer look at the electronic pen as a writing instrument. We are starting to wonder if these machines have affected those strategies students use to generate, plan, draft, redraft, and edit their composition assignments in the ways we expected and, if so, how broad their effect has been. (pp. 55-56)

In 1986, Gail Hawisher published a survey of 24 studies that concentrated "exclusively on the effects of word processing rather than on other kinds of software programs" (p. 7). The significant difference between the new research on which she reported and most of what had been done only a few years earlier was that researchers had begun to conduct controlled studies instead of merely relying on naive self-report data. Since then, the comparative flood of empirical research has not subsided.

The Situation Today

Unfortunately, the new empirical research has failed to produce positive evidence of the efficacy of word processing as a writing aid. Colette Daiute (1986) has changed some of her early views on whether a computer encourages revision; Deborah Holdstein (1987) notes that "there is little or no conclusive evidence . . . that the computer can make students write more effectively" (p. 52); Jeanne Simpson (1988) laments "the difficulty of pinning down a clear, salutary effect of word processing on student writing" (p. 12); and Gail Hawisher (1986), who once reported "initial evidence" that word processing might be "particularly effective" with basic writers (pp. 21-22), now (1989) tells us that "[c]ontradictory results are beginning to emerge with basic writers as they have earlier with other student populations" (p. 53).

In 1989, two elaborate studies--one out of Carnegie Mellon, the other based on research conducted at Southern Illinois--confirmed those reevaluations. In the Carnegie Mellon study, Christina Haas (1989) notes that "numerous studies have failed to consistently support claims of increased revision with word processing" (p. 202); in the other study, Stephen Bernhardt, Penny Edwards, and Patti Wojahn (1989) report that "researchers who have attempted to document the effects of computers on student writers have produced evidence that is at best inconclusive" (p. 109).

Even worse, however, studies frequently contradict each other. Consider, for example, two illustrative pairings: Jeanne Simpson (1988) found that her word-processing students "wrote substantially longer papers than the regular students" (p. 13); Ruth Gardner and Jo McGinnis (1989), on the other hand, found that although students wrote more quickly when using a word processor, they did not produce any more text. Jeanette Harris (1985) found that "using a word processor discourage[d] revision" (p. 330); Richard Stracke (1988), however, found that the "painlessness of computerized revision [stimulated] more frequent revision" (p. 54).

At yet another level of frustration, even when there seems to be conclusive evidence that word processing can be an effective writing aid, results may be inconsequential as a purely practical matter. For example, a study by Balajthy, McKeveny, and Lacitignola (1989) found that although "there is a statistically significant difference in the amount of writing and revision done by children using word processors than by those using pencils, this difference is often so small that it is of little educational significance" [emphasis added] (p. 3); and Bernhardt, Edwards, and Wojahn (1989) report that the computer helped "students revise their work to the point where it was [only] a little better than that of the regular students" [emphasis added] (p. 125).

Where Are We Now?

Why, we may ask, is there no unambiguous evidence either validating word processing as an aid to "improving" students' writing or else entirely debunking the idea that word processing is anything more than a glorified typewriter? There are many possible answers to this question. One answer is that "teaching writing using word processing [may simply be] too new a process for our assumptions to have been thoroughly tested" (Leonardi & McDonald, 1989, p. 9). [2] Another possibility is that "writing ability [is] such a complex skill . . . that we may [never] be able to measure confidently the effects of such a powerful writing tool" (Bernhardt, Edwards, & Wojahn, 1989, p. 129).

A third answer--one not as commonly held as the other two, but worth exploring--is that researchers in computers and writing simply may never be able to validate "clean" research projects in the classroom. In attempting to replicate laboratory findings, for example, Gavriel Salomon (1989) claims that it is naive to assume that "the computer-related variables [we] study in [our] lab and the ones [we] will actually implement in real classrooms are more or less the same"; on the contrary, what is "studied about the computer under controlled conditions and what is effectively used in the classroom are qualitatively different from each other" (p. 1). Hawisher (1986) noted much the same thing: To "look at the effect of computers in an environment in which they are used to teach writing is probably not the same as examining them in an environment in which they are used to produce writing" [emphasis added] (p. 21).

Recognizing Problems and Issues in Research Design

These answers are, of course, neither mutually exclusive nor all-inclusive. There are, in fact, many other impediments to finding unambiguous evidence supporting or debunking the use of word processing as a writing and teaching tool: a wide range of methodologies, methodologies lacking focus, methodologies with uncontrolled variables, and a failure by users to differentiate among methodologies. [3]

Putting aside, temporarily, the range of methodologies (user awareness, after all, can surmount that problem), much of the ambiguity and confusion results from a lack of focus: the researcher has investigated writing or writing habits in general instead of looking at specific features of writing or the writing process. The studies by Stracke (1988) and Weiss (1988), for example, which tested, respectively, for "performance" (p. 51) and "better papers" (p. 58), are difficult to interpret because they look at nothing in particular; those by Harris (1985) and Simpson (1988), on the other hand, which looked, respectively, for "meaning changes" (pp. 324-325) in micro- and macrostructure and for "T-units and coherence devices" (p. 12), are easier to interpret because they investigate specific features.

Another common flaw is a toleration for uncontrolled variables. Some researchers openly acknowledge this weakness. Harris (1985), for example, admits that her "findings were affected by a number of significant variables that [she] was unable or chose not to control"--among them that significant revisions may have taken place between first and last drafts (the only ones at which she looked); that her students began at different levels of writing ability; that responses from her and from peer editors helped shape final drafts; and that, regardless of the role word processing might have played, "[s]tudents typically improve their writing skills as a course progresses" (p. 325). And Timothy Weiss (1988) openly acknowledges that "the novelty of word processing may have temporarily motivated students in the computer group"; that "the computer group may have been better writers than the non computer group"; and, finally, that there may have existed a "teacher-researcher bias favoring the computer group" (p. 64). Many researchers, however, are not as frank--or as knowledgeable--as Harris and Weiss about problems in their methodology, and we have to discover the limitations of their studies ourselves.

The bottom line is two-fold. First, as users of research data, we must learn to attend to what studies can and cannot tell us and to be wary of our own faulty interpretations. We cannot blame researchers if we carelessly cite various kinds of studies "as though they are comparable" (Hawisher, 1986, p. 6). Second, as researchers, we must be certain that we conduct controlled, "systematic observations" (Selfe & Wahlstrom, 1988, p. 58). In the meantime, we must be wary not only when others report little or no improvement in writing when students use word processing, but also when they report miraculous transformations.

When we consider any study of the relationship between word processing and writing--whether our own or someone else's--we need to ask whether the study addresses the following general questions:


Selection and Constitution of Subjects

How were the subjects selected? Because any study will be affected by the subjects' involvement in the study, we need to ask whether subjects volunteered or were randomly selected. Not unexpectedly, the practice varies among researchers. Some use all students who register for a course, for certain sections of a course, or for one particular section of a course. Stracke's department, for example, decided that "half of [the first-year composition sections] would have word processing and the other half would not" (Stracke, 1988, p. 51). Other researchers, however, carefully select their subjects, perhaps because they want to study how motivated writers adapt to and use word processing. Bridwell-Bowles (1989), for example, studied subjects who "wanted to learn to use new programs and were willing to volunteer their time" (p. 83).

Even motivated subjects, however, were not always given the choice of participating or not participating. Although Weiss's (1988) students--would-be business and technical writers--were a "likely group to test in that [they] . . . consider[ed] writing important to their future, professional plans," students who "enrolled in [the experimental] sections . . . had no foreknowledge that they would meet in a computer classroom and write their papers with a word-processing program" (pp. 58-59).

As a subsidiary matter, we need to ask how many subjects there were. Using entire classes or even groups of classes is a common practice, especially if the researcher wants to produce quantifiable data; for example, the subjects studied by Bernhardt, Edwards, and Wojahn (1989) were students from 12 out of a total of 24 "first-semester, introductory composition classes" (p. 111). Researchers who employ case studies (Harris, 1985; LeBlanc, 1988; Selfe, 1985) or ethnographic studies, however, use far fewer students and are likely to turn up different kinds of data and different results because they take a more personal approach, look at their subjects in significantly more detail, and ask different kinds of questions.

Finally, in order to account for variables, we need to ask whether the researcher used a control group. Most recent studies of large numbers of subjects have had one. Bernhardt, Edwards, and Wojahn (1989), for example, who wanted "to approach a random assignment of students," used 12 regular and 12 computer classes (p. 111); and Weiss (1988) used "two sections . . . in the 'computer' group, and two sections . . . in the 'non computer' group" (pp. 58-59). A control group, however, may create unexpected problems of its own. Cross and Curey (1984), for example, knew that they had to "avoid . . . creating experimental differences which would result from giving the [computer] group more instruction or writing time" (p. 4). The existence of control groups in the study at Southern Illinois seems to have:

exerted strong and unintended effects on the whole experience of computer classrooms. For the most part, instructors struggled to keep their computer sections parallel with their regular sections. . . . If teachers had taught two computer sections, we believe we would have seen even more changes in strategies, assignments, and course requirements, and, we imagine, stronger effects on writing and attitudes. (Bernhardt, Edwards, & Wojahn, 1989, pp. 128-129)

How much writing experience did the subjects have? Because writing comprises a series of skills developed over time, and because students' individual writing backgrounds and abilities help to determine how word processing affects their writing and writing habits (Bridwell & Duin, 1985), we need to ask how much and what kinds of writing experience subjects had when the study was carried out. Although most studies of college-level writers have focused on students in first-year composition classes (Bernhardt, Edwards, & Wojahn, 1989; Harris, 1985; LeBlanc, 1988; Simpson, 1988), some have used more advanced writers (Selfe, 1985), including those in classes for specific kinds of writing (Weiss, 1988).

Because composing strategies are more likely than "any specific effects of the technology alone" to determine how useful writers find the computer (Holdstein, 1987, p. 54), we need to ask how much previous writing experience the subjects had. Whether using word processing or not, inexperienced writers have more difficulty than experienced writers in carrying out the compound/complex acts required in revision because they have more difficulty in "juggl[ing] . . . the demands placed on short- and long-term memory" (Collier, 1983, p. 150). [4] Exposing basic writers to word processing may stack the cards against them by "add[ing] another burden to the already overwhelming task of writing" (Schwartz, 1985, p. 17); not only have they not mastered writing conventions, but their lack of experience also has given them inferior revision skills.

Instruction in Word Processing and Writing

What training did the subjects receive in word processing? Among recent studies, variations in training and in reports of the training are great indeed; frequently, we do not know how much previous computer experience subjects had, whether they were taught word processing by a computer specialist or by the teacher (and, if by the teacher, how much training in teaching with computers he or she had or how much he or she actually writes with computers), and whether writing and revising techniques were presented as part of the word-processing lesson. We learn only that Bridwell, Sirc, and Brooke's (1985) subjects underwent a two-week training session in word processing; that Stracke's (1988) students attended a word-processing session during the summer preceding the class; and that Selfe's (1985) were "experienced computer users . . . who were already familiar with the microcomputers and word-processing program used in the . . . computer lab" (p. 56).

As an ancillary issue, it would also be helpful to know what the level of keyboard literacy was among subjects. Only a few researchers consider this to be a variable worth reporting. Among those who do, LeBlanc (1988) mentions that a "prerequisite for the student's participation" in his class "was the ability to type, though typing speed was not considered" (pp. 30-31); and Weiss (1988), although not requiring typing, notes that "75 percent [of the subjects knew] how to type" (pp. 58-59).

What instruction did the subjects receive in writing and revision strategies? We need to ask if teacher-researchers taught their subjects how to do what word processing supposedly encourages them to do. If we do not know how much and what kind of writing instruction subjects received, we cannot be certain whether the reported results are valid and applicable outside the study--that is, in our classrooms or for our research. Collier's study (1983) illustrates what happens when a researcher fails to give adequate training--in this case, in revision strategies. [5] Collier naively assumed that the "novel capabilities" of word processing, by themselves, would encourage "serendipitous learning"--in other words, that inexperienced writers would "play and experiment with language" and therefore "significantly expand the number and the complexity of the operations [they] used...." (pp. 149-150). He found, however, that his students did not do what they did not know how to do--even though they had a powerful new tool to help them do it; the "writing habits and revision paradigms of most of [his students] failed to alter very noticeably when they switched to using the word processor" (p. 153).

What criticism did the subjects receive from the instructor or peers? When we attempt to evaluate studies that purport to measure either attitudes toward revision or the amount, kinds, or quality of revisions actually made, we need to ask to what degree the writing produced might have been influenced by others. Much recent research on computers and composition has stressed the influence that informed writing instruction by teachers can have on student response to writing with word processing. Bernhardt, Edwards, and Wojahn (1989), for example, found that the "teacher had a very strong effect on whether the students improved" (p. 126); Sommers (1985) found that "writers are likely to benefit from using microcomputers if . . . [t]he writing teacher is indispensable as collaborator and audience" (p. 8); and Harris (1985) acknowledges that her "written responses to . . . early draft[s] affected the type and amount of revising that shaped the final draft[s]" of her students (p. 325). Some researchers who recognized the vital role that peer evaluation can play in a "process-based environment"--Hawisher (1989), for example--went so far as to exclude "peer evaluation from the research design for fear that it would make judging the influence of word processing more difficult" (p. 47).

Equipment and Environment

What hardware and software did the subjects use? If we want to translate research into other classroom environments, we need to ask what hardware and software subjects used. Not only do we have "so many word-processing packages to choose from that we hardly know what we are measuring when we say we are measuring the effects of 'word processing'" (Bridwell-Bowles, 1989, p. 83), but "word-processing packages have changed so dramatically since the first studies that we're barely looking at the same concept captured in the term 'word processing'" (Hawisher, 1991, personal communication).

Those who have struggled with clumsy word-processing systems or memory-deficient computers know that "the hardware and software one is working with will [profoundly] influence one's computer strategies" (LeBlanc, 1988, p. 40). Inexplicably, however, many researchers do not take into account--or at least neglect to mention--the kinds of writing tools their subjects used. Of the 24 studies investigated by Hawisher in her 1986 study, for example, 11 did not specify the computer and six did not specify the word-processing system used. In her 1989 study, Hawisher notes that because "different programs might well facilitate some writing strategies to the exclusion of others, . . . we can't infer this without a description of the features of the word-processing package" (p. 57).

Knowing what tools students used is especially important in the case of software because research indicates that the idiosyncratic features of various word-processing systems will influence how subjects compose and revise and will, in fact, play an important role in the quality of the final product. Holdstein (1987), for example, claims that "word-processing . . . can actually impede students' writing processes if the available software is inappropriate for student use" (p. 54). One research team even "switch[ed] to simpler word processing software" after running a pilot study (Cross & Curey, 1984, pp. 2-3).

We might wonder, for example, about the extent to which the results of the Carnegie Mellon study, reported by Haas (1989), were affected by students composing on the Andrew prototype system and not on standard computers. We might want to ask also whether studies carried out on such supposedly ideal systems will tell us much about what is likely to happen down in the trenches--in our classes. What, in other words, are the implications of such studies for most writing teachers who have to deal with the hardware and software given them?

In addition, how individual students relate psychologically to computers will affect how they take to composing with word processing. Quite apart from the training they received, LeBlanc (1988) found "students who seem[ed] to revise quite effectively on the machine [and] others [who] seemed almost hindered by the power of the computer" (p. 30); and Selfe (1985) identifies three familiar groups into which her students naturally fell: at either end of a spectrum, "paper-and-pencil composers" and "screen-and-keyboard composers," and all the others "somewhere in between" (pp. 62-64). Bernhardt, Edwards, and Wojahn (1989) are undoubtedly correct when they suggest that specific studies need to distinguish between those who adapt well to the technology and those who do not (p. 129).

Under what circumstances did the writing take place? Because individuals respond to physical and psychological settings in different ways, we need to know about the "facility itself [where writing took place] and its logistics" (LeBlanc, 1988, p. 40). A number of researchers have discovered that where students received instruction in writing and where they did the actual writing are important pieces of information. Elizabeth Sommers (1985), for example, believes that "[w]riters learn best when writing is taught as process in decentralized classrooms" (p. 9); and Stracke (1988) recommends teaching in a "full-service computer room" because "the best source of ideas is a social environment where people can toss ideas around freely" (p. 55).

We also need to know the number of hours students had access to computers (and perhaps even the schedule of those hours) and the degree to which users other than those involved in the study put demands on computer time. Dawn Rodrigues and Raymond Rodrigues (1989) found that "students with limited access to computers rarely learn how to compose freely and comfortably at the computer monitor" (p. 16); Christina Haas (1989) believes that writers "working on public machines" sometimes "feel pressured to work quickly and not waste time because . . . other students [may be] waiting to use the computer" (pp. 201-202).

If opportunities to share and discuss ideas and texts, as well as pressure to perform, affect the amount and quality of writing produced, we need to know which studies were conducted in laboratories (for example, the one by Haas and her colleagues at Carnegie Mellon) and which in classroom settings. Studies of writing done in isolation or in computer laboratories where other writers are not present will not be comparable to studies of writing produced in computerized classrooms full of students.

Different Kinds and Idiosyncratic Use of Data

The difficulty of evaluating the effects of word processing on student writers becomes especially apparent when we consider the various kinds of data that researchers choose to collect and look at--or choose to ignore. Some researchers read and evaluate only texts from a single class, while others compare essays from both computer and control classes; and some researchers compare only first and last drafts, while others also evaluate intervening ones.

Among researchers who take into account various data other than the texts produced, we find a great disparity among the data. The Southern Illinois study done by Bernhardt, Edwards, and Wojahn (1989), for example, used what seems to be the most extensive data base of any study. Besides evaluating drafts, Bernhardt, Edwards, and Wojahn looked at attendance, withdrawals, assignment completions, pre- and post-tests, Daly-Miller Writing Apprehension Tests, end-of-term evaluations by students and teachers, and class observations.

Other researchers have used more limited data bases--ones that may have relatively little in common with those used in other studies. LeBlanc (1988), for example, interviewed students about their composing processes, looked at student logs, and read their texts; Harris (1985) conducted both pre- and post-study interviews and observed her students as they wrote; and Collier (1983) taped "thinking aloud" protocols and "videotaped the screens of the word processors" (p. 151) during the final revising session.

Comparing studies is further complicated by the fact that any combination of data is possible. It is as if researchers individually ordered from a menu at a Chinese restaurant. For example, a researcher looking only at essays from a single word-processing class could elect to look at first, intervening, and last drafts; could have students keep logs; could interview students only at the end of the term; and could record their key strokes. Another researcher could order quite another array of dishes.

We cannot ignore this inherent complexity if we want to carry out valid studies or evaluate the studies of others. A difference in even one variable will weaken the comparability of any two or more studies. For example, because "revision is likely to have taken place [even] before the first of two drafts" (LeBlanc, 1988, p. 32), a study failing to look at drafts produced throughout the writing process may overlook significant revisions or revising strategies.

What the Future Holds--Or Ought To Hold

The field of word processing and writing needs narrow rather than broad studies--focused rather than sweeping views. Some researchers have recognized this all along. In 1985, Jeanette Harris complained that instead of "investigating a specific feature of writing, many . . . studies [unfortunately] attempt to determine the effects of computers on writing in general" (p. 323). Four years later, discussing her own work, Hawisher expressed a similar sentiment: "I firmly believe (after conducting two comparative studies) that experimental designs focusing on [general] improvements in writing quality tell us very little" (Bridwell-Bowles, 1989, p. 84).

In this regard, three recent attempts to present overviews of needed research are instructive because they suggest projects and help us understand the weaknesses in many of the studies available. Hawisher (1989) stresses "build[ing] upon previous research"--specifically, carrying out quantitative (comparative) studies on "patterns and themes" that qualitative (ethnographic and case) studies have unearthed (p. 58); developing "systematic research agendas" designed so that "each of the studies builds upon previous ones" (p. 59); "us[ing] a longitudinal approach" to discover "emerging patterns of composing" (pp. 59-60); "focus[ing] on experienced student-users of word processors" and "experienced writers who are proficient at word processing" (pp. 60-61); studying computers in different "research contexts," such as "classroom activities" and "English curricula" (pp. 61-62); and studying computers as research tools--as in the development of keystroke-recording programs (pp. 62-63).

In a comprehensive survey of "theoretical and pedagogical approaches" that researchers need to adopt, Selfe and Wahlstrom (1988) argue for investigating such subjects as the effects of the physical layout of computer and writing environments, the protocols involved in the "new etiquette of collaboration," the role of hard copy, the nature of teacher comments on writing done with computers, and the student responses to such comments (p. 58).

Finally, Bernhardt, Edwards, and Wojahn (1989) suggest that "[s]pecific studies . . . be designed to distinguish subgroups among students" and that "investigation[s] of teachers in lab settings" be carried out to determine "how well individual teachers adapt to a lab environment" and how they "change their strategies when free to adapt instruction to a lab setting" (p. 129).

Because the results that "count" may be long- rather than short-range--especially if, as some claim, word processing and certain writing software can encourage higher-order thinking skills--one of the most important kinds of studies we can encourage and carry out is the longitudinal study. As writers and writing teachers, we know that improvements in writing, with or without the computer, take place in uneven increments over time. With the computer, matters can only get more complicated. As Bridwell-Bowles (1989) points out, the "real effects of writing with computers probably should be measured long after a writer has made initial adjustments to the medium" (p. 83). Some in the field imply that studies might even extend beyond the temporal and spatial boundaries of academia; Charles Moran (1990), for example, has spoken of "the most useful work [being] long-term studies that track a few particular students while they are having a particular academic experience and continue to follow them afterwards, attempting all the while to connect their work, say in the basic writing computer lab, with the academic work--and anything else--they do" [emphasis added].

Summary

With all that has been published and presented during the past decade on the effectiveness of word processing as a writing and teaching tool, writing teachers should not find themselves more confused than ever. As teacher-researchers in the still-new field of computers and composition, we need to conduct well-conceived, controlled, and focused studies, both quantitative and qualitative; we need also to return to old studies with new vision since, as Hawisher (1986) points out, previous research "can reveal a wealth of detail through extensive description of the composing processes of writers and of their relationship with computers" (pp. 7-10). As Bridwell-Bowles (1989) puts it: "It seems appropriate . . . to critique the ways researchers have asked questions . . . [and] to consider the new kinds of research that must be designed to answer both old, unanswered questions and new, increasingly sophisticated ones" (p. 79).

In short, we need to take a much closer and more careful look at the electronic pen.

Joel Nydahl teaches English at Babson College in Wellesley, Massachusetts.

Notes

  1. That such attitudes are likely to exist even today is suggested by the unconsciously revealing admission of one researcher that, as recently as 1986, her English department "dragged its first-year composition program into the computer age" [emphasis added] (Simpson, 1988, p. 11).

  2. See also Selfe & Wahlstrom, 1988, p. 58.

  3. Of the 24 studies Hawisher investigated in her 1986 essay, "nine employed case-study techniques, nine were experimental, five exploratory, one ethnographic, and three used survey methods" (p. 10); when she "look[ed] at the research design of the [42] studies reviewed [in 1989], 26 [could] be termed comparative (or quantitative) and 16 naturalistic (or qualitative). Twelve of the qualitative investigations were classified as case studies and four as ethnographies" (p. 46).

  4. In spite of its limitations, Collier's study was a milestone in computers-and-composition research because it was one of the first attempts to explore systematically the effects of word processing on student writing.

  5. We should note that Collier's early in-depth study of the effect of word processing on students' revising techniques has been criticized precisely because he "did not intervene in the composing process in any way to suggest areas of the paper that might need revising" (Pufahl, 1984, p. 92).

References

Balajthy, E., McKeveny, R., & Lacitignola, L. (1989). Microcomputers and the improvement of revision skills. In R. Boone (Ed.), Teaching process writing with computers: Research and position papers (pp. 3-6). ICCE.

Bernhardt, S. A., Edwards, P., & Wojahn, P. (1989). Teaching college composition with computers: A program evaluation study. Written Communication, 6, 108-133.

Bridwell, L., & Duin, A. (1985). Looking in-depth at writers: Computers as writing medium and research tool. In J. L. Collins & E. Sommers (Eds.), Writing on-line (pp. 115-121). Upper Montclair, NJ: Boynton/Cook.

Bridwell, L., Sirc, G., & Brooke, R. (1985). Revising and computing: Case studies of student writers. In S. Freedman (Ed.), The acquisition of written language: Revision and response (pp. 172-194). Norwood, NJ: Ablex.

Bridwell-Bowles, L. (1989). Designing research on computer-assisted writing. Computers and Composition, 7(1), 79-91.

Collier, R. M. (1983). The word processor and revision strategies. College Composition and Communication, 34, 149-155.

Cross, J. A., & Curey, B. J. (1984). The effect of word processing on writing. Paper presented at the Mid-Year Meeting of the American Society for Information Science. Bloomington, Indiana. (ERIC Document Reproduction Service No. ED 247 921).

Daiute, C. A. (1983). The computer as stylus and audience. College Composition and Communication, 34, 134-154.

Daiute, C. A. (1986). Physical and cognitive factors in revising: Insights from studies with computers. Research in the Teaching of English, 20, 141-159.

Daiute, C. A., & Taylor, R. (1981). Computers and the improvement of writing. Association for Computing Machinery, 83-88.

Gardner, R., & McGinnis, J. (1989). Ten computerized college writing programs: Toward a benchmark. Research in Word Processing Newsletter, 6, 2-4.

Haas, C. (1989). How the writing medium shapes the writing process: Effects of word processing on planning. Research in the Teaching of English, 23, 181-207.

Haas, C., & Hayes, J. R. (1986). What did I just say? Reading problems in writing with the machine. Research in the Teaching of English, 20, 22-35.

Harris, J. (1985). Student writers and word processing: A preliminary evaluation. College Composition and Communication, 36, 323-330.

Hawisher, G., & Selfe, C. L. (Eds.) (1989). Critical perspectives on computers and composition instruction. New York and London: Teachers College Press.

Hawisher, G. (1986). Studies in word processing. Computers and Composition, 4, 6-31.

Hawisher, G. (1989). Research and recommendations for computers and composition. In G. Hawisher & C. L. Selfe (Eds.), Critical perspectives on computers and composition instruction (pp. 44-69). New York and London: Teachers College Press.

Holdstein, D. H. (1987). On composition and computers. New York: Modern Language Association.

LeBlanc, P. (1988). How to get the words just right: A reappraisal of word processing and revision. Computers and Composition, 5(2), 29-42.

Leonardi, E. B., & McDonald, J. L. (1989). Assessing our assumptions. In R. Boone (Ed.), Teaching process writing with computers: Research and position papers (pp. 9-10). ICCE.

Moran, C. (1990). Megabyte University electronic network, 8 April.

O'Connor, J. (1990). What happens later: A review of the writing habits of students taught to write with computers. Paper presented at the Sixth Computers and Writing Conference, Austin, TX.

Pufahl, J. (1984). Response to Richard M. Collier, "The word processor and revision strategies." College Composition and Communication, 35, 91-93.

Rodrigues, D., & Rodrigues, R. J. (1989). How word processing is changing our teaching: New technologies, new approaches, new challenges. Computers and Composition, 7(1), 13-25.

Salomon, G. (1989). Discontinuity between controlled study and implementation of computers in classrooms: A letter to a young friend. Technology and Learning, 3(3), 1-5.

Schwartz, H. (1985). Interactive writing: Composing with a word processor. New York: Holt.

Selfe, C. L. (1985). The electronic pen: Computers and the composing process. In J. L. Collins & E. Sommers (Eds.), Writing on-line (pp. 55-66). Upper Montclair, NJ: Boynton/Cook.

Selfe, C. L., & Wahlstrom, B. (1988). Computers and writing: Casting a broader net with theory and research. Computers and the Humanities, 22, 57-66.

Simpson, J. (1988). Word processing in freshman composition. Computer-Assisted Composition Journal, 3(1), 11-16.

Sommers, E. A. (1985). Integrating composing and computing. In J. L. Collins & E. Sommers (Eds.), Writing on-line (pp. 3-10). Upper Montclair, NJ: Boynton/Cook.

Stracke, R. (1988). The effects of a full-service computer room on student writing. Computers and Composition, 5(2), 51-56.

Weiss, T. (1988). Word processing in the business and technical writing classroom. Computers and Composition, 5(2), 57-70.