Computers and Composition 9(1), November 1991, pages 83-109

Revising On-Line:
Computer Technologies and the Revising Process

Charles A. Hill, David L. Wallace, Christina Haas

Problems in Word Processing Research

In the last decade, computer technology has become increasingly prevalent in writing classrooms, and writing educators have continued to hold high expectations for the ways in which this technology can support writing and the teaching of writing. Of particular interest has been how word processing--which allows the writer to make changes to a text with minimum time and effort--might improve revision. However, research attempting to study the effects of word processing on student revisions has yielded contradictory results, with little agreement on the nature and direction of changes in revision for word processing users. (See Hawisher, 1986, 1987a, for reviews of studies in word processing.) For instance, Hawisher (1986) reports that of the studies that examined the effects of word processing on revision, six reported increased revision when students used computers, two found mixed results, three found no difference, and one found less revision with word processing than with pen and paper.

In this article, we will examine what we believe are some of the causes of this inconsistency among word processing studies. After reviewing several recent studies of computers and revision, we will argue that word processing research should look at how the computer affects writers' processes, not just their products. Then, we will discuss some recent revision research that has been praised for its theoretical complexity, but that has not been utilized by word processing research. Finally, we will present our own study, which, drawing from this theoretical work, attempts to show how writers' cognitive processes for revision are affected by word processing technology.

Part of the inconsistency among studies undoubtedly results from the wide variety of methodologies that these studies employ. For instance, in these studies, the participants' experience with computers has ranged from two weeks to ten years. So far, word processing research has failed to isolate the general effects of various computer technologies from the effects of trying to write and revise while becoming acclimated to an unfamiliar technology.

Another source of inconsistency among computer studies is the type and quality of the technology used. Of the twenty-four studies cited in Hawisher's 1986 review, five studies used Apple computers, five used IBM-PCs, three used other brands of personal computers, five used some type of mainframe computer, and the rest did not specify the type of computer that was used. Hawisher lists fourteen different word-processing packages that were used in the twenty-four studies. Again, however, six studies did not specify which word-processing program was used.

As Hawisher points out, the software and hardware being used can significantly affect the impact that the computer has on writing. For instance, Haas and Hayes (1986) found that the effects of using a computer depended to a significant degree on the features of the software used and on the physical details of the hardware used. The conflicting results in word processing studies may stem, in part, from the wide range of equipment used in these studies. While researchers continue to talk about computer technology in monolithic terms, we see important but unacknowledged differences among the different technologies that are now available.

Another problem is that most word processing studies have measured only text changes, failing to account for how the technology affects revision processes. Witte (1987) has argued that revision is a complex process, only part of which can be inferred by looking at actual textual changes.

Three highly respected studies of word processing and revision illustrate some of the problems we have been discussing. These three studies (Daiute, 1986; Hawisher, 1987b; Lutz, 1987) have been cited often as well-designed and articulate studies of the effects of word processing on revision, and they all used the same product measure--Faigley and Witte's (1984) taxonomy--to measure changes in revision activity caused by word processing. Yet, the three researchers came up with strikingly different results.

Daiute (1986) found that students corrected more mechanical errors on the computer than with pen and paper, but that they made fewer revisions on the computer. Daiute used Apple IIs, which are commonly found in public school classrooms. However, she designed her own word processing program to avoid what she saw as unnecessary complexities inherent in most commercial programs. Her participants (7th and 9th graders) had been working with the program for six months by the time she collected her data. While Daiute claims to be looking at the cognitive factors involved in computer use, her revision measures are entirely product-based. Like most other revision researchers, she compared before-and-after drafts, thus looking only at textual changes.

Hawisher's (1987b) study, which examines the effects of word processing on the revising behavior of first year students, is notable both for its experimental design and for its natural classroom setting. In this study, twenty first year students wrote four essays each, two with a computer and two with pen and paper. Data collection included a first, second, and final draft of each essay. Like Daiute's participants, Hawisher's participants revised more when they used pen and paper than when they wrote on-line. Further, she found no significant differences in the level of revision (local, sentence-level changes versus global changes) between the on-line and paper conditions. There were also no significant differences between quality ratings for drafts written on-line and drafts produced with pen and paper, regardless of the extent of revisions.

One potential problem with Hawisher's study is that eighteen of her twenty participants had been working on computers for only five-and-one-half weeks when she began collecting data. Her results may stem partly from the interference of an unfamiliar technology, since it seems likely that any benefit from using computers to revise will not show up until the writer has become more familiar with the technology than Hawisher's participants were. Finally, as in the case of Daiute's study (1986), Hawisher's measures reflect only revision activities that were actually instantiated between drafts. She did not attempt to trace the decision-making processes that resulted in the revisions that she found.

In a third study, Lutz (1987) attempted to build a more comprehensive method for tapping the revision process. She wanted to capture the revisions that her participants considered making, but that did not make it into the final text. Therefore, in the computer condition, the system automatically saved keystroke changes every five seconds to preserve a record of the writers' activities while revising. To create a comparable record in the pen and paper condition, Lutz developed an elaborate system that required writers to use colored pens and to number their changes. She also used follow-up interviews after each revising or editing session.

Lutz found several interesting effects of the technology used. In general, when working at the computer, writers produced less text but made more changes than when they wrote with pen and paper. Lutz also argues that the computer focused the writers more toward the surface level of the text; i.e., writers made many more local text changes when working on-line than when working on paper.

Although all of Lutz's participants were familiar with the mainframe system they were using, the system had some drawbacks. She reports that, during heavy use, writers had to stop and wait for the system to catch up with their typing, and it sometimes "swallowed" characters that were typed in. She also mentions that the small size of the computer display made it difficult for writers to locate specific sections of text. (She does not indicate the actual display size of the screen.) She argues that this may have forced the writers to focus on the surface level of the text more in the computer conditions.

Lutz's methods helped capture some of the revision activity that purely product-based measures miss. However, other research on revision (e.g., Witte, 1987) suggests that writers tend to work through a variety of considered revisions in their minds, although many of these considered changes may never make it onto paper. Lutz's results do not reflect any revisions that writers may have considered but decided not to write down or to enter into the computer. Thus, in addition to forcing writers to use cumbersome recording methods in the pen and paper condition, Lutz's study falls short of examining revision processes because of its focus on text changes.

To summarize, though all three of these researchers used the same revision taxonomy, Daiute and Hawisher found that writers (junior high students and college first year students) made fewer changes to their texts when working on the computer, while Lutz's professional and "academic" writers made more changes. Also, Lutz's writers focused significantly more on surface-level concerns when revising on the computer, while there was no significant difference for the college first year students in Hawisher's study. Given the complexity of the writing process, the sensitive nature of its interaction with the technology of word processing, and the differences in computer technology, these conflicting results are not surprising.

Theoretical Concepts of Revision

In addition to addressing the aforementioned methodological issues, studies attempting to capture the complex interactions of technology and revision would benefit from drawing upon recent theoretical advances in our understanding of revision. In early models of the writing process, revision was seen as something that happened primarily after writing (e.g., Rohman & Wlecke, 1964): writers planned what they were going to write, wrote a draft, and then revised the draft. Much early revision research, including research on computers and writing, tried to look at revision by describing and analyzing the differences between discrete drafts. Later, revision came to be seen as an activity that occurs throughout the writing process (e.g., Emig, 1971; Perl, 1979; Sommers, 1980). Only recently has revision been viewed as a complex process that involves and is affected by a variety of cognitive activities that do not always get instantiated onto the page (Witte, 1987).

Recent studies of the revision process (Scardamalia & Bereiter, 1983; Hayes, Flower, Schriver, Stratman, & Carey, 1987; Witte, 1987) have attempted to broaden and enrich traditional conceptions of revision by focusing on the process in real time. This work uses process tracing methodologies in addition to product-based measures of revision. These studies have been used to develop cognitive models of revision that begin to detail its critical role in the writing process. In these models, revision is described as a complex and recursive process, driven by personal goals and by social convention, and shaped by individual conceptions of the rhetorical problem. Because of their importance in understanding revision and their centrality in our own work, the Scardamalia and Bereiter (1983) study and the Hayes, Flower, Schriver, Stratman, and Carey (1987) study are briefly described below.

Scardamalia and Bereiter (1983) began to detail a recursive model for revision. They proposed a compare, diagnose, and operate (CDO) model, arguing that revision can occur at any time in the writing process when writers compare their rhetorical intentions with the text that they have produced. Based on this comparison, writers can then diagnose a problem and proceed to make an operational change. The work of Scardamalia and Bereiter is important, for it helps to explain differences in writers' revising behaviors, not just to describe them.

Hayes, Flower, Schriver, Stratman, and Carey (1987) argued that, as writers produce texts, they compare texts not only with rhetorical intentions but also with knowledge about writing and writing situations. Hayes and his colleagues observed writers evaluating a text against grammatical and stylistic conventions and against specific criteria engendered by the rhetorical situation. Expert writers in this study engaged more often in the important subprocesses of evaluation, problem representation, strategy selection, and task definition. For instance, the expert revisers engaged in evaluation, not only of the text, but also of their own plans for revision. That is, the experts tended to evaluate and revise their plans, while the novice writers called upon rigid, sometimes inadequate, criteria against which the text was measured.

Hayes and his colleagues also argue that when people engage in problem-solving activities such as writing, they work with an implicit definition of what the task involves. This definition guides their decisions about what activities to engage in, and in what order. For instance, when revising, one writer might rewrite a text from scratch, barely looking at the original draft, while another might work through the draft, looking for glaring grammatical or stylistic errors. Clearly, these two writers would have a different conception of what is called for by the task of revision. Hayes and his colleagues found that expert revisers did not just have more highly developed mechanical skills, but that they held different representations of what the task of revising entails. Whereas novice writers tended to work through the text, hunting for errors, expert revisers were willing to completely rework the text, making necessary global changes.

In a case study of a documentation writer working with user feedback, Sullivan and Porter (1990) found that the writer's use of the feedback, and the ultimate success of his or her revision, were greatly constrained by his or her rhetorical orientation. In short, the writer ignored global, conceptual-level comments, concentrating exclusively on local solutions to the problems that the users were having with the document. Their participant was, like the novice writers in the Hayes et al. study, locked in by a representation of the task that did not include the possibility of global revision.

The Study

Our study attempts to assess the impact of computer technology on revision by focusing on writers' task definitions for revising. Of the differences that researchers have found between expert and novice writers, we believe that a writer's conception of revision, his or her task definition, is of critical importance. Therefore, if the use of different technologies is going to affect revision in significant and important ways, we would expect some of this impact to be revealed in measures of task definition.

Our study compares the revision behavior of experienced and student writers, and examines the effect of word processing on both groups of writers. (We prefer to use the more descriptive terms "experienced writer" and "student writer" rather than the arguably value-laden terms "expert" and "novice.") We use a variety of measures to examine the effects of word processing on the task definitions of the revisers we studied.

Method

The tasks
The two revising tasks for this study were modeled after the revision task in Hayes, Flower, Schriver, Stratman, and Carey (1987). Hayes and his colleagues asked expert and novice writers to revise a letter so that part of the letter could stand alone as a pamphlet intended for first-year college students. The task required the revisers to deal with global issues (voice, genre, rhetorical stance, perceived audience, and style), as well as a set of local errors (e.g., spelling, punctuation, grammar, wordiness, and diction) that were planted in the text by the researchers.

In general, we attempted to create realistic writing tasks in which participants could engage in both rhetorical and stylistic revisions. The Eating Well task (see Appendix A) asked participants to read and revise a letter from the director of a local nutrition agency to the director of the local Meals-on-Wheels program. Participants were instructed to revise the letter so that it could be used as a nutrition pamphlet to be distributed by Meals-on-Wheels to its clients. The Job Placement task (see Appendix B) was similar. It asked participants to read and revise a letter to a college placement director from the placement director at another college. The participants were instructed to revise the letter so that it could be used as a pamphlet about job placement for seniors at their university. In each case, participants were asked to take an existing document and revise it for a different audience and purpose. An effective revision of either task would require changes in voice, genre, and point of view. In addition to these global problems, the two texts also included numerous planted local errors.

We should point out that our participants were not revising their own texts in this study; their revising behaviors may have been quite different had they been. However, in order to do a comparative study, we felt it necessary to have participants work from a common set of texts. Doing this allowed us to compare their behaviors, and the results of those behaviors, directly, because all participants faced identical drafts and a relatively well-defined rhetorical situation.

Design
We chose a 2-by-2 repeated measures design that would allow us to compare performance in two ways: first, between the two groups of writers (student writers vs. experienced writers), and second, within the groups themselves to compare each writer's performance in the computer condition with his or her performance when revising with pen and paper. Each participant had two revising sessions on consecutive days. The participants revised both texts, one with pen and paper and one on the computer. The two tasks (Eating Well vs. Job Placement) and the two conditions (computer vs. pen and paper) were counterbalanced for order. This design allowed us to compare the revision behaviors of each writer while he or she was revising in different conditions. Table 1 illustrates our experimental design.

Table 1
Experimental Design

                  Computer                  Pen & Paper

Experienced
E1, E5            "Eating Well" task        "Job Placement" task
E2, E6            "Job Placement" task      "Eating Well" task
E3, E7            "Eating Well" task        "Job Placement" task
E4, E8            "Job Placement" task      "Eating Well" task

Students
S1, S5            "Eating Well" task        "Job Placement" task
S2, S6            "Job Placement" task      "Eating Well" task
S3, S7            "Eating Well" task        "Job Placement" task
S4, S8            "Job Placement" task      "Eating Well" task

Participants
The sixteen participants in this study included eight student writers and eight experienced writers. All of the experienced writers had published in professional journals, were employed as technical writers, and/or were recommended as excellent writers by their supervisors. The student writers, second-semester first-year students at a mid-sized, private university, had recently completed the required first year writing course. All sixteen participants were paid for participating in the study.

Technology
Two technology issues were of concern to us: the type of equipment and the revisers' prior computer experience. First, for the word processing condition, writers used ANDREW, an educational word-processing program developed at Carnegie Mellon in conjunction with IBM (Morris, Satyanarayanan, Conner, Howard, Rosenthal, & Smith, 1986), operating on an IBM-RT with a nineteen-inch bit-mapped (black-on-white) display. This word-processing package has a mouse- and menu-driven interface, multiple window capabilities, and a display that allows writers to see almost an entire page of text at one time. Earlier research had found that such an interface offers advantages for both readers (Haas & Hayes, 1986) and writers (Haas, 1989a).

Second, to minimize interference from unfamiliar word-processing packages, we studied writers who were experienced with the ANDREW word-processing program. The experienced writers had all used ANDREW on a daily or almost daily basis for six months or more. Student writers had used the word-processing package in at least two courses--a training workshop and a college writing course--that were ANDREW-based. In addition, each writer successfully completed a pre-test of facility with the ANDREW word-processing program before participating in the revision study. The pre-test included typing, adding and moving text, cutting and pasting, and moving between windows and files.

Procedure

Following Hayes and his colleagues (1987), we collected protocols from the participants as they revised each of the two letters. Rather than measure changes between discrete drafts (Daiute, 1986; Hawisher, 1987b), or rely on retrospective interviews to capture decision-making processes (Lutz, 1987), we used think-aloud protocols to focus more directly on the revising behavior as it occurred. Although this process does not capture every thought that passes through a person's mind while engaging in a task, protocols do present a way to trace the thoughts that are passing through a writer's focus of attention while he or she works (Ericsson & Simon, 1984). An experimenter first demonstrated a think-aloud protocol for each participant who was unfamiliar with this procedure. Each protocol session was audiotaped, and the tapes were transcribed for analysis.

Analysis

Protocol measures
Following a procedure adapted from Hayes and his colleagues (1987), we first prepared the protocols in three steps:

  1. Dividing the protocols into numbered clause units. Each clause in a given protocol was bracketed and numbered as in Figure 1 below.

    [Figure 1 here]

    Figure 1: Coded Protocol Excerpt

     

  2. Coding each clause as one of four activities (see Figure 1 for examples of each).

  3. Dividing the comments into problem-solving episodes. An episode consisted of all the clauses in which the reviser was commenting on a single problem in the text. The raters coded independently and achieved a 0.92 direct agreement rate for clauses within episodes, again in a ten percent subset of the protocols. For example, in the segment of protocol presented in Figure 1, an experienced writer read a sentence from the Eating Well text (clauses 99-100). The raters judged that clauses 101-104 (previously identified as comments) indicated a problem-solving episode because the writer identified a problem and decided what action to take about that problem. The writer decided to make a note of his or her decision in clauses 105-106.

After the protocols were prepared, separate analyses of the protocol data were conducted, in order to yield the three measures described below:

Initial Task-Related Comments--We follow Hayes and his colleagues in identifying and examining an initial stage that begins when a writer starts reading the task directions and ends when the writer makes the first change in the text. Writers' comments and activities during the first few minutes of a revising session often give important indications of how they are defining a task. For example, Hayes et al. noted that writers who began making sentence-level changes immediately often spent the entire revising session focusing on local problems, whereas writers who read through the entire text before making changes usually made more numerous global revisions.

For our study, we categorized the comments in this initial segment using the same seven categories identified by Hayes et al.:

  1. Global Sense: This category was selected if the writer read through the letter completely at least once before making any attempt to revise.
  2. Goal Statements: These comments expressed a plan or goal for the task that was not explicitly provided in the task directions and that was not a mere paraphrase of these directions. For example, a participant might say, "My objective here is to really convince elderly people of the importance of good nutrition."
  3. Gist of Content/Purpose: Some writers summarized the contents of the letter or its purpose, such as, "Basically this just tells people about the advantages of starting a job search early."
  4. Audience: These comments focused on the audience's possible needs, as in, "This document ought to focus on the dangers of procrastinating, because these students need to be shaken up."
  5. Inventory of Problems Detected: Some writers built a list or schedule of things to do with the text. For example, "The first thing I'll have to do is remove all the spelling errors, then the vague pronouns. Then I'll try to get the larger organizational matters taken care of."
  6. Global Evaluation: This category refers to comments in which the writer mentioned some negative impression created by the text as a whole, or some negative trait that cannot be localized. For example, "Somehow the voice is wrong," or "This letter is really boring."
  7. Critical Comments: This category refers to comments in which the writer identified a particular fault to correct, though the writer did not immediately proceed with any correction, as in, "Those exclamation points have to go," or "That's a vague pronoun if ever I saw one."

Again, the protocols were coded by two independent raters who achieved a 0.93 rate of direct agreement for classifying task definition items.

Levels of Text Problems--Our second measure attempts to capture the range of writers' focus of attention as they revise. In a study of planning, Haas (1989b) found that writers planned significantly more at the surface level, and significantly less at the global level, when using a word-processing package alone, without paper supports, for composing. Further, writers sometimes report that the computer screen makes it difficult for them to "get a sense of the whole text" (Haas & Hayes, 1986). Finally, some previous studies addressing the impact of word processing on revision (e.g., Lutz, 1987) found that using the computer to revise caused writers to focus more on surface-level concerns.

In addition to the possible impact of computer displays on revisers' focus of attention, a number of revision studies (Stallard, 1974; Bridwell, 1980; Sommers, 1980; Faigley & Witte, 1981; Hayes et al., 1987) suggest that even when working with pen and paper, less experienced writers often define revision as error-hunting rather than as addressing global issues. Therefore, we had the coders classify each problem-solving episode according to the amount of text involved. Episodes were coded as either local (the episode involved one sentence or less) or global (the episode involved text in more than one sentence). A local problem, for instance, might be a misused word or a misspelling. In contrast, global problems include such things as information that is inappropriate to the audience and paragraphs placed in the wrong order. When identifying the level of text that each problem-solving episode involved, the coders achieved an agreement rate of 0.65 using Cohen's Kappa. [1]
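For readers who want to see how such reliability figures are computed, here is a minimal present-day sketch (not part of the original study; the episode codes below are hypothetical) of a direct agreement rate and Cohen's Kappa for two raters:

    from collections import Counter

    def direct_agreement(rater_a, rater_b):
        # Proportion of items to which the two raters assign the same code.
        return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

    def cohens_kappa(rater_a, rater_b):
        # Kappa = (po - pe) / (1 - pe): observed agreement corrected for the
        # agreement expected by chance, given each rater's category frequencies.
        n = len(rater_a)
        po = direct_agreement(rater_a, rater_b)
        ca, cb = Counter(rater_a), Counter(rater_b)
        pe = sum((ca[c] / n) * (cb[c] / n) for c in set(rater_a) | set(rater_b))
        return (po - pe) / (1 - pe)

    # Hypothetical codes for ten problem-solving episodes
    a = ["local", "local", "global", "local", "global",
         "local", "local", "global", "local", "local"]
    b = ["local", "global", "global", "local", "global",
         "local", "local", "local", "local", "local"]
    print(direct_agreement(a, b))  # 0.8
    print(cohens_kappa(a, b))      # ~0.52, lower than direct agreement

As note 1 explains, Kappa runs below the raw agreement rate because it discounts the matches that two raters would be expected to produce by chance.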

Detection and Correction of Errors--The tendency to work at a more local level when using a computer for writing also suggests that there may be different patterns in writers' detection and correction of errors on-line and on paper. If the computer does focus revision activity at a more local level, then we would expect writers to expend significantly more effort in identifying and fixing low-level text problems when they are working on a computer than when they are working on paper. However, the ability to rearrange chunks of text using the cut and paste features, which are now integral features of nearly all word-processing programs, may also make global text changes easier. The errors that we planted in the texts included local errors, such as misspellings and faulty parallelism, as well as local manifestations of global errors. Using both the protocols and the participants' revised texts, the coders checked each error to determine whether the writer corrected and/or detected the error. Each participant received a score for the proportion of the planted errors corrected and another score for the proportion of planted errors detected. The coders achieved a Cohen's Kappa reliability score of 0.78.

These three measures attempt to capture any effect that word processing may have on the way writers conceive of the task of revision. Instead of comparing the number and level of revisions when writers use a computer or work with pen and paper, our measures compare the scope of the problems writers attend to when revising on-line or with pen and paper.

We did not have a measure that tried to determine if writers revised more when using the computer or when working with pen and paper. While our student participants generally went through the texts on an error hunt, our experienced writers tended to recast the entire text from scratch. Given the difference in these revision strategies, we thought it unreasonable to try to determine how many revisions were made in the two conditions.

Statistical Analyses

The first analysis focused on whether technology or expertise affected writers' tendencies to engage in the seven initial task-related activities posited by Hayes and his colleagues. We totalled the number of times that each participant engaged in one of the initial task-related activities. We then performed a chi-square analysis on these numbers, looking for significant differences caused by technology or expertise.
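The reported statistic can be reproduced with a present-day sketch (not the authors' original procedure; it assumes the counts were arranged as a 2-by-2 table of the cell totals later reported in Table 2):

    # Chi-square test on total initial task-related activities,
    # assuming rows = writer group and columns = revising medium.
    from scipy.stats import chi2_contingency

    #                  Computer  Pen & Paper
    table = [[12, 15],   # experienced writers
             [ 8, 13]]   # student writers

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(round(chi2, 2), round(p, 3))  # 0.2, 0.658 -- clearly non-significant

Arranged this way, the cell totals yield the chi-square value of 0.20 reported in the Results section.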

For the next three measures, we performed three separate repeated-measures ANOVAs with one grouping factor. In each case, the repeated measure was performance in the computer condition versus performance using pen and paper, and the grouping factor was writer expertise (experienced versus student).

To look for differences in the Levels of Text Problems, we divided the number of global problem-solving episodes in each protocol by the total number of problem-solving episodes in that protocol. We then performed a repeated-measures ANOVA with one grouping factor on these proportions. Finally, we performed separate ANOVAs on the Proportion of Planted Errors Corrected--to see whether or not technology or expertise affected writers' tendencies to correct the problems that we planted in the text--and on the Proportion of Planted Errors Detected--to see whether or not either variable affected writers' tendencies to detect these planted problems. Because we were working with proportions, we performed a standard logit transformation on all of the proportions before running the ANOVAs to make the variances comparable across the four cells (experienced writer on computer, experienced writer with pen and paper, student writer on computer, student writer with pen and paper).
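As a concrete illustration of this procedure, here is a minimal present-day sketch (not the authors' original analysis; the data values, the adjustment for proportions of exactly 0 or 1, and the choice of the pingouin library are all assumptions for illustration):

    import numpy as np
    import pandas as pd
    import pingouin as pg

    def logit(p, n):
        # Standard logit transform, ln(p / (1 - p)). Proportions of exactly
        # 0 or 1 are first pulled slightly toward 0.5 (a common adjustment)
        # so that the logarithm is defined.
        p = (p * (n - 1) + 0.5) / n
        return np.log(p / (1 - p))

    # Hypothetical long-format data: one proportion per writer per condition.
    df = pd.DataFrame({
        "writer":     list(range(16)) * 2,
        "expertise":  (["experienced"] * 8 + ["student"] * 8) * 2,
        "technology": ["computer"] * 16 + ["pen_paper"] * 16,
        "prop":       np.random.uniform(0.1, 0.9, 32),  # stand-in scores
    })
    df["logit_prop"] = logit(df["prop"], n=20)  # n = items scored (assumed)

    # One repeated measure (technology) and one grouping factor (expertise).
    aov = pg.mixed_anova(data=df, dv="logit_prop", within="technology",
                         subject="writer", between="expertise")
    print(aov)

The resulting table reports an F statistic for expertise (the grouping factor), for technology (the repeated measure), and for their interaction, matching the structure of Tables 4 through 6 below.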

Results

Initial task-related comments
Table 2 shows the number of participants who engaged in each of the seven initial task-related activities. For example, in the computer condition, five experienced writers read the letter completely through before attempting any revision, while only four experienced writers did so in the pen and paper condition. In each of the four cells, the instances were summed in order to get a comparative score.

Table 2
Effects of Technology and Expertise on Writers' Task Definitions

Task Definition Activity                                        Computer   Pen & Paper

Experienced
1. Reads letter completely before attempting to revise.            5           4
2. Establishes process plans or goals that are more specific
   than directions before revising.                                1           3
3. Offers "gist" of letter content or purpose.                     1           1
4. Establishes audience needs before attacking first sentence.     1           1
5. Offers inventory of problems detected in the first read.        0           0
6. Detects global problems (other than audience problems)
   during or after first read.                                     1           1
7. Makes critical comments (other than global or audience)
   during or just after first read.                                3           5
Total number of instances                                         12          15

Students
1. Reads letter completely before attempting to revise.            2           4
2. Establishes process plans or goals that are more specific
   than directions before revising.                                2           3
3. Offers "gist" of letter content or purpose.                     0           0
4. Establishes audience needs before attacking first sentence.     0           0
5. Offers inventory of problems detected in the first read.        0           0
6. Detects global problems (other than audience problems)
   during or after first read.                                     0           0
7. Makes critical comments (other than global or audience)
   during or just after first read.                                4           6
Total number of instances                                          8          13

As Table 2 shows, the totals in the four cells were small and tended to be similar. The experienced writers seemed to engage in more of the initial task-related activities, but the chi-square analysis revealed no significant differences between the groups of writers (experienced vs. student) or between the technologies (computer vs. pen and paper) [X² = 0.20, n.s.]. In general, we did not find the significant differences between experienced and student writers that Hayes and his colleagues observed. However, there were some small differences between the groups and between the two conditions; for instance, the students did not engage in any of the more global task-related activities, while at least one experienced writer engaged in most of them in each condition (see items 3-6 in Table 2).

Table 3
Means of Proportions for Levels of Problem Representation,
Correction of Planted Errors, and Detection of Planted Errors

               Global Problem-      Planted Errors      Planted Errors
               Solving Episodes     Corrected           Detected
               Exper.   Student     Exper.   Student    Exper.   Student
Computer       .349     .264        .923     .531       .198     .260
Pen & Paper    .411     .274        .892     .598       .226     .270

Levels of problem representation
The first column of Table 3 shows the mean proportions, in each of the four cells, of the participants' problem-solving episodes that involved problems above the sentence level. In both conditions, the experienced writers tended to focus on global problems more than the student writers did, and an ANOVA found that the difference between experienced and student writers was significant [F(1, 14) = 6.161, p < 0.05]. Though the experienced writers seemed to focus more on global problems when working with pen and paper (41.1%) than when working on the computer (34.9%), the difference between the two media was not significant [F(1, 14) = 1.327, n.s.]. As Table 4 shows, there was also no significant interaction between technology and expertise [F(1, 14) = 0.145, n.s.]. Therefore, experienced writers focused significantly more on global problems than did the student writers, but the technology used had no effect on the level of focus for either group.

Table 4
Effect of Expertise and Technology on the Proportion of Global
Problem-Solving Episodes to Total Problem-Solving Episodes

SOURCE                   SS        df       MS         F        p

Between
  Expertise            11.445       1     11.445     6.161    .026
  Error (between)      26.004      14      1.857
Within
  Technology            1.531       1      1.531     1.327    .269
  Expertise × Tech.     0.167       1      0.167     0.145    .709
  Error (within)       16.148      14      1.153

Correction of planted errors
The second column of Table 3 shows that, in both conditions, the experienced writers corrected many more of the planted errors than did the students. Again, the ANOVA found that the difference between the experienced writers and the student writers was significant [F(1, 14) = 8.157, p < 0.05]. Experienced writers corrected 92.3% of the errors when working on the computer, and 89.2% when working with pen and paper. Student writers corrected 53.1% of the planted errors on the computer, and 59.8% when working with pen and paper. Table 5 shows that the difference in correction rates between the two media was not significant [F(1, 14) = 0.000, n.s.]. There was also no significant interaction between technology and expertise [F(1, 14) = 4.059, n.s.]. Therefore, although the experienced writers corrected more of the planted errors, the technology used in this study did not significantly affect either group's tendency to correct these errors.

Table 5
Effect of Expertise and Technology on the
Proportion of Planted Errors Corrected

SOURCE                   SS        df       MS         F        p

Between
  Expertise            19.575       1     19.575     8.157    .013
  Error (between)      33.598      14      2.400
Within
  Technology            0.000       1      0.000     0.000    .996
  Expertise × Tech.     1.265       1      1.265     4.059    .064
  Error (within)        4.363      14      0.312

Detection of planted errors
In contrast to the correction of errors, the third column of Table 3 shows that the student writers explicitly identified more of the planted errors in each condition. Though the experienced writers corrected more of the planted errors, the students explicitly referred to more of the errors in their protocols. The ANOVA (Table 6) found that the students' apparent advantage over the experienced writers was significant [F(1, 14) = 8.317, p < 0.05]. Again, though, there was no significant computer effect on the percentage of the planted errors that were detected [F(1, 14) = 2.538, n.s.]. There was also no significant interaction between technology and experience [F(1, 14) = 0.674, n.s.]. Thus, the student writers detected more errors, whereas the experienced writers corrected more of these errors.

Table 6
Effect of Expertise and Technology on the
Proportion of Planted Errors Detected

SOURCE                   SS        df       MS         F        p

Between
  Expertise            35.667       1     35.667     8.317    .012
  Error (between)      60.041      14      4.289
Within
  Technology            5.350       1      5.350     2.538    .133
  Expertise × Tech.     1.420       1      1.420     0.674    .426
  Error (within)       29.505      14      2.108

In summary, there was no significant difference between the initial task-related comments of the experienced and student writers. The experienced writers tended to focus more on global text problems, the students more on local ones. Experienced writers corrected more of the planted errors, but student revisers explicitly detected more of these errors. The use of a computer vs. pen and paper had no significant effect on any of these measures.

Discussion

Our results point to two important conclusions. First, experienced writers define this type of revision task to include more global-level changes, while students tend to focus almost exclusively on local-level concerns. Second, the use of a computer does not change these task definitions.

Differences between experienced and student writers
We did not find the significant differences between the number of initial task-related comments made by experienced writers and students that Hayes and his colleagues found. This may be because, in our study, each participant performed a very similar task twice (once in each condition). It seems unlikely that such task-related comments would be made in the second session--the task would be familiar at this point. This practice effect, then, may have watered down any differences between our experienced writers and student writers.

Similar to Hayes and his colleagues, we found that experienced writers focused more on global problems when revising. The student writers tended to work through the text, focusing on problems at or below the sentence level. In contrast, the experienced writers' problem-solving episodes were focused more on matters above the sentence level. In fact, many of the experienced writers completely restructured the organization of the text, creating their own formats and organizational schemes.

As in the Hayes et al. study, the experienced writers were more successful than students at eliminating errors that we planted in the text. However, in the protocols, they did not explicitly identify more of the planted errors; in fact, they detected significantly fewer. We believe that this apparent contradiction is another artifact of the difference in focus between the experienced and student writers. Although the students tended to hunt through the text, looking for grammatical and stylistic problems, experienced writers tended to restructure major portions of the original text, thus eliminating many of the planted errors without verbalizing a detection. Only after they had completely recast the information in the original text did they look through their own versions for low-level grammatical or stylistic problems. At this point, most of the planted errors had already been eliminated.

In short, the differences that we found between the behaviors of the two groups are consistent with the findings of Hayes and his colleagues, who found that experienced writers tended to treat revision as a whole-text task, whereas student writers treated revision as a sentence-level task. They conclude, "Experts perform better than novices not just because certain of their subskills are better than those of novices, but also and more importantly because they are performing a better task--that is, one better suited to improving text" (1987, p. 233).

The computer's effect on revision
Although there were strong differences between experienced and student writers in all of our measures, we found no significant difference between the revising processes of writers working with pen and paper and the same writers revising on a computer. A number of our participants, both experienced writers and student writers, commented in their protocols about differences between the two conditions. For instance, in the pen and paper condition, they often commented on the constraints of marking up the written text. In the computer condition, several of the writers adjusted the size of text windows to allow the maximum amount of text to appear on the screen. (Remember that the technology we used allowed the participants to see nearly an entire page of text at one time.) However, these differences between the two technologies did not cause the revisers to focus their attention any more (or any less) on sentence-level issues. This finding agrees with Hawisher's study (1987b) and conflicts with Lutz's (1987) finding that writers focused more on surface-level concerns when revising on-line. While the differences we found between experienced and student writers were similar to those found by Hayes and his colleagues, we found that the difference in technologies did not significantly affect the way either group went about the task of revision.

Conclusion: Task Definition and Technology

We believe that these results underscore and build upon research suggesting the pivotal role that task definition plays for writers (Hayes et al., 1987; Wallace & Hayes, 1991; Flower et al., 1990). Our two groups apparently had different ideas about what the task of revising entails. These differences in task definition seemed to be much more important than differences in revising medium for determining how the participants would react to the task.

Of course, as with any study, ours leaves questions yet to be explored. First, the revisers we studied were not revising their own writing. In order to closely examine what revisers were doing in each of our conditions, we controlled the revising task, thereby sacrificing some of the "naturalness" of the writing situation. Further, writers revising their own texts may have had a higher level of commitment to the task and greater knowledge of their own intentions as writers. Future research should explore how computer technologies affect writers' revising of their own texts. We believe that one contribution of this study is to point out the importance of task definition as a variable to explore in these future studies.

A second, related point concerns the interplay of revision with other composing processes. In this study we isolated revision as a post-generating process, but in fact the ways that writers define the task of revision--and how they use either word processing or pen and paper to revise--may be bound up in important ways with how they plan and generate, and replan and regenerate, their texts. We suspect that it is in the planning and generating of written texts that word-processing technology may have its most profound effects. Haas's research on writers composing in computer contexts (1989a, 1989b, 1990), which showed important effects of medium on planning, supports this hypothesis.

Finally, given the rapid advance of computer technology, generalizing from results obtained with one interface to make assertions about computers in general may be unwise. In this study of revision, we used a high-resolution, black-on-white display and a large screen that gave writers the ability to see nearly an entire page of text at one time. Our technology also allowed writers to change the font and size of the text on the screen, thereby manipulating the readability of the text on screen. Using this technology, we saw no differences in the writers' revising when they used word processing or pen and paper; however, these results might not replicate with different computer systems and visual displays. It could be that one reason for the conflicting results in word-processing research is the variety of technologies that have constituted the computer condition. We need to know, not whether computers in general affect the writing process, but what aspects of the technology are important.

The configurations of computer tools for writing are diverse and changing. A true understanding of technological impact on writing will require careful theoretical, experimental, and observational examinations of how various computer tools and their different interface features affect the writing process. Further, how different technologies support or constrain particular writers doing particular tasks in particular situations is a rich area for further research. Understanding the complex, multi-faceted interactions among writers, writing technologies, and writing situations will continue to challenge literacy educators and researchers. Such an understanding is critical if we are to employ and, when necessary, to manipulate technology to meet our pedagogical goals.

Charles Hill teaches at Carnegie Mellon University. David Wallace teaches at Iowa State University. Christina Haas teaches at Pennsylvania State University.

Notes

  1. Cohen's Kappa adjusts direct-match agreement rates for the agreement that raters would be expected to reach by chance, given the number of categories used for analysis. Thus, it is a more stringent measure of agreement; Cohen's Kappa scores are usually 10 to 15% lower than direct-match agreement rates. Cohen's Kappa can only be used when raters score or sort a predetermined number of items.
This research was funded by a grant from the Fund for Improvement of Post-Secondary Education (FIPSE) under grant number G008642161. The authors wish to thank Christine Neuwirth, John R. Hayes, Sarah Sloane, Mike Meyers, and Nancy Kaplan for their help with various aspects of this study.

References

Bridwell, L. S. (1980). Revising strategies in twelfth grade students' transactional writing. Research in the Teaching of English, 14, 197-222.

Daiute, C. A. (1986). Physical and cognitive factors in revising: Insights from studies with computers. Research in the Teaching of English, 20, 141-159.

Emig, J. (1971). The composing processes of twelfth graders. (National Council of Teachers of English Research Report No. 13). Urbana, IL: National Council of Teachers of English.

Ericsson, K. A. & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge: MIT Press.

Faigley, L. & Witte, S. (1981). Analyzing revision. College Composition and Communication, 32, 400-412.

Faigley, L. & Witte, S. (1984). Measuring the effects of revision on text structure. In R. Beach & L. S. Bridwell (Eds.), New directions in composition research (pp. 95-108). New York: Guilford.

Flower, L., Stein, V., Ackerman, J., Kantz, M. J., McCormick, K., & Peck, W. C. (1990). Reading-to-write: Exploring a cognitive and social process. New York: Oxford University Press.

Haas, C. (1989a). Does the medium make a difference? Two studies of writing with pen and paper and with computers. Human-Computer Interaction, 4, 149-169.

Haas, C. (1989b). How the writing medium shapes the writing process: Effects of word processing on planning. Research in the Teaching of English, 23, 181-207.

Haas, C. (1990). Composing in technological contexts: A study of note-making. Forthcoming in Written Communication.

Haas, C. & Hayes, J. R. (1986). What did I just say? Reading problems in writing with the machine. Research in the Teaching of English, 20, 22-33.

Hawisher, G. E. (1986). Studies in word processing. Computers and Composition, 4(1), 6-31.

Hawisher, G. E. (1987a). Research update: Writing and word processing. Computers and Composition, 5(2), 7-27.

Hawisher, G. E. (1987b). The effects of word processing on the revision strategies of college freshmen. Research in the Teaching of English, 21, 145-159.

Hayes, J. R., Flower, L., Schriver, K. A., Stratman, J. F., & Carey, L. (1987). Cognitive processes in revision. In S. Rosenberg (Ed.), Advances in applied psycholinguistics: Volume 2: Reading, writing, and language learning (pp. 176-240). New York: Cambridge University Press.

Lutz, J. (1987). A study of professional and experienced writers revising and editing at the computer and with pen and paper. Research in the Teaching of English, 21, 398-421.

Morris, J., Satyanarayanan, M., Conner, M., Howard, J. H., Rosenthal, D., & Smith, F. D. (1986). ANDREW: A distributed personal computing environment. Communications of the ACM, 29(3), 184-201.

Perl, S. (1979). The composing processes of unskilled college writers. Research in the Teaching of English, 13, 317-336.

Rohman, D. G. & Wlecke, A. O. (1964). The construction and application of models for concept formation in writing. (U. S. Office of Education Cooperative Research Project No. 2174). East Lansing, MI: Michigan State University.

Scardamalia, M., & Bereiter, C. (1983). The development of evaluative, diagnostic and remedial capabilities in children's composing. In M. Martlew (Ed.), The psychology of written language: A developmental approach (pp. 67-95). New York: Wiley.

Sommers, N. I. (1980). Revision strategies of student writers and experienced adult writers. College Composition and Communication, 31, 378-388.

Stallard, C. (1974). An analysis of the writing behavior of good student writers. Research in the Teaching of English, 8, 206-218.

Sullivan, P. A., & Porter, J. E. (1990). How do writers view usability information?: A case study of a developing documentation writer. In SIGDOC '90 Conference Proceedings (pp. 29-35). New York: Association for Computing Machinery, Inc.

Wallace, D. L., & Hayes, J. R. (1991). Redefining revision for freshmen. Research in the Teaching of English, 25, 54-66.

Witte, S. P. (1987). Pre-Text and composing. College Composition and Communication, 38, 397-425.

Appendix A
Revision Task #1 - Eating Well

Squirrel Hill Dietary Consultants
1326 Murray Avenue
Pittsburgh, PA 15206
Telephone (412) 652-3317

October 11, 1987

Gary Beardsley
Director, Meals-on-Wheels
1421 Fifth Avenue
Pittsburgh, PA 15206

Dear Dr. Beardsley,

We at Squirrel Hill Dietary Consultants would like to propose an idea that would improve the eating habits and health of the independent elderly who live in our community.

******************************************************
In our offices, we tend to see only elderly patients whose health is already declining. But what about the well elderly in our community? Distributing a pamphlet outlining healthy eating habits to Squirrel Hill's elderly population would provide a genuine service. Meals-on-Wheels would be a good means for distributing such a pamphlet, especially since you would be targeting the well elderly whose cooking habits may already be slipping. One of your volunteers could write the pamphlet, perhaps called Eating Well Today for Happier Tomorrows, based on the following guidelines from Laurel's Kitchen, a cookbook we often recommend:

In summary, our advice goes like this: eat a good variety of whole, fresh, natural foods (vegetarian, of course) that are cooked with love and taken in temperate quantity and that just about says it, but here are a few of the hidden implications and some specific pointers. Eliminate food that are neither whole nor wholesome: white flour, polished rice, and refined sugar, for instance. In fact, cut all sugar, and honey too, down to rock bottom. Avoid all highly processed foods: frozen, canned, or dehydrated, for example. They've lost valuable nutrients, and their packaging wastes resources.

Cut way back on fats of all kinds, saturated fats in particular. The American diet, measured in calories, is 40 to 45% fat, so most of us have a distorted idea as to what's "normal" and "okay." An American should probably decrease their salt consumption too. Years of pretzels and potato chips have accustomed Americans in particular to a highly dangerous intake. Use a gentle hand with all spices: an overstimulated palate is hard to control and insensitive to the subtler flavors of whole, fresh food.

As a similar plan, you may like to replace sour cream with yogurt--start by mixing them half and half. In fact, give yogurt a prominent place in your diet, made at home with dried skim milk, it's inexpensive and healthful.

Once you've cut out all the wrong foods, you're halfway there. Now all you have to really to is get the right balance and variety of what remains. Aquaint yourself with the vegetarian Four Food Groups (vegetables; fruit; milk and eggs; grains, legumes, nuts, and seeds), and it's a snap. The diversity of these whole foods assures you of all the vitamins, minerals, and protein you need each day. You may adjust the portion sizes to your own calorie needs: lose weight safely, for example, by eating smaller quantities from each group. All but eliminate eggs and include very little butterfat and your diet will be extremely low in cholesterol and therefore much healthier than a meat-based diet.

Soybeans are the cheapest form of protein available. Cooked and pureed, they can be added to many foods. Keep a batch of ground soybeans in the refrigerator and you'll be surprised how many ways you find to creatively use them: in soups, spreads, casseroles, patties, and breads. Texturized soy protein is replacing meat in many homes. We're glad it's replacing meat, but we haven't been able to get very interested in it--surely if you're sincere about not wanting to eat your fellow creatures you won't want to eat something made to look and taste just like him either.

As you can tell from the above, we recommend eating no more than four pieces of fruit daily. Considering the calories involved, we know vegetarians who eat from six to eight pieces, but that's excessive, too. Fresh and dried fruit is very often the last stronghold of a sweet tooth that's overdeveloped--as whose isn't these days?

Fresh milk has a nutritional edge over dried milk and other more processed dairy products. For a great many of us, milk-drinking got lost in the shuffle around our sixteenth year, and we balk a little at resuming the habit. Give it a second chance, though, especially if you have children. Nothing will persuade them of it's value as effectively as your example. After all, would you have stopped drinking if a) someone hadn't told you it would make you fat, or b) you hadn't noticed that grown-ups all drank coffee instead?

A little cheese goes a long way towards meeting your protein needs--one ounce contains 6.5 grams of protein while a whole glass of milk only contains 8.5 grams. A lot of it, though, will pile on unwanted pounds, because it is high in fat--and saturated fat at that. Cheese is expensive, too. Still and all, cheese is a help to you if you're making the transition to a vegitarian diet. But low-fat cottage cheese can take its place a good deal of the time.
*****************************************************

You might want to include a sample menu for one day. We at Squirrel Hill Dietary Consultants would be happy to look over such a pamphlet after your staff has written it. A happy, healthier community is a benefit to us all.

Sincerely,


Janice Ruben
Director, S.H.D.C.

 
 

Appendix B
Revision Task #2 - Job Placement

Iowa Central College
Community College District No. 513
P.O. Box 1400
Des Moines, Iowa 71243
Telephone (817) 697-3211

August 27, 1987

Carol Rodin
Director, Career Placement Center
Carnegie-Mellon University
Schenley Park
Pittsburgh, PA 15213

Dear Carol,

Congratulations on your great placement record for CMU seniors who graduated in May 1987!

******************************************************
I was quite pleased to see the article in the recent In Pittsburg (Vol. 15, No. 4, June 1987) on the new surge in employment for recent graduates of CMU. The article doesn't give much background about what's behind the change, but I think it's a great start! I think you're right about students needing to be more involved in the job search from early on. For myself, I have found the only way to get a large number of seniors interested in preparing early for the job search is to hammer home the idea that to prepare early is to get a headstart on one of the most important events of your life. Students interested in graduate school or fellowships have to get on the ball early too, and I think our job is partly to attract seniors to the Placement Center early in the fall.

Most seniors are reluctant enough to think about the future, I bet especially students in a cozy environment like CMU. I've found that many seniors seem to have a lot of misconceptions about the whole process of looking for a job, time required, resumes, etc. Our Career Placement Center found it very useful to make a brief handout giving the reasons students should prepare early for their careers, I think it was called, "Facts and Myths about the Job Search: How the Career Center Can Work for You." It was especially helpful for returning seniors, and a relatively inexpensive method of getting their attention. We used all the brochures, and I seem to have misplaced the original, but the ideas we wrote up were roughly the following:

Many naeve college seniors possess the preconception that it is not necessary to prepare in advance in order to be successful in the job hunt and get a job. They fail to realize that jobs at I.B.M. or other Fortune 500 companies don't just happen--they have to get out there and hustle for them. Much of the job search is much more competitive than many seniors think. A college senior who is hired for a job that starts immediately after graduation were hired long ago in the previous fall. As noted above, for most fellowships too, like the Fullbright, all application materials must be turned in by October. Even some students with excellent college records fail to get a job or get into graduate schools of their choice because they waited until it was too late.

I don't want to infer that if they waits until the spring to apply for a job they won't get one. But applying for fellowships is a little different, with forms to fill out, recommendations to be sent in, and sometimes with trips to other parts of the country for interviews being necessary, and there really isn't any question, signing up early is better than signing up late--whether we're talking about fellowships or jobs.

Perhaps the most influential criteria affecting seniors' reluctance to apply early for jobs is their fear of leaving college. Many college seniors think that if they put it off looking for a job, maybe they won't have to face up to the fact that they're really finally graduating. It is hard for them to really face up to the fact that they're really leaving school, and that looking for a job early is worth the time invested. However, preparing early actually takes stress OFF of seniors because they are prepared. And feeling like they're in the same boat as a lot of other people looking for jobs can actually be a support when they get to compare experiences.

You can also gain other advantages from early involvement in the job search process. You can learn how to clearly write a good resume and define your career objectives. You learn how to get your resume off the ground by making it visually appealing and not burying crucial information. Looking at your resume, you can let someone in the Career Placement Office diagnose potential trouble spots. All these things are advantages when you're looking for a place in the working world--which is why seniors are really better off when they come in for help early.

There is, however, one comment which I frequently hear and which is more convincing than all these arguments, and that's a comment I hear time and time again from seniors who don't start thinking about next year until late in the spring--"Oh, I wish I'd started earlier!"

These arguments tend to make a very convincing little handout: we had 30% more seniors come in during September of their senior year than the year before. By the way, you should remember to include a list of companies that will be recruiting at CMU in the fall. It adds a little incentive for them to get their acts together.
******************************************************

I hope this method of recruitment works as well for you as it has for us. I'll send you a copy of the brochure if the original turns up. Good luck, and best wishes for your alumni's continued success.

Warmly,

Dana Yeager
Director, Career Center