Risk Taking, Revising, and Word Processing

Delores K. Schriner

Revising, according to John J. Ruszkiewicz, is risky business that comes with no guarantees of success. Student writers who confine their revisions to insubstantial alterations may simply be minimizing risk by "playing the odds, plotting the incentive for change against the choices available to them and going with the sure bets" (p. 146). Although Ruszkiewicz describes ways composition teachers can, through their instructional methods, help students learn to take risks when revising, some have claimed that risk-taking behavior when revising increases when student writers work at a computer. Colette Daiute, for example, finds that "the text editor helps writers take risks because the consequences of making mistakes are trivial for their hands.... This physical ease of revising encourages writers to experiment and to view their writing as dynamic" (pp. 136-137). Other evidence, however, suggests that although student writers display increased risk-taking behaviors when revising at the computer, their changes do not result in noticeably improved texts (Collier, 1983; Harris, 1985).

These studies suggest that a reduced sense of risk and the consequent willingness to experiment with changes to one's text may lead merely to more abundant revisions rather than to more appropriate ones.

During the past year at The University of Michigan, my colleagues and I studied the relationship between the use of word processing and revising behaviors. The questions that formed the basis for this project included:

  1. What revision choices are student writers making when revising at word processors?
  2. Are these choices, in fact, riskier than those choices made by students revising with pencil and paper?
  3. What are the rhetorical consequences of the revision choices made by the writers working at a computer terminal as opposed to their "traditional" counterparts?

Our evidence suggests that the computer may predispose individuals to take what feels like, or appears to be, greater risks when revising; however, the outcome as measured by ratings of the final papers indicates that the students working at computers are not any more successful in selecting appropriate revisions than their "pencil and paper" peers.

DESIGN OF THE STUDY

The study involved 88 first-year students at The University of Michigan, who were enrolled in four sections of introductory composition (English 125) taught by two experienced instructors. Each instructor taught one experimental computer section and one traditional control section using the same syllabus, texts, and assignments. The students in the computer sections were required to draft, write, and revise their papers on word processors; however, every attempt was made to keep all other treatments as similar as possible. At the beginning of the semester, students in the computer sections learned to use the Textra word-processing program on Zenith computers. A laboratory of Zenith computers was reserved for students enrolled in the computer sections. Although the students in the experimental sections were self-selecting, examination of their placement scores indicated that their writing abilities were rated only slightly above the average of those in the non-computer sections.

At the beginning and end of the semester, students in all four sections were administered a questionnaire that assessed their attitudes toward writing, revising, and the use of word processing.

First and final drafts of papers written at mid-term and at the end of the semester were also collected from all students. The collection of papers was delayed until mid-term to give students in the experimental sections ample time to become proficient at word processing. All drafts were edited in peer groups and commented on by the instructors before the students revised and submitted final drafts. Because we were not, at this stage of the research project, looking for changes in the essays over time, the results presented here do not reflect differences between the two sets of papers; rather, we treated the texts as a whole, looking only for changes between drafts.

Each draft was read and analyzed twice (in some cases, a third reading was necessary) by trained readers who were lecturers or instructors in the English Department Introductory Composition Program. The readers analyzed the drafts in the following ways:

  1. Revisions made between the first and final drafts were coded by operation (addition, deletion, substitution, alteration, reduction, expansion, or order shift) and by level (surface, lexical, phrase, clause, sentence, or multi-sentence) (Bridwell, 1980).

  2. First and final drafts were holistically scored using The University of Michigan English Composition Board's placement criteria. The readers then wrote a short evaluation assessing the effect of the revisions on the purpose, audience, structure, and coherence of the text.

ATTITUDES TOWARD WRITING: QUESTIONNAIRE RESULTS

Attitudes toward writing and revising were gauged with a series of statements (e.g., "I think revising is easy."), which students rated on a scale of one to seven, with one being the most negative and seven the most positive. Considering all students in the study, there were significant differences between the pre- and post-test ratings of how much they liked writing and revising and how easy they found revising. (All differences reported throughout this essay are significant at the .01 level, except where otherwise noted.)
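As an illustration of the kind of pre- and post-test comparison reported above, the sketch below runs a paired t-test on hypothetical Likert ratings. The article does not say which statistical test was used, so the choice of a paired t-test, the sample ratings, and the Python/scipy tooling are all assumptions offered only to make the comparison concrete.

    # Illustrative sketch only: hypothetical ratings and an assumed paired t-test;
    # the study reports significance at the .01 level without naming its test.
    from scipy.stats import ttest_rel

    pre  = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]   # 1-7 ratings at the start of the term
    post = [5, 6, 4, 4, 3, 6, 5, 4, 5, 6]   # the same students at the end of the term

    t_stat, p_value = ttest_rel(post, pre)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f} (significant at the .01 level if p < .01)")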

Although students in all sections found writing compositions more interesting by the end of the semester, the computer sections showed a higher degree of interest on both the pre- and post-tests. The experimental group's ratings increased from 4.2 to 4.8, while the control group's ratings increased from 4.0 to 4.5.

There were also significant differences in how much the students said they liked to revise and how easy they thought it was to revise. Students in the computer sections increased their ratings of how much they liked to revise from 3.9 to 4.5 and of how easy revising seemed from 3.5 to 3.9. Students in the non-computer sections showed a significant decrease in their ratings of how easy it was to revise, from 3.9 to 3.2, and remained the same in their ratings of how much they liked to revise, 3.5 on both the pre- and post-tests.

Students in the experimental sections, for the most part, liked using word processing, felt comfortable with the transition from paper and pen to keyboard and screen, and, as a result, claimed to approach their writing tasks with greater enthusiasm. Our analysis of the revisions made by students in all sections, however, confirmed what common sense alone might tell us: there is no simple one-to-one correlation between how positively one feels about writing and revising and how well one writes and revises.

QUANTITATIVE ANALYSIS OF REVISIONS

The coding of between-draft revisions of the two papers collected from each student in the four sections revealed further significant differences between the experimental and control groups. Students in the computer sections did indeed experiment with higher-level revisions, whereas students in the non-computer sections stayed with the "sure bets," limiting their revisions largely to surface-level concerns (Ruszkiewicz, 1982).

Students in the control sections made significantly more revisions at the lower levels of the text (surface, lexical, phrase, and clause) than did students in the experimental sections. Lower-level revisions accounted for 79% of the revisions made by the control sections and 68% of the revisions made by the experimental sections. Higher-level revisions (sentence and multi-sentence) accounted for 32% of the revisions made by the experimental sections and 21% of the revisions made by the control sections.

Of the lower-level revisions made by the non-computer groups, those at the lexical level were the most frequent, accounting for 36% of the total number of revisions. Lexical changes accounted for 29% of the total number of revisions made by the computer groups. (See Figure 1.)

Level                Computer    Non-Computer
Multi-Sentence          21%          11%
Sentence                11%          10%
Clause                   6%           7%
Phrase                  20%          22%
Lexical                 29%          36%

Figure 1: Revisions by text level (percentage of total revisions)
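To make the tallying behind Figure 1 concrete, the sketch below shows how revisions coded by operation and level (in the manner of Bridwell, 1980) might be converted into percentage-by-level figures. The coded data, the field layout, and the use of Python's collections.Counter are illustrative assumptions, not the instruments the trained readers in this study actually used.

    # Illustrative sketch only: hypothetical coded revisions, not the study's data.
    from collections import Counter

    LEVELS = ["surface", "lexical", "phrase", "clause", "sentence", "multi-sentence"]

    # Each coded revision is an (operation, level) pair recorded by a trained reader.
    coded_revisions = [
        ("substitution", "lexical"),
        ("addition", "multi-sentence"),
        ("deletion", "phrase"),
        # ...one entry for every change found between the first and final drafts
    ]

    def percent_by_level(revisions):
        """Return each level's share (in percent) of the total revisions."""
        totals = Counter(level for _operation, level in revisions)
        n = sum(totals.values()) or 1   # guard against an empty list
        return {level: round(100 * totals[level] / n, 1) for level in LEVELS}

    print(percent_by_level(coded_revisions))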

In terms of revision operations, there were significant differences between the groups in three areas: addition, expansion, and substitution. The computer sections made more additions at all levels except the clause level, where there was no difference between the groups. In terms of expansion, the computer groups made more at the multi-sentence level, while the non-computer groups made significantly more at the lexical level. Finally, the non-computer group made more sentence-level substitutions, and

These differences, while statistically significant, certainly do not show a landslide victory for the assertion that greater and more substantial revisions occur when students revise on computers. In fact, in terms of the total number of revisions, the non-computer group made significantly more: an average of 37.4 revisions per paper for the computer group versus 63.8 revisions per paper for the non-computer group. The figures do, however, corroborate Collier's (1983) observation that the complexity of revision operations increased for writers using word processing. One may also draw some similarities between Sommers' description of the revision strategies of basic and experienced writers and the revision strategies of the control and experimental groups. The writers in the control sections, like the basic writers in Sommers' (1980) study, placed greater emphasis on the selection and rejection of words, while writers in the experimental sections revised more at the sentence and multi-sentence levels. The mere increase in the number and complexity of revision operations, however, does not necessarily correlate with the production of a mature, well-reasoned text. The qualitative analysis of the revisions provided a clear reminder that the use of computers comes with no guarantees of more successful writing. The computer, as Rodrigues (1985) has noted, cannot and does not teach students how to write better.

QUALITATIVE ANALYSIS OF REVISIONS

On a scale of 1 to 5, with 5 representing the lowest score, analysis of the holistic ratings showed that students in the computer sections received significantly higher first-draft scores than did those in the control sections (2.9 for the computer sections, 3.2 for the non-computer sections). Final-draft scores, however, revealed no significant differences between the two groups, with both receiving an average holistic score of 2.5. The fact that the computer sections revised at higher levels of the text and received higher first-draft scores than the control groups, therefore, did not make a difference in the rating of the final product.

Examination of the trained readers' analyses of the effects of the revisions showed that the computer groups received higher first-draft scores largely because many of the surface problems of spelling, punctuation, and sentence structure apparent in the first drafts of the non-computer sections had already been edited out. Although students were instructed to postpone surface-level editing until the final stages of composing, this postponement did not occur with writers in our experimental sections.

Further analysis of the between-draft revisions revealed certain patterns in the revision strategies of writers in the experimental sections that were somewhat problematic. The trained readers' comments on the effects of the revisions were tabulated, revealing the following patterns:

  1. Although writers in the computer sections engaged in more movement of larger units of text when revising, the movement often resulted in a loss of a clear sense of the writer's rhetorical strategies. Thirty percent of the readers' comments indicated shifts in perspective and/or changes in direction, as well as fewer cohesive ties or networking among the various ideas presented by writers in the computer sections; only 23% of the comments indicated similar problems in the non-computer sections. What may be happening is that the text, as Donald Case (1985) has observed, becomes too fluid. As a result, students may lose sight of the overall structure of the text, as well as a sense of the relationships or connections between parts of the essay.

  2. Related to the problem mentioned above, readers also noted shifts or inconsistencies in the writers' sense of audience. Nineteen percent of the comments made by readers indicated such problems in the computer sections, while 14% indicated similar problems in the non-computer sections. This finding is contrary to Daiute's contention that the capacity of the computer "reminds writers that they are communicating to readers.... Since the text editor simulates a potential audience, writers are concerned to communicate clearly" (p. 141). One reason for this reduction in a clear sense of audience may be, as Richard Collier (1983) suggests, that more intricate revisions strain students' conceptualizing powers. In an effort to relieve this strain, the student writers may throw away the constraint of audience in order to continue composing.

    This finding may also be a result of writers' tendency at a word processor to develop materials too quickly and, as a result, to neglect to weigh their revision decisions against the alternatives and to make choices carefully and with deliberation. According to Richard Larsen, writers can get ideas down 50% faster on a word processor than on an electric typewriter; therefore, writers using word processing "write closer to their real selves" and "closer to a style of language that approximates an ideal mix of the spoken and written" (p. 122). It may be true that the speed of writing at a word processor can lead to a style that is closer to speech, but for inexperienced writers, who already have a tendency to write as they speak, this can be a real danger. As Ellen Nold (1982) has noted, "The unskilled writer forges ahead with writing as with speech, not looking back even though he or she lacks the immediate feedback of a listener" (p. 18).

  3. As the quantitative analysis of the revisions showed, writers in the computer groups engaged in more addition of text between drafts than those in the non-computer sections. Their additions, however, tended to be repetitious or digressive. Comments made by the readers show that students in the computer sections made more nonsubstantive additions to the text than did those in the non-computer sections (12% for the computer sections, 9% for the non-computer sections). These results suggest that the computer may, indeed, prompt students to write more; however, there is the danger that students will simply equate more with better.

CONCLUSION

The results of this study confirm what other studies have shown: the word processor creates a favorable environment for writing, stimulating greater enthusiasm for the writing task among basic writers. The student writers in this study were also clearly more inclined to take risks while revising, experimenting with revisions at higher levels of the text than those students composing with traditional tools. This risk-taking behavior when revising, however, did not lead to significantly "better" texts, as the data show. This finding should serve to remind us, as Harris (1985) has stated in earlier research, that we cannot assume students will become more thoughtful, mature revisers simply by using word processing. Clearly, the role of the teacher has not been usurped by the computer, for it is certainly true, as Colette Daiute (1983) states, that while the computer has capacities for assisting in the revising of papers, it cannot teach students how to judge the appropriateness of their revisions. As Ellen Nold (1982) has noted, the quality of writing will improve only with explicit indications of a message's inadequacy (p. 21). Such messages cannot yet come from a word processor. Because students using word processors are attempting riskier revisions while knowing little about the appropriateness of their choices, increased instruction in revision may well be necessary in writing classes where computers are used.