Computers and Composition, 6(2), April 1989, pages 61-79

Minicomputer Text-Editing in Upper-Division Cross-Disciplinary Courses

John Stenzel, Wes Ingram, Linda Morris

As the computer continues to prove itself an indispensable tool for writers at all levels, writing education professionals continue to search for ways to quantify and explain this phenomenon. In this effort, we teachers of writing, fascinated by the possibilities inherent in the new technology, have been stimulated in part by our own positive experiences generating and revising text on-screen, and in part by stories of professional writers (e.g., Zinsser, 1983; Schipke, 1986) who discovered that word processing dramatically increased their productivity and improved their editing. Writing teachers have also tried to extrapolate these results to student writers: if the tool could make excellent writers even better, the thinking went, that same tool would transform poor writers into good writers. In the past five years, studies have moved from unwarranted enthusiasm--extolling the benefits wrought by the electronic writing aid--through various stages of confusion and explanation, to a more sophisticated brand of optimism: a realization that this far-from-simple tool has brought unexpected challenges to teachers' and experimenters' ingenuity.

Early studies by Bean (1983) and Collier (1983) examined small groups of students and cited an improved attitude and increased output brought about by the word processor. Arms (1982) noted that computerized editing gave students a feeling of power over their texts, observing, for example, the way spelling checkers seemed to free students from anxiety; Rodrigues (1985) corroborated this testimony with respect to a small group of basic writers. Daiute (1983) explored the theoretical ramifications of computer-assisted composition, stressing the way the computerized writing environment altered the customary audience relationship. Challenging the conventional assumptions about the ways student writers read and process words on the screen, investigators like Bridwell and Duin (1985) and Haas (1987), among other more recent researchers, have helped refine the ways in which we view the computer-assisted writing process.

More directly addressed to the problems of computer-assisted vs. conventional writing was a case study by Harris (1985); she attempted to quantify revision in relation to computer use, and her pilot study concluded that word processing did not automatically free students to take more risks or work on their revision skills. A growing number of studies emerged that did not find a statistically significant correlation between computer use and writing quality, though some showed students writing more words or revising more than did their pencil-and-paper counterparts. Though Daiute's 1986 study of the interaction between physical and cognitive factors cited improvements in revision and quality, the gains were not of the order predicted by earlier researchers; Hawisher's (1987) results were more typically mixed and ambiguous. Her 1986 review article charted the newly muddied pedagogical waters, and the CCCC's computer sections in 1986 and 1987 exhibited a more guarded optimism than that which had prevailed earlier in the decade.

When this phase of our study was conceived by the staff of the University of California, Davis, Campus Writing Center, no investigation had presented convincing evidence that word processing led to better writing for a large number of students. Both Bean and Collier examined extremely small groups of students (n=4 in both cases), and neither provided quantitative evidence measuring computer-aided progress as compared to that of a control group; instead the studies were largely anecdotal, and hence not conclusive. In a more extensive study that appeared a few years later, Dean (1985) examined a larger sample and compared computer-group results to those of a control group, but used the Houghton-Mifflin Writing Assessment, not student essays, as his measuring instrument. All these studies either gauged student response to computerization by means of interviews or questionnaires, or judged the effectiveness of computer processing through outside (non-essay) assessment of composition skills.

When this study began, no one had examined a large sample of conventionally composed and word-processing-assisted essays to assess the computer's effects on progress and competence. We needed to know how students used the computer resources available to them (in the case of the University of California, Davis, a networked system of DEC PDP-11 minicomputers running UNIX software) to write their papers in standard English classes and across the curriculum.

Though the system we used is no longer state of the art, it was what the campus offered at the time; and, as Richard Elias (1985) and David Dobrin (1987) have noted, the minicomputer is not dead as a writing tool, making economic as well as pedagogical sense under certain conditions. The second phase of this study, to be conducted in the fully equipped Macintosh classroom that was partly justified by this trial, will reflect not only the differences a "user-friendly" environment can make but also how far the teaching of computer-assisted composition has advanced at UC Davis and elsewhere.

Method

We directed our inquiry to the effects of word processing on standard coursework, using a group grading of essays and drafts rather than performance on a standardized test to establish what differences, if any, the computer aid would bring. We wanted to test several related hypotheses that have arisen in computer pedagogical research: first, that students in the computer group would show higher levels of improvement over the quarter (especially as indicated by progress from draft to final version) than the control group; second, that male students would have more success with the computer and hence show more improvement than female students; third, that students in the physical sciences and engineering, being more familiar with computers, would have an edge over their counterparts in the humanities and social sciences; and fourth, that in general, students with stronger computer backgrounds would respond more positively to the sometimes difficult editing environment of the UNIX operating system, a system considerably more complex (but more powerful and versatile) than the word-processing packages found on microcomputers.

We examined student work in advanced composition courses that were paired with courses in other disciplines. In these adjunct writing courses, sponsored by the Campus Writing Center, students customarily used the subject matter of the master course as the starting point for their essays in the English class.

Over the academic year 1983-84, students from 13 different adjunct courses in a variety of disciplines were asked to participate in the study, although they were not given many of its details. Volunteers were solicited at the beginning of the quarter; half these volunteers were then chosen to serve in the computer group and half in the control group. Though the sample was self-selected at the "volunteer" level, the random selection thereafter would, we felt, correct for motivation and previous computer experience. Students knew that participation in the study would have no bearing on their grade in the course because the papers were copied and graded independently after the academic year was over.

The students' work for both groups (control and experimental) was collected at five points in the ten-week quarter for later evaluation: 1) the first paper after the diagnostic essay that helped establish an initial baseline for comparison; 2) a rough draft and 3) a final draft of a mid-quarter assignment; and 4) a rough draft and 5) a final draft of the final assignment. All students completed the same number of papers during the term; these five papers furnished the study not only with a measure of student competence but also with an assessment of the students' success in revision. The instructors did not tailor their assignments to the study, and the resulting papers reflected the wide range of subjects and formats ordinarily found in cross-curricular adjunct writing courses.

Both groups of students completed a pre-questionnaire and a post-questionnaire, providing information on their background--major area of study, computer experience, and relevant personal data. In addition, the questionnaire elicited a self-analysis of revision habits; all of these characteristics and factors would later be subjected to statistical analysis and correlated with scores from the drafts and essays.

Each instructor was trained to use the university's text-processing facilities, which at the time consisted of Digital Equipment Corporation PDP-11/20 minicomputers running the UNIX operating system (Version 4.2), with the text editor vi and the formatting program NROFF. Computer-group students were asked to attend four one-hour training orientations early in the quarter. Teachers were not to devote class time to the computer group, nor otherwise give them special treatment. Students had access to the computer terminals in public terminal rooms in various buildings on campus from 8 a.m. to 3 a.m., with specific times early in the quarter reserved exclusively for their use and instruction.

After typing and printing their papers for the class, computer users electronically "mailed" a copy of their work to a Campus Writing Center usercode for storage. Control-group students composed their papers in the usual way, and copies of each paper were kept in our files; a typist later entered these papers into the computer for printout so that the papers from the control and experimental groups would be identically formatted and printed, and hence indistinguishable to the panel of graders. Because no names appeared on any essay, a special code number assigned to each paper allowed the papers to be sorted after grading; readers were asked not to grade papers they recognized as their own students' work, nor any paper whose earlier draft they thought they might already have graded. The graders, who were experienced composition teachers affiliated with the Campus Writing Center, evaluated each piece of work according to a four-point rubric designed specifically for interdisciplinary writing. (See Figure 1.) Each paper received two readings, and differences of more than a full point were resolved with a third reading.
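For concreteness, the reconciliation step can be expressed as a short routine. The Python sketch below is our own illustration, not part of the original procedure: the article does not say on which scale the one-point rule was applied or how a third reading was folded in, so those details are labeled assumptions.

    def reconcile(first, second, third=None):
        """Combine independent readings of one paper.

        Two readings are averaged; if they differ by more than a full point
        (assumed here to be measured on the scale the readers used), a third
        reading is obtained.  How the third reading was combined is not
        stated in the article; averaging the two closest scores is an
        assumption made for this sketch.
        """
        if abs(first - second) <= 1.0:
            return (first + second) / 2.0
        if third is None:
            raise ValueError("readings differ by more than a point; a third reading is required")
        readings = sorted([first, second, third])
        # Keep whichever adjacent pair of scores agrees most closely.
        if readings[1] - readings[0] <= readings[2] - readings[1]:
            return (readings[0] + readings[1]) / 2.0
        return (readings[1] + readings[2]) / 2.0

    # Two readings within a point are simply averaged.
    print(reconcile(7, 6))      # 6.5
    # A wider split triggers a third reading.
    print(reconcile(7, 4, 6))   # 6.5 (averages the two closest readings)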


  1. Skillful
    Significant central idea or focus that is organized into well-developed, well-structured sub-topics. Accessible to a non-specialist with some knowledge of the general subject area. Qualifies assertions where uncertainty exists; acknowledges and incorporates alternative hypotheses or interpretations into the discussion. Establishes a clear context for remarks. Uses predominantly emphatic active sentences and shows good control of mechanics.

  2. Competent
    Organized by sub-topics in response to a central idea or focus, but analysis and support may be simplistic; may have minor problems in paragraph unity. Will acknowledge alternative views but may not develop them. Will exhibit some sentence variety and few serious mechanical problems. Not as accessible to non-specialists.

  3. Weak
    Will imply a central idea but may be disorganized; main topics and/or sub-topics will lack adequate support. Will rely on technical terms and concepts without adequate definition or elaboration. Speculative assertions are not acknowledged as such. Marred by mechanical problems and sentence errors. Longer sentences will be unemphatic, poorly structured, and obscure.

  4. Poor
    No controlling idea and no clear sub-topics. Weak paragraphing. Assertions are basically unsupported. Primer-style or grossly inflated prose; syntax and diction errors impede understanding. Repeated serious mechanical errors.

Note: For statistical manipulation, the above scores (with pluses and minuses that came up in the grading) were converted to a ten-point scale, with a "poor" essay tallied as a 1, and a "skillful" essay tallied as a 10.

Figure 1: Grading Rubric



After all the papers were graded and matched with the questionnaires, the responses, scores, and class information were coded into an SPSSx data file and subjected to various cross-tabulation, regression, and t-test analyses.

Results

As we would normally expect for any advanced composition course, students in both the experimental and control groups did improve over the quarter, as indicated by a rise in scores from the preliminary paper to the final draft of the final assignment. The task of the statistical analysis was to determine whether membership in one of those groups showed a statistically significant linkage to improved performance.

For the purposes of the statistical manipulation, the scores from the grading sessions, based on a 4-point rubric with plus and minus values, were translated into a 10-point scale, with 1 the lowest possible score, 10 the highest. Graders' scores were averaged, and means and standard deviations were computed for the study as a whole and for relevant sub-groups. Table 1 shows these means and deviations.
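The conversion itself is a simple lookup. Because the article fixes only the endpoints of the translation (a "poor" essay tallied as 1, a "skillful" essay as 10), the intermediate assignments in the Python sketch below are illustrative assumptions rather than the mapping the study actually used.

    # Illustrative rubric-to-scale mapping.  Rubric grades run from
    # 1 (Skillful) to 4 (Poor), with plus/minus modifiers from the grading
    # sessions; only the endpoints (4 -> 1, 1 -> 10) are fixed by the
    # article, so every intermediate value below is an assumption.
    RUBRIC_TO_TEN_POINT = {
        "4-": 1, "4": 1, "4+": 2,    # Poor
        "3-": 3, "3": 4, "3+": 5,    # Weak
        "2-": 6, "2": 7, "2+": 8,    # Competent
        "1-": 9, "1": 10, "1+": 10,  # Skillful
    }

    def to_ten_point(grade):
        """Translate a rubric grade such as '2+' onto the 10-point scale."""
        return RUBRIC_TO_TEN_POINT[grade]

    print(to_ten_point("2+"))  # 8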

The improving trend is clear, and the movement from preliminary essays to final products reveals several other patterns: Male students started slightly higher than females (a mean of 5.6 versus 5.3), although both reached roughly the same level (mean 6.7, with males clustered more tightly around that mean than females). Students in different disciplines showed different means, although these figures reflect declared major rather than the subject matter of the course being taught. Of the students as a whole, 71% improved their scores from the preliminary paper to the final essay, 8% were unchanged, and 21% registered a lower final-paper score.

We refined the improvement measurement by creating four indices of improvement, as noted in Table 2. We then studied these improvement indices to assess whether, for instance, computer users' improvement indices were significantly higher than nonusers'--that is, whether our first hypothesis would be supported by the data we collected. Table 2 depicts the means and standard deviations of these improvement indices; in each case, the deviations exceed the means.


Table 1: Paper-by-Paper Performance --
All Students and Various Sub-Groups
(Mean scores on the 10-point scale; standard deviations in parentheses.)

Group               n     Pre-       Midterm    Midterm      Final      Final
                          Treatment  1st Draft  Final Draft  1st Draft  Final Draft

All Students       122    5.4 (1.8)  4.3 (1.7)  5.6 (1.8)    5.0 (1.7)  6.7 (1.7)
Males               53    5.6 (1.9)  4.5 (2.5)  5.4 (2.0)    4.3 (3.0)  6.7 (1.9)
Females             67    5.3 (2.2)  4.2 (2.5)  5.8 (2.6)    5.2 (2.6)  6.7 (2.5)
Control             72    5.2 (2.3)  4.3 (2.5)  5.3 (2.4)    4.3 (2.8)  6.6 (2.0)
Computer            50    5.8 (1.6)  4.3 (2.6)  6.1 (2.2)    5.6 (2.7)  6.8 (2.5)
Natural Science     57    5.3 (2.2)  4.0 (2.7)  5.4 (2.6)    4.6 (3.1)  6.9 (2.4)
Physical Science    21    5.7 (1.2)  4.9 (1.8)  5.7 (2.0)    5.1 (2.5)  5.9 (1.9)
Social Science      24    5.0 (2.7)  3.8 (2.9)  6.0 (1.8)    5.2 (2.6)  6.6 (2.3)
Humanities          17    6.0 (1.6)  5.1 (1.7)  5.4 (2.8)    4.7 (3.0)  6.9 (1.9)



Table 2: Means and Standard Deviations of Improvement Indices

Improvement Index    Mean     Standard Deviation    Range

Imp 1 (b)            1.3      2.4                   13.5 (a)
Imp 2 (c)            0.9      1.8                    9.5
Imp 3 (d)            1.01     2.3                    9.8
Imp 4 (e)            1.0      1.7                    9.3

(a) All improvement indices assumed negative values when improvement was absent.

(b) Imp 1 = (mean readers' score on final paper) - (mean readers' score on beginning paper)

(c) Imp 2 = (mean readers' score on midterm final) - (mean readers' score on midterm rough draft)

(d) Imp 3 = (mean readers' score on final final) - (mean readers' score on final rough draft)

(e) Imp 4 = mean of imp 1 through imp 3
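Expressed computationally, the four indices defined in the footnotes above amount to three differences and their mean. The short Python sketch below uses hypothetical field names for a student's averaged reader scores; only the arithmetic follows the table.

    def improvement_indices(scores):
        """Compute the four improvement indices for one student.

        `scores` maps hypothetical keys to that student's mean reader score:
        'beginning', 'midterm_rough', 'midterm_final', 'final_rough',
        'final_final'.  The arithmetic follows footnotes (b) through (e).
        """
        imp1 = scores["final_final"] - scores["beginning"]        # gain over the quarter
        imp2 = scores["midterm_final"] - scores["midterm_rough"]  # midterm draft-to-final gain
        imp3 = scores["final_final"] - scores["final_rough"]      # final-paper draft-to-final gain
        imp4 = (imp1 + imp2 + imp3) / 3.0                         # mean of the three indices
        return {"imp1": imp1, "imp2": imp2, "imp3": imp3, "imp4": imp4}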



Although these tables provide rough information on student performance, one cannot draw conclusions as to which of the independent variables played a statistically significant role in accounting for changes in the dependent variable (improvement). Multiple regression analysis would clarify the interactions between variables. We should emphasize here the key concept: Students improved, but only sophisticated statistical inference could reveal whether that improvement stemmed from any or all of the independent variables we isolated or whether other factors were responsible for the improvement we observed.
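To make the procedure concrete, the fragment below sketches the same kind of regression with present-day tools (Python with pandas and statsmodels rather than SPSSx). The file name and column names are hypothetical, and the coefficients it prints are ordinary least-squares estimates, so it illustrates the method rather than reproducing the original analysis.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data file: one row per student, containing the improvement
    # indices and the independent variables discussed in the text.
    df = pd.read_csv("study_scores.csv")

    # Dummy-code the categorical predictors.
    X = pd.DataFrame({
        "sex": (df["sex"] == "female").astype(int),
        "revision": df["self_reported_revision"],          # e.g., 0 = none ... 2 = thorough
        "computer_group": (df["group"] == "computer").astype(int),
    })
    X = sm.add_constant(X)          # intercept term

    # Regress one improvement index on the predictors; repeat for imp2-imp4.
    model = sm.OLS(df["imp1"], X).fit()
    print(model.rsquared)           # analogous to the R2 values reported in Table 3
    print(model.params)             # coefficients (Table 3 reports standardized betas)
    print(model.pvalues)            # significance of each predictor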

Disappointingly, multiple regression revealed that the individual variables we selected accounted for little of the observed improvement. Table 3 reports the regression results for three of the six independent variables we were most interested in testing--gender, self-reported amount of revision, and computer use. Not listed in Table 3 are the following variables: instructor (eight instructors participated), area of study (natural science, physical science, social science, humanities), and amount of previous computer experience. None of these latter variables proved statistically significant in determining improvement. That is, these independent variables did not increase the ability of our regression equation to explain the observed improvement.


Table 3: Beta Values and F-score P-Values for Three Independent Variables Regressed Against Four Measures of Improvement

                           Dependent Variables (separate regressions)

                         Improvement   Improvement   Improvement   Improvement
                         Index 1 (a)   Index 2 (b)   Index 3 (c)   Index 4 (d)
Independent Variables    (R2=0.122)    (R2=0.0928)   (R2=0.128)    (R2=0.127)

Sex (e)
  beta value              0.24065       0.22279       0.10382       0.24803
  F-score P-value         0.0135        0.0376        0.3270 NS     0.0080

Amount of Revision
  beta value              0.25077       0.0305        0.35129       0.28120
  F-score P-value         0.0108        0.7752 NS     0.0014        0.0030

Computer Group
  beta value             -0.03484      -0.14844       0.06217      -0.05270
  F-score P-value         0.7245 NS     0.1756 NS     0.5681 NS     0.5785 NS

(a) Imp 1 = (mean readers' score on final paper) - (mean readers' score on beginning paper)

(b) Imp 2 = (mean readers' score on midterm final) - (mean readers' score on midterm rough draft)

(c) Imp 3 = (mean readers' score on final final) - (mean readers' score on final rough draft)

(d) Imp 4 = mean of Imp 1 through Imp 3

(e) Females improved significantly more than males



The most important overall finding revealed in Table 3 is that the six variables we tested explained little of the observed variation in writing improvement: Our R2s ranged from 0.0928 to 0.1285. The second important inference we can draw from Table 3 is that most of the variation our regression equations do explain can be accounted for by only two independent variables: the sex of the student (females improving more than males) and the amount of revision the student reported undertaking (obtained from the questionnaires). For both sex and revision, two of the three F-score P-values for the individual improvement indices (1 through 3) fell below 0.05.

The P-values for the variable of interest--computer use in writing--are, by contrast, consistently insignificant. Thus, computer use had no statistically significant effect on observed improvement. On the other hand, the student's gender appears to be significant, and self-reported revision practices appear to influence improvement positively: Female students in these classes improved more than their male counterparts did, and students of either sex who categorized themselves as thorough revisers were likely to show greater improvement than those who categorized their revision as cursory or nonexistent. Given the extremely low R2 values for our overall regression equations, however, these results would have to be tested more rigorously before being generalized beyond the bounds of the current study.

The notable finding evident in Table 3 is that our three indices of improvement appear to measure different things. (The fourth index is an average of the other three.) One would expect to see a fairly strong relationship in a large sample between improvement over the quarter and improvement from draft to draft of individual papers. However, the Pearson correlation coefficients among these three measures varied considerably, ranging from -0.1082 (between improvement indices 1 and 2) to 0.5398 (between improvement indices 1 and 3). (This correlation coefficient varies between -1 and 1, where positive values imply positive correlation [x and y rise and fall together] and negative values imply inverse correlation.) This finding suggests that the second index of improvement (that between subsequent drafts of the midterm paper) is somewhat anomalous. The regression run on improvement index 2 is the only one in which the revision variable was insignificant, which in light of the foregoing seems difficult to assess constructively--perhaps revision skills had not yet taken hold by mid-quarter. Oddly, gender proved insignificant in the improvement index 3 equation, although it was consistently significant (P less than 0.05) for the other indices.
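For readers who wish to check such relationships in their own data, the correlations are a one-line computation; the Python sketch below again assumes the hypothetical data file and column names used earlier.

    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("study_scores.csv")   # same hypothetical file as above

    # Pairwise Pearson correlations among the improvement indices.
    r_12, p_12 = pearsonr(df["imp1"], df["imp2"])   # reported in the text as -0.1082
    r_13, p_13 = pearsonr(df["imp1"], df["imp3"])   # reported in the text as  0.5398
    print(r_12, r_13)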

We were naturally curious as to whether computer use would influence improvement differently among any of the subgroups we tested. In order to test the possibility that students of one sex or in one area of study improved significantly more than did their peers as a result of computer use, we ran t-tests on each of these subgroups; the t-tests revealed nothing that the regression analysis had not. No subgroup responded to computer use any better than any other group did. The t-scores for the physical and natural science students (both for gender and for computer use) came closer to significance than did those for humanities and social science students, but statistically significant score patterns were never observed.
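The subgroup comparisons reduce to two-sample t-tests; a minimal Python sketch with the same hypothetical columns follows.

    import pandas as pd
    from scipy.stats import ttest_ind

    df = pd.read_csv("study_scores.csv")   # same hypothetical file as above

    # Within one subgroup (here, natural-science majors), compare the
    # improvement of the computer group with that of the control group.
    subgroup = df[df["area"] == "natural_science"]
    computer = subgroup.loc[subgroup["group"] == "computer", "imp1"]
    control = subgroup.loc[subgroup["group"] == "control", "imp1"]

    t_stat, p_value = ttest_ind(computer, control)
    print(t_stat, p_value)   # in the study, no such comparison reached significance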

Discussion, Implications, Recommendations

While students in the computer group did improve, their improvement was not statistically more marked than that of their counterparts. Females improved more than males did, but students in the hard sciences and engineering did not improve significantly more than those in the other disciplines. Previous computer background also did not appear to play a significant role in the improvement of student writing. Students who said they often revised tended to score higher than students who did not revise often.

These observations might, at first glance, lead us to question the wisdom of word-processing instruction in this kind of institutional setting; and certainly they serve as a counterpoint to the earlier studies that wholeheartedly embraced widespread computerization as a magic key to improved college composition. Since the completion of this study, more cautious researchers have continued to refine and qualify the pioneers' optimistic projections: The review articles of Hawisher (1986, 1988) provide useful summaries of the decidedly ambiguous results that are emerging from a wide range of experimental designs and sample groups.

Nevertheless, this study provides valuable and original insights into the mechanics of large-scale minicomputer text processing, and the specific factors working to render dramatic improvement less likely. Rather than disproving our hypotheses, these observations serve to clarify for us the conditions necessary for a truly definitive test, one which we hope to conduct in our Macintosh classroom soon.

From the nature of the computing system--its multi-user configuration and its software characteristics and documentation--to the physical layout of the terminal rooms and the logistical problems of training students in a 10-week quarter, the constraints of the experimental conditions combined to skew results away from the hypothesized outcomes. The experience also clearly outlines the obstacles that must be overcome before the hypotheses articulated above can be definitively tested, and it suggests other follow-up studies--for instance, tracking computer-group students through subsequent classes to see whether they made a long-term investment in computerizing their writing processes.

Central to the difficulties faced by the computer users was the inherent difference between microcomputer word-processing software and minicomputer text-editing/formatting: Whereas a menu-driven word processor and simple operating system are relatively easy to learn and relatively well documented, the command-line screen editor vi and the UNIX operating system itself posed serious barriers to novice users. Although David Dobrin (1987) points out that a minicomputer-based system (one running on the UNIX operating system in particular) can potentially provide certain economic and pedagogical advantages over a roomful of micros, we encountered problems that tended to outweigh the benefits. The rather complicated demands of log-in procedures, terminal configuration, and shell-command syntax inhibited both students and instructors, frequently making the orientation sessions problematic and requiring repeated troubleshooting. A significant number of computer-group questionnaire responses mentioned the difficulty of mastering the system, and instructors generally felt the negative effects of a long learning process, especially in a 10-week quarter. The UNIX operating system is an excellent and powerful tool in the hands of experts, but it is difficult for many novices to master. A student had to be extremely well-motivated to learn enough word-processing techniques to perform sophisticated revisions, and the length of the learning curve did not fit well into the quarter system.

If the basic nature of a minicomputer operating system just outlined tended to cause problems for student users, the text-editing software itself proved to be a further liability. As Bridwell, Sirc, and Brooke (1985) and Haas and Hayes (1986) have noted, students often experience problems reading their own work on the screen in the best of circumstances, and the prevailing conditions no doubt compounded our students' difficulties. The multi-user system did not support a "what-you-see-is-what-you-get" display of formatted text on screen, and the formatting requirements caused a certain amount of frustration. When a user makes a change or adds a word, the text does not immediately rearrange itself as it does with a word processor; the user must wait until the printing stage to see the passage in its "finished" form. Because the students had to comply with the requirements of the NROFF output formatter with the -ms macro package (both of which require mastery of about a dozen essential commands), they did not immediately reap the benefit of the clean, neat product that is supposedly the virtue of word-processed composition.

As questionnaire responses demonstrated, a confusing command protocol, inadequate documentation, and frustrating error messages often interfered with the composing and revising process. Nevertheless, the computer group was able to overcome some of these obstacles and improve their writing, as Table 1 indicates. The statistical analysis showed, however, that computer use per se could not be proven to be the cause of that improvement. Students were unable or unwilling to take advantage of the minicomputer editing capabilities in a way that could be detected by the tests we performed.

In fact, even studies using relatively "friendly" environments have stressed the increased copiousness of student writing, rather than improvements at the sentence level (Haas, 1987; King, Birnbaum, & Wageman, 1984). Despite recent advances in using keystroke-tracking programs to monitor revision processes (for example, Bridwell & Duin, 1985; Flinn, 1987), there is still little or no hard evidence to show that word processing alone changes basic editing strategies for the better; the computer can improve the productivity of a good writer who already has good revision habits and a motivation to write well (MacWilliams, 1982; Weiss, 1988; Zinsser, 1983), but we still need to explore why and how student writers react to the computer in the ways they do. As more composition programs like UC Davis' commit to full-time computer classrooms in which specific sentence-level and paragraph-level strategies are taught with the word-processing systems, we expect to see more encouraging confirmation that what we as writers consider an indispensable tool is truly and measurably so.

Two major concerns emerge from this study: First, computer-writing lab designers and users should carefully consider the type of working environment they are creating when they computerize; second, this study should be redone in a well-designed and user-friendly computing environment under ideal statistical conditions--i.e., with completely random sampling, an extremely large number of students, and a time frame longer than a 10-week quarter. At UC Davis, we are preparing just such a study in conjunction with the Macintosh classroom.

Beyond the electronic problems with the editing environment, the physical situation of the terminals may pose a barrier to effective revision in a way not adequately examined by the literature or readily understood by those developing terminal rooms. Because the computer terminal rooms are crowded, uncomfortably warm, harshly lit, and noisy, they do not provide a congenial environment for revision. In the main terminal rooms at Davis (and, alas, even in the Macintosh classroom), terminals are so close together that students find it difficult to set down their drafts and books without interfering with neighboring users. Whereas writers of conventionally typed papers could do their editing in a quiet, private setting, computer users had to work in an environment that often made serious concentration decidedly difficult. Future studies should attempt to control more accurately for differences in physical environment between computer and control groups, so that important psychological and ergonomic factors do not prejudice the results.

Besides the problems at the text-generation stage, the logistics of essay collection and storage proved complicated. The electronic mail network may be simple for an experienced user, but it occasionally created problems with "lost" papers when a student mistyped a usercode or made an error in command syntax. The individual instructors were not always able to keep up with the paper flow in a busy quarter, and thus a potential source of inaccuracy was introduced.

Computer costs for storage and for typing of essays were significantly higher than expected. This will be an important consideration for future studies of this nature because blind grading demands uniform presentation. However, scanners are now making it possible to move directly from typescript to disk, thus saving the costs of a typist and eliminating a potential source of inconsistency.

The grading sessions brought several problems into focus, not least of which was the expected difficulty in evaluating different types of assignments according to a generalized rubric. The subject matter of the adjunct courses varied from history and economics to physiology and environmental studies, and this meant a wide range of accessibility: It was usually easy to tell whether a history student had covered the topic, but quite another matter for a layperson to judge how well a student had explained the mechanism and importance of ion transfer across cell membranes. Some of the assignments were clearly defined lab reports or term papers, while others involved a narrative report of a field trip; a draft proposal for a study had to be judged by exactly the same standards as a draft paper on medieval guilds; all this occasioned some difficulties in norming. To judge a partial draft as a completed exercise meant compromise and inconsistency, and to assume that all audiences demand the same degree of explanation is somewhat impractical.

Thus, one of the strengths of the study--that it used student papers as the testing instrument, rather than some outside examination like the Houghton-Mifflin Assessment (see Dean, 1985)--was potentially also a weakness, at least insofar as the norming and grading problems were concerned. To make the papers more uniform and easier to evaluate in future studies, one might want to stipulate that instructors include several typical assignments, such as a definition and explanation of an important specialized term, or a cause-effect essay with specific characteristics.

Besides furnishing the Writing Center and the English Department with increased expertise in the conduct of such studies, this project has provided other benefits. Training techniques for text editing and beginner-level documentation have been improved, and the past year has seen an increased level of cooperation between Computer Center personnel and English Department instructors. The papers of both the control group and the computer group are stored electronically on magnetic tape and floppy disk, forming a ready-made database for prose analysis and investigation. These papers could be used as a reference standard for the WRITER'S WORKBENCH programs, so that a customized English 102 version of these surface-analysis programs would be available to any interested student.

The fact that the data gathered in this study did not support the hypotheses is in itself extremely valuable: We have had to delve deeper into the advantages and disadvantages inherent in various computer-aided composition environments and have had to temper some of our initial optimism. By illustrating the difficulties encountered by a large number of students as they tried to combine composition instruction and computerization, the study has led directly to improved documentation of the text editor, improved computer-assisted writing pedagogy, and heightened awareness of physical and psychological constraints on computer-assisted writing. It also helped provide the impetus to establish a Macintosh-equipped classroom, which has proven to be a more productive and more efficient working laboratory for computer-aided composition. Future studies will have to address the short-term and long-term logistical and pedagogical problems of the computer writing environment, be it a multi-user minicomputer system or a room full of microcomputers; these studies should also take care to design assignments and norming mechanisms that will allow accurate and fair comparison of a wide variety of student work.

John Stenzel, Wes Ingram, and Linda Morris all teach at the University of California, Davis.

References

Arms, V. (1982). The computer kids and composition. (ERIC Document Reproduction Service No. ED 217 489)

Bean, J. C. (1983). Computerized writing as an aid to revision. College Composition and Communication, 34(2), 146-148.

Bridwell, L., & Duin, A. H. (1985). Looking in depth at writers: Computers as writing medium and research tool. In J. L. Collins & E. A. Sommers (Eds.), Writing on-line (pp. 76-82). Upper Montclair, NJ: Boynton/Cook.

Bridwell, L., Sirc, G., & Brooke, R. (1985). Revising and computing: Case studies of student writers. In S. Freedman (Ed.), The acquisition of written language: Response and revision (pp. 172-194). Norwood, NJ: Ablex.

Collier, R. M. (1983). The word processor and revision strategies. College Composition and Communication, 34(2), 149-155.

Collins, J. L., & Sommers, E. A. (Eds.). (1985). Writing on-line. Upper Montclair, NJ: Boynton/Cook.

Daiute, C. (1983). The computer as stylus and audience. College Composition and Communication, 34(2), 134-145.

Daiute, C. (1985). Writing and computers. Reading, MA: Addison-Wesley.

Daiute, C. (1986). Physical and cognitive factors in revising: Insights from studies with computers. Research in the Teaching of English, 20(2), 141-159.

Dean, R., & Gifford, J. (1985). Word processing and the freshman English program: What does it matter? Proceedings of the Conference on the Freshman Year Experience, University of South Carolina, Columbia.

Dobrin, D. N. (1987). Minicomputers for a microcomputer lab? Computers and Composition, 5(1), 7-18.

Elias, R. (1985). Micros, minis, and writing: A critical survey. Research in Word Processing Newsletter, 3(3), 1-6.

Flinn, J. (1987). Case studies of revision aided by keystroke recording and replaying software. Computers and Composition, 5(1), 31-44.

Haas, C. (1987). Computers and the writing process: A comparative protocol study (Report No. 33). Pittsburgh: Carnegie Mellon University Communications Design Center.

Haas, C., & Hayes, J. R. (1986). What did I just say? Reading problems in writing with the machine. Research in the Teaching of English, 20(1), 22-35.

Harris, J. (1985). Student writers and word processing: A preliminary evaluation. College Composition and Communication, 36(3), 323-330.

Hawisher, G. (1986). Studies in word processing. Computers and Composition, 4(1), 6-31.

Hawisher, G. (1987). The effects of word processing on the revision strategies of college freshmen. Research in the Teaching of English, 21(2), 145-159.

Hawisher, G. (1988). Research update: Writing and word processing. Computers and Composition, 5(2), 7-29.

King, B., Birnbaum, J., & Wageman, J. (1984). Word processing and the basic college writer. In T. Martinez (Ed.), The written word and the word processor (pp. 251-266). Philadelphia: Delaware Valley Writing Council.

MacWilliams, P. (1982). The word processing book. Los Angeles: Prelude.

Rodrigues, D. (1985). Computers and basic writers. College Composition and Communication, 36(3), 336-339.

Schipke, R. C. (1986). Writers and word processing technology: Case studies of professionals at work (Doctoral dissertation, University of Pennsylvania). Dissertation Abstracts International, 47, 1226A.

Weiss, T. (1988). Word processing in the business and technical writing classroom. Computers and Composition, 5(2), 57-70.

Zinsser, W. (1983). Writing with a word processor. New York: Harper & Row.