6(2), April 1989, pages 33-43

Word Processing:
A Helpful Tool for Basic Writers

Craig Etchison

Much has been written in recent years about the use of word processing in the composition classroom. But most of the studies reporting the effects of word processing on writers--including a large empirical study I helped conduct (Etchison, 1985)--focus on writers of average or above-average ability. The effect of word processing on writing was a major research question in all 24 studies analyzed by Hawisher (1986), but not one of the 24 studies looked exclusively at basic writers. In reviewing relevant literature for a study he conducted, Nichols (1986) states that "no studies examined only basic writers' use of word processing alone" (p. 82).

Recently, a few researchers have begun to look at how word processing affects basic writers. Nash and Schwartz (1988) released data showing that basic writers who use word processing dramatically increase the amount of text they write, the coherence or connectedness of sentences, and the amount of evidence used to support points in a paper. In light of the limited data, I wanted to examine the effects of word processing on writing quality and the amount of text produced by basic writers.

The Study


To obtain data, a small comparative design study was arranged for the fall semester of 1986 at Glenville State College, Glenville, WV. Two sections of basic writers were involved in the study: one section of students using word-processing software on computers and one section of students writing by hand. These writers came from two distinct backgrounds. Thirty percent of the subjects were minority students from large cities such as Washington, DC, and Cleveland, OH. The remaining subjects were from central West Virginia, an economically depressed area of Appalachia. All subjects had English ACT scores at or below 15.

The study posed two questions:

  1. Would the students using word-processing software on computers write a significantly greater number of words than students writing by hand?

  2. Would the students using word-processing software on computers write higher quality prose than students writing by hand?

Data were collected from transactional writing tasks. Students wrote one explanatory essay and one persuasive essay at the beginning of the semester and at the end of the semester. At the beginning of the semester, all students wrote with pen and paper. Students were not told they were involved in a study, only that these first two writing samples would be used by the instructors to make some decisions on what would be covered in the class during the remainder of the semester.

Students were given these pre-test writing tasks prior to the class period in which they began drafting because Bridwell (1979) found that doing so seemed to increase involvement on the part of students. Students were given two hours of class time and as much time as they wanted between class meetings to draft and revise each essay. The instructors made no attempt to influence the students' writing process. In fact, both instructors avoided giving any help to students during the pre-test--even when students requested help. The intent was to obtain representative samples of student writing prior to instruction.

Students completed two different post-test writing tasks at the end of the semester, writing within the context of normal class assignments and using the writing processes they had developed during the semester. Students in the experimental section wrote their papers on computers with word-processing software, the writing tool they had been using all semester, while students in the control section wrote with pen and paper. The four writing tasks used in this study are found in Appendix A.

The pedagogical approach used for instruction during the semester centered on student/teacher conferencing because Clifford (1981) found that such a pedagogy encourages improved writing quality. Classes were conducted primarily as writing workshops in which individual students had conferences with the instructors while other students worked on their drafts. Multiple drafts were encouraged. Each week, one 30-minute block of class time was spent working on right-branching modification, a syntactic device characteristic of mature writers (Christensen, 1967; Faigley, 1979). Such study is recommended by Hartwell and Bentley (1982) and Christensen (1967).

The two instructors who taught the classes met regularly during the semester to ensure that both the experimental section and the control section were doing the same things during class meetings. Identical writing tasks were used for all assignments in both classes. Furthermore, because the data relied on transactional writing samples, all writing assignments during the semester were transactional.

Instructors made every effort to establish a positive classroom atmosphere--a non-threatening classroom where students were constantly encouraged in their writing. Grades in basic writing classes at Glenville State College appear on students' records, and students receive credit for the classes. However, the credit does not count towards graduation. Hence, the instructors made a conscious effort to use grades to encourage students, and no student was ever given a grade below a C on any writing assignment.

The word-processing students used IBM personal computers and the PC-WRITE word-processing program. Each student received a three-page introduction to PC-WRITE and a one-evening training session in the program, so no classroom time was spent learning it. The instructor was always present during class sessions to help students who had any problems mastering the word-processing program. Within two weeks, all students were comfortable with PC-WRITE.

The word-processing section had only one accomplished typist. However, the non-typists did not evidence any frustration at having to proceed with a hunt-and-peck method of typing. During the semester, I asked the non-typists how they felt about using the computers with word-processing software as compared to writing with pen, and all agreed that word-processing software on computers was easier and more satisfying.

As most of us who have taught basic writing know, getting basic writers to write--to produce text--can be a major problem. Basic writers at Glenville State College do not usually rush to writing class, much less begin work on their writing without some encouragement from the instructor. However, during this study, the majority of the students using word-processing software were at their terminals working on their essays five to ten minutes before class was officially scheduled to begin. And when the class period was over, I always had to force students to leave the microcomputer lab so the incoming computer science class could get started. I was surprised (and pleased) at this unexpected behavior. The word-processing software on computers seemed to encourage most students to spend more time producing text as well as working with text in ways not usually seen when basic writers use paper and pen. Such productive behavior in basic writers suggests that word-processing software on computers may be a positive tool in the basic writing classroom.

The students incorporated the use of the word-processing software on computers into their writing strategies in a variety of ways. A couple of the students wrote their first drafts by hand, then typed them on the computer. Some students developed rough outlines--more along the lines of a brainstorm than a formal outline--and typed their initial drafts directly on the computer using their brainstorms as a guide. Some students just started typing away. Whatever the process, the majority of the students consistently spent the whole class period actively working on their texts.

The instructor of the word-processing section and the instructor of the handwriting section served as tutors/facilitators. As the instructor in the word-processing section, I conferenced with students as they requested my help. Sometimes, we worked on the computers with word-processing software, especially in the early stages of a draft. As drafts got closer to the final product (which was to be turned in), students would often want to work on a hard copy. No one seemed worried about doing extensive revising or editing because students quickly realized how the word-processing software did all the drudge work--the insertions, the deletions, the block moves, and most of all, the boring recopying.

Results


The first question this study sought to answer was whether the word-processing software on computers would encourage the production of text by basic writers. Therefore, all words were counted on the two pre-test writing samples and the two post-test writing samples. The results were dramatic and are consistent with the findings of Nash and Schwartz (1988). Students using word-processing software on computers wrote a mean of 621 words more on their post-test writing tasks than did students using pen and paper (Table 1). The significance test indicates that the change from the totals of the two pre-tests to the totals of the two post-tests is significantly different for word processing and handwriting, F(1, 21) = 22.03, p < 0.0001.

Table 1: Mean Number of Words Written: Pre-test and Post-test

Method             Pre-test   Post-test   Change

Word Processing         384        1304      920
Handwriting             390         689      299
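
The F ratio reported for the word-count data can be recomputed from the individual change scores. As a minimal sketch (pure Python; the function name and the two-group restriction are my own), a one-way ANOVA F statistic for two groups of change scores:

```python
def f_statistic(group_a, group_b):
    """One-way ANOVA F statistic for two groups of scores.

    Degrees of freedom are (1, n_a + n_b - 2).
    """
    all_scores = group_a + group_b
    grand_mean = sum(all_scores) / len(all_scores)
    means = [sum(g) / len(g) for g in (group_a, group_b)]
    # Between-group sum of squares: each group's deviation from the grand mean.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip((group_a, group_b), means))
    # Within-group sum of squares: each score's deviation from its group mean.
    ss_within = sum((x - m) ** 2
                    for g, m in zip((group_a, group_b), means)
                    for x in g)
    df_between = 1
    df_within = len(all_scores) - 2
    return (ss_between / df_between) / (ss_within / df_within)
```

With the 23 subjects who completed the study, the within-group degrees of freedom would be 21, which matches the F(1, 21) reported above.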

Second, pre-test and post-test samples were subjected to holistic evaluation of writing quality. All pre-test writing samples and the handwriters' post-test samples were printed on the same dot-matrix printer used by the word-processing section for their post-test writing samples. All samples were randomly mixed so that raters would not know whether they were reading a paper written before or after instruction, or by a student in the experimental or control section. The holistic evaluation was done by experienced writing teachers who also had previous experience in holistic scoring. The raters went through a training session before scoring each task, using anchor papers to establish the criteria for rating papers. Two raters read each paper and scored it on a scale of 1 to 4, 1 being the lowest score and 4 being the highest. Where the two scores differed by more than one point, a third rater scored the paper, and the two closest scores were used. This process produced four scores for each pre-test writing sample and four scores for each post-test writing sample. For the ANOVA, all scores for the pre-test were combined, and all scores for the post-test were combined, resulting in a possible high score of 16 or a possible low score of 4 for both pre-test and post-test samples. Appendix B details the criteria developed by the raters for evaluating the papers.
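
The rating procedure can be sketched in code. A minimal illustration (Python; the function names, and the rule for breaking a tie when two candidate pairs are equally close, are my own assumptions--the article does not specify the latter):

```python
def resolve_scores(r1, r2, r3=None):
    """Resolve holistic ratings for one paper on the 1-4 scale.

    Two raters score each paper; if their scores differ by more
    than one point, a third rater scores it and the two closest
    scores are kept.
    """
    if abs(r1 - r2) <= 1:
        return (r1, r2)
    # Scores differ by more than one point: a third rating is required.
    pairs = [(r1, r2), (r1, r3), (r2, r3)]
    # Keep the pair with the smallest gap (ties broken arbitrarily here).
    return min(pairs, key=lambda p: abs(p[0] - p[1]))

def combined_score(essay1_pair, essay2_pair):
    """Sum the four retained scores for a pre-test or post-test
    (two essays, two retained ratings each): range 4 to 16."""
    return sum(essay1_pair) + sum(essay2_pair)
```

For example, combined_score(resolve_scores(3, 4), resolve_scores(2, 3)) yields 12, one of the combined pre-test or post-test totals entered into the ANOVA.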

Results of the holistic scoring are shown in Table 2.

Table 2: Analysis of Change in Writing Quality: Pre-test and Post-test


Method             Pre-test   Post-test   Change

Word Processing         5.8         9.5      3.6
Handwriting             5.3         8.7      3.4

The significance test showed no significant difference in growth of writing quality across the semester between students using word-processing software on computers and students using handwriting, F(1, 21) = 0.08, p > 0.05.

Discussion


A study such as this one must be considered in light of what it is: a pilot study with a limited population. When the study began, there were 20 students in each section. By the end of the semester, various forms of attrition had reduced the population by almost half. To make sweeping generalizations about all basic-writer populations based on this study alone would be unfair and unwise. But with that caveat in mind, it does seem possible to make a few observations.

First, in any study based on a small sample population, it is important to know whether one or two members of that population skewed the statistics by virtue of extraordinary performance levels. Such was not the case in this study. Measured performances were remarkably uniform across all subjects, so the statistical results noted above accurately reflect the population studied.

Second, I do think that the significantly increased production of text by the basic writers using word-processing software on computers indicates the possible advantages of using word-processing software in basic-writing classes. The word-processing software seemed to encourage the production of text to an even greater degree among these basic writers than it did among the large population of college writers I studied earlier. If this is the case, and Nash and Schwartz (1988) indicate similar findings, then I think teachers of basic writers would want to have their students using word processors or word-processing software on computers.

At this point, I do not know why the texts were longer. The increased length may involve greater development of ideas, or it may just be that students are writing more words and not really controlling the flow of words in productive ways. Nash and Schwartz (1988) found that increases in production of text could be traced to students using more evidence to support points being made in their essays. This area certainly requires more research with larger numbers of subjects.

Third, I am not terribly concerned that there were no significant differences in the development of overall writing quality between the students using word-processing software on computers and the students writing by hand. After all, a 15-week semester is a short period of time, especially for basic writers who are often struggling with their lack of writing experience. (Many of the subjects in this study had never written an essay.) Both groups made improvements in their writing, as would be expected given the pedagogical approach employed. Perhaps significant differences might turn up later if the students who used word-processing software on computers continue to do so, for the word-processing software has helped encourage productive writing behaviors, including a willingness to produce text and a willingness to spend time working with text. But questions concerning the development of writing quality will require further research, including, I suspect, longitudinal study.

However, even within the limitations of this study, there are a number of positive implications. Would I go to the effort to arrange for my basic-writing classes to have access to word processors or word-processing software on computers if the opportunity presented itself again? I would answer with an unqualified "yes."

Craig Etchison is an assistant professor of English at Glenville State College, Glenville, West Virginia.

Appendix A

The two explanatory writing tasks were these:

  1. As a recent graduate from high school, you've had some time to reflect on your education during those years. Prepare the body of a letter to your high school principal in which you tell him/her what you think the strengths and/or weaknesses of the program are. Explain them completely so that the principal will really understand what you thought were the highs and lows of your high-school career.
  2. The editor of your local paper has written an editorial blasting government statistics that use the "typical, average" family to make their points. Citing one study, he says that there is no family with 2.3 children and an annual income of $16,249.235. He would like to gather some examples of "atypical" families and invites his readers to send him essays explaining how their families are untypical. Each family has its own unique traits. Write an essay for the editor explaining the ways in which your family is the untypical American family.

The two persuasive tasks were these:

  1. Your parents want you to major in something practical, something like computer science or business management, so you'll be assured of a good job. You, on the other hand, want to major in one of the liberal arts, like music or theater arts. Write an essay for Parents' Magazine in which you persuade parents to let their college-aged children make their own decisions and direct their own lives.
  2. You are applying for a summer job. Part of the application asks for an essay explaining your qualifications and experiences. You know you are the best one for the job. Write this essay to convince the employer to hire you. If you persuade this employer, you'll have a fine summer job.

Appendix B

Four transactional writing tasks were used in this study, two explanatory and two persuasive. The holistic raters developed rubrics for each task based on sets of anchor papers, with a 4-paper being the highest quality and a 1-paper being the lowest quality. While the rubrics for each task had individual characteristics, it is possible--in order to save space--to collapse all the rubrics into one, giving the reader a good sense of how raters defined quality during the scoring sessions. It should also be noted that the criteria were arranged in descending order from characteristics the raters considered most important to characteristics they considered least important in affecting the quality of writing. The rubrics that were developed are as follows:

  4. This paper is focused on the writing task and is well organized. The writer makes good use of examples to support generalizations, and points are well developed. There is an excellent sense of audience, and the writer uses an appropriate tone. Excellent syntax is marked by skilled transitions and good mechanics.
  3. This paper has a sense of focus, but not quite as strong as in a 4-paper, and the organization is not quite as effective as in a 4-paper. The writer uses appropriate examples. There is a clearer sense of audience than in a 2-paper, but the writer sometimes assumes too much on the reader's part. There is a variety of syntactical approaches but some flaws in the mechanics.
  2. There is a problem with focus, and the writer is not totally on task. The paper lacks supporting evidence, though the writer does give some relevant information. The writer tries, but fails, to develop ideas. The writer lacks a clear sense of audience, so tone may not be appropriate. The syntax may be confusing and generally lacks variety. There are mechanical problems.
  1. The writer is totally off task, and the paper has no organization. There are few or no examples used to support generalizations, and examples may be extraneous to the task at hand. There is no sense of audience, and tone may be totally inappropriate. The writer assumes the reader will fill in transitions. Syntax is choppy and has no variety. Mechanics are poor.

References


Bridwell, L. S. (1979). Revising processes in twelfth grade students' transactional writing (Doctoral dissertation, University of Georgia). Dissertation Abstracts International, 40, 5765A.

Christensen, F. (1967). Notes toward a new rhetoric: Six essays for teachers. New York: Harper & Row.

Clifford, J. (1981). Composing in stages: The effects of collaborative pedagogy. Research in the Teaching of English, 15(1), 37-53.

Etchison, C. (1985). A comparative study of the quality and syntax of compositions by first year college students using handwriting and word processing. (ERIC Document Reproduction Service No. ED 282 215)

Faigley, L. (1979). Another look at sentences. Freshman English News, 7(3), 18-21.

Hartwell, P., & Bentley, R. H. (1982). Open to language. New York: Oxford University Press.

Hawisher, G. E. (1986). Studies in word processing. Computers and Composition, 4(1), 6-31.

Nash, J., & Schwartz, L. (1988, January). Computers and the writing process. On-Cue, 3-5.

Nichols, R. G. (1986). Word processing and basic writers. Journal of Basic Writing, 5(2), 81-97.