7(3), August 1990, pages 73-79

Computer Teachers Respond To Halio

In "Student Writing: Can the Machine Maim the Message?" Marcia Peoples Halio (1990) of the University of Delaware reports that first-year writing students using Macintosh computers produced consistently poorer essays than their peers working with MS-DOS computers. She attributes the difference in the quality of student writing to differences between the computers themselves, specifically to the difference between the Macintosh's graphical interface and the character-based, command-line interface of the MS-DOS machines. Her own experience, along with that of four instructors who taught on both Macintoshes and IBMs during the Fall 1988 semester, "seems to demonstrate," she says, that "using the Mac or the IBM could have [a significant] effect on students' writing" (p. 17).

As we shall indicate below, Halio's article is so seriously flawed by methodological and interpretive errors that it would probably have been dismissed had it appeared in a journal directed to an audience of professional writing teachers. Publication in Academic Computing has given it wide circulation, however, not only among faculty members involved with writing instruction, but also among administrators responsible for purchasing equipment for their campuses. Its potential impact is therefore considerable.

This letter grows out of discussions taking place over a Bitnet discussion loop called Megabyte University (moderated by Fred Kemp at Texas Tech University) between January 20 and March 2, 1990. Megabyte University "enrolls" some 70 people interested in writing instruction, including faculty members and graduate students from universities and colleges across the United States. Approximately a dozen members became involved in the discussion of Halio's article. Most were initially inclined to dismiss the article as trivial, until faculty members participating both in Megabyte University and in another loop called Humanist with some 600 members in 20 countries (edited by Willard McCarty at the University of Toronto) reported receiving photocopies of Halio's article from deans and other administrators, with comments to the effect that Halio had "proved" the inferiority of the Macintosh as a machine for writing instruction.

These reports have persuaded several individuals that we should explain the problems we see in Halio's article to readers of Academic Computing who may not be aware of control procedures used in writing research and who may misinterpret the numerical data Halio provides. The signers negotiated the composition of this "corporate" reply in open discussion on Megabyte University, posting drafts and incorporating the ensuing comments. The result has been shaped not only by those whose names appear below, but also by the objections and counterarguments of individuals who have chosen not to sign; the latter are not, of course, accountable for the contents of this letter. We do not seek to demonstrate that Halio is wrong. Our point is that the University of Delaware experience may not lead to the conclusions Halio reaches, and that a far more careful study is required.

Halio offers the following evidence:

  1. Her own qualitative observations about the writing of students taught by herself and by several other instructors.
  2. Results obtained from the WRITER'S WORKBENCH text analysis program's Style module that she believes to have "confirmed" her observation (p. 18).
  3. The remarks of four unidentified instructors who responded to a query from her.

Halio also quotes at length from three student papers produced on the Macintosh (p. 17). They are indeed poorly written, though they might have been much worse without deserving the thrashing Halio gives them (p. 17). Furthermore, much of Halio's contempt is directed at the topics chosen by Macintosh users (p. 17), although she does not indicate how two of these students came up with identical topics ("American Eaters") if, as she claims, they were given only general "writing suggestions" (p. 17).

Direct comparison of these examples with sample essays produced on IBM computers would have been extremely helpful, but Halio offers no samples of writing done on an IBM. Instead, she uses results obtained from WRITER'S WORKBENCH to make her case against the Macintosh. Macintosh users wrote "fewer complex sentences" than IBM users, used more "to be" verbs, and wrote shorter sentences; their essays also received lower scores on the Kincaid readability scale--7.95 for Mac users, as opposed to 12.1 for IBM users. These results, says Halio, "confirmed [her] initial impressions" (p. 18); but they do not necessarily mean what she says they do.

The WRITER'S WORKBENCH Style program supplies a great deal of information about a piece of text, but none of it concerns the content or the quality of that text. That is, the program cannot tell us anything about what an essay says, or about its value; and it is likewise unable to determine the relationship between content and the stylistic features it is designed to measure.

Moreover, the program's analyses of those features are only about 80% accurate, and the program's output can confuse people who don't know how to interpret it. Three of Halio's WRITER'S WORKBENCH measurements are suspect even if the program's information is accurate:

  1. Readability tests measure readability, not writing ability. In general, a high readability score means that the writing is hard to understand. Clear business writing and general magazine writing, for instance, normally score from 8 to 10 on the Kincaid scale, not 12.1.
  2. Sentence length, as Halio says, is related to readability. In general, longer sentences are harder to understand. Most writers of nontechnical prose average only 16-20 words per sentence, and many style-analysis programs (and many readers) would reject the 22.6-word average that Halio cites as desirable.
  3. The percentage of "to be" verbs is too high in both samples that Halio cites. Even the 23% scored by IBM users in Halio's sample exceeds the 15% to 19% usually found in professional writing.

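The point about the Kincaid scale can be made concrete. The standard Flesch-Kincaid grade-level formula (which may differ in minor details from the version WRITER'S WORKBENCH implemented) is a function of average sentence length and average syllables per word only; it never examines content or quality. A minimal sketch, using hypothetical word, sentence, and syllable counts chosen for illustration:

```python
def kincaid_grade(total_words, total_sentences, total_syllables):
    """Standard Flesch-Kincaid grade level. Depends only on average
    sentence length and average syllables per word -- it cannot see
    content, organization, or quality."""
    avg_sentence_len = total_words / total_sentences
    avg_syllables_per_word = total_syllables / total_words
    return 0.39 * avg_sentence_len + 11.8 * avg_syllables_per_word - 15.59

# Two hypothetical passages with identical vocabulary (1.5 syllables
# per word) that differ only in average sentence length:
# 16 words/sentence (typical nontechnical prose) vs. 22.6 (Halio's IBM figure).
shorter = kincaid_grade(1600, 100, 2400)   # ~8.35
longer = kincaid_grade(2260, 100, 3390)    # ~10.92
```

Merely stretching sentences from 16 to 22.6 words raises the score by more than two grade levels, with not a word of the text changed. A gap like Halio's 7.95 versus 12.1 can therefore reflect nothing more than sentence length and word length, not better writing.
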
Even the formalistic measures Halio uses are open to multiple interpretations, then, and without sample texts for comparison, they do not by any means prove that students writing on the IBM produced significantly better work than those writing on the Macintosh. Certainly, these data offer no grounds for concluding that the computers caused the differences Halio perceived in the students' writing.

Halio provides a similarly misleading description of the Delaware student population. She says that "all students in the computer sections [of Delaware's first-year writing course] have roughly comparable levels of writing ability" because their "SAT scores, as well as the results of a placement essay have put them in the medium writing-ability range (they did not qualify for the Honors Program, nor were they placed in the remedial sections)" (p. 17). This "medium writing-ability range" is wider than Halio implies: One presumes, for instance, that instructors felt justified in using the full grading scale--A to F--to delineate differences among individual students.

A more useful study would provide additional detail about the University of Delaware students, their backgrounds, and the attitudes toward writing they brought with them into the classroom--in other words, information about the factors influencing the students' classroom performance and, indeed, their initial choice of which sections to take. Halio completely ignores information crucial to evaluating student writing--information about the students' racial, ethnic, and class affiliations, about their gender, and (not least, in this context) about their previous experience with computers.

Halio says that students were free to choose between Macintosh and IBM-based sections of the first-year writing course; and it is precisely because the students were free to choose that we need to know so much more about them and the reasons for their choices. For example, did students with little experience in writing on computers choose the "friendly" Macintosh? The self-selecting nature of this sample may well have affected Halio's observations.

Finally, a reliable study would provide specific information about Delaware's writing curriculum, and the ways in which it integrates computers into the composing process and the curriculum generally. Halio says nothing about the curriculum, however. Moreover, she implies that students are left to figure out for themselves how best to use the computers for writing:

[A]fter approximately one and one-half hours of instruction they then work to improve their computer skills from self-paced handouts prepared by our Academic Computing Support services. They use the public sites on their own time to write their papers. (p. 17)

Ninety minutes may be enough training for students to learn the basic elements of word processing, though even that depends in large part on their familiarity with word processing and the particular word-processing package involved, which Halio does not identify. (Both The University of Texas at Austin and Iowa State University, for instance, offer Beginning, Intermediate, and Advanced courses, each an hour and a half long, on both MS-DOS and Macintosh versions of MICROSOFT WORD.) But there is a considerable difference between using a word-processing program on a computer to enter and manipulate text that has already been composed, and using the word-processing program as a fundamental part of the composing process itself. Even trained, professional writers making the transition from pen and legal pad or from typewriter to word-processing program require considerably longer than an hour and a half's training to integrate the word-processing program completely into their methods of composition.

For all the flaws in her article, Halio has raised serious questions about the effects of hardware and software design upon those who write with computers, and we must certainly investigate those effects more fully. As we do so, we will have to consider the strong possibility that we may need to adapt our writing curricula to computer-based (and graphics-oriented) writing technology. The computer changes writing practices, and the further the technology diverges from traditional practice, the more teaching practice has to take that shift into account. It is not in raising the questions, but in claiming to have answered them and in rushing prematurely to publication rather than waiting for the results of the "more carefully controlled experiment" she says she is now conducting (p. 45), that Halio--and Academic Computing--have acted irresponsibly.