
Designing Research on Computer-Assisted Writing

Lillian Bridwell-Bowles

Investigators have been conducting research on computer-assisted writing for nearly a decade and are fast approaching the anniversary of one of the first major studies, Hugh Burns' 1979 dissertation entitled Stimulating Invention in English Composition through Computer-Assisted Instruction. It seems appropriate, then, to critique the ways researchers have asked questions during this decade. It also seems appropriate to consider the new kinds of research that must be designed to answer both old, unanswered questions and new, increasingly sophisticated ones. Research methodologies need to keep pace with developments in the field of composition studies, as well as with the rapid advances in technology that have marked the field in the 1980s.

In this article, I discuss the work of five researchers who have conducted studies of writing with computers or writing with computer-assisted instruction, and I place their research methodologies in particular research traditions. Because the editor of this special issue asked me to critique my own work, two of my studies are included here, along with studies by Gail Hawisher, Helen Schwartz, Geoffrey Sirc, and Ann Duin. Respectively, these studies represent research on (1) composing with word processing, (2) writing improvement attributed to computers, (3) instructional software, (4) computer networks for writing, and (5) collaborative writing through telecommunications. I have attempted to define the kinds of questions this body of work has answered and those that remain less well resolved. I also include recommendations for future research methodologies and topics. The researchers whose work I have discussed have graciously responded to my queries about their work, and many of their ideas appear here, along with those of others whose work is central to research on computers in composition studies. [1]

Questions About Composing Processes

The first significant line of inquiry in the field which calls itself "computers and composition" concerned the effects of word processing on writers. In the early 1980s, my colleagues and I (Bridwell, Nancarrow, & Ross, 1984; Nancarrow, Ross, & Bridwell, 1984) studied the research that had been done on composing with computers and discovered little outside of industry, where researchers were often concerned with the speed of editing documents that had already been composed or with the various components of word-processing packages (e.g., Card, Moran, & Newell, 1980). Most of the early published reports were testimonials based on writers' personal experiences with word processing (e.g., Asimov, 1982; Berry, 1982; Gould, 1980; McWilliams, 1982; Pournelle, 1979; Rothmann, 1980). Because we were at the beginning of a new line of inquiry, our first studies had to be bibliographical, determining the range of existing knowledge that could inform new questions about the uses of computer technology in writing.

In composition research, the major questions during the late seventies and early eighties were about composing processes; similarly, the early questions about computers focused on the place of computers in the writing process. During this period, Lawrence Frase (1980), one of the principal developers of WRITER'S WORKBENCH, noted that "the critical problems that face us are not what computers can and cannot do. They are problems of how well we know and can describe the writing process, editing skills, and the features that characterize good and bad documents." Frase was interested primarily in developing software that could help writers improve their texts. WRITER'S WORKBENCH, for example, provided little in the way of help during composing, but focused on faulty words and phrases that could be detected and edited or revised by the writer or sometimes by the computer.

Next came research on the actual effects of writing with computers on composing. In 1982, my colleagues and I (Bridwell, Johnson, & Brehe, 1987) began studying experienced, published writers who were converting to word processing for their work. We chose them because we thought their productivity, combined with their self-awareness of their habits as writers, would reveal more to us about composing processes than studies of student writers would. Our methodology was innovative in that we were using the computer to capture evidence of composing processes through keystroke records of the writers' texts as well as evidence of all the changes the writers made as they composed, revised, or edited. Our findings indicated that these experienced writers sought out the computer version of their old, established processes more often than they discovered new composing processes made possible by the new technology.

Then came studies of student writers (Bridwell-Bowles, Sirc, & Brooke, 1985) in which we used a second methodology, a "Playback" program that used keystroke data to recreate composing episodes, making it possible for us to see the text unfolding, just as the writer saw it in real time. [2] We could also slow the replay down or stop it to interview the writer about what she or he remembered about what was happening on the screen. Even though revision was theoretically made easier by the computer, we did not find dramatic improvements in the writing done on computers.
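To make the logic of such a replay concrete, the sketch below shows, in Python, one way a keystroke log might be turned back into an unfolding text that can be slowed down or stopped for an interview. It is only an illustration under assumed conventions: the event format (a timestamp, a cursor position, an operation, and a character) and every name in it are my own, not the data format or code of RECORDING WORDSTAR or WORDSTAR PLAYBACK.

    # Hypothetical illustration of replaying a keystroke log; the event format
    # is an assumption, not the one used by WORDSTAR PLAYBACK.
    import time

    def replay(events, speed=1.0, pause_at=None):
        """Rebuild a text from logged keystroke events, slowed or paused at will.

        events   -- list of (seconds_since_start, position, op, char) tuples,
                    where op is "insert" or "delete"
        speed    -- 1.0 approximates the writer's own tempo; 0.5 is half speed
        pause_at -- optional event index at which to stop, e.g., to interview
                    the writer about that moment on the screen
        """
        buffer = []                 # the emerging text, one character per slot
        last_time = 0.0
        for i, (t, pos, op, char) in enumerate(events):
            time.sleep(max(t - last_time, 0.0) / speed)
            last_time = t
            if op == "insert":
                buffer.insert(pos, char)
            elif op == "delete" and 0 <= pos < len(buffer):
                del buffer[pos]
            print("".join(buffer))  # show the text unfolding, as the writer saw it
            if pause_at is not None and i >= pause_at:
                break
        return "".join(buffer)

    # A writer types "teh", deletes the transposed letters, and retypes "the".
    log = [
        (0.0, 0, "insert", "t"), (0.3, 1, "insert", "e"), (0.5, 2, "insert", "h"),
        (1.2, 2, "delete", ""),  (1.4, 1, "delete", ""),
        (1.9, 1, "insert", "h"), (2.1, 2, "insert", "e"),
    ]
    replay(log, speed=2.0)          # replay at twice the original speed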

Donald Ross (personal communication, 1989), commenting on our keystroke method, says, "if [keystroke research] is followed up with replay and interviews, you have no better way to chart the moment-to-moment behavior of a writer. . . . Nevertheless, few people have done serious work in this area." Challenging me to offer some explanation for the lack of sustained inquiry using this methodology, he offered some explanations of his own:

The methodology is intrinsically boring to the researcher and . . . the number of data are overwhelming . . . [and] this sort of close analysis also runs into the face of current trends in composition research which see the more important issues to be in the broader context of the writing and teaching situations.

Ross is correct on all three counts, but I would add several other explanations. First, many of us who conducted this early research have also had heavy administrative responsibilities. A number of my colleagues who responded to my request for information about their research cited their lack of time as the main reason they had not continued their lines of inquiry. The service that compositionists provide to our institutions as we set up and administer writing programs, often with complex and expensive computer labs, leaves us with less time for pure research than many of our colleagues. Once initial decisions about the effectiveness of computer labs were behind many of us, there was little time left for serious research. The work of teaching writing across the curricula of colleges and universities leads us inevitably to a triple focus on administration, teaching, and research.

Second, we learned a great deal about the composing processes of novice and experienced writers during this era of research--information about their planning and invention strategies, their patterns and tempos of composing, their awareness of their readers and other rhetorical constraints of writing tasks, and their ongoing revision and editing processes. Many of these studies began to reveal similar patterns. Dozens of researchers found that the computer by itself did not seem to change either writing processes or writing quality dramatically, though it did make the process of writing less tedious. When the hope for discovery wanes, many researchers look for new questions. The potential for boredom lies not so much with the tedious analysis of massive amounts of data as with the repetitious patterns one uncovers.

Third, many of us became aware that behavioral observations and cognitive protocols of "thinking aloud" begged for a theoretical framework that was more complex than the data we could capture from mere observations or transcriptions. We needed a theoretical foundation for our data, one that drew from philosophy, critical theory, sociology, and politics to account for the writer at work within a larger socio-political-philosophical matrix. The whole field of composition studies has shifted its interest to questions of the social construction of meaning and texts, feminist rhetorics, and Marxist political readings of texts in the academic culture and beyond. Mirroring these changes, our methodologies shifted from experiments or clinical observations, cloaked in the respectability of "objectivity," to narratives and complex ethnographies that attempt to show what Dale Spender (1985) has called "multidimensional realities," competing but equally valid interpretations of the same events. As we have moved beyond English and composition classrooms to examine writing, we have become aware of the varying nature of texts in different cultures and discourse communities.

Finally, we have become much more sophisticated in our analyses of literacy in particular cultures. Learning to use language is a complex process that is not accounted for when we think of the human brain as a tabula rasa interacting with a word-processing package or with content in lessons, whether on or off the computer. Andrea Herrmann (1987) was one of the early advocates of studying computers in a social context. She knew that we would have to study the effects of computers on the whole writer/student and on the whole learning environment, as she did in her ethnographic study of computers in a classroom.

There were other limits to our research. We studied writers in the beginning stages of their exposure to word processing. We gained access to them as subjects in our research because they wanted to learn to use new programs and were willing to volunteer their time. Now that writers have greater access to computers, much of the novelty is gone, and they are less willing to participate in research. In the early stages, it was easier to recruit subjects because they were interested in our findings and eager to experiment with changes in their own composing processes effected by the new technology. The real effects of writing with computers probably should be measured long after a writer has made initial adjustments to the medium. As James Fallows (1983) predicted in The Atlantic some years ago, we seem to have settled into a comfortable and familiar middle age with computers, perhaps forgetting the curiosity we had in our earlier encounters with them.

Another problem is that we have so many word-processing packages to choose from that we hardly know what we are measuring when we say we are measuring the effects of "word processing." At the time we conducted our early studies, our university and most major businesses and industries used WORDSTAR, so our choice seemed simple. We could justify a major programming expense as we developed RECORDING WORDSTAR and WORDSTAR PLAYBACK. Today, there are at least a dozen word-processing packages that compete for a sizable market share, and their features are often quite different. Compare, for example, features of MicroPro's WORDSTAR with Microsoft's WORD on a Macintosh. Thinking of them as "treatments" in an experimental or clinical paradigm, we cannot say that they are similar enough to be considered equivalent conditions in an experiment. Solving the problem by studying all the major word-processing packages as separate treatments hardly seems feasible, given the volatility of the market. Most recently, Christina Haas (1989) reports on the effects of a new package developed for an educational setting and finds significant effects on planning; perhaps one resolution is to focus our efforts on exceptionally advanced word-processing systems.

Questions About Improvements in Writing

Gail Hawisher's (1987; Hawisher & Fortune, 1988) work represents another kind of question that was asked in the early stages of computer/composing process studies: Is the writing better when the writer uses a computer? Administrators everywhere wanted to know the answer to this question before they invested thousands, and in some cases millions, of dollars in computerized writing labs. Hawisher's studies, first of above-average students and then of basic writers, were outstanding models of experimental design. She asked students to write multiple essays and alternated between computers and paper-and-pencil in the assigned writing tasks. Pre-/post-test ratings of the quality of student writing were the criterion variables. She found no significant differences in either of these studies, despite her sense that there were many changes in the ways the students were working. Commenting on her research, Hawisher (personal communication, 1989) writes, "I firmly believe (after conducting two comparative studies) that experimental designs focusing on improvement in writing quality tell us very little." In one of the best of these studies (Bernhardt, Edwards, & Wojahn, 1989), the authors also recommend descriptive "open, naturalistic investigation of teachers in [computer] lab settings to hypothesize and define variation and adaptation" (p. 129).

Geoffrey Sirc (personal communication, 1989), asked to comment on his reactions to research comparing gains in quality produced by this or that piece of hardware or software, says that "whenever I read articles on the efficacy of word processing or text-checkers or networks, they always evoke the sleazy air of those people who hawk Kitchen Magicians at the State Fair." Agreeing with Hawisher, Sirc argues instead that we should look at changes in the ways students work in classrooms, rather than at the gain scores on tests of writing quality. We in the field should do a better job of convincing administrators, legislators, and all kinds of funding agents that growth in language is a slow process. In a few short weeks or months, we cannot "teach students to write" and test writing growth, no matter how sophisticated our technology becomes, but we can demonstrate that students are actively learning and that good writing is being produced in our classrooms.

Questions about Instructional Software

In an earlier article, Donald Ross and I (1985) critiqued existing software for writers. We found that much of what was being produced fell into the category of "drill and practice" or formulaic instructions for producing the "parts" of an essay, neither of which produced terribly useful lessons or very good writing, for that matter. We also described the limits of computerized text analysis programs. At the time, we found that computerized simulations of realistic activities were the most promising kinds of software available.

Of these, SEEN, developed by Helen Schwartz (1984) and discussed elsewhere in this issue, is probably the most familiar. This software encourages students to develop a hypothesis about a literary character and then to answer tutorial questions about the hypothesis as a way of developing material. An electronic network makes it possible for students to engage in a written dialogue with other students, all writing under secret pen names. Finally, the software provides a written record of all the student's activities on the computer, presumably useful as raw material for more formal writing the student will do on the hypothesis. Other options include the ability to transfer all of the text produced into a word-processing file, as well as versions of the tutorial adapted to almost any field. SEEN simulates a whole set of classroom activities--question-and-answer, class discussion, and freewriting using a heuristic. The twists are that the student controls the sequence, and, because everything takes place in writing, the student has a written record of all the transactions.
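The shape of such a tutorial can be suggested with a brief sketch. The Python fragment below models only the question-and-answer loop and the written record; the prompts, names, and file handling are hypothetical illustrations, not Schwartz's program, and the network and pen-name features are omitted.

    # Hypothetical sketch of a SEEN-style tutorial: the student states a
    # hypothesis, answers heuristic questions in any order, and keeps a
    # written record of every exchange. Prompts and names are assumptions.
    PROMPTS = {
        "evidence": "What does the character say or do that supports your idea?",
        "counter":  "What in the text seems to work against your hypothesis?",
        "compare":  "How does another character throw this trait into relief?",
    }

    def tutorial(character, hypothesis, transcript_path="seen_session.txt"):
        """Run a question-and-answer session and record every exchange."""
        record = [f"Hypothesis about {character}: {hypothesis}"]
        while True:
            choice = input(f"Pick a question {sorted(PROMPTS)} or type 'quit': ").strip()
            if choice == "quit":
                break
            if choice in PROMPTS:
                answer = input(PROMPTS[choice] + " ")
                record.append(PROMPTS[choice] + "\n" + answer)
        # The transcript becomes raw material for the student's formal essay
        # and can be opened later in any word-processing program.
        with open(transcript_path, "w") as f:
            f.write("\n\n".join(record))
        return record

    tutorial("Hamlet", "Hamlet delays because he distrusts the ghost.")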

Schwartz's research was an exemplary model for the kind of classroom-based research recommended by Mohr and McLean (1987). She observed her students in great detail and wrote stories about their experiences in a number of articles. She gathered numerical data on questions that lent themselves to quantification: Which of the options did they use most? In what order? Did students report that they read differently because they were using the program? Were they satisfied that the program helped them to write? In an interesting aside in her report on SEEN, Schwartz notes an "effect" that she says is "almost too subjective to be treated as data": that she got to know her students better while they were working on her program and that she felt like a more humane teacher. In this kind of reporting, Schwartz is both participant and observer. Although we could hardly classify her reports as ethnographies, they tell us the kinds of things that people in the classroom want to know: What happened? (narratives, stories), and How well did it work? (opinion, supported by examples from many observations).

Commenting on her work, Schwartz (personal communication, 1989) reports that further research during a sabbatical at a major research center has convinced her that the important new focus is "implementation, not what COMPUTERS do, but how teachers and students use the medium to change teaching and learning goals." She hypothesizes that developments such as hypertext, electronic mail, and computer conferencing change reading and writing processes. As evidenced by her essay in this issue, Schwartz is also interested in new definitions of literacy that will emerge because of technology.

Studies of New Environments for Writing:
Collaboration Through Networks and Telecommunications

Geoffrey Sirc (1988) is one of the few researchers who has studied student writers at work using a local area network (LAN). Although the research here is too new to be critiqued thoroughly, Sirc (personal communication, 1989) justifies its existence by arguing that in this environment we can best study the effects of computers on the behavior of people in groups. Citing a wide range of theorists, Sirc states, "I'm beginning to wonder if there are any aspects to life that are not in some way determined collectively, socially." According to Sirc, LANs provide closer interaction, a smoother classroom delivery environment for the teacher, and the ability to observe learners in groups unobtrusively. He is definitely not interested in "the umpteenth look at how students use word processing," but rather in the more significant ways that we can study human nature and communication via technology. His important list of questions includes the following: "What do [students] talk about, what do they value, how do they identify themselves and others, how does their communication show the dynamics of power in which they are circumscribed?"

Ann Duin (Duin, Jorn, & DeBower, in press) has correctly noted that documents written collaboratively are commonplace in business and industry and that academic preparation in writing should include collaborative experiences. She draws on a number of composition theorists for this view (e.g., Ede & Lunsford, 1989; Trimbur, 1985) and introduces telecommunications to facilitate collaboration in her research report entitled "Collaborative Writing--Courseware and Telecommunications." Duin and her design team have built a complex package of instructional software designed to help students use the AppleShare telecommunication network as they write collaboratively at sites on both the St. Paul and Minneapolis campuses of the University of Minnesota. Students engage in tutorials designed to help them generate ideas for memos, proposals, short reports, and formal reports. Then they analyze their writing processes and their strengths and weaknesses, and determine how they will work together to acquire information; research a topic; analyze potential audiences; and organize, revise, and edit resulting documents. Using file servers, the instructor sets up four kinds of computer "folders" where writing is stored: "group" (shared with peers), "instructor" (shared with instructor), "conferencing" (electronic mail to exchange ideas with each other and with other groups), and "private" (individual student) folders. Throughout the collaborations, the instructor monitors and guides the students' work.
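One way to picture this arrangement is as a small model of the four folder types and of who may read and write in each. The Python sketch below is a hypothetical illustration based only on the description above; the names and permission assignments are my assumptions, not the actual AppleShare configuration or Duin's courseware.

    # Hypothetical model of the four folder types for one student in one
    # writing group; the permission scheme is an assumption for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Folder:
        name: str
        readers: set = field(default_factory=set)   # who may open documents here
        writers: set = field(default_factory=set)   # who may add or revise them

    def course_folders(student, group, instructor):
        """Create the four folder types for one student in one writing group."""
        group_and_instructor = set(group) | {instructor}
        return {
            "group":        Folder("group", readers=set(group_and_instructor),
                                   writers=set(group)),
            "instructor":   Folder("instructor", readers={student, instructor},
                                   writers={student, instructor}),
            "conferencing": Folder("conferencing", readers=set(group_and_instructor),
                                   writers=set(group_and_instructor)),
            "private":      Folder("private", readers={student},
                                   writers={student}),
        }

    folders = course_folders("ann", group=["ann", "lee", "sam"], instructor="prof")
    print(sorted(folders["group"].readers))   # peers and the instructor see drafts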

In her research, Duin asked individual students to keep extensive logs of their activities, impressions, and frustrations. To date, Duin notes only positive outcomes: more collaboration, better writing, and more positive attitudes toward writing. Asked if she might suggest any negative effects, Duin (personal communication, 1989) noted only that students were now demanding to be taught in the computer environments and that faculty who teach more traditional classes were not responding to the need because they often see little value in the collaborative model.

Clearly, the most successful of these new studies take for granted that using computers is a good thing for students. They have moved beyond basic questions such as "How do students write with word processing?" and on to questions such as "How can we organize an environment for learning that will use what we know about learning, the social construction of texts, and technology?"

Conclusions

Computer-assisted writing simply is or will be a fact of everyone's life in the latter part of the twentieth century. Computers are fast replacing typewriters in offices, schools, and homes. Desktop publishing, electronic mail, and problem-solving software in nearly every field make computer literacy a necessity in a college education. Computers can bring together communications, video, sound, and calculating capabilities in ways that are not possible with any of the tools we have previously used. We need to argue for computer-assisted instruction not on the basis of gain scores in writing quality, but on the obvious grounds that computers (along with telephones and fax machines) are and will continue to be the primary medium for written communication. Studies of the growth of computer use in all facets of modern life should be powerful evidence of the need for computer literacy.

Writing about the future of research on writing with computers, Cynthia Selfe (personal communication, 1989) sets up a powerful set of questions about computer technology in our classrooms:

How can we use computers as a catalyst for positive social and political change in our writing classrooms and our educational system? How can we use computers to help us address the marginalization and silencing of individuals because of race, age, gender, handicap? How can we use computers to promote increasingly egalitarian exchanges among groups of people within our classrooms who have different levels of privilege and power? How can we use computers to promote both collaborative activities among writers and to support dissent in its most productive forms?

Studies of keystrokes, gain scores, and protocols did not directly address these issues. It may be that we will come back to them after we have satisfactory explanatory theories from which to build hypotheses that call for empirical, numerical research designs, but for a time it will be necessary for us to depart from them.

Lillian Bridwell-Bowles teaches English Composition at the University of Minnesota in Minneapolis.

Notes

  1. Contributors to this article were Stephen Bernhardt, James Collins, Terence Collins, Ann Duin, Gail Hawisher, Donald Ross, Helen Schwartz, Cynthia Selfe, and Geoffrey Sirc. These researchers responded in writing to the following questions: (1) How has your thinking changed since you first started conducting research on questions about computers in our field? (2) If you could redesign one of your studies, which would it be and how would you do it differently? (3) If you know how others have applied your findings, do you think they did so appropriately? (4) What had you hoped to learn in one or more of your studies that is still unanswered? (5) What do you think you know definitively, based on your research? (6) What are the important new questions? (7) Are there any generalizations about past, present, or future research in this area that you think have been overlooked by other articles? (8) What advice about methodology do you have for current and new researchers? (9) What methods beyond those based on models from social science can you recommend?

  2. The study of experienced writers was completed in 1983, before this student study, but because of some intricate history with the publishing of Ann Matsuhashi's edited collection, it appeared after the student study, which was published in 1985.

References

Asimov, I. (1982). The word processor and I. Popular Computing, 1(4), 32-34.

Bernhardt, S. A., Edwards, P., & Wojahn, P. (1989). Teaching college composition with computers: A program evaluation study. Written Communication, 6(1), 108-133.

Berry, E. (1982). Writing with a word processor for scholars, poets, and freshmen. Paper presented at the Modern Language Association Annual Convention, Los Angeles, CA.

Bridwell, L. S., Johnson, P., & Brehe, S. (1987). Computers and composing: Case studies of experienced writers. In A. Matsuhashi (Ed.), Writing in real time: Modeling production processes (pp. 81-107). Norwood, NJ: Ablex.

Bridwell, L. S., Nancarrow, P. R., & Ross, D. (1984). The writing process and the writing machine: Current research on word processors relevant to the teaching of composition. In R. Beach & L. S. Bridwell (Eds.), New directions in composition research (pp. 381-398). New York: Guilford Press.

Bridwell-Bowles, L. S., Sirc, G., & Brooke, R. (1985). Revising and computing: Case studies of student writers. In S. Freedman (Ed.), The acquisition of written language: Revision and response (pp. 172-194). Norwood, NJ: Ablex.

Burns, H. (1979). Stimulating invention in English composition through computer-assisted instruction. Unpublished doctoral dissertation, The University of Texas at Austin.

Card, S. K., Moran, T. P., & Newell, A. (1980). Computer text-editing: An information-processing analysis of a routine cognitive skill. Cognitive Psychology, 12(1), 32-74.

Duin, A. H., Jorn, L., & DeBower, M. (in press). Courseware for collaborative writing. In M. M. Lay & W. M. Karis (Eds.), Collaborative writing in industry: Investigations in theory and practice. Farmingdale, NY: Baywood Publishing Co.

Ede, L., & Lunsford, A. (1989). Collaborative learning: Lessons from the world of work. Writing Program Administrator, 9(3), 17-26.

Fallows, J. (1983, March). Computer romance, part II. The Atlantic, 107-109.

Frase, L. T. (1980). WRITER'S WORKBENCH: Computer supports for components of the writing process. Bell Laboratories Technical Report. Murray Hill, NJ: Bell Laboratories.

Gould, J. D. (1980). Experiments on composing letters: Some facts, some myths, and some observations. In L. W. Gregg & E. R. Steinberg (Eds.), Cognitive processes in writing (pp. 97-127). Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.

Haas, C. (1989). How the writing medium shapes the writing process: Effects of word processing on planning. Research in the Teaching of English, 23, 181-207.

Hawisher, G. (1987). The effects of word processing on the revision strategies of college students. Research in the Teaching of English, 21, 145-159.

Hawisher, G., & Fortune, R. (1988). Research into word processing and the basic writer. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, Louisiana.

Herrmann, A. (1987). An ethnographic study of a high school writing class using computers: Marginal, technically proficient, and productive learners. In L. Gerrard (Ed.), Writing at century's end: Essays on computer-assisted composition (pp. 79-91). New York: Random House.

McWilliams, P. (1982). Writing poetry on a word processor. Popular Computing, 1(4), 38-40.

Mohr, M. M., & McLean, M. S. (1987). Working together: A guide for teacher-researchers. Urbana, IL: The National Council of Teachers of English.

Nancarrow, P., Ross, D., & Bridwell, L. S. (1984). Word processors and the writing process: An annotated bibliography. Westport, CT: Greenwood Press.

Pournelle, J. (1979). Writing with a microcomputer. On Computing, 1(1), 12-14, 16-19.

Ross, D., Jr., & Bridwell, L. S. (1985). Computer-aided composing: Gaps in the software. In S. Olsen (Ed.), Computer-aided instruction in the humanities (pp. 103-115). New York: Modern Language Association.

Rothmann, M. A. (1980). The writer's craft transformed: Word processing. On Computing, 2(3), 60-62.

Schwartz, H. (1984). SEEN: A tutorial and user network for hypothesis testing. In W. Wresch (Ed.), The computer in composition instruction: A writer's tool (pp. 47-62). Urbana, IL: National Council of Teachers of English.

Sirc, G. M. (1988). Learning to write on a LAN. T.H.E. Journal, 15(8), 100-104.

Spender, D. (1985). Man made language (2nd ed.). London: Routledge & Kegan Paul.

Trimbur, J. (1985). Collaborative learning and teaching writing. In B. W. McClelland & T. R. Donovan (Eds.), Perspectives on research and scholarship in composition (pp. 87-109). New York: The Modern Language Association.