6(1), November 1988, pages 47-61

Style and Usage Software:
Mentor not Judge

Randy Smye

At the 1986 International English conference in Ottawa, Betsy Barber described her recent study of English teachers whom she had queried about their views on the effects of technological change on English teaching. The majority had answered Barber's survey with a yes: "technology will be directly responsible for significant change in English teaching between now and the year 2000" (Barber, 1986). One point in that report made me refocus on aspects of my own work. The commentators in the report, a wide range of international composition specialists, agreed on the importance of at least one field of technological advance that could have the broadest impact on the schools: educational software that is heuristic, interactive, and contextual.

Many of us are familiar with programs described by those three adjectives: for example, WRITER'S HELPER or HBJ WRITER. However, in contrast to these and other prewriting programs, computer-assisted revision often seems more summative or evaluative, focusing on the surface correctness of text at the completion of the writing process. Unfortunately, many instructors may be tempted to allow this "traditional" paradigm to direct their use of technology. The limits of a microcomputer's speed and memory often dictate that style and usage commentary must lurk like a red-pencilling terror at the end of students' writing processes. However, I'd like to show how the determined instructor can use style and usage checkers (for instance, HOMER, GRAMMATIK, WRITER'S WORKBENCH) for writing exercises that encourage a recursive pattern of student revision. Let's look at several instructional design ideas that can create "process-centered" revision software that is heuristic, contextual, and interactive.

First, how might the technology of a style and usage checker tempt curriculum design? Notice the procedure of Michael Cohen's HOMER, which runs on the Apple IIe. Typically, students compose an assignment and then choose to have HOMER read their texts to find four different word types simultaneously: prepositions, to be verbs, words containing -ion or -sion, and vague words. The program keeps track of how often each word type appears in a text and also counts the total number of words and sentences. Students can choose several options for displaying the resultant information and statistics.
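HOMER itself runs on an Apple IIe, but its counting procedure is simple enough to sketch in a few lines. The sketch below is only an illustration of the idea; the word lists are small stand-ins, not HOMER's actual dictionaries.

```python
import re

# Illustrative stand-ins for HOMER's dictionaries (not its actual lists).
PREPOSITIONS = {"of", "in", "to", "for", "with", "on", "at", "by", "from"}
TO_BE_FORMS = {"be", "is", "are", "was", "were", "been", "being", "am"}
VAGUE_WORDS = {"aspect", "situation", "thing", "factor", "area"}

def homer_style_report(text):
    """Count HOMER's four word types plus total words and sentences."""
    words = re.findall(r"[A-Za-z'-]+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "words": len(words),
        "sentences": len(sentences),
        "prepositions": sum(w in PREPOSITIONS for w in words),
        "to_be": sum(w in TO_BE_FORMS for w in words),
        "ion_sion": sum(w.endswith(("ion", "sion")) for w in words),
        "vague": sum(w in VAGUE_WORDS for w in words),
    }

report = homer_style_report("The situation was vague. A decision is a thing of action.")
```

The resulting dictionary is the raw material for whichever display option the student chooses.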

However, to use the program the writer must stop writing, quit the word-processing package, load HOMER, process a chosen text, and wait for the results to be displayed. The writer must then reverse the procedure to revise. Considering the speed of an Apple IIe, the size of a typical essay text file, and the processing demands of HOMER's analysis algorithms and dictionaries, a typical checking cycle might take 15-20 minutes. The MS-DOS-based GRAMMATIK, one of several style checkers for business writing, focuses on readability, passive voice, and possibly misused words or phrases. But it, too, follows the familiar "write-then-check" model burdened by a lengthy checking cycle. Consider also that these revision activities might be occurring in the typical overburdened school or college computer site where a host of users must compete for meagre computing resources.

A composition instructor's options seem limited. Few of us, under the circumstances I have described, would consider frequent use of a revision tool. The usefulness of style and usage feedback diminishes when the user faces the software's laborious process.

The single-tasking, limited memory of the PC forces linearity upon computers-and-composition curriculum design. Computer-assisted revision could, with some justification, be relegated to a world as separate from the composing process as a teacher's judgmental red marks are from the sweaty, immediate world of students' composing. The only feasible option might be a single run of the software at the completion of a word-processing session. The current "ecology" of computers and composition, PCs running in overused computer labs, offers few opportunities for frequent cycles of revision software. The technology drives composition instructors to an inappropriate composing paradigm: linear and judgmental.

The burden of technology may influence curriculum in yet another fashion. I suspect that HOMER, GRAMMATIK, and my own particular favourite, the UNIX program STYLE, need more controls built into the categories and quantity of their data. For instance, a surfeit of stylistic feedback may prevent a writer from attending to an appropriate feature; and, conversely, restricted feedback may focus on an inappropriate feature. How, then, does the writer most effectively incorporate the computer program's feedback into subsequent revision and planning? And, moreover, how does an instructor selectively integrate feedback features into a sequence of instruction?

The very wealth of STYLE can be a curse of plenty. Not only does STYLE comment on readability, numbers of prepositions, and to be verbs, but STYLE also attempts a more sophisticated analysis. The writer can find information on sentence type (for instance, simple, complex, compound, and compound-complex), word usage (for example, the percentage of various parts of speech), and, as well, percentages for sentence beginnings. STYLE relies on another powerful UNIX tool, PARTS, which assigns parts of speech to each word in a given text file. PARTS uses a small dictionary of function words, irregular verb forms, and word endings to classify words. The program then classifies any remaining words by looking for relations between words, using rules of English structure. Primarily, STYLE then prints a summary table of statistics produced by PARTS. But secondarily, STYLE can also be flagged to print sentences with certain characteristics: for instance, all sentences with a passive voice, nominalization, or a readability higher than a specified number. Here is a typical STYLE table:

readability grades:
  (Kincaid) 18.5 (auto) 19.6 (Coleman-Liau) 14.6 (Flesch) 17.0 (19.2)

sentence info:
  no. sent 6 no. wds 191
  av sent leng 31.8 av word leng 5.32
  no. questions 0 no. imperatives 0
  no. content wds 101 52.9% av leng 7.49
  short sent (<27) 33% (2) long sent (>42) 17% (1)
  longest sent 47 wds at sent 6;
  shortest sent 17 wds at sent 5

sentence types:
  simple 33% (2) complex 50% (3)
  compound 0% (0) compound-complex 17% (1)

word usage:
  verb types as % of total verbs
  tobe 62% (8) aux 31% (4) inf 0% (0)
  passives as % of non-inf verbs 38% (5)
  types as % of total prep 16.8% (32) conj 4.7% (9) adv 5.2% (10)
  noun 27.7% (53) adj 14.1% (27) pron 3.1% (6)
  nominalizations 7% (14)

sentence beginnings:
  subject opener: noun (0) pron (0) pos (0) adj (3) art (1) tot 67%
  prep 33% (2) adv 0% (0)
  verb 0% (0) sub_conj 0% (0) conj 0% (0)
  expletives 0% (0)

A wealth of information! But what to do with it?
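Before answering, it may help to see how little machinery such analysis requires. PARTS's dictionary-plus-ending procedure can be sketched roughly as follows; the function-word list and suffix rules here are tiny illustrative stand-ins, not PARTS's actual data, and the final structural-rule pass is reduced to a simple default.

```python
# Rough sketch of PARTS's approach: look a word up in a small dictionary
# of function words, then fall back on word-ending rules. (Illustrative
# data only; PARTS also applies rules of English structure at the end.)
FUNCTION_WORDS = {
    "the": "art", "a": "art", "an": "art",
    "of": "prep", "in": "prep", "on": "prep",
    "and": "conj", "but": "conj",
    "he": "pron", "she": "pron", "it": "pron",
    "is": "tobe", "was": "tobe", "are": "tobe",
}
SUFFIX_RULES = [("tion", "noun"), ("ment", "noun"), ("ness", "noun"),
                ("ly", "adv"), ("ous", "adj"), ("ful", "adj"),
                ("ize", "verb"), ("ed", "verb")]

def classify(word):
    """Assign a part of speech by dictionary lookup, then suffix rules."""
    w = word.lower()
    if w in FUNCTION_WORDS:
        return FUNCTION_WORDS[w]
    for suffix, tag in SUFFIX_RULES:
        if w.endswith(suffix):
            return tag
    return "noun"  # stand-in for PARTS's structural rules

tags = [classify(w) for w in "The transformation was quickly famous".split()]
```

STYLE's summary table is then just arithmetic over these tags: percentages of each part of speech, sentence-opener counts, and so on.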

I've tried to surface two issues with style and usage checkers: that technology forces us to accept an inappropriate revision model and that the program output requires a supporting instructional context for a writer to focus effectively on a revision feature. The strength of HOMER is its integration with a specific revision pedagogy: Richard Lanham's Paramedic Method of stylistics as described in his book Revising Prose (1979). In contrast, most style-checking features will judge without explicit revision support. But even if the writer were to gain incentive or directions from the software, the burden of time discourages a recursive revision and planning cycle. Let's face these two issues.

First, the technology. The relentless increase in computer speed and memory and the proportional drop in price can now affect our composition curriculum. The new family of products with 68020 and 80386 chips brings to our classrooms the possibility of a multitasking and recursive writing environment. No longer do we have to suspend the composing process and wait several minutes for revision feedback. New software can integrate prewriting, writing, and post-writing heuristics.

Current primitive examples point the way. RAM-resident programs already allow a degree of integration for spelling and outlining with word processing. In addition, a Macintosh loaded with SWITCHER, WORD (3.1), and communication software allows the writer to do the same but adds the possibility for easy spooling of an analysis task to a remote machine running WRITER'S WORKBENCH. However, the most exciting signposts for the future are the Virtue Workstations under development at Carnegie Mellon's Andrew Project and the Intermedia Project at Brown University. Here, developers are working with Sun workstations, which typically involve large bit-mapped screens as well as the speed and multitasking of UNIX running on extremely fast processors. Finally, the "writer's workstation" concept has arrived.

Let me illustrate the possibilities for a more current revision paradigm from my own experience at York University where I spent 1986-87 as Director of the new Computer-Assisted Writing Centre. After evaluating affordable text editors for UNIX, the standard ex/vi editor and the Rand "E" editor, we began to adapt Richard Stallman's GNUEMACS (Free Software Foundation). EMACS was originally developed at the Massachusetts Institute of Technology as a programmer's environment for working on UNIX multi-tasking, multi-user computer projects.

This has turned out to be an exciting choice because EMACS (a Microsoft WORD type of environment) allows faculty to create their own student word-processing package through an impressively flexible use of the standard VT220 keyboard. The composition specialist can define what composing functions are allocated to which keys. For instance, a student, while composing in EMACS, could press F19 and watch a simultaneous WRITER'S WORKBENCH analysis appear in a window. Of course, this eliminates lines at the printer or those mind-numbing waits for an output file while your terminal sits frozen, incapable of doing anything else. But, for me, the major benefit is that, finally, we have a technology that encourages style and usage revision at any stage of the student's composing process.

Here is a rough idea of what STYLE looks like running within EMACS. The top half of the screen holds the text available for editing, while the output is sent to a buffer in the bottom half of the screen. The output buffer could be copied into the text or saved as a separate file. The writer could make revisions in the source text, push F19 again, and watch the effect of the revisions on the screen below.

Despite the marvels of EMACS and the power of a computer like the Sun 3/160s York uses, the problem I raised earlier still exists. How does a composition instructor use this software in a specific curriculum? To summarize the problems with most style and usage output, you could characterize the results as the same type of information overload that students must deal with when their innocent papers are returned, blotched with the zealous instructor's supposedly helpful notes. A wealth of information! But what to do with it?

For those familiar with the approach of Roger Garrison (1981) or Donald Murray (1968), the answer may be easy. Build sequences of student writing conferences that focus on a limited series of relevant revision priorities. Avoid the disincentive dump of extensive commentary. From this perspective, the limits of HOMER are a virtue: commentary on a very specific, manageable revision domain. We must begin to adapt the commentary of more revision software to focus on one or two salient points relevant to where students are in any stage of a composition curriculum.

The York Centre will differ to a degree from some already existing models of university computer writing centres, which typically serve composition courses or various word-processing needs. York's ambition is to have the Centre provide a special resource for three areas of need: writing-intensive courses, individualized instruction by the Writing Workshop and Essay Tutoring Centre, and the more traditional need for undergraduate word processing and text analysis. Writing-intensive courses at York are regular academic courses designed to improve student writing in a wide range of disciplines. More generally, they are courses that employ what has come to be called the "writing through the curriculum" method of teaching writing.

----Emacs: test (Text Fill)----Top------------------------
/usr/local/wwb/bin/style -mm-ll test

readability grades:
  (Kincaid) 16.1 (auto) 16.5 (Coleman-Liau) 14. (Flesch) 17.0 (24.6)

sentence info:
  no. sent 8 no. wds 201
  av sent leng 25.1 av word leng 5.38
  no. questions 0 no. imperatives 0
  no. content wds 122 60.7% av leng 6.9
  short sent (<20) 13% (1) long sent (>35) 13% (1)
  longest sent 41 wds at sent 2; shortest sent 10 wds at sent 8

sentence types:

--**-Emacs: *ShellCommand Output* (Text Fill)----Top------
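The readability grades that head these tables come from standard formulas combining average sentence length and average word length in syllables. As a hedged illustration, the Flesch-Kincaid grade level is 0.39 x (words per sentence) + 11.8 x (syllables per word) - 15.59; the syllable counter below is a crude vowel-group heuristic, far simpler than whatever WRITER'S WORKBENCH actually uses.

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def kincaid_grade(text):
    """Flesch-Kincaid grade level for a text sample."""
    words = re.findall(r"[A-Za-z'-]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

grade = kincaid_grade("The cat sat on the mat. It was flat.")
```

On very short, monosyllabic samples like this one the formula can even go negative, which is why STYLE reports a range of several formulas rather than trusting any single grade.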

We might also want to consider the perspective of Sondra Perl, Sharon Pianko, or Nancy Sommers. Our curriculum should help students extend or elaborate their composing processes (Perl, 1979; Pianko, 1979) and look at revision as an ongoing part of the writing process, not as an isolated step at the end of the process (Sommers, 1980). If we value an extended and recursive electronic composing process, I believe it is essential to use style and usage commentary early in a multiple-draft composing process. If we value revision, let us bring revision heuristics into the curriculum. When used near the beginning of a writing project, style and usage software has the potential for encouraging students to experiment with and model a variety of revising options. Because multi-tasking software now eliminates the necessity of viewing invention and revision as separate, linear processes, new technology may allow more of us to manifest our values in the curriculum we shape with computers.

At my home institution, Sheridan College (Brampton, Ontario), we have developed different implementations of STYLE to give feedback specific to the appropriate rhetorical mode of an assignment. In several sections of Sheridan's first-year Writing Lab course, for instance, students use STYLE on four different occasions while word processing multiple drafts of four different writing assignments. Within each assignment, the students use STYLE for a different purpose. As an example, after the first draft of the description assignment, STYLE feedback focuses on descriptive language and sentence combining for sentence variety. The feedback limits itself to points students have previously covered in our SHERIDAN PREWRITER Description dialogue.

Here is what we did. I first wrote a script with holes available for inserting variables read from the STYLE statistical table. My programmer and I spent about three hours programming a unique version of the STYLE printout: not a major task. The text beginning with the banner below represents the feedback for a description assignment. The notes include standard AT&T WRITER'S WORKBENCH commentary mixed with my own text.


/usr/randy/bench RANDYSMYE Thu Jun 26 10:55:49 1986


Here is some help in discovering ideas for the revision of your DESCRIPTION essay.

The readability of your text ranges between grade 14.6 and grade 19.6. Good, clear messages generally fall between grades 7-11.

You've used a total of 191 words, with an average sentence length of 31.8 words per sentence. Professional writers aim for between 17-21 words per sentence.

You have 8 forms of the verb "to be." Descriptive writing should try to use concrete action words in place of the weaker "be" verbs. You might consider that in your revision.

You have 2 simple sentences and 3 complex sentences. For good sentence variety the difference between these figures should be less than 20%. While considering sentence variety, note that your sentences start 67% of the time with the subject of the sentence. That percentage will drop as you revise by using more prepositions, adverbs, verbs, or various joining words and phrases for sentence openers.

Adjectives supply descriptive detail for your narrative. Your sample contains 27 adjectives. Try increasing the number of adjectives that describe colour, sound, or size.

Texts differ in the extent to which they refer to concrete objects and abstract ideas. Concrete objects, places, or things can be seen, heard, felt, smelled, or tasted. Abstract ideas, on the other hand, cannot be experienced by our senses. From the results of psychological research, we know that concrete texts are easier to read, easier to use, and easier to remember.

In your sample, 5.2 percent of the words are abstract words, which is a high score. Texts with more than about 2% abstract words are abstract. One way to improve such text would be to add concrete examples to explain the abstract ideas.

Here is a list of your abstract words:
  5 cost
  1 professional
  1 permission
  1 original
  1 development
  1 amount


A nominalization is a noun created from a verb, such as "transformation" or "admittance". Research suggests that people will remember your description more effectively when you cut down the fuzziness of your nominalizations. Where possible, turn your nominalizations back into their original direct verbs. For example, "discussion" could become "discuss":

  His discussion of the term paper was . . . .
  He discussed the term paper . . . .

Here is a list of your nominalizations. You may have used these more than once.


  readability grades--from 14.6 to 19.6

  sentence info:
  average length--31.8 words
    number of simple sentences -- 2 (33%)
    sentences opening with subjects -- 4 (67%)
    word usage: tobe verbs -- 8 (62% of total verbs)
    passives -- 5 (38% as % of non-inf verbs)
    adjectives -- 27 (14.1% of total words)
    nominalizations -- 14 (7% of total)

We are now building two versions of this printout. The longer version presented here will be used by instructors using the results as a prompt to promote subsequent group work and conferencing. Students are expected to read parts of the printout to discover new concepts; however, notice that the content focuses on specific data helpful for later work on the limited domains of concrete, descriptive word choice and sentence variety. Other instructors, though, will have already developed the appropriate concepts in earlier classes with their students. For this reason, our new short form of the description printout will include only the statistics summary and the lists of abstractions and nominalizations. With both versions, instructors are trying to promote revising experiments very early in the writing process.

This curriculum-specific "script" may appear to be a novel idea. But in all fairness, I must credit Charles Smith and Kate Kiefer at Colorado State for their initial work in the early 1980s on their own WRITER'S WORKBENCH scripts. As composition specialists become more familiar with the concepts of software design, perhaps we may begin to see a brisk trade in these types of revision scripts, in the same way as many of us are developing computer-aided prewriting heuristics.

Standing at the brink of a new domain of computer-assisted revision, do we see any pedagogical effects to warrant the time and money we're forever seeking? Perhaps yes. My own exploratory research illustrates that students, with the encouragement of this revision software, seem to pursue in their subsequent drafts deep structure revisions as well as more obvious surface feature changes. Let me present one small experiment that brought me to this conclusion.

For our first-year "basic writers," our descriptive writing goals include descriptive language and also sentence variety. After a first draft, students encounter a few sentence-combining excerpts from O'Hare and Memering's The Writer's World (1980). We show uncombined kernel sentences and compare them to the professional writer's original. In both cases, we include STYLE output for the texts involved.

A specific example follows on the next page.


No two classes of object could be more different. A meteor is a speck of matter. The speck is usually smaller than a grain of sand. The speck of matter burns itself up by friction. It tears through the outer layers of Earth's atmosphere. A comet may be millions of times larger than the entire Earth. A comet may dominate the night sky for weeks on end. A comet may look like a searchlight. The comet is really great. The searchlight shines across the stars. Something is not surprising. Such an object always caused alarm. The object was portentous. It appeared in the heavens. Calpurnia said to Caesar something. 'Beggars die. There are not comets seen. The heavens themselves blaze forth the death of princes.'
(O'Hare and Memering)

Readability -- from grade 4.0 to 7.1
Sentences -- Average length: 6.9; Simple: 89%; Subject openers: 94%


No two classes of object could be more different. A meteor is a speck of matter, usually smaller than a grain of sand, which burns itself up by friction as it tears through the outer layers of Earth's atmosphere. But a comet may be billions of times larger than the entire Earth, and may dominate the night sky for weeks on end. A really great comet may look like a searchlight shining across the stars, and it is not surprising that such a portentous object always caused alarm when it appeared in the heavens. As Calpurnia said to Caesar: "When beggars die, there are no comets seen; the heavens themselves blaze forth the death of princes." (Arthur C. Clarke)

Readability -- from grade 8.5 to 10.0
Sentences -- Average length: 23.2; Simple: 20%; Subject openers: 60%

Of course, at this stage the students also have STYLE feedback on their own first draft of the description assignment. In class, I will remark briefly on the use of "linking" words; the term "linking" is usually adequate for me to avoid such deadly terms as adverbial and relative clauses. The students then work in small groups, looking for similarities to either the kernels or the professional's original in their own work.

The goals for a subsequent second draft include work on concrete language and also work on reducing kernels and increasing the "relationships between ideas and word pictures." I've had the sense that my students' marked relish for this task is somehow related to the STYLE software's ability to act as a concrete monitoring or feedback mechanism. I suspect that somehow the cognitive task of revision is becoming clearer for them, more goal-oriented. Whatever the source, the results are impressive. Here is a before-and-after snapshot typical of what I am finding:

Pat's First Draft

I parked the car and began to hunt around the car lot. The flags that surrounded the lot were clapping in the warm, gentle breeze. I began to admire the new cars that were on display but I wasn't alone for long. A largely built salesman with small beady eyes approached me. He right away tried to sell me a new car but I quickly disappointed him. I told him what I was looking for and he took me to a few cars which did not interest me at all. I was about to turn away when he tried one last time. He took me toward a bright red vehicle. I think this was it. The body was in excellent shape and there weren't very many scratches or dents. It was a 1979 Mustang with a 6-cylinder engine and I was surprised at how clean it was. The interior was black vinyl, very sleek-looking and it had its original AM radio. I took it for a test drive and was quite impressed. This was the car for me.

Readability -- from grade 3.3 to 4.5
Sentences -- Average length: 13.1; Simple: 36%; Subject openers: 93%

Pat's Second Draft

As soon as I parked the car I began to hunt around the car lot. While the flags clapped in the warm, gentle breeze, I admired the new cars that were on display. Suddenly, a largely built salesman with small beady eyes touched my shoulder. Quickly he tried to sell me a new car but I let him know right away that I was not interested. After explaining to him what I did want, he took me toward a few likely suspects. Again, there was nothing which appealed to me. Then, I noticed a bright red vehicle parked in the rear of the lot. Holding back my excitement, I enthusiastically approached the car. It was a 1979 Mustang with a 6-cylinder engine and I was surprised at how clean it was. The exterior was in excellent shape with hardly any scratches or dents. The sleek-looking black vinyl interior impressed me greatly, and it was equipped with its original AM radio. After taking it for a test drive I knew this was the car for me.

Readability -- from grade 5.4 to 6.7

Sentences -- Average length: 14.8; Simple: 42%; Subject openers: 33%

Note the increase in readability and average sentence length, the shift in sentence-type distribution, and the decrease in subject openers.

In conclusion, I would urge you to re-examine style and usage software from two perspectives. First, the new technologies are finally bringing within our financial reach the possibility of a revision paradigm that is more interactive and recursive, more immediately available at any stage of the composing process. And second, we must mould the technology to match our curriculum goals. I remember Ray Rodrigues once remarking how prewriting software was in its infancy, sharing the infancy of invention heuristics. Revision software is in a similar infancy, shared by recent research that is only now beginning to probe the mysteries of the reviser's mental processes.

Randy Smye teaches at Sheridan College, Oakville, Ontario, Canada.


Barber, E. (1986). Technological change and English teaching: A Delphi study of American, British, and Canadian English educators' views of the future of secondary English teaching. Paper presented at the Fourth International Conference on the Teaching of English, Ottawa.

Garrison, R. H. (1981). How a writer works. New York: Harper and Row.

Lanham, R. A. (1979). Revising prose. New York: Charles Scribner's Sons.

Memering, D., & O'Hare, F. (1980). The writer's world: Guide to effective composition. Englewood Cliffs: Prentice-Hall.

Murray, D. H. (1968). A teacher teaches writing. New York: Houghton Mifflin.

Perl, S. (1979). The composing processes of unskilled college writers. Research in the Teaching of English, 13, 317-336.

Pianko, S. (1979). A description of the composing processes of college freshman writers. Research in the Teaching of English, 13, 5-22.

Sommers, N. (1980). Revision strategies of student writers and experienced adult writers. College Composition and Communication, 31, 378-388.