6(2), April 1989, pages 81-109

Analysis of Typical Models for First-Year Writing Courses

James J. Garvey
David H. Lindstrom

Teachers of English composition use professional essays for a variety of purposes. Traditionally, students have been asked to engage the ideas of others, sometimes because those ideas seem inherently valuable, sometimes because they provoke students into incorporating others' opinions into their own essays. Likewise, professional prose has been used to model rhetorical strategies, for example, to provide instances of arguments directed towards various audiences or conceived for various purposes. In recent years, pedagogical interest in writing across the curriculum has focused attention on professional writing that is characteristic of the moves and the evidence appropriate to various communities.

Professional prose plays another role: to model "good writing." Sometimes, of course, essays are consciously selected for such a function; but even when a teacher selects essays for their content, strategy, or discourse community, the writing comes along, too. And unless the writing is specifically criticized for stylistic flaws, it will stand implicitly as a model of good writing. Certainly, most of one's colleagues routinely praise the quality of writing in essays they use in composition courses. We assume, it seems, that our intuitions about "good writing"--whether they derive from theory, experience, or habit--will be transmitted to students through exposure to the experts. Even composition teachers whose pedagogy relies predominantly on the analysis of student prose at least tacitly conceive of growth in student writing as a series of successive approximations to professional quality. There exists, then, a broad consensus that professional prose not only exhibits mature thought and complex development, but also illustrates the stylistic advice commonly offered to first-year writers, advice like "use fewer passives" or "develop greater sentence variety." But only recently have composition teachers begun to examine the features of professional prose systematically and to compare those features with our practical advice, with enunciated handbook standards, and with actual grading preferences.

Since the pioneering work on syntactic maturity by Christensen (1968) and Mellon (1969), several studies have examined the directly observable characteristics of texts, surface features such as vocabulary (Grobe, 1981; Neilsen & Piche, 1981), nominalizations (Gebhard, 1978; Hake & Williams, 1981), and other grammatical variables (Gebhard, 1978). More commonly, however, researchers have assessed relative maturity of style through analysis of surface features as they appear in the T-unit, a string of words containing an independent clause and all of its subordinate clauses. The results of this research are equivocal. While some studies have demonstrated correlations of T-unit length and complexity with holistic scoring (Hunt, 1971), others (particularly Grobe, 1981; Neilsen & Piche, 1981) have suggested that superficial textual details may correlate even more strongly with teachers' evaluations of compositions. Much of this research relies heavily on computer analysis of statistical data which are generated by human labor. [1]
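
Once T-units have been segmented by hand, as in the studies cited, the summary measure itself reduces to simple arithmetic. The sketch below (in Python, a modern stand-in for the tools of the period; the sample strings are our own illustration) computes mean T-unit length from a pre-coded list:

```python
# Mean T-unit length for a manually pre-coded sample.
# A T-unit is an independent clause plus all of its subordinate
# clauses; segmentation is assumed to have been done by hand.
def mean_t_unit_length(t_units):
    """t_units: list of strings, one per hand-segmented T-unit."""
    lengths = [len(t.split()) for t in t_units]
    return sum(lengths) / len(lengths)

sample = [
    "The results of this research are equivocal",
    "some studies show a correlation with holistic scoring",
]
print(mean_t_unit_length(sample))  # 7.5
```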

Currently available computer text editors offer an ideal testing ground for the hypothesis that superficial textual features are useful in the assessment of writing quality. Text editors can recognize many such features--readability levels, numbers and percentages of different parts of speech, grammatical structures, vocabulary, and so forth. These text editors can also identify a writer's use of verbose, sexist, and abstract language, among other expressions. Such information provides grist for the writer's revisionary mill, and it also provides a mass of statistical data for research.
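
The kind of surface recognition such editors perform can be illustrated with a naive pattern-matcher for passives: a form of to be followed by a word ending in -ed or -en. This is our illustrative sketch, not WWB's actual algorithm, and like any purely graphic match it will both over- and under-flag:

```python
import re

# Naive surface pattern in the spirit of graphic-string recognition:
# a be-form followed by a likely past participle.
PASSIVE = re.compile(
    r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b",
    re.IGNORECASE)

def flag_passives(text):
    return PASSIVE.findall(text)

print(flag_passives("The body is embalmed. The lake was remembered fondly."))
# ['is embalmed', 'was remembered']
```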

Colorado State University has one of the most advanced text editors, the WRITER'S WORKBENCH (WWB), originally developed by Bell Laboratories and modified at CSU by Kathleen Kiefer and Charles Smith for use in first-year writing courses. The programs in this system are summarized in Appendix A. WWB displays over 50 variables and has the ability to increase its array as necessary. Like most existing text editors, however, WWB provides only "surface" information; its pattern-matching programs recognize only graphic strings, associating them with syntactic/lexical analyses through various interpretive programs. T-unit analysis, however desirable, is beyond WWB's current capacity and would require, as in most of the research cited above, extensive manual pre-coding. In any case, our present purpose is to assess the utility of the analyses currently within WWB's range.

Most research on composition has concerned itself with the writing of students; professional prose has received little attention. An exception is Hake and Williams (1981) who report on a series of experiments suggesting that junior college teachers, and at least some college teachers, prefer (and write in) a nominal style over a verbal style, even though they enjoin students to favor the verbal mode. Significantly, while Hake and Williams recognize transformational derivations of nominalized forms, they include surface criteria, such as test frames, when they identify such expressions (p. 436).

The possibility of identifying surface features of a text that correlate with evaluative criteria, along with the awareness of the need to relate such criteria realistically to the professional models that composition teachers hold up for emulation, prompts three questions that are central to this study:

  1. What are the characteristics of professional writing as determined by a computer text editor?
  2. How do such characteristics compare with norms employed in writing instruction?
  3. How do such characteristics compare with those of actual student writing?

Our hypotheses about these questions followed from an informal conception of what constitutes pedagogically useful models. First, we predicted that the professional essays would be sufficiently homogeneous (overall and within generic types) to provide students with grounds to form at least intuitive generalizations about a body of prose superior to their own, and, at the same time, diverse enough that students would not be locked into a single style. Second, we predicted that professional prose would, for the most part, adhere to normative advice commonly offered to students in writing courses. Third, we predicted that professional essays would be noticeably different from student essays, more so with weaker student writers and less so with stronger ones.

The first hypothesis is addressed through a set of descriptive statistical data that represents the range of information provided by WRITER'S WORKBENCH. The second is addressed through a comparison between professional texts and norms built into WWB by the CSU English Department to advise students on their essays. The third is addressed through a statistical analysis of differences between professional essays and student essays of different qualitative levels.


Method

We were interested in essays commonly offered to first-year students as models of certain types of writing. Thus, we asked the major publishers of composition texts to identify their most popular readers used in both first-year and rhetoric courses with selections identified by rhetorical type (see Appendix B). We identified the most frequently recurring writers and selected a sample of 30 essays: 10 argumentative, 10 descriptive, and 10 in the general category of expository (definition, process analysis, compare and contrast, classification, causal analysis, example). We excluded narration because of the difficulties raised by dialogue. Individual essays were selected according to the following criteria: (a) repetitions of authors regardless of rhetorical category, (b) repetitions of authors within category, (c) repetitions of titles, (d) sufficient length for statistical sampling (roughly 1000 words).

Not surprisingly, almost all the essays are modern, although Swift's "A Modest Proposal" and the Declaration of Independence also appear. Not surprising, either, are many of the inclusions: old favorites such as Bruce Catton on Grant and Lee; S. I. Hayakawa on reports and inferences; George Orwell on politics and language; Jessica Mitford on embalming; E. B. White on his boyhood lake; along with selections by William F. Buckley, Jr., Joan Didion, Barbara Lawrence, Anne Roiphe, and Richard Selzer. (The complete list of essays appears in Appendix C.)

We entered thousand-word samples of the 30 essays into the word processor (to avoid bias, we took the first thousand words whenever possible), omitting lengthy quotations and extended passages of dialogue. The WRITER'S WORKBENCH programs were run on each sample individually, and the Style subprogram statistics were collated into over 50 variables dealing with readability and sentence variety, sentence structures, parts of speech, vocabulary, and usage. When appropriate, raw numbers were converted to percentages. Characteristics of professional writing were analyzed through an SPSS statistical package: frequency distribution, correlation of variables, and one-way analysis of variance (ANOVA). Comparisons of professional writing with WRITER'S WORKBENCH standards and with student writing were analyzed using t-tests for significant difference.
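
The t statistic underlying these comparisons is straightforward to compute by hand. Since we do not specify here which variant SPSS supplied, the sketch below assumes the unpooled (Welch) form; a p-value would then come from the t distribution (SPSS or tables), which the sketch omits:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Unpooled two-sample t statistic (Welch's form, an assumption);
    a and b are lists of scores for the two groups compared."""
    va, vb = variance(a), variance(b)          # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of difference
    return (mean(a) - mean(b)) / se

t = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
print(round(t, 3))  # -1.897
```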

The student essays used for comparison were a sample of 44 CSU first-year student placement essays, scored holistically and grouped into three categories: high (exemption from composition), middle (regular composition), and low (remedial composition). These essays were entered into and analyzed by WRITER'S WORKBENCH. Reid and Findlay (1986) analyze 25 variables as they relate to holistic scoring. Their data provide the student side of our correlations.


Results

The descriptive data for the professional essays are displayed in Table 1. The means hide some interesting curiosities: A mean Kincaid readability score of 11.55 hides Richard Selzer's low of 2.4 as well as Swift's astronomical 24.4; a mean percentage of passives of 9.97 hides Jessica Mitford's 21% usage. Nevertheless, these data seem typical of professional writing: The mean word length (4.50) of our sample does not differ significantly from the 4.499 (s.d. 0.242) of the Brown University Corpus, and the mean sentence length (25.24) of our sample does not differ significantly from that of the "belles lettres" category of the Brown University Corpus--22.70 with a standard deviation of 13.599 (Kucera & Francis, 1967). Moreover, recent studies agree: Broadhead, Berlin, & Broadhead (1982) report a mean sentence length of 24.9 (s.d. 3.9) for academic writing, and Gebhard (1978) reports a mean sentence length of 24.02 for "contemporary quality magazine articles."
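
The Kincaid score itself derives from sentence and word length. The standard Flesch-Kincaid grade-level formula is sketched below; WWB's exact sentence and syllable heuristics may differ from the crude ones used here, so this is an illustration rather than a reconstruction of the program:

```python
import re

def count_syllables(word):
    # Crude vowel-group count; production readability tools use
    # more careful syllabification rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def kincaid(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

Longer sentences and longer words push the estimated grade level up, which is why Swift's periodic sentences score so much higher than Selzer's clipped ones.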

Such descriptive data allow a range of stylistic statements, such as "Author X uses a smaller vocabulary than author Y," or "Essay Z contains a relatively large number of abstractions." Moreover, these data are potentially normative: If further research confirms these descriptors, teachers can more confidently rely on professional models to support such statements as "Use twice as many short sentences," or "Do not use more than 4.27% non-specific words."

Because we suspected that some of these measures might be redundant, we selected apparently related variables for correlation: the four readability scores and the five vocabulary indices (average word length, average length of content words, percentage of content words, ratio of types to tokens, and ratio of words occurring only once--hapax legomena--to tokens). Ten correlations significant at the .05 level appear (see Table 2). In addition to the four readability scores (in all combinations), the significant correlations are the following: (a) average word length with percentage of content words, (b) average word length with average length of content words, (c) percentage of content words with types/tokens, and (d) types/tokens with hapax legomena/tokens. Thus, while the four readability indices derive from different text bases (school books, technical documents, and so forth), as applied to these typical professional essays they agree with each other to a considerable extent. For this reason, except in the analysis of variations among rhetorical types, we refer only to the Kincaid readability score. We will continue to report all lexical data, although the pattern of correlation is less consistent.
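
The correlations reported here are Pearson product-moment coefficients of the kind SPSS computes; a minimal sketch (with made-up numbers standing in for two measures across the essays) is:

```python
import math
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation of two paired measures, e.g. two
    readability scores observed across the same 30 essays."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 5, 9]), 3))  # 0.965
```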

Our next step was to consider whether the WWB statistics show variations among rhetorical types (see Table 3). One-way analysis of variance reveals no significant difference in average sentence length, sentence structure, distribution of parts of speech, sentence openers, or vocabulary. Sometimes, this is not surprising: Percentage of nouns, for example, here about 24%, is likely to be consistent in the language, regardless of rhetorical type. But it is surprising that adjectival and adverbial usage in description--13.91% and 4.88% respectively--is actually lower than in exposition or argument, though not significantly.

WRITER'S WORKBENCH statistics do show significant differences among rhetorical types in readability scores; average word length; average content word length; length of the shortest sentence; and percentages of nominalizations, inspecifics, and abstractions. In every case, the scores for description are the lowest of the three. Although further statistical analysis is necessary, these results suggest the hypothesis that description is easier to read, more dynamic (verbal rather than nominal), more specific, and more concrete.

The second purpose of our study was to compare professional writing with CSU WORKBENCH standards. These standards are norms that the WWB output encourages students to approximate with such advice as "10% of your sentences are five or more words shorter than average for your document as a whole. Strong documents typically have 25% or more short sentences." As originally developed, WWB came supplied with standards derived from technical memoranda. But as users of the system may specify different standards, CSU tailored the program to first-year composition by deriving standards from 28 superior student essays. Contrary to our hypothesis, the results of t-tests indicate that professional essays overall differ significantly from nearly every CSU WWB standard (see Table 4). While professional "simple-plus-compound" sentences and "complex-plus-compound-complex" sentences closely match the WWB percentages, professional writers have longer sentences, greater variety of sentence length, more passive constructions, fewer nominalizations, and more expletives (there is/are). Most of these differences are huge. Professionals use 60% more short sentences and 76% more long sentences than the norms. Passives and expletives are particularly interesting: Such structures are normally stigmatized in composition classes, but teachers may be ignoring their rhetorical functions. Both, for example, are techniques of foregrounding, of highlighting parts of a sentence. The most striking aspect of the figures on sentence-structural variety is the equivalence between the simple-plus-compound and the complex-plus-compound-complex sentence percentages.

A division of professional essays into rhetorical types yields similar results. Expository and descriptive essays exhibit exactly the same pattern of significant difference from the WORKBENCH standards, except that descriptive essays show a large difference (14.9%) between the percentages of simple and complex sentences, where expository essays show almost none at all (0.7%). Argumentative essays vary from the pattern only by having significantly higher Kincaid readability scores and by showing no significant difference in the percentages of nominalizations.

The third purpose of our study was to compare professional writing with student writing. We ran t-tests for significant difference of professional writing (overall and by rhetorical type) against total, high, middle, and low students. The complete pattern of significant difference is displayed in Table 5. But because there is so little variation in the results between the professionals taken overall and by rhetorical type, the tables that follow display only data for professional overall against students. However, the differences among rhetorical types that seem remarkable are included in our discussion. We have grouped variables into five categories: readability and sentence variety, sentence structures, parts of speech, vocabulary, and usage.

Readability and Sentence Variety

The various readability formulae in WRITER'S WORKBENCH are based on measures of sentence and word length. In general, professional writers score higher on these indices than students, but there are many qualifications (see Table 6). Professionals overall have significantly higher Kincaid scores (11.55 grade level) than do total students and low students but differ insignificantly from both high students and middle students. Professionals use significantly longer words (4.5 letters per word) than weak students do, as well as significantly shorter words than good students do. Professionals have significantly longer sentences (25.24 words per sentence) and a much higher percentage of long sentences (19.73%) than all but the high students. But surprisingly, professionals have a much higher percentage of short sentences (41.27%) than do students at all levels. Thus, while good students approximate professional models most closely in readability, average sentence length, and percentage of long sentences, the use of short sentences differentiates the professional from the student.
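
The short- and long-sentence percentages discussed here follow the Style subprogram's definitions (sentences at least five words shorter, or ten words longer, than the document's average). A sketch of that computation, with the inclusive-threshold convention as our assumption:

```python
def sentence_variety(lengths):
    """Percent of sentences 5+ words shorter and 10+ words longer
    than the document average, mirroring the Style measures.
    The exact threshold convention (<= vs <) is assumed here."""
    avg = sum(lengths) / len(lengths)
    short = sum(1 for n in lengths if n <= avg - 5)
    long_ = sum(1 for n in lengths if n >= avg + 10)
    return 100 * short / len(lengths), 100 * long_ / len(lengths)

pct_short, pct_long = sentence_variety([8, 12, 20, 25, 30, 45])
```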

Sentence Structures

The data on sentence structures indicate a much smaller difference between professional and student writing than we would have supposed (see Table 7). There is no significant difference in the percentage of subject openers, simple sentences, or compound sentences. And while there is some difference in the more complex structures, the picture is far from clear. Professional writers have significantly fewer complex sentences (31.13%) than do students at all levels. Importantly, however, descriptive essays seem to account for all of this difference because neither professional arguments nor expositions differ significantly from the students' at any level (see Table 5). The data on compound-complex sentences are even less clear. Although professional essays have a significantly higher percentage of these structures, they show no significant difference from those of high or low students.

One of the most surprising pieces of data concerns passive voice. Professionals, at 9.97%, use passives significantly more than do total students and more than do middle-range students. But professionals' passive use was not significantly higher than high or low students' use.

Parts of Speech

With few exceptions, professionals differ insignificantly from students at all levels in percentages of conjunctions, pronouns, content words, nouns, and to be verbs (see Table 8). However, whenever differences do occur, the professionals use fewer of these forms than the students use. This finding runs counter to Lunsford (1980), who reports that skilled writers use 75% more nouns than do basic writers (28% vs. 16%).

Three variables concern modification: prepositions, adjectives, and adverbs. Overall, professionals use significantly more prepositions (11.8%) than do students at every level. Similarly, professionals use significantly more adjectives (14.49%) than do total, middle, and low students. In contrast, professionals use fewer adverbs (5.21%) than do middle and low students, and significantly fewer adverbs than do total and high students. Thus, while the percentages of prepositions and adjectives suggest richer modification by professional writers, the percentage of adverbs contradicts this generalization. It is possible that professionals prefer adverbial prepositional phrases to adverbs. Indeed, Gebhard (1978) has noted "the professionals' marked fondness for the [adverbial] prepositional phrase" (p. 221). Unfortunately, because the WWB lumps together adnominal and adverbial prepositional phrases, our data cannot address this hypothesis.


Vocabulary

Vocabulary statistics are not part of the regular Style subprogram. Nevertheless, we suspected that vocabulary would be a useful resource. Freedman and Pringle (1980) tentatively reported a high correlation (.54) between vocabulary range as measured by a "holistic analytic instrument" and a holistic grade (p. 320). Similarly, Grobe (1981) argues that the number of different word types in an essay is a powerful predictor of higher holistic scores. Therefore, we devised a type-token program to give us lists of word types with an indicator of frequency (the number of tokens of each type). This identified the total number of different word types, the number of word types occurring only once (hapax legomena), and the number of word types repeated. Needless to say, we expected professionals to exhibit a wider range of vocabulary.
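
The measures our type-token program produced can be sketched in a few lines; the tokenization rule here (lowercased alphabetic strings) is an assumption of the sketch, not a record of the original program:

```python
from collections import Counter
import re

def type_token_profile(text):
    """Type-token measures: distinct word types, hapax legomena
    (types occurring only once), and repeated types."""
    tokens = re.findall(r"[a-z']+", text.lower())
    freq = Counter(tokens)
    types = len(freq)
    hapax = sum(1 for c in freq.values() if c == 1)
    return {"tokens": len(tokens), "types": types,
            "hapax": hapax, "repeated": types - hapax}

print(type_token_profile("the cat saw the other cat"))
# {'tokens': 6, 'types': 4, 'hapax': 2, 'repeated': 2}
```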

But, surprisingly, professionals use a significantly smaller vocabulary (types/tokens, 44%; repeated types/tokens, 13%) than do students at every level (see Table 9). Moreover, there is no significant difference in hapax legomena/tokens scores (31%), except that professional writers overall, and in argumentative writing, have significantly lower ratings than do high students. Thus, it appears that differences between professionals and students are not uniformly indicated by quantitative differences in the range of vocabulary. Perhaps it is necessary to take into account the relative maturity of lexical choices (see Neilsen & Piche, 1981).


Usage

The WORKBENCH programs that address usage reveal more consistent and predictable differences between professional and student writing (see Table 10). As one would expect, professional writing caused significantly fewer diction flags to appear (1%) than did students' writing. Similarly, professional writers' percentages of inspecifics (4.27%) are significantly lower than those of students at all levels. The data on nominalization and abstraction, however, are not so predictable. Lunsford (1980) has suggested that skilled writers show, among other things, a "relatively high degree of nominalization, . . . abstract diction and complex concepts," and Hake and Williams (1981) confirm a college-level grading bias in favor of a nominal style. The student data in our study seem to reinforce those conclusions.

Thus it is surprising that professional writers' percentage of nominalizations (1.6%) is significantly lower than those for students at all levels and that, except in argumentative essays, professional writers have a significantly lower percentage of abstractions (1.88%) than do total, middle, and high students. The difference between professional writing and low students' writing is significant only in the descriptive essays (see Table 5). Clearly, these variables form an unambiguous set of measures that distinguish professional from student writing.


Discussion

The superficial textual features described by WRITER'S WORKBENCH indeed provide useful information. Although further research on longer samples from a wider body of texts is necessary to confirm our findings, a paradigm of typical professional writing has begun to emerge. While statistical analysis indicates substantial homogeneity, the diversity of individual samples suggests a need for further research into the question of homogeneity within individual writers' prose, within rhetorical types, and, perhaps, in prose from across disciplines as well. Finally, the lack of uniform results in vocabulary measures urges further investigation into the relationship between syntactic and lexical maturity.

In addition to demonstrating the utility of WWB as a research tool, our study has potential pedagogical implications. For example, our data on professional departures from traditionally articulated norms raise questions about the wisdom of ignoring context to condemn such structures as passives and expletives. Thus, the computer provides a valuable check on the stylistic biases of English teachers. More radically, one might question whether traditional catalogs of desirable or stigmatized surface features reflect the ways in which good writers write or the ways in which students should be taught to write. Though our data do not yield conclusive support for such a position, computer analysis of surface features provides teachers with a means of detecting discrepancies between the handbooks and the real world.

Secondly, the irregular pattern of significant difference between professional prose and the writing of students of various levels indicates that the professional models function differently for different students and in different rhetorical contexts. That is, the data yield new awareness of differences among genres and reinforce our sense that students need advice that is flexibly adjusted to their individual proficiencies and purposes.

Finally, our study suggests a new role for professional models in writing courses, namely, that of statistical norm: What was once presented as an intuitive benchmark may now be explicable in quantitative terms. There is some danger here, of course: The teacher who says "Write sentences of 22.6 words each" is as repressively ineffectual as the one who says "Write longer sentences"--maybe more so, as he or she appears to have the weight of statistics on his or her side. But there is potential good here, too. Statistics may not account for everything of value in prose writing; but, by describing certain features of texts, they allow students a new understanding of such subtle qualities as cohesion, clarity, efficiency, and interest. It is imperative that WWB users construct standards carefully and warn students of the dangers of quantitative analysis for another reason as well: The computer is a powerful rhetorical device. The "objectivity" of the statistics is reinforced by the computer's "voice": When it speaks, students listen.


  1. We are indebted to Kenneth Berry, Department of Sociology, Colorado State University, for his advice on statistical analysis.

James J. Garvey and David H. Lindstrom teach in the Department of English at Colorado State University, Fort Collins, Colorado.


References

Broadhead, G. J., Berlin, J. A., & Broadhead, M. M. (1982). Sentence structure in academic prose and its implications for college writing teachers. Research in the Teaching of English, 16(3), 225-240.

Christensen, F. (1968). The problem of defining a mature style. English Journal, 57(4), 572-578.

Freedman, A., & Pringle, I. (1980). Writing in the college years: Some indices of growth. College Composition and Communication, 31(3), 311-324.

Gebhard, A. O. (1978). Writing quality and syntax: A transformational analysis of three prose samples. Research in the Teaching of English, 12(3), 211-231.

Grobe, C. (1981). Syntactic maturity, mechanics, and vocabulary as predictors of quality ratings. Research in the Teaching of English, 15(1), 75-85.

Hake, R. L., & Williams, J. M. (1981). Style and its consequences: Do as I do, not as I say. College English, 43(5), 433-451.

Hunt, K. W. (1971). Grammatical structures written at three grade levels. Urbana, Illinois: National Council of Teachers of English.

Kucera, H., & Francis, W. N. (1967). Computational analysis of present-day American English. Providence: Brown University Press.

Lunsford, A. L. (1980). The content of basic writers' essays. College Composition and Communication, 31(3), 278-290.

Mellon, J. C. (1969). Transformational sentence-combining: A method for enhancing the development of syntactic fluency in English composition. Research Report No. 10. Urbana, Illinois: National Council of Teachers of English.

Neilsen, L., & Piche, G. L. (1981). The influence of headed nominal complexity and lexical choice on teachers' evaluation of writing. Research in the Teaching of English, 15(1), 65-73.

Reid, S., & Findlay, G. (1986). WRITER'S WORKBENCH analysis of holistically scored essays. Computers and Composition, 3(2), 6-32.

Appendix A

The CSU WRITER'S WORKBENCH provides two versions of text analysis, a draft version to aid students in revision and a final version for submission to the instructor. The draft version for students in college composition consists of the following programs.

Organization: Offers an alternate view of organization and coherence by printing the first and last sentence of each paragraph.

Development: A CSU enhancement, compares word count of each paragraph with averages from sample essays. If any paragraph is significantly lower than average, the program, while pointing out that a paragraph may be of any length, suggests checking for development (if body paragraph) or for gracefulness (if introductory or concluding paragraph).

Findbe: Focuses attention on verb choice by capitalizing and underlining all forms of to be in the draft copy of a student's text.

Diction: Capitalizes and encloses with stars and brackets any of 525 wordy, overused, misused, sexist, or inflated words and phrases.

Suggest: Offers possible substitutions or advice about words and phrases flagged by Diction.

Vagueness: A CSU enhancement, gives the percentage of vague words in a text based on a 140-word dictionary prepared from faculty suggestions. If more than 5% of the words in a text are vague, the program prints a list with the recommendation that vague words be reduced to 3% or fewer.

Spell: Prints typographical errors and potential spelling errors.

Check: A CSU enhancement, alerts students to the presence of commonly confused homophones, word pairs, and certain other misused words in the text by listing the words and offering short definitions or references to the Glossary (included in a student manual).

Punctuation: Checks for missing quotation marks or parentheses and suggests changes for incorrect patterns: missing capital letters following periods, commas and periods outside quotation marks, semicolons and colons inside quotation marks, periods incorrectly placed inside or outside parentheses, double marks of punctuation, and the like.

Grammar: Identifies most split infinitives and misuses of a and an.

Prose: Compares values for 10 stylistic criteria in a text with those derived from good texts of the same kind and suggests revision when values exceed +/- one standard deviation from the mean. Prose prints all sentences with passive verbs and all sentences with nominalized forms when values exceed the standard.

Style: Offers a summary of important stylistic and other information. Below is sample output from the standard CSU version of the WORKBENCH:

Sentence information:
average sentence length: 22.1
% of sentences 5 words shorter than average: 26% (8)
% of sentences 10 words longer than average: 13% (4)
Sentence types:
simple 23% (7) complex 52% (16)
compound 16% (5) compound-complex 10% (3)
Verb choice: to be as percent of total: 15% (22) aux 17% (13) inf 4% (3)
passives as % of non-inf verbs: 5% (4)
nominalizations: 1% (6)
Sentence beginnings:
subject openers: noun (4) pron (6) pos (0) adj (2) art (0)
  TOTAL 39%
other openers: prep 16% (5) adv 19% (6) verb 0 (0%)
  sub-conj 16% (5) conj 10% (3)
Other information:
no. sentences: 31; no. words: 686
average word length: 4.30
no. questions: 0; no. imperatives: 0
no. content words: 372 (54.2%); average length: 5.73
word types as % of total:
  prep 11.8% (81) conj 4.2% (29) adv 6.4% (44)
  noun 25.2% (173) adj 14.0% (96) pron 9.3% (64)
  (Kincaid) 10.1 (auto) 9.9 (Coleman-Liau) 8.2
  (Flesch) 8.8 (62.3)

Abstract: Gives the percentage of abstract words in a text based on a 314-word dictionary derived from psycho-linguistic research. If more than 2.3% of the words are abstract, the program prints a list and recommends checking for adequate concrete detail.
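
The mechanics of two of the checks above can be illustrated briefly: the dictionary-percentage checks (Vagueness, Abstract) and the Prose comparison against norms. The word list and norms below are stand-ins for illustration only, not CSU's actual 140-word and 314-word dictionaries or derived standards:

```python
VAGUE = {"thing", "aspect", "situation", "factor"}  # placeholder list

def vague_percentage(text, threshold=5.0):
    """Dictionary-based percentage check, as in Vagueness/Abstract."""
    words = text.lower().split()
    hits = [w for w in words if w in VAGUE]
    pct = 100 * len(hits) / len(words)
    if pct > threshold:
        print("Vague words:", sorted(set(hits)))
    return pct

def prose_flags(values, norms):
    """Prose-style check: flag any criterion more than one standard
    deviation from the norm. norms maps criterion -> (mean, sd)."""
    return [name for name, v in values.items()
            if abs(v - norms[name][0]) > norms[name][1]]

norms = {"passives%": (10.0, 4.0), "avg_sentence_len": (22.0, 5.0)}
print(prose_flags({"passives%": 21.0, "avg_sentence_len": 24.0}, norms))
# ['passives%']
```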

Appendix B
Readers and Rhetoric-Readers

Barnet, Sylvan and Marcia Stubbs. Practical Guide to Writing with Additional Readings. Little, Brown, 1980.
Burt, Forrest and Cleve Want. Invention and Design. Random House, 1978.
Cooley, Thomas. The Norton Sampler. Norton, 1982. 2nd ed.
Decker, Randall. Patterns of Exposition 8. Little, Brown, 1982.
Eschholz, Paul and Alfred Rosa. Subject and Strategy. St. Martin's, 1981.
Ferrell, Wilfred and Nicholas Salerno. Strategies in Prose. Holt, 1978. 4th ed.
Kennedy, X. J. and Dorothy Kennedy. The Bedford Reader. St. Martin's, 1982.
Kirszner, Laurie and Stephen Mandell. Patterns for College Writing. St. Martin's, 1983. 2nd ed.
Levin, Gerald. Short Essays. Harcourt, 1983. 3rd ed.
McCuen, Jo Ray and Anthony Winkler. Readings for Writers. Harcourt, 1980. 3rd ed.
McQuade, Donald and Robert Atwan. Thinking in Writing. Knopf, 1983. 2nd ed.
Rosa, Alfred and Paul Eschholz. Models for Writers. St. Martin's, 1982.
Shugrue, Michael. The Essay. Macmillan, 1981.
Skwire, David. Writing with a Thesis. Holt, 1982. 3rd ed.
Smith, William and Raymond Liedlich. From Thought to Theme. Harcourt, 1983.
Trimmer, Joseph and Maxine Hairston. The Riverside Reader. Houghton, 1981.
Wyrick, Jean. Discovering Ideas. Holt, 1982.

Appendix C
"Typical" Professional Essays

Asimov, Isaac. "The Case Against Man."
Buckley, William. "Capital Punishment."
Cousins, Norman. "The Right to Die."
Jefferson, Thomas. "The Declaration of Independence."
King, Martin Luther. "Letter from Birmingham Jail."
Lawrence, Barbara. "Four-Letter Words Can Hurt You."
Mannes, Marya. "How Do You Know It's Good?"
Orwell, George. "Politics and the English Language."
Roiphe, Anne. "Confessions of a Female Chauvinist Sow."
Swift, Jonathan. "A Modest Proposal."

Cowley, Malcolm. "The Long Furlough."
Didion, Joan. "Rock of Ages."
Dillard, Annie. "A Field of Silence."
Jacobs, Jane. "The Uses of Sidewalks."
Kingston, Maxine Hong. "Photographs of My Parents."
Orwell, George. "A Hanging."
Selzer, Richard. "In the Shadow of the Winch."
Twain, Mark. "Buck Fanshaw's Funeral."
White, E. B. "Once More to the Lake."
Woolf, Virginia. "The Death of the Moth."

Baker, Russell. "From Song to Sound: Bing and Elvis."
Carson, Rachel. "The Grey Beginnings."
Catton, Bruce. "Grant and Lee: A Study in Contrasts."
Eiseley, Loren. "The Bird and the Machine."
Hayakawa, S. I. "Reports, Inferences, Judgments."
Mencken, H. L. "The Satisfaction of Life."
Mitford, Jessica. "To Bid the World Farewell."
Thomas, Lewis. "The Iks."
Thurber, James. "Courtship Through the Ages."
Wolfe, Tom. "Pornoviolence."

Table 1: Frequency Distribution in Professional Essays
Means and (Std. Dev.) Overall and by Rhetorical Type


                         Overall         Argument        Description     Exposition

Kincaid readability      11.55 (3.76)    14.02 (4.44)     9.58 (3.15)    11.04 (2.15)
Auto readability         12.37 (4.62)    15.10 (6.03)    10.53 (3.57)    11.49 (2.60)
Coleman-Liau read.        9.40 (1.65)    10.52 (1.34)     8.33 (1.60)     9.35 (1.32)
Flesch readability       10.62 (2.75)    12.76 (2.62)     8.72 (1.87)    10.38 (2.19)
Avg. sentence length     25.24 (8.39)    29.12 (11.93)   23.05 (6.25)    23.56 (4.50)
Avg. word length          4.50 (0.24)     4.66 (0.22)     4.34 (0.21)     4.49 (0.21)
% content words          53.17 (2.77)    53.85 (2.19)    52.31 (3.02)    53.35 (3.08)
Avg. length content wd.   5.87 (0.38)     6.15 (0.36)     5.62 (0.34)     5.83 (0.27)
% short sentences        41.27 (7.18)    39.20 (7.94)    42.20 (7.91)    42.40 (5.76)
% long sentences         19.73 (6.68)    18.50 (7.55)    19.20 (7.73)    24.00 (4.65)
Length longest sent.     74.17 (30.62)   81.90 (43.71)   75.60 (23.84)   65.00 (19.49)
Length shortest sent.     5.53 (3.86)     8.00 (4.99)     3.80 (1.48)     4.80 (3.19)
% simple sentences       35.07 (14.09)   31.80 (13.20)   40.30 (16.73)   33.10 (11.80)
% complex sentences      31.13 (9.50)    34.20 (12.53)   25.40 (5.02)    33.80 (7.32)
% compound sentences      9.97 (5.88)     6.80 (6.14)    11.90 (6.33)    11.20 (4.08)
% comp-complex sent.     23.63 (14.51)   27.00 (18.93)   22.20 (13.52)   21.70 (10.86)
% to be                  33.67 (7.23)    35.90 (9.07)    31.50 (5.36)    33.60 (6.84)
% to be (auxiliary)      18.23 (6.11)    20.00 (7.30)    15.40 (4.99)    19.30 (5.31)
% to be (infinitive)     14.13 (5.88)    16.80 (7.64)    11.40 (4.17)    14.20 (4.42)
% passives                9.97 (5.48)    12.50 (5.84)     7.80 (4.57)     9.60 (5.42)
% prepositions           11.80 (1.62)    11.84 (1.98)    12.00 (1.54)    11.57 (1.43)
% conjunctions            4.17 (0.91)     4.22 (0.92)     4.40 (1.09)     3.90 (0.70)
% adverbs                 5.21 (1.24)     5.39 (1.60)     4.88 (1.08)     5.35 (1.02)
% nouns                  24.39 (1.85)    24.98 (1.46)    24.59 (1.86)    23.60 (2.08)
% adjectives             14.59 (2.03)    14.67 (0.93)    13.91 (2.23)    15.20 (2.54)
% pronouns                8.40 (2.10)     7.49 (1.97)     8.78 (1.72)     8.93 (2.45)
% nominalizations         1.60 (1.22)     2.60 (1.35)     0.70 (0.48)     1.50 (0.85)
% subject openers        66.63 (12.77)   64.18 (18.86)   70.50 (8.96)    65.30 (7.97)
% preposition openers     8.93 (5.34)     9.20 (5.87)     7.10 (4.12)    10.50 (5.82)
% adverb openers          5.90 (4.08)     5.10 (3.84)     4.90 (4.91)     7.70 (3.06)
% verb openers            1.37 (2.04)     1.80 (2.94)     0.70 (1.16)     1.60 (1.65)
% sub-conj. openers       4.73 (4.84)     6.50 (6.88)     4.10 (3.51)     3.60 (3.24)
% conjunction openers     5.03 (4.00)     5.80 (3.26)     3.70 (3.23)     5.60 (5.23)
% expletives              6.70 (4.76)     5.30 (4.95)     8.90 (4.43)     5.90 (4.53)
% inspecific              4.27 (1.59)     4.31 (1.56)     3.29 (1.21)     5.22 (1.47)
% abstract                1.88 (1.05)     2.37 (0.70)     1.16 (1.07)     2.12 (1.00)
Types/tokens              0.44 (0.03)     0.43 (0.03)     0.45 (0.03)     0.44 (0.04)
Hapax legomena/tokens     0.31 (0.03)     0.30 (0.03)     0.33 (0.03)     0.31 (0.04)
Repeated types/tokens     0.13 (0.01)     0.13 (0.01)     0.12 (0.01)     0.13 (0.01)
Diction flags/words       0.01 (0.004)    0.01 (0.003)    0.01 (0.005)    0.01 (0.004)

Table 2: Correlation of Variables
Professional Essays Overall

                                                        r        p

Kincaid with Auto                                       0.987    0.00001
Kincaid with Coleman-Liau                               0.707    0.00001
Kincaid with Flesch                                     0.894    0.00001
Auto with Coleman-Liau                                  0.629    0.0001
Auto with Flesch                                        0.832    0.00001
Coleman-Liau with Flesch                                0.874    0.0001
Avg. word length with % content words                   0.507    0.00208
Avg. word length with types/tokens                      0.153    0.20959
Avg. word length with hapax legomena/tokens             0.085    0.327
Avg. word length with avg. length content words         0.919    0.00001
% content words with types/tokens                       0.417    0.01906
% content words with hapax legomena/tokens              0.257    0.08547
% content words with avg. length content words          0.191    0.15624
Types/tokens with hapax legomena/tokens                 0.957    0.00001
Types/tokens with avg. length content words             0.002    0.49500
Hapax legomena/tokens with avg. length content words    0.0007   0.4984

Table 3: Discriminators among Rhetorical Types
One-Way Analysis of Variance

                         Overall   Argument   Description   Exposition   F prob.

Kincaid readability      11.55     14.02       9.58         11.04        0.0207
Auto readability         12.37     15.10      10.53         11.49        0.0605
Coleman-Liau read.        9.40     10.52       8.33          9.35        0.0073
Flesch readability       10.62     12.76       8.72         10.38        0.0017
Avg. sentence length     25.24     29.12      23.05         23.56        0.2043
Avg. word length          4.50      4.66       4.34          4.49        0.0076
% content words          53.17     53.85      52.31         53.35        0.4635
Avg. length content wd.   5.87      6.15       5.62          5.83        0.0036
% short sentences        41.27     39.20      42.20         42.40        0.5523
% long sentences         19.73     18.50      19.20         24.00        0.5921
Length longest sent.     74.17     81.90      75.60         65.00        0.4750
Length shortest sent.     5.53      8.00       3.80          4.80        0.0331
% simple sentences       35.07     31.80      40.30         33.10        0.3604
% complex sentences      31.13     34.20      25.40         33.80        0.0593
% compound sentences      9.97      6.80      11.90         11.20        0.1073
% comp-complex sent.     23.63     27.00      22.20         21.70        0.6810
% to be                  33.67     35.90      31.50         33.60        0.4103
% to be (auxiliary)      18.23     20.00      15.40         19.30        0.1963
% to be (infinitive)     14.13     16.80      11.40         14.20        0.1200
% passives                9.97     12.50       7.80          9.60        0.1547
% prepositions           11.80     11.84      12.00         11.57        0.8447
% conjunctions            4.17      4.22       4.40          3.90        0.4767
% adverbs                 5.21      5.39       4.88          5.35        0.0693
% nouns                  24.39     24.98      24.59         23.60        0.2341
% adjectives             14.59     14.67      13.91         15.20        0.3722
% pronouns                8.40      7.49       8.78          8.93        0.2794
% nominalizations         1.60      2.60       0.70          1.50        0.0006
% subject openers        66.63     64.18      70.50         65.30        0.5077
% preposition openers     8.93      9.20       7.10         10.50        0.3685
% adverb openers          5.90      5.10       4.90          7.70        0.2372
% verb openers            1.37      1.80       0.70          1.60        0.4543
% sub-conj. openers       4.73      6.50       4.10          3.60        0.3716
% conjunction openers     5.03      5.80       3.70          5.60        0.4462
% expletives              6.70      5.30       8.90          5.90        0.1970
% inspecific              4.27      4.31       3.29          5.22        0.0188
% abstract                1.88      2.37       1.16          2.12        0.0184
Hapax legomena/tokens     0.31      0.30       0.33          0.31        0.1503
Diction flags/words       0.

Table 4: Patterns of Significant Difference
Professionals Overall and by Rhetorical Type
Against Students: Total, High, Middle, and Low






Kincaid readability       +0 0+ +0++ 0- 000 00+
Avg. sentence length      + 0++ +0+ ++0 ++ +0++
Avg. word length          0- 0+ 000+ 0- -00 -00
% content words           0- 00 0000 0- 000 000
% short sentences         ++ ++ ++++ ++ +++ +++
% long sentences          +0 ++ +0++ +0 +++ +++
% simple sentences        00 00 0000 0+ 000 000
% complex sentences       -- -- 0000 -- --0 000
% compound sentences      0 000 000 000 00 0000
% comp-complex sentences  + 0+0 +0+ 000 00 0000
% to be                   00 0- 0000 -0 0-0 0--
% passives                +0 +0 +0+0 00 000 0+0
% prepositions            ++ ++ ++++ ++ +++ +++
% conjunctions            00 00 0000 00 000 000
% adverbs                 -- 00 0000 -- 000 000
% nouns                   0- 00 0000 00 000 -00
% adjectives              +0 ++ +0++ 00 00+ 0++
% pronouns                00 -0 00-- 00 000 000
% nominalizations         -- -- 0000 -- --- --0
% subject openers         00 00 0000 00 000 000
% inspecific              -- -- ---- -- --- 00-
% abstract                -- -0 0000 -- --0 --0
Types/tokens              -- -- --0- 00 0-- -0-
Hapax legomena/tokens     0 -00 0-0 00 000 000 0
Repeated types/tokens     - --- -0- --- -- ----
Diction flags/words       - --- 000 0-- -- 0-00

Table 5: Significant Differences
Professionals Overall and by Rhetorical Type





                            Students         Overall                  Argument                 Description              Exposition
                            mean (s.d.)      mean (s.d.)     t        mean (s.d.)      t       mean (s.d.)     t        mean (s.d.)     t

Kincaid                     10.785 (1.305)   11.547 (3.761)  1.076    14.020 (4.435)   3.528*   9.580 (3.151)  -1.687   11.040 (2.145)  0.444
Avg. sentence length        17.66 (3.99)     25.243 (8.393)  4.343*   29.120 (11.932)  4.512*  23.050 (6.254)  3.1395*  23.56 (4.504)   3.883*
Avg. length content wds.    5.895 (0.415)    5.865 (0.384)   -0.286   6.152 (0.359)    1.737    5.671 (0.336)  -1.902    5.825 (0.265)  -0.496
% short sentences           25.72 (6.02)     41.267 (7.177)  25.905*  39.20 (7.941)    5.584*  42.20 (7.913)   6.835*   42.40 (5.758)   7.603*
% long sentences            11.20 (4.90)     19.733 (6.680)  5.514*   18.50 (7.546)    3.4897* 19.20 (7.729)   3.784*   24.00 (4.649)   7.181*
% passives                  4.786 (2.859)    9.967 (5.480)   4.466*   12.50 (5.836)    5.472*   7.80 (4.566)   2.429*    9.60 (5.420)   3.5598*
% nominalizations           2.55 (0.81)      1.60 (1.221)    -3.465*  2.60 (1.350)     0.139    0.70 (0.483)   -6.769*   1.50 (0.850)   -3.475*
% expletives                0.071 (0.378)    6.70 (4.757)    7.348*   5.30 (4.945)     5.691*   8.90 (4.433)   10.697*   5.90 (4.533)   6.909*
% simple (-) complex        -4.24 (24.07)    3.933 (19.816)  1.416    -2.40 (19.535)   0.256   14.90 (19.490)  2.258*   -0.70 (17.372)  0.426
% comp (+) comp-complex     24.18 (11.73)    33.60 (13.833)  2.787*   33.80 (17.061)   1.969   34.10 (15.30)   2.117*   32.90 (9.515)   2.110*
% simple (+) compound       46.0             45.033 (16.123)          38.60 (17.753)           52.20 (15.541)           44.30 (13.317)
% complex (+) comp-complex  54.0             54.767 (16.343)          61.20 (18.023)           47.60 (15.629)           55.50 (13.705)

*p < .05

Table 6: Readability and Sentence Variety

                      Professionals    Total Students         High Students          Middle Students        Low Students
                      mean (s.d.)      mean (s.d.)    t       mean (s.d.)    t       mean (s.d.)    t       mean (s.d.)     t

Kincaid readability   11.55 (3.76)     9.74 (2.60)    2.45*   12.23 (2.15)   -0.57    9.64 (2.24)   1.95     8.03 (1.82)    3.41*
Avg. word length       4.50 (0.24)     4.52 (0.30)   -0.32     4.74 (0.30)   -2.60*   4.56 (0.25)  -0.79     4.32 (0.23)    2.39*
Avg. sentence length  25.24 (8.39)    19.32 (4.42)    3.95*   22.87 (3.14)    0.91   18.27 (4.21)   3.28*   17.98 (4.27)    3.14*
% short sentences     41.27 (7.18)    26.73 (9.60)    7.05*   32.09 (5.68)    3.81*  25.00 (9.74)   6.64*   24.87 (10.72)   6.11*
% long sentences      19.73 (6.68)    11.16 (6.02)    5.75*   15.73 (3.95)    1.86    8.78 (6.40)   5.59*   10.67 (5.15)    4.61*

Table 7: Sentence Structures

                      Professionals    Total Students          High Students           Middle Students         Low Students
                      mean (s.d.)      mean (s.d.)     t       mean (s.d.)     t       mean (s.d.)     t       mean (s.d.)     t

% simple sentences    35.07 (14.09)   32.09 (15.02)   0.858   26.55 (14.19)   1.712   34.44 (12.66)   0.154   33.33 (17.94)   0.355
% compound sentences   9.97 (5.88)     8.73 (6.86)    0.808    8.27 (6.96)    0.778    8.67 (6.28)    0.730    9.13 (7.86)    0.400
% complex sentences   31.13 (9.50)    41.50 (12.62)   -3.82*  43.73 (14.99)   -3.20*  41.11 (11.80)   -3.22*  40.33 (12.41)   -2.76*
% compound-complex    23.63 (14.51)   17.75 (8.35)    2.21*   21.73 (8.31)    0.41    16.00 (8.29)    2.04*   16.93 (8.02)    1.88
% subject openers     66.63 (12.77)   64.86 (12.17)   0.602   64.55 (12.32)   0.468   65.22 (13.34)   0.365   54.67 (11.42)   0.504
% passives             9.97 (5.48)     7.25 (4.91)    2.23*    9.18 (5.00)    0.42     5.61 (2.91)    3.11*    7.80 (6.26)    1.19

Table 8: Parts of Speech

                  Professionals    Total Students          High Students           Middle Students         Low Students
                  mean (s.d.)      mean (s.d.)     t       mean (s.d.)     t       mean (s.d.)     t       mean (s.d.)     t

% content words   53.17 (2.77)    53.53 (3.41)    -0.473   55.60 (3.13)    -2.403*  53.24 (3.85)    -0.077  52.34 (2.38)    0.9897
% nouns           24.39 (1.85)    24.58 (3.06)    -0.310   25.96 (2.58)    -2.164*  24.26 (3.32)    0.180   23.97 (2.93)    0.592
% pronouns         8.40 (2.10)     9.21 (2.83)    -1.33     7.26 (2.60)    -1.438    9.92 (2.74)    -2.159*  9.77 (2.58)    -1.914
% adjectives      14.59 (2.03)    12.82 (2.83)    3.16*    13.90 (2.26)    0.94     12.41 (2.77)    3.15*   12.52 (2.51)    2.99*
% adverbs          5.21 (1.24)     5.93 (1.42)    -2.26     6.05 (1.00)    -2.01*    5.80 (1.42)    -1.52    6.00 (1.74)    -1.77
% to be           33.67 (7.23)    37.00 (8.13)    -1.810   37.91 (10.04)   -1.496   33.17 (7.01)    0.235   40.93 (5.95)    -3.39*
% prepositions    11.80 (1.62)     9.80 (1.87)    4.78*     9.83 (1.80)    3.36*     9.74 (2.03)    3.88*    9.85 (1.83)    3.65*
% conjunctions     4.17 (0.91)     4.18 (0.999)   -0.009    4.53 (1.34)    -0.9696   4.11 (0.89)    0.249    4.00 (0.83)    0.618

Table 9: Vocabulary

                        Professionals   Total Students         High Students          Middle Students        Low Students
                        mean (s.d.)     mean (s.d.)    t       mean (s.d.)    t       mean (s.d.)    t       mean (s.d.)    t

Types/tokens            0.44 (0.03)     0.48 (0.05)    -3.11*  0.48 (0.05)    -2.72*  0.47 (0.08)    -2.07*  0.49 (0.05)    -3.95*
Hapax legomena/tokens   0.31 (0.03)     0.32 (0.06)    -0.94   0.35 (0.05)    -2.34*  0.31 (0.07)    0.06    0.33 (0.06)    -0.97
Repeated types/tokens   0.13 (0.01)     0.15 (0.02)    -7.55*  0.14 (0.01)    -3.06*  0.15 (0.01)    -7.86*  0.17 (0.02)    -9.17*

Table 10: Usage

                      Professionals   Total Students         High Students          Middle Students        Low Students
                      mean (s.d.)     mean (s.d.)    t       mean (s.d.)    t       mean (s.d.)    t       mean (s.d.)    t

Diction flags/words   0.01 (0.004)    0.02 (0.008)   -3.09*  0.02 (0.007)   -3.08*  0.02 (0.008)   2.88*   0.02 (0.008)   -2.24*
% nominalizations     1.60 (1.22)     3.05 (1.35)    -4.70*  3.46 (1.21)    -4.32*  3.17 (0.86)    -4.77*  2.60 (1.81)    -2.20*
% inspecific          4.27 (1.59)     6.81 (2.32)    -5.20*  6.26 (2.56)    -2.98*  6.79 (2.59)    -4.19*  7.23 (1.81)    -5.62*
% abstract            1.88 (1.05)     2.66 (0.98)    -3.25*  2.91 (0.67)    -3.04*  2.92 (0.94)    -3.44*  2.16 (1.08)    -0.83