Alternatives to readability formulas fit into two categories: text-based (like the formulas) and reader/text-based. Both alternatives allow the user to focus attention on factors involved in comprehension.

Text-Based Alternative
A newer text-based method of analysis has the advantage of involving meaning in the estimation of readability. Phrase analysis (PHAN), described by Clark (1980), is a straightforward method which employs the linguist's tool of suggestive analysis to determine the coherence of text passages. The system enables the user to examine the clarity of relationships between ideas within the text. The present writer's experience using PHAN in college classrooms indicates that teachers become efficient in the use of Clark's system after one or two trial applications.

Reader/Text-Based Alternatives
The least complex and most specific method of finding whether a text is appropriate for a reader or group of readers involves trial reading. Select the book for a reader and ask him to read the text aloud. If a group of readers will be using the book, ask several average readers to read samples of the text aloud. You can evaluate the appropriateness of the text according to the following criteria:
- Independent level: 99% word accuracy (90% comprehension)
- Instructional level: 95% word accuracy (75% comprehension)
- Frustration level: 90% word accuracy (50% comprehension)
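The placement criteria above can be sketched as a small decision rule. This is a minimal illustration, not part of the original article; the function name is invented, and it assumes (following informal reading inventory convention) that any performance below the instructional cutoffs falls at the frustration level.

```python
def oral_reading_level(word_accuracy, comprehension):
    """Classify a trial reading by word accuracy (%) and comprehension (%).

    Thresholds follow the criteria listed above; performance below the
    instructional cutoffs is treated as frustration level.
    """
    if word_accuracy >= 99 and comprehension >= 90:
        return "independent"
    if word_accuracy >= 95 and comprehension >= 75:
        return "instructional"
    return "frustration"
```

For example, a reader with 96% word accuracy and 80% comprehension would place at the instructional level.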
Some experts disagree about the exact limits of these levels. Harris and Sipay (1981), for example, allow two or three unknown words at the independent level. Generally, however, the above levels are acceptable.

A second reader/text-based alternative to using readability formulas is teacher judgment. This approach is direct and highly reliable; Dale and Chall (1948) and others (Klare, 1963) have reported that the judgments of panels of teachers correlated on the order of .90 with formula scores. This method is probably best for experienced teachers who are familiar with reading materials at several grade levels. Judge the difficulty of a text by asking:
- Does the text include difficult vocabulary?
- Are difficult ideas or concepts included?
- Are sentences unusually complex or simple?
- Are relationships between concepts or events clearly stated? (A weakness of passages written in short sentences is that such passages frequently omit key relationship words, e.g., because, thus, therefore.)
- Does the text require the reader to interpret graphics, such as pictures, charts, tables, and diagrams?
These factors should help maintain high reliability in teacher estimates of readability.
The third reader/text-based method of estimating readability involves the Cloze Test. The term "cloze" is similar in sound and appearance to "closure." The Cloze Test involves removing certain words from the text, then having readers fill in the resulting blanks with the missing words. Teachers can evaluate readers' filled-in words to assess materials for individuals or groups. The Cloze Test is commonly administered to assess native- and second-language learning and instruction. To develop a cloze measure of readability:
- Select a passage from your text and delete every fifth word, replacing each with a blank space.
- Ask the individual reader to read the passage silently and fill in each blank with the word that seems to belong in that blank.
- Ask average readers at a given level to read a passage silently and fill in each blank.
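The construction step can be sketched in a few lines of Python. This is an illustrative sketch, not part of the article; the function name is invented, and it assumes the passage is plain text with space-separated words.

```python
def make_cloze(text, gap=5):
    """Blank out every `gap`-th word, keeping the originals for scoring.

    Returns the blanked passage and the list of deleted words (the
    answer key) in order of deletion.
    """
    words = text.split()
    answers = []
    for i in range(gap - 1, len(words), gap):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers
```

With the default gap of five, a ten-word passage yields two blanks, at the fifth and tenth words.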
Scoring is based on the percentage of words, filled in by the reader, that match the original text exactly:
- Less than 45% correct equals the frustration level.
- Forty-five percent to 57% equals the instructional level.
- Greater than 57% equals the independent level.
The performance of several average readers from a group will indicate how readable the text will be to the group as a whole.
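The scoring rule above can likewise be sketched as a short function. This is a minimal illustration with an invented name; it applies the article's exact-match requirement, ignoring only surrounding whitespace.

```python
def cloze_level(responses, answers):
    """Score a cloze test and map the percentage to a reading level.

    Exact-match scoring, as specified above; thresholds are
    <45% frustration, 45-57% instructional, >57% independent.
    """
    correct = sum(r.strip() == a.strip() for r, a in zip(responses, answers))
    pct = 100.0 * correct / len(answers)
    if pct < 45:
        return pct, "frustration"
    if pct <= 57:
        return pct, "instructional"
    return pct, "independent"
```

A reader who restores three of four deleted words scores 75%, placing the text at that reader's independent level.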
A systematic and comprehensive reader/text-based method of estimating the readability of texts is the Readability Checklist (Irwin and Davis, 1980). The checklist does not directly involve readers in reading a passage, but it requires the user to consider the match between reader and text characteristics.
Factors of instructional importance are arranged in two categories: understandability and learnability. The understandability section of the scale requires the user to consider 1) the relationship between the information in the text and the reader's prior knowledge; 2) the clarity with which concepts are developed in the text; and 3) the incidence of factors which confuse readers (e.g., irrelevant detail and the absence of explicitly stated connecting words, such as because and therefore). In examining the learnability of the text, the scale focuses the user on organizational and motivational factors. The Irwin-Davis Readability Checklist enables systematic, meaning-oriented examination of a text. Importantly, the checklist involves consideration of factors which are both reader-related and text-related. The checklist requires evaluation of written material in light of knowledge about readers.

In Conclusion
Readability formulas are useful tools for obtaining "ballpark" estimates of readability. They are beneficial when you need to determine the readability of texts, such as library books and periodicals, which individuals will read independently. Readability formulas are not as effective for matching a text to a specific reader or group of readers.
Writers tend to misuse readability formulas on instructional material. In such applications, users must expect that readability formulas cannot accurately score the material, since instruction relies more heavily on the role of the reader (i.e., his experiences, knowledge, and expertise). Science, social studies, and other subjects require the use of specialized, technical vocabulary. The occurrence of such vocabulary artificially increases the number of "hard" (unfamiliar) words, thus inflating readability scores. Because these words are taught during classroom lessons, you should not count them as "hard."
Misuse and misunderstanding of readability formulas have caused publishers and teachers to reject many good published instructional programs. One plausible explanation for the choppy, short-sentence style in many elementary school science and social studies textbooks is that authors compensate for "hard" words by shortening sentences, thus assuring appropriate readability scores. The "catch-22" of this practice is that clarifying connectives (words like because, therefore, and thus) are often deleted to shorten sentences. As Irwin and Davis (1980) pointed out, deleting "relationship words" often makes text more difficult to comprehend. Similarly, Pearson (1974) has shown that shortening sentences does not necessarily make text easier to read.
Readability formulas are useful in matching reading materials to general audiences of some assumed level of reading ability. Formulas, however, are not always appropriate tools for matching texts to readers. The alternatives to formulas suggested here are appropriate when you are matching texts to known readers. The key implication of this article is that if you are concerned with instruction, whether of groups or individuals, then you should rarely use readability formulas. Rather, educators and other users should apply their knowledge of individual reading abilities and text characteristics to plan successful reading experiences.