Lix is a readability measure for calculating the difficulty of reading a text, including foreign-language text. The Lix formula was developed by the Swedish scholar Carl-Hugo Björnsson. The LIX readability formula is as follows:

LIX = A/B + (C x 100)/A

where:
A = number of words;
B = number of periods (defined by a period, colon, or capital first letter);
C = number of long words (more than six letters).

Let's Learn about LIX, the Swedish Readability Formula
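As a quick illustration, the A/B/C formula can be sketched in a few lines of Python (a minimal sketch: the function name is hypothetical, and sentence counting is simplified to terminal punctuation marks rather than Björnsson's full rule, which also counts colons and capitalised sentence starts):

```python
import re

def lix(text):
    """Minimal LIX sketch: LIX = A/B + (C * 100) / A."""
    words = re.findall(r"[A-Za-z']+", text)          # A: number of words
    a = len(words)
    b = max(1, len(re.findall(r"[.:!?]+", text)))    # B: sentence marks (simplified)
    c = sum(1 for w in words if len(w) > 6)          # C: long words (> 6 letters)
    return a / b + (c * 100) / a
```

For the toy pair of sentences "The cat sat. The dog ran." this returns 3.0: six words over two sentences, and no long words.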
Scholars have published three major summaries of readability research in the past twenty years. One summary revealed that countries outside the U.S. have put little effort into researching and developing effective readability formulas for foreign-language text. This observation applied not only to English-reading countries but to all countries other than the United States. One academic report, for example, noted that Mexican researchers applied American readability formulas to foreign-language material to assess reading levels.
During the last decade, the United Kingdom Reading Association published a book entitled "Readability." The most recent work on readability, also from the United Kingdom, is Colin Harrison's "Readability in the Classroom." Again, surprisingly, these publications do not address the problem of gauging the readability of non-English materials.
All reading teachers understand that education is becoming increasingly multicultural. The general lack of awareness of foreign-language readability assessment is therefore a serious shortcoming. And yet there is one readability formula, developed in Sweden, which holds promise for assessing text difficulty in many languages, including English.
A few years before the release of Gilliland's book, Readability, Carl-Hugo Björnsson (1968) published a book entitled Läsbarhet (which translates into English as "Readability"). Björnsson's book is not well known outside Sweden (no doubt because few people read Swedish), yet classroom teachers will find his formula useful.
Björnsson developed his new readability formula by selecting 12 features of text (known to contribute to reading difficulty) and measuring each of these across 18 books in each of the 9 levels of the Swedish comprehensive school (i.e., 162 books in all). In the final step, he used the traditional regression approach to relate these measures to the difficulty of the texts as judged by teachers and students.
Björnsson tested each of the 12 readability factors against such requirements as validity and ease of computation. Björnsson, like other researchers before him, found that he could use two factors, a word factor and a sentence factor, to predict readability quite accurately. He called his readability index "Läsbarhetsindex," which, in time, was shortened to Lix.
The word factor in Lix is the familiar word-length variable; however, it is measured differently from most other readability formulas. Rather than counting syllables, polysyllabic words, or unfamiliar words as judged against a word list, Lix gauges word length by the percentage of long words (i.e., words of more than six letters). This makes Lix more objective and quicker to compute than other formulas. The sentence factor adopted (sentence length) is the average number of words per sentence, as in the Flesch, Spache, Fry, and many other measures of readability. Finally, the word and sentence factors are weighted equally, and this, too, contributes to ease of calculation. Lix is defined as follows:

Lix = word length + sentence length

where:
word length = percentage of words of more than six letters;
sentence length = average number of words per sentence.

(Note that this is algebraically identical to the A/B + (C x 100)/A formulation given earlier: sentence length is A/B, and word length is C x 100/A.)
One commentator, in discussing the computation of Lix, noted: "Lix is not only simple to calculate, avoiding as it does mathematical formulae; it differs from the English readability measures in two important ways. Firstly, it bypasses the problem of whether to count monosyllabic words, polysyllabic words or total syllables by including only words beyond a certain length; that it is a measure which ignores the linguistic rules of syllabification suggests that it is potentially useful across languages. Secondly, Lix calculates the percentage of long words from 100-word samples while sentence length is computed from separate 10-sentence samples."

Comparing French and English Texts with Lix
In the first trial of Lix, a graduate student at Flinders University tested Lix with French and English texts in an unpublished report. This was a small independent project undertaken as part of the Diploma in Education. The student selected 10 books of fiction used in upper primary and lower secondary school. The books were selected because a translation was available in the other language: some were English titles for which a French translation existed, while others were French titles for which an English translation existed. The student selected random samples of text from each book and then identified the parallel samples in the translation.
To test Lix's accuracy, the student calculated indices for each title in both French and English. The indices followed Björnsson's recommended procedure of selecting a 2000-word sample (made up of 20 100-word samples) and a separate sample of 200 sentences (20 samples of 10 sentences each).
To compare these estimates of reading difficulty with other accepted measures, the student applied Flesch's Reading Ease formula over the 2000-word sample from each title, in both languages. Flesch's formula was judged appropriate for the levels of difficulty of the texts, at least as far as English was concerned.
The resulting correlations are of interest: Lix indices in French correlated 0.87
with Lix indices in English; while Flesch indices across languages correlated 0.90
. Both correlations suggested that the factors being measured in French and English by these two formulas
were very similar. When Lix and Flesch were correlated over the French books, the correlation was -0.80
, while over the English books it was -0.78
(the negative results exist because difficulty is rated high on Lix
, but low on Flesch
). Because Flesch provides a valid measure of text difficulty, Lix would appear to measure text difficulty similarly.

How to Interpret the Lix Readability Formula
To illustrate how you can apply and interpret Lix in the classroom, let's use two samples: one from the Tax Act and one from Very Bad Day.

Tax Act sample: Where the amount of the annuity derived by the taxpayer during a year of income is more than, or less than, the amount payable for a whole year, the amount to be excluded from the amount so derived is the amount which bears to the amount which, but for this sub-section, would be the amount to be so excluded the same proportion as the amount so derived bears to the amount payable for the whole year.

Very Bad Day sample: I went to sleep with gum in my mouth and now there's gum in my hair and when I got out of bed this morning I tripped on my skateboard and by mistake I dropped my sweater in the sink while the water was running and I could tell it was going to be a terrible, horrible, no good, very bad day. I think I'll move to Australia.

The necessary calculations are as follows:
1. Count, for each text, (a) the total number of words, (b) the number of long words (i.e., words of more than six letters), and (c) the number of sentences.
2. Compute word length (the percentage of long words): divide (b) by (a) and multiply by 100.
3. Compute sentence length (the average length of sentences in words): divide (a) by (c).
4. Add the two values obtained in (2) and (3) and round to the nearest whole number.
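The four numbered steps can be sketched as follows (a hypothetical helper, not from the original article; sentence counting is simplified to terminal punctuation):

```python
import re

def lix_steps(text):
    # Step 1: count (a) total words, (b) long words (> 6 letters), (c) sentences.
    words = re.findall(r"[A-Za-z']+", text)
    a = len(words)
    b = sum(1 for w in words if len(w.replace("'", "")) > 6)
    c = max(1, len(re.findall(r"[.:!?]+", text)))
    # Step 2: word length = percentage of long words.
    word_length = b / a * 100
    # Step 3: sentence length = average number of words per sentence.
    sentence_length = a / c
    # Step 4: add the two values and round to the nearest whole number.
    return round(word_length + sentence_length)
```

Applied to a longer sample, the two components can also be reported separately, which allows the kind of percentage breakdown discussed in the next paragraph.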
These calculations are shown in Table 1.
The calculations above illustrate how easily you can compute Lix. The counts are quite objective and thus ensure inter-coder reliability. First, the equal weighting of the word and sentence factors means that the relative contribution of each factor is readily apparent. In the two sample texts, for instance, sentence length clearly contributed more than word length to the total Lix score (84 percent of Lix for the Income Tax text and 68 percent for the Very Bad Day text). Second, you can easily interpret the word and sentence factors. Thus, the word length of 14.5 (Income Tax) means that about one in seven words is a long word, and the word length of 16.2 (Very Bad Day) means that every sixth word, on average, is a long word. Just as the word-length components of Lix have meaning, so do the sentence-length components. In the Income Tax text, the sentence-length factor indicates that the average sentence length is 76 words, while in the second text it averages 34 words.
Other factors also contribute to reading difficulty, such as 1) factors within the reader; 2) the purpose for which the texts are written; and 3) how readers interact with the text. Lix, like other readability measures, provides only an inexact and incomplete estimate of reading difficulty. Like any other information about a text (e.g., its content, form, and layout), a Lix score should be weighed as just one part of text selection.
Björnsson provided the following table to interpret Lix scores. These norms are for Swedish; currently, there are no similar norms for interpreting Lix scores in other languages, except for French, German, and Greek.

Table: Interpreting Lix Scores (Swedish texts)
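As a rough guide, the interpretation bands commonly attributed to Björnsson for Swedish text can be expressed as a simple lookup. These cut-offs are supplied here as an assumption from the wider Lix literature, not taken from this article's table:

```python
def interpret_lix(score):
    """Map a Lix score to a difficulty band.

    NOTE: these cut-offs are the bands commonly attributed to Björnsson
    for Swedish text; they are an assumption here, not taken from the
    article's own table.
    """
    if score < 30:
        return "very easy"
    if score < 40:
        return "easy"
    if score < 50:
        return "medium"
    if score < 60:
        return "difficult"
    return "very difficult"
```

On these bands, the Income Tax sample (Lix about 91) falls well into "very difficult," while the Very Bad Day sample (Lix about 50) sits near the "medium"/"difficult" boundary.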
The two texts above are very short (one and two sentences, respectively). Do not make generalizations from samples of this size. Nevertheless, the consistencies found here suggest that you can use Lix across other languages and at a variety of levels, from young children's materials through to secondary-level and adult texts.