Document Type: Research Paper

Authors

1 Assistant Professor of Applied Linguistics, Ferdowsi University of Mashhad, Iran

2 Ph.D. Candidate of TEFL, Ferdowsi University of Mashhad, Iran

Abstract

To analyze and evaluate textbooks, researchers have either proposed scales and checklists to be filled in by teachers and learners or conducted qualitative investigations of the match between SLA theories and textbook activities. This study, however, employs the microstructural approach to schema theory to scrutinize the reading passages of “Mosaic 1 Reading”. To this end, 17 passages of the textbook were randomly chosen and their constituting words were explored as semantic, syntactic, and parasyntactic schemata. The passages were also analyzed in terms of their readability indices. The results showed that they consist of 3722 schema types, 2979 (80%) of which are semantic in nature. Although the textbook aims at “academic success” at specified English language “proficiency levels”, it provides no objective definition of what these terms stand for. In terms of readability, the passages vary in difficulty from grade three of primary school to college level. Further, the textbook is discussed in terms of its constituting schemata and its suitability to the Iranian context, and suggestions are made for future research. The findings of this study have important implications for language teaching, testing, and materials development. They show that language proficiency must be defined in terms of schema types and that the bulk of class time must be spent on teaching semantic schemata rather than syntactic and parasyntactic ones. Similarly, for testing the reading comprehension of these passages, the number and type of test items must be based on the percentage of semantic and syntactic schema types, and subjective criteria such as teachers’ intuition or experience must be avoided both in teaching and in testing the comprehension of passages.

INTRODUCTION

Textbooks play a pivotal role in teaching English as a foreign language (EFL). They act as agents of change and serve as useful maps or plans of what is intended and expected (Hutchinson & Torres, 1994), allow learners to take charge of their own learning (Crawford, 2002), assist less experienced teachers, facilitate self-directed learning and give a framework to the presentation of materials (Cunningsworth, 1984), help both teachers and learners overcome unpredictable and potentially threatening situations faced in social events (Reid, 1994), provide essential sources of information (Donoghue, 1992, p. 35), ‘put flesh on the bones’ of … syllabus (Nunan, 1991, p. 208), save teachers some time in facilitating learning activities (O’Neill, 1982), and “serve as the basis for much of the language input learners receive and the language practice that occurs in the classroom” (Richards, 2001, p. 251).

In spite of serving a host of educational objectives, some of which have been outlined above, few textbooks are written and taught by experts and/or teachers who live and teach English in countries such as Thailand and Iran, where learners have virtually no opportunity to live the language. As a result, most teachers employ “western-compiled textbooks [which] project identities disconnected from … learners’ lived experiences, adversely affecting their meaning-making during discursive practices” (Boriboon, 2008, p. 1). For this very reason, these textbooks need to be evaluated so that their applicability to EFL contexts can be determined objectively.

The present study has, therefore, been designed to evaluate “Mosaic 1 Reading” (Wegmann & Knezevic, 2002) as a western-compiled textbook which is employed by some teachers in Iranian tertiary education centers such as Ferdowsi University of Mashhad to offer reading courses to EFL students at the undergraduate level. However, unlike the majority of textbook evaluators, who resort either to interviewing learners and teachers and/or to administering questionnaires and checklists (e.g., Ansary & Babaii, 2002; Boriboon, 2008; Lee & Bathmaker, 2007), the present researchers have employed schema theory as a powerful rationale through which textbooks can not only be evaluated but also taught and tested objectively (e.g., Khodadady, Alavi, & Khaghaninezhad, 2011).

 

LITERATURE REVIEW

The concept of schema was originally put forward by Bartlett (1932, 1958), who proposed that it provides memory with a mental framework for comprehending and retrieving information. Later, Anderson (1977a, 1977b) made an outstanding contribution to schema theory. He believed that comprehension depended on the knowledge of the world held by individuals and referred to schemata as the mental units into which such knowledge is organized. In other words, "in schema theory, individuals organize their world knowledge into categories and systems that make retrieval easier" (Pardo, 2004, p. 274). Consequently, many other researchers used the theory in the same sense (e.g., Carrell, 1987; Mandler, 1984; Rumelhart, 1980; Schank, 1982). Khodadady (1997, 1999) and Khodadady and Herriman (2000), however, used the term schema to demarcate a single or phrasal word which has been used along with other words to produce a specific text. This perspective is commonly known as the microstructural approach to schema theory (MICAST). From this perspective, schema theory explains the comprehension of texts as a process of understanding each and all schemata comprising the texts as they combine to produce the broader cognitive concepts represented by phrases, clauses, sentences, paragraphs and passages.

In fact, a language teacher's responsibility is nothing but to teach schemata in isolation and in combination with each other. According to Wiseman (2008), students need teachers to guide them in developing schemata in order to store and retrieve them accurately and efficiently. Consequently, in schema-based teaching (SBT) (Khodadady & Hesarzadeh, 2014), the English teacher must be highly proficient and qualified to be able to enrich the students' schemata in the language they teach. Since schemata are personalized knowledge (i.e., they vary from individual to individual), specific activities such as brainstorming and previewing are necessary to engage the students cognitively and to activate their schemata before new materials are taught, so that the input (i.e., "words") can become intake (i.e., "schemata") in the process of learning the materials developed for teaching.

A number of studies have established the superiority of SBT over translation-based instruction (TBI) at schools and universities (e.g., Khodadady, Alavi, & Khaghaninezhad, 2011; Khodadady, Alavi, Pishghadam, & Khaghaninezhad, 2012; Khodadady & Elahi, 2012; Khodadady & Hesarzadeh, 2014). A case in point is a notable study carried out by Khodadady, Alavi, and Khaghaninezhad (2012) in which they introduced SBT as a language teaching approach, believing that it can "revolutionize the outcomes of foreign language teaching activities" (p. 65). They asserted that it is necessary for language learners to understand how the words comprising a given authentic text are related to each other internally and dynamically, pointing out that "any slight modification in the lexical network of a text may result in a huge distortion in comprehension" (p. 65). SBT is therefore based on the premise that, to comprehend a given text best, learners should learn what each and all schemata in the text stand for by themselves and in combination with each other. By doing so, they will learn English significantly better than those who are taught via TBI (Khodadady & Elahi, 2012).

Furthermore, it is agreed that a teacher is tasked with selecting an appropriate textbook for a class (Chen & Chen, 2001). One way to help teachers with this task is the analysis and evaluation of textbooks. "Coursebook analyses and evaluation do not only help teachers to develop themselves, but also help them to gain good and useful insights into the nature of the material" (Tok, 2010, p. 510). Another area in which the schema-based approach can be utilized is the analysis and evaluation of the reading comprehension passages comprising different textbooks. As Rixon (2007) convincingly argued, the reading skill very often does not receive any in-depth analysis (Hughes, 2013). Comprehension is a complex, higher-level skill which is critically important to the development of students' reading, and critical to comprehension is vocabulary development (Gagen, 2007). Many scholars have, therefore, accentuated the strong relationship between vocabulary knowledge and reading comprehension (e.g., Hart & Risley, 1995; Hirsch, 2003; Nation, 2001).

Maley (2013) considered two main aspects of vocabulary: static and dynamic. Static meanings are found in dictionaries regulated by authorities. They are denotative and isolated meanings, and they are conventionalized, predictable, and impersonal/generalized concepts in which the core meaning of the word is taken into consideration. Dynamic meanings, on the other hand, are found in actual use negotiated between users. They include connotative meanings formed by context. They are also creative and extended meanings, which are unpredictable and personal/particularized. Khodadady (2008) agreed with Maley that the words presented in dictionaries are static. However, he argued that they become dynamic, or schemata, as soon as they are used to develop texts. In other words, schemata are learners’ personally acquired knowledge of words as they are activated and related to each other within the linguistic contexts of sentences, paragraphs and passages.

Schema theory has also been applied to reading from a different perspective: the macrostructural approach (MACAST). Almost all scholars follow the MACAST and view schemata as "the structural patterns of various texts such as narratives and expository ones" (e.g., McNeil, 1987; Poplin, 1988), providing researchers with no objective units or procedure to explore their psychological reality. In the MICAST, however, "all the words and phrases constituting authentic texts are regarded as schemata" (Khodadady & Khosravany, 2014, p. 49), which are categorized into three main linguistic domains (semantic, syntactic, and parasyntactic), 16 genera, and 122 species. For instance, the schema "shocking" belongs to the semantic domain, adjective genus, and agentive adjective species. Furthermore, in this approach, the frequency of occurrence of each schema is taken into consideration as well. Frequency is important because a word is gradually acquired as a result of numerous encounters with it at different times and cannot be learned through just one encounter, even if the word is taught explicitly (Nation, 2001). Therefore, MICAST-based textual analysis and evaluation is precise as well as objective and accounts for the comprehension of a text both in detail and as a whole.

A comprehensive review of the literature on text evaluation shows that the majority of studies, if not all, are unsubstantiated and subjective, largely because almost none of them have taken the reading comprehension passages themselves into account. While the MICAST scrutinizes the "texts" to provide materials developers and language teachers with clear and systematic procedures of codification on which to base their evaluation and teaching, the advocates of the MACAST employ questionnaires or checklists to be filled in by teachers or students (e.g., Miekley, 2005; Mukundan, Nimehchisalem, & Hajimohammadi, 2011; Razmjoo, 2010; Razmjoo & Jozaghi, 2010; Tok, 2010; Williams, 1983). For example, considering the application of a number of research findings from the SLA literature to materials development, Tomlinson (2013) questioned the effectiveness of many textbook materials. He identified 10 SLA theories, i.e., "1) rich and meaningful exposure, 2) affective engagement, 3) cognitive engagement, 4) utilization of the resources of the brain, 5) focus on meaning, 6) noticing, 7) opportunities for use, 8) opportunities for interaction, 9) making use of non-linguistic communication, and 10) catering for the individual" (p. 16). Further, he investigated the match between SLA theory and the activities of six coursebooks currently in use (i.e., English Unlimited Intermediate, face2face Upper Intermediate, Global, Just Right Intermediate, Outcomes, and Speakout). Finally, he claimed that "none of the course books focus on meaning, that they are all forms-focused and that the majority of their activities are language item practice activities" (p. 16).

In another MACAST-based study of reading comprehension coursebooks for teenagers and adults, Maley and Prowse (2013) investigated reading skills books and graded reader series published over the last 10 years. Drawing on selected lessons taken from these coursebooks to exemplify the most common approach in each book, they concluded that the examined reading texts were often "written pre-texts for grammar exploitation rather than for the development of reading skills" (p. 174). Additionally, describing different parts of sample lessons from the books under investigation, which were all at upper-intermediate level and contained authentic or adapted reading texts for adults, they maintained that most of the sample texts were mainly developed for structural and lexical language practice or as writing models rather than for reading skills development. Finally, they suggested an integrated skills approach to reading materials development in which the focus is on understanding the texts and the words that constitute them.

Likewise, in a qualitative study, Zabihi and Pordel (2011) evaluated the effectiveness of three reading textbooks well known worldwide (i.e., Select Readings: Upper-intermediate, Active Skills for Reading: Book 4, and Mosaic 2 Reading). Utilizing a checklist, they investigated the extent to which the reading passages and the exercises that precede and follow them promote critical reading. They argued that autonomy and engagement are necessary for the development of critical reading, which can be enhanced through strategy-based as well as task-based instruction. Based on their findings, they assessed the three reading textbooks against three criteria: "Critical thinking items, the use of appropriate tasks, and strategic instruction" (p. 80). Accordingly, they indicated that these three textbooks "meet the first criterion to some extent, but seriously lack the last two ones" (p. 80) without focusing on any particular passages.

This study has, however, adopted the MICAST for the first time to examine randomly selected reading passages from a well-known textbook, i.e., "Mosaic 1 Reading" (Wegmann & Knezevic, 2002), presumably written at the intermediate/high-intermediate proficiency level. Adopting the elaborate procedure followed by Khodadady and Khosravany (2014), it scrutinizes the structures of the passages in terms of their constituting semantic, syntactic and parasyntactic schema domains, genera and species. It then focuses on the readability of the passages from both traditional and schema-based perspectives and finally discusses the suitability of teaching the textbook to Iranian undergraduate students.

 

PURPOSE OF THE STUDY

The current study, adopting the microstructural approach to textual analysis of pedagogical materials, aimed to answer the following research questions: 

1. Are the 17 passages of M1R selected or modified based on readability indices?

2. What percentage of schema types constitutes the semantic, syntactic, and parasyntactic domains brought up in the 17 passages of M1R?

3. Does the number of schema types forming the semantic, syntactic, and parasyntactic domains of the 17 selected passages presented in the M1R differ significantly?

 

METHOD

Materials

For the purpose of this study, the textbook "Mosaic 1 Reading" (Wegmann & Knezevic, 2002) [henceforth M1R] was evaluated by employing the MICAST. It is mainly designed to prepare students for academic content at the intermediate/high-intermediate level of language proficiency and is widely taught at different universities in Iran to undergraduate students of English. It is composed of 33 authentic reading comprehension passages organized into 12 chapters on various lively and engaging topics such as health and leisure, money matters, remarkable individuals, and creativity, each followed by exercises that aim to help the improvement of this important skill in terms of vocabulary development, reading skills, critical thinking skills/culture and testing. Since the MICAST analysis of all the reading selections of the book would make a strong demand on the researchers' time and effort, 17 passages were chosen randomly to evaluate its content (see Table 1).
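Purely for illustration, the random selection of 17 of the 33 passages could have been carried out as in the minimal sketch below; the fixed seed and the sampling call are assumptions added for reproducibility, not the authors' actual procedure.

import random

random.seed(0)                      # fixed seed only so the example is reproducible
passages = list(range(1, 34))       # the 33 reading passages of M1R
selected = sorted(random.sample(passages, 17))
print(selected)                     # 17 randomly chosen passage numbers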

 

Data Collection Procedure

Following Khodadady and Khosravany (2014), M1R was treated as a linguistic text whose authors had employed certain words to create a language of their own. The words were, therefore, analyzed in terms not only of their meanings but also of the specific places they had assumed in combination with each other to produce the larger linguistic units of sentences and paragraphs. The analysis was conducted by utilizing 122 codes Khodadady and Lagzian (2013) developed to provide researchers with a theoretically sound and epistemologically objective method to study texts.

Each code consists of four digits which reflect the linguistic features of each and every word in M1R: the first (leftmost) digit specifies the first and broadest category, the domain; the second digit specifies the genus; and the third and fourth digits specify the species. For example, the words “man”, “the”, and “not” are types of the semantic (1), syntactic (2) and parasyntactic (3) domains whose genera indicate their being a noun (3), determiner (2) and para-adverb (5), respectively. They are further refined by the species digits, which indicate that “man” is a simple noun (1380), “the” precedes a noun to specify it (2270), and “not” follows a verb to negate it (3518). Thus, each code allows researchers to account for a large number of data whose validity had previously remained unexplored. (All the species comprising the 17 passages of M1R, along with their codes, are given in the Appendix.)
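To illustrate how such four-digit codes can be handled computationally, the following minimal sketch (not the authors' software) decomposes a code into its domain, genus and species digits; only the three domain labels and the three example words discussed above are included, since the full inventory of 16 genera and 122 species is given in the Appendix.

DOMAINS = {1: "semantic", 2: "syntactic", 3: "parasyntactic"}

# Illustrative subset of the code table taken from the examples above
EXAMPLES = {
    "man": 1380,   # semantic > noun > simple noun
    "the": 2270,   # syntactic > determiner > specifying determiner
    "not": 3518,   # parasyntactic > para-adverb > negation
}

def decompose(code):
    """Split a four-digit schema code into its domain, genus and species digits."""
    domain = code // 1000           # first (leftmost) digit
    genus = (code // 100) % 10      # second digit
    species = code % 100            # third and fourth digits
    return domain, genus, species

for word, code in EXAMPLES.items():
    d, g, s = decompose(code)
    print(f"{word}: {code} -> domain {d} ({DOMAINS[d]}), genus {g}, species {s:02d}")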

Since each code specifies the semantic and linguistic features of given words such as “the”, “not”, and “man” within given texts, Khodadady (1997) used the term schema to render each and every word writer-specific. The codification of all the words/schemata comprising the 17 passages of M1R, for example, showed that they consist of 3722 schema types, among which “the”, “not”, and “man” were the most frequent because they had 827, 69, and 57 tokens, respectively. While the word “man” has a fixed meaning in a dictionary, it becomes a schema when the readers of M1R encounter it in various contexts, some of which will be brought up in the Discussion section shortly.

In addition to counting a specific schema type such as “man” to obtain its tokens, some inflectional morphs, i.e., “the actual forms used to realize morphemes” (Yule, 2010, p. 71), were treated as semantically redundant in order to determine the type of a specific schema. The schemata carrying the plural morphs “s” and “es” as well as those carrying the possessive morph “’s” were treated as tokens of the schema lacking these morphs. The words “man”, “men” and “man’s” were thus given the same code (i.e., 1380) and counted as tokens of the schema type “man”. The same procedure was followed by Khodadady and Lagzian (2013) to study an English dentistry textbook and its Persian translation.
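The type and token counts just described can be illustrated with a short sketch. Treating the plural morphs "s" and "es", the possessive "'s" and the irregular plural "men" as redundant follows the description above; the crude normalizer itself is only an assumption for illustration, since the actual coding was carried out manually against the 122-species table.

from collections import Counter

IRREGULAR = {"men": "man"}                      # irregular plural mapped to its schema type

def normalize(word):
    """Strip punctuation and semantically redundant inflectional morphs."""
    w = word.lower().strip(".,;:!?\"'")
    if w.endswith("'s"):                        # possessive morph
        w = w[:-2]
    if w in IRREGULAR:
        return IRREGULAR[w]
    if w.endswith("es") and len(w) > 4:         # plural morphs (very rough heuristic)
        w = w[:-2]
    elif w.endswith("s") and len(w) > 3:
        w = w[:-1]
    return w

sample = "The old man sat near two men while the man's dog slept."
counts = Counter(normalize(t) for t in sample.split())
print(counts["man"])    # "man", "men" and "man's" are counted as tokens of one type -> 3
print(len(counts))      # number of schema types in the sample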

 

Data Analysis

First, Microsoft Word was used to estimate the readability level of the 17 passages chosen from M1R by employing the Flesch Reading Ease score (Flesch, 1948) and the Flesch-Kincaid Grade Level score. Then, in order to find out whether there was a significant difference among the three schema domains constituting the 17 selected reading passages of the textbook, a chi-square test was utilized. Moreover, crosstabulation statistics were applied to the data to explore the difference in the number of genera which constitute the semantic, syntactic and parasyntactic domain schema types and tokens of the textbook. IBM SPSS Statistics 21 was used for the statistical analyses and answering the research questions.
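For readers who wish to approximate these readability estimates, the sketch below applies the standard published formulas for the two indices. The regular-expression tokenizer and the syllable counter are rough heuristics assumed here for illustration (Microsoft Word uses its own counters), so the resulting scores will only approximate those reported in Table 1.

import re

def count_syllables(word):
    """Very rough syllable estimate: count groups of adjacent vowel letters."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / sentences                  # average sentence length
    asw = syllables / len(words)                  # average syllables per word
    fre = 206.835 - 1.015 * asl - 84.6 * asw      # Flesch (1948) Reading Ease
    fkgl = 0.39 * asl + 11.8 * asw - 15.59        # Flesch-Kincaid Grade Level
    return round(fre, 1), round(fkgl, 1)

print(readability("It was late and everyone had left the cafe except an old man who sat in the shadow."))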

 

RESULTS

Table 1 presents the readability level of the 17 reading passages of M1R as determined by the Flesch Reading Ease score. This index rates texts on a 100-point scale on which higher scores indicate passages that are easier to understand and lower scores indicate materials that are more difficult to read; texts that fall within a score of 60 to 70 are interpreted as standard. As can be seen, the difficulty level of the passages ranges from difficult (i.e., 42.9, 42.6, 49.0, 36.5, and 39.8) to very easy (i.e., 91.6 and 100). These results answer the first question negatively and show that no readability indices were employed to select the 17 passages of M1R.

 

Table 1: Readability level of the 17 passages of "Mosaic 1 Reading"

Chapter   Part   Title                                                           Flesch Reading Ease   Flesch-Kincaid Grade Level
3         1      Who's Taking Care of the Children?                              52.0                  10.4
3         2      70 Brides for 7 Foreigners                                      51.4                  9.8
5         2      Tracks to the Future                                            42.9                  12.8
6         1      Executive Takes Chance on Pizza, Transforms Spain               42.6                  12.3
7         2      Beating the Odds                                                49.0                  11.6
7         3      Courage Begins with One Voice                                   55.7                  10.3
8         2      If You Invent the Story, You're the First to See How It Ends    77.1                  6.0
8         3      We Can't Just Sit Back and Hope                                 70.2                  7.1
9         1      Ethnocentrism                                                   60.2                  9.9
9         2      A Clean, Well-Lighted Place                                     91.6                  2.6
9         3      The Spell of the Yukon                                          100.0                 0.3
10        1      Soapy Smith                                                     56.0                  10.6
10        2      Eye Witness                                                     88.9                  2.9
10        3      Born Bad?                                                       36.5                  13.1
11        1      Touch the Earth: The Meaning of the Circle                      70.4                  8.3
11        3      Down the Drain: The Coming World Water Crisis                   39.8                  13.3
12        5      Inaugural Address                                               60.9                  11.4

 

Table 1 above also provides Flesch-Kincaid grade level scores for each text corresponding with the appropriate US grade level.  As can be seen, the passages are very heterogeneous because they suit students coming from different grades, indicating that the authors of M1R did not base their selection of teaching materials on any objective measures of comprehensibility. Passage 10 (A Clean, Well-Lighted Place), for example, is suitable for grade three primary school students while passage 16 (Down the Drain: The Coming World Water Crisis) requires the ability to read textbooks written for college students.

Table 2 presents the descriptive statistics of the genus types and tokens comprising the 17 passages of M1R. The results answer the second question and show that while almost the same percentage of semantic (44.2%) and syntactic (44.3%) schema tokens constitute the passages, only 11.5% of the tokens are parasyntactic in domain. Nevertheless, the percentages of the domains differ noticeably from each other when their types are taken into account, i.e., semantic schema types form 80% of the texts while syntactic and parasyntactic schema types constitute only 6.3% and 13.7%, respectively.

 

Table 2: Descriptive statistics of schema genus "types" and "tokens" forming the 17 passages of M1R

Domain          Genus              Types    Type %   Domain type %   Tokens   Token %   Domain token %
Semantic        Nouns               1280     34.4                      3241     22.7
                Verbs                964     25.9                      1792     12.5
                Adjectives           615     16.5                      1023      7.1
                Adverbs              120      3.2            80.0       266      1.9             44.2
Syntactic       Determiners           55      1.5                      1912     13.4
                Conjunctions          22      0.6                       728      5.1
                Prepositions          46      1.2                      1640     11.5
                Pronouns              70      1.9                      1318      9.2
                Syntactic verbs       42      1.1             6.3       733      5.1             44.3
Parasyntactic   Abbreviations         44      1.2                       192      1.3
                Names                234      6.3                       469      3.3
                Numerals             120      3.2                       203      1.4
                Para-adverbs         107      2.9                       586      4.1
                Particles              1      0.0                       188      1.3
                Symbols                2      0.1            13.7        11      0.1             11.5
Total                               3722    100.0           100.0     14302    100.0            100.0

 

Table 3 presents the statistics of the chi-square test run on the schema types forming the semantic, syntactic and parasyntactic domains covered by the 17 passages of M1R. As can be seen, the number of semantic schema types (2979) differs noticeably from that of the syntactic (235) and parasyntactic (508) types. The test showed that the difference in the number of semantic, syntactic and parasyntactic schema types is significant, i.e., χ2 (2, N = 3722) = 3683.48, p = .001, answering the third question positively and showing that the three domains have psychological reality for textbooks employed to teach EFL.

 

Table 3: Chi-square test of Mosaic 1 Reading schema "type" domains

Domain            Observed N   Expected N   Residual   Tests
1 Semantic              2979       1240.7     1738.3   χ2 = 3683.48
2 Syntactic              235       1240.7    -1005.7   df = 2
3 Parasyntactic          508       1240.7     -732.7   Asymp. Sig. = .001
Total                   3722

 

 

 

 
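The goodness-of-fit test reported in Table 3 can be reproduced from the schema-type counts alone, as the minimal sketch below shows; SciPy is assumed to be available here and is not the software used in the study (the authors used IBM SPSS Statistics 21).

from scipy.stats import chisquare

observed = [2979, 235, 508]                # semantic, syntactic and parasyntactic schema types
statistic, p_value = chisquare(observed)   # expected counts default to 3722/3 = 1240.7
print(round(statistic, 2))                 # about 3683.48, as reported in Table 3
print(p_value)                             # vanishingly small, i.e., well below .001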

DISCUSSION

Mosaic 1 Reading (M1R) is a textbook developed based on the subjective macrostructural approach to schema theory (MACAST). Wegmann and Knezevic (2002) ask English language instructors to teach M1R so that they can boost their students’ “academic success” (p. vi). Their nonacademic approach to convincing the teachers starts in an unspecified section whose heading reads “Mosaic 1 Reading”, where they claim that “Interactions Mosaic, 4th edition is the newly revised five-level, four skill comprehensive EFL series designed to prepare students for academic content [emphasis added]. The themes are integrated across proficiency levels and the levels are articulated across skill strands” (p. vi).

Unfortunately, many authors resort to the MACAST to justify their subjective arguments and unsubstantiated claims. One, for example, expects to read some explanation of what “Mosaic 1 Reading” (p. vi) is about. What one finds, however, is nothing but advertising for “Interactions Mosaic, 4th edition” as a five-level series presenting allegedly integrated themes “across proficiency levels”. It is left to bewildered teachers such as the present researchers to find out what Wegmann and Knezevic (2002) meant by the term “proficiency levels”. Furthermore, similar to other advocates of the MACAST, the authors of M1R employ ESL and EFL interchangeably. Research findings, however, show that the factors or genera underlying many cognitive domains such as emotional intelligence (e.g., Khodadady & Tabriz, 2012), personality (e.g., Khodadady & Mokhtary, 2014) and religious orientation (Khodadady & Saadi, 2015), to name a few, vary from ESL to EFL contexts.

       The adoption of MACAST as a theory of materials development seems to have led Wegmann and Knezevic (2002) to make other unsubstantiated claims. In part 1 of chapter 1 in the M1R, for example, they introduce the reading passage “Living in the USA” by saying that “the following article probably contains a number of words you do not know” (p. 2). They justify their readers’ unfamiliarity with the words by saying that “This is not surprising. Linguists tell us that, for historical reasons, English has a larger vocabulary than any other known language” (p. 2). More surprising than the claim is Wegmann and Knezevic’s solution stated as step 2, “Read the article for the main ideas. Skip words and phrases you do not understand. Do not slow yourself down by looking up words in a dictionary” (p. 2). Research findings do not, however, support Wegmann and Knezevic’s suggestion.

Based on Khodadady and Herriman’s (1996) findings, Khodadady (2000), for example, administered the reading comprehension subtest of TOEFL 91 to 22 non-native speakers (NNSs) of English and asked them to underline the words whose meaning they neither knew nor could guess from their context. Out of 90 unknown words, he chose the 30 most frequently underlined words and developed a multiple-choice item test (MCIT) called the contextual vocabulary test (CVT). Khodadady administered the CVT along with the MCIT reading comprehension test upon which the CVT was developed. He also administered a TOEFL vocabulary test measuring test takers’ global vocabulary knowledge (GVT) to 123 native speakers (NSs) and NNSs. When he correlated the three tests, the results showed that the CVT correlated more highly than the GVT with the reading comprehension test, indicating that “the contextual vocabulary knowledge of both NSs and NNSs is the best predictor of their reading comprehension ability” (p. 200).

Khodadady’s (2000) findings are in line with those of the present study, which provides an objective, theory-driven approach to textual analysis through which materials developed for English language teaching can be analyzed objectively. Since the MICAST followed in this study focuses on the meanings as well as the linguistic functions of words as they combine with each other to produce the sentences and paragraphs of texts, it is far superior to the Flesch Reading Ease score (Flesch, 1948), which is based on average sentence length (the number of words divided by the number of sentences) and the average number of syllables per word (the number of syllables divided by the number of words). Perhaps the dependence of the score on sentence and word length has contributed to Wegmann and Knezevic’s (2002) reluctance to make the passages of M1R homogeneous in terms of their readability level.

Instead of viewing words in terms of their length, the MICAST approaches them as representatives of specific concepts whose comprehension in isolation and in combination with each other brings about the understanding of texts such as M1R (Khodadady, 1997). They must, therefore, be used as the basic and most important units of teaching (Khodadady, Alavi, & Khaghaninezhad, 2012; Khodadady & Elahi, 2012; Khodadady & Hesarzadeh, 2014), translation (Seif & Khodadady, 2003) and evaluation of translated texts (Khodadady & Lagzian, 2013). The findings of this study show that schemata should also be employed to evaluate materials developed for teaching EFL.

The results of this study, for example, show that 80%, 6.3% and 13.7% of the schema types comprising the 17 passages of M1R are semantic, syntactic and parasyntactic in domain, respectively. Since the syntactic and parasyntactic schema types reflect the English structure as they connect the semantic schema types together to produce the broader concepts, called cognitive species and genera, represented by the linguistic units of sentences and paragraphs, respectively, dividing their sum (6.3% + 13.7% = 20%) by the percentage of semantic schema types (20/80 = 0.25) provides the most accurate index of M1R comprehensibility. As an index of text comprehensibility, 0.25 is indicative of a very high difficulty level compared to the materials taught at beginner levels in Iran.

Khodadady and Hesarzadeh (2014), for example, reported 56.7%, 17.3% and 26% for the semantic, syntactic and parasyntactic schema types comprising the passages of the “English Book 2” (Birjandi & Soheili, 2009a) and “English Book 3” (Birjandi & Soheili, 2009b) taught in Iranian junior high schools. Adding up 17.3 and 26 (43.3) and dividing the result by 56.7 yields 0.76, indicating that the two textbooks are much easier in terms of their comprehensibility than M1R. This is because almost half of the schemata in the former textbooks are ones whose main function is to teach the students how to express themselves by resorting to specific semantic schema types combined within certain sentences to represent concepts broader than the schema, called species in the MICAST.
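The comprehensibility index used in the two preceding paragraphs reduces to a single ratio of schema-type percentages, as the short sketch below shows with the figures reported for M1R in this study and for the two junior high school textbooks in Khodadady and Hesarzadeh (2014).

def comprehensibility(semantic, syntactic, parasyntactic):
    """Share of syntactic plus parasyntactic schema types relative to semantic ones."""
    return round((syntactic + parasyntactic) / semantic, 2)

print(comprehensibility(80.0, 6.3, 13.7))     # Mosaic 1 Reading        -> 0.25
print(comprehensibility(56.7, 17.3, 26.0))    # English Books 2 and 3   -> 0.76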

Finally, the suitability of teaching M1R in Iran as an EFL context is questionable. The schema “man”, for example, is the most frequent concept brought up in its 17 passages because it has 57 tokens. In Part 2 of Chapter 9, the short story “A Clean, Well-Lighted Place” by Ernest Hemingway is given as a reading passage in which “man” occurs 17 times. The introductory paragraph, in which “man” occurs three times, reads:

It was late and everyone had left the café except an old man who sat in the shadow the leaves of the tree made against the electric light. In the daytime the street was dusty, but at night the dew settled the dust and the old man liked to sit late because he was deaf and now at night it was quiet and he felt the difference. The two waiters inside the café knew that the old man was a little drunk, and while he was a good client, they knew that if he became too drunk he would leave without paying, so they kept watch on him. (p. 158)

No EFL learners can find a place similar to “the café” described in the paragraph above in Iran. Nor do they have any idea what “being drunk” means because alcoholic drinks are forbidden in public. How, then, do the 108 schema tokens comprising the paragraph help the learners relate to “A Clean, Well-Lighted Place” as the main theme of the story advocated by the MACAST? What type of academic success can such a passage lead to when it starts with a place largely unknown to its readers? What personal and social reactions can such a passage produce when it brings up a theme which has little relevance, if any, to the society in which it is taught?

 

CONCLUSION AND IMPLICATIONS

The present study analyzed seventeen reading passages of M1R in terms of their themes and showed that they were chosen based on a subjective approach called the MACAST. They are allegedly brought up to achieve “academic success” and address English language “proficiency levels” without objectively defining what these stand for and how the textbook helps its readers acquire them. As the compilers of the passages, Wegmann and Knezevic (1985), for example, believed that their readers can “perceive the author's general intent and to read for overall meaning, even when they are unfamiliar with many words and some grammatical structures” (p. xiv). Their belief is best captured by summarization, defined as reducing reading passages to their key ideas or the main points that are worth noting and remembering (Hurst & Hurst, 2015).

Academic studies do not, however, support Wegmann and Knezevic’s (1985) MACAST-based belief. To explore the effect of summarizing some passages of M1R on the reading comprehension ability of undergraduate university students, Shamsini and Mousavi (2014), for example, recruited 75 undergraduate university students and assigned them randomly to summarization, question-generation and control groups. Upon securing the homogeneity of the groups by administering the TOEFL as a pretest, Shamsini and Mousavi (2014) employed traditional reading comprehension techniques to teach some passages of M1R to their control group while requiring the members of the other two groups to summarize and generate questions on the same passages as well. Contrary to their expectations, they did not find any significant difference in the scores of the three groups on the TOEFL administered as a posttest after the treatment, rejecting Wegmann and Knezevic’s rationale behind compiling M1R (i.e., helping the textbook users master “these skills [i.e., getting the main idea through summarization], rather than the content of the readings” (p. xiv)).

In contrast to the MACAST, the MICAST-based analysis of M1R reveals the language proficiency level of the textbook by specifying the exact number of schema types which constitute the semantic, syntactic and parasyntactic domains of its reading passages. The analysis is in agreement with vocabulary experts who consider adequate reading comprehension dependent upon knowing between 90 and 95 percent of the semantic schema types in a text (e.g., Hirsch, 2003). Since the results of this study show that the 17 passages of M1R consist of 2979 semantic schema types, their readers must know between 2681 and 2830 of these schemata in order to comprehend the passages adequately. Considering the fact that a large number of these schema types bear little relevance, if any, to the Iranian society in which they are taught, the number of unknown schemata will be far higher than this accepted range, resulting in misunderstanding of the passages.

In spite of providing empirical indices through which the passages constituting M1R were evaluated in terms of the MICAST, the findings of this study are limited in scope because no attempt was made to design any ability tests on the analyzed passages to study their comprehensibility psychometrically. It is, therefore, suggested that measures such as S-Tests be developed on the passages in order to find out whether teaching them brings about comprehension on the part of Iranian EFL learners. S-Tests are recommended because they require their takers “to draw upon their experiences and background knowledge to distinguish the author's schemata from among the competitives which share some semantic features with those of the author” (Khodadady & Herriman, 2000, p. 206).

Finally, the findings of this study have important implications for language teaching, testing and materials development. They show that language proficiency must be defined in terms of schema types and the bulk of class time must be spent on teaching semantic schemata rather than syntactic and parasyntactic ones. Similarly, for testing the reading comprehension of these passages, the number and type of test items must be based on the percentage of semantic and syntactic schema types and subjective criteria such as teachers’ intuition or experience must be avoided both in teaching and testing the comprehension of passages. Materials developers must also focus on choosing passages whose constituting schemata deal not only with “customs, personalities, values, and ways of thinking of Americans and Canadians” (Wegmann & Knezevic, 1985, p. xiv) but also with those of local readers. 

Anderson, R. (1977a). The notion of schemata and the educational enterprise: General discussion of the conference. In R. Anderson, R. Spiro, & W. Montague (Eds.), Schooling and the acquisition of knowledge (pp. 415-431). Hillsdale, NJ: Lawrence Erlbaum Associates.
Anderson, R. (1977b). Schema-directed processes in language comprehension. In A. Lesgold, J. Pelligreno, S. Fokkema, & R. Glaser (Eds.), Cognitive psychology and instruction (pp. 67-82). New York, NY: Plenum.
Ansary, H., & Babaii, E. (2002). Universal characteristics of EFL/ESL textbooks: A step towards systematic textbook evaluation. The Internet TESL Journal, 8(2), 1-8.
Bartlett, F. (1932). Remembering: An experimental and social study. Cambridge: Cambridge University Press.
Bartlett, F. (1958). Thinking: An experimental and social study. New York, NY: Basic Books.
Birjandi, P., & Soheili, A. (2009a). English: Grade 2 junior high school. Tehran: Ketabhaye Darsie Iran Publication.
Birjandi, P., & Soheili, A. (2009b). English: Grade 3 junior high school. Tehran: Ketabhaye Darsie Iran Publication.
Boriboon, P. (2008). Cultural voices and representations in EFL materials: Design, pedagogy, and research (Unpublished doctoral dissertation). University of Edinburgh, UK.
Carrell, P. L. (1987). Content and formal schemata in ESL reading. TESOL Quarterly, 21(3), 461-481.
Chen, J.,  & Chen, J. C. (2001). QFD-based technical textbook evaluation: Procedure and a case study. Journal of Industrial Technology, 18(1), 1-8.
Crawford, J. (2002). The role of materials in the language classroom: Finding the balance. In J. C. Richards & W. A. Renandya. (Eds.), Methodology in language teaching: An anthology of current practice (pp. 80-91). Cambridge: Cambridge University Press.
Cunningsworth, A. (1984). Evaluating and selecting EFL teaching materials. London: Heineman Educational Books.
Donoghue, F. (1992). Teachers' guides: A review of their function. CLCS Occasional Papers, 30, 1-51.
Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32, 221-233.
Gagen, R. (2007).  Developing & improving reading comprehension skills: Overview of reading comprehension & specific actions to help students develop comprehension. Retrieved from http://righttrackreading.com/readingcomprehension.html
Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore, MD: Paul H. Brookes Publishing Company.
Hirsch, E. D. (2003). Reading comprehension requires knowledge of words and the world: Scientific insights into the fourth-grade slump and the nation's stagnant comprehension scores. American Educator, 27(1), 10-29.
Hughes, A. (2013). The teaching of reading in English for young learners: Some considerations and next steps. In B. Tomlinson (Ed.), Applied linguistics and materials development (pp. 183-198). New York, NY: Bloomsbury.
Hurst, E. B., & Hurst, M. R. (2015). Why can't my son read? Success strategies for helping boys with dyslexia and reading difficulties. Austin, TX: Prufrock Press.
Hutchinson, T., & Torres, E. (1994). The textbook as agent of change. ELT Journal, 48(4), 315-328.
Khodadady, E. (1997). Schemata theory and multiple choice item tests measuring reading comprehension (Unpublished doctoral dissertation). The University of Western Australia.
Khodadady, E. (1999). Multiple-choice items in testing: Practice and theory. Tehran: Rahnama.
Khodadady, E. (2000). Contextual vocabulary knowledge: The best predictor of native and non-native speakers’ reading comprehension ability. Especialist, 21(2), 181-205.
Khodadady, E. (2008). Schema-based textual analysis of domain-controlled authentic texts. Iranian Journal of Language Studies, 2(4), 431-446.
Khodadady, E., Alavi, S. M., & Khaghaninezhad, M. S. (2011). Schema-based instruction: A novel approach to teaching English to Iranian university students. Ferdowsi Review, 2(1), 3-21.
Khodadady, E., Alavi, S. M., & Khaghaninezhad, M. S. (2012). Schema-based instruction and general English courses at Iranian universities. The Iranian EFL Journal, 8(2), 44-68.
Khodadady, E., Alavi, S. M., Pishghadam, R., & Khaghaninezhad, M. S. (2012). Teaching general English in academic context: Schema-based or translation-based approach? International Journal of Linguistics, 4(1), 56-89.
Khodadady, E., & Elahi, M. (2012). The effect of schema-vs-translation-based instruction on Persian medical students' learning of general English. English Language Teaching, 5(1), 146-165.
Khodadady, E., & Hesarzadeh, R. (2014). The effect of schema-vs-translation-based teaching on learning English in high schools. Theory and Practice in Language Studies, 4(1), 143-154.
Khodadady, E., & Herriman, M. (1996). Contextual lexical knowledge and reading comprehension: Relationship and assessment. Paper presented at the 9th Educational Conference of the ELICOS Association of Australia, Sydney, New South Wales.
Khodadady, E., & Herriman, M. (2000). Schemata theory and selected response item tests: From theory to practice. In A. J. Kunnan (Ed.), Fairness and validation on language assessment (pp. 201-222). Cambridge: Cambridge  University Press.
Khodadady, E., & Khosravany, H. (2014). Ideology in the BBC and Press TV's coverage of Syria unrest: A schema-based approach. Review of Journalism and Mass Communication, 2(1), 47-67.
Khodadady, E., & Lagzian, M. (2013). Textual analysis of an English dentistry textbook and its Persian translation: A schema-based approach. Journal of Studies in Social Sciences, 2(1), 81-104.
Khodadady, E., & Mokhtary, M. (2014). Does personality measured by NEO-FFI consist of five dimensions? International Journal of Humanities and Social Science, 4(9/1), 288-301.
Khodadady, E., & Saadi, N. S. (2015). Religious orientation and English language proficiency. International Journal of Psychology and Behavioral Sciences, 5(1), 35-47.
Khodadady, E., & Tabriz, S. A. (2012). Reliability and construct validity of factors underlying the emotional intelligence of Iranian EFL teachers. Journal of Arts & Humanities, 1(3), 72-86.
Lee, R., & Bathmaker, A. (2007). The use of English textbooks for teaching English to “vocational” students in Singapore secondary schools: A survey of teachers’ beliefs. RELC Journal, 38(3), 350-374.
Maley, A. (2013). Vocabulary. In B. Tomlinson (Ed.), Applied linguistics and materials development (pp. 95-111). New York, NY: Bloomsbury.
Maley, A., & Prowse, P. (2013). Reading. In B. Tomlinson (Ed.), Applied linguistics and materials development (pp. 165-182). New York, NY: Bloomsbury.
Mandler, J. (1984). Stories, scripts, and scenes: Aspects of schema theory. Hillsdale, NJ: Lawrence Erlbaum Associates.
McNeil, J. D. (1987). Reading comprehension: New directions for classroom practice. Glenview, IL: Scott, Foresman.
Miekley, J. (2005). ESL textbook evaluation checklist. The Reading Matrix, 5(2), 1-9.
Mukundan, J., Nimehchisalem, V., & Hajimohammadi, R. (2011). Developing an English language textbook evaluation checklist: A focus group study. International Journal of Humanities and Social Science, 1(12), 100-106.
Nation, I. S. P. (2001). Learning vocabulary in another language. Cambridge: Cambridge University Press.
Nunan, D. (1991). Language teaching methodology: A textbook for teachers. New York, NY: Prentice Hall.
O'Neill, R. (1982). Why use textbooks? ELT Journal, 36(2). Reprinted in R. Rossner & R. Bolitho (Eds.), Currents of change in English language teaching (pp. 148-156). Oxford: Oxford University Press.
Pardo, L. S. (2004). What every teacher needs to know about comprehension. The Reading Teacher, 58(3), 272-280.
Poplin, M. S. (1988). Holistic/constructivist principles of the teaching/learning process: Implications for the field of learning disabilities. Journal of Learning Disabilities, 21, 401-416.
Razmjoo, S. A. (2010). Developing a textbook evaluation scheme for the expanding circle. Iranian Journal of Applied Language Studies, 12(1), 121-136.
Razmjoo, S. A., & Jozaghi, Z. (2010). The representation of multiple intelligences types in the Top-Notch series: A textbook evaluation. Journal of Pan-Pacific Association of Applied Linguistics, 14(2), 59-84.
Reid, J. (1994). Change in the language classroom: Process and intervention. English Teaching Forum, 32(1). Retrieved from http://dosfan.lib.uic.edu/usia/E-USIA/forum/vols/vol32/no1/p8.htm
Richards, J. (2001). Curriculum development in language teaching. Oxford: Oxford University Press.
Rixon, S. (2007). EYL teachers: Background beliefs and practices in the teaching of initial reading. In A. Hughes & N. Taylor (Eds.), Teaching English to young learners: Fourth international TEYL Research seminar 2007 papers (pp. 6-14). York: University of York.
Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. C. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 33-58). Hillsdale, NJ: Lawrence Erlbaum Associates.
Seif, S., & Khodadady, E. (2003). Schema-based cloze multiple choice item tests: Measures of translation ability. Universite de Tabriz, Revue de la Faculte des Letters et Sciences Humaines, Langue, 187(46), 73-99.
Schank, R. C. (1982). Dynamic memory: A theory of reminding and learning. Cambridge: Cambridge University Press.
Shamsini, S., & Mousavi, S. A. (2014). Metacognitive strategy awareness and its effect on the learners' reading comprehension ability: Revisited. International Journal of English and Education, 3(3), 40-59.
Tok, H. (2010). TEFL textbook evaluation: From teachers’ perspectives. Educational Research and Review, 5(9), 508-517.
Tomlinson, B. (2013). Introduction: Applied linguistics and materials development. In B. Tomlinson (Ed.), Applied linguistics and materials development (pp. 1-29). New York, NY: Bloomsbury.
Wegmann, B., & Knezevic, M. (1985). Mosaic I: A Reading skills book. New York, NY: Random House.
Wegmann, B., & Knezevic, M. (2002). Mosaic 1 reading (4th ed.). New York, NY: McGraw-Hill Contemporary.
Williams, D. (1983). Developing criteria for textbook evaluation. ELT Journal, 37(3), 251-255.
Wiseman, D. G. (2008). Schema theory: Using cognitive structures in organizing knowledge. (Research Brief No. 10). Retrieved from https://www.coastal.edu/education/research/schematheory.pdf
Yule, G. (2010). The study of language (4th ed.). Cambridge: Cambridge University Press.
Zabihi, R., & Pordel, M. (2011). An investigation of critical reading in reading textbooks: A qualitative analysis. International Education Studies, 4(3), 80-87.
 
Appendix
 
 Schema species codes and tokens forming the 17 passages of M1R
Schema Species                              Species code   Tokens
Abbreviations                               3110           186
Acronyms                                    3120           6
Adjectival Noun                             1310           49
Agentive Adjective                          1110           45
Agentive Complex Adjective                  1111           9
Comparative Adjective                       1120           29
Comparative Adverb                          1210           53
Complex Adjective                           1130           53
Complex Dative Adjective                    1141           28
Complex Noun                                1320           48
Complex Preposition                         2310           98
Complex Verb (Base)                         1411           12
Complex Verb (Past Participle)              1413           12
Complex Verb (Present Participle)           1414           13
Complex Verb (Simple Past)                  1415           13
Complex Verb (Third Person)                 1412           6
Compound Noun                               1330           112
Compound Preposition                        2320           50
Conjunction (Phrasal)                       2110           20
Conjunction (Simple)                        2120           708
Dative Adjective                            1140           81
Demonstrative Determiner                    2210           79
Demonstrative Pronoun                       2410           56
Derivational Adjective                      1150           254
Derivational Adverb                         1220           157
Derivational Complex Adjective              1151           41
Derivational Complex Noun                   1341           33
Derivational Noun (Simple)                  1340           472
Derivational Verb (Base)                    1421           9
Derivational Verb (Present Participle)      1424           4
Emphatic Pronoun                            2420           1
Future                                      2544           22
Future Auxiliary                            2545           6
Gerund Noun                                 1350           56
Gerund Noun (Complex)                       1351           8
Interrogative Pronoun                       2430           12
Model (Past)                                2580           53
Model (Present)                             2570           44
Name (Full)                                 3310           98
Name (Labeling)                             3320           32
Name (Organizational)                       3330           22
Name (Single)                               3340           289
Name (Titles)                               3350           28
Nominal Adjective                           1160           37
Nominal Noun                                1370           35
Numeral (Alphabetic)                        3410           46
Numeral (Digital)                           3420           123
Numeral (Year)                              3440           34
Numeral Determiner                          2230           29
Object Pronoun                              2440           214
Para-adverbs (Additive)                     3511           41
Para-adverbs (Contrasting)                  3512           51
Para-adverbs (Emphatic)                     3513           22
Para-adverbs (Exemplifying)                 3522           10
Para-adverbs (Frequency)                    3514           36
Para-adverbs (Intensifying)                 3515           98
Para-adverbs (Interrogative)                3516           45
Para-adverbs (Location)                     3523           33
Para-adverbs (Manner)                       3517           15
Para-adverbs (Negation/Approval)            3518           99
Para-adverbs (Prepositional)                3519           16
Para-adverbs (Referential)                  3520           38
Para-adverbs (Time)                         3521           82
Particle (Simple)                           3611           188
Past Auxiliary                              2511           212
Past Model Auxiliary                        2531           2
Past Perfect Auxiliary                      2512           2
Past Perfect Model Auxiliary                2532           5
Phrasal Preposition                         2330           10
Phrasal Verb (Base)                         1431           48
Phrasal Verb (Past Participle)              1433           8
Phrasal Verb (Present Participle)           1434           19
Phrasal Verb (Simple Past)                  1435           42
Phrasal Verb (Third Person)                 1432           5
Possessive Determiner                       2240           278
Possessive Pronoun                          2441           5
Present Auxiliary                           2521           345
Present Model Auxiliary                     2541           13
Present Perfect Auxiliary                   2522           15
Present Perfect Model Auxiliary             2542           4
Present Phrasal Auxiliary                   2561           10
Quantifying Determiner                      2250           227
Ranking Determiner                          2260           41
Reflexive Pronoun                           2450           17
Relative Pronoun                            2460           252
Simple Adjective                            1170           435
Simple Adverb                               1230           52
Simple Noun                                 1380           2427
Simple Preposition                          2340           1482
Simple Verb (Base)                          1441           578
Simple Verb (Past Participle)               1443           221
Simple Verb (Present Participle)            1444           203
Simple Verb (Simple Past)                   1445           444
Simple Verb (Third Person)                  1442           152
Specified Pronoun                           2481           17
Specifying Determiner                       2270           1258
Subject Pronoun                             2470           634
Superlative Adjective                       1180           11
Superlative Adverb                          1240           2
Unspecified Pronoun                         2480           110
Symbol (Conventional)                       3710           11
Derivational Verb (Past Participle)         1423           2
Compound Complex Noun                       1331           1
Derivational Verb (Simple Past)             1425           1
Complex Adverb                              1211           2