Document Type: Research Paper

Authors

1 Assistant Professor of Applied Linguistics, Razi University, Iran

2 M.A. in TEFL, Razi University, Iran

Abstract

Dynamic Assessment (DA) is theoretically framed within Vygotsky’s Socio-Cultural Theory (SCT) and relies on the reunification of assessment and instruction. This process-oriented study of reading comprehension investigates the impact of applying computerized dynamic assessment (C-DA), an ongoing strand of DA, on promoting at-risk advanced Iranian EFL students’ reading skills. The sample comprised 32 advanced BA students selected through convenience sampling from Teaching English as a Foreign Language (TEFL) undergraduates at a university in Iran. The DIALANG software was used to identify the individuals’ proficiency level, and the Computerized Dynamic Reading Test (CDRT) was used to examine the effectiveness of the enrichment program (EP) in DA. Upon completion of the CDRT, the learners were presented with two scores, a mediated and an unmediated one. A formula called the learning potential score (LPS) was also utilized to measure the students’ potential for learning. Analysis of the results showed that a pretest (unmediated) score was a sufficient indication neither for measuring individuals’ ability nor for preparing an effective lesson plan for them. The findings of this investigation may prove useful for those who are concerned with individuals requiring considerable attention, that is, at-risk or struggling learners, within the realm of DA.

INTRODUCTION

Dynamic Assessment (DA), an alternative to traditional testing which originates from Socio-Cultural Theory (SCT), focuses on the process-oriented aspect of reading comprehension (Carney & Cioffi, 1990). SCT is a psychological theory developed by Lev Vygotsky, and its tenets serve as the basis of DA, which relies on the reunification of assessment and instruction. DA holds that development occurs if a mediation phase is integrated into assessment (Lidz & Gindis, 2003). In this regard, Vygotsky (1978) defined mediation as the appropriate form of support that leads to the promotion of students’ abilities and contended that it is most helpful when it is based on the individual learner’s Zone of Proximal Development (ZPD), which is the difference between a person’s actual and potential ability.

An ongoing strand of DA is Computerized Dynamic Assessment (C-DA), which provides learners with automatic mediation through computers. C-DA has several advantages: it can be administered simultaneously to a large number of learners, it allows learners to retake the test as many times as they would like, and it generates a scoring file for each learner as soon as they finish answering the questions. The present research applied C-DA to reading comprehension.

Dynamic assessment of reading comprehension, which uses a response-to-instruction paradigm, is of high importance for examiners in that it can help them choose an appropriate mediation for learners by exploring their responses to a series of questions presented in an interactive teaching-learning manner. To the best of the researchers’ knowledge, the number of studies conducted on process-oriented investigations of reading comprehension (e.g., Kletzien & Bednar, 1990; Kozulin & Garb, 2002) is far smaller than the number of studies carried out on product-oriented reading comprehension. That is, most previous studies in the area of reading comprehension have focused mainly on product-oriented methods of exploring reading comprehension, a point also underscored by Ajideh (2003, p. 2): “Research has tended to focus upon the product rather than the process.” In this respect, Vandergrift (2007, p. 192) commented that quantitative approaches “tell us something about the product” but fail to tell us anything about the process of “how readers arrive at the right answer or why comprehension breaks down”; hence, the researchers saw a need to investigate reading processes through in-depth qualitative methods to achieve a better understanding of how foreign language readers attain successful comprehension.

Due to the scarcity of research on process-oriented investigations of reading comprehension and on qualitative methods of exploring it (English, 2003), this study attempted to address these concerns by applying C-DA, an ongoing strand of DA that overcomes some of the issues of DA, to reading assessment and instruction.

One of the most important terms, which has also been brought into the title of this study, is ‘at-risk’ students. By stating that the DA procedure is most valuable when used with at-risk readers, Kletzien and Bednar (1990) highlighted the significance of this term in examining the DA procedure. For them, being at risk does not mean that a child is doomed to be a poor reader, but it does indicate that he or she may need especially close monitoring and prompt intervention to prevent reading difficulties. To define ‘at-risk students’, Schneider (1999) stated that there are students who cannot learn a foreign language, here English, in regular classroom settings and at a speed equal to that of their classmates; such students are hence called at-risk students.

Another key term in the title is ‘advanced’, which stands in apparent contradiction to ‘at-risk’: the two terms represent, respectively, the ‘should-be’ level and the ‘actual’ status of the participants of the present study. A senior (undergraduate) TEFL student can be expected to have sufficient mastery of the four basic skills of a language (here English), but despite being seniors, the participants of the present study lacked this mastery.

Therefore, because they were seniors, they were considered ‘advanced’ students, but due to their low proficiency, based on the results of the DIALANG placement test, which is the primary outcome of an EU-funded project to deliver an instrument for aligning language learners with the Common European Framework of Reference for Languages (CEFR), they were also called ‘at-risk’. Having obtained these results, the researchers found it justifiable to use these two contradictory words together. Thus, this study sought to apply C-DA to these ‘advanced at-risk’ students to see if they could realize their potential (ZPD). The participants’ DIALANG proficiency levels are based on the CEFR levels, which range from (a) A1 (Breakthrough) and (b) A2 (Waystage) as Basic User, through (c) B1 (Threshold) and (d) B2 (Vantage) as Independent User, to (e) C1 (Effective Operational Proficiency) and (f) C2 (Mastery) as Proficient User.
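For readers unfamiliar with the scale, the six CEFR levels reported by DIALANG can be represented as a simple lookup table. The Python sketch below is purely illustrative and merely restates the grouping listed above; it is not part of DIALANG or of the study’s instruments.

```python
# Illustrative only: the six CEFR levels reported by DIALANG, grouped by band.
# This merely restates the list above; it is not part of DIALANG or of the study's instruments.
CEFR_LEVELS = {
    "A1": ("Breakthrough", "Basic User"),
    "A2": ("Waystage", "Basic User"),
    "B1": ("Threshold", "Independent User"),
    "B2": ("Vantage", "Independent User"),
    "C1": ("Effective Operational Proficiency", "Proficient User"),
    "C2": ("Mastery", "Proficient User"),
}

def describe_level(code: str) -> str:
    """Return a readable description of a CEFR level code."""
    name, band = CEFR_LEVELS[code]
    return f"{code} ({name}) - {band}"

print(describe_level("B2"))  # B2 (Vantage) - Independent User
```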

 

LITERATURE REVIEW

Dynamic Assessment

DA is a postmodern notion in testing (Pishghadam & Barabadi, 2012) which holds to the unity of instruction and assessment and attempts to assist learners in moving beyond their current independent functioning by offering mediation (Ebadi, 2014). There are generally two approaches to DA, interventionist and interactionist, terms first proposed by Lantolf and Poehner (2004). These approaches describe the two general kinds of mediation, or ‘intervention’ as Sternberg and Grigorenko (2002) and Lidz (1991) refer to it. The interactionist approach to DA is also referred to as a ‘clinical approach to DA’ by some researchers, such as Summers (2008), because it rejects quantitative research in the area of dynamic assessment and embraces a qualitative approach instead. Proponents of interactionist DA object to what interventionist DA provides: they believe it offers a view not of potential future development but of actual development (Summers, 2008). In interactionist DA, mediation is provided exactly according to the ability of individual learners as it emerges while they interact with a mediator during the assessment procedure.

Unlike interventionist DA, which deals with the standardization of interaction and the quantification and measurement of learning (Poehner, 2005), interactionist DA holds that feedback is to-the-point and emergent, that is, provided at the moment it is needed, and that “learning is interpreted rather than measured” (Tajeddin & Tayebipour, 2012, p. 90).

In the interventionist approach to DA, assessment procedures are standardized; that is, the quality and quantity of mediation during the DA administration process are preplanned. Interventionist DA is also called “a psychometric approach to DA” (Summers, 2008, p. 71). In contrast to interactionist DA, in which mediation emerges from the interaction between a mediator and only a small number of learners, interventionist DA can mediate a larger number of learners, and its mediation is already “scripted beforehand and is hierarchically arranged from most implicit to most explicit” (Lantolf & Poehner, 2013, p. 152). Because it follows pre-scripted intervention and can mediate larger cohorts of learners through C-DA procedures, the interventionist approach to DA was utilized in the present study.

 

Computerized Dynamic Assessment (C-DA)

Similar to DA, the central concept of computerized dynamic assessment (C-DA) is grounded in Vygotsky’s (1978) theoretical framework. So far, few C-DA studies have been conducted in the field of English language teaching (ELT); therefore, what follows is a brief description of these studies.

Recently, several studies, such as Pishghadam, Barabadi, and Kamrood (2011), Teo (2012), Poehner and Lantolf (2013), and Teo (2014), have been conducted in the area of C-DA; to clarify the topic, Pishghadam et al.’s (2011) study is scrutinized in more detail below. Based on these studies, which are synopsized below, much more research is required to shed light on the field of C-DA.

Pishghadam et al. (2011) investigated the effectiveness of using a computerized dynamic reading comprehension test (CDRT) with 77 Iranian EFL students. They designed a software program which provided the learners with five pre-fabricated, strategy-based mediations for each item, ranging from the most implicit (‘Your answer is wrong, try again.’) to the most explicit (stating the correct answer: ‘The right answer is ….’), and which reported two scores for each learner: a non-dynamic score indicative of the learner’s Zone of Actual Development (ZAD), that is, the ability to answer the items correctly without using the prompts, and a dynamic score showing the learner’s ZPD, that is, performance after being helped by the mediations. The results of this study showed that providing hints in this way was significantly effective in improving the students’ text comprehension.

In another study, Pishghadam and Barabadi (2012) conducted research on C-DA in the Iranian context by investigating the effectiveness of a researcher-made, validated software program for reading comprehension. The findings of their study represented the learners’ microgenetic development, with the instrument observing the two important psychometric properties of testing, namely validity and reliability.

Addressing the problem of large numbers of learners in classes, Teo (2012) sought to solve it by applying the Viewlet Quiz 3 software (available at: http://tinyurl.com/ch4ws8h) in a Taiwanese EFL context. The findings supported the positive role of C-DA procedures in helping learners achieve higher proficiency in metacognitive reading strategies. Having set up a control and an experimental group, Teo (2014) conducted another study using C-DA with 137 non-English-major students. The study focused on only three reading skills: identifying main ideas (FMI), using contextual clues (CC), and making inferences (MI). The results showed a significant impact of C-DA on FMI and MI in the experimental group, while the learners’ CC skill showed no development.

Finally, transcendence (TR), the process of assessing learners in an increasingly more challenging context than the original one, has not been the focus of many studies, but Poehner and Lantolf (2013) applied DA principles in an online format while concentrating on the two receptive skills of listening and reading comprehension in an L2 context. The results revealed the learners’ microgenetic development both through the TR sessions and through the stages before TR.

 

PURPOSE OF THE STUDY

The main purpose of this study is to apply Computerized Dynamic Assessment (C-DA) to the development of learners' reading ability in the Iranian EFL context and to see whether it is effective to develop and implement a DA procedure with at-risk students majoring in English. Thus, this study attempted to answer the following question:

What are the effects of Computerized Dynamic Assessment on promoting Iranian EFL students’ reading skills?

 

METHOD

Participants

The participants of this study comprised 32 advanced BA EFL students selected through convenience sampling from Teaching English as a Foreign Language (TEFL) undergraduates at a university in Kurdistan, Iran. The mean age of the sample was 27, indicating that the participants were adults.

This study was carried out in the Iranian context, and the homogeneity of the participants was assumed on the basis of Poehner’s (2005) contention that the number of semesters students have spent studying a language reflects their proficiency level in that language. The results obtained from DIALANG, a free online assessment system for determining learners’ proficiency level, were also indicative of the homogeneity of the participants. In this regard, among the 32 participants, 24 were at the B2 English reading comprehension level, 7 were at the B1 level, and only one was at the C1 level.

Having studied Teaching English as a Foreign Language (TEFL) for four years, some of these students were not even able to speak for two minutes without hesitations or pauses, which means that they were ‘at-risk’ students. Hence, the reason for selecting them to take part in this study was their low proficiency in English despite having studied it for four years. It is important to state that although the learners’ level was determined by DIALANG, their classification as either ‘beginner’ or ‘advanced’ in the present study refers to the number of semesters they had spent studying English at Islamic Azad University. Thus, the participants were advanced by virtue of having enrolled in an eighth-semester undergraduate university language course. In other words, although there was no need to determine the level of the students (‘advanced’) who participated in this study, the researchers found it necessary to use the DIALANG software to strengthen their contention that these students were actually ‘at-risk’ because they did not have sufficient mastery of the English language.

 

Instrumentation

The required data were collected using the following instruments: the DIALANG software and the Computerized Dynamic Reading Test (CDRT) developed by Pishghadam and Barabadi (2012).

DIALANG

DIALANG is a free online assessment system intended for adult (individual) language learners who want diagnostic information about their proficiency in three of the four main skills, that is, Reading, Listening, and Writing, and two subskills, Grammar and Vocabulary, in fourteen languages. DIALANG has instructions and tests in all these languages. DIALANG’s assessment framework and self-assessment statements are based on the Common European Framework of Reference for Languages (CEFR); it also gives feedback on the strengths and weaknesses of the learner’s proficiency and advises on how to improve language skills. To determine their proficiency level in this study, the participants were asked to visit the following address: http://dialangweb.lancaster.ac.uk. Since each test took about two hours per individual to complete, the whole process took two days until all of the participants’ proficiency levels were determined.

 

Computerized Dynamic Reading Test (CDRT)

One of the instruments used in this study, which bore full responsibility for collecting the posttest data, was a computer software program called the Computerized Dynamic Reading Test (CDRT), developed by Pishghadam and Barabadi (2012). Reacting to the students’ responses automatically, this instrument provides a kind of preplanned help or mediation which has the two principal properties of any ZPD-based mediation: it is both gradual and contingent. The hints are arranged so that they move from the most implicit to the most explicit. The first and last hints are fixed across all items and are shown as Hint 1 (‘Your answer is wrong, try again’) and Hint 5 (‘The right answer is ….’), respectively; the other three hints vary depending on the type of item. The test has been developed in such a way that it should be taken individually. Upon completion of the test, each student’s performance is summarized in a scoring file containing two scores: a score gained with the use of hints and a score gained with no hint or mediation.
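To make the mediation sequence more concrete, the following minimal Python sketch illustrates how a CDRT-style item could cycle through the five graduated hints. It is not the authors’ actual software: the item layout and the canned responder are hypothetical, and only the wording of the fixed first and last hints follows the description above.

```python
# A minimal, hypothetical sketch of CDRT-style graduated mediation (not the actual CDRT software).

def hints_for(item):
    """Five hints from most implicit (1) to most explicit (5).
    Hints 1 and 5 are fixed for every item; hints 2-4 are item-specific and strategy-based."""
    return (
        ["Your answer is wrong, try again."]
        + item["strategy_hints"]                              # three item-specific hints
        + ["The right answer is " + item["answer"] + "."]     # most explicit: reveal the answer
    )

def administer(item, get_response):
    """Present one item, escalating hints until the response is correct or the answer is revealed.
    Returns (answered_without_mediation, number_of_hints_used)."""
    hints = hints_for(item)
    hints_used = 0
    response = get_response(item["prompt"])
    while response != item["answer"] and hints_used < len(hints):
        print("Hint", hints_used + 1, ":", hints[hints_used])
        hints_used += 1
        if hints_used == len(hints):   # hint 5 has revealed the answer
            break
        response = get_response(item["prompt"])
    return hints_used == 0, hints_used

# Hypothetical usage with a canned responder that answers wrongly once, then correctly.
item = {
    "prompt": "In line 3, the word 'it' refers to ...",
    "answer": "the test",
    "strategy_hints": [
        "Look at the sentence before the pronoun.",
        "Find the nearest singular noun phrase.",
        "Re-read lines 2-3 and decide which noun 'it' replaces.",
    ],
}
responses = iter(["the reader", "the test"])
print(administer(item, get_response=lambda prompt: next(responses)))  # (False, 1)
```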

The developers of the software contended that the CDRT passages were selected so that, in addition to being in line with strategy-based mediation, which means not being biased against or in favor of any particular students, they had a readability level, as assessed by readability formulas, suitable for BA and MA students studying English as their major in Iran. The readability formulas are available at http://www.readabilityformulas.com/, a free website that helps readers (1) score their texts (documents, books, policies, technical materials, etc.) and (2) find the reading level and grade level that different readers need in order to read and comprehend those texts. Such formulas have been used over time by many researchers, such as Farr, Jenkins, and Paterson (1951), Pichert and Elam (1985), Meade and Smith (1991), and Wang, Miller, Schmitt, and Wen (2013), to name just a few. When the test is taken, five pieces of information about each test taker are stored in the scoring file (a sketch of such a record is given after the list below):

1) Score gained with no hint.

2) Score gained with the use of hints.

3) The number of hints used in each item.

4) The total number of hints used.

5) The total time spent on the test.
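A hedged sketch of what such a per-learner record could look like is given below; the field names and example values are hypothetical and are chosen only to mirror the five pieces of information listed above, not the actual CDRT file format.

```python
# Hypothetical representation of one entry in the CDRT scoring file.
# Field names are illustrative, not those used by the actual software.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ScoringRecord:
    unmediated_score: int                                           # 1) score gained with no hint
    mediated_score: int                                             # 2) score gained with the use of hints
    hints_per_item: Dict[int, int] = field(default_factory=dict)    # 3) hints used on each item
    total_time_minutes: float = 0.0                                 # 5) total time spent on the test

    @property
    def total_hints(self) -> int:
        """4) Total number of hints used across all items."""
        return sum(self.hints_per_item.values())

# Hypothetical learner record: the item numbers and hint counts are invented.
record = ScoringRecord(unmediated_score=20, mediated_score=63,
                       hints_per_item={3: 2, 7: 5, 12: 1}, total_time_minutes=42.5)
print(record.total_hints)  # 8
```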

With regard to the reliability and validity of the test, Farhadi, Jafarpur, and Birjandi’s (1994) description of these two terms was adopted. The test had moderate reliability (r = .70) and concurrent validity (r = .66), and because it was piloted, according to the developers, some modifications were made both in the content of the test and in the software package (Pishghadam & Barabadi, 2012).

 

Data Collection Procedure

Having determined the participants’ proficiency level with the DIALANG test and having identified the difficulties the learners experienced, the researchers collected the pretest data by giving the students two paper-based passages similar in difficulty to the DIALANG passages, keeping the same question types as those used in the posttest.

Having made sure, on the basis of their low DIALANG scores, that the participants were at-risk, the researchers gave them two passages, each containing 10 questions. In other words, this stage of the study involved a non-dynamic pretest which provided the researchers with the participants’ NDA, or actual, scores.

In the step following the pretest, the participants were trained through an Enrichment Program (EP) in DA. The EP sessions were conducted dynamically, and the points where all of the learners exhibited problems were given special attention by the researchers so that they could be overcome. After the EP sessions, the participants took the CDRT to check the effectiveness of the program. They were guided both in how to work with the CDRT software and in how to fill in the required information in order to take the tests in the CDRT program. They were also given complete information about DA, C-DA, and the hints or mediation they would encounter while taking the test, and they were encouraged to ask the researchers for help in case of any problems. Although the participants’ eager requests that the same process be held for other skills could in itself be indicative of its usefulness, the CDRT was used to check whether they had really learned how to respond to reading comprehension questions.

 

Data Analysis

DA is a relatively new approach to L2 assessment which is based on learners’ ZPD, and it is difficult to define quantitatively because it is basically a qualitative procedure (Ableeva, 2010). Since this study is framed within DA and attempts to capture learners’ ZPD, qualitative analysis was used. At the same time, Poehner and Lantolf (2005) claimed that interventionist DA is rooted in the quantitative interpretation of data; being interventionist in procedure, this study therefore utilizes quantification of the data as well.

In short, the data collected through DIALANG were analyzed through the DIALANG website itself, which the students visited to take the test. The pretest data, the participants’ actual pre- and post-test scores along with their mediated scores, and the number of hints used by the students were then analyzed descriptively. Finally, the following formula, developed by Kozulin and Garb (2002) and called the learning potential score (LPS), was utilized to approximate the learners’ potential for learning.

 

LPS = (2 × S_post − S_pre) / S_max (adopted from Kozulin & Garb, 2002, p. 119)

(where S_post = the dynamic score, S_pre = the non-dynamic score, and S_max = the highest dynamic score gained in this test)
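A minimal computational sketch of this formula is given below; the function simply restates the Kozulin and Garb (2002) expression, and the example values are hypothetical rather than taken from the present data.

```python
def learning_potential_score(s_post: float, s_pre: float, s_max: float) -> float:
    """Learning potential score (Kozulin & Garb, 2002): LPS = (2 * S_post - S_pre) / S_max."""
    return (2 * s_post - s_pre) / s_max

# Hypothetical learner: non-dynamic (pretest) score 20, dynamic (mediated) score 70,
# with 100 as the highest dynamic score gained on the test.
print(round(learning_potential_score(s_post=70, s_pre=20, s_max=100), 2))  # 1.2
```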

 

RESULTS

DIALANG Results

Among the 32 participants, the results showed that 24 were at the B2 English reading comprehension level, 7 were at the B1 proficiency level, and only one was at the C1 level. The results of the DIALANG test taken by the participants of the study are synopsized below, from B1 through B2 to C1.

 

Table 1. The DIALANG results of the participants’ level based on CEFR

CEFR Level   Number   Percent
B1           7        22
B2           24       75
C1           1        3

 

Based on Table 1, 7 of the participants were at the B1 English reading comprehension proficiency level. According to the CEFR, students at this level can understand texts containing everyday or job-related language as well as personal letters in which the writer describes events, feelings, and wishes. Being at the B1 level shows that about 22% of the undergraduate participants were genuinely at-risk, because they were supposed to be at the C2 level.

The next level is B2, which accounted for the largest share of the participants: the results indicated that 24 students were at the B2 English reading comprehension proficiency level. According to the CEFR, students at this level can understand articles and reports about contemporary issues in which the writer takes a particular position or expresses a particular viewpoint; they can also understand most short stories and popular novels. The highest percentage of the participants, about 75%, belonged to B2. As with the B1 level, participants at this level are considered genuinely at-risk because they are not able to perform the way an advanced (C2) student can.

The last category belongs to the participant who, based on the CEFR, can understand long and complex factual and literary texts, appreciating differences in style, and can also understand specialized language in articles and technical instructions, even outside his or her own field. Students at this level are considered advanced, but they should practice more to become as proficient as students at the C2 level. The results showed that one student was at the C1 English reading comprehension proficiency level, indicating that for about 3% of the participants there was still some way to go.

The students are considered ‘advanced’ on the basis of the number of semesters they had spent studying English as a foreign language, not on the basis of the level determined by the DIALANG test. Therefore, the aim of using the DIALANG test was simply to confirm whether the students were at-risk. According to the results obtained from the DIALANG test, 31 students out of 32 were not really advanced; that is, far from being advanced, they were at-risk, which supports the researchers’ claim. Therefore, it can be stated that the participants of the study were homogeneous in that they were all at-risk eighth-term students. One of the assumptions within DA procedures, which has also been underscored by some researchers (e.g., Pishghadam et al., 2011; Tzuriel & Kaufman, 1999), is the usefulness of mediation or hints for low achievers whose low achievement might be due to academic, cultural, economic, or socio-economic reasons.

 

The Pretest Results

According to Table 2, the highest number of questions answered without any mediation belonged to participant 20, who answered 18 items correctly and scored 90 points. The second highest score belonged to participant 7, who scored 55 by answering 11 items. While no one scored 50 in the pretest, participants 12 and 25 scored 45. Among the others, three participants (14, 24, and 31) answered 8 items correctly, and six participants (3, 11, 13, 23, 27, and 32) scored 35. Most of the participants, 12 in all (2, 4, 6, 8, 9, 10, 15, 17, 18, 28, 29, and 30), answered 6 items correctly and scored 30, and only participant 5 answered 5 items correctly. Finally, with regard to the lowest scorers, five participants (1, 19, 21, 22, and 26) scored 20, and the lowest score, 10, belonged to participant 16.

Table 2. The number of questions answered without any mediation by the participants

How many participants   Which participants                               Questions answered without any mediation   Score
1                       20                                               18                                         90
1                       7                                                11                                         55
2                       12 and 25                                        9                                          45
3                       14, 24, and 31                                   8                                          40
6                       3, 11, 13, 23, 27, and 32                        7                                          35
12                      2, 4, 6, 8, 9, 10, 15, 17, 18, 28, 29, and 30    6                                          30
1                       5                                                5                                          25
5                       1, 19, 21, 22, and 26                            4                                          20
1                       16                                               2                                          10

 

 

This is indicative of their being at-risk. Based on the pretest results, there were as many problematic areas as there were questions in the pretest, but because most of them had less influential effects, only the most important ones are presented below. The data presented below are based on Table 3. The areas in which the learners showed problems were as follows.

 

1. The learners’ inability to connect the ideas in the passages.

Though they knew most of the words in the passages, they could not connect the ideas presented across the paragraphs to figure out the gist of the passages. In other words, because they spent so much effort grasping the meaning of individual words, understanding the passages as a whole became complicated for them. Items 9 and 20 in the pretest were related to connecting ideas in the first and second passage respectively. Although only 7 participants failed to answer item 9 correctly (losing 35 points in total), 16 failed to answer item 20 correctly without any mediation (losing 80 points in total).

 

2. Their confusion in determining the meaning of words.

In line with the fact that the largest number of questions in the CDRT aimed at assessing the learners’ ability in word guessing, a total of 5 items (2, 7, 10, 16, and 17) out of 20 in the pretest were related to word guessing. The results indicated that a total of 305 points were lost by the participants who failed to answer these items correctly, which pointed to their inability to identify the meaning of words and, consequently, their low performance in reading comprehension.

3. Their difficulty in distinguishing minor or less important details from significant information.

Items about information implied in a passage can be regarded as those which assess this aspect of reading comprehension. Items 1, 14, and 15 are of this type and caused the learners a great deal of difficulty, a point they expressed upon completion of the test.

4. The impact of external factors on their performance.

Some external problems, such as lack of concentration or distraction during reading, trouble relating what is read to their prior knowledge or personal experiences, and insufficient contact with English speakers after graduation, could also be considered among the problems they encountered throughout the process. The learners were told not to worry about anything and to try to stay calm while answering the questions.

 

In Table 3, the nine reading skills are displayed along with the pretest items assessing each skill. In addition, for each skill, the points lost by the participants on each of its items are presented and then summed to show the total number of points lost on that skill.

Table 3. Reading skills, the related items of the pretest in each skill, and the total number of scores lost by the participants

No.   Reading Skill                                      Pretest Items       Scores Lost per Item    Total
1     Finding Definitions or Word Guessing Questions     2, 7, 10, 16, 17    45, 65, 55, 70, 70      305
2     Questions about Sentence Insertion                 3, 6, 13, 18        60, 70, 60, 65          255
3     Questions about Where in the Passage               1, 11, 14, 15       45, 50, 75, 65          235
4     Questions about Table Form                         4, 5, 20            50, 70, 80              200
5     Implied Detail or Inference Questions              1, 14, 15           45, 75, 65              185
6     Pronoun Referent Questions                         12, 19              15, 160                 175
7     Paraphrase Questions                               14, 15              75, 65                  140
8     Factual Information or Stated Detail Questions     9                   35                      35
9     Main Idea Questions                                8                   35                      35

 

Table 3 shows that the five items in the pretest with the highest total of lost points (305) all focused on the first reading skill, Finding Definitions or Word Guessing Questions (the skill names are capitalized to distinguish them from the rest of the text). Had all of the participants failed to answer an item, the points lost on that item would equal 160 (32 × 5 = 160), as was the case for item 19. Although eight items (four per skill) belonged to the second and third reading skills, Questions about Sentence Insertion and Questions about Where in the Passage respectively, the total number of points lost on the second skill (255) was higher than that on the third (235). Skills 4 and 5, Questions about Table Form and Implied Detail or Inference Questions, each comprised three items, and there was again only a slight difference in the points lost on each. The table paved the way for the researchers to identify the participants’ problems in reading comprehension and to mediate them with a focus on the identified problems.
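The skill totals in Table 3 follow directly from the per-item losses, with each missed item costing a participant 5 points (so an item missed by all 32 participants costs 32 × 5 = 160 points, as with item 19). The short sketch below reproduces a few of the Table 3 totals under that 5-point-per-item assumption; it is an illustration of the aggregation, not part of the authors’ analysis.

```python
# Reproducing a few Table 3 totals: points lost per skill = sum of points lost on its items.
# Assumes each item is worth 5 points, so an item missed by n participants costs 5 * n points
# (item 19, missed by all 32 participants, therefore costs 160 points).
points_lost_per_item = {                         # per-item losses taken from Table 3 (pretest)
    2: 45, 7: 65, 10: 55, 16: 70, 17: 70,        # word-guessing items
    3: 60, 6: 70, 13: 60, 18: 65,                # sentence-insertion items
    12: 15, 19: 160,                             # pronoun-referent items
}

skills = {
    "Finding Definitions or Word Guessing Questions": [2, 7, 10, 16, 17],
    "Questions about Sentence Insertion": [3, 6, 13, 18],
    "Pronoun Referent Questions": [12, 19],
}

for skill, items in skills.items():
    total = sum(points_lost_per_item[i] for i in items)
    print(skill, "->", total, "points lost")     # 305, 255, and 175, matching Table 3
```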

 

 

The Posttest Results of the CDRT

According to the posttest results, the number of questions answered without mediation increased considerably: the five students who could not answer more than four items (20 points) in the pretest without mediation scored 63, 54, 68, 63, and 86 in the posttest after mediation was provided. This indicates that learners with the same independent performance in the pretest may vary drastically from one another in the posttest, depending on their varying responsiveness to mediation (Poehner et al., 2014). The most frequent pretest score (30) belonged to the 12 participants (2, 4, 6, 8, 9, 10, 15, 17, 18, 28, 29, and 30) who answered six items correctly in the pretest; they raised their scores to 71, 84, 73, 66, 59, 68, 70, 73, 78, 63, 57, and 66 respectively, which, in addition to underscoring the previous point, also represents the effectiveness of the EP sessions.

Due to the qualitative nature of the proposed question, and in order to boost the credibility and validity of the results, the researchers used two types of resources (i.e., the participants’ pre- and post-test scores in the given reading passages along with the number of hints used, and their gain scores) to meet the purpose of the study.

As Table 4 shows, the posttest results are indicative of the high potential of these five participants (about 16% of the sample) in reading comprehension, or of their different ZPDs. Their different performances in the posttest, brought about by the finely grained, gradual, and contingent mediation offered by the C-DA program, disclosed both how NDA testing procedures ignore such non-gainers and the importance of Vygotsky’s perspective on learners’ independent performance (ZAD) and their mediated performance (ZPD). The participants stated that they felt completely hopeless when they saw their own pretest scores but were inspired to believe in themselves after attending the EP sessions.

 

Table 4. The participants’ identical actual pretest scores and their differing actual and mediated posttest results

Student   Actual Pretest Score   Actual Posttest Score   Mediated Posttest Score
1         20                     35                      63
19        20                     15                      54
21        20                     45                      68
22        20                     35                      63
26        20                     55                      86

Table 4 displays the students who obtained the same actual pretest score (20) and who would therefore be considered non-gainers by traditional testing; the posttest results, however, show that this claim is not sound. Even though they performed differently, they all demonstrated the effectiveness of the C-DA program. For instance, participant 19 again gained the lowest score (15), meaning she answered three questions without mediation, while participant 26 was the best performer and gained 55 by answering 11 questions without any mediation. In summary, unlike traditional tests, which wrongly interpret a low score (here 20 out of 100) as a person’s inability to learn anything afterwards, the C-DA program reveals a currently unskilled learner who is able and eager to progress once he or she is presented with appropriate and sufficient instruction.

In line with the importance of investigating an individual’s performance in the C-DA program collectively rather than in isolation, a point also underscored by Poehner et al. (2014), Table 3 provides the researchers with each individual’s collective performance. In this regard, an individual’s performance appears quite straightforward if his or her LPS is taken into account. So far, the participants’ general performance has been discussed, but a more nuanced set of information emerges when a learner’s scores are taken together. In this way, each individual’s needs for the different constructs are specified, and the mediator can be certain about the inclusion of appropriate ZPD-based materials for that individual. For instance, a close investigation of the scoring profiles of participants 1 and 22, who performed identically by reaching the same actual pretest score, mediated score, actual posttest score, and consequently gain score, revealed that they would have obtained different scores at the various stages of the assessment had their specific needs been taken into account.

 

The First Resource: The Participants’ Actual Pre- and Post-test Scores along with Their Mediated Scores and the Number of Hints Used

The first set of data used to identify the effects of C-DA on promoting Iranian EFL students’ reading skills came from comparing their pretest and posttest scores along with the number of hints used in the posttest. This is in line with Vygotsky’s position that, in order to diagnose people’s progress thoroughly, not only their ZAD but also their ZPD needs to be taken into consideration. Figures 1 and 2 summarize the most basic information on the participants’ pre- and post-test scores and the number of hints they used, which is indicative of the participants’ remarkable progress. To maintain the quality of the figures, they are presented separately, each containing the information for 16 participants.

Figure 1. The participants’ pre- and post-test scores and the number of hints used

 

As illustrated in Figure 1, the participants scored considerably higher after attending the EP sessions and being mediated by the C-DA program; their use of hints in the posttest shows that they progressed in reading comprehension. For instance, according to Figure 1, the lowest actual pretest score belonged to participant 16, who got 10 points but improved her performance to 58 after receiving 42 hints in the posttest. In the same vein, participant 1, who scored 20 without mediation, improved her score to 63 by using 37 hints in the posttest.

Figure 2 shows the remaining participants’ performance and further clarifies the appropriateness of the C-DA program in enhancing their reading skills. Participant 20, who used the fewest posttest hints (8), was the best scorer in the pretest (90) among all of the participants (both those in Figure 1 and those in Figure 2).

 

Figure 2. The participants’ pre- and post-test scores and the number of hints used

 

As illustrated in the figure, she scored 90 without any hints and, in line with Poehner et al. (2014, p. 12), who stated that “those with high actual scores had less room for improvement when mediation was offered”, she gained only 2 points and scored 92 in the posttest, which means that she outperformed all of the other participants with regard to the highest DA posttest score (92). As for the highest number of hints used in the posttest (46), recorded by participants 19 and 27, it should be mentioned that they earned an identical mediated score of 54. This, according to Poehner et al. (2014), does not mean that they needed the same types of assistance or that they faced the same problematic areas. While participant 19 used one hint to reach the answer for items 1 and 5, which dealt with finding specific information and filling in a table respectively, participant 27 answered them without using any hints.

Based on the above figures, it is clear that every participant’s performance improved from the pretest to the posttest. As also highlighted by Poehner et al. (2014), the participants’ development underlined the importance of attending a process of EP sessions so that learners’ emerging abilities can be identified. The results, which indicate significant improvement in the students’ performance after the EP sessions, are also in line with Mardani and Tavakoli’s (2011) study, in which the participants’ performance improved significantly after the implementation of EP sessions.

 

The Second Resource: The Participants’ Gain Scores

Gain scores indicate the difference between NDA and DA scores or, in Poehner and Lantolf’s (2013) terms, “the change between the actual and mediated components of the tests” (p. 334). Although Kozulin and Garb (2002) did not endorse using a gain score alone because it cannot show how learners’ scores change, the present research made use of it as one of the methods of triangulation for identifying the effects of C-DA on promoting Iranian EFL students’ reading skills, and also as an indication that helps readers see the differences between the participants’ actual and mediated performance more clearly.
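As an illustration of this subtraction, the sketch below recomputes the gain scores for the five same-pretest participants shown in Table 4; it merely restates values already reported in the paper.

```python
# Gain score = mediated (DA) posttest score minus actual (NDA) pretest score,
# computed here for the five participants shown in Table 4.
table4 = {
    # participant: (actual pretest score, mediated posttest score)
    1: (20, 63),
    19: (20, 54),
    21: (20, 68),
    22: (20, 63),
    26: (20, 86),
}

for participant, (pre, mediated) in table4.items():
    print("Participant", participant, "gain =", mediated - pre)
# Participant 26 shows the largest gain (66 points) and participant 19 the smallest (34).
```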

Figure 3, which presents the participants’ gain scores, allows readers to achieve a more comprehensive understanding of the participants’ progress, especially for those who obtained a low actual score in the pretest. Owing to their low actual scores, these participants had much capacity for improvement once they were provided with mediation (Poehner et al., 2014).

 

Figure 3. The participants’ gain scores

Note: To keep the figure legible, only odd-numbered participants are labelled, but the figure includes all participants.

 

The main intent of Figure 3 is to elucidate the dramatic change in the participants’ performance upon the reception of mediation. As illustrated, almost all of the participants progressed significantly except participant 20, who did so only to a trivial extent; she gained only two points owing to her very high actual pretest score, which shows that she required little mediation. Participants 7, 25, and 27 gained fewer than 20 points which, for the first two, again lends some support to Poehner et al.’s (2014) point about having less room for progress; the case of participant 27, however, who yielded a gain score of 19, is harder to explain. Because her actual score was 35, she was expected to progress as much as the others did, since she had much room for development, received the same mediation as the others, and was provided with an identical environment. Had these conditions been different, it would have been appropriate to complain, following Delandshere (2002, p. 1480) and Poehner (2011), about “find[ing] out whether their educational experiences yield similar achievements” simply because they have all “been taught the same thing.”

However, in contrast to Lantolf and Poehner (2013), who contended that fairness lies in individualizing the quality of support each learner receives, it can be argued that no subjectivity entered the testing process in the present study, as all of the participants were provided with the same materials at all testing stages owing to the scripted nature of interventionist approaches to DA. In other words, the participants’ individual differences were not taken into account: they took the same pretest, the EP sessions in DA were the same for all, and the prefabricated hints in the posttest (CDRT) were identical. Hence, one cannot contend that the gains of 54 and 66 points by participants 4 and 26, respectively, are due to unfair or subjective treatment, because every effort was made to minimize recognizable “sources of variance that may affect different groups in different ways” (ETS, 2009, p. 3, as cited in Lantolf & Poehner, 2013).

In summary, the two resources used for triangulation both showed the improvement of the Iranian EFL students’ reading skills after participating in the EP sessions in DA and, subsequently, after receiving the mediatory prompts in the C-DA program. Both resources indicated that the students’ development from pre- to post-test was due to the EP sessions in DA along with the hints provided in the CDRT. For instance, the participants’ self-reflections buttressed the promising impact of C-DA on reading comprehension, and their urging that identical classes be held for other skills can be interpreted as satisfaction with the program. Meanwhile, their high gain scores, achieved with a relatively low number of hints (31 on average), strengthened the case for the effects of C-DA even further. In the end, the two resources revealed the purpose of C-DA assessment, that is, the need for a continuing or ongoing process to realize and reveal individuals’ potential for learning.

 

DISCUSSION

In congruence with the aim of the study, that is, to determine the effects of C-DA on the at-risk participants’ reading skills in an L2 context (learners who are believed to have reached an endpoint according to non-dynamic assessments, that is, non-gainers), it was substantiated that a pretest (ZAD) score was an insufficient indication both for measuring an individual’s ability and for preparing an effective lesson plan for him or her. This challenges the concern of NDA with visible abilities only (Ableeva, 2010) and instead supports the instructional goal of DA, which, according to Poehner (2008, p. 315), is “to render the invisible visible.”

Therefore, DA scores pave the way for teachers to see far beyond what NDA captures, as NDA neglects the differences among non-gainers. That two learners earn the same pretest score, for instance, cannot necessarily be interpreted as their having the same proficiency level. Without taking part in the C-DA, one could not have contended that one participant would surpass another in terms of a superior range of learning potential. In other words, the subtle distinctions among their learning potentials appeared only after the amount of mediation required in the different skills was specified through the C-DA. This is in line with the results obtained by Ebadi (2016), in which learners’ potential was modified upon receiving ZPD-based mediation through Web 2.0 tools and the learners consequently became more self-regulated.

With regard to learning potential, it is worth noting that the findings indicated that understanding the learning potential of each individual can lead to a more comprehensive ZPD-based lesson plan even when the approach to DA is interventionist. In an interventionist DA study by Ahmadi and Barabadi (2014), it was shown that a test taker requires more mediation if his or her LPS is low. In the same vein, and similar to Teo (2012), who opposed interactionist DA and regarded providing one-to-one mediation to individual students as a challenging and unmanageable task for many EFL mediators, the present study made use of technology; hence, interventionist DA was utilized as a helpful tool to overcome the aforementioned problems.

To sum up, C-DA, like other forms of DA, achieved the goal of instruction-assessment unity and, in line with Vygotsky’s standpoint, unveiled the learners’ potential abilities. It did so by focusing on their ZPD instead of their ZAD, which would otherwise leave non-gainers permanently undeveloped.

 

CONCLUSION AND IMPLICATIONS

Teo (2014) claimed that the number of wrong answers in the pretest influences the amount of mediation each participant needs, provided that the interactionist approach to DA is used. The present study, however, because it followed the interventionist approach to DA, focused only on the most important areas where most of the participants encountered problems. Therefore, the content of the EP sessions in DA was chosen from the areas that were most problematic for the participants in the pretest; that is, although individual incorrect answers were taken into account, the errors of a single individual did not strongly influence the total amount of mediation provided.

Even though the current study attempted to overcome some of the drawbacks of other studies (e.g., Pishghadam et al., 2011), some limitations remain that were beyond its purposes. For instance, although it was interventionist, it was a small-scale study: only 32 students were examined (to ensure the practicality and feasibility of the C-DA procedure), but in future investigations the number of participants can be increased by broadening the aims of the study and by utilizing new technologies that may be developed by other researchers. The findings of this investigation may prove useful for those who work within the realm of DA and those who are concerned with individuals requiring considerable attention, that is, at-risk or struggling learners. Addressing these issues can help other researchers spread more knowledge across the C-DA domain of research.

In summary, despite its many merits and its attention to ‘at-risk’ individuals, one cannot contend that DA is a miraculous means of teaching and learning that works overnight. A considerable amount of time, energy, and effort is required to achieve the purposes of DA and of its other forms, such as C-DA.

REFERENCES

Ableeva, R. (2010). Dynamic assessment of listening comprehension in second language learning (Unpublished doctoral dissertation). Pennsylvania State University, USA.
Ahmadi, A., & Barabadi, E. (2014). Examining Iranian EFL learners' knowledge of grammar through a computerized dynamic test. Issues in Language Teaching, 3(2), 161-183.
Ajideh, P. (2003). Schema theory-based pre-reading tasks: A neglected essential in the ESL reading class. The Reading Matrix, 3(1), 1-14.
Carney, J. J., & Cioffi, G. (1990). Extending traditional diagnosis: The dynamic assessment of reading abilities. Reading Psychology, 11, 177-192.
Delandshere, G. (2002). Assessment as inquiry. The Teachers College Record, 104(7), 1461-1484.
Ebadi, S. (2014). L2 private speech in online dynamic assessment: A sociocultural perspective. Iranian Journal of Applied Linguistics, 17(1), 49-70.
Ebadi, S. (2016). Mediation and reciprocity in online L2 dynamic assessment. CALL-EJ, 17(2), 16-40.
English, S. (2003). Towards effective reading comprehension through strategy training for EAL middle school learners: a case study with intervention. (Unpublished master's thesis). University of Oxford, England, United Kingdom.
Farr, J. N., Jenkins, J. J., & Paterson, D. G. (1951). Simplification of Flesch reading ease formula. Journal of Applied Psychology, 35(5), 333- 337.
Kletzien, S. B., & Bednar, M. R. (1990). Dynamic assessment for at-risk readers. Journal of Reading, 33(7), 528-533.
Kozulin, A., & Garb, E. (2002). Dynamic assessment of EFL text comprehension of at-risk students. School Psychology International, 23(1), 112-127.
Lantolf, J. P. (2011). The sociocultural approach to second language acquisition: Sociocultural theory, second language acquisition, and artificial L2 development. In D. Atkinson (Ed.), Alternative approaches to second language acquisition (pp. 24-47). New York, NY: Routledge.
Lantolf, J. P., & Poehner, M. E. (2013). The unfairness of equal treatment: Objectivity in L2 testing and dynamic assessment. Educational Research and Evaluation, 19(2-3), 141-157.
Lantolf, J. P., & Poehner, M. E. (2004). Dynamic assessment of L2 development: bringing the past into the future. Journal of Applied Linguistics, 1(2), 49-72.
Lidz, C. S. (1991). Practitioner’s guide to dynamic assessment. New York: Guilford.
Lidz, C. S., & Gindis, B. (2003). Dynamic assessment of the evolving cognitive functions in children. In A. Kozulin, B. Gindis, V. S. Ageyev, & S. M. Miller (Eds.), Vygotsky’s educational theory in cultural context (pp. 99-116). Cambridge: Cambridge University Press.
Mardani, M., & Tavakoli, M. (2011). Beyond reading comprehension: The effect of adding a dynamic assessment component on EFL reading comprehension. Journal of Language Teaching and Research, 2(3), 688-696.
Meade, C. D., & Smith, C. F. (1991). Readability formulas: Cautions and criteria. Patient Education and Counseling, 17(2), 153-158.
Pichert, J. W., & Elam, P. (1985). Readability formulas may mislead you. Patient Education and Counseling, 7(2), 181-191.
Pishghadam, R., & Barabadi, E. (2012). Constructing and validating computerized assessment of L2 reading comprehension. Iranian Journal of Applied Linguistics, 15(1), 73-95.
Pishghadam, R., Barabadi, E., & Kamrood, A. M. (2011). The differing effect of computerized dynamic assessment of L2 reading comprehension on high and low achievers. Journal of Language Teaching and Research, 2(6), 1353-1358.
Poehner, M. E. (2011). Dynamic assessment: Fairness through the prism of mediation. Assessment in Education: Principles, Policy & Practice, 18(2), 99-112.
Poehner, M. E. (2008). Dynamic assessment: A Vygotskian approach to understanding and promoting L2 development. Berlin, Germany: Springer Science Media.
Poehner, M. E. (2005). Dynamic assessment of oral proficiency among advanced L2 learners of French. (Unpublished doctoral dissertation). Pennsylvania State University, University Park.
Poehner, M. E., Zhang, J., & Lu, X. (2014). Computerized dynamic assessment (C-DA): Diagnosing L2 development according to learner responsiveness to mediation. Language Testing, Special Issue,1-21.
Poehner, M. E., & Lantolf, J. P. (2013). Bringing the ZPD into the equation: Capturing L2 development during computerized dynamic assessment (C-DA). Language Teaching Research, 17(3), 1-21.
Poehner, M. E., & Lantolf, J. P. (2005). Dynamic assessment in the language classroom. Language Teaching Research, 9(3), 233-265.
Schneider, E. (1999). Multisensory structured metacognitive instruction: An approach to teaching a foreign language to at-risk students. Frankfurt am Main: Peter Lang Publishing.
Sternberg, R. J., & Grigorenko, E. L. (2002). Dynamic testing. New York: Cambridge University Press.
Summers, R. (2008). Dynamic assessment: Towards a model of dialogic engagement (Unpublished doctoral dissertation). University of South Florida, USA.
Tajeddin, Z., & Tayebipour, F. (2012). The effect of dynamic assessment on EFL learners' acquisition of request and apology. The Journal of Teaching Language Skills, 4(2), 87-118.
Teo, A. (2014). Beyond traditional testing: Exploring the use of computerized dynamic assessment to improve EFL learners' reading. Arab World English Journal, 5(1), 42-58.
Teo, A. (2012). Promoting EFL students’ inferential reading skills through computerized dynamic assessment. Language Learning & Technology, 16(3), 10-20.
Tzuriel, D., & Kaufman, R. (1999). Mediated learning and cognitive modifiability: Dynamic assessment of young Ethiopian immigrant children to Israel. Journal of Cross-Cultural Psychology, 30(3), 359-380.
Vandergrift, L. (2007). Recent developments in second and foreign language listening comprehension research. Language Teaching, 40, 191-210.
Vygotsky, L. S. (1994/1935). The problem of the environment. In R. Van der Veer & J. Valsiner (Eds.), The Vygotsky reader (pp. 338-354). Oxford: Blackwell.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes.  Cambridge, MA: Harvard University Press.
Vygotsky, L.S. (1962). Thought and language. Cambridge, MA: MIT Press.
Wang, L. W., Miller, M. J., Schmitt, M. R., & Wen, F. K. (2013). Assessing readability formula differences with written health information materials: Application, results, and recommendations. Research in Social and Administrative Pharmacy, 9(5), 503-516.