Document Type: Research Paper
Author
Jamileh Rahemi
Department of English Language Teaching, Farhangian University, P.O. Box 14665-889, Tehran, Iran
Abstract
With the growing adoption of digital technologies in education, digital literacy (DL) is essential for both novice and experienced educators. Farhangian University, Iran’s leading teacher education institution, plays a key role in fostering DL and technology integration through internship programs where mentor teachers model instructional practices. Despite its significance, the DL competencies of EFL (English as a Foreign Language) teaching mentees of Farhangian University and their mentor teachers remain underexplored. Grounded in the Technological Pedagogical Content Knowledge (TPACK) and Teacher Digital Competency (TDC) frameworks, this descriptive study examined self-perceptions and cross-evaluations of DL and technology use among EFL mentor teachers and teaching mentees at Farhangian University. Two parallel Likert-scale questionnaires were distributed, through purposive sampling, to 62 female EFL teaching mentees enrolled in practicum course three and 53 female EFL mentor teachers in Isfahan. Self‑perceptions revealed that both mentors and mentees rated themselves most proficient in basic digital tools; mentees reported broader confidence across collaborative and content‑specific technologies, while mentors rated themselves lower across specialized applications. Cross‑evaluations showed that mentors viewed mentees as digitally capable but only moderately effective in classroom integration, whereas mentees perceived their mentors as less digitally literate and infrequent technology users. Both mentors and mentees rated their own competencies higher than those of their counterparts, indicating a self‑enhancement bias and underscoring intergenerational differences in DL perceptions. These findings highlight the need for reciprocal DL development, stronger mentor modeling, and structured digital training within teacher education programs to address intergenerational gaps.
INTRODUCTION
Rapid advancements in information and communication technologies have ushered in the digital era, creating an urgent need for individuals to acquire a broad range of new competencies to remain professionally relevant. Among these, digital literacy (DL) stands out as one of the most critical skills required across nearly all aspects of modern life, including education. In particular, it is vital for 21st-century educators and learners to embed digital competencies into their teaching and learning practices (Soifah et al., 2021).
DL has evolved from basic digital navigation into a comprehensive skillset encompassing various media literacies (Sun & Zou, 2024). It was initially defined as the ability to interpret and utilize information from diverse sources in multiple formats using computers (Gilster, 1997). According to Martin and Grudziecki (2006), DL entails managing, integrating, and synthesizing digital resources to produce new knowledge, generate media content, and engage in effective communication. Cornell University (2009) further expanded the concept by characterizing DL as proficiency in using information technology and the Internet to locate, analyze, use, share, and create online content. This mirrors the American Library Association’s (2013) definition of information literacy, namely the ability to find, evaluate, create, and communicate information using digital technologies, encompassing both cognitive and technical skills. Perhaps, the most concise yet comprehensive definition comes from Eshet-Alkalai (2004), who described DL as a “survival skill in the digital era” (p. 93).
Given the pivotal role of education in shaping a nation’s future, it is essential for both teachers and students to be aware of technological advancements. Language teachers’ pedagogical and DL skills are fundamental to fostering effective, critical, and reflective teaching practices (Qi & Dai, 2023). Likewise, students must also be digitally literate to meet the demands of the modern workforce. Consequently, the integration of technology and pedagogy has become a core expectation in language teacher education and training programs (Hauck & Kurek, 2017).
Building on Shulman’s (1986) concept of Pedagogical Content Knowledge (PCK), Mishra and Koehler (2006) introduced the Technological Pedagogical Content Knowledge (TPACK) framework for understanding how teachers can integrate technology, pedagogy, and content in their instructional design and planning to meet the needs of learners in various educational contexts.
While the TPACK framework centers on the integration of technology into instructional practices, the Teacher Digital Competency (TDC) Framework takes a broader approach, encompassing a wide array of digital skills essential for modern education. Introduced by Falloon (2020), this model promotes a multidimensional understanding of digital competence, which includes not only technical knowledge but also ethical behavior, collaborative engagement, and pedagogical expertise. Rather than viewing DL as a collection of isolated abilities, the TDC framework positions teachers as active participants in complex, tech-rich environments that demand both confidence and critical thinking.
These frameworks are particularly relevant in the context of Farhangian University, Iran’s leading institution for teacher education, where student teachers are expected to develop not only technical proficiency but also the pedagogical and ethical dimensions of digital competence. The university aims to prepare them for their professional careers through a well-rounded curriculum that includes four practicum courses. The practicum experiences are supposed to provide teaching mentees with opportunities to engage in classroom teaching within actual school environments, under the supervision of mentor teachers. The mentors are expected to model effective pedagogy and support the development of core instructional competencies, including digital skills.
However, concerns remain regarding whether mentor teachers actually possess the digital proficiency and technological expertise required for this critical modeling role. Likewise, the extent of digital knowledge among EFL mentees participating in the practicum courses remains unclear. Although Farhangian University’s curriculum includes several technology-oriented courses intended to build foundational computer and digital skills, it is uncertain whether these courses sufficiently foster meaningful DL in student teachers. Moreover, it is unknown to what degree mentees apply the digital competencies acquired during coursework within their practicum settings.
While DL has been widely studied in the context of education, limited research has addressed the digital competencies of EFL mentor teachers and teaching mentees at Farhangian University. To address this gap, the present case study was designed to investigate how EFL teaching mentees, placed in real classroom settings under the supervision of experienced mentors, and their mentor teachers perceived their own levels of DL as well as each other’s digital proficiency. More precisely, the study aimed to examine both self-perceptions and cross-evaluations of DL and technology use among EFL mentor teachers and teaching mentees affiliated with Farhangian University.
Following the skill-based approach employed by Dashtestani and Hojatpanah (2021), DL in this study is operationally defined as the ability to effectively use a range of digital tools (e.g., word processing, email, multimedia, online communication, and educational software), which together represent the technological knowledge component of the TPACK framework. In addition, consistent with the TDC framework, DL here entails competencies such as engaging in safe and ethical technology use, effective online collaboration, data protection, and the continuous refinement of practice through reflection and institutional support.
LITERATURE REVIEW
Theoretical Foundations of the Study
As classrooms become increasingly mediated by technology, teachers must not only possess technical skills but also demonstrate the ability to integrate digital tools meaningfully and effectively into pedagogical practice. The shift to technology-based instruction necessitates robust frameworks that guide teacher preparation and professional growth. Two such models, the TPACK and TDC frameworks, offer complementary lenses through which teachers’ digital readiness can be examined.
Shulman (1986) argued that educators must understand how pedagogy and content knowledge intersect to design strategies that support deep student learning. The TPACK framework, as argued by Mishra and Koehler (2006), expands this foundation by adding technological knowledge as a critical component, recognizing that effective teaching today depends not only on content and pedagogy, but also on the ability to integrate digital tools in instruction. Effective technology integration, based on the TPACK framework, arises not from isolated mastery of digital tools but from the dynamic interplay among these domains. That is, teachers must understand how technology can transform content delivery and pedagogical strategies, tailoring instruction to specific learning contexts. In other words, technology integration is not merely about using tools; rather, it reshapes how concepts are taught and understood, requiring adjustments in both pedagogical and content strategies (Koehler & Mishra, 2005).
The TPACK model has gained prominence in teacher education research due to its holistic approach to instructional design. It emphasizes that technology should not be an add-on but a transformative element that reshapes how content is taught and understood. Furthermore, implementing TPACK in technology-enhanced instruction calls for a context-specific awareness of digital tools, allowing educators to adapt and utilize technology in ways that meet the distinct pedagogical and content demands of varied teaching environments (Mishra & Koehler, 2009). Studies have shown that pre-service teachers often struggle to develop integrated TPACK competencies, particularly when technology training is delivered in isolation from subject-specific pedagogy (Chai et al., 2013).
Falloon (2020) goes beyond the traditional focus on technical skills and literacy, advocating for a more holistic approach to digital competence. This approach emphasizes broadening teachers’ understanding of the competencies necessary to function effectively, safely, and ethically in diverse and increasingly digital environments.
To support this shift, Falloon (2020) introduced the comprehensive TDC framework. Within this framework, teacher educators play a crucial role in implementing its principles through modeling, deliberate planning, and purposeful teaching. Falloon further argues that responsibility for applying the framework extends to all faculty members, who must develop a consistent and well-informed understanding of its purpose, scope, and content.
The framework aligns closely with global benchmarks such as DigCompEdu (Digital Competence Framework for Educators; Redecker & Punie, 2017) and the International Society for Technology in Education (ISTE, 2017) Standards, which are widely recognized frameworks for guiding educators in developing and applying digital competencies in teaching. They highlight core competencies like digital content production, effective online communication, safeguarding data, and employing inclusive teaching technologies. The TDC model emphasizes that digital competence must be continuously nurtured through reflective practice and institutional support systems. In teacher preparation programs, this involves encouraging both personal skill development and collaborative spaces where digital pedagogy can be observed, evaluated, and refined (Dominguez-Gonzalez et al., 2025).
The present study’s focus on self-perceptions and cross-evaluations of DL among mentor teachers and mentees aligns with both TPACK and TDC frameworks, situating individual skill assessment within a broader process of collaborative and continuous professional growth. In other words, the TPACK framework provides the conceptual basis for examining how EFL mentor teachers and teaching mentees perceive and evaluate DL and technology use through the interplay of technological, pedagogical, and content knowledge. At the same time, the TDC framework complements this perspective by underscoring the importance of reflective practice, peer evaluation, and institutional support in sustaining digital competence.
Related Studies
Numerous studies (e.g., Ansyari, 2015; Audrin & Audrin, 2022; Cervetti et al., 2006; Collier et al., 2013; Dashtestani, 2014a, 2014b, 2014c, 2014d; Dashtestani & Hojatpanah, 2020; Koc & Bakir, 2010; Lei, 2009; Wright & Wilson, 2011) have centered on digital competency within the realm of education in general, and EFL teaching and learning in particular. In fact, the integration of technology into pedagogical practices by EFL teachers has been a longstanding area of interest for researchers. However, as Young (2003) emphasizes, within the context of EFL instruction, the impact of technology on student learning outcomes remains somewhat inconsistent, but its appropriate and effective integration can enhance the authenticity and meaningfulness of learning.
Dashtestani (2014a), in a qualitative research study, explored the views of Iranian EFL teacher trainers on the significance of computer literacy for language teachers. The findings revealed that teacher trainers regarded computer literacy as a crucial aspect of teaching knowledge. Teacher trainers play a key role in preparing EFL teachers for the effective implementation of technology in the Iranian EFL context. Therefore, it is essential for other EFL authorities to acknowledge computer literacy as an integral part of teaching knowledge to enhance EFL teachers’ computer literacy levels.
Ansyari (2015), aiming to develop and evaluate a professional development program for integrating technology in an Indonesian University’s English language teaching setting, examined the characteristics of this program in relation to English lecturers’ development of TPACK. The results showed that the professional development arrangement for technology integration improves English lecturers’ TPACK. Key aspects of an effective professional development program include a knowledge base, a thoughtful design approach, active engagement, authentic learning experiences in a collaborative environment, curriculum coherence, an intensive program schedule, guidance, support, and feedback.
Inspired by the existing gap in the research on Iranian EFL pre-service teachers’ TPACK, Maghsoudi (2023) assessed the relative contribution of each TPACK component to the current TPACK profile of student teachers at Farhangian University, who were expected to begin teaching within a year. Findings indicated that the components ranked from strongest to weakest in impact as follows: Content Knowledge (CK), Pedagogical Knowledge (PK), Technological Pedagogical Knowledge (TPK), Technological Knowledge (TK), Technological Content Knowledge (TCK), and Pedagogical Content Knowledge (PCK).
Durriyah and Zuhdi (2018) highlighted a persistent gap between the availability of digital technologies in EFL classrooms and their practical application by teachers, despite these tools being deeply embedded in students’ daily lives. Their study, conducted at a state Islamic University in Jakarta, explored Indonesian student teachers’ perceptions and use of digital technologies during a semester-long course. Although participants actively engaged with various tools and integrated them into junior high textbook units, many remained reluctant to apply them for literacy instruction. These findings underscore both the pedagogical potential of digital technologies and the pressing need for more intentional, competency-based training in teacher education programs.
Soleimani et al. (2017) investigated the computer information and multimedia literacy of 255 Iranian EFL teachers using a five-point Likert-scale questionnaire. The results showed that the teachers’ multimedia and information literacy levels ranged from low to moderate, highlighting the need to enhance teacher training courses and better prepare them for implementing technology in actual language teaching scenarios.
Cote and Milliner (2021) examined the digital literacies of English teachers at a private Japanese University. They assessed various aspects, including ownership and accessibility of computers, ability to perform electronic tasks, personal and professional use of computers, Computer-Assisted Language Learning (CALL) training, and interest in CALL. The study found that teachers in the English program were confident in using digital technology to support their teaching both inside and outside the classroom. However, there was a reported lack of digital training in English teacher education programs. This study emphasized the need for targeted DL training to ensure that teachers are well-equipped to integrate technology into their teaching.
Mardiah et al. (2021) in their research focused on senior high school students’ awareness of DL. The study, analyzing how students accessed and managed digital information, indicated that both teachers and students are capable of enhancing learning materials and creating a conducive learning atmosphere through information and communication technology (ICT). Moreover, all students are adept at using digital tools to find information and references to support their learning.
The research conducted by Heidari and Tabatabaee-Yazdi (2021) aimed to study the DL of Iranian EFL teachers and students to identify any significant differences between them. A total of 150 Iranian EFL teachers and 175 Iranian EFL students participated in the study. They employed a 181-item standardized measure assessing three crucial digital skills: Information Literacy (IL), Media Literacy (ML), and Information Communication Technology Literacy (ICTL). The results indicated that Iranian EFL teachers scored higher than students in all three constructs of IL, ML, and ICTL, with ICTL having the highest mean score. The findings can serve as a reference for educational planners and decision-makers to emphasize the importance of digital skills at the university level.
One of the prominent studies in the context of teacher education in Iran is that of Dashtestani and Hojatpanah (2020), who explored the perspectives of junior high school students, junior high school teachers, and directors of the Ministry of Education of Iran on DL and associated issues. While interviews indicated that both students and teachers believed the students had an acceptable level of DL, the questionnaire results showed that students’ DL levels ranged from low to moderate. They also found that both teachers and students believed the junior high school students primarily used technology for recreational and non-educational purposes and did not utilize a wide range of computer applications and software tools. Interviews with the directors of the Ministry of Education revealed a lack of consensus on issues related to the DL of junior high school students and the absence of clear plans by the Ministry to promote students’ digital competencies.
Rahimi (2023), in a comprehensive study of 863 Iranian EFL teachers, identified key variables such as TPACK proficiency, positive attitudes toward ICT, and access to tools as significant predictors of DL. These findings support the notion that teachers’ DL must be cultivated through both technological readiness and pedagogical design, echoing global patterns observed by Falloon (2020).
Zhang (2023), in a large-scale study of 2,110 EFL teachers in China, found that DL development was shaped more significantly by attitudinal and access-related variables than demographic ones. Similarly, Feng and Sumettikoon (2024) revealed that Chinese EFL teachers possessed moderate DL across five dimensions, but lacked depth in integration strategies. These findings underscore a global trend: despite widespread access to technology, teachers may not fully leverage digital tools without targeted support and training.
Hidayat et al. (2023) examined Indonesian EFL teachers’ engagement with DL in sociocultural professional development settings. Their results showed limited awareness of culturally responsive DL strategies. Kanchai (2021) described how Thai educators acquired digital competencies amid the pandemic, noting the critical role of peer collaboration and institutional support. These studies highlight the contextual nature of DL and the need for frameworks like DigCompEdu and ISTE to guide its application.
Fathali et al. (2024) addressed the psychological dimension of DL, investigating anxiety among EFL teachers using virtual classroom software (BigBlueButton). Teachers with higher DL exhibited lower anxiety and better virtual classroom management, reinforcing the importance of confidence and fluency in digital environments.
Zheng et al. (2025) examined the relationship between DL and online learning power among EFL undergraduates, focusing on the influence of perceived teacher support. A questionnaire survey was conducted using three scales related to DL, online learning power, and perceived teacher support, targeting EFL undergraduates at a comprehensive university in eastern China. The findings indicated that DL among undergraduates affects their online learning power through perceived teacher support. This underscores the importance of strengthening teacher support, particularly in improving teachers’ acceptance and beliefs regarding technology.
Barjesteh et al. (2025), studying DL levels of novice and experienced Iranian EFL teachers, revealed comparable levels of DL regardless of teaching experience. This finding challenges assumptions that longer tenure correlates with stronger digital competence, and instead points to the influence of institutional access and training. Taken together, these studies illustrate the growing recognition of DL as a multi-dimensional and context-sensitive competency in EFL education.
PURPOSE OF THE STUDY
While Iranian research is beginning to address DL in teacher education, gaps remain, particularly in self-perceptions vs. cross-evaluations and generational differences between EFL mentor teachers and teaching mentees at Farhangian University. Understanding self-perceptions and cross-evaluations is crucial for preparing future teachers to integrate digital tools effectively into their classroom practices. In addition, insights into mentor teachers’ DL will support university stakeholders in making informed decisions regarding curriculum design, professional development, and resource allocation to improve digital readiness across teacher education programs.
Accordingly, the present case study explored how EFL mentees from Farhangian University and high school mentor teachers in Isfahan province perceived their own DL and the digital proficiency of their counterparts. The research was then guided by the following questions:
- How do Iranian EFL mentor teachers perceive their own DL levels?
- How do Iranian EFL mentees perceive their own DL levels?
- How do mentor teachers evaluate mentees’ DL and their use of technology in terms of frequency and effectiveness during practicum courses, as indicators of mentees’ TPACK and TDC?
- How do mentees evaluate mentor teachers’ DL and their use of technology in terms of frequency and effectiveness in instructional practices, as indicators of mentors’ TPACK and TDC?
METHOD
Participants
This study adopted a case study design because it investigated the self-perceptions and cross-evaluations of DL and technology use among a specific group of EFL mentor teachers and teaching mentees within a defined institutional context.
The study was conducted during the first semester of the academic year 2024-2025 at Farhangian University, Fatemeh Zahra Branch, located in Isfahan. Participants were selected using purposive sampling, as the study required individuals with specific characteristics directly relevant to its objectives. The sample consisted of 62 female EFL student teachers (mentees) and 53 female EFL mentor teachers. Mentees who were enrolled in Practicum Course three during their seventh semester, had successfully completed prerequisite practicum courses, and were actively engaged in supervised classroom teaching were considered eligible. Mentor teachers were included if they had prior experience supervising mentees, possessed a B.A. or M.A. in English Language Teaching, Translation, or literature, and had a minimum of five years’ teaching experience at the junior or senior high school level. Those who did not meet these criteria or declined participation were excluded. This criterion‑based selection ensured that the sample was appropriately aligned with the study’s focus on self‑perceptions and cross‑evaluations of DL in EFL practicum settings. Table 1 presents the distribution of mentors.
Table 1. Demographic Distribution of Mentor Teachers
| Participant Type | Teaching Experience (Years) | B.A. Degree | M.A. Degree | Total Participants |
|---|---|---|---|---|
| Mentor Teachers | 5–10 | 18 | 11 | 29 |
| | 11–21 | 19 | 5 | 24 |
| Total | | | | 53 |
Instrumentation
Two parallel 24-item Likert-scale questionnaires were developed, one for mentor teachers and one for mentees, to address the study’s objectives. Each instrument consisted of two sections: a self-perception section and a cross-evaluation section. The first section comprised 20 items adapted from the validated and widely used questionnaire developed by Dashtestani and Hojatpanah (2021). While the original instrument contained four sections, only its first part was employed in this study since this section directly operationalizes participants’ self‑perceptions of digital literacy in using 20 essential digital tools, corresponding to the first two research questions of the present study. In fact, this operationalization reflects the technological knowledge component of the TPACK framework and the broader competencies emphasized in the TDC framework. The remaining sections of the original instrument, which addressed constructs beyond the scope of the present investigation, were excluded from this study. Their exclusion ensured methodological coherence and preserved alignment between the data collected and the study’s stated objectives. Responses to the items of the self-perception section were rated on a five-point Likert scale ranging from 1 (Not proficient) to 5 (Totally proficient).
The cross‑evaluation section comprised four focused items, each aligned with a core construct of the study: DL, frequency of technology use, and effectiveness of technology integration. This section was researcher‑designed to capture reciprocal perceptions, with mentors and mentees evaluating their counterparts’ DL, frequency of technology use, and perceived effectiveness of instructional technology integration. Specifically, the mentor teachers’ version addressed their views on mentees’ DL level and mentees’ use of technology in terms of frequency and effectiveness in practicum settings. Conversely, the mentee version addressed their perceptions of their mentor teachers’ DL and their use of technology in terms of frequency and effectiveness in instructional practices. This limited number (four items) was intentional, ensuring parsimony and direct correspondence with the research objectives. Importantly, evaluations were grounded in authentic mentoring relationships: mentees had attended their mentors’ classes, and mentors had supervised and observed their mentees’ teaching. Thus, responses reflected informed judgments based on actual pedagogical interactions rather than speculation.
In both cases, ‘frequency of technology use’ was operationally defined as each counterpart’s perceived regularity of the other party’s incorporation of digital technologies in EFL teaching/learning activities. Mentees rated how frequently mentor teachers integrated digital tools into instructional practices; mentor teachers rated how frequently mentees employed digital tools in their learning tasks and classroom participation. Ratings used a five‑point Likert scale: 1 = Never used, 2 = Rarely used, 3 = Sometimes used, 4 = Used a lot, 5 = Always used. Also, ‘effectiveness’ was operationally measured through participants’ ratings on a five‑point Likert scale, reflecting their perceptions of the degree to which digital technology use by the other party contributed to instructional or learning success: 1 (Not effectively at all); 2 (A little effectively); 3 (Rather effectively); 4 (Effectively); and 5 (Totally effectively). Mean scores were calculated for each item. For effectiveness, higher means indicated greater perceived success in applying digital technologies to instructional objectives, while lower means reflected weaker success. For frequency, higher means represented more regular use of digital technologies, whereas lower means indicated less frequent use.
Following the development of the two questionnaires, item content was reviewed by a panel comprising two Educational Technology professors and two EFL faculty members from Farhangian University, each with extensive experience supervising student teachers in practicum programs. Minor revisions were made to the second section based on the panel’s suggestions and feedback, ensuring improved clarity and relevance.
To evaluate the internal consistency of the questionnaires, Cronbach’s alpha coefficients were calculated for each of the two sections in both instruments. The mentor teachers’ questionnaire yielded reliability coefficients of 0.91 and 0.83 for its first and second sections, respectively. Similarly, the mentees’ instrument produced alpha values of 0.92 and 0.87, indicating a high level of internal consistency.
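For readers interested in the computation, the sketch below shows one common way of obtaining Cronbach’s alpha from item-level responses. It is a minimal illustration only: the study itself used SPSS, and the sample data and variable names here are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert responses."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical illustration: 5 respondents answering 4 items on a 1-5 scale
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 2))
```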
Data Collection Procedure
The distribution of the questionnaires was carried out digitally using Google Forms and disseminated through Eita, a widely used social messaging platform in Iran. This method facilitated easy and timely access to participants, allowing both mentor teachers and student teachers to respond from their respective locations. Prior to data collection, informed consent forms were electronically delivered to all prospective participants. These forms outlined the purpose of the study, assured confidentiality and anonymity of responses, and emphasized the voluntary nature of participation.
Data Analysis
The questionnaire responses were analyzed using descriptive statistics to summarize participants’ self‑reported perceptions. For each survey item, the mean and standard deviation were calculated to reflect central tendency and response variability, as illustrated in the sketch that follows. All analyses were conducted using SPSS Statistics (Version 25), which provided a reliable and rigorous platform for organizing, computing, and reviewing the quantitative data. Given the exploratory nature of the study and its focus on capturing perceptions rather than testing group differences, inferential statistical analyses were not conducted. Instead, the descriptive results offered a clear overview of participants’ perceptions and evaluations, which were systematically interpreted through the lenses of the TPACK and TDC frameworks. This approach ensured that the findings were both methodologically sound and theoretically grounded, building toward a reasoned and evidence‑based conclusion.
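As a sketch of this descriptive summary (the item names and data below are hypothetical; the actual analysis was run in SPSS), per-item means and standard deviations of the kind reported in Tables 2 to 4 can be computed as follows:

```python
import pandas as pd

# Hypothetical Likert responses (1-5); rows = participants, columns = questionnaire items
data = pd.DataFrame({
    "word_processing": [4, 3, 4, 3, 4],
    "email":           [3, 4, 3, 2, 4],
    "spreadsheet":     [2, 1, 2, 3, 2],
})

# Mean and standard deviation per item
summary = data.agg(["mean", "std"]).T.round(2)
print(summary)
```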
RESULTS
The results of this study are presented in accordance with the four research questions guiding the investigation. Research Questions 1 and 2 were addressed through the first section of the questionnaire, which included 20 Likert-scale items targeting both EFL mentor teachers’ and student teachers’ self-perceived DL in using 20 essential educational applications and tools. Research Questions 3 and 4 were explored using the second section of the instrument, which comprised four additional Likert-scale items designed to capture mentors’ and mentees’ cross-evaluations of each other’s DL levels, frequency of technology use, and effectiveness of technology integration in instructional practices. The following subsections present the findings for each research question in turn.
Research Questions 1 & 2
As previously outlined, the first research question focused on investigating Iranian EFL mentor teachers’ self-perceived DL levels, while the second one examined the perceptions of EFL teaching mentees regarding their own DL. These two self-perception questions were addressed in the first section of each group’s questionnaire, which included 20 Likert-scale items targeting proficiency in key educational technologies. To ensure systematic interpretation, mean scores were mapped onto the defined Likert categories using midpoint thresholds. For example, values between 1.00–1.49 were interpreted as ‘not proficient’, 1.50–2.49 as ‘a little proficient,’ and so forth. This criterion is directly anchored in the operational definitions of the scale points and follows established practice in educational research (Boone & Boone, 2012). Thus, a mean of 1.43 was categorized as ‘not proficient,’ while a mean of 1.60 would be categorized as ‘a little proficient.’
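A minimal sketch of this midpoint-threshold mapping is given below. It is illustrative only: the scale labels follow the proficiency anchors reported for the questionnaire items, and the same logic was applied to the frequency and effectiveness scales.

```python
def likert_category(mean_score: float) -> str:
    """Map an item mean onto the five scale labels using midpoint thresholds."""
    labels = ["Not proficient", "A little proficient", "Rather proficient",
              "Proficient", "Totally proficient"]
    for scale_point, label in zip(range(1, 6), labels):
        # 1.00-1.49, 1.50-2.49, 2.50-3.49, 3.50-4.49, 4.50-5.00
        if mean_score < scale_point + 0.5:
            return label
    return labels[-1]

print(likert_category(1.43))  # -> Not proficient
print(likert_category(1.60))  # -> A little proficient
print(likert_category(3.79))  # -> Proficient
```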
Table 2. Mentors’ and Mentees’ Self-Perceptions of Digital Literacy Levels

| DL subcategories | Mentor Teachers (M) | Mentor Teachers (SD) | Teaching Mentees (M) | Teaching Mentees (SD) |
|---|---|---|---|---|
| 1. Word processing | 3.51 | 0.56 | 3.61 | 0.73 |
| 2. E-mail | 3.44 | 1.19 | 3.27 | 1.16 |
| 3. World Wide Web | 3.53 | 1.26 | 3.24 | 0.97 |
| 4. Language learning databases | 3.14 | 1.07 | 2.76 | 0.94 |
| 5. Spreadsheet | 1.93 | 1.51 | 2.00 | 1.12 |
| 6. Graphic software tools | 2.14 | 1.52 | 2.89 | 0.93 |
| 7. Multimedia (audio & video) | 3.07 | 1.35 | 2.78 | 0.91 |
| 8. Language learning software | 2.47 | 1.47 | 2.71 | 1.01 |
| 9. Concordancer | 1.29 | 1.46 | 1.84 | 1.06 |
| 10. Blogging | 2.07 | 1.12 | 2.25 | 1.04 |
| 11. Wiki | 2.29 | 1.18 | 3.02 | 1.13 |
| 12. Online discussion group | 2.79 | 1.44 | 2.76 | 1.15 |
| 13. Text chatting | 3.29 | 1.46 | 3.54 | 1.00 |
| 14. Voice chatting | 3.14 | 1.53 | 3.35 | 1.10 |
| 15. Video conferencing | 2.87 | 1.26 | 2.92 | 1.17 |
| 16. Computer games | 2.43 | 1.52 | 2.93 | 1.15 |
| 17. Academic social network sites | 2.51 | 1.24 | 2.85 | 1.24 |
| 18. Non-academic network sites | 2.86 | 1.14 | 2.87 | 1.00 |
| 19. Podcasts | 2.17 | 1.26 | 2.86 | 1.05 |
| 20. PowerPoint | 3.79 | 1.26 | 3.86 | 0.94 |
The values presented in Table 2 show how mentor teachers and mentees rated their own DL levels across twenty subcategories. Both groups gave the highest ratings to word processing (Mentors: M = 3.51, SD = 0.56; Mentees: M = 3.61, SD = 0.73) and PowerPoint presentation software (Mentors: M = 3.79, SD = 1.26; Mentees: M = 3.86, SD = 0.94), indicating they saw themselves as ‘proficient’ in these subcategories of DL.
Mentor teachers showed relatively low self-perceptions of their DL levels, ranging from ‘not proficient’ to ‘a little proficient’, in nine specific applications and tools. They reported lower scores in spreadsheet software (M = 1.93, SD = 1.51), graphic software tools (M = 2.14, SD = 1.52), language learning software (M = 2.47, SD = 1.47), concordancers (M = 1.29, SD = 1.46), blogging (M = 2.07, SD = 1.12), wikis (M = 2.29, SD = 1.18), computer games (M = 2.43, SD = 1.52), academic social network sites (M = 2.51, SD = 1.24), and podcasts (M = 2.17, SD = 1.26).
Other subcategories received moderate ratings, such as E-mail (M = 3.44, SD = 1.19), World Wide Web (M = 3.53, SD = 1.26), language learning databases (M = 3.14, SD = 1.07), multimedia tools (M = 3.07, SD = 1.35), online discussion platforms (M = 2.79, SD = 1.44), text chatting (M = 3.29, SD = 1.46), voice chatting (M = 3.14, SD = 1.53), video conferencing (M = 2.87, SD = 1.26), and non-academic social networking sites (M = 2.86, SD = 1.14), implying mentor teachers perceived themselves as ‘almost proficient’ in these tools. However, no item reached the ‘totally proficient’ level.
The mentees reported low scores in spreadsheet software (M = 2.00, SD = 1.12), concordancers (M = 1.84, SD = 1.06), and blogging platforms (M = 2.25, SD = 1.04). However, they rated themselves as ‘almost proficient’ in a wider array of technologies, including graphic software tools (M = 2.89, SD = 0.93), wikis (M = 3.02, SD = 1.13), academic social networks (M = 2.85, SD = 1.24), podcasts (M = 2.86, SD = 1.05), World Wide Web (M = 3.24, SD = 0.97), language learning databases (M = 2.76, SD = 0.94), multimedia tools (M = 2.78, SD = 0.91), language learning software (M = 2.71, SD = 1.01), online discussion platforms (M = 2.76, SD = 1.15), text chatting (M = 3.54, SD = 1.00), voice chatting (M = 3.35, SD = 1.10), video conferencing (M = 2.92, SD = 1.17), computer games (M = 2.93, SD = 1.15), and non-academic social networks (M = 2.87, SD = 1.00). Mentees gave higher ratings to e-mail (M = 3.27, SD = 1.16) and World Wide Web (M = 3.24, SD = 0.97) compared to some other tools. Similar to the mentors’ responses, no item reached the ‘totally proficient’ level, indicating that neither group rated itself as ‘totally proficient’ in any DL subcategory.
Research Questions 3 & 4
Research Questions 3 and 4 aimed to explore how the two participant groups evaluated each other’s DL levels, as well as the frequency and effectiveness of technology use in instructional and learning practices. These reciprocal perceptions were investigated using the second section of the respective questionnaires, which comprised four Likert-scale items tailored to capture evaluative feedback across these dimensions. The same interpretive procedure used for the items of the first section of the questionnaire was also applied to the constructs ‘frequency of use’ and ‘effectiveness of using digital literacy tools’ in the cross-evaluation sections of the instruments. In both cases, mean scores were mapped onto the corresponding Likert categories using the same midpoint thresholds, ensuring consistency in interpretation across all dimensions of the study.
Table 3 presents the descriptive statistical findings derived from responses provided by the mentor teachers. That is, how they evaluated mentees’ DL and their use of technology in terms of frequency and effectiveness during practicum courses (research question 3).
Table 3. Mentor Teachers’ Evaluation of Mentees’ DL and Technology Use
| Items | M | SD |
|---|---|---|
| 21. How would you rate your mentee’s level of digital literacy? | 3.71 | 0.78 |
| 22. How frequently does your mentee incorporate digital technologies into her teaching practices? | 2.97 | 0.66 |
| 23. How effectively does your mentee apply digital technologies in her instructional activities? | 2.87 | 0.92 |
| 24. How would you evaluate your own digital literacy level in comparison to that of your mentee? | 3.76 | 0.88 |
Question 21 asked mentor teachers to rate their mentees’ overall DL on a five-point Likert scale, ranging from 1 to 5 (1 = Not proficient; 2 = A little proficient; 3 = Rather proficient; 4 = Proficient; 5 = Totally proficient). The mean score was M = 3.71 with an SD = 0.78, indicating a relatively high and consistent perception, meaning most mentors viewed mentees as ‘proficient’ in DL.
Question 22 assessed, from mentors’ perspectives, how frequently mentees were perceived to incorporate digital technologies in teaching practices. The scale ranged from 1 to 5 as follows: 1 (Never used); 2 (Rarely used); 3 (Sometimes used); 4 (Used a lot); 5 (Always used). The score (M = 2.97, SD = 0.66) suggests that mentors believed mentees used technology ‘sometimes’, with responses showing low variability across participants.
Question 23 focused on the perceived effectiveness of mentees’ technology use during instruction from mentors’ views. Responses were rated on a five-point scale from 1 to 5 as follows: 1 (Not effectively at all); 2 (A little effectively); 3 (Rather effectively); 4 (Effectively); and 5 (Totally effectively). The mean (M = 2.87, SD = 0.92) points to a perception of ‘rather effective use’, though the higher SD reflects greater diversity in mentors’ evaluations.
Finally, Question 24 invited mentors to compare their own DL levels with those of their mentees. The resulting score of M = 3.76, SD = 0.88, shows that mentors considered themselves ‘proficient’, with a slightly stronger self-assessment than their evaluation of the mentees.
To sum up, mentor teachers regarded their mentees as generally proficient in DL and moderately effective in incorporating technology in their teaching practices. They also rated their own proficiency favorably in comparison. These findings underscore a mutual acknowledgment of basic competence while also highlighting the need for more frequent and integrated use of digital technologies in instructional practice.
Table 4 presents the descriptive statistical findings derived from the responses of the student teachers assessing mentor teachers’ DL and their use of technology in terms of frequency and effectiveness in instructional practices (research question 4).
Table 4. The Mentees’ Evaluation of Mentors’ DL and Technology Use
| Items | M | SD |
|---|---|---|
| 21. How would you rate your mentor teacher’s digital literacy levels? | 3.42 | 0.93 |
| 22. How frequently does your mentor incorporate digital technologies into her teaching practices? | 2.85 | 0.86 |
| 23. How effectively does your mentor apply digital technologies in her instructional activities? | 2.83 | 1.02 |
| 24. How would you evaluate your own digital literacy level in comparison to that of your mentor teacher? | 3.98 | 0.98 |
Question 21 asked mentees to assess their mentor teachers’ DL levels using a five-point Likert scale (1 = Not proficient to 5 = Totally proficient). The average rating was M = 3.42, with an SD = 0.93, indicating that mentees generally considered their mentors to be ‘almost proficient’ with some variation in responses.
Question 22 addressed, from the mentees’ perspective, how frequently mentors used digital applications or tools, rated on a five-point Likert scale ranging from 1 to 5 as follows: 1 (Never used); 2 (Rarely used); 3 (Sometimes used); 4 (Used a lot); 5 (Always used). The mean score was M = 2.85, with an SD = 0.86, suggesting that mentors ‘sometimes’ integrated technology into their instructional practice, with relatively consistent perceptions among mentees.
Question 23 examined, from the mentees’ viewpoint, how effectively mentors used technology in their teaching practices, rated on a five-point Likert scale ranging from 1 to 5 as follows: 1 (Not effectively at all); 2 (A little effectively); 3 (Rather effectively); 4 (Effectively); and 5 (Totally effectively). Mentees provided a mean rating of M = 2.83, with an SD = 1.02, reflecting that mentors were viewed as ‘rather effective’, although responses varied to a greater extent than in the previous item.
Finally, question 24 asked mentees to compare their own DL level to that of their mentor teachers. With a mean score of M = 3.98 and an SD = 0.98, mentees generally viewed themselves as ‘proficient’ relative to their mentors, and this item showed moderate variability across responses.
DISCUSSION
To the researcher’s knowledge, this is the first study to examine self-perceptions and cross-evaluations of DL among mentor teachers and teaching mentees in the institutional context of Farhangian University, limiting direct empirical comparisons. However, the findings provide meaningful insights when viewed through the study’s theoretical lenses (TPACK and TDC) and the broader literature on EFL digital pedagogy.
As for the groups’ self-perceptions of their own DL, findings revealed that both mentor teachers and mentees rated themselves as ‘moderately proficient’, particularly with conventional applications such as word processing, email, and PowerPoint. However, mentor teachers expressed low self-perceptions in several pedagogical and specialized tools, including concordancers, academic social networks, and CALL-specific applications. Mentees, by contrast, demonstrated broader confidence across a more diverse range of advanced tools such as wikis, graphic software, podcasts, and online discussion platforms, though neither group rated themselves as ‘fully proficient’ in any category.
These results mirror previous research indicating that EFL educators often possess surface-level digital skills, feeling more confident with general-purpose digital tools while struggling with specialized tools and pedagogical technologies (Milliner & Cote, 2018). Similarly, Dashtestani (2014c) noted insufficient DL among Iranian EFL teachers for effective CALL integration. Ng’s (2012) multidimensional model reinforces this notion, arguing that DL should include technical, cognitive, and socio-emotional skills, domains that appear underdeveloped among the mentor group in the current study.
The finding that both groups fell short of rating themselves as ‘totally proficient’ in any category aligns with Falloon’s (2020) TDC framework, which highlights the need to move beyond basic technical literacy toward pedagogically meaningful digital competence. That is, this pattern underscores the need for professional development targeting instructional technology. Falloon emphasizes the importance of pedagogical depth in digital competence, while Valizadeh (2024) advocates for curriculum revisions and institutional support to help pre-service EFL teachers develop the DL required for modern classrooms. Without such initiatives, both mentor and mentee teachers may struggle to leverage digital tools effectively, limiting their ability to foster dynamic, learner-centered environments.
Interestingly, these results diverge from Heidari and Tabatabaee-Yazdi (2021), who reported that Iranian EFL teachers typically outperform students across DL domains. Such discrepancies may reflect contextual differences in institutional emphasis or access to training resources. Zhang (2023) similarly argued that enhancing teachers’ digital competence requires not only targeted professional development but also systemic reforms to improve infrastructure, resource availability, and pedagogical training. As Zhang aptly stated, “the need to attract, empower, and retrain teachers before and during service should be given serious attention” (p. 9).
Taken together, these findings underscore the necessity of designing specialized professional development programs that address both technical and pedagogical dimensions of DL. Without such initiatives, mentor and mentee teachers may continue to rely on basic tools, missing opportunities to enrich EFL instruction through innovative and advanced digital practices.
Regarding cross-evaluations and the frequency and effectiveness of technology use, both groups showed relatively consistent perceptions. Mentor teachers rated their mentees as ‘proficient,’ and mentees assessed their mentors as ‘almost proficient.’ However, technology use in instructional practice was evaluated by both parties as only ‘sometimes’ implemented and ‘rather effective,’ indicating limited integration and impact. This restricted frequency of technology application may be explained by underlying factors such as teachers’ perceptions of their own efficacy. In line with this, Naderi et al. (2023) emphasized that teachers’ efficacy beliefs and technology self-efficacy play vital roles in fostering positive attitudes and enabling successful technology integration in EFL teaching, suggesting that limited self-efficacy could be a key reason for the partial adoption observed in our study.
Interpreting the results related to the twenty DL subcategories through the TPACK and TDC frameworks in both groups, high ratings in word processing and presentation software reflect competence in basic productivity tools, corresponding to TK in TPACK (Mishra & Koehler, 2006) and Information/Content Creation in TDC (Falloon, 2020). Moderate ratings in communication tools such as e‑mail, chat, and video conferencing indicate partial development of TPK and communication competence. In contrast, low ratings in language‑specific applications (concordancers, databases, language learning software) highlight a gap in TPACK proper and Pedagogical Digital Competence. Social networking and multimedia tools (blogs, wikis, podcasts, academic/non‑academic networks) also received modest ratings, pointing to limited confidence in collaborative and creative practices, which align with TPK and the Communication/Problem‑Solving dimensions of TDC (Falloon, 2020; Mishra & Koehler, 2006).
Overall, both mentors and mentees demonstrate stronger proficiency in general digital skills than in pedagogical or subject‑specific applications, underscoring the need for targeted training in pedagogical digital literacy. In fact, while basic TK appears present, its merger with PK and CK remains incomplete. Teachers seem to understand and value digital tools but struggle with embedding them into instructional frameworks, a trend consistent with findings from Cervetti et al. (2006), who advocate for “learning about, through, and with technology” (p. 383) to foster meaningful integration.
Barriers to DL development, identified in both this study and prior work by Dashtestani and Hojatpanah (2021), include time constraints, rigid curricula, and insufficient institutional support. These structural obstacles may limit teachers’ ability to translate awareness into action, a concept explored in Davis’s (1989) Technology Acceptance Model, which posits that technology adoption is driven not only by perceived usefulness, but also perceived ease of use and contextual readiness. That is, confidence and usability shape teachers’ willingness to adopt technology. Addressing these issues in pre- and in-service training could help close the implementation gap and encourage more effective digital pedagogy.
The study also reflects the generational digital gap noted by Lisenbee (2016), who characterizes younger learners as digital natives, typically more fluent due to early engagement with technology. In contrast, older teachers, especially those trained before the digital shift, are characterized as digital immigrants who require deliberate pedagogical transformation. Similarly, in this study, mentees, as digital natives, demonstrated more versatility and confidence in using educational technologies. This is supported by Zhang (2023), who observed that younger EFL educators often exhibit higher DL due to increased exposure, more favorable attitudes toward technology, and better access to digital environments. Likewise, Raeisi et al. (2023) highlight that DL development is closely tied to contextual factors such as institutional support, curriculum design, and opportunities for authentic digital practice. These findings underscore the importance of equipping mentor teachers with ongoing professional development to ensure that they can effectively model and support digital pedagogy in practicum settings.
An intriguing yet contradictory finding of this study was that mentor teachers rated their own DL levels as ‘proficient’ when compared to those of their mentees. This stands in contrast to their responses in section one of the questionnaire, where they expressed low self‑perceptions regarding their ability to use a broad range of educational technologies. At the same time, mentees also rated their own DL levels as higher than those of their mentors. Thus, both groups reported lower proficiency for the other party when cross‑evaluating, while positioning themselves as more digitally literate. This reciprocal inconsistency suggests that both mentors and mentees may hold misconceptions or exaggerated perceptions of their own digital competencies relative to others, underscoring the subjective nature of self‑assessment and the need for complementary evaluation methods.
The presence of self‑enhancement bias has important implications when viewed through the TPACK and TDC frameworks. Inflated self‑perceptions may obscure actual gaps in the participant’s TPACK and TDC, particularly in areas such as language‑specific tools and collaborative technologies. When teachers overestimate their digital literacy, they may assume competence in integrating technology into pedagogy without recognizing the need for further training. This highlights the importance of incorporating objective assessments and structured feedback mechanisms into teacher education programs, ensuring that confidence is aligned with demonstrated ability and that professional development addresses both basic general skills and subject‑specific digital practices.
The self-enhancement bias observed in this study has been documented in prior research. Kahveci (2021), for example, reported that foreign language teachers’ self-efficacy in digital contexts did not always align with their demonstrated DL, indicating a gap between confidence and competence. These findings underscore the importance of objective assessments and reflective practices in accurately gauging DL levels. Moreover, Zhang (2023) emphasized that without structured feedback or benchmarking, educators may rely on subjective impressions that do not reflect their actual capabilities. This underscores the importance of continuous reflective practice and diagnostic tools to help educators better assess their DL capabilities.
To take care of the subjectivity issue in this study, however, several safeguards were employed to strengthen the validity of these data. The questionnaire items were drawn from a validated instrument and reviewed by experts to ensure contextual relevance, and high reliability coefficients confirmed internal consistency. Moreover, the cross‑evaluation design anchored participants’ judgments in authentic practicum experiences, as mentors observed mentees’ teaching and mentees attended mentors’ classes.
CONCLUSION AND IMPLICATIONS
This study investigated self-perceptions and cross-evaluations of digital literacy among mentor teachers and student teachers at Farhangian University, framed within the TPACK model (Mishra & Koehler, 2006) and the TDC framework (Falloon, 2020). The study confirms core assumptions of the TPACK and TDC frameworks by showing stronger proficiency in basic technological knowledge than in pedagogical or subject‑specific applications, while extending them to account for self‑enhancement bias and intergenerational differences in digital literacy perceptions within teacher education.
To pave the way for the concluding analysis, a concise blend of findings from self-evaluations (research questions 1 & 2) and cross-evaluations (research questions 3 & 4) is first presented. Then, the main conclusions are drawn, followed by final reflections on pedagogical implications, study limitations, and suggested directions for future research.
Both groups rated themselves as most proficient in foundational digital tools. Mentor teachers showed lower self-ratings across nine specialized applications, while student teachers demonstrated broader confidence, reporting higher proficiency across collaborative and content-specific technologies (research questions 1 & 2; Table 2). Mentor teachers evaluated mentees’ DL as generally proficient but noted only moderate frequency and effectiveness in their classroom technology integration (research question 3; Table 3). Mentees rated their mentors as less digitally literate and less frequent users of digital tools, while assessing their own DL as stronger by comparison (research question 4, Table 4).
Several key conclusions can be drawn, each shedding light on the DL dynamics between mentor teachers and teaching mentees:
First, DL confidence is uneven across generations. Both groups feel proficient in basic digital tools, but student teachers exhibit greater confidence in specialized and collaborative technologies, suggesting that newer generations are more digitally fluent across diverse applications.
Second, mentor teachers may lack practical integration skills. Although mentors view mentees as digitally capable, they observe only moderate classroom implementation. This points to a gap between technical knowledge and pedagogical application in mentees’ practice.
Third, mutual perceptions reveal a competence gap. Student teachers view their mentors as less digitally literate and infrequent tech users, while considering themselves stronger in comparison. This perception asymmetry could affect mentoring dynamics, especially in tech-mediated instruction.
Fourth, a pedagogical disconnect is emerging. The mismatched evaluations and differing strengths imply a need to recalibrate mentorship roles in teacher education, moving beyond traditional hierarchies toward collaborative digital co-learning.
From a pedagogical standpoint, the findings highlight important implications for teacher education and professional development initiatives, and underscore the importance of ongoing enhancement in DL for both mentors and mentees: First, schools and educational institutions should prioritize the development of targeted professional development programs aimed at enhancing DL levels in mentor teachers. These programs should focus on both basic and advanced digital tools.
Second, incorporating the TPACK framework into practicum settings, where mentor teachers actively demonstrate technology integration, can serve as an effective approach for fostering realistic, context-aware digital instructional skills among teaching mentees. Therefore, it is imperative that Farhangian University embed structured DL training into its teacher education curriculum to ensure that student teachers are adequately prepared to utilize digital tools in meaningful and pedagogically sound ways. Although ICT and computer-based skills are formally included in Farhangian University’s curriculum, recent evaluations (Najafian & Rastegarpour, 2014; Saberi & Sharifzade, 2020) suggest that these components have not sufficiently addressed the practical and pedagogical needs of student teachers.
Third, establishing mentorship and peer learning initiatives can help bridge the DL gap between mentor and student teachers. Experienced teachers can share their knowledge and skills, while younger teachers can offer insights into new digital tools and applications.
Finally, improving access to digital resources and technology infrastructure within educational institutions is critical for supporting teachers and students in developing their digital competencies. By identifying and addressing the existing gaps and offering targeted support, educational institutions can more effectively prepare teachers and student teachers to incorporate digital tools into their teaching methods. This, in turn, will elevate the overall quality of education in the digital era.
Despite offering valuable insights, the study carries several limitations. Although the case study design was chosen to address the specific purposes of this study, its findings may not be broadly generalizable beyond the examined context. Additionally, the subjective nature of descriptive surveys may introduce biases, which should be considered when applying these results to other settings. Future research could build on these insights by employing comparative case studies or larger-scale mixed-method approaches to test the transferability of the findings across different contexts.
The use of purposive sampling, while suitable for this context, restricts the generalizability of findings beyond Fatemeh Zahra Branch of Farhangian University. Moreover, the sample included only female participants, limiting gender-based comparative analysis. Including male participants and expanding the sample across multiple branches or regions would allow for broader insights and comparative analyses.
Furthermore, while the questionnaires showed strong reliability, the study relied exclusively on self-reported and perceptual data, which may introduce bias, particularly given the observed discrepancies between self-perceptions and reciprocal evaluations. The absence of observational or qualitative data further restricts the ability to verify actual classroom practices or explore contextual barriers in greater depth. Nevertheless, the findings should be interpreted as indicators of perceived digital literacy rather than direct measures of classroom performance. Future studies could address this limitation by employing a mixed-methods approach, incorporating observational data, performance‑based assessments, or structured diagnostic tasks to triangulate self‑reports and provide a more objective evaluation of teachers’ digital competencies.
Finally, longitudinal studies could also explore how mentor–mentee perceptions evolve over successive practicum experiences or in response to targeted digital training.
Disclosure statement
No potential conflict of interest was reported by the author.
ORCID
Jamileh Rahemi