PERCEPTIONS OF TEACHERS ABOUT ASSESSMENT PRACTICES AT PRIMARY LEVEL IN PRIVATE SCHOOLS OF LAHORE

DOI: http://dx.doi.org/10.31703/gesr.2022(VII-II).25      Published: Jun 2022
Authored by : Saba Cheema , Malahat Siddiqui , Sajid Massod

Pages: 263-274

    Abstract

Assessment is at the heart of the whole teaching-learning process and is considered fundamental for students' learning. Teachers' perceptions play a vital role and usually influence their own practices. Therefore, studying teachers' perceptions can help improve the quality of education. The current study, therefore, aims at studying the perceptions of primary teachers of private schools in Lahore about their assessment-related practices. A total of 250 teachers across Lahore were conveniently selected as the participants of the study. A questionnaire comprising 36 items was distributed electronically owing to ongoing COVID-19 restrictions; however, teachers were approached in person whenever possible. The findings show that examinations are perceived as a tool for improving both students' learning and teaching practices. It is, therefore, high time to train teachers to use different modes of assessment effectively so that they come on par with global standards of quality assessment systems.

    Key Words

    Assessment, Assessment Practices, Perception of Teachers

    Introduction

Education aims at bringing about a desirable positive change in the behavior of students. Assessments, however, have always been a strong determinant of the quantity and quality of any such changes (Wong, 2007). Therefore, assessment is considered an important component of school activities that consumes much of the time in the teaching-learning process (MacBeath, Galton, & Steward, 2004; Zolfaghari & Ashraf, 2015). Between 10% and 50% of total classroom time is spent on examining students and conducting related activities (Stiggins, 1992). Moreover, the results from these assessments provide ample information that teachers can use not just to interpret students' progress against their learning outcomes, improve their motivation, and diagnose their needs, but also to improve their instructional quality by reflecting on their teaching practices (Brookhart, 1999a; Martinello, Lauris, & Brasolotto, 2011).

Teachers carry out much of the assessment activity at the primary level in private schools in Pakistan. It is yet another dilemma that a majority of these teachers usually have no formal training in conducting purposeful assessments (Khattak, 2012; Richard & Conklin, 1992). Therefore, transforming teachers' beliefs, knowledge, and skills in assessment is a prerequisite for an efficient and modern assessment system (Haynes, Lisic, Goltz, Stein, & Harris, 2016). Despite the emerging educational focus on fostering 21st-century skills, specifically higher-order thinking skills, among students, considerable empirical evidence shows that most assessment systems focus on teaching content and examining retention (Collins, 2014). The paper-pencil test is perceived as the globally accepted format of assessment and hence is trusted by a dominant majority of teachers across the globe (Narathakoon, Sapsirin, & Subphadoongchone, 2020; Zhang & Burry-Stock, 2003). However, the level of the class generally defines the contents of assessments (Zhang & Burry-Stock, 2003).

Therefore, the current study is nested in the perceptions of primary teachers of private schools about assessment and will help fill the dearth of literature in the context of Pakistan. Since perceptions underpin implementation, developing a comprehensive understanding of the teacher perceptions that underlie changes in their behavior is significant.

    Literature Review

Assessments are considered the cornerstone of any educational system as they, by and large, ascertain whether the learning objectives are being met. Though various forms of assessment are used, almost all of them serve as individual evaluation systems, aiming to measure the effects of the teaching-learning process, gather feedback, and compare students' performance across a spectrum of populations (Pinkus, 2009). Assessment largely rests on a teacher's ability to reflect on each student's level of achievement individually. Therefore, the goal of assessment is not just to judge students' performance in exams; rather, it deals with teachers' cognition of students' knowledge and its application (Brookhart, 2001). However, what and how to assess remain the defining characteristics of the whole process. Many teachers, especially at the beginning of their careers, find it the most tedious of all the teaching-learning processes (Gardner, Pyke, Belcheir, & Schrader, 2007).

Teachers' competencies for assessing students have always been a matter of hot debate. In the majority of cases, teachers struggle to communicate expectations and define scoring criteria. When students are only taught to get through their assessments successfully, and paradoxically not with the intention of fostering the required skill set (Kantar, 2014), it not only impedes students' ability to succeed professionally later in their lives but also adversely affects the furtherance of curriculum and instructional methods (Guskey, 2003). Thereby, enabling teachers to develop an insight into students' interests and their learning processes, and reinforcing the efficacy of teaching-learning processes, is critical. Moreover, it promotes teaching as a process that evolves with input and feedback from students (Gathuri, Luvanda, Matende, & Kamundi, 2014).

Planning and conducting assessments may seem an easy task; it, however, requires teachers to keenly assess their students on multiple levels. Since teachers' beliefs and perceptions are embedded in their assessment practices, understanding the factors that shape those beliefs can serve as the overture to a purposeful assessment system.

Industrious teachers usually have assessment literacy developed over time, accompanied by appropriate teacher-education and professional development courses (Zhao, Mulligan, & Mitchelmore, 2006). Assessment literacy usually helps teachers understand the subject through knowledge and skills. Teachers with comprehensive assessment literacy make choices based on their cognition of the uses and applications of different types of assessment (summative or formative) and approaches (multiple-choice, short answer, long answer, projects, and practicals), along with their benefits and limitations (Darling-Hammond, 2003; Richard & Conklin, 1992).

Irrespective of assessment literacy, teachers' practices are often shaped by the educational policy of the country. Moreover, teachers are usually on the brink of malpractice if they confuse non-achievement factors with gradable components of assessments (Brown, 2006). An evident body of research shows that much of assessment practice is affected by the subject area, class and school level, and years of teaching experience (Alkharusi, 2011; Bol, Stephenson, O'Connell, & Nunnery, 1998; Duncan & Noonan, 2007). Also, time constraints usually hinder regular classroom assessments, which can be consequential to students' performance in their summative exams. Alongside formal assessment techniques, it is noteworthy that informal assessments such as observations and in-class questioning are often conducted to obtain information about students' learning and are said to be equally beneficial (Airasian, 2001; Frey & Schmitt, 2007).

Stiggins (1999) outlines seven competencies that are applicable in a majority of cases. He suggested that teachers need to develop their abilities to describe the purpose of conducting an assessment with absolute clarity; carefully describe what is expected of students; apply proper assessment techniques; develop and practice quality assessment exercises and well-drafted marking criteria; avoid inappropriate assessment practices rooted in bias and prejudice; and use assessment as an instructional intervention (Suah & Ong, 2012). Primary teachers tend to incline toward informal assessments such as questioning and observation alongside traditional paper-pencil tests, whereas in secondary classes teachers usually prefer an objective format (Suah & Ong, 2012).

One of the major concerns is upholding the trustworthiness and validity of assessments carried out by teachers. Validity usually deals with the choice of a suitable assessment approach, with reliability and transparency as its core tenets. Trustworthiness, on the other hand, concerns establishing an environment that prevents cheating and all unfair means during assessment while ensuring privacy throughout the process (Brown, 2006).

Formative assessments, also known as assessment for learning, tend to provide students with continuous and efficient feedback on their performance, achievement, and metacognition, and are intended to improve their motivation. They also support struggling students by setting equitable outcomes for learning while keeping summative assessment at the core of the entire assessment process (Doucet, Netolicky, Timmers, & Tuscano, 2020). However, the development of summative exams is yet another arena wherein a majority of teachers lack the required competencies (Elmehdi & Ibrahem, 2019; Xiong & Suen, 2018).

Communicating results with students is yet another critical aspect that can make or break the effectiveness of the entire process. Teachers' incompetence in communicating results tends to breed self-doubt and a lack of motivation among students (Brookhart, 1999b). A teacher should also be adept at interpreting the results to make informed decisions about students' academic progress as well as for school improvement (Even, 2005).

With a pressing urge for the alignment of assessments with the teaching-learning process, researchers in Pakistan have a growing interest in studying teachers' perceptions of assessments, their assessment practices, and their skills. The current study, therefore, is focused on exploring the assessment practices of private primary school teachers in Pakistan.

    Methods and Materials

The study aimed to explore private school primary teachers' perceptions of assessment. The research questions were guided by the objectives of the study, which are as follows:

• To establish teachers' perceptions of assessment.

• To explore the prevailing practices of assessment in schools.


    Research Questions

• What are teachers' perceptions of assessment?

• What are the prevailing practices of assessment in schools?


    Instruments

This quantitative study used a self-developed questionnaire comprising 36 items that covered participants' demographic information and were intended to identify their assessment practices. The draft of the questionnaire was critically reviewed and finalized based on the opinions of three experts.


    Sampling and Participants 

The population of the study comprised all the primary teachers of private schools. Necessarily, all the participants had at least two years of teaching experience and had been planning and conducting assessments in their classrooms. Questionnaires were distributed to 500 participants shortlisted through convenience sampling; however, only 250 participants returned completely filled questionnaires.


    Pilot Testing

The researcher conducted pilot testing of the instrument prior to collecting data from the field. About 60 teachers teaching at the primary level in private schools of Lahore were included. The researcher administered the instrument herself and noted any ambiguities raised during the process. After pilot testing, the data were used to calculate the reliability of the instrument; the reliability coefficient of the tool was 0.755.
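For readers who wish to verify a comparable reliability estimate outside SPSS, the sketch below computes Cronbach's alpha from item-level pilot responses. The paper reports only a reliability value of 0.755 without naming the coefficient, so the use of Cronbach's alpha, as well as the file and column names, are illustrative assumptions rather than details from the study.

```python
# Illustrative sketch (not the authors' SPSS procedure): an internal-consistency
# estimate for pilot data. Column names and the coefficient choice are assumptions.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame of item responses (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage with one column per questionnaire item (item_1 ... item_36):
# pilot = pd.read_csv("pilot_responses.csv")
# print(round(cronbach_alpha(pilot.filter(like="item_")), 3))
```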


    Data Analysis

Data were coded accordingly and entered into SPSS for analysis. The study employed descriptive statistics (mean and standard deviation) alongside inferential statistics (independent-samples t-test and one-way ANOVA).


    Research Ethics

For this study, ethical considerations were taken into account by extending formal requests for data collection and ensuring the anonymity of the respondents. The data gathered were used solely for the analysis and interpretation of the results of this study (Cohen, Manion, & Morrison, 2007).

    Results of the study

The data were collected through questionnaires from 250 teachers of private schools in Lahore. Respondents' perceptions were analyzed through descriptive and inferential statistics.


     

Table 1. Gender Distribution of Participants

Gender     f      %
Male       113    45.2
Female     137    54.8
Total      250    100.0

Table 1 shows the gender distribution of the sample. Out of 250 respondents, 113 (45.2%) were male and 137 (54.8%) were female.


     

Table 2. Distribution of Respondents on the Basis of Qualification

Qualification    f      %
BA               80     18.2
MA               77     17.5
MPhil            62     14.1
Total            250    100

Table 2 shows the frequency distribution of the sample on the basis of academic qualification. Out of 250 respondents, 80 (18.2%) held a BA, 77 (17.5%) held an MA, and 62 (14.1%) held an MPhil.


     

Table 3. Distribution of Respondents on the Basis of Teaching Experience

Teaching Experience    f      %
1-2 years              82     32.8
3-4 years              101    40.4
More than 4 years      67     26.8
Total                  250    100.0

Table 3 shows the frequency distribution of the sample on the basis of teaching experience. Out of 250 respondents, 82 (32.8%) had 1-2 years, 101 (40.4%) had 3-4 years, and 67 (26.8%) had more than 4 years of teaching experience.


     

Table 4. Distribution of Respondents on the Basis of Class Taught

Class      f      %
Class 1    57     22.8
Class 2    64     25.6
Class 3    34     13.6
Class 4    35     14.0
Class 5    60     24.0
Total      250    100

Table 4 reflects the frequency distribution of the sample on the basis of the class taught. Out of 250 respondents, 57 (22.8%) were teaching class 1, 64 (25.6%) class 2, 34 (13.6%) class 3, 35 (14.0%) class 4, and 60 (24.0%) class 5.


     

Table 5. Distribution of Respondents on the Basis of Teaching Subject

Subject                  f      %
Teaching all subjects    47     18.8
English                  50     20.0
Maths                    46     18.4
Urdu                     42     16.8
Science                  65     26.0
Total                    250    100

Table 5 shows the frequency distribution of the sample on the basis of the subject taught. Out of 250 respondents, 47 (18.8%) were teaching all subjects, 50 (20.0%) were teaching English, 46 (18.4%) Maths, 42 (16.8%) Urdu, and 65 (26.0%) Science.

     

    Descriptive Analysis

The second section of the questionnaire comprised statements that explored teachers' perceptions of their assessment practices. Mean and standard deviation were used to identify the strong and weak areas of assessment practice as perceived by the teachers (respondents) at different private schools in Lahore.
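As an illustration of how the per-item means and standard deviations summarized in Table 6 could be reproduced, the following minimal sketch aggregates item responses with pandas. The file name, the item column naming, and the Likert-type coding are hypothetical assumptions, not details of the original SPSS analysis.

```python
# Minimal sketch: per-statement mean (M) and standard deviation (SD), sorted from
# strongest to weakest agreement, as presented in Table 6. All names are hypothetical.
import pandas as pd

responses = pd.read_csv("assessment_practices.csv")   # hypothetical file: one column per statement

item_cols = [c for c in responses.columns if c.startswith("item_")]
summary = (
    responses[item_cols]
    .agg(["mean", "std"])                       # mean and SD for every statement
    .T                                          # statements as rows
    .rename(columns={"mean": "M", "std": "SD"})
    .sort_values("M", ascending=False)          # strongest to weakest areas
    .round(1)
)
print(summary)
```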


     

Table 6. Mean and Standard Deviations of Responses of Teachers about Student Assessment Practices

Statement                                                                                     M      SD
I take announced quizzes to assess the learning of my students.                              4.8    1.3
I write true/false questions to assess my students.                                          4.8    1.2
I write fill in the blank questions to assess my students.                                   4.7    1.3
I use assessment results when deciding about the promotion of students to the next grade.    4.6    1.4
I write multiple choice questions to assess my students.                                     4.5    1.3
I write essay type questions to assess my students.                                          4.5    1.4
I always use objective type tests to assess my students.                                     4.5    1.3
I plan assessment keeping in view the paper pattern of annual examinations.                  4.3    1.3
I use worksheets to assess my students in class.                                             4.2    1.2
I assign grades to my students based on assessments.                                         4.2    1.3
I am unable to use some assessment practices in my class.                                    4.0    1.3
I develop assessments based on clearly defined lesson/chapter objectives.                    4.0    1.4
I make tests myself to assess my students.                                                   3.9    1.3
I use test items given in the textbook for classroom assessment.                             3.8    1.3
I always use essay type questions to assess my students.                                     3.7    1.3
I write matching questions to assess my students.                                            3.6    1.3
I communicate assessment results to parents.                                                 3.5    1.3
I assess my students by asking oral questions.                                               3.2    1.3
I use assessment results when planning my teaching.                                          3.1    1.3
I take unannounced quizzes to assess the learning of students.                               3.1    1.2
I provide oral feedback to my students.                                                      3.0    1.3
I assess the individual class participation of students.                                     3.0    1.4
I choose appropriate assessment methods for instructional decisions.                         3.0    1.3
I communicate assessment results to my colleagues.                                           2.9    1.4
I assess group or team class participation.                                                  2.9    1.4
I match assessments with instructional objectives.                                           2.9    1.3
I provide written feedback to my students.                                                   2.9    1.3
I obtain diagnostic information from classroom assessments.                                  2.9    1.3
I assess individual class presentations (if any).                                            2.8    1.4
I use a table of specifications to plan assessments.                                         2.8    1.2
I construct a model answer for scoring essay type questions.                                 2.8    1.3
I need training in classroom assessment techniques.                                          2.8    1.3
I get help from internet resources to prepare an assessment for my class.                    2.6    1.1
I make tests from all content which was given to students for preparation.                   2.5    1.2
I assess my students through observation.                                                    2.5    1.3
I use test items to assess the higher order thinking skills of my students.                  2.0    1.4


According to table 6, the respondents strongly agreed (M=4.8, S.D.=1.3) that they take announced quizzes to assess their students. The teachers strongly agreed (M=4.8, S.D.=1.2) that they write true/false questions to assess their students. The table further revealed that respondents highly agreed (M=4.7, S.D.=1.3) that they write fill-in-the-blank questions to assess their students. The respondents also showed their agreement (M=4.6, S.D.=1.4) that they use assessment results when deciding on the promotion of their students. The respondents also indicated (M=4.5, S.D.=1.3) that they use multiple-choice questions to assess their students.

The respondents further agreed (M=4.5, S.D.=1.4) that they assess their students through essay type questions. The analysis also indicated (M=4.5, S.D.=1.3) that they use objective type tests for assessment. The respondents showed agreement (M=4.3, S.D.=1.3) that they plan assessments keeping in view the paper pattern of the annual examination. They also agreed (M=4.2, S.D.=1.2) that they use worksheets to assess their students in class, and (M=4.2, S.D.=1.3) that they assign grades to their students based on assessments. The analysis further indicated that the respondents agreed (M=4.0, S.D.=1.3) that they are unable to use some assessment practices in their class.

The respondents also agreed (M=4.0, S.D.=1.4) that they develop assessments based on clearly defined lesson/chapter objectives, and (M=3.9, S.D.=1.3) that they make tests themselves to assess their students. They further agreed (M=3.8, S.D.=1.3) that they use test items given in the textbook for classroom assessment, and (M=3.7, S.D.=1.3) that they always use essay type questions to assess their students.

The respondents agreed (M=3.6, S.D.=1.3) that they write matching questions to assess their students, and (M=3.5, S.D.=1.3) that they communicate assessment results to parents. The analysis also indicated that the respondents agreed (M=3.2, S.D.=1.3) that they assess their students by asking oral questions, and (M=3.1, S.D.=1.3) that they use assessment results when planning their teaching. The respondents further agreed (M=3.1, S.D.=1.2) that they take unannounced quizzes to assess the learning of students, and (M=3.0, S.D.=1.3) that they provide oral feedback to their students.

The respondents agreed (M=3.0, S.D.=1.4) that they assess the individual class participation of their students, and (M=3.0, S.D.=1.3) that they choose appropriate assessment methods for instructional decisions. They further agreed (M=2.9, S.D.=1.4) that they communicate assessment results to their colleagues, (M=2.9, S.D.=1.4) that they assess group or team class participation, and (M=2.9, S.D.=1.3) that they match assessments with instructional objectives.

Table 6 further shows that the respondents agreed (M=2.9, S.D.=1.3) that they provide written feedback to their students and (M=2.9, S.D.=1.3) that they obtain diagnostic information from classroom assessments. The respondents also agreed (M=2.8, S.D.=1.4) that they assess individual class presentations (if any), (M=2.8, S.D.=1.2) that they use a table of specifications to plan assessments, and (M=2.8, S.D.=1.3) that they construct a model answer for scoring essay type questions. The respondents further indicated (M=2.8, S.D.=1.3) that they need training in classroom assessment techniques, (M=2.6, S.D.=1.1) that they get help from internet resources to prepare assessments for their class, (M=2.5, S.D.=1.2) that they make tests from all the content given to students for preparation, and (M=2.5, S.D.=1.3) that they assess their students through observation. Finally, the respondents showed the lowest agreement (M=2.0, S.D.=1.4) with the statement that they use test items to assess the higher order thinking skills of their students.

     

    Inferential Statistics

In inferential statistics, an independent-samples t-test and one-way ANOVA were used to identify significant differences among different groups of participants regarding their perceptions of assessment practices in primary schools in Lahore.


     

Table 7. T-Test Comparing Mean Scores on APPL on the Basis of Gender

Dependent Variable     Gender    n      M         SD       t       p
Total Score on APPL    Male      113    106.46    16.03    .994    .321
                       Female    137    104.53    14.58


An independent-samples t-test was performed (Table 7) to compare the perceptions of male and female teachers about assessment practices in private schools in Lahore. No significant difference was identified between the perceptions of teachers of the two genders (t = .994, p = .321). It is evident that both male and female teachers hold similar perceptions about assessment practices in private schools of Lahore.
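The gender comparison in Table 7 follows a standard independent-samples t-test. The sketch below shows how such a comparison could be run with SciPy; the variable names and the equal-variances assumption are mine, since the authors conducted the analysis in SPSS.

```python
# Illustrative sketch of the gender comparison reported in Table 7.
# File and column names are hypothetical, not taken from the study.
import pandas as pd
from scipy import stats

data = pd.read_csv("appl_scores.csv")          # hypothetical: 'gender' and 'appl_total' columns

male = data.loc[data["gender"] == "Male", "appl_total"]
female = data.loc[data["gender"] == "Female", "appl_total"]

t, p = stats.ttest_ind(male, female, equal_var=True)   # classic Student's t-test
print(f"t = {t:.3f}, p = {p:.3f}")             # the paper reports t = .994, p = .321
```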


     

Table 8. ANOVA Test Comparing Scores on APPL on the Basis of Age Groups

Variance          Sum of Squares    df     Mean Square    F       Sig.
Between Groups    162.82            2      81.41          .348    .706
Within Groups     57777.37          247    233.91
Total             57940.19          249


    One-way ANOVA (Table 8) shows that there was no significant difference in the perception of teachers belonging to different age groups about assessment practices (F=.348, p = .706). It was evident that teachers belonging to different age groups have similar perceptions about assessment practices.


     

Table 9. ANOVA Test Comparing Scores on APPL on the Basis of Academic Qualification

Variance          Sum of Squares    df     Mean Square    F       Sig.
Between Groups    1602.29           2      801.15         3.51    .031
Within Groups     56337.89          247    228.08
Total             57940.18          249


    One-way ANOVA (Table 9) shows that there was a significant difference in the perception of teachers in different private schools of Lahore about assessment practices based on their academic qualifications (F=3.51, p=.031). Post hoc analysis was conducted to identify significant differences among different groups.


     

Table 10. LSD Multiple Comparisons of Mean Scores on APPL on the Basis of Teaching Experience of Teachers

(I) Teaching Experience    (J) Teaching Experience    Mean Difference (I-J)    Std. Error    Sig.    95% CI Lower Bound    95% CI Upper Bound
1-2 years                  3-4 years                  5.87998*                 2.24497       .009    1.4583                10.3017
1-2 years                  More than 4 years          2.36986                  2.48714       .342    -2.5289               7.2686
3-4 years                  1-2 years                  -5.87998*                2.24497       .009    -10.3017              -1.4583
3-4 years                  More than 4 years          -3.51012                 2.37962       .141    -8.1971               1.1768
More than 4 years          1-2 years                  -2.36986                 2.48714       .342    -7.2686               2.5289
More than 4 years          3-4 years                  3.51012                  2.37962       .141    -1.1768               8.1971

*. The mean difference is significant at the 0.05 level.


Post hoc multiple comparisons (Table 10) show significant differences among teachers' perceptions of assessment practices on the basis of teaching experience. Respondents with 1-2 years of teaching experience (mean difference = 5.88, p = .009) had more favorable perceptions about assessment practices than respondents with 3-4 years of teaching experience. No significant differences were observed in the other pairs.
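The analysis reported in Tables 9 and 10 combines a one-way ANOVA with LSD-style post hoc comparisons. The following sketch illustrates that procedure under assumed data: an overall F-test followed by pairwise t-tests that share the pooled within-groups error term. The grouping variable, column names, and file name are hypothetical, and the authors' actual computations were done in SPSS.

```python
# Sketch: one-way ANOVA followed by Fisher's LSD-style pairwise comparisons
# (pairwise t-tests using the pooled within-groups variance). All names are assumed.
from itertools import combinations
import numpy as np
import pandas as pd
from scipy import stats

data = pd.read_csv("appl_scores.csv")          # hypothetical: 'experience' group and 'appl_total'
groups = {g: d["appl_total"].to_numpy() for g, d in data.groupby("experience")}

# Overall one-way ANOVA across the experience groups
F, p = stats.f_oneway(*groups.values())
print(f"F = {F:.2f}, p = {p:.3f}")

# Pooled within-groups mean square (error term) and its degrees of freedom
n_total = sum(len(v) for v in groups.values())
df_within = n_total - len(groups)
mse = sum((len(v) - 1) * v.var(ddof=1) for v in groups.values()) / df_within

# LSD comparisons: ordinary t-tests that all use the pooled error term
for (a, va), (b, vb) in combinations(groups.items(), 2):
    diff = va.mean() - vb.mean()
    se = np.sqrt(mse * (1 / len(va) + 1 / len(vb)))
    t = diff / se
    p_pair = 2 * stats.t.sf(abs(t), df_within)
    print(f"{a} vs {b}: mean diff = {diff:.2f}, p = {p_pair:.3f}")
```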


Table 11. ANOVA Test Comparing Mean Scores on the APPL Scale on the Basis of Classes Taught by the Teachers

Variance          Sum of Squares    df     Mean Square    F       Sig.
Between Groups    1826.13           4      456.53         1.99    .096
Within Groups     56114.06          245    229.03
Total             57940.19          249


One-way ANOVA (Table 11) shows that there was no significant difference in the assessment practices of teachers teaching different classes in private schools in Lahore (F = 1.99, p = .096).

Discussion

Findings have revealed that teachers have a favorable perception of assessments. Generally, teachers are encouraged to plan formative assessments on their own, and school managements usually provide clear instructions and objectives for such assessments ahead of time (Darling-Hammond, 2003). However, school managements do not provide any training on the types of assessments and their purposes (Brookhart, 1999a; Even, 2005; Guskey, 2003; Zhang & Burry-Stock, 2003; Zolfaghari & Ashraf, 2015). Generally, the assessments conducted are based on the content of the textbook and solely measure students' retention and memorization (Narathakoon et al., 2020), hence compromising the development of students' higher-order thinking skills (Gardner et al., 2007; Kantar, 2014; MacBeath et al., 2004; Pinkus, 2009). Few teachers reported conducting concept-based assessments. Additionally, the absence of graded modes of assessment such as projects and group presentations undermines students' collaborative skills.

A majority of the teachers are more focused on traditional methods of assessment aimed at examining retention. The modes of assessment were found to be similar to those identified in previous studies (Herrera et al., 2007). Surprise quizzes and graded worksheets, however, have not been identified as distinct modes in prior studies (Bol et al., 1998; Brookhart, 1999a, 2001; Pinkus, 2009). Moreover, teachers showed an inclination towards objective assessment in comparison to subjective/essay type assessments owing to its flexibility and ease of administration. Verbal examination is another notable finding of the study (Zhang & Burry-Stock, 2003). The respondents reported asking questions based on the book content to obtain quick feedback on whether the students had properly learnt their lessons.

It was also found that results are usually sent directly to parents through electronic means (Brookhart, 1999b), while the results of the assessments are generally discussed with the students in class. It was also revealed that teachers generally use assessment results as feedback on their teaching methodologies and reflect on and improve their instructional quality. It was further found that teachers of the same institution discuss their assessment results with colleagues in order to identify the learning problems of specific students and exchange successful strategies for supporting their struggling students.

    Conclusion

Teachers' assessment practices differ greatly on the basis of their school culture and their years of teaching experience. Assessments measuring students' retention and memorization are all too typical of Pakistani educational culture. Relatively few teachers were found to be involved in promoting higher-order thinking skills (Zhang & Burry-Stock, 2003). Teachers also need to develop their skills in communicating results to students in a constructive way.

References

  • Airasian, P. W. (2001). Classroom assessment: Concepts and applications: ERIC.
  • Alkharusi, H. (2011). Teachers' classroom assessment skills: Influence of gender, subject area, grade level, teaching experience and in-service assessment training. Journal of Turkish Science Education, 8(2), 39-48.
• Bol, L., Stephenson, P. L., O'Connell, A. A., & Nunnery, J. A. (1998). Influence of experience, grade level, and subject area on teachers' assessment practices. The Journal of Educational Research, 91(6), 323-330.
  • Brookhart, S. M. (1999a). The Art and Science of Classroom Assessment. The Missing Part of Pedagogy. ASHE-ERIC Higher Education Report, 27, Number 1: ERIC.
• Brookhart, S. M. (1999b). Teaching about communicating assessment results and grading. Educational Measurement: Issues and Practice, 18(1), 5-13.
  • Brookhart, S. M. (2001). Successful students' formative and summative uses of assessment information. Assessment in Education: Principles, Policy & Practice, 8(2), 153-169.
  • Brown, G. T. (2006). Teachers' conceptions of assessment: Validation of an abridged version. Psychological reports, 99(1), 166- 170.
  • Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education. London: Routledge.
  • Collins, R. (2014). Skills for the 21st Century: teaching higher-order thinking. Curriculum & Leadership Journal, 12(14),
  • Darling-Hammond, L. (2003). Standards and assessments: Where we are and what we need. Teachers College Record, 0-0.
  • Doucet, A., Netolicky, D., Timmers, K., & Tuscano, F. J. (2020). Thinking about pedagogy in an unfolding pandemic: An independent report on approaches to distance learning during COVID19 school closures: Education International.
• Duncan, C. R., & Noonan, B. (2007). Factors affecting teachers' grading and assessment practices. Alberta Journal of Educational Research, 53(1).
• Elmehdi, H. M., & Ibrahem, A.-M. (2019). Online summative assessment and its impact on students' academic performance, perception and attitude towards online exams: University of Sharjah study case. In M. Mateev & P. Poutziouris (Eds.), Creative Business and Social Innovations for a Sustainable Future (pp. 211-218). Springer.
  • Even, R. (2005). Using assessment to inform instructional decisions: How hard can it be? Mathematics Education Research Journal, 17(3), 45-61.
  • Frey, B. B., & Schmitt, V. L. (2007). Coming to terms with classroom assessment. Journal of Advanced Academics, 18(3), 402-423.
  • Gardner, J., Pyke, P., Belcheir, M., & Schrader, C. (2007). Testing our assumptions: Mathematics preparation and its role in engineering student success. Paper presented at the 2007 Annual Conference & Exposition.
• Gathuri, J. W., Luvanda, A., Matende, S., & Kamundi, S. (2014). Impersonation challenges associated with e-assessment of university students. Journal of Information Engineering and Applications, 4(7), 60-68.
• Guskey, T. R. (Ed.) (2003). How classroom assessments improve learning. Alexandria, VA: ASCD.
  • Haynes, A., Lisic, E., Goltz, M., Stein, B., & Harris, K. (2016). Moving beyond Assessment to Improving Students' Critical Thinking Skills: A Model for Implementing Change. Journal of the Scholarship of Teaching and Learning, 16(4), 44-61
  • Kantar, L. D. (2014). Assessment and instruction to promote higher order thinking in nursing students. Nurse Education Today, 34(5), 789-794
  • Khattak, S. G. (2012). Assessment in schools in Pakistan. SA-eDUC, 9(2), 1-13.
  • MacBeath, J., Galton, M., & Steward, S. (2004). A life in secondary teaching: Finding time for learning: University of Cambridge Faculty of Education
  • Martinello, J. G., Lauris, J. R. P., & Brasolotto, A. G. (2011). Psychometric assessments of life quality and voice for teachers within the municipal system, in Bauru, SP, Brazil. Journal of Applied Oral Science, 19(6), 573-578.
  • Narathakoon, A., Sapsirin, S., & Subphadoongchone, P. (2020). Beliefs and Classroom Assessment Practices of English Teachers in Primary Schools in Thailand. International Journal of Instruction, 13(3), 137-156.
  • Pinkus, L. M. (2009). Meaningful Measurement: The Role of Assessments in Improving High School Education in the Twenty-First Century. Alliance for Excellent Education.
• Richard, S., & Conklin, N. F. (1992). In teachers' hands: Investigating the practices of classroom assessment. SUNY Press.
• Stiggins, R. (1992). High quality classroom assessment: What does it mean? Educational Measurement: Issues and Practice, 11(2), 35-39.
• Stiggins, R. (1999). Evaluating classroom assessment training in teacher education programs. Educational Measurement: Issues and Practice, 18(1), 23-27.
  • Suah, S. L., & Ong, S. L. (2012). Investigating Assessment Practices of In-service Teachers. International Online Journal of Educational Sciences, 4(1),
  • Wong, H. K. (2007). The well-managed classroom.
  • Xiong, Y., & Suen, H. K. (2018). Assessment approaches in massive open online courses: Possibilities, challenges and future directions. International Review of Education, 64(2), 241-263.
• Zhang, Z., & Burry-Stock, J. A. (2003). Classroom assessment practices and teachers' self-perceived assessment skills. Applied Measurement in Education, 16(4), 323-342.
• Zhao, D.-C., Mulligan, J., & Mitchelmore, M. (2006). Case studies on mathematics assessment practices in Australian and Chinese primary schools. In Mathematics Education in Different Cultural Traditions: A Comparative Study of East Asia and the West (pp. 261-275). Springer.
• Zolfaghari, S., & Ashraf, H. (2015). The relationship between EFL teachers' assessment literacy, their teaching experience, and their age: A case of Iranian EFL teachers. Theory and Practice in Language Studies, 5(12), 2550.
