Ali Baniasadi; Keyvan Salehi; Ebrahim Khodaie; Khosro Bagheri; Balal Izanloo
Abstract
The present study aimed to investigate the psychometric properties of a fair classroom assessment rubric based on item response theory. For this purpose, a sample of 511 students of the University of Tehran was selected through convenience sampling and responded to the rubric items. To decide between unidimensional and multidimensional models, the DETECT procedure and parallel analysis were applied. Both methods rejected the unidimensionality of the data, and parallel analysis indicated that three factors should be extracted. Moreover, a comparison of unidimensional and multidimensional model fit indices, including the log-likelihood, the likelihood ratio, the Root Mean Square Error of Approximation (RMSEA), and the Bayesian and Akaike information criteria, confirmed the better fit of the multidimensional model. Because the item responses were polytomous, the multidimensional graded response model was used to estimate the item parameters. The reliabilities of the subscales of procedural fairness, nature of assessment, and interactional fairness were 0.85, 0.69, and 0.63, respectively. Estimated discrimination parameters ranged from 1.048 to 5.802, indicating that all items discriminated well between higher and lower levels of fair classroom assessment, and, after controlling the false discovery rate, the S-X2 statistic showed a good fit for all rubric items. Overall, the results indicate that the developed rubric has appropriate psychometric properties for evaluating the quality of fairness in classroom assessment.
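For readers unfamiliar with the dimensionality check mentioned above, the following is a minimal sketch of Horn's parallel analysis in Python. It assumes a respondents-by-items response matrix is available as a NumPy array; the array shape, function name, and simulated data below are illustrative only and not taken from the study, and a full replication would use polychoric rather than Pearson correlations for ordinal responses.

```python
import numpy as np

def parallel_analysis(data, n_sims=500, percentile=95, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues
    exceed the chosen percentile of eigenvalues obtained from random data
    of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    # Eigenvalues of the observed correlation matrix, sorted descending.
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    # Eigenvalues from random (uncorrelated) data of the same size.
    sim_eig = np.empty((n_sims, k))
    for s in range(n_sims):
        sim = rng.standard_normal((n, k))
        sim_eig[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    threshold = np.percentile(sim_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold

# Placeholder data: 511 respondents, 15 five-point items (not the real rubric data).
responses = np.random.default_rng(1).integers(1, 6, size=(511, 15)).astype(float)
n_factors, observed, cutoff = parallel_analysis(responses)
print("Suggested number of factors:", n_factors)
```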
M. Habibi; Fatemeh Moradi; Balal Izanlo
Volume 2, Issue 6, January 2012, Pages 1-27
Abstract
Background: The invariance of questions and tests is an important issue in assessment.
Objectives: The present study compared the invariance of parameters under item response theory and confirmatory factor analysis.
Methods: After reviewing the relevant foundations of each approach, parameter invariance under each was compared using empirical data from the Progress in International Reading Literacy Study (PIRLS). The sample consisted of 5,000 Iranian students (half female, half male) from the 2006 cycle who responded to six questions on the attitude-toward-reading scale.
Results: Data analysis showed that question 6 was flagged as biased by both item response theory and confirmatory factor analysis. The results differed, however, for questions 1, 3, and 4: question 1 was flagged as biased only under item response theory, whereas questions 3 and 4 were flagged as biased only under confirmatory factor analysis.
Conclusion: It is suggested that both approaches be employed when judging parameter invariance, since relying on only one may be misleading. It is also recommended that item intercepts and differences in the ability distributions of the groups, together with their effects on invariance, be examined first.
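To make the idea of an item-level invariance check concrete, below is a minimal sketch of a logistic-regression DIF screen (Swaminathan & Rogers). This is not the IRT or CFA procedure used in the study; it is a simpler, related technique shown only to illustrate flagging a question as biased across groups. All variable names and data are simulated placeholders, not the PIRLS sample.

```python
import numpy as np
import statsmodels.api as sm

def logistic_dif(item, total, group):
    """Logistic-regression DIF screen: compare a model using only the
    matching total score against one that also includes group membership
    and a score-by-group interaction. A large likelihood-ratio statistic
    (chi-square with 2 df) suggests the item functions differently across
    groups (uniform and/or non-uniform DIF)."""
    base = sm.Logit(item, sm.add_constant(np.column_stack([total]))).fit(disp=0)
    full_X = sm.add_constant(np.column_stack([total, group, total * group]))
    full = sm.Logit(item, full_X).fit(disp=0)
    return 2 * (full.llf - base.llf)

# Illustrative data only: 5000 simulated respondents with built-in DIF.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 5000)        # e.g., 0 = male, 1 = female
total = rng.normal(0, 1, 5000)          # matching variable (total score)
item = rng.binomial(1, 1 / (1 + np.exp(-(total + 0.4 * group))))
print("LR statistic:", round(logistic_dif(item, total, group), 2))
```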