Behrooz Kavehie
Abstract
Objective: To find a new method that reduces data processing time when calculating the facility index and discrimination index of questions in large-scale tests. Methods: The statistical technique of simple random sampling (SRS) was combined with the classical test analysis method to calculate the facility and discrimination indices, and a computer program was produced to run the sampled analysis on the large-scale data set. Results: The results showed that this approach significantly reduced processing time. For example, an item whose full-data analysis took 7,393 minutes (more than 123 hours, about five days and nights) and yielded a P index of 0.50 was re-analyzed with 10 SRS iterations in an average of only 129.2 minutes, giving a mean index value of 0.503. Conclusion: Combining simple random sampling (SRS) with classical test theory (CTT) analysis reduces data processing time, and this method can be strongly recommended for nationwide organizations such as those that administer tests at the national level. Keywords: CTT, Simple Random Sampling, Large-Scale Data, Process Time.
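The SRS-plus-CTT idea described in this abstract can be sketched as follows. This is a minimal illustration under assumed conditions, not the authors' actual program: the response vector, sample size, and iteration count are hypothetical stand-ins for the large-scale data set.

```python
import random

def facility_index(responses):
    """Classical facility (P) index: proportion of correct answers (1s)."""
    return sum(responses) / len(responses)

def srs_facility_estimate(responses, sample_size, iterations, seed=0):
    """Estimate P by averaging over repeated simple random samples,
    instead of scanning the full response set each time."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(iterations):
        sample = rng.sample(responses, sample_size)
        estimates.append(facility_index(sample))
    return sum(estimates) / len(estimates)

# Hypothetical data: 1,000,000 examinees, item answered correctly by 50%
full = [1] * 500_000 + [0] * 500_000
print(round(facility_index(full), 3))                      # 0.5
print(round(srs_facility_estimate(full, 10_000, 10), 3))   # close to 0.5
```

The mean of the per-sample estimates converges on the full-data index, which mirrors the abstract's result of obtaining 0.503 in place of 0.50 at a fraction of the processing time.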
Jalil Younesi; Ali Delavar; Mohammad Reza Falsafi Nejhad
Volume 1, Issue 2, January 2011, Pages 139-169
Abstract
This study aims to investigate the psychometric characteristics of the specialized items of the distance-education psychology examinations held by Payame Noor University in 2006. To do this, a 2,000-subject sample was randomly selected from among all those sitting for the distance-education psychology exam. The sample was then randomly divided into two 1,000-subject groups: one was used for parameter estimation, and the other for testing model-data fit. To analyze the test and its items on the basis of classical test theory (CTT), the frequency distribution of distractors was calculated, and item variances, difficulty indices, discrimination indices, and the reliability coefficient of the test were computed. To analyze the test and its items on the basis of item response theory (IRT), the assumptions of unidimensionality and local independence were first examined. To determine unidimensionality, all specialized psychology exams were factor-analyzed with the TESTFACT program. The results suggested that all of the exams are unidimensional, so the local-independence assumption is also satisfied. Model-data fit was explored with the BILOG-MG software, and finally the item parameters (difficulty, discrimination, and guessing) were estimated and extracted along with the subjects' ability parameters. Distractor analysis showed that not all items' distractors were homogeneous in terms of selection probability, and some performed poorly. It also indicated that the two-parameter model fit the psychology and sociology exam items best, while the three-parameter model fit the philosophy exam items best. At the same time, although the mean of every exam was below the criterion score, the philosophy exam appears to have been the most difficult and to have contributed most to the subjects' academic failure.
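The CTT item statistics this abstract enumerates (difficulty, discrimination, reliability) have standard closed-form definitions that can be sketched directly. The sketch below is illustrative only: the 0/1 response matrix is hypothetical, not the Payame Noor data, and discrimination is computed here as a corrected point-biserial (item vs. rest score), one common operationalization of the discrimination index.

```python
import statistics

def item_difficulty(matrix, j):
    """Proportion of examinees answering item j correctly."""
    return sum(row[j] for row in matrix) / len(matrix)

def item_discrimination(matrix, j):
    """Corrected point-biserial: correlation of item j with the
    total score excluding item j (the 'rest' score)."""
    rests = [sum(row) - row[j] for row in matrix]
    items = [row[j] for row in matrix]
    mi, mr = statistics.mean(items), statistics.mean(rests)
    cov = sum((i - mi) * (r - mr) for i, r in zip(items, rests)) / len(matrix)
    si, sr = statistics.pstdev(items), statistics.pstdev(rests)
    return cov / (si * sr) if si and sr else 0.0

def kr20(matrix):
    """Kuder-Richardson 20 reliability for dichotomous items."""
    k = len(matrix[0])
    totals = [sum(row) for row in matrix]
    var_total = statistics.pvariance(totals)
    pq = sum(item_difficulty(matrix, j) * (1 - item_difficulty(matrix, j))
             for j in range(k))
    return (k / (k - 1)) * (1 - pq / var_total)

# Hypothetical response matrix: rows = examinees, columns = items (1 = correct)
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(item_difficulty(data, 0), 2))  # 0.67
```

A full classical analysis like the one in the study would also tabulate distractor choice frequencies per item, which requires the raw multiple-choice responses rather than the scored 0/1 matrix.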