Asghar Minaei; Zahra Ghafari
Abstract
The chief concern about test fairness is the possibility of bias, or differential functioning, because bias casts doubt on a test's validity. Objective: This research examined differential item functioning (DIF) between Iranian girls and boys across all 14 blocks of the TIMSS grade 8 mathematics test using an item response theory (IRT) approach. Method: First, the data were recoded in SPSS and the unidimensionality assumption was checked for every block with the NOHARM software. Next, the best-fitting model, referred to as the "base model," was fitted to the data with BILOG-MG. Starting from that base model, IRTLRDIF (Thissen, 2001) was used to identify anchor items and items showing differential functioning, and finally MULTILOG was used for the final estimation of item and ability parameters. Findings show that, of the 219 items of the TIMSS 2011 grade 8 mathematics test that were studied, 144 served as anchor items and 75 items exhibited DIF, functioning to the disadvantage of girls, the focal group.
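As a hedged illustration of the likelihood-ratio test that IRTLRDIF implements for a studied item, the sketch below compares a model in which the item's parameters are constrained equal across the reference group (boys) and the focal group (girls) against a model in which they are free to differ. The function name and the log-likelihood values are hypothetical and not taken from the study; the fitted log-likelihoods would come from running the two IRT models in any estimation package.

```python
from scipy.stats import chi2

def irt_lr_dif(loglik_constrained, loglik_free, df):
    """Likelihood-ratio DIF statistic for one studied item.

    loglik_constrained : log-likelihood when the studied item's parameters
                         are forced equal across the two groups
    loglik_free        : log-likelihood when the studied item's parameters
                         are estimated separately for each group
    df                 : number of item parameters freed (e.g., 2 for a 2PL item)
    """
    g2 = -2.0 * (loglik_constrained - loglik_free)   # G^2 statistic
    p_value = chi2.sf(g2, df)                        # referred to chi-square(df)
    return g2, p_value

# Hypothetical log-likelihoods from the constrained and free model fits
g2, p = irt_lr_dif(loglik_constrained=-10452.3, loglik_free=-10447.1, df=2)
print(f"G2 = {g2:.2f}, p = {p:.4f}")  # a small p flags the item as showing DIF
```

Under this workflow, items flagged by the test would be treated as DIF items, while items that survive the test can serve as the anchor set for the final parameter estimation.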
Asghar Minaei; Ali Delavar; Mohammad Reza Falsafinezhad; Ali Reza Kiamanesh; Yahya Mohajer
Volume 4, Issue 16, July 2014, Pages 138-170
Abstract
Studies of international mathematics achievement such as the Trends in Mathematics and Science Study (TIMSS) have employed classical test theory and item response theory to rank individuals within a latent ability continuum. Although these approaches have provided insights into comparisons between countries, they have yet to examine how specific attribute mastery affects student performance and how they can provide information for curricular instruction. In the 2007 administration of TIMSS, two benchmark participants, Massachusetts and Minnesota, were tested following the same procedural methods, providing an opportunity for comparison within and across the United States. Overall comparison of their performance showed Massachusetts and Minnesota to significantly outperform the United States. However, this article shows that there is a greater wealth of fine-grained information that can be translated directly for classroom application at the attribute level when a cognitive diagnostic model (CDM) such as the deterministic inputs, noisy "and" gate (DINA; Junker & Sijtsma, 2001) model is used. Results showed a significant disparity between proportions of correctly answering and mastering skills required to solve an item. Advantages of CDMs are discussed as well as a CDM-based method to filter distractor response categories that can aid instructors to diagnose a student's attribute mastery.
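For readers unfamiliar with the DINA model named in the abstract, the minimal sketch below computes an examinee's probability of answering an item correctly from the item's Q-matrix row and its slip and guessing parameters. The function and the example numbers are illustrative assumptions, not values from the article.

```python
import numpy as np

def dina_correct_prob(alpha, q, slip, guess):
    """Probability of a correct response under the DINA model.

    alpha : (K,) binary vector of attribute mastery for one examinee
    q     : (K,) binary Q-matrix row listing the attributes the item requires
    slip  : probability of answering wrong despite mastering all required attributes
    guess : probability of answering right while lacking a required attribute
    """
    # eta = 1 only if the examinee masters every attribute the item requires
    eta = int(np.all(alpha >= q))
    return (1 - slip) ** eta * guess ** (1 - eta)

# Hypothetical example: the item requires attributes 1 and 3,
# and the examinee has mastered exactly those attributes.
alpha = np.array([1, 0, 1])
q = np.array([1, 0, 1])
print(dina_correct_prob(alpha, q, slip=0.1, guess=0.2))  # 0.9
```

The conjunctive ("and" gate) structure is what makes the model diagnostic: a correct response is expected only when all required attributes are mastered, so item-level responses can be traced back to attribute-level mastery profiles.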