Balal Izanloo; Manouchehr Rezaee; Naser Abbasi
Abstract
Perceived partner responsiveness (PPR) is a construct that can help evaluate intimacy in couple therapy. However, research on PPR has been hampered by the lack of a standardized measure. The purpose of the present study was to translate the Perceived Responsiveness and Insensitivity (PRI) scale and examine its factor structure, invariance, validity, and internal consistency in Iranian samples. The statistical population comprised married teachers of Zanjan province in 2021-2022; in total, 429 teachers participated through judgmental convenience sampling. Descriptive statistics, confirmatory factor analysis, the graded response model, parallel analysis, exploratory graph analysis, and bootstrap analysis were used for data analysis. The findings demonstrated that the factor structure of the PRI in Iranian society is similar to that reported by Crasta et al. (2021); that is, the PRI consisted of two subscales. The fit indices of the scale and the factor loadings of the items were optimal both by gender and in the whole sample. The findings on the invariance of the scale across different models also indicated that the items carry the same meaning for men and women. Analyses based on item response theory showed that the items selected for a PRI short form in this study, i.e., those providing the most information, did not match the short form derived in Crasta et al.'s (2021) study. The alpha coefficient, composite reliability, AVE index, and diagnostic validity of the PRI scale were also satisfactory. The findings on convergent and divergent validity likewise indicated significant associations between the PRI and other variables. Overall, the PRI scale showed sound psychometric properties, indicating its applicability in Iranian society and its consistency with the cultural norms of the country.
However, the present study suggested possibly weak diagnostic validity of the scale's two constructs, especially for the group of women, which should be investigated in future studies with a larger sample size.
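The graded response model used in this abstract's analyses can be sketched numerically: in Samejima's GRM, the probability of endorsing category k is the difference between two adjacent cumulative logistic curves. A minimal NumPy sketch follows; the item parameters are illustrative values, not estimates from this study.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category response probabilities under Samejima's graded response model.

    theta : latent trait value
    a     : item discrimination
    b     : increasing thresholds (length K-1 for K response categories)
    """
    b = np.asarray(b, dtype=float)
    # Cumulative probabilities P*(k) = P(response in category k or higher)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Bound with P*(lowest) = 1 and P*(beyond highest) = 0, then difference
    upper = np.concatenate(([1.0], p_star))
    lower = np.concatenate((p_star, [0.0]))
    return upper - lower

# Hypothetical 4-category item: a = 1.5, thresholds at -1, 0, 1
probs = grm_category_probs(theta=0.0, a=1.5, b=[-1.0, 0.0, 1.0])
```

By construction the category probabilities are non-negative and sum to one at every trait level.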
Hadi Samadieh; Farhad Tanhaye Reshvanloo; Talieh Saeidi Rezvani; Leila Talebzadeh
Abstract
A fundamental dimension along which all social and personal relationships vary is closeness. The purpose of the present study was to investigate the factor structure and item-response parameters of the Unidimensional Relationship Closeness Scale. In a descriptive-correlational, test-validation design, 180 Birjand University students in the first study and 250 students in the second were selected through multi-stage random sampling and completed the Unidimensional Relationship Closeness Scale (Dibble, Levine, & Park, 2012). The data were analyzed using internal consistency, exploratory and confirmatory factor analysis, discrimination and threshold parameters, and item and test information curves. Results showed that the Unidimensional Relationship Closeness Scale had a one-factor structure with an explained variance of 63.39%. The structure was also supported by confirmatory factor analysis. Cronbach's alpha coefficients were 0.95 and 0.93, and split-half coefficients were 0.89 and 0.88, in the two studies respectively. The item-response parameters were also at an optimal level. It appears that the Unidimensional Relationship Closeness Scale, and in particular its 12-item version, has good reliability and validity among students.
Behnam Karimi; M. Falsafinejad; Fariborz Dortaj
Volume 2, Issue 6, January 2012, Pages 1-23
Abstract
Background: Ease of scoring and administration and the objectivity of multiple-choice tests have made them essential instruments in large-scale assessments. However, multiple-choice tests have drawn intense criticism: for example, they do not cover all educational goals (they assess low cognitive levels), and examinees can answer items by guessing. To address these problems, some researchers have suggested increasing the number of choices per item.
Objectives: The objective of this research was to study the effects of the number of item choices on the psychometric characteristics of tests and items, as well as on examinees' estimated ability, under classical test theory and item response theory (IRT).
Methods: The statistical population was all high school students of Shiraz, of whom 608 were randomly selected as the sample. To answer the research questions, an empirical method was used, and data were collected with two tests, one of language and one of arithmetic, developed for this purpose.
Results: Data analysis indicated no significant effect of the number of item choices on item parameters, and the effect of the number of choices on examinees' estimated characteristics was equal across the different tests. Furthermore, there were differences between the parameters estimated under classical test theory and those estimated under item response theory (IRT).
Conclusion: After the assumptions of item response theory (IRT) were checked, the data proved to fit the two-parameter model better, and model fit did not differ by number of item choices. In addition, estimated ability did differ across numbers of item choices.
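The role of guessing raised in the Background is commonly modeled with a 3PL lower asymptote c, where c is roughly 1/m for an m-choice item, so adding choices lowers the guessing floor. A brief sketch (parameter values are illustrative, not from this study, which ultimately used a two-parameter model):

```python
import numpy as np

def icc_3pl(theta, a, b, c):
    """3PL item characteristic curve: guessing floor c, slope a, difficulty b."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# For a very low-ability examinee, success probability approaches the
# guessing floor: about 1/4 for a 4-choice item, 1/5 for a 5-choice item.
p4 = icc_3pl(theta=-10.0, a=1.0, b=0.0, c=1/4)
p5 = icc_3pl(theta=-10.0, a=1.0, b=0.0, c=1/5)
```

This makes concrete why increasing the number of choices is proposed as a remedy for guessing: the chance-level floor of the curve drops as m grows.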
M. Habibi; Fatemeh Moradi; Balal Izanlo
Volume 2, Issue 6, January 2012, Pages 1-27
Abstract
Background: The invariance of items and tests is an important issue in assessment.
Objectives: The present study was conducted to compare the invariance of the parameters in item-response theory and confirmatory factor analysis.
Methods: After reviewing the relevant foundations of each approach, the researchers compared the invariance of the parameters in each approach using empirical data from the Progress in International Reading Literacy Study (PIRLS). The sample consisted of 5000 Iranian students (half female and half male) from the 2006 administration who responded to six items of the attitude-toward-reading scale.
Results: Data analysis showed that question 6 was biased according to both item response theory and confirmatory factor analysis. The results differed, however, for questions 1, 3, and 4: question 1 was found to be biased based on item response theory only, while questions 3 and 4 were found to be biased based on confirmatory factor analysis only.
Conclusion: It is suggested that both approaches be employed when deciding on parameter invariance, since decisions based on only one may be misleading. It is also suggested that intercepts and differences in the ability distributions of the groups, and their effects on invariance, be considered first.
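The kind of item bias examined here can be illustrated with uniform DIF under a 2PL model: the same item is systematically harder for one group at every ability level, so its characteristic curves for the two groups are shifted apart. A minimal NumPy sketch with hypothetical parameters (not the PIRLS estimates):

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3.0, 3.0, 61)
# Same discrimination, higher difficulty in the focal group -> uniform DIF
p_ref = icc_2pl(theta, a=1.2, b=0.0)   # reference-group curve
p_foc = icc_2pl(theta, a=1.2, b=0.5)   # focal-group curve (item is harder)

# Simple effect-size summary: average absolute gap between the two curves
dif_size = float(np.mean(np.abs(p_ref - p_foc)))
```

For an invariant item the two curves coincide and the gap is zero; a persistent gap at matched ability is what both the IRT-based and CFA-based procedures in this study are designed to detect.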