Comparison of Q-Matrix Validations for a Cognitive Diagnostic Test between Expert Judgement Method and PVAF Index Method


Warute Phiwngam
Nhabhat Chaimongkol


The purpose of this research was to compare the quality of two Q-matrix validation methods for a cognitive diagnostic test: the expert judgement method and the PVAF index method. The cognitive diagnostic test in this research was a 40-item, four-choice test in which each item was designed to measure one to three attributes. The sample consisted of 225 grade 10 students in Bangkok in the 2020 academic year. The results were obtained using the G-DINA model as the cognitive diagnostic model, and the R program was used to analyze the quality of the Q-matrix with the PVAF index method.
The results showed that the Q-matrix validated by the expert judgement method measured the students’ attributes more accurately than the one validated by the PVAF index method, relative to the developed Q-matrix: the expert judgement method specified the Q-matrix without any misspecifications, whereas the PVAF index method correctly specified only 22 of the 40 items (55.0%), incorrectly specified 9 items (22.5%), and could not specify attributes for the remaining 9 items (22.5%).
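The PVAF (proportion of variance accounted for) index behind the second validation method quantifies how much of an item’s variance in success probabilities across latent attribute classes is explained by a candidate q-vector; a q-vector is typically retained when its PVAF exceeds a cutoff such as 0.95 (see Nájera et al., 2019, on cutoff choice). The following is a minimal Python sketch of that idea, not the authors’ analysis code (which used R): the function name `pvaf` and the toy probabilities are illustrative, and known latent-class success probabilities and weights are assumed rather than estimated from data.

```python
import itertools

def pvaf(q_candidate, p_full, weights):
    """Sketch of the PVAF index for one item (after de la Torre & Chiu, 2016).

    q_candidate : 0/1 list over K attributes (the q-vector being evaluated)
    p_full      : success probabilities for all 2^K attribute patterns,
                  ordered as itertools.product([0, 1], repeat=K)
    weights     : population weights of the 2^K patterns (sum to 1)
    """
    K = len(q_candidate)
    patterns = list(itertools.product([0, 1], repeat=K))

    # Overall success rate and total variance under the full q-vector (all ones).
    p_bar = sum(w * p for w, p in zip(weights, p_full))
    zeta2_max = sum(w * (p - p_bar) ** 2 for w, p in zip(weights, p_full))

    # Collapse latent classes by the attributes the candidate q-vector measures.
    groups = {}
    for pat, w, p in zip(patterns, weights, p_full):
        key = tuple(a for a, q in zip(pat, q_candidate) if q == 1)
        groups.setdefault(key, []).append((w, p))

    # Each collapsed group's best success probability is its weighted mean;
    # the variance these group means recover, over zeta2_max, is the PVAF.
    zeta2 = 0.0
    for members in groups.values():
        gw = sum(w for w, _ in members)
        gp = sum(w * p for w, p in members) / gw
        zeta2 += gw * (gp - p_bar) ** 2
    return zeta2 / zeta2_max

# Toy example: K = 2, and the item truly depends on attribute 1 only.
p = [0.2, 0.2, 0.8, 0.8]   # P(correct) for patterns (0,0), (0,1), (1,0), (1,1)
w = [0.25] * 4             # uniform class weights
```

With these numbers, `pvaf([1, 0], p, w)` is 1.0 (attribute 1 explains all the variance) while `pvaf([0, 1], p, w)` is 0, so the PVAF rule would correctly suggest the q-vector (1, 0) for this item.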



Article Details

Research Article


de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76(2), 179–199.

de la Torre, J., & Chiu, C.-Y. (2016). A general method of empirical Q-matrix validation. Psychometrika, 81(2), 253–273. https://doi.org/10.1007/s11336-015-9467-8

Finn, B. (2015). Measuring motivation in low-stakes assessments. ETS Research Report Series, 1–17. https://doi.org/10.1002/ets2.12067

Jang, E. E., & Wagner, M. (2013). Diagnostic feedback in the classroom. In A. J. Kunnan (Ed.), The companion to language assessment.

Johnson, R. L., Penny, J. A., & Gordon, B. (2009). Assessing performance: Designing, scoring, and validating performance tasks. Guilford Press.

Ma, W., & de la Torre, J. (2020a). An empirical Q-matrix validation method for the sequential generalized DINA model. British Journal of Mathematical and Statistical Psychology, 73(1), 142–163.

Ma, W., & de la Torre, J. (2020b). GDINA: An R package for cognitive diagnosis modeling. Journal of Statistical Software, 93(14), 1–26. https://doi.org/10.18637/jss.v093.i14

Nájera, P., Sorrel, M. A., & Abad, F. J. (2019). Reconsidering cutoff points in the general method of empirical Q-matrix validation. Educational and Psychological Measurement, 79(4), 727–753.

Qin, C., Jia, S., Fang, X., & Yu, X. (2020). Relationship validation among items and attributes. Journal of Statistical Computation and Simulation, 90(18), 3360–3375.

Waugh, C. K., & Gronlund, N. E. (2013). Assessment of Student Achievement (10th ed.). Pearson.

Office of the Education Council. (2019). Guidelines for development of students’ competency at the basic education level (1st ed., pp. 1–15). Ministry of Education, Thailand. (in Thai)

Kanjanawasee, S. (2013). Classical test theory (7th ed.). Chulalongkorn University. (in Thai)

Meesakul, S., Naiyapatana, O., Khampalikit, & Kritkharuehart, S. (2015). The study for the efficiency of diagnostic classification models. Research Methodology & Cognitive Science, 13(1), 27–37. (in Thai)

Chaimongkol, N. (2017). Assessment for cognitive diagnosis. Journal of the Social Science Research Association of Thailand, 4(1), 14–23. (in Thai)