Developing a Computerized Adaptive Test to Assess Students' Metacognition in an Open and Distance Learning Environment
Abstract
This study assessed the adaptability of MetaCAT, a computerized adaptive test developed on the Concerto platform to measure metacognition, in an open and distance learning (ODL) context. The participants were 70 undergraduate students from an open university, all with basic computer skills, who took the test in 2022.
The findings, based on three adaptation indicators, provided evidence of the system's effectiveness in assessing metacognition among ODL students. The first indicator assessed the relationship between each test taker's ability estimate and the difficulty of the items administered to them; the Pearson correlation coefficient of 0.70 denoted a significant association between ability estimates and administered item difficulty, a result considered acceptable within the scope of the study. The second indicator, the ratio of the standard deviation of administered item difficulties to the standard deviation of ability estimates, was 1.32, indicating that the spread of item difficulties somewhat exceeded the spread of ability estimates (a ratio near 1 is ideal). The third indicator, the proportional reduction in the variance of administered item difficulties relative to the item pool, was 0.61. Although slightly lower than the values reported in Reckase et al.'s simulation study, this still fell within acceptable levels of adaptation for the MetaCAT system.
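As a rough illustration of how these three adaptation indicators can be computed, the sketch below follows our reading of Reckase et al.'s (2018) measures and applies them to simulated data. The variable names (theta_hat, b_admin, b_pool) and all simulated values are hypothetical and are not drawn from the MetaCAT system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items_admin = 70, 20

# Hypothetical data: final ability estimates, the difficulties of the
# items each person was administered, and the full item-bank difficulties.
theta_hat = rng.normal(0.0, 1.0, n_persons)
b_pool = rng.normal(0.0, 1.2, 300)
# A well-adapted CAT selects items clustered around each person's ability.
b_admin = theta_hat[:, None] + rng.normal(0.0, 0.4, (n_persons, n_items_admin))
b_mean = b_admin.mean(axis=1)  # mean administered difficulty per person

# Indicator 1: correlation between ability estimates and mean administered
# item difficulty (values near 1 indicate strong adaptation).
r = np.corrcoef(theta_hat, b_mean)[0, 1]

# Indicator 2: ratio of the SD of mean administered difficulties to the
# SD of the ability estimates (a ratio near 1 indicates matched spread).
ratio = b_mean.std(ddof=1) / theta_hat.std(ddof=1)

# Indicator 3: proportional reduction in difficulty variance, comparing the
# pooled within-person variance of administered difficulties to the
# variance of the whole item pool (values near 1 indicate tight targeting).
var_within = b_admin.var(axis=1, ddof=1).mean()
prv = (b_pool.var(ddof=1) - var_within) / b_pool.var(ddof=1)

print(f"indicator 1 = {r:.2f}, indicator 2 = {ratio:.2f}, indicator 3 = {prv:.2f}")
```

Read against these scales, the study's observed values (0.70, 1.32, and 0.61) indicate adaptation that is reasonable but deviates from the ideal value of 1 on each indicator.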
The study demonstrated that MetaCAT has satisfactory adaptability and reliability for measuring metacognition in ODL settings, as supported chiefly by the first and third indicators. The deviation observed in the second indicator, together with the slightly low value of the third, points to potential areas for refinement; future research using a larger item pool and a larger sample could yield a more precise assessment of adaptability.
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The content and information in articles published in the Journal of Educational Measurement Mahasarakham University are the opinions and sole responsibility of the authors. The editorial board does not necessarily agree with, and accepts no responsibility for, that content.
The articles, data, content, images, and other material published in the Journal of Educational Measurement Mahasarakham University are copyrighted by the journal. Any individual or organization wishing to reproduce all or any part of this content must obtain prior written permission from the Journal of Educational Measurement Mahasarakham University.
References
Arbaugh, J.B., & Duray, R. (2002). Technological and structural characteristics, student learning and satisfaction with Web-based courses: An exploratory study of two on-line MBA programs. Management Learning, 33(3), 331-347.
Baker, F.B. (2001). The basics of item response theory (2nd ed.). ERIC Clearinghouse on Assessment and Evaluation.
Chiu, Y.C., Douglas, J., & Liang, J.C. (2021). Investigating the effectiveness of computerized adaptive testing for measuring metacognition. Journal of Educational Computing Research, 59(5), 1095-1117. https://doi.org/10.1177/0735633120932388
Choi, S.W., & Swartz, R.J. (2011). Comparison of CAT item selection criteria for the graded response model. Educational and Psychological Measurement, 71(1), 115-135. https://doi.org/10.1177/0013164410372102
Flavell, J.H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906-911.
Harrison, C., Loe, B.S., Lis, P., & Sidey-Gibbons, C. (2020). Maximizing the potential of patient-reported assessments by using the open-source concerto platform with computerized adaptive testing and machine learning [Tutorial]. Journal of Medical Internet Research, 22(10), e20950. https://doi.org/10.2196/20950
Jiang, S., Wang, C., & Weiss, D.J. (2016). Sample size requirements for estimation of item parameters in the multidimensional graded response model. Frontiers in Psychology, 7, Article 109. https://doi.org/10.3389/fpsyg.2016.00109
Joo, Y.J., Lim, K.Y., & Kim, C.J. (2011). Online university students' satisfaction and persistence: Examining perceived level of presence, usefulness and ease of use as predictors in a structural model. Computers & Education, 57(2), 1654-1664.
Koedsri, A., Na Nakorn, N., & Watanasuntorn, K. (2020). Preliminary study of the development of computerized adaptive testing on metacognition for primary and secondary school students. Paper presented at the 28th Thailand Measurement Evaluation and Research Conference 2020, Faculty of Education, Naresuan University.
McCombs, B.L., & Marzano, R.J. (1990). Putting the self in self-regulated learning: The self as agent in integrating will and skill. Educational Psychologist, 25(1), 51-69.
Moore, M.G., & Kearsley, G. (2012). Distance education: A systems view of online learning. Cengage Learning.
Ngudgratoke, S., Na Nakorn, N., Chutinuntakul, S., Phonapichat, P., & Sittirit, P. (2016). The development of a scale to measure and assess metacognition of primary and secondary school students [Research report]. NIETS. https://www.niets.or.th/th/content/view/5869
Pintrich, P.R. (2002). The role of metacognitive knowledge in learning, teaching, and assessing. Theory into Practice, 41(4), 219-225.
Reckase, M.D. (2009). Multidimensional item response theory. Springer.
Reckase, M.D., Ju, U., & Kim, S. (2018). Some measures of the amount of adaptation for computerized adaptive tests. In M. Wiberg, S. Culpepper, R. Janssen, J. González, & D. Molenaar (Eds.), Quantitative Psychology: The 82nd annual meeting of the Psychometric Society. Springer International Publishing.
Reckase, M.D., Ju, U., & Kim, S. (2019). How adaptive is an adaptive test: Are all adaptive tests adaptive? Journal of Computerized Adaptive Testing.
Samejima, F. (1997). Graded response model. In W.J. van der Linden & R.K. Hambleton (Eds.), Handbook of modern item response theory (pp. 85-100). Springer.
Schraw, G. (1997). The effect of generalized metacognitive knowledge on test performance and confidence judgments. The Journal of Experimental Education, 65, 135-146. http://dx.doi.org/10.1080/00220973.1997.9943788
Schraw, G., & Dennison, R.S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460-475. https://doi.org/10.1006/ceps.1994.1033
van der Linden, W.J. (2016). Computerized adaptive testing. In International Encyclopedia of the Social & Behavioral Sciences (pp. 366-371). Elsevier.
Wainer, H., & Mislevy, R.J. (2019). Computerized adaptive testing: A primer (3rd ed.). Routledge.
Wyse, A.E., & McBride, J.R. (2021). A framework for measuring the amount of adaptation of Rasch-based computerized adaptive tests. Journal of Educational Measurement, 58(1), 83-103. https://doi.org/10.1111/jedm.12267
Zimmerman, B.J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P.R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13-39). Elsevier.