Computerized Adaptive Testing System for 21st-Century Information and Communication Technology Literacy Skills of Undergraduate Students

Shotiga Pasiphol

Abstract

This study had three main purposes: 1) to develop the components for measuring the information and communication technology (ICT) literacy skills of undergraduate students in the 21st century; 2) to develop test items for an item bank of 21st-century ICT literacy skills; and 3) to develop and examine the quality of a computerized adaptive testing (CAT) system for measuring those skills. The sample for the test tryout consisted of 1,672 undergraduate students from state higher education institutions in Bangkok; the sample for the testing-system tryout consisted of 217 undergraduate students from the same institutions. Qualitative data were analyzed using content analysis; quantitative data were analyzed using descriptive statistics and t-tests, and test quality was analyzed using Item Response Theory (IRT). The research results were as follows:

1. The measurement of 21st-century ICT literacy skills comprised five components: 1) information accessibility, 2) information management, 3) information integration, 4) information evaluation, and 5) information communication.

2. The construction of test items for the 21st-century ICT literacy skills according to the operational definitions yielded five-option multiple-choice items, 52 items per component, for a total of 260 items. Most of the items (79.61 percent) passed the content validity evaluation. The reliability analysis of the 10 test forms yielded a mean reliability of .63. The analysis of item quality under IRT found that the unidimensional two-parameter logistic (2PL) model was most congruent with the response data for all of the tests. The selection of quality items for the item bank yielded 212 items, with a mean item difficulty of .18 (SD = 2.06) and a mean item discrimination of .70 (SD = .39). On the whole, the items selected for the item bank of the CAT system had moderate difficulty and discrimination.
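
For reference, the unidimensional 2PL model named above is conventionally written as follows; this is the standard form of the model, given here as background rather than reproduced from the article:

$$P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}$$

where $P_i(\theta)$ is the probability that an examinee with ability $\theta$ answers item $i$ correctly, $a_i$ is the item discrimination parameter, and $b_i$ is the item difficulty parameter. The bank statistics reported above (mean difficulty .18, mean discrimination .70) refer to these $b_i$ and $a_i$ values.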

3. The CAT system was designed for online delivery and was implemented in PHP. The system had four stages: 1) registration, 2) test assembly design, 3) test delivery, comprising ability estimation, item selection, and test termination, and 4) test score reporting. Accompanying manuals of the CAT system were developed for two types of users: 1) a manual for general users, and 2) a manual for system administrators. The quality of the CAT system was evaluated by a group of experts, both overall and on specific aspects, namely utility, feasibility of practical use, appropriateness, and accuracy. The evaluation results showed that the quality of the system reached the highest level in all aspects. Moreover, the students were satisfied at a high level with the functions of the CAT system with regard to display quality, ease of use, and system performance.
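
To make the test-delivery stage concrete, the sketch below shows a generic 2PL-based CAT loop with expected a posteriori (EAP) ability estimation, maximum-information item selection, and termination on a standard-error threshold or a maximum test length. The abstract does not specify the system's exact estimation, selection, or stopping rules, and the system itself was written in PHP, so everything here, including the names run_cat and answer_fn, the standard-normal prior, the 0.30 SE target, and the 30-item cap, is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(responses, a, b, grid=np.linspace(-4.0, 4.0, 81)):
    """EAP ability estimate and posterior SD (standard-normal prior, quadrature grid)."""
    posterior = np.exp(-0.5 * grid ** 2)            # unnormalized N(0, 1) prior
    for u, ai, bi in zip(responses, a, b):
        p = p_correct(grid, ai, bi)
        posterior *= p ** u * (1.0 - p) ** (1 - u)  # likelihood of response u in {0, 1}
    posterior /= posterior.sum()
    theta_hat = float((grid * posterior).sum())
    se = float(np.sqrt(((grid - theta_hat) ** 2 * posterior).sum()))
    return theta_hat, se

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta; used to pick the next item."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def run_cat(bank_a, bank_b, answer_fn, max_items=30, se_target=0.30):
    """Generic CAT loop: estimate ability, select the most informative item, stop
    when the standard error falls below se_target or max_items is reached.
    (Illustrative sketch only, not the article's PHP implementation.)"""
    available = list(range(len(bank_a)))
    used, responses = [], []
    theta, se = 0.0, float("inf")                   # start at the prior mean
    while available and len(used) < max_items and se > se_target:
        info = [item_information(theta, bank_a[i], bank_b[i]) for i in available]
        nxt = available.pop(int(np.argmax(info)))   # most informative remaining item
        used.append(nxt)
        responses.append(answer_fn(nxt))            # delivery interface scores the answer
        theta, se = eap_estimate(responses,
                                 [bank_a[i] for i in used],
                                 [bank_b[i] for i in used])
    return theta, se, used
```

Here answer_fn stands in for the delivery interface that records each scored response (1 = correct, 0 = incorrect); in the study's setting, a calibrated bank such as the 212-item bank described above would supply bank_a (discriminations) and bank_b (difficulties).
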
Article Details

Section
Research Article
