Alma Mater
ISSN 1026-955X
Vestnik Vysshey Shkoly (Higher School Herald)


Adaptive assessment in certification procedures for students and graduates

A.A. Malygin

UDC 378-044.3:005.6

DOI 10.20339/AM.08-23.039

 

Aleksei A. Malygin, Cand. Sci. (Pedagogy), Docent, Rector of Ivanovo State University; Head of the Ivanovo Scientific Centre of the Russian Academy of Education; RSCI AuthorID: 204438, Scopus Author ID: 57191283114, ORCID: 0000-0002-7812-4439, e-mail: malygin@ivanovo.ac.ru

 

The competency-based approach, which today underpins the design and implementation of educational programs for training professional specialists, specifies a certain set of competencies (universal, general professional, professional) to be formed in students and graduates as learning outcomes. Competencies, being latent characteristics, can be assessed only in activity. This latent characteristic, which explains why students and graduates are able to perform professional tasks, is the target of measurement. In practice, however, what is obtained are observable assessments of abilities or skills in performing quasi-professional tasks, and from these, conclusions about the level of formation of the latent competencies are drawn.

In turn, implementing the competency-based approach entails changing the approaches to assessing learning outcomes, since without such a change it is impossible to obtain objective, comparable (valid) and reliable information about achieved learning outcomes and the level of competency formation. This gives rise to the need to turn, on the one hand, to a mixed (bi-paradigm) methodology of educational measurements and, on the other, to the special mathematical apparatus of modern test theory (Item Response Theory, IRT), designed to estimate the latent parameters of examinees and the parameters of the tasks in assessment tools.

The proposed approaches to organizing and conducting certification procedures for students and graduates yield objective, comparable and well-founded results. Unlike traditional formats of the state exam for the final certification of graduates, adaptive assessment is attractive because it obtains more accurate estimates of examinees’ parameters (level of preparedness or level of competence formation) with fewer tasks, and because it creates a “success situation” for each student during measurement through machine selection of tasks that the examinee is able to perform. IRT algorithms based, for example, on the maximum likelihood method make it possible to implement the humanistic ideas of control and evaluation activities.
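As an illustration of the maximum likelihood estimation mentioned above (a minimal sketch, not the article’s own implementation), ability under the one-parameter Rasch model can be estimated from a response pattern and calibrated item difficulties by Newton–Raphson iteration; the function and parameter names here are hypothetical:

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=50):
    """Newton-Raphson maximum likelihood estimate of examinee ability.

    responses   : list of 0/1 item scores
    difficulties: calibrated item difficulty parameters b_i
    Assumes a mixed response pattern (a finite MLE does not exist
    for all-correct or all-incorrect patterns).
    """
    theta = 0.0
    for _ in range(iters):
        probs = [rasch_prob(theta, b) for b in difficulties]
        # First derivative of the log-likelihood w.r.t. theta
        grad = sum(x - p for x, p in zip(responses, probs))
        # Test information = negative second derivative
        info = sum(p * (1.0 - p) for p in probs)
        step = grad / info
        theta += step
        if abs(step) < 1e-8:
            break
    return theta
```

For example, `estimate_theta([1, 1, 0, 1, 0], [-1.0, -0.5, 0.0, 0.5, 1.0])` returns the ability at which the expected number correct equals the observed score of 3. The same test-information quantity also yields the standard error of measurement, which is what allows an adaptive test to reach a target accuracy with fewer tasks.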

To implement any of the described adaptive approaches in intermediate or final certification, the following conditions must be met: availability of a bank of calibrated tasks with stable characteristics (difficulty, discriminating ability) or of algorithms for cloning them; availability of computer programs or of a software and instrumental environment (service, platform) that uses one or more selected IRT models and helps achieve the highest possible measurement accuracy when assessing the level of preparedness or the formation of student competencies; and availability of specifications for the assessment tools that ensure the content validity of the measurement results.
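Given such a calibrated bank, the adaptive selection of the next task is commonly driven by the maximum information criterion: at the current ability estimate, pick the unadministered item that is most informative. A minimal sketch under the Rasch model (names and data structures are illustrative assumptions, not the article’s platform):

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item at ability level theta.

    Maximal when item difficulty b is closest to theta, i.e. when the
    probability of success is near 0.5 -- the 'success situation'.
    """
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def next_item(theta, bank, administered):
    """Select the most informative unadministered item.

    bank        : dict mapping item id -> calibrated difficulty b
    administered: set of item ids already given to the examinee
    """
    candidates = [(item, b) for item, b in bank.items()
                  if item not in administered]
    return max(candidates, key=lambda ib: rasch_info(theta, ib[1]))[0]
```

For instance, with a bank `{'a': -1.0, 'b': 0.0, 'c': 1.0}` and a current estimate of 0.1, the selector returns item `'b'`, whose difficulty is closest to the examinee’s ability.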

From a didactic point of view, the last condition is especially important: to obtain objective and comparable results during certification, the content elements of the educational program must be taken into account, and their coverage must be planned in the specification of the assessment tools.

Keywords: adaptive assessment, certification, validity, competence, competencies, reliability, learning outcomes, standards.

 
