EXPLAINING ARTIFICIAL INTELLIGENCE IN EDUCATION: APPROACHES TO INTEGRATION IN AUTOMATED INFORMATION SYSTEMS
DOI: https://doi.org/10.46687/jsar.v29i1.468

Keywords: Explainable artificial intelligence, Automated information systems, Machine learning, Education, Interpretability, Early warning, Learner dropout

Abstract
This paper provides a systematic review of approaches for integrating Explainable Artificial Intelligence (XAI) into educational Automated Information Systems (AIS). It categorizes XAI methods into post-hoc explanations, inherently interpretable models, hybrid approaches, and calibration/visualization techniques, analyzing their strengths, limitations, and applicability for tasks such as learner dropout prediction, early warning, and personalized mentoring. Practical examples illustrate the benefits and challenges of XAI adoption, including the trade-off between accuracy and interpretability, technical barriers, and privacy concerns. Future directions include role-adaptive explanations, visual and interactive interfaces, and standardized quality metrics for educational contexts.
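One of the categories the review discusses, inherently interpretable models, can be illustrated with a minimal sketch of a logistic dropout-risk score whose per-feature contributions are directly readable by a mentor. All feature names, coefficients, and thresholds below are illustrative assumptions, not values taken from the paper:

```python
import math

# Hypothetical weights for a logistic dropout-risk model; the features
# and coefficients here are illustrative assumptions, not results
# reported in the reviewed literature.
WEIGHTS = {
    "missed_deadlines": 0.8,    # per missed deadline
    "avg_grade_deficit": 0.05,  # per point below the class average
    "days_since_login": 0.1,    # per day of platform inactivity
}
BIAS = -3.0

def explain_dropout_risk(student):
    """Return the predicted risk plus per-feature contributions.

    Because the model is a plain logistic regression, each term
    w_i * x_i is itself the explanation: no post-hoc method is
    needed to tell a mentor which factor drives the score."""
    contributions = {f: WEIGHTS[f] * student[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-logit))
    return risk, contributions

risk, contribs = explain_dropout_risk(
    {"missed_deadlines": 3, "avg_grade_deficit": 10, "days_since_login": 7}
)
print(f"risk={risk:.2f}")
# Features ranked by how much each one raises the risk score:
for name, c in sorted(contribs.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: +{c:.2f}")
```

This transparency is exactly what the accuracy-versus-interpretability trade-off concerns: a more accurate black-box model would require a post-hoc technique such as SHAP to recover comparable per-feature attributions.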
References
Siemens, G., & Long, P. (2011). Penetrating the fog: Analytics in learning and education. EDUCAUSE Review, 46(5), 30–40. (https://er.educause.edu/articles/2011/9/penetrating-the-fog-analytics-in-learning-and-education) (visited on 13.10.2025).
Rudin, C. (2019). Stop explaining black box machine learning models for high-stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://arxiv.org/abs/1811.10154.
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052.
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (Vol. 30, pp. 4765–4774). https://arxiv.org/abs/1705.07874.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 93. https://arxiv.org/abs/1802.01933.
Fancsali, S. E., Li, H., & Ritter, S. (2018). Towards an Early Warning System for At-risk Students Using Explainable Machine Learning. Proceedings of the 11th International Conference on Educational Data Mining, 1–10. International Educational Data Mining Society. https://doi.org/10.5281/zenodo.3554740.
Pane, J. F., Steiner, E. D., Baird, M. D., Hamilton, L. S., & Pane, J. D. (2017). Informing Progress: Insights on Personalized Learning Implementation and Effects. RAND Corporation. https://doi.org/10.7249/RR2042.
Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning. (http://fairmlbook.org) (visited on 13.10.2025).
Bauer, K., von Zahn, M., & Hinz, O. (2023). Expl(AI)ned: The impact of explainable artificial intelligence on users' information processing. Information Systems Research. (https://madoc.bib.uni-mannheim.de/65911/1/bauer-et-al-2023-expl%28ai%29ned-the-impact-of-explainable-artificial-intelligence-on-users-information-processing.pdf) (visited on 13.10.2025).
Poikola, A., Kuikkaniemi, K., & Honko, H. (2020). MyData – An introduction to human-centered personal data management. MyData Global. (https://mydata.org/wp-content/uploads/2020/08/mydata-white-paper-english-2020.pdf) (visited on 13.10.2025).
Molnar, C. (2022). Interpretable machine learning: A guide for making black box models explainable. (https://christophm.github.io/interpretable-ml-book/) (visited on 13.10.2025).
Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235. https://doi.org/10.1080/17439884.2020.1798995.
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012.
Gunasekara, S., & Saarela, M. (2025). Explainable AI in Education : Techniques and Qualitative Assessment. Applied Sciences, 15(3), Article 1239. https://doi.org/10.3390/app15031239.
Atanasov, V. (2024). An approach to support web application development using cognitive machine. Yearbook of Shumen University "Bishop Konstantin Preslavsky", Vol. XIV F. Bishop Konstantin Preslavsky Publishing House. ISSN: 1314-8818.
Atanasov, V. T. (2020). Transposition issues in digital learning process. Conference Proceedings, Vol. 1 (pp. 117–124). Konstantin Preslavsky University Press. ISSN: 1314-3921.
License
Copyright (c) 2025 JOURNAL SCIENTIFIC AND APPLIED RESEARCH

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
https://orcid.org/0000-0003-3668-6713