Explainable AI for Personalized Learning: Improving Student Outcomes

Authors

  • Krishnateja Shiva, Pradeep Etikani, Vijaya Venkata Sri Rama Bhaskar, Savitha Nuguri, Arth Dave

Keywords

Explainable AI (XAI), personalized learning, educational technology, artificial intelligence in education, student engagement, learning outcomes, LIME, SHAP.

Abstract

The goal of this work is to improve student outcomes through the application of Explainable Artificial Intelligence (XAI) in learning contexts. The effectiveness of several XAI methods, including LIME, SHAP, and attention mechanisms, for increasing the interpretability of AI-based education systems is explored. The paper also addresses the significance of data collection, data processing, and ethical considerations when applying XAI. The findings indicate that adopting XAI improves student performance, enriches learning experiences, and increases motivation. However, resolving privacy-related concerns and striking the right balance between interpretability and model complexity remain challenging. Future work will extend XAI to broader learning scenarios, develop more sophisticated layered models, and integrate XAI with emerging technologies. In conclusion, XAI holds substantial potential to transform personalized learning by improving transparency, efficiency, and adaptability.
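To illustrate the kind of interpretability the abstract refers to, the following minimal sketch applies SHAP to a classifier that predicts student pass/fail outcomes. The dataset, feature names, and model choice are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch (illustrative, not the paper's experiment): explaining a
# student-outcome classifier with SHAP. Feature names and data are synthetic.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-student features (engagement and assessment signals).
feature_names = ["time_on_task", "quiz_avg", "forum_posts", "video_completion", "prior_gpa"]
X, y = make_classification(n_samples=500, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple model standing in for the "AI-based education system".
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer yields per-feature SHAP values: how much each feature pushed an
# individual student's prediction above or below the model's expected output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Older SHAP versions return a list per class; newer ones a 3D array. Take the
# positive ("pass") class in either case.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif shap_values.ndim == 3:
    shap_values = shap_values[..., 1]

# Mean absolute SHAP value per feature: a global, human-readable ranking of
# what drives the model's recommendations.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In practice, the per-student SHAP values (rather than the global ranking) are what would be surfaced to learners and instructors as explanations for individual recommendations.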

Published

2024-04-24

How to Cite

Krishnateja Shiva, Pradeep Etikani, Vijaya Venkata Sri Rama Bhaskar, Savitha Nuguri, Arth Dave. (2024). Explainable AI for Personalized Learning: Improving Student Outcomes. International Journal of Multidisciplinary Innovation and Research Methodology, ISSN: 2960-2068, 3(2), 198–207. Retrieved from https://ijmirm.com/index.php/ijmirm/article/view/100