Title
Extended constructed response questions scoring with adaptive feedback
Publisher
Mohamed Abdellatif Hussein Mohamed
Author
Mohamed Abdellatif Hussein Mohamed
Thesis Committee
Researcher / Mohamed Abdellatif Hussein Mohamed
Supervisor / Hesham Ahmed Hassan
Supervisor / Mohammed Nassef Fatouh
Examiner / Hesham Ahmed Hassan
Publication Date
2021
Number of Pages
114 leaves
Language
English
Degree
Doctorate
Specialization
Computer Science Applications
Degree Award Date
29/3/2021
Degree Awarding Institution
Cairo University - Faculty of Computers and Information - Computer Science and Artificial Intelligence
Abstract

Over the past years, many Automated Essay Scoring (AES) systems have been created based on Artificial Intelligence (AI) models. Advances in deep learning have demonstrated that applying neural network approaches to AES systems achieves state-of-the-art results. Most neural-based AES systems assign an overall score or mark to essays, even when the essays are scored using analytical scoring rubrics. Scoring each trait in an analytical rubric helps to detect learners’ levels of performance. Additionally, offering each learner adaptive feedback about his or her writing is a vital component of assessing performance. Constructing adaptive feedback for each learner enables identification of the learner’s strengths and weaknesses and helps improve the learner’s future writing. In this thesis, a framework is built to reinforce the validity of the scoring process and increase the reliability of a baseline neural-based AES model by evaluating individual writing traits in addition to the overall writing. The model is extended, based on the predicted trait scores, to deliver trait-specific adaptive feedback. Multiple deep learning models for automatic scoring were explored, and several analyses were conducted to derive indicators from these models. The experimental findings demonstrate that the Long Short-Term Memory (LSTM) based system outperforms the baseline study by 4.6% in terms of the Quadratic Weighted Kappa (QWK). Likewise, predicting the trait scores improves the accuracy of the overall essay score prediction. It is also found that the LSTM model is the best model for predicting scores for essays containing relatively long sequences of words, which is consistent with the nature of LSTM models. Finally, the clarity of the scoring rubrics is found to influence the accuracy of both the human scores and the scores of the proposed model (AESAUG).
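
As an illustration of the kind of neural scoring model the abstract describes, the sketch below shows a minimal multi-output LSTM essay scorer that predicts an overall score together with per-trait scores from a shared encoder, and notes how agreement with human raters could be measured with the Quadratic Weighted Kappa. This is a hypothetical sketch, not the thesis's actual AESAUG architecture: the vocabulary size, sequence length, layer sizes, and number of traits are illustrative assumptions.

```python
# Minimal sketch of a multi-output neural essay scorer (illustrative only; not the
# thesis's AESAUG architecture). A shared LSTM encoder feeds two heads: one for the
# overall score and one for the analytical trait scores. All sizes are assumptions.
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 600        # assumed maximum essay length in tokens
N_TRAITS = 4         # assumed number of rubric traits (e.g. content, organization, ...)

tokens = layers.Input(shape=(MAX_LEN,), dtype="int32", name="essay_tokens")
x = layers.Embedding(VOCAB_SIZE, 100, mask_zero=True)(tokens)  # word embeddings
x = layers.LSTM(128)(x)                                        # shared essay encoder
x = layers.Dropout(0.5)(x)

overall = layers.Dense(1, activation="sigmoid", name="overall_score")(x)       # normalized overall score
traits = layers.Dense(N_TRAITS, activation="sigmoid", name="trait_scores")(x)  # normalized trait scores

model = Model(inputs=tokens, outputs=[overall, traits])
model.compile(optimizer="rmsprop",
              loss={"overall_score": "mse", "trait_scores": "mse"})

# Evaluation against human raters would map the normalized predictions back to the
# rubric's integer score range and compute the Quadratic Weighted Kappa, e.g.:
# from sklearn.metrics import cohen_kappa_score
# qwk = cohen_kappa_score(human_scores, predicted_scores, weights="quadratic")
```

Sharing one encoder between the overall-score head and the trait-score head is one plausible way trait prediction could support the overall-score prediction, in the spirit of the improvement the abstract reports.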