dc.description.abstract |
The evaluation of answer scripts is vital for assessing a student’s performance. The manual evaluation of
the answers can sometimes be biased. The assessment depends on various factors, including the evaluator’s
mental state, their relationship with the student, and their level of expertise in the subject matter. These
factors make evaluating descriptive answers a very tedious and time-consuming task. Automatic scoring
approaches can be utilized to simplify the evaluation process. This paper presents an automated answer script
evaluation model that aims to reduce the need for human intervention, minimize bias caused by changes in the
evaluator's psychological state, save time, keep a record of evaluations, and simplify extraction. The proposed
method can automatically weight the assessed elements and produce results nearly identical to an instructor's. To
evaluate the developed model, we compared its grades with the teacher's grades, as well as with the results of
several keyword-matching and similarity-check techniques (see the illustrative sketch below). |
en_US |
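The abstract mentions keyword-matching and similarity-check techniques as baselines but does not describe them. Below is a minimal, hedged sketch of what such baselines commonly look like: a keyword-overlap score and a bag-of-words cosine similarity between a student answer and a reference answer. The function names, scoring scheme, and example texts are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: simple keyword-overlap and bag-of-words cosine
# similarity baselines of the kind the abstract refers to. These are
# assumptions for illustration, not the authors' proposed model.
from collections import Counter
import math
import re


def tokenize(text):
    """Lowercase a text and split it into word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())


def keyword_match_score(student_answer, keywords):
    """Fraction of expected keywords that appear in the student's answer."""
    tokens = set(tokenize(student_answer))
    hits = sum(1 for kw in keywords if kw.lower() in tokens)
    return hits / len(keywords) if keywords else 0.0


def cosine_similarity(student_answer, reference_answer):
    """Cosine similarity between bag-of-words vectors of the two answers."""
    a = Counter(tokenize(student_answer))
    b = Counter(tokenize(reference_answer))
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# Hypothetical example: score a short answer against a reference answer.
reference = "Photosynthesis converts light energy into chemical energy in plants."
answer = "Plants use photosynthesis to turn light into chemical energy."
print(keyword_match_score(answer, ["photosynthesis", "light", "energy"]))
print(cosine_similarity(answer, reference))
```

Such scores could then be compared against instructor-assigned grades, which is the kind of comparison the abstract describes.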