|dc.description.abstract||Automated essay analysis is one of the most important educational applications of natural language processing. The practical value of automated essay analysis systems is that they can perform essay analysis tasks more quickly, cheaply, and consistently than a human can, and can be made available to students at any time. For example, a human teacher may spend hours grading a class's essay assignments and, owing to fatigue, may not grade the last essay with as much care as the first. An automated essay analysis system, by contrast, can grade a stack of essays in seconds and can be guaranteed to grade the last essay as carefully as the first.
Unfortunately, automated essay analysis is complicated by the fact that essay analysis tasks often require a deep understanding of the essay text, so designing accurate automated essay analysis systems is not a trivial task. For example, we cannot judge how well an essay is organized from the words it contains alone. Instead, we must often develop task-specific features to help a computer make sense of the text.
This dissertation focuses on advancing the state of the art in automated essay analysis. Specifically, we define and present new computational approaches to seven essay analysis tasks, namely 1) scoring how well an essay is organized, 2) scoring the clarity of its thesis, 3) detecting the errors it makes that hinder its thesis's clarity, 4) scoring how well it adheres to the prompt it was written in response to, 5) scoring the quality of its argument, 6) detecting the stance its author takes on a given topic, and 7) detecting the structure of the argument it makes. For each of these tasks, our approach significantly outperforms competing approaches in an evaluation on student essays annotated for the task. To stimulate future work on automated essay analysis, we make the annotations we produced for these tasks publicly available.||