CriterionSM Online Essay Evaluation:
An Application for Automated Evaluation of Student Essays
Jill Burstein
Educational Testing Service
Rosedale Road, 18E
Princeton, NJ 08541
jburstein@ets.org
Martin Chodorow
Department of Psychology
Hunter College
695 Park Avenue
New York, NY 10021
martin.chodorow@hunter.cuny.edu
Claudia Leacock
Educational Testing Service
Rosedale Road, 18E
Princeton, NJ 08541
cleacock@ets.org
Abstract
This paper describes a deployed educational technology
application: the CriterionSM Online Essay Evaluation
Service, a web-based system that provides automated
scoring and evaluation of student essays. Criterion has
two complementary applications: E-rater, an automated
essay scoring system, and Critique Writing Analysis
Tools, a suite of programs that detect errors in grammar,
usage, and mechanics, that identify discourse elements in
the essay, and that recognize elements of undesirable
style. These evaluation capabilities provide students with
feedback that is specific to their writing in order to help
them improve their writing skills. Both applications employ
natural language processing and machine learning
techniques. All of these capabilities outperform baseline
algorithms, and some of the tools agree with human
judges as often as two judges agree with each other.
1. Introduction
The best way to improve one’s writing skills is to write,
receive feedback from an instructor, revise based on the
feedback, and then repeat the whole process as often as
possible. Unfortunately, this puts an enormous load on the
classroom teacher who is faced with reading and providing
feedback for perhaps 30 essays or more every time a topic
is assigned. As a result, teachers are not able to give
writing assignments as often as they would wish.
With this in mind, researchers have sought to develop
applications that automate essay scoring and evaluation.
Work in automated essay scoring began in the early 1960s
and has been extremely productive (Page 1966; Burstein et
al. 1998; Foltz, Kintsch, and Landauer 1998).