Experimental Evaluation of a Planning
Language Suitable for Formal Verification
Rick W. Butler1, César A. Muñoz2, and Radu I. Siminiceanu2
1 NASA Langley Research Center, Hampton, Virginia, USA.
2 National Institute of Aerospace, Hampton, Virginia, USA.
Abstract. The marriage of model checking and planning faces two seemingly
divergent alternatives: the need for a planning language expressive enough
to capture the complexity of real-life applications, versus a simpler
language that is nonetheless robust enough to be amenable to exhaustive
verification and validation techniques. In an attempt to reconcile these
differences, we have designed an abstract plan description language, ANMLite,
inspired by the Action Notation Modeling Language (ANML) [17]. We
present the basic concepts of the ANMLite language as well as an au-
tomatic translator from ANMLite to the model checker SAL (Symbolic
Analysis Laboratory) [7]. We discuss various aspects of specifying a plan
in terms of constraints and explore the implications of choosing a robust
logic behind the specification of constraints, rather than simply proposing
a new planning language. Additionally, we provide an initial assessment
of the efficiency of model checking to search for solutions of planning
problems. To this end, we design a basic test benchmark and study the
scalability of the generated SAL models in terms of plan complexity.
1 Introduction
Historically, the fields of planning and formal verification have had very little
interaction. As branches of Artificial Intelligence, planning and scheduling have
mainly focused on developing powerful search heuristics for efficiently finding
solutions to extremely complex, specialized problems that take into account
intricate, domain-specific information. Traditionally, this field has been heavily
influenced by the goals of one of its major sponsoring agencies (NASA Ames
Research Center) and its affiliated institutes (RIACS, JPL). The planning software
produced is in a perpetual process of expansion to include the latest