A Brief Introduction to
Dynamic Logic Programming

Michael Genesereth
Computer Science Department
Stanford University

1. Introduction

Logic Programming is programming by description. In writing a logic program, the programmer uses the language of symbolic logic to describe the application area of the program without reference to the internal data structures or operations of the system executing the program. In this regard, a logic program is more of a specification than an implementation; and logic programs are often called runnable specifications.

Programming systems in this way has multiple benefits. In writing or reading or modifying logic programs, programmers can get by with little or no knowledge of the capabilities and limitations of the systems executing those programs. As a result, logic programs are easier to produce than traditional programs; they are easier to understand; and they are easier to modify. Logic programs are more versatile than traditional programs - they can be used in multiple ways for multiple purposes without modification. And they are more amenable to programmatic analysis and optimization.

Over the years, numerous variations of Logic Programming have been explored (e.g. Disjunctive Logic Programming, Constraint Logic Programming, Abductive Logic Programming, Inductive Logic Programming, and Answer Set Programming), and a variety of languages have been developed (e.g. Datalog and Prolog and Golog). In this paper, we focus on a simple form of Logic Programming, called Dynamic Logic Programming, and we describe a corresponding language, called Epilog.

This paper is a brief introduction to Dynamic Logic Programming. In the next section, we talk about sentential databases, i.e. sets of simple facts. After that, we introduce basic logic programs, i.e. definitions of relations in terms of other relations, and we show how to combine datasets and view definitions to form deductive databases. Finally, we introduce operation rules and dynamic logic programs, and we show how they can be used to specify behavioral information.

2. Sentential Databases

When we think about the world, we usually think in terms of objects and relationships among these objects. Objects include things like people and offices and buildings. Relationships include things like parenthood, ancestry, office assignments, office locations, and so forth.

In sentential databases, we encode each instance of a relationship in the form of a simple sentence consisting of a relation constant (representing the relationship) and some object constants (representing the objects involved in the instance). For example, we can use the relation constant parent to represent the relationship between a parent and his or her child; we can use the object constants art and bob to refer to two people; and, as we shall see shortly, we can use this vocabulary to write a sentence stating that art is the parent of bob.

The vocabulary of a database is a collection of object constants and relation constants. Each relation constant has an associated arity, i.e. the number of objects involved in any instance of the corresponding relation.

A simple sentence (or datum or factoid) is an expression formed from an n-ary relation constant and n object constants. We write data in mathematical notation. For example, we can write parent(art,bob) to express the fact that Art is the parent of Bob.

A database instance, or dataset, is any set of data that can be formed from the vocabulary of a database. Intuitively, we can think of the data in a database instance as the facts that we believe to be true in the world; data that are not in the instance are assumed to be false.

As an example of these concepts, consider a small interpersonal database. The object constants in this case represent people. The relation constants name properties of these people and their relationships with each other.

In our example, we use the binary relation constant parent to specify that one person is a parent of another. The sentences below constitute a database describing six instances of the parent relation. The person named art is a parent of the person named bob; art is also a parent of bea, and so forth.

parent(art,bob)
parent(art,bea)
parent(bob,carl)
parent(bea,coe)
parent(carl,daisy)
parent(carl,daniel)

The adult relation is a unary relation, i.e. a simple property of a person, not a relationship with other people. Everyone in our database is an adult except for daisy and daniel.

adult(art)
adult(bob)
adult(bea)
adult(carl)
adult(coe)

We can express gender with two unary relation constants male and female. The following sentences express the genders of all of the people in our database. Note that, in principle, we need only one relation here, since one gender is the complement of the other. However, representing both allows us to enumerate instances of both genders equally efficiently, which can be useful in certain applications.

male(art)
male(bob)
male(carl)
male(daniel)
female(bea)
female(coe)
female(daisy)

As an example of a ternary relation, consider the data shown below. Here, we use prefers to represent the fact that the first person likes the second person more than the third person. For example, the first sentence says that art prefers bea to bob; the second sentence says that carl prefers daisy to daniel.

prefers(art,bea,bob)
prefers(carl,daisy,daniel)

Note that the order of arguments in such sentences is important. Given the meaning of the prefers relation in our example, the first argument denotes the subject, the second argument denotes the person who is preferred, and the third argument denotes the person who is less preferred. Of course, we could equally well have interpreted the arguments in other orders. The important thing is consistency - once we choose to interpret the arguments in one way, we must stick to that interpretation everywhere.

3. View Definitions

The language of logic programs includes the language of sentential databases but provides some additional features that make it more expressive.

One key difference is the inclusion of a new type of symbol, called a variable. Variables allow us to state relationships among objects without explicitly naming those objects. In what follows, we use individual capital letters as variables, e.g. X, Y, Z.

An atom in a logic program is analogous to a simple sentence in a database except that it may include variables. For example, p(a,b) is an atom but so is p(a,X) and p(X,b) and p(X,X) and p(X,Y).

A literal is either an atom or a negation of an atom (i.e. an expression stating that the atom is false). A simple atom is called a positive literal; the negation of an atom is called a negative literal. In what follows, we write negative literals using the negation sign ~. For example, if p(a,b) is an atom, then ~p(a,b) denotes the negation of this atom.

A rule is an expression consisting of a distinguished atom, called the head, and a conjunction of zero or more literals, called the body. The literals in the body are called subgoals. In what follows, we write rules as in the example shown below. Here, r(a,b) is the head; p(a,b) & ~q(b) is the body; and p(a,b) and ~q(b) are subgoals.

r(a,b) :- p(a,b) & ~q(b)

Semantically, a rule is something like a reverse implication. It is a statement that the head of the rule is true whenever the subgoals are true. For example, the rule above states that r(a,b) is true if p(a,b) is true and q(b) is not true.

The expressive power of rules is greatly enhanced through the use of variables. Consider, for example, the rule shown below. This is a more general version of the rule shown above. Instead of applying to just the specific objects a and b, it applies to all objects. In this case, the rule states that r is true of any object X and any object Y if p is true of X and Y and q is not true of Y.

r(X,Y) :- p(X,Y) & ~q(Y)

A static logic program is a set of facts and rules of the form just described. Note that, for the purposes of analysis, it is sometimes useful to think about infinite sets of rules. However, we cannot write down such sets; so, in what follows, we restrict our attention to finite logic programs.

The principal use of rules is to define new relations in terms of existing relations. The new relations defined in this way are often called view relations (or simply views) to distinguish them from base relations, which are defined by explicit enumeration of instances.

To illustrate the use of rules in defining views, consider once again the world of interpersonal relations. Starting with the base relations, we can define various interesting view relations.

As an example, consider the sentences shown below. The first sentence defines the father relation in terms of parent and male. The second sentence defines mother in terms of parent and female.

father(X,Y) :- parent(X,Y) & male(X)
mother(X,Y) :- parent(X,Y) & female(X)

The rule below defines the grandparent relation in terms of the parent relation. A person X is the grandparent of a person Z if X is the parent of a person Y and Y is the parent of Z. The variable Y here is a thread variable that connects the first subgoal to the second but does not itself appear in the head of the rule.

grandparent(X,Z) :- parent(X,Y) & parent(Y,Z)
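
Views defined in this way can themselves be used in the definitions of other views. As a further example (an additional view, not part of the definitions above), we can define the grandfather relation in terms of the grandparent view and the male relation.

grandfather(X,Y) :- grandparent(X,Y) & male(X)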

Note that the same relation can appear in the head of more than one rule. For example, the person relation is true of a person Y if there is an X such that X is the parent of Y or if Y is the parent of some person Z. Note that in this case the conditions are disjunctive (at least one must be true), whereas the conditions in the grandparent case are conjunctive (both must be true).

person(X) :- parent(X,Y)
person(Y) :- parent(X,Y)

A person X is an ancestor of a person Z if X is the parent of Z or if there is a person Y such that X is an ancestor of Y and Y is an ancestor of Z. This example shows that it is possible for a relation to appear in its own definition. (See discussions of stratification for restrictions on this capability.)

ancestor(X,Y) :- parent(X,Y)
ancestor(X,Z) :- ancestor(X,Y) & ancestor(Y,Z)
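
To see what these rules entail, consider the parent facts listed earlier. In addition to the six ancestor facts that come directly from the first rule, the second rule yields the facts shown below.

ancestor(art,carl)
ancestor(art,coe)
ancestor(art,daisy)
ancestor(art,daniel)
ancestor(bob,daisy)
ancestor(bob,daniel)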

A childless person is one who has no children. We can define the property of being childless with the rules shown below. The first rule states that a person X is childless if X is a person and it is not the case that X is a parent. The second rule says that isparent is true of X if X is the parent of some person Y.

childless(X) :- person(X) & ~isparent(X)
isparent(X) :- parent(X,Y)

Note the use of the helper relation isparent here. It is tempting to write the childless rule as childless(X) :- person(X) & ~parent(X,Y). However, this would be wrong. It would define X to be childless if X is a person and there is some Y such that ~parent(X,Y) is true. But we really want to say that ~parent(X,Y) holds for all Y. Defining isparent and using its negation in the definition of childless allows us to express this universal quantification.
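
Applied to the data in our interpersonal database, these rules pick out the people who appear in the parent relation but never as a first argument, giving the results shown below.

childless(coe)
childless(daisy)
childless(daniel)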

In addition to writing definitions of common relations, rules are frequently used to write one-off queries on databases, including cases where the queries do not correspond to common relations.

As a simple example, suppose we wanted to know which grandchildren of art are adults. We could ask this question by writing the query shown below.

query(Z) :- grandparent(art,Z) & adult(Z)
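
Evaluated against our interpersonal database, this query produces the answers shown below, since carl and coe are the grandchildren of art and both are adults.

query(carl)
query(coe)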

Another use of rules is encoding constraints on databases, i.e. conditions that limit the set of acceptable database instances. For example, a person cannot be his or her own parent. In some cases, constraints involve multiple relations. For example, all parents are adults; in other words, if a person appears as the first argument of a parent fact, the person must also appear as an argument in the adult relation.

In many database texts, constraints are written in direct form - by writing rules that say, in effect, that if certain things are true in an extension, then other things must also be true. The inclusion dependency mentioned above is an example - if an entity appears in the first column of the parent relation, it must also appear as an entry in the adult relation.

In what follows, we use a slightly less direct approach - we encode limitations by writing rules that say when a database is not well-formed. We simply invent a new 0-ary relation, here called illegal, and define it to be true in any extension that does not satisfy our constraints.

This approach works particularly well for consistency constraints like the one stating that a person cannot be his or her own parent.

illegal :- parent(X,X)

It also works well for mutual exclusion constraints like the one below, which states that a person cannot be in both the male and the female relations.

illegal :- male(X) & female(X)

Using this technique, we can also write the inclusion dependency mentioned earlier. There is an error if an entity is in the first column of the parent relation and it does not occur in the adult relation.

illegal :- parent(X,Y) & ~adult(X)
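
The same technique applies to other relations as well. For example, if we wanted to rule out datasets in which a person prefers someone to that same person (a purely illustrative constraint, not required by anything above), we could write the rule shown below.

illegal :- prefers(X,Y,Y)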

Database management systems can use such constraints in a variety of ways. They can be used to optimize the processing of queries. They can also be used to check that updates do not lead to unacceptable extensions.

4. Operation Definitions

In the preceding section, we saw how to write rules to define view relations in terms of base relations. Once defined, we can use these views in queries and in the definitions of other views. In this section, we look at how to write rules that define operations as changes to base relations. Once defined, we can use those operations directly or in the definitions of other operations. It is important to keep in mind the differences between views and operations - views are used in talking about facts that are true in states whereas operations are used in talking about changes to states.

The syntax of operation definitions is analogous to the syntax for view definitions. The various types of constants are the same, and the notions of term and atom and literal are also the same. However, to these, we add a few new items.

To denote operations, we designate some constants as operation constants. As with relation constants, each operation constant has a fixed arity - unary, binary, and so forth.

An action is an application of an operation to specific objects. In what follows, we denote actions using a syntax similar to that of atomic sentences, viz. an n-ary operation constant followed by n terms enclosed in parentheses and separated by commas. For example, if f is a binary operation constant and a and b are symbols, then f(a,b) denotes the action of applying the operation f to a and b.

An operation definition rule (or, more simply, an operation rule) is an expression of the form shown below. Each rule consists of (1) an action expression, (2) a double colon, (3) a literal or a conjunction of literals, (4) a double-shafted forward arrow (==>), and (5) a literal or an action expression or a conjunction of literals and action expressions. The action expression to the left of the double colon is called the head; the literals between the double colon and the arrow are called conditions; and the literals and action expressions to the right of the arrow are called effects.

γ   ::    [~]φ1 & ... & [~]φm    ==>    [~]ψ1 & ... & [~]ψn & γ1 & ... & γk

Intuitively, the meaning of an operation rule is simple. If the conditions of a rule are true in any state, then executing the action in the head requires that we execute the effects of the rule.

For example, the following rule states that, in any state in which p(a,b) is true and q(a) is false, executing click(a) requires that we remove p(a,b) from our dataset, add q(a), and perform the action click(b).

click(a) :: p(a,b) & ~q(a) ==> ~p(a,b) & q(a) & click(b)

As with rules defining views, operation rules may contain variables to express information in a compact form. For example, we can write the following rule to generalize the preceding rule to all objects.

click(X) :: p(X,Y) & ~q(X) ==> ~p(X,Y) & q(X) & click(Y)

As with view rules, safety is a consideration. Safety in this case means that every variable among the effects of a rule or in negative conditions also appears in the head of the rule or in the positive conditions.

The operation rules shown above are both safe. However, the rules shown below are not. The second effect of the first rule contains a variable that does not appear in the head or in any positive condition. In the second rule, there is a variable that appears in a negative condition that does not appear in the head or in any positive condition.

click(X) :: p(X,Y) & ~q(X) ==> ~p(X,Y) & q(Z) & click(Y)
click(X) :: p(X,Y) & ~q(Z) ==> ~p(X,Y) & q(X) & click(Y)
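
One way to repair such rules is to bind the offending variable with an additional positive condition. For example, the following variant of the first rule is safe, since Z now appears in the positive condition p(Y,Z). (This is just one possible repair, shown here for illustration.)

click(X) :: p(X,Y) & p(Y,Z) & ~q(X) ==> ~p(X,Y) & q(Z) & click(Y)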

In some operation rules there is no condition, i.e. the effects of the rule take place on all datasets. We can, of course, write such rules by using the condition true, as in the following example.

click(X) :: true ==> ~p(X) & q(X)

For the sake of simplicity in writing our examples, we sometimes abbreviate such rules by dropping the condition and the arrow and instead writing just the effects as the body of the operation rule. For example, we can abbreviate the rule above as shown below.

click(X) :: ~p(X) & q(X)

An operation definition is a collection of operation rules in which the same operation appears in the head of every rule. As with view definitions, we are interested primarily in rulesets that are finite. However, in analyzing operation definitions, we occasionally talk about the set of all ground instances of the rules, and in some cases these sets are infinite.

The semantics of operation definitions is similar to the semantics of view definitions. In what follows, we first define the expansion of an action in the context of a given dataset, and we then define the result of performing that action on that dataset.

Suppose we are given a set Ω of rules, a set Γ of expressions (factoids, negated factoids, and actions), and a dataset Δ. We say that an instance of a rule in Ω is active with respect to Γ and Δ if and only if the head of the rule is in Γ and the conditions of the rule are all true in Δ.

Given this notion, we define the expansion of an action γ with respect to rule set Ω and dataset Δ as follows. Let Γ0 be {γ} and let Γi+1 be the set of all effects of instances of rules in Ω that are active with respect to Γi and Δ. We define the expansion U(γ,Ω,Δ) as the fixpoint of this series. Equivalently, it is the union of the sets Γi, for all non-negative integers i.

Next, we define the positive updates A(γ,Ω,Δ) to be the positive base factoids in U(γ,Ω,Δ). We define the negative updates D(γ,Ω,Δ) to be the set of all negative factoids in U(γ,Ω,Δ).

Finally, we define the result of applying an action γ to a dataset Δ as the result of removing the negative updates from Δ and adding the positive updates, i.e. the result is (Δ - D(γ,Ω,Δ)) ∪ A(γ,Ω,Δ).
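
As a small illustration of these definitions, consider the click rule shown earlier, i.e. click(X) :: p(X,Y) & ~q(X) ==> ~p(X,Y) & q(X) & click(Y), together with a hypothetical dataset containing just p(a,b) and p(b,c). The expansion of click(a) is shown below. On the first round, the instance with X=a and Y=b is active and contributes ~p(a,b), q(a), and click(b); on the second round, the instance with X=b and Y=c contributes ~p(b,c), q(b), and click(c); after that, no instance is active.

click(a)
~p(a,b)
q(a)
click(b)
~p(b,c)
q(b)
click(c)

The positive updates are q(a) and q(b); the negative updates are ~p(a,b) and ~p(b,c); and so the result of applying click(a) to this dataset is the dataset containing just q(a) and q(b).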

As a more extended illustration, consider an application with a dataset representing a directed acyclic graph. In the sentences below, we use symbols to designate the nodes of the graph, and we use the edge relation to designate the arcs of the graph.

edge(a,b)
edge(b,d)
edge(b,e)

The following operation definition defines a binary operation copy that copies the outgoing arcs in the graph from its first argument to its second argument.

copy(X,Y) :: edge(X,Z) ==> edge(Y,Z)

Given this operation definition and the dataset shown above, expanding copy(b,c) produces the updates shown below. In this case, the factoids representing the two arcs emanating from b are copied to c.

edge(c,d)
edge(c,e)

After executing this action, we end up with the following dataset.

edge(a,b)
edge(b,d)
edge(b,e)
edge(c,d)
edge(c,e)

The following rule defines a unary operation invert that reverses the outgoing arcs of the node specified as its argument.

invert(X) :: edge(X,Y) ==> ~edge(X,Y) & edge(Y,X)

Expanding invert(c) produces the updates shown below. In this case, the factoids with c as their first argument have all been reversed.

~edge(c,d)
~edge(c,e)
edge(d,c)
edge(e,c)

After executing this action, we end up with the following dataset.

edge(a,b)
edge(b,d)
edge(b,e)
edge(d,c)
edge(e,c)

Finally, the following operation rules define a binary operation that inserts a new node into the graph (the first argument) with an arc to the second argument and arcs to all of the nodes that are reachable from the second argument.

insert(X,Y) :: edge(X,Y)
insert(X,Y) :: edge(Y,Z) ==> insert(X,Z)

The expansion of insert(w,b) is shown below. The first rule adds edge(w,b) to the expansion. The second rule adds insert(w,d) and insert(w,e). Given these actions, on the next round of expansion, the first rule adds edge(w,d) and edge(w,e), and the second rule adds insert(w,c). On the third round of expansion, we get edge(w,c). At this point, neither rule adds any additional items to our expansion, and the process terminates.

insert(w,b)
edge(w,b)
insert(w,d)
insert(w,e)
edge(w,d)
edge(w,e)
insert(w,c)
edge(w,c)

Applying this action to the preceding dataset produces the result shown below.

edge(a,b)
edge(b,d)
edge(b,e)
edge(d,c)
edge(e,c)
edge(w,b)
edge(w,d)
edge(w,e)
edge(w,c)

Note that it is possible to define insert in other ways. We could, for example, define a view of edge that relates each node to every node that can be reached from the node; and we could then use this view in a non-recursive definition of insert. However, this would require us to introduce a new view into our vocabulary; and, for many people, this is less clear than the definition shown above.
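
For reference, here is a sketch of that alternative. The view name reachable is invented for this purpose; the first two rules define reachability recursively, and insert then uses the view non-recursively.

reachable(X,Y) :- edge(X,Y)
reachable(X,Z) :- edge(X,Y) & reachable(Y,Z)

insert(X,Y) :: edge(X,Y)
insert(X,Y) :: reachable(Y,Z) ==> edge(X,Z)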

5. Conclusion

In practice, it is common to extend the simple version of Dynamic Logic Programming described here to include "built-in" relations (e.g. arithmetic) and other operators (e.g. aggregates). The syntax and semantics of such extensions are a little messy. Luckily, they pose no significant theoretical challenges; and, in the interest of brevity, they are not covered here.

The intent of this article is to provide a concise but reasonably rigorous account of the syntax and semantics of Dynamic Logic Programming. For motivation and examples of all of these concepts, see the textbook Dynamic Logic Programming.