1. Introduction

Logic Programming is programming by description. In writing a logic program, the programmer uses the language of symbolic logic to describe the application area of the program without reference to the internal data structures or operations of the system executing the program. In this regard, a logic program is more of a specification than an implementation; and logic programs are often called runnable specifications.

Programming in this way has multiple benefits. In writing, reading, or modifying logic programs, programmers can get by with little or no knowledge of the capabilities and limitations of the systems executing those programs. As a result, logic programs are easier to produce than traditional programs; they are easier to understand; and they are easier to modify. Logic programs are more versatile than traditional programs - they can be used in multiple ways for multiple purposes without modification. And they are more amenable to programmatic analysis and optimization.

Over the years, numerous variations of Logic Programming have been explored (e.g. Disjunctive Logic Programming, Constraint Logic Programming, Abductive Logic Programming, Inductive Logic Programming, and Answer Set Programming), and a variety of languages have been developed (e.g. Datalog, Prolog, and Golog). In this paper, we focus on the most basic form of Logic Programming, called Basic Logic Programming, and we describe a corresponding language, called Epilog.

This paper is a brief introduction to Basic Logic Programming. In the next section, we talk about sentential databases, i.e. sets of simple facts. After that, we introduce basic logic programs, i.e. definitions of relations in terms of other relations, and we show how to combine datasets and view definitions to form deductive databases. Finally, we introduce transition rules and dynamic logic programs, and we show how they can be used to specify behavioral information.

2. Sentential Databases

When we think about the world, we usually think in terms of objects and relationships among these objects. Objects include things like people and offices and buildings. Relationships include things like parenthood, ancestry, office assignments, office locations, and so forth.

In sentential databases, we encode each instance of a relationship in the form of a simple sentence consisting of a relation constant (representing the relationship) and some object constants (representing the objects involved in the instance). For example, we can use the relation constant parent to represent the relationship between a parent and his or her child; we can use the object constants art and bob to refer to two people; and, as we shall see shortly, we can use this vocabulary to write a sentence stating that art is the parent of bob.

The vocabulary of a database is a collection of object constants and relation constants. Each relation constant has an associated arity, i.e. the number of objects involved in any instance of the corresponding relation. A simple sentence, or datum, is an expression formed from an n-ary relation constant and n object constants. We write data in mathematical notation. For example, we can write parent(art,bob) to express the fact that art is the parent of bob. A database instance, or dataset, is any set of data that can be formed from the vocabulary of a database.
Intuitively, we can think of the data in a database instance as the facts that we believe to be true in the world; data that are not in the instance are assumed to be false. As an example of these concepts, consider a small interpersonal database. The object constants in this case represent people. The relation constants name properties of these people and their relationships with each other. In our example, we use the binary relation constant parent to specify that one person is a parent of another. The sentences below constitute a database describing six instances of the parent relation. The person named art is a parent of the person named bob; art is also a parent of bea, and so forth.
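One such dataset is shown here. The first two facts come directly from the description above; the remaining four, including the additional person coe, are illustrative assumptions used to round out the six instances.

  parent(art,bob)
  parent(art,bea)
  parent(bob,carl)    % assumption
  parent(bea,coe)     % assumption
  parent(carl,daisy)  % assumption
  parent(carl,daniel) % assumption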
The adult relation is a unary relation, i.e. a simple property of a person, not a relationship with other people. Everyone in our database is an adult except for daisy and daniel.
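The following sentences capture this state of affairs. (The fact involving coe depends on the assumed dataset above.)

  adult(art)
  adult(bob)
  adult(bea)
  adult(carl)
  adult(coe) % assumption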
We can express gender with two unary relation constants male and female. The following sentences express the genders of all of the people in our database. Note that, in principle, we need only one relation here, since one gender is the complement of the other. However, representing both allows us to enumerate instances of both genders equally efficiently, which can be useful in certain applications.
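Here is one such assignment. The particular genders are illustrative assumptions, chosen to be consistent with the rest of the examples in this paper.

  % the gender assignments below are assumptions
  male(art)
  male(bob)
  male(carl)
  male(daniel)
  female(bea)
  female(coe)
  female(daisy)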
As an example of a ternary relation, consider the data shown below. Here, we use prefers to represent the fact that the first person likes the second person more than the third person. For example, the first sentence says that art prefers bea to bob; the second sentence says that carl prefers daisy to daniel.
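  prefers(art,bea,bob)
  prefers(carl,daisy,daniel)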
Note that the order of arguments in such sentences is important. Given the meaning of the prefers relation in our example, the first argument denotes the subject, the second argument denotes the person who is preferred, and the third argument denotes the person who is less preferred. Of course, we could equally well have interpreted the arguments in other orders. The important thing is consistency - once we choose to interpret the arguments in one way, we must stick to that interpretation everywhere.

3. View Definitions

The language of logic programs includes the language of sentential databases but provides some additional features that make it more expressive. One key difference is the inclusion of a new type of symbol, called a variable. Variables allow us to state relationships among objects without explicitly naming those objects. In what follows, we use individual capital letters as variables, e.g. X, Y, Z.

An atom in a logic program is analogous to a simple sentence in a database except that it may include variables. For example, p(a,b) is an atom, but so are p(a,X), p(X,b), p(X,X), and p(X,Y).

A literal is either an atom or a negation of an atom (i.e. an expression stating that the atom is false). A simple atom is called a positive literal; the negation of an atom is called a negative literal. In what follows, we write negative literals using the negation sign ~. For example, if p(a,b) is an atom, then ~p(a,b) denotes the negation of this atom.

A rule is an expression consisting of a distinguished atom, called the head, and a conjunction of zero or more literals, called the body. The literals in the body are called subgoals. In what follows, we write rules as in the example shown below. Here, r(a,b) is the head; p(a,b) & ~q(b) is the body; and p(a,b) and ~q(b) are subgoals.
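  r(a,b) :- p(a,b) & ~q(b)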
Semantically, a rule is something like a reverse implication. It is a statement that the head of the rule is true whenever the subgoals are true. For example, the rule above states that r(a,b) is true if p(a,b) is true and q(b) is not true. The expressive power of rules is greatly enhanced through the use of variables. Consider, for example, the rule shown below. This is a more general version of the rule shown above. Instead of applying to just the specific objects a and b, it applies to all objects. In this case, the rule states that r is true of any object X and any object Y if p is true of X and Y and q is not true of Y.
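  r(X,Y) :- p(X,Y) & ~q(Y)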
A static logic program is a set of facts and rules of the form just described. Note that, for the purposes of analysis, it is sometimes useful to think about infinite sets of rules. However, we cannot write down such sets; so, in what follows, we restrict our attention to finite logic programs.

The principal use of rules is to define new relations in terms of existing relations. The new relations defined in this way are often called view relations (or simply views) to distinguish them from base relations, which are defined by explicit enumeration of instances.

To illustrate the use of rules in defining views, consider once again the world of interpersonal relations. Starting with the base relations, we can define various interesting view relations. As an example, consider the sentences shown below. The first sentence defines the father relation in terms of parent and male. The second sentence defines mother in terms of parent and female.
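  father(X,Y) :- parent(X,Y) & male(X)
  mother(X,Y) :- parent(X,Y) & female(X)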
The rule below defines the grandparent relation in terms of the parent relation. A person X is the grandparent of a person Z if X is the parent of a person Y and Y is the parent of Z. The variable Y here is a thread variable that connects the first subgoal to the second but does not itself appear in the head of the rule.
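  grandparent(X,Z) :- parent(X,Y) & parent(Y,Z)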
Note that the same relation can appear in the head of more than one rule, as in the two rules shown below. The person relation is true of a person Y if there is an X such that X is the parent of Y or if Y is the parent of some person Z. Note that in this case the conditions are disjunctive (at least one must be true), whereas the conditions in the grandparent case are conjunctive (both must be true).
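  person(Y) :- parent(X,Y)
  person(Y) :- parent(Y,Z)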
A person X is an ancestor of a person Z if X is the parent of Z or if there is a person Y such that X is an ancestor of Y and Y is an ancestor of Z. The rules below capture this definition. This example shows that it is possible for a relation to appear in its own definition. (But see the discussion of stratification for a restriction on this capability.)
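  ancestor(X,Z) :- parent(X,Z)
  ancestor(X,Z) :- ancestor(X,Y) & ancestor(Y,Z)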
A childless person is one who has no children. We can define the property of being childless with the rules shown below. The first rule states that a person X is childless if X is a person and it is not the case that X is a parent. The second rule says that isparent is true of X if X is the parent of some person Y.
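  childless(X) :- person(X) & ~isparent(X)
  isparent(X) :- parent(X,Y)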
Note the use of the helper relation isparent here. It is tempting to write the childless rule as childless(X) :- person(X) & ~parent(X,Y). However, this would be wrong. This would define X to be childless if X is a person and there is some Y such that ~parent(X,Y) is true, i.e. some Y of whom X is not the parent. But we really want to say that ~parent(X,Y) holds for all Y. Defining isparent and using its negation in the definition of childless allows us to express this universal quantification.

In addition to writing definitions of common relations, rules are frequently used to write one-off queries on databases, including cases where the queries do not correspond to common relations. As a simple example, suppose we wanted to know all people who are grandparents of adults. We could ask this question by writing the query shown below.
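The head relation in such a query is just an invented name; we use goal here, but any otherwise unused relation constant would do.

  goal(X) :- grandparent(X,Y) & adult(Y)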
Another use of rules is encoding constraints on databases. Such constraints limit the set of acceptable databases. For example, a person cannot be his or her own parent. In some cases, constraints involve multiple relations. For example, all parents are adults; in other words, if a person appears as the first argument of a parent fact, the person must also appear as an argument in the adult relation.

In many database texts, constraints are written in direct form - by writing rules that say, in effect, that if certain things are true in an extension, then other things must also be true. The inclusion dependency mentioned above is an example - if an entity appears in the first column of the parent relation, it must also appear as an entry in the adult relation. In what follows, we use a slightly less direct approach - we encode limitations by writing rules that say when a database is not well-formed. We simply invent a new 0-ary relation, here called illegal, and define it to be true in any extension that does not satisfy our constraints.

This approach works particularly well for consistency constraints like the one stating that a person cannot be his or her own parent. It also works well for mutual exclusion constraints like the one stating that a person cannot be in both the male and the female relations. Using this technique, we can also write the inclusion dependency mentioned earlier - there is an error if an entity is in the first column of the parent relation and it does not occur in the adult relation. The rules below express these three constraints.
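  illegal :- parent(X,X)
  illegal :- male(X) & female(X)
  illegal :- parent(X,Y) & ~adult(X)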
Database management systems can use such constraints in a variety of ways. They can be used to optimize the processing of queries. They can also be used to check that updates do not lead to unacceptable extensions.

4. Transition Rules

In our discussion thus far, we have been talking about the use of datasets and static logic programs to describe individual states of the world. In many application areas, it is necessary to describe not just individual states but also transitions between states. Transition rules and dynamic logic programs provide the means for us to describe such changes.

The language of dynamic logic programming is a superset of the language of ordinary logic programming. Every ordinary logic program is also a dynamic logic program. As in ordinary logic programming, we can write ground atoms and view definitions. However, in dynamic logic programming, we can also write "transition rules", which encode information about how the state of the world changes (over time or in response to external stimuli).

A transition rule is an expression of the form shown below. Each rule consists of (1) a literal (an atom or a negation of an atom) or a conjunction of literals, (2) a double-shafted forward arrow, and (3) a literal or a conjunction of literals. The literals on the left are called conditions, and the literals on the right are called effects.
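In what follows, we write the double-shafted arrow as ==>. Schematically, a transition rule with conditions c1, ..., ck and effects e1, ..., en looks like this.

  c1 & ... & ck ==> e1 & ... & en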
Intuitively, the meaning of a transition rule is simple. If the conditions of a rule are true in any state, then the effects must be true in the next state. (Remember that, for a negative literal to be true, the atom inside the negation must be false.) For example, the following rule expresses the fact that, when p(a) is true and q(a) is false, then p(a) becomes false and q(a) becomes true in the next state.
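  p(a) & ~q(a) ==> ~p(a) & q(a)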
As with view definitions, the conditions and effects of rules may contain variables to express dynamic information in a compact form. For example, we can write the following rule to express the fact that the preceding transition rule holds for all objects.
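  p(X) & ~q(X) ==> ~p(X) & q(X)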
As with view definitions, transition rules are required to be safe. In this case, this means that every variable among the effects of a rule or in its negative conditions must appear in the positive conditions of that rule. The transition rules shown above are all safe. However, the rules shown below are not. The effect of the first rule contains a variable that does not appear in any condition. In the second rule, there is a variable that appears in a negative condition but does not appear in any positive condition.
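The following two rules illustrate these violations. (The particular relations here are arbitrary; any rules with the same variable patterns would be equally unsafe.)

  p(X) ==> q(X,Y)
  p(X) & ~q(X,Y) ==> r(X)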
If we were to allow the first of these rules, the resulting dataset might contain infinitely many effects (all the instances of the rule's effect). If we were to allow the second rule, we might have to check infinitely many conditions (all of the instances of the negated condition).

A dynamic logic program (DLP) is simply a collection of view definitions and transition rules. As with ordinary logic programs, we are interested primarily in dynamic logic programs that are finite. However, in analyzing dynamic logic programs, we occasionally talk about infinite DLPs, e.g. the set of all ground instances of the rules.

A deterministic closed dynamic system is one that operates without external input, changing from one state to the next in a deterministic fashion. These are sometimes called closed systems or Markov systems. Note that, in addition to deterministic closed dynamic systems, there are also non-deterministic closed dynamic systems (systems in which a state may have multiple successor states) and probabilistic closed dynamic systems (systems in which the transitions between states have different probabilities of occurring). However, in this paper, we concentrate exclusively on deterministic closed dynamic systems.

Transition rules are an effective way of describing the behavior of closed systems. As an example, consider a simple dynamic system like the one shown below. There are three conditions that can hold in a state - p, q, and r - meaning that the system has a total of eight possible states. The state transition diagram (or state graph) shown here illustrates the behavior of the system. Note that each state has a unique successor state. Note also that, once the system enters some states, it never returns to those states. At the same time, the system loops among some other states over and over again.

The following transition rules express the behavior of this system. Given any state in which p is true, p becomes false in the next state; and vice versa. If p is true in a state, then q becomes true in the next state. If p and q are true in a state, then r becomes true in the next state; otherwise it becomes false.
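One plausible encoding of this description is sketched below. The last two rules capture the "otherwise" clause by making r false whenever p or q is false.

  p ==> ~p
  ~p ==> p
  p ==> q
  p & q ==> r
  ~p ==> ~r
  ~q ==> ~r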
This, of course, is a very simple closed system. There are many examples of closed systems in the real world. In fact, the universe as a whole can be viewed as a closed system (though there are some who would argue that it is really a non-deterministic closed system).

An open dynamic system is one in which state changes depend not only on the current state of the system but also on inputs from the system's external environment. The most common example of such inputs are actions performed by one or more agents operating on the system.

As an example, consider a variation on the closed system described above. Again, the state of the system is based on three conditions p, q, and r. However, in this variation, the behavior of the system is determined not just by the current state but also by the actions of an external agent. In the version shown here, the agent has three possible actions - a, b, and c. Doing action a toggles p; doing action b interchanges p and q; and doing action c interchanges q and r. We can describe the behavior of this simple system with the transition rules shown below.
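In this sketch, we represent the action the agent performs in a state by a simple 0-ary condition (a, b, or c); this encoding of actions is one plausible choice, made here for illustration.

  % a, b, c are 0-ary conditions marking the action taken (an assumed encoding)
  a & p ==> ~p
  a & ~p ==> p
  b & p & ~q ==> ~p & q
  b & ~p & q ==> p & ~q
  c & q & ~r ==> ~q & r
  c & ~q & r ==> q & ~r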
Note that, if the system started in a state in which all three conditions were false, it could achieve a state in which they are all true by executing the action sequence a, b, c, a, b, a. Can you think of a different sequence of actions that would do the trick? How many sequences are there that produce the desired state?

Readings

Michael Genesereth, Matt Ginsberg: Logic Programming, Communications of the Association for Computing Machinery, Volume 28, Issue 9, September 1985, pages 933-941.

Doug DeGroot, Gary Lindstrom: Logic Programming - Functions, Relations, and Equations, Prentice-Hall, 1986.