Quantifier Elimination Over Linear Real Arithmetic by Enumerating all Possible Root Orders

Quantifier elimination for linear real arithmetic (LRA) is the problem of taking in a formula with quantifiers and producing a logically equivalent formula without quantifiers.

Linear Real Arithmetic (LRA)

LRA includes all first-order formulas whose atomic formulas consist of a linear expression and a comparator. For example:

 3x + 2 = 0

is a formula in linear real arithmetic, as is:

 \exists x. \forall y. (2x - 5z + 7 > 0) \land ((2x + y \leq 0) \lor (5y - 35 = 0))

A formula like:

 \exists x. x^2 - y + 1 < 0

is not in LRA because the  x term is raised to the power of 2.


Example with One Variable

Let's eliminate the quantifier from:

 \exists x. x + 2 = 0 \land -x + 3 > 0

Readers who are good at algebra can probably solve this problem in their heads, but to get a sense of what an algorithm for quantifier elimination over LRA would look like, let's go through it in detail.

There are two linear expressions of interest:

 x + 2
 -x + 3

A plot of these two lines would show the following behavior:

The function  x + 2 is negative at every point from  -\infty to -2, is zero at  -2 and is positive at every point after  -2 .

The function  -x + 3 is positive from  -\infty to  3 , is zero at  3 and is negative at every point after that.

We can express this compactly in a table (called a sign table) like so:

  Interval       |  x + 2  |  -x + 3
  (-\infty, -2)  |    -    |    +
  x = -2         |    0    |    +
  (-2, 3)        |    +    |    +
  x = 3          |    +    |    0
  (3, \infty)    |    +    |    -

To satisfy our original formula we need a value of  x where  x + 2 is zero and  -x + 3 is positive. We can check whether such a value exists by scanning each row of the table for a row where the  x + 2 column is 0 and the  -x + 3 column is +. Row two, corresponding to the point  x = -2 , satisfies this condition, so the formula  True is the quantifier-free equivalent of the original formula.
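The table scan above is mechanical enough to sketch in code. This is an illustrative sketch (the helper names  sign and  sign_table are hypothetical), and it builds the table from numeric roots, which only works in the single-variable case:

```python
# Build the sign table for x + 2 and -x + 3 from their roots,
# then scan its rows for a (0, +) row.

def sign(v):
    return "-" if v < 0 else ("0" if v == 0 else "+")

def sign_table(roots, polys):
    """Rows for the intervals determined by the sorted distinct roots.

    Each row holds the sign of every polynomial (given as a callable)
    at a sample point of that interval.
    """
    rs = sorted(set(roots))
    samples = [rs[0] - 1]                        # a point before the first root
    for i, r in enumerate(rs):
        samples.append(r)                        # the root itself
        if i + 1 < len(rs):
            samples.append((r + rs[i + 1]) / 2)  # between this root and the next
        else:
            samples.append(r + 1)                # a point after the last root
    return [[sign(p(s)) for p in polys] for s in samples]

p1 = lambda x: x + 2    # root at -2
p2 = lambda x: -x + 3   # root at 3
table = sign_table([-2, 3], [p1, p2])
# exists x. (x + 2 = 0 and -x + 3 > 0) is satisfiable
# iff some row has signs ("0", "+").
print(any(row == ["0", "+"] for row in table))  # True
```

The five rows correspond exactly to the five intervals of the sign table above.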

Notice that because we know both functions involved are lines, and because we know each line's slope, the structure of the sign table is completely determined by the ordering of the roots of the lines.

This example had only one variable, so it was possible to solve each linear equation explicitly for its root and then use the roots' values to determine the layout of the sign table. Nothing in our analysis changes, however, if we replace the numerical values of the roots with symbols like  R_1 and  R_2 , determine the order in which the roots occur, and then use that ordering to build the sign table.

The catch is that when a formula contains more than one variable there is no single correct ordering of the roots, since the ordering depends on the values of the remaining variables.

The solution to this problem is to compute every possible ordering of the roots and compute a sign table for each ordering. We will see how this works in the next example.
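Concretely, an ordering of the roots is an ordered partition of the roots into groups, where roots in the same group are equal. A brute-force enumeration might look like the following sketch (the function name  weak_orderings is illustrative):

```python
# Enumerate every possible ordering of a set of roots, including ties.
# Each ordering is a tuple of groups, listed left to right; roots in
# the same group are equal.
from itertools import permutations

def weak_orderings(roots):
    n = len(roots)
    seen = set()
    for perm in permutations(roots):
        # choose where to put "<" breaks between consecutive elements
        for mask in range(2 ** (n - 1)):
            groups, cur = [], [perm[0]]
            for i in range(1, n):
                if mask >> (i - 1) & 1:
                    groups.append(tuple(sorted(cur)))
                    cur = [perm[i]]
                else:
                    cur.append(perm[i])
            groups.append(tuple(sorted(cur)))
            key = tuple(groups)
            if key not in seen:
                seen.add(key)
                yield key

# Two roots yield three orderings: R1 < R2, R1 = R2, R2 < R1.
print(len(list(weak_orderings(["R1", "R2"]))))  # 3
```

For two roots this reproduces the three cases we use in the next example.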

Example with Multiple Variables

Let's eliminate the quantifier from:

 \exists x. (3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0)

This formula contains 2 linear expressions:

 3x + 4y - 7

 2x - y + 3

Since we are eliminating the quantified variable  x from our formula, let's think of these as linear expressions in  x :

 3x + (4y - 7)

 2x - (y - 3)

The first of these expressions is zero when:

 x = \frac{-4y + 7}{3}

And the second has a root when:

 x = \frac{y - 3}{2}

For brevity let's write:

 R_1 \equiv \frac{-4y + 7}{3}

 R_2 \equiv \frac{y - 3}{2}

Depending on the value of  y there are 3 possible orderings for these roots:

 R_1 < R_2

 R_1 = R_2

 R_2 < R_1

For each of these three possibilities we can construct a sign table that will break up the  x axis into intervals along which both of our expressions have constant signs.

For  R_1 < R_2 the sign table looks like:

  Interval        |  3x + 4y - 7  |  2x - y + 3
  (-\infty, R_1)  |       -       |      -
  x = R_1         |       0       |      -
  (R_1, R_2)      |       +       |      -
  x = R_2         |       +       |      0
  (R_2, \infty)   |       +       |      +

The sign table has five rows (three open intervals and the two roots).  3x + 4y - 7 is linear in  x with a positive slope of 3 and a root at  R_1 , so it is negative at all points before  R_1 , zero at  R_1 , and positive after  R_1 . The reasoning for  2x - y + 3 is the same, with slope 2 and root  R_2 .

For  R_1 = R_2 the sign table looks like:

  Interval        |  3x + 4y - 7  |  2x - y + 3
  (-\infty, R_1)  |       -       |      -
  x = R_1 = R_2   |       0       |      0
  (R_1, \infty)   |       +       |      +

Notice that because  R_1 = R_2 we do not need separate intervals for each of them like we did in the case where  R_1 < R_2 .

Finally the sign table for  R_2 < R_1 is:

  Interval        |  3x + 4y - 7  |  2x - y + 3
  (-\infty, R_2)  |       -       |      -
  x = R_2         |       -       |      0
  (R_2, R_1)      |       -       |      +
  x = R_1         |       0       |      +
  (R_1, \infty)   |       +       |      +

Note that given one of our three root order constraints we can decide whether the original formula is true by table lookup. The formula requires  3x + 4y - 7 < 0 and  2x - y + 3 \geq 0 , so it has a solution if and only if the sign table has a row where the sign of  3x + 4y - 7 is negative and the sign of  2x - y + 3 is zero or positive.

By inspecting the three tables we can see that such a row exists only when  R_2 < R_1 , so the only way there is an  x such that our formula is satisfied is if:

 \frac{y - 3}{2} < \frac{-4y + 7}{3}

which, after multiplying both sides by 6, simplifies to the condition:

 3y - 9 < -8y + 14

simplifying further:

 11y - 23 < 0

Notice that this last expression is an expression in LRA! In fact all three of our possible root orderings can be written as formulas in linear real arithmetic.

The other crucial point is that, since the real numbers are a totally ordered field, one of the three orderings must hold, so we can write:

 \exists x. (3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0)

is equivalent to:

 \exists x. (((3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0)) \land True)

Which is equivalent to:

 \exists x. (((3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0)) \land ((R_1 - R_2 < 0) \lor (R_1 - R_2 = 0) \lor (R_2 - R_1 < 0)))

Now for brevity let's write:

 F \equiv ((3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0))

So we have:

 \exists x. (F \land ((R_1 - R_2 < 0) \lor (R_1 - R_2 = 0) \lor (R_2 - R_1 < 0)))

Now we can distribute F over the disjunction to get:

 \exists x. ((F \land (R_1 - R_2 < 0)) \lor (F \land (R_1 - R_2 = 0)) \lor (F \land (R_2 - R_1 < 0)))

Then we can distribute the existential quantifier over each formula:

 (\exists x. F \land (R_1 - R_2 < 0)) \lor (\exists x. F \land (R_1 - R_2 = 0)) \lor (\exists x. F \land (R_2 - R_1 < 0))

And since  x does not appear in any of the root order conditions this is just:

 ((\exists x. F) \land (R_1 - R_2 < 0)) \lor ((\exists x. F) \land (R_1 - R_2 = 0)) \lor ((\exists x. F) \land (R_2 - R_1 < 0))

But each clause's root order condition on  R_1 and  R_2 determines a sign table, and that sign table lets us decide whether any  x makes  F True. So we can replace  \exists x. F in each clause of the disjunction with its truth value under that clause's root ordering.

As we already saw in the discussion of sign tables the only root ordering where there is an x that makes F  True is the condition  R_2 - R_1 < 0 , so our formula becomes:

 (False \land (R_1 - R_2 < 0)) \lor (False \land (R_1 - R_2 = 0)) \lor (True \land (R_2 - R_1 < 0))

Which simplifies to:

 11y - 23 < 0

And we have eliminated the quantifier and all occurrences of  x !
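As a quick sanity check on the worked example, we can test numerically that, for sample values of  y , the two constraints admit a solution in  x exactly when  11y - 23 < 0 holds. This check is illustrative, not part of the algorithm:

```python
# For each sample y, solvability of the two constraints in x should
# agree with the quantifier-free result 11y - 23 < 0.

def exists_x(y):
    # 3x + 4y - 7 < 0   iff   x <  (7 - 4y) / 3
    # 2x - y + 3 >= 0   iff   x >= (y - 3) / 2
    lower = (y - 3) / 2        # inclusive lower bound on x
    upper = (7 - 4 * y) / 3    # exclusive upper bound on x
    return lower < upper       # the interval [lower, upper) is nonempty

# 23/11 is roughly 2.09; sample points on both sides of it
for y in [-10, 0, 2, 2.1, 3, 10]:
    assert exists_x(y) == (11 * y - 23 < 0)
print("agrees on all samples")
```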

Outline of the Algorithm

Suppose we have a quantifier-free formula (QFF)  F in linear real arithmetic with variables  x_1, x_2, ..., x_n , and we want to eliminate the quantifier from  \exists x_1. F[x_1, ..., x_n] .

The idea of the algorithm is to distribute  F[x_1, ..., x_n] over a disjunction of every possible root ordering, and then find the truth value of  F[x_1, ..., x_n] under each ordering. For brevity we will write  F[x_1, ..., x_n] as  F and the quantified variable  x_1 as  x .

 \exists x . F \equiv \exists x. F \land True

Since the real line is totally ordered one of the possible orders of roots must be true. Suppose  D is the set of all possible orders, then we can write the formula as:

 \exists x. F \land (\bigvee \limits_{d \in D} d)

Distributing F we get:

 \exists x. \bigvee \limits_{d \in D} (F \land d)

Distributing the existential quantifier, and using the fact that the quantified variable  x does not occur in any ordering  d , we get:

 \bigvee \limits_{d \in D} ((\exists x. F) \land d)

Now in each conjunction the root ordering  d holds, so we can evaluate \exists x. F to True or False using the root ordering:

 \bigvee \limits_{d \in D} (evaluate(d, \exists x. F) \land d)

Now the only formulas that appear are the root orderings,  d , which depend only on  x_2, ..., x_n and the terms  evaluate(d, \exists x. F) each of which is either True or False, so we are done.

To shorten the formula we can drop any clause of the disjunction where  evaluate(d, \exists x. F) = False .
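The outline above can be sketched as a short top-level loop. The helpers  all_root_orderings and  evaluate are placeholders for the procedures described in this section, and formulas are represented as plain strings purely for illustration:

```python
# Sketch of the top-level elimination loop over root orderings.

def eliminate_exists(F, x, all_root_orderings, evaluate):
    """Return a quantifier-free formula equivalent to exists x. F."""
    clauses = []
    for d in all_root_orderings(F, x):
        if evaluate(d, F, x):      # exists x. F is True under ordering d
            clauses.append(d)      # True ∧ d simplifies to d
        # False ∧ d simplifies to False, so that clause is dropped
    if not clauses:
        return "False"
    return " \\lor ".join(f"({d})" for d in clauses)

# Toy run on the worked example's three orderings:
orderings = ["R_1 - R_2 < 0", "R_1 - R_2 = 0", "R_2 - R_1 < 0"]
truth = {"R_1 - R_2 < 0": False, "R_1 - R_2 = 0": False, "R_2 - R_1 < 0": True}
result = eliminate_exists(
    None, "x",
    lambda F, x: orderings,
    lambda d, F, x: truth[d],
)
print(result)  # (R_2 - R_1 < 0)
```

This reproduces the result of the worked example: only the  R_2 < R_1 clause survives.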

Evaluating a Formula Given a Root Ordering

One detail we glossed over is how to use the sign table to evaluate  \exists x. F . In both of our examples we could just look at the sign tables and check, but for a computer to do this task we need a more precise procedure.

The idea is to use a recursive algorithm to walk down the formula collecting all intervals in the sign table that satisfy the conditions of the formula. If there is at least one satisfying interval then the formula evaluates to true given the root ordering.

The recursive procedure is as follows:

 satIntervals(table, A \land B) = satIntervals(table, A) \cap satIntervals(table, B)
 satIntervals(table, A \lor B) = satIntervals(table, A) \cup satIntervals(table, B)
 satIntervals(table, \lnot A) = allRows(table) - satIntervals(table, A)
 satIntervals(table, p > 0) = rowsWithPSign(p, \{+\}, table)
 satIntervals(table, p < 0) = rowsWithPSign(p, \{-\}, table)
 satIntervals(table, p = 0) = rowsWithPSign(p, \{0\}, table)
 satIntervals(table, p \geq 0) = rowsWithPSign(p, \{0, +\}, table)
 satIntervals(table, p \leq 0) = rowsWithPSign(p, \{-, 0\}, table)
 satIntervals(table, p \neq 0) = rowsWithPSign(p, \{-, +\}, table)

Where  rowsWithPSign(p, signs, table) returns the set of all rows of the sign table where the linear polynomial  p has one of the signs in the set  signs .

Complexity and Better Alternatives

The complexity of this algorithm is horrifyingly bad. To eliminate a single quantifier we must enumerate every possible ordering of the roots, and for a formula whose expressions have  n distinct roots the number of orderings (allowing ties) is the  n -th ordered Bell number, which grows even faster than  n! .
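The ordered Bell numbers can be computed with a standard recurrence; the first few values show how quickly the number of orderings explodes:

```python
# Ordered Bell (Fubini) numbers: the number of weak orderings of n roots.
from math import comb

def fubini(n):
    # a(m) = sum_{k=1..m} C(m, k) * a(m - k), with a(0) = 1
    a = [1]
    for m in range(1, n + 1):
        a.append(sum(comb(m, k) * a[m - k] for k in range(1, m + 1)))
    return a[n]

print([fubini(n) for n in range(6)])  # [1, 1, 3, 13, 75, 541]
```

Note that  fubini(2) = 3 matches the three orderings of  R_1 and  R_2 in the worked example.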

Much faster algorithms exist. An early algorithm, the Ferrante-Rackoff method, can eliminate all quantifiers from a formula with  N quantifiers in  O(2^{2^{cN}}) time, where  c is a positive constant.

Later, Loos and Weispfenning introduced an improved method with a smaller constant  c . That method is implemented in the computer algebra package Redlog.
