## A C++11 Header-Only Convenience Function Library

Having spent several years programming in C++ every day, I've had time to build up a library of functions that make programming easier. The focus of this library is on making calls to STL functionality less verbose, wrapping up common STL usage patterns like calls to erase after remove, and adding a few functions that I thought were missing.

This library is not dogmatic. The only guiding principles were what I found convenient as I was programming. However, over time two themes emerged.

# 1. Wrapping Iterators Up

Let's face it: most of the time when you call an STL function that uses iterators, the inputs are the begin and end iterators of the data structure.

As a general rule in any language the most frequently used words should be the shortest. This convenience library includes short functions that take in references to a data structure and operate on the entire thing.

The function calls for sorting, finding maxima and minima, and removing items from collections have short call signatures that don't require iterator-related functions like begin and end.

# 2. Returning Data Structures as Values

Pre-C++11 there was no convenient way to return a data structure that had been created in a function as a value. Thanks to move semantics, returning data structures by value is no longer a big deal, and this convenience library takes advantage of that.

# Library Functionality

## Searching By Scores

A very common pattern in searching is to compare objects by a key function that computes a score for each item and then compares the scores. afk wraps this functionality up into the functions max_e and min_e.

The result is returned as a value rather than as an iterator.

## Data Structure Filtering

- Copying and selection functions
- Delete functions

## Set Operations

afk includes functions for performing intersection, union, and difference on vectors and sets.

## Miscellaneous

Chain functions (apply a function between elements), and functions that operate on adjacent elements.

## Quantifier Elimination Over Linear Real Arithmetic by Enumerating all Possible Root Orders

Quantifier elimination for linear real arithmetic (LRA) is the problem of taking in a formula with quantifiers and producing a logically equivalent formula without quantifiers.

# Linear Real Arithmetic (LRA)

LRA includes all first order formulas whose atomic formulas are comparisons between linear expressions. For example:

$3x + 2 = 0$

is a formula in linear real arithmetic, as is:

$\exists x. \forall y. (2x - 5z + 7 > 0) \land ((2x + y \leq 0) \lor (5y - 35 = 0))$

A formula like:

$\exists x. x^2 - y + 1 < 0$

is not in LRA because the $x$ term is raised to the power of 2.

# Example

Let's eliminate the quantifier from:

$\exists x. x + 2 = 0 \land -x + 3 > 0$

Readers who are good at algebra can probably solve this problem in their heads, but to get a sense of what an algorithm for quantifier elimination over LRA would look like, let's go through it in detail.

There are two linear expressions of interest:

$x + 2$
$-x + 3$


The function $x + 2$ is negative at every point from $-\infty$ to $-2$, is zero at $-2$ and is positive at every point after $-2$.

The function $-x + 3$ is positive from $-\infty$ to $3$, is zero at $3$ and is negative at every point after that.

We can express this compactly in a table (called a sign table) like so:

| Interval | $x + 2$ | $-x + 3$ |
| --- | --- | --- |
| $x < -2$ | $-$ | $+$ |
| $x = -2$ | $0$ | $+$ |
| $-2 < x < 3$ | $+$ | $+$ |
| $x = 3$ | $+$ | $0$ |
| $x > 3$ | $+$ | $-$ |

To satisfy our original formula we need a value of $x$ where $x + 2$ is zero and $-x + 3$ is positive. We can check whether such a value exists by scanning each row of the table and checking if the column for $x + 2$ is $0$ and the column for $-x + 3$ is $+$. Row two, corresponding to the point $x = -2$, satisfies this condition, so the formula $True$ is the quantifier free equivalent of the original formula.

Notice that because we know that both functions involved are lines, and because we know each line's slope, the structure of the sign table is completely determined by the ordering of the roots of the two lines.

This example had only one variable, so it was possible to solve each linear equation explicitly for its root, and then use the values of the roots to determine the layout of the sign table. But nothing in our analysis changes if we replace the numerical values of the roots with symbols like $R_1$ and $R_2$, work out the order in which the roots occur, and then use that ordering to compute the sign table.

The catch for more complex formulas is that when a problem contains more than one variable there isn't any single correct ordering of the roots, since the ordering of the roots depends on the values of the other variables.

The solution to this problem is to compute every possible ordering of the roots and compute a sign table for each ordering. We will see how this works in the next example.

# Example with Multiple Variables

Let's eliminate the quantifier from:

$\exists x. (3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0)$

This formula contains 2 linear expressions:

$3x + 4y - 7$

$2x - y + 3$

Since we are eliminating the quantified variable $x$ from our formula, let's think of these as linear expressions in $x$:

$3x + (4y - 7)$

$2x - (y - 3)$

The first of these expressions is zero when:

$x = \frac{-4y + 7}{3}$

And the second has a root when:

$x = \frac{y - 3}{2}$

For brevity let's write:

$R_1 \equiv \frac{-4y + 7}{3}$

$R_2 \equiv \frac{y - 3}{2}$

Depending on the value of $y$ there are 3 possible orderings for these roots:

$R_1 < R_2$

$R_1 = R_2$

$R_2 < R_1$

For each of these three possibilities we can construct a sign table that will break up the $x$ axis into intervals along which both of our expressions have constant signs.

For $R_1 < R_2$ the sign table looks like:

| Interval | $3x + 4y - 7$ | $2x - y + 3$ |
| --- | --- | --- |
| $x < R_1$ | $-$ | $-$ |
| $x = R_1$ | $0$ | $-$ |
| $R_1 < x < R_2$ | $+$ | $-$ |
| $x = R_2$ | $+$ | $0$ |
| $x > R_2$ | $+$ | $+$ |

The sign table has 5 intervals. $3x + 4y - 7$ is linear in x, has a slope of 3 in $x$ and has a root at $R_1$, so it is zero at $R_1$, negative at all points before $R_1$ and positive after $R_1$. For $2x - y + 3$ the table is similar.

For $R_1 = R_2$ the sign table looks like:

| Interval | $3x + 4y - 7$ | $2x - y + 3$ |
| --- | --- | --- |
| $x < R_1$ | $-$ | $-$ |
| $x = R_1$ | $0$ | $0$ |
| $x > R_1$ | $+$ | $+$ |

Notice that because $R_1 = R_2$ we do not need separate intervals for each of them like we did in the case where $R_1 < R_2$.

Finally the sign table for $R_2 < R_1$ is:

| Interval | $3x + 4y - 7$ | $2x - y + 3$ |
| --- | --- | --- |
| $x < R_2$ | $-$ | $-$ |
| $x = R_2$ | $-$ | $0$ |
| $R_2 < x < R_1$ | $-$ | $+$ |
| $x = R_1$ | $0$ | $+$ |
| $x > R_1$ | $+$ | $+$ |

Note that given one of our 3 root order constraints we can decide if the original formula is true by table lookup. The formula is satisfiable if and only if there is an $x$ where $3x + 4y - 7 < 0$ and $2x - y + 3 \geq 0$, i.e. if and only if the sign table has a row where the sign of $3x + 4y - 7$ is negative and the sign of $2x - y + 3$ is positive or zero.

By inspection we can see that this is only true when $R_2 < R_1$, so the only way there is an $x$ such that our formula is satisfied is if:

$\frac{y - 3}{2} < \frac{-4y + 7}{3}$

which can be simplified to the condition:

$3y - 9 < -8y + 14$

simplifying further:

$11y - 23 < 0$

Notice that this last expression is an expression in LRA! In fact all three of our possible root orderings can be written as formulas in linear real arithmetic.

The other crucial point is that since the real numbers are a totally ordered field one of the three orderings must be true, so we can write:

$\exists x. (3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0)$

is equivalent to:

$\exists x. (((3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0)) \land True)$

Which is equivalent to:

$\exists x. (((3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0)) \land ((R_1 - R_2 < 0) \lor (R_1 - R_2 = 0) \lor (R_2 - R_1 < 0)))$

Now for brevity let's write:

$F \equiv ((3x + 4y - 7 < 0) \land (2x - y + 3 \geq 0))$

So we have:

$\exists x. (F \land ((R_1 - R_2 < 0) \lor (R_1 - R_2 = 0) \lor (R_2 - R_1 < 0)))$

Now we can distribute F over the disjunction to get:

$\exists x. ((F \land (R_1 - R_2 < 0)) \lor (F \land (R_1 - R_2 = 0)) \lor (F \land (R_2 - R_1 < 0)))$

Then we can distribute the existential quantifier over the disjunction:

$(\exists x. F \land (R_1 - R_2 < 0)) \lor (\exists x. F \land (R_1 - R_2 = 0)) \lor (\exists x. F \land (R_2 - R_1 < 0))$

And since $x$ does not appear in any of the root order conditions this is just:

$((\exists x. F) \land (R_1 - R_2 < 0)) \lor ((\exists x. F) \land (R_1 - R_2 = 0)) \lor ((\exists x. F) \land (R_2 - R_1 < 0))$

But each clause's root order condition determines a sign table, and the sign table lets us decide whether there is any $x$ making $F$ $True$, so we can replace $\exists x. F$ in each clause of the disjunction with $True$ or $False$ via the corresponding sign table.

As we already saw in the discussion of sign tables the only root ordering where there is an $x$ that makes $F$ $True$ is the condition $R_2 - R_1 < 0$, so our formula becomes:

$(False \land (R_1 - R_2 < 0)) \lor (False \land (R_1 - R_2 = 0)) \lor (True \land (R_2 - R_1 < 0))$

Which simplifies to:

$11y - 23 < 0$

And we have eliminated the quantifier and all occurrences of $x$!

# Outline of the Algorithm

Suppose we have a quantifier free formula (QFF) $F$ in linear real arithmetic with variables $x_1, x_2, ..., x_n$.

The idea of the algorithm is to distribute $F[x_1, ..., x_n]$ over a disjunction of every possible root ordering, and then find the truth value of $F[x_1, ..., x_n]$ under each ordering. For brevity we will write $F[x_1, ..., x_n]$ as $F$ and the variable being eliminated, $x_1$, as $x$.

$\exists x . F \equiv \exists x. F \land True$

Since the real line is totally ordered, one of the possible orders of roots must be true. Suppose $D$ is the set of all possible orders; then we can write the formula as:

$\exists x. F \land (\bigvee \limits_{d \in D} d)$

Distributing F we get:

$\exists x. \bigvee \limits_{d \in D} (F \land d)$

Distributing the existential quantifier we get:

$\bigvee \limits_{d \in D} ((\exists x. F) \land d)$

Now in each conjunction the root ordering $d$ holds, so we can evaluate $\exists x. F$ to True or False using the root ordering:

$\bigvee \limits_{d \in D} (evaluate(d, \exists x. F) \land d)$

Now the only formulas that appear are the root orderings, $d$, which depend only on $x_2, ..., x_n$ and the terms $evaluate(d, \exists x. F)$ each of which is either $True$ or $False$, so we are done.

To shorten the formula we can drop any clause of the disjunction where $evaluate(d, \exists x. F) = False$.

# Evaluating a Formula Given a Root Ordering

One detail we glossed over is how to use the sign table to evaluate $\exists x. F$. In both of our examples we could just look at the sign tables and check, but for a computer to do this task we need a more precise procedure.

The idea is to use a recursive algorithm to walk down the formula collecting all intervals in the sign table that satisfy the conditions of the formula. If there is at least one satisfying interval then the formula evaluates to true given the root ordering.

The recursive procedure is as follows:

$satIntervals(table, A \land B) = satIntervals(table, A) \cap satIntervals(table, B)$
$satIntervals(table, A \lor B) = satIntervals(table, A) \cup satIntervals(table, B)$
$satIntervals(table, \lnot A) = allRows(table) - satIntervals(table, A)$
$satIntervals(table, p > 0) = rowsWithPSign(p, \{+\}, table)$
$satIntervals(table, p < 0) = rowsWithPSign(p, \{-\}, table)$
$satIntervals(table, p = 0) = rowsWithPSign(p, \{0\}, table)$
$satIntervals(table, p \geq 0) = rowsWithPSign(p, \{0, +\}, table)$
$satIntervals(table, p \leq 0) = rowsWithPSign(p, \{-, 0\}, table)$
$satIntervals(table, p \neq 0) = rowsWithPSign(p, \{-, +\}, table)$

Where $rowsWithPSign(p, signs, table)$ returns the set of all rows of the sign table where the linear polynomial $p$ has one of the signs in the set $signs$.

# Complexity and Better Alternatives

The complexity of this algorithm is horrifyingly bad. We have to enumerate all possible orderings of roots to eliminate one quantifier, so for a formula containing $n$ linear expressions the work is at least $O(n!)$ for a single quantifier.

Much faster algorithms exist. An early algorithm, the Ferrante-Rackoff method, can eliminate all quantifiers from a formula with $N$ quantifiers in $O(2^{2^{cN}})$ time, where $c$ is a positive constant.

Later Loos and Weispfenning invented an improved method with a smaller constant $c$. That method is implemented in the computer algebra package Redlog.

## Generating Shape Overlap Tests via Real Quantifier Elimination

Sites like Stack Overflow and Math Stack Exchange are packed with people working on geometry-intensive applications like games asking for code snippets to decide whether shapes do or don't overlap:

https://math.stackexchange.com/questions/166863/intersection-between-a-cylinder-and-an-axis-aligned-bounding-box

http://stackoverflow.com/questions/401847/circle-rectangle-collision-detection-intersection

http://stackoverflow.com/questions/22093749/c-plane-sphere-collision-detection

The answers are usually produced by human ingenuity, but as the answerer of the first question listed above points out, algorithms for these kinds of shape intersection problems can be derived automatically by a procedure called cylindrical algebraic decomposition (CAD).

CAD is the most popular algorithm used for real quantifier elimination (RQE). RQE takes any problem in nonlinear real arithmetic (first order logic where all the atomic formulas are real polynomial equations and inequalities) and spits out a logically equivalent formula with no quantifiers.

To get a feel for RQE here are some formulas and their quantifier free equivalents:

$rqe( \exists x . 3x = 0 )$ is $True$

$rqe( \exists x . x^2 < 0 )$ is $False$

$rqe( \exists x . ax^2 < 0 )$ is $a < 0$

$rqe( \exists x . ax^2 + bx + c = 0 )$ is, assuming $a \neq 0$, $b^2 - 4ac \geq 0$ (the full result also covers the degenerate cases where $a = 0$)

This procedure can be used to generate an intersection test for 2 shapes like so: come up with a logical formula, $F(x, y)$, that is true exactly when the point $(x, y)$ is in shape 1, and a formula, $G(x, y)$, that is true exactly when $(x, y)$ is in shape 2. Then perform quantifier elimination on:

$\exists x, y . F(x, y) \land G(x, y)$

and transliterate the resulting formula into the programming language of your choice.

This method can automatically generate intersection tests for any number of shapes, but before we get to that let's look at an example.

# Example: Circle Square Intersection

A formula using quantifiers to describe whether a circle centered at point $(a, b)$ with radius $r$, and a square with bottom left corner at $(c, d)$ and side length $l$ intersect is:

$\exists x, y . inCircle(x, y, a, b, r) \land inSquare(x, y, c, d, l)$

where:

$inCircle(x, y, a, b, r) \equiv (x - a)^2 + (y - b)^2 \leq r^2$

$inSquare(x, y, c, d, l) \equiv (c \leq x \leq c + l) \land (d \leq y \leq d + l)$

Using the quantifier elimination command in redlog we can produce a new formula that is logically equivalent to the original one, but does not contain $x$ or $y$.

Since this formula only uses known values ($a$, $b$, $r$, $c$, $d$, and $l$) and has no quantifiers it can be transliterated into an executable intersection test in a programming language of the user's choice.

After using Redlog to eliminate quantifiers we can pretty print the resulting formula as a C++ function:


#include<cmath>

bool test( const double r,
const double l,
const double b,
const double d,
const double a,
const double c ) {
return ( ( ( ( ( ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( a - c - r >= 0.0 ) ) && ( a - c - l - r <= 0.0 ) ) || ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( a - c + r >= 0.0 ) ) && ( a - c - l + r <= 0.0 ) ) ) || ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( pow( b, 2 ) - 2.0 * b * d + pow( d, 2 ) - pow( r, 2 ) <= 0.0 ) && ( ( ( ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + pow( d, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d - l <= 0.0 ) || ( l * 2.0 * b - 2.0 * d - l >= 0.0 ) ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d - l <= 0.0 ) || ( l * 2.0 * b - 2.0 * d - l >= 0.0 ) ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + pow( d, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( l * 2.0 * b - 2.0 * d - l <= 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + pow( d, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 
0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( l * 2.0 * b - 2.0 * d - l <= 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + pow( d, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) ) ) || ( ( pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) && ( ( ( ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( l * 2.0 * b - 2.0 * d - l >= 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( l * 2.0 * b - 2.0 * d - l >= 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d >= 0.0 ) || ( l * 2.0 * b - 2.0 * d - l <= 0.0 ) ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * 
d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d >= 0.0 ) || ( l * 2.0 * b - 2.0 * d - l <= 0.0 ) ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) ) ) || ( ( pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) && ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) ) ) || ( ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( c, 2 ) + 2.0 * c * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) && ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 
2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) ) ); }



As you can see, the result isn't pretty. The downside of automatically generated formulas is that there is no human intuition behind them to make them comprehensible.

# Other Shapes

In theory this method can handle just about any shape that can be described by polynomial constraints. A few shapes that can be handled with fairly compact formulas:

Spheres:

$inSphere(x, y, z, a, b, c, r) \equiv (x - a)^2 + (y - b)^2 + (z - c)^2 \leq r^2$

Axis aligned cubes and cuboids:

$inCube(x, y, z, a, b, c, l) \equiv (a \leq x) \land (x \leq a + l) \land (b \leq y) \land (y \leq b + l) \land (c \leq z) \land (z \leq c + l)$

$inCuboid(x, y, z, a, b, c, u, v, w) \equiv (a \leq x) \land (x \leq u) \land (b \leq y) \land (y \leq v) \land (c \leq z) \land (z \leq w)$

Axis aligned ellipses:

$inEllipse(x, y, a, b, h, k) \equiv b^2(x - h)^2 + a^2(y - k)^2 \leq a^2b^2$

# Performance

Below are results for several other shape intersections:

| Shape Combination | Generated in under 10 seconds |
| --- | --- |
| Circle-Square | Yes |
| Circle-Circle | Yes |
| Sphere-Cube | Yes |
| Line-Line | Yes |
| Ellipse-Line | Yes |
| Ellipse-Square | Yes |
| Circle-Ellipse | |
| Ellipse-Rectangle | Yes |
| Rectangle-Circle | Yes |
| Sphere-Plane | Yes |
| Sphere-Halfspace below plane | Yes |

# Limitations

RQE is extremely slow. Any algorithm for RQE is at best doubly exponential, $O(2^{2^n})$, in the number of variables, though there are specialized algorithms for the linear and quadratic cases that are faster in practice than general methods.

Redlog seems capable of generating intersection tests for 2D shapes, and some simple 3D shape combinations like sphere-cube intersection.

More complicated 3D shapes like cylinders seem out of reach for Redlog, since the logical formulas that represent them have more variables and higher degree polynomials (degree 4 and up) that are difficult to eliminate.

## G-code Operations MRR Histogram

A few months ago I was running test programs generated by a new CAM system we are working on. Our target CNC machine is a 3-axis desktop mill: the Emco F1, with a 0.7 horsepower motor.

To help avoid machine crashes I added a final check before emitting G-code. The system would run a simulation of the G-code, calculate the average material removal rate (in cubic inches per minute) and warn me if it was excessive.

We were cutting 6061 aluminum and I wanted to be extremely conservative since our machine is so wimpy, so I set the system to warn me about anything above 0.6 cubic inches per minute.

After a few very conservative part programs went through the simulator without incident I made some changes to the toolpath generation system to speed things up.

The first G-code program I generated with the new system caused a warning, but in a rush I ignored it, thinking it was probably not too bad. As soon as execution reached the toolpath that produced the warning, the machine crashed.

Below is a histogram showing the MRR for each operation I ran on the mill up to and including the crashing operation (on the far right):

It's a bad idea to ignore warnings!

## How to Write an LLDB Plugin with Python: Customized Stepping Logic

In a previous post we saw an example of a new debugger command that can be added using the LLDB API.

Custom commands allow you to create custom logic for inspecting the state of the target program during a break. Custom stepping logic allows you to create customized logic to determine how to progress through the target program.

As an example we will create a custom step class, FindVar, that will step through a program until it finds a variable named r1.

# The Plugin Code


Place this code in a file named find_var.py:


import lldb

class FindVar:

    def __init__(self, thread_plan, dict):
        self.thread_plan = thread_plan

    def explains_stop(self, event):
        # We only explain stops that were caused by our own stepping.
        stop_reason = self.thread_plan.GetThread().GetStopReason()
        return stop_reason == lldb.eStopReasonTrace

    def should_stop(self, event):
        frame = self.thread_plan.GetThread().GetFrameAtIndex(0)
        a_var = frame.FindVariable("r1")

        if not a_var.IsValid():
            print "Havent found r1 yet"
            return False
        else:
            print "Found r1!"
            self.thread_plan.SetPlanComplete(True)
            return True

    def should_step(self):
        return True


# Code Explanation

The three crucial methods are explains_stop, should_stop, and should_step.

## explains_stop

To understand explains_stop lets look at the output of an ordinary LLDB session. If you run an ordinary LLDB session tracking a program, say a.out, and create a breakpoint at the function main, then run the program you should see output like this:


bash-3.2$ lldb ./a.out
(lldb) target create "./a.out"
Current executable set to './a.out' (x86_64).
(lldb) breakpoint set --name main
Breakpoint 1: where = a.out`main + 23 at main.cpp:8, address = 0x0000000100000d77
(lldb) r

The code will hit main immediately and break, spitting out the following info:

Process 18492 launched: './a.out' (x86_64)
Process 18492 stopped
* thread #1: tid = 0x58329e, 0x0000000100000d77 a.out`main + 23 at main.cpp:8, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000100000d77 a.out`main + 23 at main.cpp:8

Notice the stop reason in the thread line: stop reason = breakpoint 1.1. In LLDB every stop has a reason. When the debugger stops it calls each active thread plan's explains_stop method, and the first one to return True becomes the stop reason. LLDB supports several stop reasons, exposed in the Python API as the lldb.eStopReason* constants. Our example plugin is just supposed to step through the program, so it only explains the stop if the stop reason is eStopReasonTrace.

## should_stop

In the event that our plan does explain the stop, we get to decide whether to return control to the user or continue execution. This decision is made by the should_stop method. For our plugin we are trying to run the program until a variable named r1 is defined, so we search for a variable named r1 in the current frame. If it exists we return control to the user. If it does not exist we continue execution.

## should_step

Once we have decided to continue execution our plan has two options: it can resume normal execution, or it can take one step. In our case we want to step through the program one instruction at a time until r1 is defined, so we always return True, saying that we want to take a single step and then stop again.

# Using the Plugin

To use this plugin we first need to create an executable to use as the target. In this example we will use a simple C++ program. First enter this code into a file named contrived.cpp:

#include "contrived.h"

int contrived_1(const int l, const int v, const int q, my_value my_v) {
  int r1 = l - v;
  int r2 = q*l;

  if (my_v.useless) {
    return r1 - r2;
  } else {
    return my_v.x + my_v.y + my_v.z - r1*r2;
  }
}

Next create the corresponding contrived.h file:

struct my_value {
  int x;
  int y;
  int z;

  bool useless;
};

int contrived_1(const int l, const int v, const int q, my_value my_v);

Finally create a main.cpp file:

#include <iostream>

#include "contrived.h"

using namespace std;

int main() {
  int x = 2;
  int y = 4;
  int z = 9;

  bool useless = true;

  my_value v{x, y, z, useless};
  int k = contrived_1(3, 4, -12, v);

  cout << k << endl;
}

With these files created compile them with:

clang++ -g -O0 -std=c++11 contrived.cpp main.cpp

This will create an executable, a.out. With a.out created we can debug it and use our plugin. In the directory lldb_plugin run the commands:

bash-3.2$ lldb ./a.out
(lldb) target create "./a.out"
Current executable set to './a.out' (x86_64).
(lldb) breakpoint set --name main
Breakpoint 1: where = a.out`main + 23 at main.cpp:8, address = 0x0000000100000d77
(lldb) r
Process 18568 launched: './a.out' (x86_64)
Process 18568 stopped
* thread #1: tid = 0x58688d, 0x0000000100000d77 a.out`main + 23 at main.cpp:8, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000100000d77 a.out`main + 23 at main.cpp:8
5       using namespace std;
6
7       int main() {
-> 8         int x = 2;
9         int y = 4;
10        int z = 9;
11
(lldb) command script import find_var.py
(lldb) thread step-scripted -C find_var.FindVar
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Found r1!
Process 18568 stopped
* thread #1: tid = 0x58688d, 0x0000000100000d00 a.out`contrived_1(l=9, v=16777216, q=1, my_v=(x = 4, y = 2, z = 1606417248, useless = true)) at contrived.cpp:3, queue = 'com.apple.main-thread', stop reason = Python thread plan implemented by class find_var.FindVar.
frame #0: 0x0000000100000d00 a.out`contrived_1(l=9, v=16777216, q=1, my_v=(x = 4, y = 2, z = 1606417248, useless = true)) at contrived.cpp:3
1       #include "contrived.h"
2
-> 3       int contrived_1(const int l, const int v, const int q, my_value my_v) {
4         int r1 = l - v;
5         int r2 = q*l;
6
7         if (my_v.useless) {


## How to Write an LLDB Plugin with Python: New Commands

Adding a new LLDB command using a Python script is pretty straightforward. We are going to write a command that does the following:

1. Print each frame in the stack trace
2. For each frame print all arguments and local variables

# The Plugin Code

Place this code in a new directory, say lldb_plugin, in a file named custom_command.py:


import lldb

def my_cmd(debugger, command, result, internal_dict):
    target = debugger.GetSelectedTarget()
    process = target.GetProcess()
    thread = process.GetSelectedThread()

    for frame in thread:

        print str(frame)

        function = frame.GetFunction()
        print 'FUNCTION = ', function

        if frame.IsInlined():
            print 'INLINED'
        else:
            args = frame.get_arguments()

            print '# of arguments = ', len(args)

            for arg in args:
                print arg

            vars = frame.get_all_variables()

            print '# of vars =', len(vars)

            for var in vars:
                print var

def __lldb_init_module(debugger, internal_dict):
    debugger.HandleCommand('command script add -f custom_command.my_cmd my_cmd')
    print 'The "my_cmd" python command has been installed and is ready for use.'



This code contains two functions. __lldb_init_module is boilerplate that is used to load the command during an lldb session.

The meat of the command is the my_cmd function. This function takes four arguments, but the main one we will use is debugger, which represents the actual debug session. We call its GetSelectedTarget method to get the target, the program we are debugging. From the target we get the running process, and from the process we get the selected thread.

Things get interesting once we iterate over the frames of the thread. For each frame we can check whether it is inlined, and if it is not we can list all of its arguments and local variables and print out information about the function itself.

# Using the Plugin

To use this plugin we first need to create an executable to use as the target. In this example we will use a simple C++ program. First enter this code into a file named contrived.cpp:


#include "contrived.h"

int contrived_1(const int l, const int v, const int q, my_value my_v) {
    int r1 = l - v;
    int r2 = q*l;

    if (my_v.useless) {
        return r1 - r2;
    } else {
        return my_v.x + my_v.y + my_v.z - r1*r2;
    }
}



Next create the corresponding contrived.h file:

struct my_value {
    int x;
    int y;
    int z;

    bool useless;
};

int contrived_1(const int l, const int v, const int q, my_value my_v);



Finally create a main.cpp file:


#include <iostream>

#include "contrived.h"

using namespace std;

int main() {
    int x = 2;
    int y = 4;
    int z = 9;

    bool useless = true;

    my_value v{x, y, z, useless};
    int k = contrived_1(3, 4, -12, v);

    cout << k << endl;
}



With these files created compile them with:


clang++ -g -O0 -std=c++11 contrived.cpp main.cpp



This will create an executable, a.out, which we can now debug with our plugin.

In the directory lldb_plugin run the commands:


lldb ./a.out
breakpoint set --name contrived_1
r


Once the breakpoint is hit, load the plugin from inside the lldb session:


command script import custom_command.py

After the import you should see:

The "my_cmd" python command has been installed and is ready for use.

Now run the command:


my_cmd



and you should see something like:


frame #0: 0x0000000100000d15 a.out`contrived_1(l=3, v=4, q=-12, my_v=(x = 2, y = 4, z = 9, useless = true)) + 21 at contrived.cpp:4
FUNCTION =  SBFunction: id = 0x0000008d, name = contrived_1(int, int, int, my_value), type = contrived_1
# of arguments =  4
(const int) l = 3
(const int) v = 4
(const int) q = -12
(my_value) my_v = (x = 2, y = 4, z = 9, useless = true)
# of vars = 6
(const int) l = 3
(const int) v = 4
(const int) q = -12
(my_value) my_v = (x = 2, y = 4, z = 9, useless = true)
(int) r1 = 1606579087
(int) r2 = 1
frame #1: 0x0000000100000dc8 a.out`main + 104 at main.cpp:15
FUNCTION =  SBFunction: id = 0x00008d35, name = main, type = main
# of arguments =  0
# of vars = 6
(int) x = 2
(int) y = 4
(int) z = 9
(bool) useless = true
(my_value) v = (x = 2, y = 4, z = 9, useless = true)
(int) k = 0
frame #2: 0x00007fff9dca45ad libdyld.dylib`start + 1
FUNCTION =  No value
# of arguments =  0
# of vars = 0



# More LLDB Info

Full documentation for the LLDB python API can be found here

## C++ forever template

Here is a fun little template to crash your compiler. Copy paste it into a file and check out the results.


template <int N>
struct forever {
    enum { value = forever<N-1>::value * forever<N-1>::value };
};

enum forever_res { result = forever<1>::value };



This template expands for all time because nothing stops the recursion: there is no base-case specialization, so forever<1> instantiates forever<0>, which instantiates forever<-1>, and so on without end. On clang the error is pretty neat and well reported:

main.cpp:3:18: fatal error: recursive template instantiation exceeded maximum depth
of 256
enum { value = forever<N-1>::value * forever<N-1>::value };
^
main.cpp:3:18: note: in instantiation of template class 'forever<-255>' requested
here
main.cpp:3:18: note: in instantiation of template class 'forever<-254>' requested
here
main.cpp:3:18: note: in instantiation of template class 'forever<-253>' requested
here
main.cpp:3:18: note: in instantiation of template class 'forever<-252>' requested
here
main.cpp:3:18: note: in instantiation of template class 'forever<-251>' requested
here
main.cpp:3:18: note: (skipping 247 contexts in backtrace; use
-ftemplate-backtrace-limit=0 to see all)
main.cpp:3:18: note: in instantiation of template class 'forever<-3>' requested here
main.cpp:3:18: note: in instantiation of template class 'forever<-2>' requested here
main.cpp:3:18: note: in instantiation of template class 'forever<-1>' requested here
main.cpp:3:18: note: in instantiation of template class 'forever<0>' requested here
main.cpp:6:29: note: in instantiation of template class 'forever<1>' requested here
enum forever_res { result = forever<1>::value };
^
main.cpp:3:18: note: use -ftemplate-depth=N to increase recursive template
instantiation depth
enum { value = forever<N-1>::value * forever<N-1>::value };
^
1 error generated.

## C++ __func__ vs __PRETTY_FUNCTION__

__func__ and __PRETTY_FUNCTION__ are both predefined identifiers in clang. __func__ is part of the C++11 standard, while __PRETTY_FUNCTION__ is a compiler extension supported by clang and gcc.

To see how they work you can copy and paste the following code snippet into a file, say main.cpp, and compile it with:

clang++ -std=c++11 main.cpp


#include <iostream>
#include <string>

using namespace std;

class myclass {
public:
    string func() const { return __func__; }
    string pretty_function() const { return __PRETTY_FUNCTION__; }
};

class otherclass {
public:
    string func() const { return __func__; }
    string pretty_function() const { return __PRETTY_FUNCTION__; }
};

int main() {
    cout << __func__ << endl;
    cout << __PRETTY_FUNCTION__ << endl;

    myclass c;
    cout << c.func() << endl;
    cout << c.pretty_function() << endl;

    otherclass oc;
    cout << oc.func() << endl;
    cout << oc.pretty_function() << endl;

    return 0;
}



When you run it you should see:

main
int main()
func
string myclass::pretty_function() const
func
string otherclass::pretty_function() const

As you can see the major difference is that __PRETTY_FUNCTION__ prints more information: it shows the full signature and the containing class of the function, while __func__ prints only the bare function name.

## Understanding vtkPolyData: The Basics

A vtkPolyData is just a list of points in 3D space along with lists of lines, polygons, triangles, and other shapes defined on those points. That is a pretty abstract description so let's look at an example of how we could represent some shapes as a vtkPolyData. Consider the picture below:

This group of shapes consists of a triangle and 4 lines. If this arrangement were stored in a vtkPolyData it would be stored as 3 lists like so:

Each of the three lists shown above (points, lines, and polygons) correspond to private members of vtkPolyData. All the points needed to define all the shapes that make up the vtkPolyData are stored in the points buffer. Then lines and polygons are stored as lists of references to points in the buffer.

For example the first element in the lines list is {0, 1}, which represents the line between (0, 0, 0) (element 0 of points) and (1, 0, 0) (element 1 of points).

In the language of VTK the points array is an instance of the vtkPoints class, and the lines and polygons arrays are instances of the vtkCellArray class. Here is the code used to create the vtkPolyData for our group of shapes:


#include <vtkTriangle.h>
#include <vtkSmartPointer.h>
#include <vtkPoints.h>
#include <vtkLine.h>
#include <vtkCellArray.h>
#include <vtkPolyData.h>

using namespace std;

int main() {

    // Set up the array of points
    vtkSmartPointer<vtkPoints> points =
        vtkSmartPointer<vtkPoints>::New();

    vtkIdType p0_0_0 =
        points->InsertNextPoint(0, 0, 0);

    cout << "p0_0_0 = " << p0_0_0 << endl;

    vtkIdType p1_0_0 =
        points->InsertNextPoint(1, 0, 0);

    cout << "p1_0_0 = " << p1_0_0 << endl;

    vtkIdType p1_1_0 =
        points->InsertNextPoint(1, 1, 0);

    cout << "p1_1_0 = " << p1_1_0 << endl;

    vtkIdType p0_1_0 =
        points->InsertNextPoint(0, 1, 0);

    cout << "p0_1_0 = " << p0_1_0 << endl;

    // Create cells for lines
    vtkSmartPointer<vtkLine> l01 =
        vtkSmartPointer<vtkLine>::New();
    l01->GetPointIds()->SetId(0, p0_0_0);
    l01->GetPointIds()->SetId(1, p1_0_0);

    vtkSmartPointer<vtkLine> l13 =
        vtkSmartPointer<vtkLine>::New();
    l13->GetPointIds()->SetId(0, p1_0_0);
    l13->GetPointIds()->SetId(1, p0_1_0);

    vtkSmartPointer<vtkLine> l30 =
        vtkSmartPointer<vtkLine>::New();
    l30->GetPointIds()->SetId(0, p0_1_0);
    l30->GetPointIds()->SetId(1, p0_0_0);

    vtkSmartPointer<vtkLine> l23 =
        vtkSmartPointer<vtkLine>::New();
    l23->GetPointIds()->SetId(0, p1_1_0);
    l23->GetPointIds()->SetId(1, p0_1_0);

    // Add the lines to the lines cell array
    vtkSmartPointer<vtkCellArray> lines =
        vtkSmartPointer<vtkCellArray>::New();

    lines->InsertNextCell(l01);
    lines->InsertNextCell(l13);
    lines->InsertNextCell(l30);
    lines->InsertNextCell(l23);

    // Create the triangle
    vtkSmartPointer<vtkTriangle> triangle =
        vtkSmartPointer<vtkTriangle>::New();
    triangle->GetPointIds()->SetId(0, p0_0_0);
    triangle->GetPointIds()->SetId(1, p1_0_0);
    triangle->GetPointIds()->SetId(2, p0_1_0);

    // Add the triangle to polygons
    vtkSmartPointer<vtkCellArray> polygons =
        vtkSmartPointer<vtkCellArray>::New();

    polygons->InsertNextCell(triangle);

    // Create the actual vtkPolyData
    vtkSmartPointer<vtkPolyData> polyData =
        vtkSmartPointer<vtkPolyData>::New();

    // Add the geometry and topology to the polydata
    polyData->SetPoints(points);
    polyData->SetLines(lines);
    polyData->SetPolys(polygons);

    // Print out some data about the vtkPolyData
    cout << "# of lines    = " << polyData->GetNumberOfLines() << endl;
    cout << "# of polygons = " << polyData->GetNumberOfPolys() << endl;

}

The full example is available here.

# What is going on in this code snippet?

Really we are just building the three lists shown in the picture above and then wrapping a vtkPolyData around them.

First we create the points list. The important thing to note is that as we add points to the vtkPoints data structure it returns the handle (really just an offset) that will let us refer to the location of the point in the vtkPoints list.

Next we build the lists of lines and polygons using the handles for the points. These lists have the somewhat odd type name vtkCellArray. vtkCell is a very general base class from which things like lines, polygons, hexahedra, and voxels are derived. For our purposes vtkCellArray could just as well be called vtkShapeArray: it is simply a list of shapes.

## Proving that Polynomial Division Works

I've been reading about polynomials lately and found that the polynomial division algorithm is a great example of a real algorithm that can be proved correct, and where the proof actually provides some insight into how the algorithm works.

First let's get some notation out of the way: a polynomial, say $a$, over a field $F$ is a formal sum:

$a = \sum_{k=0}^{n} a_k x^k$

Expressions like

$1$

$3 + 2x^4$

$-19 + 2.3x + 1.5x^5$

are all polynomials.

The set of all polynomials in variable $x$ over a field $F$ is denoted $F[x]$.

I assume that if you are reading this you are familiar with the rules for polynomial addition, subtraction and multiplication, and that you know what the degree of a polynomial is, and what a monomial is.

## The polynomial division algorithm

The polynomial division problem is: given $u, v \in F[x]$ with $v \neq 0$ find $q, r \in F[x]$ such that:

$u = qv + r$

$deg(r) < deg(v)$ or $r = 0$

We need to add the extra possibility $r = 0$ because the degree of the zero polynomial is undefined.
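To make the problem concrete, here is a small worked example (the specific polynomials are my own choice, not from the original): take $u = x^2 + 3x + 1$ and $v = x + 1$ over the reals. Then

$q = x + 2, \quad r = -1$

since $qv + r = (x + 2)(x + 1) - 1 = x^2 + 3x + 2 - 1 = u$, and $r$ is a nonzero constant with $deg(r) = 0 < 1 = deg(v)$.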

An algorithm that can compute $q$ and $r$ is given below, where $lt(p)$ denotes the leading term of $p$:

Input: $u \in F[x]$, $v \in F[x]$, $v \neq 0$

$r \gets u$

$q \gets 0$

while $r \neq 0$ and $lt(v) | lt(r)$:

$r \gets r - \frac{lt(r)}{lt(v)} v$

$q \gets q + \frac{lt(r)}{lt(v)}$

return $(q, r)$

## The proof of correctness

We will break the proof up into two parts:

1. If the algorithm terminates it produces the correct result
2. The algorithm always terminates

### Proving the algorithm is correct when it terminates

First let's address part 1: proving that if the algorithm terminates it produces the correct result. This problem will in turn be broken into two parts:

1. First we prove that at the start of the while loop and after every iteration of the loop: $u = qv + r$. The statement $u = qv + r$ is the loop invariant
2. Second we prove that if the loop terminates $deg(r) < deg(v)$ or $r = 0$

#### Proving that the invariant holds

At the start of the loop:

$r = u$

$q = 0$

So $qv + r = 0v + u = u$

Now we need to show that if $u = qv + r$ on entry to the while loop, then it is true after one execution of the loop. To do this we need to introduce new variables to represent the values of $r$ and $q$ before and after the loop executes.

Let's call the values of $r$ and $q$ on entry to the loop $r_i$ and $q_i$, and the values after execution of the loop $r_{i + 1}$ and $q_{i + 1}$.

So assuming $u = q_{i}v + r_i$, we need to show that after the execution of the loop $u = q_{i+1}v + r_{i+1}$.

Looking at the body of the loop we can see that:

$r_{i+1} = r_i - \frac{lt(r_i)}{lt(v)} v$

$q_{i+1} = q_i + \frac{lt(r_i)}{lt(v)}$

So:

$q_{i+1}v + r_{i+1} = ( q_i + \frac{lt(r_i)}{lt(v)} )v + r_i - \frac{lt(r_i)}{lt(v)}v =$

$q_i v + \frac{lt(r_i)}{lt(v)} v + r_i - \frac{lt(r_i)}{lt(v)}v =$

$q_i v + r_i + \frac{lt(r_i)}{lt(v)} v - \frac{lt(r_i)}{lt(v)}v =$

$q_i v + r_i = u$

#### Proving the degree condition

Now we need to prove that  if the loop terminates:

$deg(v) > deg(r)$ or $r = 0$

We split this up into two cases: $r = 0$ and $r \neq 0$. If $r = 0$ then we are done. If $r \neq 0$ we have a little more work to do.

Looking at the loop we can see that the only way it will terminate is if

$\neg (lt(v) | lt(r))$

So we need to prove that

$\neg (lt(v) | lt(r)) \Rightarrow deg(v) > deg(r)$

This is actually pretty straightforward. By definition of the leading term of a polynomial we know:

$lt(v) = c_v x^{deg(v)}$

$lt(r) = c_r x^{deg(r)}$

So by construction the term $\frac{c_r}{c_v} x^{deg(r) - deg(v)}$ is the result of dividing $lt(r)$ by $lt(v)$, and $\neg (lt(v) | lt(r))$ holds if and only if this term is not a well-formed monomial.

There are only two ways the monomial $\frac{c_r}{c_v}x^{deg(r) - deg(v)}$ could be undefined: either $c_v = 0$ or $deg(r) - deg(v) < 0$. But the division algorithm assumes that $v \neq 0$, so $c_v \neq 0$ and it must be that:

$deg(r) - deg(v) < 0$

Which is equivalent to the statement:

$deg(v) > deg(r)$

Which is exactly what we set out to prove! Now we know that if the algorithm terminates it produces the correct quotient and remainder.

### Proving termination

Producing the correct result when the algorithm stops is great, but does it always stop? Yes. The outline of why is below:

1. Initially $r = 0$ or $deg(r) \geq 0$
2. A strictly decreasing sequence of non-negative integers must be finite
3. After every iteration of the loop either the degree of $r$ decreases or $r$ becomes zero.

The first statement follows from the definition of the degree of a polynomial. The degree of a polynomial is a non-negative integer if it is defined, and is undefined for the zero polynomial.

The second statement is just the well-ordering of the non-negative integers. Combined with statement 3 it means the loop can run only finitely many times: either $r$ becomes zero, which ends the loop, or $deg(r)$ keeps dropping until $lt(v)$ no longer divides $lt(r)$, which also ends the loop.

Proving statement 3 is more involved. Let's suppose that the value of $r$ going into the loop is $r_i$ and the value after an iteration is $r_{i+1}$. Then by definition:

$r_{i+1} = r_i - \frac{lt(r_i)}{lt(v)} v$

So we need to prove that either:

$deg(r_i) > deg(r_i - \frac{lt(r_i)}{lt(v)} v)$

or

$r_i - \frac{lt(r_i)}{lt(v)} v = 0$

We can split $r_i$ into its leading term plus the lower-order terms $r_l$:

$r_i = lt(r_i) + r_l$

And $v$ as:

$v = lt(v) + v_l$

With this rewrite we can see that:

$r_{i+1} = lt(r_i) + r_l - \frac{lt(r_i)}{lt(v)} (lt(v) + v_l) =$

$lt(r_i) + r_l - \frac{lt(r_i)}{lt(v)} lt(v) - \frac{lt(r_i)}{lt(v)} v_l =$

$lt(r_i) + r_l - lt(r_i) - \frac{lt(r_i)}{lt(v)} v_l =$

$r_l - \frac{lt(r_i)}{lt(v)} v_l$

Now we have a few cases to consider. First let's suppose that: $r_l \neq 0$, $\frac{lt(r)}{lt(v)}v_l \neq 0$, and $r_l - \frac{lt(r)}{lt(v)} v_l \neq 0$.

Then by the properties of polynomial degree we know that:

$deg(r_l - \frac{lt(r)}{lt(v)} v_l) \leq$

$max(deg(r_l), deg(\frac{lt(r)}{lt(v)} v_l))$

By definition $deg(r_l) < deg(r)$, and $deg(v_l) < deg(v)$.

Now write out the leading terms explicitly:

$lt(v) = c_v x^{deg(v)}$

$lt(r) = c_r x^{deg(r)}$

Then:

$\frac{lt(r)}{lt(v)} = \frac{c_r}{c_v} x^{deg(r) - deg(v)}$

so:

$deg(\frac{lt(r)}{lt(v)} v_l) = deg(\frac{lt(r)}{lt(v)}) + deg(v_l) =$

$deg(r) - deg(v) + deg(v_l)$

But since $deg(v_l) < deg(v)$ we know:

$deg(r) - deg(v) + deg(v_l) < deg(r) - deg(v) + deg(v) = deg(r)$

So $deg(\frac{lt(r)}{lt(v)} v_l) < deg(r)$ and $deg(r_l) < deg(r)$. It follows that:

$deg(r_{i+1}) \leq max(deg(r_l), deg(\frac{lt(r)}{lt(v)} v_l)) < deg(r)$

So the degree of $r$ reduces after each step.

That resolves the most complicated case. Now let's take a look at the others: first suppose that $r_i - \frac{lt(r_i)}{lt(v)} v = 0$. Then $r_{i+1} = r_i - \frac{lt(r_i)}{lt(v)} v = 0$, so statement 3 is true and we are done.

Finally suppose that $r_l = 0$ or $\frac{lt(r_i)}{lt(v)} v_l = 0$. Then $r_{i+1}$ is just whichever of the two terms $r_l$ and $-\frac{lt(r_i)}{lt(v)} v_l$ remains, and we showed above that each of these is either zero or has degree less than $deg(r)$. So in every case either $r_{i+1} = 0$ or $deg(r_{i+1}) < deg(r_i)$: statement 3 holds, the algorithm always terminates, and when it terminates it produces the correct quotient and remainder.