The post How Important Is Age in MMA? appeared first on Dillon Huff.

Conventional wisdom in MMA holds that fighters peak somewhere between their mid twenties and their early thirties, with fighters in higher weight classes peaking later.

The idea that athletes reach their peak potential as late as their early thirties didn't match up with data I had seen about Olympic track record setters, which suggested that athletes peak at around 23 years old. Granted, Olympic running is a completely different set of sports, but considering how much more brutal MMA is you would think athletes would peak even earlier.

Thankfully there is some empirical data on how MMA fighters' performance changes with age. The Sherdog fight database is a comprehensive database of MMA fights and fighters that includes over 200,000 fights going back to 1980.

Not all of the 200,000 fights are relevant for age studies. There are 3 types of fights that need to be filtered out:

- Fights where the database does not include the age of one (or both) of the fighters
- Fights that ended in no-contests or disqualifications. These fights don't say anything about the ability of the 2 fighters involved. I also excluded draws, but that is a minor adjustment since draws are pretty rare.
- Fights where the 2 opponents were the same age. In this analysis I only use the year of birth and year of the fight to compute ages, which may introduce a slight bias

After filtering out the types of fights listed above there were 52,299 fights as of January 6, 2018. Of the 52,299 fights I looked at, 28,952 were won by the younger fighter, so on average the younger fighter wins about 55% of the time. That data says that youth is an advantage, but it doesn't say much about how fast fighters' performance deteriorates with age.

To get an idea of when fighters start to decline we can break up the Sherdog fight statistics by age. For every fight in the database (ignoring no contests and draws) we check the age (year of birth) of the two fighters and the year of the fight. If the two fighters were different ages we add one fight to each age group's total fight tally, and then we add one fight to the win tally of whatever age the winning fighter was. For example, if the entire fight database contained only 1 fight between a 25 year old and a 27 year old, and the 25 year old won, we would compute that 25 year olds have 1 fight and 1 win, for a win percentage of 100%, and 27 year olds have 1 fight and 0 wins, for a win percentage of 0%.
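The tallying procedure above is easy to sketch in code. Here is a minimal C++ version; the `Fight` record and its field names are illustrative, not the actual Sherdog schema:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Hypothetical fight record: birth years of both fighters and the fight
// year. Same-age fights are skipped, mirroring the filtering above.
struct Fight {
    int winner_birth_year;
    int loser_birth_year;
    int fight_year;
};

struct Tally { int fights = 0; int wins = 0; };

// For every fight where the two fighters differ in age, credit one fight
// to each fighter's age bucket and one win to the winner's bucket.
std::map<int, Tally> win_rate_by_age(const std::vector<Fight>& fights) {
    std::map<int, Tally> table;
    for (const Fight& f : fights) {
        int winner_age = f.fight_year - f.winner_birth_year;
        int loser_age = f.fight_year - f.loser_birth_year;
        if (winner_age == loser_age) continue; // excluded from the study
        table[winner_age].fights++;
        table[winner_age].wins++;
        table[loser_age].fights++;
    }
    return table;
}
```

Running this on the single-fight example from the text reproduces the 1-win/1-fight and 0-win/1-fight buckets.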

The chart below shows the win rate for MMA fighters by age. As you can see aging starts very young. 22 year olds have the best average win rate at about 55%. Through the mid twenties the win rate declines slightly, and then starts to drop precipitously around age 27.

To be fair, some of this rapid decline is probably due to the fact that as fighters age they move up the ranks and face tougher competition. However, the increase in competition quality is offset by the fact that fighters' skills tend to improve as they age due to additional practice time.

At all age gaps the younger opponent has an advantage, and the youth advantage grows quickly as the age gap expands. Fighters beat opponents who are a year older about 52% of the time, and they beat opponents who are 15 years older about 64% of the time.

Of course MMA fighters and promoters are aware of the youth advantage, so fights with wide age gaps are less common than ones with small age gaps. There are 7,729 fights in the database where the opponents were one year apart, but only 14 where the age gap was 25 years.

Because of this the results for fights with age differences larger than 20 years swing wildly. I included them just for show, but they really aren't statistically significant. In the range where we have reasonable numbers of fights (at least a few hundred per category) a fighter's chance of victory goes up about 1% for every year younger they are than their opponent.

Fighters peak around 22 years old. The window for peak performance is between 22 and 27. Like athletes in other sports MMA fighters have a very narrow window in which they can reach peak performance.

Age differences as small as one year confer an advantage, and every year younger a fighter is than his opponent increases his chance of victory by about 1%.

Overall, fighters peak very early, but the data above does not separate fighters out by weight class. The Sherdog dataset includes weight class labels, and in a future post I'll take a look at whether different weight classes age at different rates.


The post GCA: An Open Source Process Planner for 3-Axis CNC Mills appeared first on Dillon Huff.

I spent a lot of time last year writing a process planner for 3-axis mills. While writing the planner, which I called GCA, I learned that there isn't much open source code available for CNC machining or process planning. OpenCAMlib is great for generating toolpaths, but for tasks like feature recognition, access direction selection, and setup selection there just isn't much code out there.

To start fixing this problem I decided to make the source code of GCA public and add some explanation of its capabilities and limitations.

GCA is a basic process planner written in C++ that automatically generates machining plans for 3-axis mills. Below is an example part milled on an Emco F1 using a process plan produced by GCA.

GCA does not require any human guidance to produce a machining plan. It takes in the part the user wants to make as an STL file, the workpiece the user wants to make it out of, and the tools and fixtures that are available. From these inputs GCA produces a complete, ready to use process plan. The plan consists of a list of setups complete with G-code programs for every toolpath needed to make the part.

GCA requires only a few inputs to produce a process plan for a part.

- An STL file containing the part to be manufactured
- A list of the available cutting tools for the mill. GCA supports flat nosed and ball nosed tools
- The dimensions of the stock
- The dimensions of the vice that will be used, and the heights of any available parallel plates

GCA produces a list of setups that when carried out on a 3-axis mill will manufacture the given part from the given stock. Each setup is complete with a description of how to place the part in the vice, which parallel plates to use (if any), ready to run G-code programs for each toolpath, and instructions on which tool to use in each path.

To see some examples of how to use GCA's C++ API see the tests in the GitHub repo. The function that takes in the inputs and converts them into a process plan can be found here.

GCA is a feature based process planner. Planning takes place in 4 phases: stock alignment, feature recognition, setup ordering, and toolpath generation.

Making a part with a mill is a process of gradually removing material from the initial stock until it has the shape of the desired part, but during planning GCA (and all modern CAM software) relies on a slightly different model of machining. We imagine that the part is an object that is trapped within the workpiece like a diamond surrounded by rock. In this model the process planner is like a gemcutter carefully removing worthless material to reveal the precious object underneath. There is a crucial difference though: gemcutters do not get to decide how nature will place a diamond in a rock, but a process planner does get to decide how the final part is oriented inside the initial stock. Making this decision about stock alignment is the first step of process planning.

Stock alignment determines which faces of the part are prismatic in the first setup of the plan. When a workpiece is gripped by a vice two of its faces must be gripped by the jaws of the vice, and one of the faces that is perpendicular to the other two (and antiparallel to the tray of the mill) must be seated on the base of the vice or on parallel plates just above it. The face that is antiparallel to the tray of the mill (facing the floor) is called the locating face of the setup, and the two surfaces in contact with the jaws are called the clamping faces. Suppose the locating face has normal n. Then faces with normal -n are horizontal with respect to the mill table, and faces perpendicular to n are vertical with respect to the mill table. Hence the choice of locating face for the fixture determines which part surfaces are prismatic in the setup. The vector -n, the negative of the normal of the locating face, is called the mill orientation or access direction.
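The horizontal/vertical classification can be sketched as a pair of dot product tests against the access direction (the negative of the locating face normal). This is an illustrative sketch, not GCA's actual code:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// A face is horizontal with respect to the mill table if its normal is
// parallel to the access direction: |dot(a, b)| == |a| * |b|.
bool is_horizontal(const Vec3& face_normal, const Vec3& access_dir,
                   double eps = 1e-9) {
    double c = std::fabs(dot(face_normal, access_dir));
    double lens = std::sqrt(dot(face_normal, face_normal) *
                            dot(access_dir, access_dir));
    return std::fabs(c - lens) < eps;
}

// A face is vertical if its normal is perpendicular to the access direction.
bool is_vertical(const Vec3& face_normal, const Vec3& access_dir,
                 double eps = 1e-9) {
    return std::fabs(dot(face_normal, access_dir)) < eps;
}
```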

By definition the locating face of the first setup is a face of the initial stock, so any surface of the part that is not parallel or perpendicular to one of the normal vectors of the faces of the initial stock has no chance of being prismatic in the first setup. In fact, for most parts made on a 3-axis mill every setup has a locating face that is parallel to one of the normals of the six initial faces of the stock, so all of the prismatic surfaces in each setup are determined by the time stock alignment is finished.

GCA's alignment algorithm first picks three orthogonal directions u, v, and w, computes the oriented bounding box of the part along those 3 directions, and then computes the shape of the stock by inflating the bounding box until it reaches the dimensions of the stock. This box is then converted into a triangular mesh and returned.

The vectors u, v, and w are computed as follows. First, find all flat surfaces of the part that lie on its convex hull. Then pick out the pair of orthogonal surfaces that has the largest total surface area. u and v are the normals of these two surfaces, and w = u × v. If there is no pair of orthogonal flat surfaces of the part on the convex hull the algorithm fails. A part that does not have two orthogonal flat surfaces on its convex hull will almost certainly require a custom fixture to be machined on a 3-axis mill.
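The geometric pieces of this step can be sketched as follows. Calling the two chosen surface normals u and v (illustrative names, not the align_workpiece internals), the third axis is their cross product, and the oriented bounding box is built from the extent of the part's vertices along each axis:

```cpp
#include <array>
#include <cassert>
#include <limits>
#include <vector>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Third bounding box axis from the two chosen normals: w = u x v.
Vec3 cross(const Vec3& u, const Vec3& v) {
    return {u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]};
}

// Extent of a point cloud along a direction: the [min, max] interval of
// vertex projections onto that axis. Three such extents along u, v, and
// w give the oriented bounding box of the part.
std::array<double, 2> extent_along(const std::vector<Vec3>& pts,
                                   const Vec3& axis) {
    double lo = std::numeric_limits<double>::infinity();
    double hi = -lo;
    for (const Vec3& p : pts) {
        double d = dot(p, axis);
        if (d < lo) lo = d;
        if (d > hi) hi = d;
    }
    return {lo, hi};
}
```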

The actual function that performs this surface analysis is called align_workpiece and can be viewed here in the GitHub repo.

During this step GCA selects a set of directions that look like promising mill orientations. There are two competing incentives at work during this phase. Every setup used in the final plan will have a mill orientation picked out from this set, so we want to pick a large enough set of directions to allow setup ordering to find a good plan. On the other hand the runtime of the setup ordering algorithm grows quickly with the number of possible access directions, so we want to control the size of the direction set to keep runtimes down.

As we have stressed before, freeform features are both more time consuming and more expensive to cut, so minimizing the number of them that are used in a plan is key. Direction selection actually outputs two sets of directions. One group, which we call prismatic only, contains directions along which we will only look for prismatic features, which are features whose volume is enclosed by prismatic surfaces. In the other group, called freeform, we will look for both prismatic and freeform features. During direction analysis we only allow freeform directions as a last resort.

The code that performs direction analysis can be found here.

The goal of feature recognition is to produce data structures that break up the millable volume along each candidate direction into features, describe the order in which each of these features can be cut, and tell toolpath generation what tools from our initial set can be used to cut them.

GCA's feature recognition algorithm is located here.

A feature is just a word for a volume that we know we can generate toolpaths for. In our system each feature along a direction corresponds to one surface that needs to be cut out and the volume above it that needs to be removed to reach it. All features in a direction, say d, can be represented as extrusions of surfaces along d. Features are divided into one of two categories: prismatic and freeform. Prismatic features are defined by a flat surface and a depth, so they can be represented as a polygon and a depth, while freeform features are represented as a triangular mesh and a depth.

Prismatic features have two additional property flags associated with them: open and through. Open features are features whose side boundary is the initial stock, and through features are features that pass all the way through the part.

Rather than just returning a list of features, our feature recognition algorithm returns a tree of features that describes the spatial dependencies between them. In order for a setup to be legal a feature can only be cut out once its parent in the tree has been cut.
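A plausible data layout for prismatic features and the feature tree might look like the following sketch (illustrative field names, not GCA's actual classes):

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Point2D { double x, y; };

// A prismatic feature: a base polygon plus an extrusion depth along the
// access direction, plus the open/through flags described above. Tree
// edges record which feature must be cut before its children.
struct Feature {
    std::vector<Point2D> base;   // outline of the base surface
    double depth = 0.0;          // extrusion length along the access direction
    bool open = false;           // side boundary is the initial stock
    bool through = false;        // passes all the way through the part
    std::vector<std::unique_ptr<Feature>> children; // cut after this feature
};
```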

Eventually the volumes that we select will have to be translated into toolpaths, which will mean having to select tools to use to cut away the material making up the feature. Not all tools that are available can be used on any given feature, both because the feature may be too small and because the feature may be so deep in the part that it cannot be reached by some tools. The last goal of feature recognition is to produce a map from features in the tree to lists of tools that could safely be used to cut out each feature.

We generate the feature tree using an algorithm very similar to the feature recognition algorithm used by CyberCut. However, unlike the CyberCut feature recognition algorithm our algorithm works along any access direction, not just the 6 orthonormal directions defined by the initial stock.

First we find all of the surfaces of the part that could be part of the base of a feature along d. Next we sort the surfaces by depth along d and build a polygon describing the surface of the top of the stock along d. Finally we descend into the part along d, recursively building up trees of features by subtracting surfaces from the initial stock outline polygon.

The first step in feature recognition is to discard all triangles of the part mesh that are not visible from a parallel view camera looking at the part along d. The remaining triangles (those that are visible) are the triangles of the mesh that could form the base of a feature along d.

Next these triangles are grouped into connected components. These are the possible base surfaces of features along d. Now we are at an important juncture. We have discarded all the vertical triangles along d, so every one of these surfaces is either horizontal or freeform. The horizontal surfaces will eventually become the bases of prismatic features, while the freeform surfaces will become the bases of freeform features.
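The visibility filter from the previous step reduces to a dot product test. A minimal sketch, under the illustrative convention that the camera looks along d, so a visible triangle's outward normal points back toward it (vertical triangles, with dot product zero, are discarded too):

```cpp
#include <array>
#include <cassert>
#include <vector>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

struct Triangle { Vec3 normal; };

// Keep only triangles facing the camera that looks along direction d.
std::vector<Triangle> visible_from(const std::vector<Triangle>& tris,
                                   const Vec3& d) {
    std::vector<Triangle> out;
    for (const Triangle& t : tris)
        if (dot(t.normal, d) < 0) out.push_back(t);
    return out;
}
```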

Our feature recognition algorithm has two phases. The first phase identifies prismatic features, and the second recognizes freeform features. To build the set of possible feature bases for the first phase we cover each freeform surface along d with a flat virtual surface. If a freeform surface S along d is comprised of a list of triangles T, then the virtual surface for S is the flat surface formed by projecting S onto the plane whose normal is d and whose defining point is the vertex among the triangles of T that has the largest signed distance along d.

Once the virtual surfaces have been added we have a list of flat surfaces, all with normals parallel to d. Next the list is sorted by distance along d. Then the sorted list is broken into a sorted list of lists, where each inner list contains all surface polygons that have the same depth along d. Then the top outline of the stock is created. The stock outline is the starting surface of the volume decomposition that is to follow.

The volume decomposition procedure is recursive. If there are no more surface levels left then the polygon describing the current surface level is a through feature and the decomposition is done. If there is a level left then we subtract the next surface level from the current level. If the subtraction does not change the current level polygon (there is no overlap between the current level and the surface polygons at the next level) then we recurse without changing the active parent node. If the subtraction does produce a new set of polygons we generate a non-through feature with the current level as its base, add that feature to the tree, and recursively call the function to generate features with each polygon in the subtraction result as the new current level and the feature that was just created as the new parent. The figures above show a simplified 2D representation of this volume decomposition.
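The recursion can be illustrated with a simplified 1D analogue, in the spirit of the simplified figures: each surface level becomes a set of cells instead of a set of polygons, and polygon subtraction becomes set difference. This is illustrative only, and it collapses the subtraction result into a single set rather than a list of polygons:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <set>
#include <vector>

using Outline = std::set<int>;

struct Node {
    Outline base;
    bool through = false;
    std::vector<std::unique_ptr<Node>> children;
};

// Recursive decomposition: no levels left -> the current outline is a
// through feature; otherwise subtract the next level. If nothing changed,
// keep descending; if something was removed, recurse into the remainder
// with the freshly created feature as the new parent.
std::unique_ptr<Node> decompose(const Outline& current,
                                const std::vector<Outline>& levels,
                                std::size_t i) {
    auto node = std::make_unique<Node>();
    node->base = current;
    if (i == levels.size()) {      // no more levels: through feature
        node->through = true;
        return node;
    }
    Outline remaining;
    for (int c : current)
        if (!levels[i].count(c)) remaining.insert(c);
    if (remaining == current) {    // no overlap: unchanged, keep descending
        node->children.push_back(decompose(current, levels, i + 1));
    } else if (!remaining.empty()) { // overlap: recurse into the remainder
        node->children.push_back(decompose(remaining, levels, i + 1));
    }
    return node;
}
```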

After feature generation is complete a post processing phase walks down the tree tagging features as open if they share a boundary with the stock.

If this direction was not labeled as a freeform direction then we are done with feature recognition. If it was then we build freeform features. This is done by going through each original freeform surface and building a freeform feature whose base is the original surface and whose top is the top of the covering virtual surface.

Once the feature tree has been produced we can decide which of the input tools can be used to mill each feature. The pictures below show several reasons why a tool might not be able to cut out a feature.

Tool access testing for prismatic and freeform features is very similar in GCA, so here I'll first describe tool access tests for prismatic features. The full algorithm to test whether a tool can access a feature is in the codebase here.

Prismatic features are extrusions defined by a base polygon P that is horizontal with respect to the access direction d, and a depth l, which is the length of the extrusion of P along d.

First we check to make sure the cutter is long enough to cut the walls of the feature. If the feature in question is the root of the tree then there are no walls (it is a facing operation), so we move on. If the feature is not the root feature then a tool cannot cut it unless its cut length is greater than the depth of the feature.

Next we compute the volumes swept out by the shank and tool holder of the tool during machining of the target feature. If these volumes intersect the part the tool cannot access the target feature; if they do not, it can.
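Putting the two checks together gives a predicate like the following sketch (illustrative field names; the swept-volume intersection test is reduced to a boolean placeholder, since the real test requires solid geometry):

```cpp
#include <cassert>

struct Tool { double cut_length; };

// A non-root feature can only be cut by a tool whose cut length exceeds
// the feature depth; a root feature is a facing operation with no walls,
// so any cut length works. Either way the swept volumes of the shank and
// holder must not intersect the part.
bool can_access(const Tool& tool, double feature_depth, bool is_root,
                bool shank_or_holder_hits_part) {
    if (!is_root && tool.cut_length <= feature_depth) return false;
    return !shank_or_holder_hits_part;
}
```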

Access testing for freeform surfaces proceeds in the same way, except that the initial polygon is formed by projecting the surface onto the plane defined by the normal vector d and the lowest point on the surface being access tested.

If there are no usable tools for a feature we have two options: delete it and all of its descendants in the feature tree, or try to modify the feature to improve accessibility. In our system, if the feature is not a through feature then we delete it. If the feature is a through feature along d, we check whether the opposite direction -d is also used as an access direction, and if so we check whether any features along -d subsume part of the base of the inaccessible feature. If a subsuming feature exists we shorten the inaccessible feature along d until there is no overlap with any feature along -d and retry the tool access test.

In all but the simplest parts feature recognition will recognize more features than are actually needed to cut the part.

With feature recognition finished we have data structures that describe what volumes can be cut in each of the candidate access directions, but we don't want to use all of these directions. During setup ordering GCA uses a greedy algorithm to select a sequence of access directions to use in the final plan, find fixtures for those directions, pick which features to use in each direction, and clip any remaining features to ensure that the features used in the final plan do not have any intersections.

GCA uses the CGAL implementation of Nef polyhedra to update feature shapes and the workpiece shape as directions are selected. These operations are the most time consuming part of planning, but they are essential for determining how much volume is left to cut in each direction and what the shape of the workpiece is after each new setup is selected.

At each step GCA selects the next access direction using the following rule: if there is any remaining direction whose feature tree contains an outer, through feature whose base is not square then pick that direction. If more than one direction meets this criterion, the one whose base polygon has the most interior angles that are not right angles is returned. Otherwise pick the direction which has the most remaining millable volume.

Choosing the direction with the most remaining volume is not a good choice for some parts, since removing a lot of material in an earlier setup can reduce the structural stability of the workpiece. The current direction selection rule works well for parts that are solid enough to be held in the vice without deformation, but it works poorly for parts that are more fragile. A more sophisticated rule that takes the stress associated with fixturing into account would be needed to handle parts with thin walls.

The major setup ordering function can be found here.

Once the sequence of setups has been determined all that is left to do is generate G-code to cut out each volume.

Toolpath generation takes in the list of setups and converts them to a list of completed setups in which the list of features in each setup is replaced by a list of toolpaths. Each toolpath includes G-code to control the mill motions as well as comments describing the tool to be used.

GCA does not have support for special tools like chamfers or corner rounding endmills, though I have added some stubs for chamfer and fillet detection to the codebase.

The part must be fixtured only using a vice and (optionally) parallel plates to lift the part in the vice jaw. GCA does not support custom fixtures.

GCA has been tested on an Emco F1 mill run using a LinuxCNC controller. The G-code it produces should be LinuxCNC compliant.

Floating point errors are a constant issue. I've considered writing an entire post on the challenge of preserving accuracy and avoiding floating point exceptions during the long, complex geometry operations needed for process planning.

The idea of using computers to do process planning dates back to the late sixties, but process planning in commercial CAM systems is still in its infancy. Even highly automated CAM systems like FeatureCAM still require significant human effort to produce a part.

In academic research the process planner designs that have been tried fall into two categories: feature-based and operation-based.

Feature-based systems like Van Houten et al.'s PART system, CyberCut, GCA, and Fu's concurrent planner split the planning problem into two phases. First they try to break up the volume that needs to be removed from the stock into volumes that correspond to individual machining operations (features), and assign the features to setups. Once the proto-plan consisting of a list of setups and assigned features is created it can be translated into a complete plan by generating appropriate toolpaths for the features, a task which is easy by design since each feature corresponds to a machining operation.

Operation based planners take a completely different approach. They try to go directly from the capabilities of the available machine and cutting tools to a plan that will cut out the part. I don't know of any commercial operation based planners, but some academic operation based planners include Nelaturi et al. and Ertelt.


The post A C++ Header Only Arena Allocator appeared first on Dillon Huff.

The linked GitHub repo contains a single header file arena allocator. The allocator allocates a single chunk of memory when it is created. The chunk is deallocated when the arena_allocator object is destroyed.

Allocation consists of incrementing a pointer and returning its previous value. In debug mode the allocator will crash with an assert if an allocation that exceeds available memory is requested.

For convenience the allocator includes a templatized allocate function that automatically determines the size of the allocated object and handles casting boilerplate for you.
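A minimal sketch of this kind of allocator (illustrative, not the repo's exact API; a production version would also round allocations up to a suitable alignment):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>

// Bump-pointer arena: one chunk is allocated up front, allocation bumps
// an offset, and everything is freed at once in the destructor.
class arena_allocator {
  public:
    explicit arena_allocator(std::size_t capacity)
        : buffer_(static_cast<char*>(std::malloc(capacity))),
          capacity_(capacity), offset_(0) {}

    ~arena_allocator() { std::free(buffer_); }

    void* allocate(std::size_t bytes) {
        // Crash with an assert if the arena is exhausted (debug mode only).
        assert(offset_ + bytes <= capacity_ && "arena exhausted");
        void* p = buffer_ + offset_;
        offset_ += bytes;
        return p;
    }

    // Templatized convenience allocate: the object size and the cast are
    // handled automatically.
    template <typename T>
    T* allocate() {
        return new (allocate(sizeof(T))) T();
    }

  private:
    char* buffer_;
    std::size_t capacity_;
    std::size_t offset_;
};
```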


The post A C++ Bit Vector Arithmetic Library appeared first on Dillon Huff.

bsim is a header only library that provides arbitrary length bit vectors.

There are many bit vector libraries designed for representing sets, for example std::bitset. Unlike these libraries bsim was designed with hardware simulation in mind. bsim supports bit operations such as "xor", "and", and "or" as well as unsigned and two's complement arithmetic and comparisons.

bsim supports both static and dynamic bit vectors. Static bit vectors, like those provided in std::bitset, are a class template with a template argument for the length of the bit vector. They are useful for applications where the length of a bit vector is known at compile time.

Dynamic bit vectors support the same operations, but they are used when the length of the bit vector is not known at compile time.

- Bitwise logical operators: and, or, nand, nor, xor, not
- Bitwise comparisons: equal and not equal
- Reduction operations: "andr", "orr", "xorr"
- Shifts: left shifts (shl), arithmetic right shifts (ashr), and logical right shifts (lshr).
- Unsigned integer arithmetic: add, subtract, multiply, and divide
- Unsigned integer comparisons: greater, less, greater than or equal, less than or equal
- Signed integer arithmetic: add, subtract, and multiply
- Signed integer comparisons: greater, less, greater than or equal, less than or equal

I am currently working on adding support for signed division as well as signed and unsigned remainder operations.

bsim supports operating on arbitrary length bit vectors, but for operations on bit vectors with fewer than 65 bits the library uses conditional compilation to dispatch to native instructions rather than arbitrary length algorithms.

Use of native operations comes at a slight storage cost. For example a 33 bit vector must be stored as 64 bits in order to use 64 bit operations without changing bits in memory beyond the object.
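The rounding rule can be sketched as follows (illustrative helper functions, not bsim's internals): pad the logical width up to the next native word size, and mask results back down so the pad bits never change the answer.

```cpp
#include <cassert>
#include <cstdint>

// Smallest native storage width (8, 16, 32, or 64 bits) that can hold a
// bit vector of the given logical width.
int storage_bits(int logical_bits) {
    if (logical_bits <= 8) return 8;
    if (logical_bits <= 16) return 16;
    if (logical_bits <= 32) return 32;
    return 64; // vectors of 65+ bits fall back to arbitrary length code
}

// Mask a value down to its logical width so the pad bits stay zero after
// a native-width operation.
uint64_t mask_to_width(uint64_t value, int logical_bits) {
    if (logical_bits >= 64) return value;
    return value & ((uint64_t{1} << logical_bits) - 1);
}
```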

If you want to use static bit vectors then copy the file src/bit_vector.h into your project.

If you want to use dynamic bit vectors then use src/dynamic_bit_vector.h instead.


The post A C++ 11 Header Only Convenience Function Library appeared first on Dillon Huff.

Having spent several years programming in C++ every day I've had time to build up a library of functions that make programming easier. The focus of this library is on making calls to STL functionality less verbose, wrapping up common STL usage patterns like calls to erase after remove, and adding a few functions that I thought were missing.

This library is not dogmatic. The only guiding principles were what I found convenient as I was programming. However, over time two themes emerged.

Let's face it, most of the time when you call an STL function that uses iterators the inputs are the begin and end iterators of the data structure.

As a general rule in any language the most frequently used words should be the shortest. This convenience library includes short functions that take in references to a data structure and operate on the entire thing.

The function calls for sorting, finding maxes and mins, and removing items from collections have short call signatures that don't require the use of iterator related functions like begin and end.
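Such wrappers look roughly like the following sketch (illustrative names and signatures, not necessarily afk's exact API): each one just forwards the container's begin and end, and the erase-after-remove pattern collapses into a single call.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

namespace afk {

// Sort an entire container without spelling out begin/end at the call site.
template <typename C>
void sort(C& c) { std::sort(std::begin(c), std::end(c)); }

// The erase-remove idiom wrapped into one call.
template <typename C, typename F>
void delete_if(C& c, F pred) {
    c.erase(std::remove_if(std::begin(c), std::end(c), pred), std::end(c));
}

} // namespace afk
```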

Pre-C++11 there was no convenient way to return a data structure that had been created in a function as a value. Thanks to move semantics, returning data structures as values is no longer a big deal, and this convenience library takes advantage of that.

A very common pattern in searching is to compare objects by a function that generates a value from the items being compared and then compares the generated values. afk wraps up this functionality into functions max_e and min_e.

The result is returned as a value rather than as an iterator.
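A sketch of how max_e might be implemented (the real afk signature may differ; behavior on an empty collection is undefined here, as with std::max_element on an empty range):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

namespace afk {

// Find the element whose key (computed by the given function) is largest,
// and return the element itself by value rather than an iterator.
template <typename C, typename F>
auto max_e(const C& c, F key) -> typename C::value_type {
    return *std::max_element(
        std::begin(c), std::end(c),
        [&](const typename C::value_type& a, const typename C::value_type& b) {
            return key(a) < key(b);
        });
}

} // namespace afk
```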

Copying and selection functions

Delete functions

afk includes functions for performing intersection, union, and difference on vectors and sets.

Chain functions (apply between). Functions that operate on adjacent elements.


The post Quantifier Elimination Over Linear Real Arithmetic by Enumerating all Possible Root Orders appeared first on Dillon Huff.

LRA includes all first order formulas whose atomic formulas consist of a linear expression and a comparator. For example:

is a formula in linear real arithmetic, as is:

A formula like:

is not in LRA because one of its terms is raised to the power of 2.

Let's eliminate the quantifier from:

Readers who are good at algebra can probably solve this problem in their heads, but to get a sense of what an algorithm for quantifier elimination over LRA would look like let's go through it in detail.

There are two linear expressions of interest:

A plot of these looks like so:

The function is negative at every point from to , is zero at and is positive at every point after .

The function is positive from to , is zero at and is negative at every point after that.

We can express this compactly in a table (called a sign table) like so:

To satisfy our original formula we need a value of where is zero and is positive. We can check if such a value exists by scanning each row of the table and checking if the column for is and the column for is . Row two, corresponding to the point satisfies this condition, so the formula is the quantifier free equivalent of the original formula.

Notice that because we know that both functions involved are lines, and because we know each line's slope, the structure of the sign table is completely determined by the *ordering* of the roots of each line.

This example had only one variable, so it was possible to solve each linear equation explicitly for its root, and then use the values of the roots to determine the layout of the sign table, but nothing in our analysis changes if we replace the numerical values of the roots with symbols like and , compute the order in which the roots occur, and then use the ordering of the roots to compute the sign table.
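In the one-variable case the sign table can even be computed numerically. A minimal sketch (illustrative; the article's algorithm works symbolically on root orderings, whereas this evaluates a single linear function over the cells induced by its sorted numeric roots):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sign of f(x) = a*x + b at a point: -1, 0, or +1.
int sign_at(double a, double b, double x) {
    double y = a * x + b;
    return (y > 0) - (y < 0);
}

// One row of a sign table: the sign of f on each cell of the decomposition
// (-inf, r1), {r1}, (r1, r2), {r2}, ..., (rk, +inf), sampling each open
// interval at a convenient interior point.
std::vector<int> sign_row(double a, double b, std::vector<double> roots) {
    std::sort(roots.begin(), roots.end());
    std::vector<int> row;
    double left = roots.empty() ? 0.0 : roots.front() - 1.0;
    row.push_back(sign_at(a, b, left));          // leftmost interval
    for (std::size_t i = 0; i < roots.size(); ++i) {
        row.push_back(sign_at(a, b, roots[i]));  // the root itself
        double right = (i + 1 < roots.size())
            ? (roots[i] + roots[i + 1]) / 2.0    // midpoint of the gap
            : roots[i] + 1.0;                    // rightmost interval
        row.push_back(sign_at(a, b, right));
    }
    return row;
}
```

For f(x) = x - 1 with its single root at 1, this produces the row (negative, zero, positive), matching the sign table structure described above.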

The catch for more complex formulas is that when a problem contains more than one variable there isn't any one correct ordering of the equation's roots since the ordering of the roots depends on the values of the variables.

The solution to this problem is to compute every possible ordering of the roots and compute a sign table for each ordering. We will see how this works in the next example.

Let's eliminate the quantifier from:

This formula contains 2 linear expressions:

Since we are eliminating the quantified variable from our formula let's think of these as linear expressions in the quantified variable:

The first of these expressions is zero when:

And the second has a root when:

For brevity let's write:

Depending on the value of there are 3 possible orderings for these roots:

For each of these three possibilities we can construct a sign table that will break up the axis into intervals along which both of our expressions have constant signs.

For the sign table looks like:

The sign table has 5 intervals. is linear in x, has a slope of 3 in and has a root at , so it is zero at , negative at all points before and positive after . For the table is similar.

For the sign table looks like:

Notice that because we do not need separate intervals for each of them like we did in the case where .

Finally the sign table for is:

Note that given one of our 3 root order constraints we can decide if the original formula is true by table lookup. The problem is solvable if and only if and so it has a solution only if the sign table has a row where the sign of is negative and the sign of is positive or zero.

By inspection we can see that this is only true when , so the only way there is an such that our formula is satisfied is if:

which can be simplified to the condition:

simplifying further:

Notice that this last expression is an expression in LRA! In fact all three of our possible root orderings can be written as formulas in linear real arithmetic.

The other crucial point is that since the real numbers are a totally ordered field, one of the three orderings must be true, so we can write:

is equivalent to:

Which is equivalent to:

Now for brevity let's write:

So we have:

Now we can distribute F over the disjunction to get:

Then we can distribute the existential quantifier over each formula:

And since does not appear in any of the root order conditions this is just:

But each of the expressions for and is just a logical encoding of a possible root order, and each root order implies a sign table that we can use to decide whether there is any that makes F true, so we can just solve F in each clause of the disjunction via the sign table.

As we already saw in the discussion of sign tables, the only root ordering where there is an that makes is the condition , so our formula becomes:

Which simplifies to:

And we have eliminated the quantifier and all occurrences of !

Suppose we have a quantifier free formula (QFF) in linear real arithmetic with variables .

The idea of the algorithm is to distribute over a disjunction of every possible root ordering, and then find the truth value of under each ordering. For brevity we will write as

Since the real line is totally ordered, one of the possible orders of roots must be true. Suppose is the set of all possible orders; then we can write the formula as:

Distributing F we get:

Distributing the existential quantifier we get:

Now in each conjunction the root ordering holds, so we can evaluate to True or False using the root ordering:

Now the only formulas that appear are the root orderings, , which depend only on and the terms each of which is either or , so we are done.

To shorten the formula we can drop any of the formulas in the disjunction where .

One detail we glossed over is how to use the sign table to evaluate . In both of our examples we could just look at the sign tables and check, but for a computer to do this task we need a more precise procedure.

The idea is to use a recursive algorithm to walk down the formula collecting all intervals in the sign table that satisfy the conditions of the formula. If there is at least one satisfying interval then the formula evaluates to true given the root ordering.

The recursive procedure is as follows:

Where returns the set of all rows of the sign table where linear polynomial has one of the signs in the list .
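In code, the walk might look like the following Python sketch. The formula representation and all names here are my own (the post does not fix a syntax): a sign table is a list of rows, each mapping a polynomial name to its sign on one interval, and a formula is a nested tuple of atoms, ands, and ors.

```python
def rows_with_sign(table, poly, signs):
    # All rows of the sign table where `poly` has one of the given signs.
    return {i for i, row in enumerate(table) if row[poly] in signs}

def satisfying_rows(table, formula):
    # Recursively collect the rows (intervals) that satisfy the formula.
    op = formula[0]
    if op == "atom":
        _, poly, signs = formula
        return rows_with_sign(table, poly, signs)
    if op == "and":
        return satisfying_rows(table, formula[1]) & satisfying_rows(table, formula[2])
    if op == "or":
        return satisfying_rows(table, formula[1]) | satisfying_rows(table, formula[2])
    raise ValueError("unknown operator: " + op)

def exists_satisfying_interval(table, formula):
    # The existential formula holds iff at least one interval satisfies it.
    return len(satisfying_rows(table, formula)) > 0

# Example: "p < 0 and q >= 0" against a three-interval table.
table = [{"p": "-", "q": "-"}, {"p": "-", "q": "0"}, {"p": "-", "q": "+"}]
formula = ("and", ("atom", "p", {"-"}), ("atom", "q", {"0", "+"}))
print(exists_satisfying_interval(table, formula))  # True
```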

The complexity of this algorithm is horrifyingly bad. We have to enumerate all possible orderings of roots to eliminate one quantifier, so for a formula containing expressions it is approximately for a single quantifier.

Much faster algorithms exist. An early algorithm called the Ferrante-Rakoff method can eliminate all quantifiers from a formula with quantifiers in where is a positive constant.

Later Loos and Weispfenning invented an improved method with a smaller constant . That method is implemented in the computer algebra package Redlog.

The post Quantifier Elimination Over Linear Real Arithmetic by Enumerating all Possible Root Orders appeared first on Dillon Huff.

The post Generating Shape Overlap Tests via Real Quantifier Elimination appeared first on Dillon Huff.

http://stackoverflow.com/questions/401847/circle-rectangle-collision-detection-intersection

http://stackoverflow.com/questions/22093749/c-plane-sphere-collision-detection

The answers are usually given by human ingenuity, but as the answerer of the first question listed above points out, algorithms to solve these kinds of shape intersection problems can be invented automatically by a procedure called *cylindrical algebraic decomposition* (CAD).
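For comparison, the usual human-invented answer to the circle/rectangle question is the closest-point ("clamp") test. A minimal Python sketch, with parameter names of my own choosing:

```python
def circle_rect_overlap(cx, cy, r, rx, ry, w, h):
    # Clamp the circle's center to the rectangle [rx, rx+w] x [ry, ry+h];
    # the shapes overlap iff the clamped point lies within the radius.
    nearest_x = max(rx, min(cx, rx + w))
    nearest_y = max(ry, min(cy, ry + h))
    dx = cx - nearest_x
    dy = cy - nearest_y
    return dx * dx + dy * dy <= r * r

print(circle_rect_overlap(0.0, 0.0, 1.0, 0.5, -0.5, 1.0, 1.0))  # True
print(circle_rect_overlap(0.0, 0.0, 1.0, 5.0, 5.0, 1.0, 1.0))   # False
```

CAD arrives at an equivalent (if far less readable) test with no geometric insight at all.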

CAD is the most popular algorithm used for *real quantifier elimination* (RQE). RQE takes any problem in nonlinear real arithmetic (first order logic where all the atomic formulas are real polynomial equations and inequalities) and spits out a logically equivalent formula with no quantifiers.

To get a feel for RQE here are some formulas and their quantifier free equivalents:

is

is

is

is

This procedure can be used to generate intersection tests for 2 shapes like so: Come up with a logical formula, , that is true when the point is in shape 1. Then come up with a formula, , that is true when the point is in shape 2. To find an intersection test perform quantifier elimination on:

and transliterate the resulting formula into the programming language of your choice.

This method can automatically generate intersection tests for any number of shapes, but before we get to that let's look at an example.

A formula using quantifiers to describe whether a circle centered at point with radius , and a square with bottom left corner at and side length intersect is:

where:

Using the quantifier elimination command in Redlog we can produce a new formula that is logically equivalent to the original one, but does not contain or .

Since this formula only uses known values (, , , , , and ) and has no quantifiers it can be transliterated into an executable intersection test in a programming language of the user's choice.

After using Redlog to eliminate quantifiers we can pretty print the resulting formula as a C++ function:

#include<cmath> bool test( const double r, const double l, const double b, const double d, const double a, const double c ) { return ( ( ( ( ( ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( a - c - r >= 0.0 ) ) && ( a - c - l - r <= 0.0 ) ) || ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( a - c + r >= 0.0 ) ) && ( a - c - l + r <= 0.0 ) ) ) || ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( pow( b, 2 ) - 2.0 * b * d + pow( d, 2 ) - pow( r, 2 ) <= 0.0 ) && ( ( ( ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + pow( d, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d - l <= 0.0 ) || ( l * 2.0 * b - 2.0 * d - l >= 0.0 ) ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d - l <= 0.0 ) || ( l * 2.0 * b - 2.0 * d - l >= 0.0 ) ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + pow( d, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( l * 2.0 * b - 2.0 * d - l <= 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + pow( d, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * 
d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( l * 2.0 * b - 2.0 * d - l <= 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + pow( d, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) ) ) || ( ( pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) && ( ( ( ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( l * 2.0 * b - 2.0 * d - l >= 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( l * 2.0 * b - 2.0 * d - l >= 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d >= 0.0 ) || ( l * 2.0 * b - 2.0 * d - l <= 0.0 ) ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 
2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) || ( ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d >= 0.0 ) || ( l * 2.0 * b - 2.0 * d - l <= 0.0 ) ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) ) ) || ( ( pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) && ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( a - c >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( a - c - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( a - c >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + pow( d, 2 ) + 2.0 * d * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) ) ) || ( ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( c, 2 ) + 2.0 * c * l + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) && ( ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d >= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 
pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d - l <= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) || ( ( ( ( ( r != 0.0 ) && ( l != 0.0 ) ) && ( b - d - l <= 0.0 ) ) && ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d - 2.0 * b * l + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + 2.0 * d * l + 2.0 * pow( l, 2 ) - pow( r, 2 ) >= 0.0 ) ) && ( ( b - d >= 0.0 ) || ( pow( a, 2 ) - 2.0 * a * c - 2.0 * a * l + pow( b, 2 ) - 2.0 * b * d + pow( c, 2 ) + 2.0 * c * l + pow( d, 2 ) + pow( l, 2 ) - pow( r, 2 ) <= 0.0 ) ) ) ) ) ); }

As you can see, the result isn't pretty. The downside of automatically generated formulas is that there is no human intuition behind them to make them comprehensible.

In theory this method can handle just about any 3D shape. A few 3D shapes that can be handled with fairly compact formulas:

Spheres:

Axis aligned cubes and cuboids:

Axis aligned ellipsoids:

Below are results for several other shape intersections.

RQE is extremely slow. Any algorithm for RQE is at best , though there are specialized algorithms for the linear and quadratic cases that are faster in practice than general methods.

Redlog seems capable of generating intersection tests for 2D shapes, and some simple 3D shape combinations like sphere-cube intersection.

More complicated 3D shapes like cylinders seem out of reach for Redlog since the logical formulas that can be used to represent them have more variables and higher degree polynomials (4+) that are difficult to eliminate.

The post Generating Shape Overlap Tests via Real Quantifier Elimination appeared first on Dillon Huff.

The post G-code Operations MRR Histogram appeared first on Dillon Huff.

To help avoid machine crashes, I added a final check before emitting G-code. The system would run a simulation of the G-code, calculate the average material removal rate (in cubic inches per minute), and warn me if it was excessive.

We were cutting 6061 aluminum and I wanted to be extremely conservative since our machine is so wimpy, so I set the system to warn me about anything above 0.6 cubic inches per minute.
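The thresholding step itself is simple; here is a hypothetical sketch (names and interface are my own — the real system computed the removed volume from a full G-code simulation):

```python
MRR_WARN_LIMIT = 0.6  # cubic inches per minute; very conservative for 6061 on a light machine

def check_mrr(volume_removed_in3, cutting_time_min):
    # Average material removal rate for one operation, with a warning
    # printed when it exceeds the conservative limit.
    mrr = volume_removed_in3 / cutting_time_min
    if mrr > MRR_WARN_LIMIT:
        print("WARNING: average MRR %.2f in^3/min exceeds %.2f" % (mrr, MRR_WARN_LIMIT))
    return mrr

check_mrr(1.2, 3.0)  # 0.4 in^3/min, no warning
check_mrr(2.4, 2.0)  # 1.2 in^3/min, warns
```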

After a few very conservative part programs went through the simulator without incident I made some changes to the toolpath generation system to speed things up.

The first G-code program I generated with the new system caused a warning, but in a rush I ignored it, thinking it was probably not too bad. As soon as the program reached the toolpath that produced the warning it crashed.

Below is a histogram showing the MRR for each operation I ran on the mill up to and including the crashing operation (on the far right):

It's a bad idea to ignore warnings!

The post G-code Operations MRR Histogram appeared first on Dillon Huff.

The post How to Write an LLDB Plugin with Python: Customized Stepping Logic appeared first on Dillon Huff.

Custom commands allow you to create custom logic for inspecting the state of the target program during a break. Custom stepping logic allows you to create customized logic to determine how to progress through the target program.

As an example we will create a custom step class, FindVar, that will step through a program until it finds a variable named r1.

import lldb

class FindVar:
    def __init__(self, thread_plan, dict):
        self.thread_plan = thread_plan
        self.start_frame = thread_plan.GetThread().GetFrameAtIndex(0)

    def explains_stop(self, event):
        res = self.thread_plan.GetThread().GetStopReason() == lldb.eStopReasonTrace
        return res

    def should_stop(self, event):
        frame = self.thread_plan.GetThread().GetFrameAtIndex(0)
        a_var = frame.FindVariable("r1")
        if not a_var.IsValid():
            print "Havent found r1 yet"
            return False
        else:
            print "Found r1!"
            self.thread_plan.SetPlanComplete(True)
            return True

    def should_step(self):
        return True

The three crucial methods are explains_stop, should_stop, and should_step.

To understand explains_stop, let's look at the output of an ordinary LLDB session. If you run LLDB on a program, say a.out, create a breakpoint at the function main, and then run the program, you should see output like this:

bash-3.2$ lldb ./a.out
(lldb) target create "./a.out"
Current executable set to './a.out' (x86_64).
(lldb) breakpoint set --n main
Breakpoint 1: where = a.out`main + 23 at main.cpp:8, address = 0x0000000100000d77
(lldb) r

The code will hit main immediately and break, spitting out the following info:

Process 18492 launched: './a.out' (x86_64)
Process 18492 stopped
* thread #1: tid = 0x58329e, 0x0000000100000d77 a.out`main + 23 at main.cpp:8, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
    frame #0: 0x0000000100000d77 a.out`main + 23 at main.cpp:8

Notice the stop reason in that output: stop reason = breakpoint 1.1. In LLDB every stop has a reason. When the debugger stops, it calls each active thread plan's explains_stop method. The first one to return true explains the stop.

LLDB supports several stop reasons. In the Python API the supported stop reasons are:

eStopReasonBreakpoint

eStopReasonException

eStopReasonExec

eStopReasonInstrumentation

eStopReasonInvalid

eStopReasonNone

eStopReasonPlanComplete

eStopReasonSignal

eStopReasonThreadExiting

eStopReasonTrace

eStopReasonWatchpoint

Our example plugin is just supposed to step through the program, so it only explains the stop if the stop reason is eStopReasonTrace.

In the event that our plan does explain the stop we get to decide whether to return control to the user or continue execution. This decision is made by the should_stop method.

For our plugin we are trying to run the program until a variable named r1 is defined, so we search for a variable named r1 in the current frame. If it exists we return control to the user. If it does not exist we continue execution.

Once we have decided to continue execution our plan has two options. It can resume normal execution, or it can take one step. In our case we want to step through the program one instruction at a time until r1 is defined, so we always return True, saying that we want to take a single step and then stop again.

To use this plugin we first need to create an executable to use as the target. In this example we will use a simple C++ program. First enter this code into a file named contrived.cpp:

#include "contrived.h" int contrived_1(const int l, const int v, const int q, my_value my_v) { int r1 = l - v; int r2 = q*l; if (my_v.useless) { return r1 - r2; } else { return my_v.x + my_v.y + my_v.z - r1*r2; } }

Next create the corresponding contrived.h file:

struct my_value {
    int x;
    int y;
    int z;
    bool useless;
};

int contrived_1(const int l, const int v, const int q, my_value my_v);

Finally create a main.cpp file:

#include <iostream>

#include "contrived.h"

using namespace std;

int main() {
    int x = 2;
    int y = 4;
    int z = 9;
    bool useless = true;

    my_value v{x, y, z, useless};

    int k = contrived_1(3, 4, -12, v);

    cout << k << endl;
}

With these files created compile them with:

clang++ -g -O0 -std=c++11 contrived.cpp main.cpp

This will create an executable, a.out. With a.out created we can debug it and use our plugin.

In the directory lldb_plugin run the commands:

bash-3.2$ lldb ./a.out
(lldb) target create "./a.out"
Current executable set to './a.out' (x86_64).
(lldb) breakpoint set --n main
Breakpoint 1: where = a.out`main + 23 at main.cpp:8, address = 0x0000000100000d77
(lldb) r
Process 18568 launched: './a.out' (x86_64)
Process 18568 stopped
* thread #1: tid = 0x58688d, 0x0000000100000d77 a.out`main + 23 at main.cpp:8, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
    frame #0: 0x0000000100000d77 a.out`main + 23 at main.cpp:8
   5    using namespace std;
   6
   7    int main() {
-> 8        int x = 2;
   9        int y = 4;
   10       int z = 9;
   11
(lldb) command script import find_var.py
(lldb) thread step-scripted -C find_var.FindVar
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Havent found r1 yet
Found r1!
Process 18568 stopped
* thread #1: tid = 0x58688d, 0x0000000100000d00 a.out`contrived_1(l=9, v=16777216, q=1, my_v=(x = 4, y = 2, z = 1606417248, useless = true)) at contrived.cpp:3, queue = 'com.apple.main-thread', stop reason = Python thread plan implemented by class find_var.FindVar.
    frame #0: 0x0000000100000d00 a.out`contrived_1(l=9, v=16777216, q=1, my_v=(x = 4, y = 2, z = 1606417248, useless = true)) at contrived.cpp:3
   1    #include "contrived.h"
   2
-> 3    int contrived_1(const int l, const int v, const int q, my_value my_v) {
   4        int r1 = l - v;
   5        int r2 = q*l;
   6
   7        if (my_v.useless) {

The post How to Write an LLDB Plugin with Python: Customized Stepping Logic appeared first on Dillon Huff.

The post How to Write an LLDB Plugin with Python: New Commands appeared first on Dillon Huff.

- Print each frame in the stack trace
- For each frame print all arguments and local variables

Place this code in a new directory, say lldb_plugin, in a file named custom_command.py:

import lldb
import commands
import optparse
import shlex

def my_cmd(debugger, command, result, internal_dict):
    target = debugger.GetSelectedTarget()
    process = target.GetProcess()
    thread = process.GetSelectedThread()
    for frame in thread:
        print str(frame)

        function = frame.GetFunction()
        print 'FUNCTION = ', function

        if frame.IsInlined():
            print 'INLINED'
        else:
            args = frame.get_arguments()
            print '# of arguments = ', len(args)
            for arg in args:
                print arg

            vars = frame.get_all_variables()
            print '# of vars =', len(args)
            for var in vars:
                print var

def __lldb_init_module(debugger, internal_dict):
    debugger.HandleCommand('command script add -f custom_command.my_cmd my_cmd')
    print 'The "my_cmd" python command has been installed and is ready for use.'

This code contains two functions. __lldb_init_module is boilerplate that is used to load the command during an lldb session.

The meat of the command is the my_cmd function. This function takes four arguments, but the main one we will use is debugger. debugger represents the actual debug session: we call its GetSelectedTarget method to get the target, or running program, that we are debugging. From the target we get the running process, and from the process we get the running thread.

Things get interesting inside the thread. We can check if the frame is inlined, and if not we can list all arguments, list all variables and print out information about the function itself.

To use this plugin we first need to create an executable to use as the target. In this example we will use a simple C++ program. First enter this code into a file named contrived.cpp:

#include "contrived.h" int contrived_1(const int l, const int v, const int q, my_value my_v) { int r1 = l - v; int r2 = q*l; if (my_v.useless) { return r1 - r2; } else { return my_v.x + my_v.y + my_v.z - r1*r2; } }

Next create the corresponding contrived.h file:

struct my_value {
    int x;
    int y;
    int z;
    bool useless;
};

int contrived_1(const int l, const int v, const int q, my_value my_v);

Finally create a main.cpp file:

#include <iostream>

#include "contrived.h"

using namespace std;

int main() {
    int x = 2;
    int y = 4;
    int z = 9;
    bool useless = true;

    my_value v{x, y, z, useless};

    int k = contrived_1(3, 4, -12, v);

    cout << k << endl;
}

With these files created compile them with:

clang++ -g -O0 -std=c++11 contrived.cpp main.cpp

This will create an executable, a.out. With a.out created we can debug it and use our plugin.

In the directory lldb_plugin run the commands:

lldb ./a.out
breakpoint set --n contrived_1
r

command script import custom_command.py

After the import you should see:

The "my_cmd" python command has been installed and is ready for use.

Now run the command:

my_cmd

and you should see something like:

frame #0: 0x0000000100000d15 a.out`contrived_1(l=3, v=4, q=-12, my_v=(x = 2, y = 4, z = 9, useless = true)) + 21 at contrived.cpp:4
FUNCTION =  SBFunction: id = 0x0000008d, name = contrived_1(int, int, int, my_value), type = contrived_1
# of arguments =  4
(const int) l = 3
(const int) v = 4
(const int) q = -12
(my_value) my_v = (x = 2, y = 4, z = 9, useless = true)
# of vars = 4
(const int) l = 3
(const int) v = 4
(const int) q = -12
(my_value) my_v = (x = 2, y = 4, z = 9, useless = true)
(int) r1 = 1606579087
(int) r2 = 1
frame #1: 0x0000000100000dc8 a.out`main + 104 at main.cpp:15
FUNCTION =  SBFunction: id = 0x00008d35, name = main, type = main
# of arguments =  0
# of vars = 0
(int) x = 2
(int) y = 4
(int) z = 9
(bool) useless = true
(my_value) v = (x = 2, y = 4, z = 9, useless = true)
(int) k = 0
frame #2: 0x00007fff9dca45ad libdyld.dylib`start + 1
FUNCTION =  No value
# of arguments =  0
# of vars = 0

Full documentation for the LLDB Python API can be found here.

The post How to Write an LLDB Plugin with Python: New Commands appeared first on Dillon Huff.
