A Safe Word on Constraints

In the past I have tweeted that one should never use constraints in their rigs. It seems like an odd thing to say. Constraints are useful, in theory. You can make one transform follow another transform’s position, orientation, and/or scale. This lets you build your rig in smaller pieces and hook them together with constraints, thus avoiding the dreaded “single hierarchy” issue where your outliner takes up half your screen just to see your wrist control.

Sounds great, right? Do something for me.

Open a scene, create two locators, and create a pointConstraint between them. Now open the Node Editor. How many connections are created by a simple constraint? By my count, eleven. And there are connections between the constraint and the constrained transform in both directions.
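If you'd rather count from a script than squint at the Node Editor, here's a quick sketch with maya.cmds (the locator names are just placeholders I picked for the example):

```python
# Quick sketch of the experiment above; locator names are arbitrary.
from maya import cmds

target = cmds.spaceLocator(name='target')[0]
constrained = cmds.spaceLocator(name='constrained')[0]
constraint = cmds.pointConstraint(target, constrained)[0]

# listConnections with connections=True returns alternating pairs:
# a plug on the constraint, then the plug it connects to.
pairs = cmds.listConnections(constraint, connections=True, plugs=True) or []
print(len(pairs) // 2, 'connections on the constraint node alone')
```

That should line up with the count above, and it doesn't even include the conversion nodes Maya sometimes sneaks in.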

Why!?


Maya’s constraints do a good job of hiding their complexity from the average user. When you constrain one transform to another, you can re-parent the targets and the constrained object freely and the constraint will probably still work. Unfortunately, this means that the constraints have to be fairly robust and capable of handling all the strange things an artist may do to a transform.

But we are not artists. We are riggers. We are (or should be) deliberate and precise in our choices. And with the advent of Parallel Evaluation in Maya 2016, we want our choices to be efficient and performant. Constraints are neither.

There must be a better way!

Well, there is.

What does a pointConstraint do? In its most basic use case, it makes one transform follow another transform’s world position, plus an offset. Can we do this with math?

You can do anything with math.

A hand-made constraint requires two nodes: multMatrix and decomposeMatrix. The multMatrix node lets you multiply the .worldMatrix of the target by the .worldInverseMatrix of the constrained transform’s parent, which produces a matrix in the same space as the constrained transform. The decomposeMatrix node unpacks a matrix into its components: translate, rotate (euler or quaternion), scale, and shear. It’s literally that simple.
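Here's a rough sketch of that two-node setup in maya.cmds. The names target and constrained are placeholders, and I'm plugging in the constrained transform's .parentInverseMatrix, which is the same matrix as its parent's .worldInverseMatrix. Note that decomposeMatrix ships in the matrixNodes plug-in, so it may need loading first.

```python
# A rough sketch of the hand-made constraint; 'target' and 'constrained'
# are placeholder names for transforms that already exist in the scene.
from maya import cmds

cmds.loadPlugin('matrixNodes', quiet=True)  # decomposeMatrix lives in this plug-in

target = 'target'
constrained = 'constrained'

mult = cmds.createNode('multMatrix', name='target_localMatrix')
decomp = cmds.createNode('decomposeMatrix', name='target_decompose')

# target world matrix * inverse of the constrained transform's parent space
cmds.connectAttr(target + '.worldMatrix[0]', mult + '.matrixIn[0]')
cmds.connectAttr(constrained + '.parentInverseMatrix[0]', mult + '.matrixIn[1]')
cmds.connectAttr(mult + '.matrixSum', decomp + '.inputMatrix')

# For a point-constraint equivalent, translate is all you need; rotate and
# scale are sitting right there if you want a parent- or scale-style setup.
cmds.connectAttr(decomp + '.outputTranslate', constrained + '.translate')
```

If you need the offset a pointConstraint gives you, one way is to slot a static offset matrix in front of the target's world matrix in the multMatrix stack. Either way, there is nothing here that has to cope with arbitrary re-parenting at runtime.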

But what if you have more than one target? Well, that depends. If you have two targets, I suggest you get the local matrix of both targets and use blendColors and animBlendNodeAdditiveRotation to blend the outputs in local space. If you have three or more targets, the complexity depends on how you are following them. If you are only following one target at a time, as in a space switch, then you can use a choice node upstream of the decomposeMatrix node to switch inputs. If you are actually blending between multiple targets, then you can do the blend with math nodes for translate, scale, and shear. Blending between multiple arbitrary rotations is a harder problem and outside the scope of this post.
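Here's a minimal sketch of the choice-node space switch, assuming two hypothetical targets named spaceA and spaceB constraining a control named ctrl. The enum attribute at the end is just one convenient way to drive the selector.

```python
# Minimal space-switch sketch; 'spaceA', 'spaceB', and 'ctrl' are placeholder names.
from maya import cmds

cmds.loadPlugin('matrixNodes', quiet=True)

targets = ['spaceA', 'spaceB']
constrained = 'ctrl'

choice = cmds.createNode('choice', name='space_choice')
decomp = cmds.createNode('decomposeMatrix', name='space_decompose')

# One multMatrix per target, each producing that target in ctrl's local space.
for i, target in enumerate(targets):
    mult = cmds.createNode('multMatrix', name=target + '_localMatrix')
    cmds.connectAttr(target + '.worldMatrix[0]', mult + '.matrixIn[0]')
    cmds.connectAttr(constrained + '.parentInverseMatrix[0]', mult + '.matrixIn[1]')
    cmds.connectAttr(mult + '.matrixSum', choice + '.input[%d]' % i)

# The choice node passes exactly one matrix through to a single decomposeMatrix.
cmds.connectAttr(choice + '.output', decomp + '.inputMatrix')
cmds.connectAttr(decomp + '.outputTranslate', constrained + '.translate')
cmds.connectAttr(decomp + '.outputRotate', constrained + '.rotate')

# An enum attribute on the control drives which space is live.
cmds.addAttr(constrained, longName='space', attributeType='enum',
             enumName='A:B', keyable=True)
cmds.connectAttr(constrained + '.space', choice + '.selector')
```

Only the selected branch evaluates, which is exactly the kind of thing Parallel Evaluation rewards.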

When I have tweeted about not using constraints, people have @-ed me asking, “why not run a script that re-routes the connections?” Well, there’s certainly nothing stopping you from doing that, provided that the rotatePivot and rotatePivotTranslate of your target transforms and constrained transform don’t change. Also, a nodal solution runs slightly faster, but the difference is only in the tens of microseconds, or a hundred at the most.

What about an aimConstraint? If you just need something to point at something, you can use an angleBetween node. However, this only gives you direction; you have no control over the twist around the aim axis. At the time of writing, I still use an aimConstraint node when I need to, but I fix the bi-directional connections. You can calculate the rotation necessary to aim one transform at another using math nodes, but it requires at least half a dozen nodes, and you are locked to X forward, Y up, unless you want to spend a few more nodes to re-orient the result.
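For completeness, here's roughly what the angleBetween version might look like. The names eye and lookAt are placeholders, and the sketch assumes eye sits under a non-transformed parent, since angleBetween hands back a world-space rotation; the moment the parent rotates, you're back to either an aimConstraint or the bigger node network described above.

```python
# A rough sketch of aiming with angleBetween; 'eye' and 'lookAt' are placeholder
# names, and 'eye' is assumed to live under a non-transformed parent so that its
# .translate is effectively a world position and a world-space rotation can
# drive its .rotate directly.
from maya import cmds

cmds.loadPlugin('matrixNodes', quiet=True)

eye, look_at = 'eye', 'lookAt'

# World position of the look-at target.
look_pos = cmds.createNode('decomposeMatrix', name='lookAt_worldPos')
cmds.connectAttr(look_at + '.worldMatrix[0]', look_pos + '.inputMatrix')

# Aim vector = lookAt position - eye position.
aim_vec = cmds.createNode('plusMinusAverage', name='aim_vector')
cmds.setAttr(aim_vec + '.operation', 2)  # subtract
cmds.connectAttr(look_pos + '.outputTranslate', aim_vec + '.input3D[0]')
cmds.connectAttr(eye + '.translate', aim_vec + '.input3D[1]')

# Rotation that swings the rest axis (+X here) onto the aim vector.
# Direction only: the twist around the aim axis is whatever it happens to be.
angle = cmds.createNode('angleBetween', name='aim_angle')
for axis, value in zip('XYZ', (1.0, 0.0, 0.0)):
    cmds.setAttr(angle + '.vector1' + axis, value)
cmds.connectAttr(aim_vec + '.output3D', angle + '.vector2')
cmds.connectAttr(angle + '.euler', eye + '.rotate')
```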
