A Modernized Methodology for Magical Mustelids

otters.png

Hey, the first paid project I ever did with the ephemeral system is online!

I used the system to do this ad for Significant Otter:

I also used it to do the in-app otter animations, although in somewhat modified form—because we were originally planning to make the in-app animations 60fps, I couldn’t do them interpolationlessly (as I did for the ad above). So I tried out just taking the ephemeral controls—despite the lack of hierarchy—and simply splining them as-is.

Surprisingly, this actually worked pretty well. It wouldn’t be my preferred approach to most things, but it does have some advantages. For instance, since all controls are in world space and the camera never moved, the graph editor actually becomes a lot easier to use. Y means up in screen space, X means across in screen space, no exceptions. Of course that also means that, as far as interpolation is concerned, the controls have absolutely no relationship to each other at all. For this kind of very loose, cartoony animation, that turns out to be fine!

I’ll post some of the in-app animations too once I get permission from Pine Labs.

SIGGRAPH epilogue

SIGGRAPH was an incredibly positive experience, and I had many conversations that will shape the future of what I want to do with ephemeral rigging/interpolationless animation. I’ve even almost recovered from the inevitable post-convention plague!

SIGGRAPH tells me that recordings of my talk will be available publicly, but probably not till October. In the meantime, here’s another speed animation recording of the Chandelier Swing test shown as part of the presentation. I’ve annotated this one with some notes on how I’m using the system, and overall animation technique.

In addition, I realized that anyone who clicked on the link to my Powerpoint slides was probably viewing them on Dropbox. Unfortunately the Dropbox viewer doesn’t play video, which was kind of essential to the presentation! One could always download the file, but for those who don’t have Powerpoint or don’t want to get a Microsoft account to use Powerpoint Online I’ve added some of the videos to my Vimeo:



SIGGRAPH 2019

Eleanor Rigging-by is happening at 2pm in room 153! Be there, or you will surely be a polygon with four vertices and four edges of equal length (all of which are at right angles to each other).

If you nevertheless can’t make it—if, for instance, like the majority of people in the world you are not at SIGGRAPH—I’ve posted my slides and notes here.

The hour draws nigh!

If you’re at SIGGRAPH, don’t forget to step into the “Eleanor Rigging-by” session in room 153 to hear me pontificate! It’s on Wednesday at 2pm.

Here’s another piece of example animation I’ll be showing:

I recorded the making of this one, and plan to post an annotated version after SIGGRAPH. It took me about six hours total.

Why keyframes are bad for character animation

zog.png

One of the somewhat controversial opinions I’ve expressed in this blog is that keyframe animation* is bad and should be replaced with raw poses. I’ve always been a little bit vague about this though, without a clear statement about why exactly this is. That’s because it’s been more of a feeling than anything else, a frustration with the futzing around with keyframes we’re all forced to do when animating.

I recently submitted a talk proposal for SIGGRAPH 2019, and this required me to be much more rigorous about stating what it is exactly that I think is wrong with the process, and it clarified my thinking. I now think that you can boil down the issues with both keyframe animation and hierarchical rigging to this statement:

Animation curves and a rig together form a system that generates character motion. Animators do not create motion--they edit inputs to that system in the form of keyframe values. But with conventional keyframing and rigs, you cannot, by looking at the end results, understand the system and inputs that produced them.

Everything wrong with the keyframe animation process flows from this basic fact. Crossing effects between multiple layers of control, unwanted spline behavior, the mess created by space switching and FK/IK switches/blends, even gimbal lock issues--these all reduce to the fact that there is no one-to-one relationship between the inputs and the result, and the animator must therefore mentally model the keyframe/rig system to understand what inputs will produce the desired results. But, since multiple possible sets of inputs (ie. different combinations of key placement and rig state) can produce visually indistinguishable results, that mental model degrades extraordinarily quickly, and sussing out the real relationship between inputs and results requires constant attention and interpretation. This is true even for a “blocking plus” process, as the moment you spline generally reveals, even on a very tightly blocked shot.

Put this way, the entire history of CG animation technique sounds completely insane, doesn’t it? Why the hell is this how we decided to animate characters? Why would anyone think this was a good idea?

There are multiple factors involved, but I think a lot of it comes down to what I’ve begun to think of as the “nondestructiveness problem” in computer art. “Nondestructive” in this case might also be described as “parameterized” or “procedural”--basically any case in which the end result is continually regenerated from a set of inputs that can be altered at any time. Nondestructive techniques are one of the major advantages to doing art with a computer...except when they aren’t. A nondestructive technique that in one instance allows you to do the work of ten artists working with more traditional techniques will in another instance absolutely cripple your ability to get anything done at all.

As an example, let's say you’re designing a logo, something along these lines:

branch.png

This is a flat shape with very well defined, simple curves. I did this in Illustrator by placing down bezier handles, because that’s the obvious way to approach something like this. If I’d tried to paint the shape it would have taken forever to tune the shape to the right curvature, and I would probably have ended up with something that looked a bit wobbly no matter how long I worked on it.

Clear win for the nondestructive technique, right? Tuning a simple shape through bezier handles is much faster than painting it. Now imagine a completely naive observer, a hypothetical, possibly alien intelligence that has never encountered this thing you Earth people call “art” before. Such a being could be forgiven for concluding that a nondestructive approach is always correct. Something that can be quickly adjusted just by tweaking a few bezier handles has got to be better than thousands of messy pixels.

Listen, Zog...can I call you Zog?...let's put that idea to the test. You’re an alien superintelligence, so you should be able to use Adobe Illustrator, which was clearly designed for your kind and not for actual human beings. Only I don’t want you to make a logo. I want you to make this:

This background, used in the Monkey test me and Chris Perry produced for Vintata, was painted by Jeet Dzung and Ta Lan Hanh.

Kind of a different situation, isn’t it? When drawing clean shapes vectors are the obvious choice, but trying to paint by placing bezier handles down for each stroke is immensely inefficient,** even to a being of Zog’s incalculable intellect.

Now this doesn’t mean that nondestructive techniques have no utility for digital painters at all. Layers, for instance, are clearly very useful. And yet, the number of layers you can keep around and still have something useful to interact with is actually pretty limited. Dividing a painting into foreground/midground/background or into layers for tone and color makes sense. Making every alteration you make to the painting into a new layer, on the other hand, leaves you with an incomprehensible stack that you’re going to end up having to either collapse, or basically leave in place and never modify (in which case you might as well have never done it at all). It’s the same problem with keyframes--the mapping between inputs (the pixels in each layer) and output (the final image) is too complex to hold in your head, and eventually it becomes more work than treating everything as a flat image. Compare this to CG modeling, where surfaces generated from control points (such as NURBS and subdivision surfaces) make modeling and adjusting simple shapes very easy, but are vastly inferior to sculpting tools that use micropolygons or voxels when it comes to a complex shape like a character.

I think of nondestructive vs destructive means of creation as being on a graph like this:

destructive-nondestructive_graph.png

When complexity is low, nondestructive techniques are clearly superior, sometimes by a lot. But the difficulty of using nondestructive techniques increases exponentially as complexity increases, whereas the difficulty of destructive techniques increases linearly. There is a point at which the two lines cross, and a primarily nondestructive workflow (as opposed to a mostly destructive workflow with nondestructive assistance) flips from great to terrible.

And that’s the crux of the issue. A lot of the things you might want to animate with a computer are on the left hand side of the graph. If you want to animate a bouncing ball then a graph editor is the right thing. It’s the right thing for motion graphics, for camera movement, and for mechanical/vehicular motion. But the right hand side of the graph? That includes all character animation. Because there is really no such thing as a character performance that isn’t complex.

Now, I do want to take a moment to discuss my use of the term “complexity” here. I’m using it because I don’t have a better term, but the term could be misleading, because what I mean here isn’t quite the same thing as visual complexity. It’s quite easy to make something that’s very visually complex through nondestructive means--think of any fractal pattern. The best I can do to nail down this definition of “complexity” is that it has less to do with number of elements present and more to do with how distinct those elements are. A painting or a character performance is extremely specific, and cannot be easily broken down into constituent elements. It doesn’t “parameterize” very well. Art that has, one might say, specific complexity is on the right side of the graph, and should be authored in as direct a manner as possible.

There is a pretty important exception to this rule: cases where the end result must be generated from inputs because it’s going to be applied to multiple sets of inputs. For instance, a compositing graph might well be complex and difficult to reason about. That’s just too bad, because “collapsing” the graph would make the results useless.

I suggest that this is an indication that compositing moving images is actually a completely different class of problem--just like rigging, compositing is in fact programming. A crucial difference here is that, unlike an animator, who is authoring inputs into the keyframe/rigging system, a compositor’s creation is the system itself, ie. the graph that will take in rendered or filmed inputs (plus inputs the compositing artist may have created like keyframes, ramps, masks, etc) and output final frames. The difference is whether what you are creating is fundamentally data that will be fed into a process (keyframes, poses, pixels, bezier handle locations, vertices, voxels, etc) or whether you are creating both data and the process that will be used to process that data (probably in the form of a node graph).

This idea isn’t at all new--Shake files used to be called “scripts” after all--but it’s not how people usually think about what a compositing artist is creating. And it’s not necessarily just true of node-graph-based systems. Are you using a lot of nested comps and complex layer interactions in After Effects? Congratulations, you’re a programmer. You’re using an extremely obfuscated system to do your programming, but that doesn’t make you not a programmer. Also, being a programmer doesn’t make you not an artist. There is nothing whatsoever mutually exclusive about those roles.

I’m really, really old***, so I remember the early days of CG. I was excited when I first saw Chromosaurus and Stanley & Stella in Breaking the Ice. I remember how much promise there was supposed to be in computer art, how the ability to tweak anything via a simple parameter was supposed to take the drudgery out of the artistic creation process. And sometimes, it did. Sometimes you got the kind of “big win” represented by nonlinear editing, which we now just call “editing” because doing things the old-fashioned way is so impractical by comparison that it barely exists. But just as often, the promise of “parameterized” art failed.

In the ensuing decades most disciplines gradually settled into an understanding of what different techniques were and weren’t good for. You generate a cityscape procedurally, but you sculpt a character. This understanding has never truly emerged for animation, and we’re still stuck with a system that is fundamentally built on a “parameterized” approach in every case. I’d go so far as to say that the fundamental assumption behind Maya is that this is the only approach, even though the actual animators and TDs using the software have been trying to turn it in other directions for the sake of their sanity for decades now.

To do something about this, it would help to gain some sort of understanding of why nondestructive techniques are good for some things and terrible for others. I don’t think this conception of the problem fully gets at all its aspects, but I think it’s a start.

You see, Zog? We’re a young species, but we show great promise. You were not so different, once.

*To be completely clear, by “keyframe animation” I mean animation based on curves with keyframes as control points. This is not the same as, and is in fact in some ways opposed to, the concept of “key poses” as used by traditional animators.

**There are, of course, vector-based painting systems, but I’d argue that the user interacts with them more like raster painting systems than like Illustrator. The problem of nondestructive workflows is a user interaction problem--whatever the program uses to represent what the user is creating under the hood may be a separate question.

***I’m 37, but I got out ahead of the pack and started being super old at a young age. As evidence, I not only remember when “desktop video” was a thing, I remember when “desktop publishing” was a thing. For you youngsters, that’s what we now refer to as “publishing.”

Further Ephemeral Experiments

I’m using the ephemeral rig system in production right now! Unfortunately, I can’t tell you anything about it! They’re watching my house.

So far the first real production has gone smoothly, though there’s certainly room for improvement! I’ve also made a variety of updates.

One of the things I always wanted to do with the ephemeral system was to manipulate long chains like tails with a “magnet” tool, like you’d use to move vertices around. I now have this working.

The magnet I have now works by applying a delta as the ephemeral graph evaluation walks the chain, based on how far the driving node has moved that cycle. Each node down the line gets 65% (a totally arbitrary number that happened to look good) of the movement of the node driving it. You could also set this up with a radius and falloff, though it would require organizing the graph a bit differently, but this was the simplest way to do it.
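
To make that concrete, here's a minimal, self-contained sketch of the falloff idea--my own illustration, not the actual constraint class, though the 65% is the same arbitrary number mentioned above:

```python
MAGNET_FALLOFF = 0.65  # each node gets 65% of the movement of the node driving it

def propagateMagnetDelta(chainPositions, drivenIndex, delta):
    """chainPositions: list of (x, y, z) world positions for the chain's controls.
    drivenIndex: index of the control the user is dragging.
    delta: (dx, dy, dz) movement of that control this evaluation cycle."""
    newPositions = list(chainPositions)
    # The dragged control takes the full delta.
    newPositions[drivenIndex] = tuple(
        p + d for p, d in zip(chainPositions[drivenIndex], delta))
    # Walk away from it in both directions, damping the delta at each step.
    for step in (1, -1):
        currentDelta = delta
        i = drivenIndex + step
        while 0 <= i < len(chainPositions):
            currentDelta = tuple(d * MAGNET_FALLOFF for d in currentDelta)
            newPositions[i] = tuple(
                p + d for p, d in zip(chainPositions[i], currentDelta))
            i += step
    return newPositions
```

Dragging the first control of a chain by one unit moves the following controls by 0.65, about 0.42, about 0.27, and so on down the line.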

magnetConstraint.PNG

The magnet constraint required making some slight changes to how the graph builds. Most of the driver-driven relationships between ephemeral nodes can only go in one direction for a given mode--a node will have one possible driver in forward mode, and a different one in backward mode, but usually doesn’t have both drivers available at any one time. Magnet drivers, however, have to point in both directions (since any given node might be upstream or downstream of the node the user is controlling), which requires the graph to recognize what connections have already been established so that it doesn’t double back on itself.

Luckily, the ephemeral graph already has code to deal with a very similar situation: paired nodes also point at each other in the same mode, and also must avoid establishing circular connections when the graph is built. I was able to use the same system for magnets, though I had to tweak it slightly.

chooseDriver.PNG

The two relevant parts here are isNotDrivenByNode and chooseDriver. The first prevents circularities by rejecting as a potential driver any node that already drives this one, and the second filters a list of multiple possible pair or magnet drivers by favoring ones that are already constrained--otherwise, the node might choose to attach itself to a node further down the tail instead, and break the chain of magnet propagation. This is less of a problem for paired nodes, which all just move as a unit and don’t really care what order they’re hooked up in. (I should really stop using the term “pair,” since you can in fact hook any number of nodes up together, but it’s all over my code and I don’t really want to change it now).
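
Here's a rough, self-contained approximation of those two checks--the method names mirror mine, but the bodies are simplified stand-ins rather than the code in the screenshot:

```python
class SketchNode(object):
    def __init__(self, name):
        self.name = name
        self.drivers = []         # nodes already established as drivers of this node
        self.constrained = False  # True once a driver has been attached

    def isNotDrivenByNode(self, other):
        # True if `other` does not already drive this node, directly or through
        # a chain of drivers--using such a node would double back on the graph.
        for driver in self.drivers:
            if driver is other or not driver.isNotDrivenByNode(other):
                return False
        return True

    def chooseDriver(self, candidates):
        # Drop anything that would create a circularity, then prefer candidates
        # that are already constrained, so magnet propagation doesn't skip down
        # the chain and break.
        safe = [c for c in candidates if c.isNotDrivenByNode(self)]
        constrained = [c for c in safe if c.constrained]
        pool = constrained or safe
        return pool[0] if pool else None
```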

Here’s another cool thing I discovered quite by accident--scaling eph nodes is actually a really useful way to affect your entire pose!

This is because of how I’m calculating the matrices of each node. Many of the interaction modes require a relationship between nodes that’s basically a parent relationship (albeit a temporary one). This would normally be achieved by simply multiplying the matrices of each node together. And that’s pretty close to what my ephemeral parent constraint class does:

parentConstraint.PNG

The constraint class calls a tiny library I wrote to wrap the Open Maya 2 classes I wanted to use in a way that would be more friendly to the rest of my code. Here are the relevant functions from my library:

matrixCode.PNG

The parent constraint class uses transformMatrixIntoSpace to find out what the difference is between the driving and driven nodes’ matrices, ie. it’s putting the driven node into the driver’s space. The function does this by just multiplying it by the inverse of the driver’s matrix using om2’s existing math methods. No surprise there.

But when the constraint uses this “parentRelativeMatrix” to calculate a new matrix for the driven node, it’s not just calling multiplyMatrices--it’s calling concatMatrices, which does multiply the matrices, but then removes shear values and resets scale to whatever it was before the matrix multiplication. I did this because, unlike conventional hierarchies, any ephemeral relationship between nodes is expected to be thrown away and recreated all the time. If any of the nodes were scaled, errant scale and shear values might creep into the transforms behind my back. So I simply generate a new matrix each time with scale and shear values removed.
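
For illustration, here's a minimal sketch of what those two helpers might look like on top of om2. This is my approximation rather than the code in the screenshot, and which matrix's scale gets restored is my guess:

```python
import maya.api.OpenMaya as om2

def transformMatrixIntoSpace(matrix, spaceMatrix):
    # Express `matrix` relative to `spaceMatrix` by multiplying by its inverse.
    return matrix * spaceMatrix.inverse()

def concatMatrices(matrixA, matrixB):
    # Multiply the matrices, then strip shear and restore matrixA's original
    # scale, so errant scale/shear can't accumulate as ephemeral relationships
    # are repeatedly thrown away and recreated.
    result = om2.MTransformationMatrix(matrixA * matrixB)
    original = om2.MTransformationMatrix(matrixA)
    result.setShear((0.0, 0.0, 0.0), om2.MSpace.kTransform)
    result.setScale(original.scale(om2.MSpace.kTransform), om2.MSpace.kTransform)
    return result.asMatrix()
```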

In practice, this creates a nice effect where scaling a given node repositions its children, rather than scaling them. This is particularly effective when using bidirectional/full character mode with a dummy pivot, which can be both repositioned and reoriented to create a useful axis and pivot for scaling a pose.

Finally, I've also implemented a way to merge and unmerge characters with the ephemeral system. You may recall that everything about the system assumes that every control in a character has a keyframe on every pose--effectively, a big stack of keyframes is masquerading as a pose. Having two nodes interact ephemerally requires that they share poses.

You may recall that I used to force every node in a character to share poses using character sets. Character sets are a feature of, of all things, the Trax Editor, and using them to enforce keyframe alignment is a bit like killing a mosquito with a bazooka, only instead of a bazooka it’s actually an ancient trebuchet and you need a couple of burly Byzantines to follow you around everywhere so they can help pull it back in case a mosquito shows up. Thankfully, it turns out that character sets aren't the only way to sync keys between nodes in Maya--Brad Clark of Rigging Dojo turned me on to Keying Groups, which serve the same function without introducing additional nodes between keyframes and their attributes, or historical siege technology.

Wait, I hear the experienced animators among you saying, that can't be right. Surely if Autodesk introduced a feature to sync keyframes between nodes but didn't use character sets I would have heard of it! Well, my friend, I'm sorry to say you have put rather too much faith in Autodesk's ability to communicate with its users, because they introduced this feature quite some time ago and then neglected to tell anyone that it existed. It's not even accessible through the GUI unless you're using FBIK, and the documentation for it is so sparse you aren't likely to encounter any mention of it unless you specifically go looking. It does, however, work very well and has put me off character sets for good.

When the ephemeral system inits, it scans the scene for everything that's tagged as belonging to an ephemeral character, and sets up keying groups for any characters it finds.

keygroups.PNG

Here again I find the "destroy it and rebuild it from scratch" principle simplifies things greatly--characters and keying groups should always be congruent in the ephemeral system, and the easiest way to ensure that is to destroy and regenerate the latter from the former any time something could have changed. That includes referencing in anything (it could be a new character!), initing the system (who knows what this file was previously doing?), or loading a file (ditto).

Accordingly, merging characters is just a matter of adding a message connection between the character ident nodes that tells the system that these two characters should be treated as one when grouping up the controls into characters, and then telling the system to regenerate keying groups. Unmerging works exactly the same way. Being able to merge and unmerge characters is an important aspect of the workflow, as you might want a prop (for the purposes of this system, every ephemeral rig is a "character," including props) to share poses with one character while you work on one section of a shot, and another in some other section--for instance, if an object is being passed back and forth between characters.
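
A sketch of what merging might amount to--the 'mergedWith' attribute name and the rebuildKeyingGroups() placeholder are illustrative, not my actual names:

```python
import maya.cmds as cmds

def rebuildKeyingGroups():
    # Placeholder: the real system rescans tagged characters and rebuilds their
    # keying groups from scratch here.
    pass

def mergeCharacters(identA, identB):
    # Wire a message connection between the two character ident nodes so the
    # system groups their controls as one character, then regenerate keying
    # groups so the controls share poses.
    if not cmds.attributeQuery('mergedWith', node=identB, exists=True):
        cmds.addAttr(identB, longName='mergedWith', attributeType='message')
    cmds.connectAttr(identA + '.message', identB + '.mergedWith', force=True)
    rebuildKeyingGroups()

def unmergeCharacters(identA, identB):
    # Breaking the connection and regenerating keying groups splits them again.
    cmds.disconnectAttr(identA + '.message', identB + '.mergedWith')
    rebuildKeyingGroups()
```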

Finally, I have some bad news to report--there are a couple of frustrating limitations to this implementation of an ephemeral rig system I've discovered. The primary and most troublesome one is that, as it currently stands, the ephemeral callbacks and parallel eval can't be used at the same time, on pain of intermittent--but unavoidable--crashing.

While only Autodesk could answer the question definitively, my best guess at what's happening is that Maya is attempting to fire one of the ephemeral callbacks while the parallel graph is in the process of being rebuilt, causing a race condition or some other horribleness that brings Maya to its knees. It's difficult to test, though, since the fact that this implementation of ephemeral rigging relies so completely on callbacks means that it's not really possible to test it meaningfully without them.

Luckily the rigs I'm using in current projects are fast enough that they're reasonable to use in DG mode, but this obviously isn't a long-term solution. Charles Wardlaw suggested to me that the issue might actually be that I'm using Python to create the callbacks through om2, and that I might get a different result if using the C++ API. It's going to be necessary to eventually port the system to C++ in any case for performance reasons, but I was hoping to avoid that any time soon. We'll see how things develop on that front.

The other issue has to do with manipulating multiple controls at once. So far, I've only had the ephemeral graph build when one control was selected. This doesn't stem from a limitation in the ephemeral graph itself--it can take multiple inputs with no problem, and does when building a breakdown graph--but from the need to figure out how to trigger graph evaluation from multiple callbacks. I'd planned to implement it first with one callback, and then expand that, maybe implementing a system that tracked the number of active callbacks and only evaluated the graph after the last one had fired. Once I looked more closely, though, I realized that the problem was more serious than I'd thought.

Consider a simple FK chain, like a tail. One of the most natural things to do with it is select all the controls and bend them at once to bend the tail. In an ephemeral context, however, this means that all the tail controls--which are currently being manipulated by the user--must affect each other. I'd previously been able to assume that there was a clear division between a node being manipulated by the user (from which the ephemeral system pulls) and nodes that are not (to which the ephemeral system pushes).

So while I'm sure this problem can be surmounted, it does complicate things quite a bit, and it will take some additional research to figure out the best way to approach it.

First flight

flight.png

Here’s the very first animation test using the full ephemeral rig system!

And here’s the second, recorded for your edification.

Animating these with the system was a blast—posing was just as fast as I’d hoped! In particular, you can see how easy it is to manipulate the tail, casually switching control modes as needed. It also revealed some areas I want to improve. Only allowing zeroing in forward mode, for instance, really isn’t as convenient as I’d hoped, so I’ll need to unlock that for other modes and figure out how to best present those options to the animator.

I’ve hidden every aspect of the interface here that isn’t relevant to moving the rig around or scrubbing the timeline. I’m using Christoph Lendenfeld’s onion skin tool until I have a chance to reintegrate the 3D onion skins into the new system. Also, full disclosure—not every control in this rig is ephemeral. Specifically, fingers and face controls still use a conventional hierarchy, though I’m excited to start rigging faces ephemerally.

Finally, here’s a clip of Jaaaaaaaaames Baxter talking about animation technique, which I think perfectly encapsulates the workflow I want to enable for CG.

T-45 minutes and counting

I’ve now added all the features the ephemeral rig system needs to actually be used for animating a real shot, so that’s what I’ll be doing next. At 2287 lines, it is by far the largest programming project I have ever personally completed. Here’s a few words on those last, crucial features.

One obvious question that comes up about ephemeral rigging is how you zero things. Normally, you’d zero things by putting zeroes in every channel. (Except scale, or any other attribute that multiplies something else. We’ve all made that mistake at least once.)

For ephemeral rigging, this makes no sense. There is no “zeroed” space for anything to return to, except the actual center of the scene. Indeed, I intend to animate with an ephemeral rig while keeping the channel box entirely hidden! It serves no useful purpose in the interpolationless, ephemeral context.

But we can’t do away with the concept entirely. It will still be necessary to be able to return controls to some sort of “default” state. Without a meaningful parent space, the only kind of “default” that makes any sense is for ephemeral controls to be able to default to a specific relationship to each other.

Allow me to demonstrate.

To make this possible without parent spaces, I have each control store its default TRS values on the node as extra attributes. Because everything in the ephemeral rig is in the same space, pulling these values and building a matrix out of them is precisely the same as getting the world matrix of the node when it’s in its default position. I can therefore get those values at any time and find out where the node should be relative to any other node by comparing its default matrix to the default matrix of that node.

ephDefault.png

In this case, I’ve implemented zeroing through forward mode, ie. zeroing a control will return it to its default relationship to its forward driver, and take its “child” controls with it. In theory there’s no particular reason that zeroing must be limited to the forward relationship. You could zero backwards, or sideways, or whatever you want. But figuring out how to make this accessible to the user in a clear way is tricky, so I’ve fallen back on the most basic functionality for the moment. I expect this will be the most common use of zeroing, in any case--while it’s completely essential to be able to zero out a control in some way, I don’t actually anticipate using it all that much.
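
Here's a hypothetical sketch of that default-matrix comparison--the 'default*' attribute names are placeholders, and the real logic lives in the ephemeral graph rather than in free functions like these:

```python
import math
import maya.cmds as cmds
import maya.api.OpenMaya as om2

def defaultMatrix(control):
    # Rebuild the control's default world matrix from its stored extra attributes.
    t = cmds.getAttr(control + '.defaultTranslate')[0]
    r = cmds.getAttr(control + '.defaultRotate')[0]
    s = cmds.getAttr(control + '.defaultScale')[0]
    xform = om2.MTransformationMatrix()
    xform.setTranslation(om2.MVector(t[0], t[1], t[2]), om2.MSpace.kWorld)
    xform.setRotation(om2.MEulerRotation(
        math.radians(r[0]), math.radians(r[1]), math.radians(r[2])))
    xform.setScale((s[0], s[1], s[2]), om2.MSpace.kTransform)
    return xform.asMatrix()

def zeroedMatrix(control, forwardDriver, driverCurrentMatrix):
    # The control's default offset from its forward driver, reapplied to the
    # driver's current world matrix, is the zeroed world matrix.
    offset = defaultMatrix(control) * defaultMatrix(forwardDriver).inverse()
    return offset * driverCurrentMatrix
```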

It’s worth noting that in order to zero these controls, I actually have to build a whole ephemeral graph. Other controls may depend on them, and the controls being zeroed themselves must be zeroed in the correct order if they depend on each other. Because this is basically similar to the breakdown graph (ie. multiple controls can be used to start tracing the graph) I’ve made this a special case of the breakdown graph build.

Another new feature I’ve just added is “dummy” controls. These are temporary controls that allow you to pivot a control from any location, without actually adjusting the pivot of anything.

One thing I greatly dislike about Maya’s transforms is the way pivot offsets get in between the ideally one-to-one relationship between its TRS values and its matrix. There’s a reason why we generally “buffer” a control instead of freezing transformations--adding an additional transform to cleanly change a node’s parent space is preferable to dealing with the pivot gunk that attaches itself to a transform like a barnacle at the slightest opportunity. Basically, pivots are evil and I hate them.

That said, you obviously want to have pivot-like behavior, even if no actual pivots are used in the Maya transform sense. In the ephemeral rig system, this is actually rather easy to do--since there is already a concept of “pairing” nodes to allow for temporary connections between ephemeral controls, a “pivot” is simply an additional control that gets paired with whatever control you are using.

To keep with my philosophy that the ephemeral rig system never changes the Maya node graph (other than creating message connections that have no effect on the Maya scene) and does all ephemeral behavior internally, this control is not created or added as needed--it already exists in the scene, just hanging out waiting to be used. When you’re done using this “dummy” to pivot a control, it simply vanishes from view until needed again.

Finally, I’ve added a little HUD to the system that hovers around in the corner of the viewport, giving you an indication of which interaction mode is currently active. This allows me to shift changing the interaction mode back to hotkeys, which is much smoother than using the menu all the time (although I’ve left the options in the menu in case I ever need them there).

Like most of the GUI I have here, these are all represented by meshes in the scene. I know that using meshes to create GUIs is wrong, but I just can’t stop.

What even is animation?

backtothefuture.png

So, this animation thing. What is it, exactly? I mean, like, on a philosophical level? Time for some navel gazing!

Animation is difficult to define, but most definitions I’ve seen rely on the concept of frame-by-frame creation or manipulation. Something recorded from the real world in real time is not animation--anything created frame-by-frame is.

I think this definition made sense before the advent of CG, but it makes very little sense now. Is a VFX shot of an explosion making use of fluid simulation, and containing no characters, animated? By this definition, it is. Is a shot that uses only motion capture data, with no intervention by an animator, animation? (I know that doesn’t happen much in real life, but bear with me.) By this definition, it is. But neither of those scenarios describe something much like the animation an animator does.

Conversely, is motion graphics animation? This definition includes that too, but in that case I think it’s clearly correct--a motion graphics artist is an animator, just not a character animator. There is something similar between the processes a character animator and a motion graphics animator use that is fundamentally different from a shot that relies purely on simulation or mocap. The conventional definition fails to “cut the problem at its joints,” and leads to a lot of misunderstanding about what animation is or isn’t good for, and how it can be used.

I think this all becomes a lot clearer if you abandon the “frame-by-frame” definition and look at animation as just one method of creating motion data. I propose that there are three main methods of authoring motion:

motionTypes.png

Recorded motion: This is motion recorded from a performance in real time. Performance capture, puppeteering, and live action film are all methods of recording motion.

Generated motion: This is motion created algorithmically from a set of starting conditions. Unlike the other two, generated motion only exists in CG. This includes all simulation and most procedural animation.

Animated motion: This is motion defined directly by a human artist. This includes CG animation (character animation or otherwise), but also drawn animation and stop motion.

These three methods of generating motion are very different from each other in terms of how they relate to time. Recorded motion is, of course, authored in real time. Generated motion may or may not be real-time. It does, however, have an “arrow of time,” albeit one imposed by a simulation’s reliance on prior states rather than the second law of thermodynamics.

Animated motion alone allows independence from time’s arrow.* An animator builds up a shot from its “bones”—usually storytelling poses, but this applies even to the “layered” animation approach—in a way completely at odds with the basic process of either recorded or generated motion. This is both animation’s great strength (the artistic possibilities offered by this way of looking at motion) and its great weakness (it’s really goddamn time-consuming).

Most shots in a conventional CG production process will use some combination of these three methods. Keyframed animation will be passed to the FX department for cloth and hair sim. Motion captured motion will be adjusted and reworked by an animator. But because the processes and basic relationship to time used by each motion creation method are incompatible, using more than one will force your pipeline into an exceptionally rigid configuration.

Do you discover, after simulating a character’s clothing, that its silhouette no longer reads? You must exit the simulation process and return to the animation process—where you can no longer see the effects of your simulation. Do you discover, while reworking a motion captured performance, that it must be significantly different to fulfill the needs of the project (or the whims of the client)? Your choices are either to turn it into animated motion, or to return to the motion capture stage and throw out any work you’ve done up to that point, since a process that is not time-independent cannot be easily used to modify existing motion.

Recorded and generated motion might conceivably be made compatible in terms of process if the generated motion was calculated in real-time as the motion was recorded, but neither can be made compatible with animated motion by the very nature of the processes involved. You can’t run a simulation backwards.** The real world, meanwhile, is so famously strict about its arrow of time that reversing its direction requires violating fundamental physical laws, usually the purview of a Doctor of some sort (notable Doctors with experience in this sort of thing include Emmett Brown and “just The Doctor, thanks,” although I understand that they have beef).

Interestingly, this isn’t true of many other parts of the CG production process, even though they are not used to create motion. It’s entirely possible, for instance, to animate and light a shot concurrently, updating the data in the lighting file as new animation revisions become available. The only reason we do not generally do this in the other direction, pushing lighting information to animation scenes, is just that since most lighting and shading is not intended for real-time use it wouldn’t be much use to an animator. That’s a technological limitation, not an inherent consequence of incompatible processes, and it’s one that isn’t even that hard to bridge: many studios have pipelines that use final rendered assets and their actual renderer for playblasts. Of course, the very best case scenario would be real-time rendering in-viewport.

Similarly, modeling and rigging processes do not produce the same kind of hard incompatibility as the various processes associated with motion authoring. Certainly, most riggers would prefer to have a model locked before rigging begins, but this is more of a bulwark against careless modelers and indecisive directors than an inherent incompatibility of processes—there is no reason one could not rig a base mesh while a modeler continues to work on surface detail, assuming one trusted the modeler not to make proportional changes that would cause major rig revisions (which is a very big assumption). Since I often act as modeler, rigger, and animator, I will often make modeling changes in situ.

Pipeline implications aside, the different methods of motion authoring are also fundamentally good for different things. This may seem obvious--no one tries to motion capture a dragon, simulate a main character’s performance, or animate realistic clothing behavior--but I don’t think that the differences are always fully appreciated. Specifically, there is a reason why animation lends itself so readily to comedy and action genres, and has such difficulty with subtlety.

Human perception and understanding of the actual behavior of the world around us is awful. Half the information we think we have about what happens around us is just bullshit our brains make up. This is terrible for pretty much everything we do, except art. It’s great for art, because it’s possible to appeal to those skewed expectations to create artistic effects that cannot be perceived in the real world, because they don’t actually happen.

For animation, this means appealing to human cluelessness about physics. I’m not talking about the classic “cartoon physics” cliches--walk out over a cliff and don’t fall till you look down etc--but something much more elemental about how movement is portrayed. For instance, “hang time” at the top of a character’s leap looks great to the human eye, even though the way it’s usually portrayed in animation is flat-out physically impossible. Animation can produce aesthetic effects that cannot be recorded and would be exceedingly difficult to generate, precisely because the direct, time-independent control of every aspect of movement by a human artist allows for the creation of movement that is wrong but feels right.

Conversely, aesthetic effects that rely on a great deal of fidelity to real life are precisely what animation struggles with. I want to animate clothing because I intend to animate it in a highly stylized manner--hand animating realistic clothing would be completely insane. At the far end of the spectrum you get something like a photorealistic face. Ironically, that’s pretty much the one thing we are good at perceiving, and animating one successfully is so incredibly difficult that I don’t think anyone has ever actually succeeded in doing so, even once, to this very day.

It will not surprise readers of this blog that all my interest is in animated motion, and that I have little use for the other two. Their incompatibilities with the process of animation make them a bad choice for the kind of fast production I’m interested in. However, there’s some question about whether these three categories fully encompass what’s possible. Not all procedural animation techniques necessarily have an “arrow of time,” and there is some possibility of developing some sort of “assisted animation” process where time-independent procedural techniques are used by an animator while animating. Better automatic inbetweening through an ML-assisted breakdown tool, for instance, is something me and Tagore Smith have discussed a bit in the past, and there may be some real potential there to speed up the animation process. But the potential for harmony between algorithmic and animated processes remains largely untapped. For the moment, I intend to deal with the problem by telling all procedural and simulated motion generation methods to keep their damn dirty hands off my characters.

* Stop motion animation seems like a good counterargument to my definition here--doesn’t it always have to proceed forward frame by frame, and doesn’t that give it an inherent time arrow just like generated and recorded motion? My answer would be that it still falls into the category of animated motion since arcs, poses, and performance details can all be decided on ahead of time with precision (even if they often aren’t)--indeed, I understand it’s quite common at studios like Laika for animators to “pose block” a stop motion shot for approval, and then use that as a skeleton to build the final shot on. It is a bit of a grey area, though.

** Some may take issue with my contention that simulation can’t be defined in a time-independent way, since simulations can have goals. While this does allow you to define what some aspects of the simulation will look like at a particular point in time, I don’t think it’s the same thing as the time-independence of the animation process, since you still can’t actually know how your simulation will reach that goal until you run it.

I have a new website

It's at https://www.rafanzovin.com!

In the process of prepping stuff for it, I gathered some stuff that had been kicking around on the web and consolidated some of it on my Vimeo page. For instance, here's a bunch of holiday card animations we sent out to people from Anzovin Studio in 2016:

And here's a promotional piece I animated while at Doodle Pictures (which is now part of Atwater Studios):

Both used interpolationless animation and partially ephemeral rigs (using the manipulator-based Phantom Tools system described in the previous post). While most of what I want to do would use very flat NPR rendering, using full rendering on the pirate piece ended up working out pretty well even though it has a variable pose rate--it gives it a bit of a stop-motiony feel.

 

Breakdowns and autokeying

breakdance.png

Ah, the life of an amateur programmer, so very full of backtracking because you did it wrong the first time (actually, as I understand it that might be all programmers). Now that I’ve had time to do the necessary refactoring, the ephemeral rig system supports being used to make breakdowns.

Doing breakdowns was a big part of the reason I needed an ephemeral rig system in the first place. The system we used on the Monkey test and New Pioneers, which we called Phantom Tools, had some limited ephemeral behavior built into manipulators. That made it possible to pose with a “broken” rig without having to move each part individually, though it was nowhere near as flexible as full ephemeral rigging. Its biggest limitation, however, was that, being built into manipulators, it was completely separate from our breakdown tool. So breakdowns between poses that were significantly different from each other would tend to collapse, as there was no rig behavior in place while the breakdown was created.

The ephemeral rig graph, however, can now be run in either “control” or “breakdown” mode. In control mode the graph is activated by a callback placed on the control node the user is currently manipulating, the same as I’ve been showing in the last few months of posts. In “breakdown” mode, on the other hand, the callback is instead placed on a slider object that can be used to slide the character between adjacent poses.

A note here about UI: so far, everything I’ve done for the UI on the ephemeral rig system (in-context menu aside) has been done with actual meshes in the scene. This is a really stupid way to do UI, but I ended up being backed into doing it that way to avoid even bigger headaches.

The selection map, for instance, could in theory be easily replaced by any one of many commercially available or free picker tools, and it was certainly my original plan to use mgPicker or a similar tool rather than continuing to roll my own. The problem with this is that I needed to do some pretty nonstandard stuff. For example, I wanted to be able to add stretch meters that would change color as the character was manipulated to give the user insight into how far off-model they were pulling a pose. It was necessary for this to update as the scene evaluated to be a useful indicator, and the easiest way to do that turned out to be just making it a part of the scene. No doubt I could have rolled my own QT-based interface that would have accomplished that goal, but that’s more work than I wanted to put into the picker.

The breakdown slider was originally going to be an exception to this: it’s easy enough to make a slider in a custom window that runs a command when dragged, and it’s much cleaner than having something that’s actually hanging around in the scene. The problem with this turned out to be related to how I have been running the graph and doing auto-keying.

As you may or may not recall from my earlier experiments, I’m handling auto-keying rather differently from the conventional Maya auto-keying. I want to treat poses as if they were drawings in a 2D animation package like Toon Boom, meaning that instead of keys on specific frames you have poses with duration. So if I go modify the pose on a frame that doesn’t happen to be its first frame (where the keyframe actually is), that doesn’t matter. I’m still modifying that pose.

Since I’m not relying on Maya’s auto-keying, I’ve needed to implement my own, and that means knowing when the user has just done something that requires setting another key. To do this, I have a global option called “doAutoKey”, and every time the ephemeral callback fires it sets this variable to True.

ephCallback.PNG

A brief note on coding style here--setting values by using get() and set() methods isn't considered to be good coding practice in Python and I really shouldn't be using them. The only reason they're present here is that I have a lot of global options I need to set and just setting them directly introduces problems with scope that I didn't know how to fix when I wrote this bit. So I ended up falling back on that particular practice because I was already familiar with it from pyMEL (which actually has kind of a reasonable excuse for that behavior that doesn't apply here) and I was in a hurry. It's not ideal and at some point I'm going to go back and purge it from my code.

Then when the user stops, I have another callback firing on each idle cycle. If the autokey option is set, it performs the autokey, and then turns it off so that subsequent idle cycles will not autokey again until the user activates it by interacting with the rig. In the old system I implemented this with a scriptJob, which was ugly. The new system just uses an idle callback, which is much cleaner.

idleCallback.PNG

In addition to performing autokey if necessary, the idle callback also builds the ephemeral graph and callback if it doesn't already exist. A callback on time change kills the graph:

onscrub.PNG

...and then, when the user has alighted on a frame, the next idle cycle triggers the idle callback, setting the current frame to be the stored frame that the eph callback function will use to determine if it should run, and then building the graph again for this new frame. (Building the graph is one of the things that the setManipModes() function does, in addition to setting the manipulation modes for the character's nodes based on the current selection.) Basically it's the same concept as my old system, except it doesn't have to check the scene for some attribute that sets the current pose, and instead just handles it all in code (and all scriptJobs have been replaced by callbacks). It's vastly simpler and less brittle.
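
A stripped-down sketch of that callback wiring might look like the following. The real version lives in a class, and the placeholder functions here stand in for the actual autokey and graph-build routines:

```python
import maya.api.OpenMaya as om2

state = {'doAutoKey': False, 'graph': None, 'callbackIds': []}

def performAutoKey():
    pass  # placeholder: key the controls of the current pose

def buildEphemeralGraph():
    return object()  # placeholder: build the graph for the current frame

def onEphemeralManipulation(*args):
    # Stand-in for the per-control callback: flag that an autokey is owed.
    state['doAutoKey'] = True

def onIdle(clientData=None):
    # When Maya goes idle, perform the deferred autokey once and clear the flag.
    if state['doAutoKey']:
        performAutoKey()
        state['doAutoKey'] = False
    # If a time change killed the graph, rebuild it for the new frame.
    if state['graph'] is None:
        state['graph'] = buildEphemeralGraph()

def onTimeChanged(clientData=None):
    # Scrubbing invalidates the graph; the next idle cycle rebuilds it.
    state['graph'] = None

def installCallbacks():
    state['callbackIds'] = [
        om2.MEventMessage.addEventCallback('idle', onIdle),
        om2.MEventMessage.addEventCallback('timeChanged', onTimeChanged),
    ]

def removeCallbacks():
    for cid in state['callbackIds']:
        om2.MMessage.removeCallback(cid)
    state['callbackIds'] = []
```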

This worked great for the ephemeral rig interaction, but when I tried to drive it with a GUI slider it fell apart completely. Turns out that interacting with a GUI slider does not prevent Maya from firing idle callbacks, causing the system to interrupt the user’s interaction to set keys! This actually makes sense because unlike a manipulator, which interacts directly with Maya’s graph, the slider just executes a command whenever the user changes its value, and there’s no particular reason to assume it would execute those commands fast enough that Maya wouldn’t have time to do idle cycles in-between (and indeed, if you use the Shape Editor or any other built-in slider-based interface in Maya, you will very much see idle cycles happening all the time during manipulation).

With some additional work it would have been possible to get around this. For instance, I could have created the slider in QT and turned the idle callback on and off based on mouse events. But since I already had a whole bunch of machinery set up to have things work correctly when being driven by a transform node for the ephemeral interaction, I decided it was just easier (for this prototype, at any rate) to use the system I already had set up and make the slider an object in the scene.

Building the graph in breakdown mode has a few differences from building it in control mode. Unlike building a control graph, which has a specific point where you can start tracing the graph from (the node the user is manipulating), the breakdown graph does not necessarily have a clear point of entry, or may have multiple points (as it’s possible to break down specific nodes while the rest of the body reacts normally). In fact, it's possible to have multiple "islands" in the breakdown graph that do not even relate to each other.

The ephemeral graph building logic turned out to be surprisingly robust here--it’s possible to throw pretty much any set of nodes, connected or otherwise, at the graph, and they’ll organize themselves appropriately and establish which ones should be driving which. The main thing I had to add was a new type of constraint, a “NoConstraint,” so that nodes that do not have any drivers can still operate in the graph. This is never needed when building the control graph, because only the critical path is ever built, ensuring that the only node that does not have an input is the control node being manipulated by the user.

breakdownDG.png
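
My guess at what a "NoConstraint" boils down to is something like this, where currentMatrix() stands in for however a node reports its existing matrix:

```python
class NoConstraint(object):
    # Same interface as the real constraints, but leaves a driverless node
    # exactly where its keyframes already put it.
    def __init__(self, node):
        self.node = node

    def evaluate(self):
        # Nothing upstream to pull from; just report the node's current matrix.
        return self.node.currentMatrix()
```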

Creating the actual breakdown behavior depended on adding the ability for each node to know what its past and future matrices are, in addition to its current matrix. Because all the nodes are in world space (or at least in the same space) this wasn’t too difficult, as I can get their “world space” values right off their keyframes. Maya doesn’t usually evaluate the graph on frames other than the current frame, but you can ask it for the whole timeline’s worth of keyframes at any time. This is another big advantage of keeping all the control rig behavior outside the Maya graph--you can treat the actual world space location of each control throughout the shot as known information you can look at at any time, not something that must be evaluated before it can be known.

Here's how I look at the keys associated with a given node and figure out which ones represent the current, past, and future poses from the current time:

localPoses.PNG

Note that this is a bit more complex than just using the findKeyframe command to find next or previous keys, because, since I'm treating them as poses rather than keyframes, the "current pose" may or may not actually be on the current frame.
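
Here's an approximation of that lookup--not the code in the screenshot, just the idea that the "current" pose's key is the last one at or before the current time:

```python
import maya.cmds as cmds

def localPoseTimes(node, currentTime):
    # Gather every key time on the node (across all its keyed channels).
    keyTimes = sorted(set(cmds.keyframe(node, query=True, timeChange=True) or []))
    previous = current = future = None
    for t in keyTimes:
        if t <= currentTime:
            previous, current = current, t   # last key at or before now wins
        elif future is None:
            future = t                       # first key after now
    return previous, current, future
```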

Finally, a word to the wise: if you ask om2 what a transform node’s rotate values are, it will quite naturally and correctly give you radians, as God intended. But the keyframes? They are probably in degrees, also called the Devil’s Unit. Mark this warning well, lest you be deceived into feeding this unholy unit to a function designed to accept only pure and immaculate radians. It’s extremely confusing.
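
For instance, something along these lines is needed before keyed rotation values can be handed to om2 (a trivial sketch that assumes the rotate channels are actually keyed):

```python
import math
import maya.cmds as cmds
import maya.api.OpenMaya as om2

def keyedRotationAsRadians(node, time):
    # Keyframe queries return rotation in the UI unit (normally degrees), so
    # convert before building an MEulerRotation, which expects radians.
    degrees = [cmds.keyframe(node + '.' + channel, query=True, eval=True,
                             time=(time, time), valueChange=True)[0]
               for channel in ('rotateX', 'rotateY', 'rotateZ')]
    return om2.MEulerRotation(*[math.radians(d) for d in degrees])
```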
 

How to interact with an ephemeral rig

puppet.png

As I’ve implemented the new ephemeral rig system, I’ve thought a lot about what the best way to interact with it is. While the system supports setting any given control to be driven by any other control using any of the constraints I’ve implemented (more on that later), defining these on a case-by-case basis is not a great way to interact with a character. As I discussed in an earlier post, The Animation Core Loop, fiddling with settings isn’t what you want to be thinking about when you’re animating a character, and setting interactions on a control-by-control basis adds too many extra steps to the “core loop” of animation.

One way to handle this would be to have a bunch of presets, like different “rigs” you can switch the character to, but this doesn’t really take advantage of the flexibility of ephemeral rigging. So I’ve been working on trying to figure out an interaction scheme that has a limited number of settings to mess with, but gives you a wide range of possible rig behaviors out of those limited settings. What I have right now is what I’m calling “directional manipulation.”

In this model, you have three primary interaction modes, “Forward,” “Backward,” and the default. When in a default state, the rig supports mostly free interaction, except for knees and elbows which may or may not be “suspended,” my non-IK version of IK. Forward drives things down the chain from the current control--in the simplest case, this is just FK--and backwards drives things up the chain. You can use both at once to drive both sides. The system can act on a limb or a character level, and has a couple of other settings--like rotation isolation, and suspension--that can be turned on or off. You can get a surprising amount of rig behavior out of this basic idea. For instance, a reverse foot is simply manipulating the foot backwards in limb mode--something you can just as easily do with a spine to swing the hips around.

It unifies a lot of rig interaction ideas into a few manipulation settings, which I think is the best way to think about interacting with the rig--not “this is what this control is set to” but “this is how I want to interact right now.” In my previous videos, I used hotkeys to activate or deactivate the various manipulation options. Now I’m experimenting with using a custom marking menu. I’m not sure which will be better in practice--some experimentation while animating will be necessary to reveal that.

It’s not perfect yet--for instance, Backwards works perfectly well on knees and elbows, but doesn’t do anything all that useful!

You’d probably want to have it move the torso in this case, which is completely doable but means implementing a special case so the torso can recognize which of its possible children it should be attaching itself to correctly, much like the system I already have to make sure the graph passes through paired controls in the right direction. Since this is an unusual case it’s not top priority for me right now but it’s how I think things should work eventually.

There are also three other settings that globally affect the behavior of all controls, however they are being interacted with. “Suspended” turns on and off the suspend behavior of elbows, knees, and the intermediate tail controls.

“Rotation Isolation” globally affects whether certain controls--such as hips, shoulders, and head--maintain their orientation when affected by other controls.

Finally, “Head Free” is my one concession to a control-specific setting. Sometimes you’d want the head to move with the torso, other times you’d want it to be free. But unlike the “Suspended” mode, which affects a bunch of different controls, there’s only one head.

This isn’t really a finished concept--as I animate with this system, I hope I’ll be able to find an even simpler scheme to use to globally affect the rig that requires fewer settings, but this already feels a lot smoother to interact with than my earlier attempts.

How to construct a node graph

So there are a bunch of pieces to how the ephemeral rig graph works, but I figure it’s best to start by talking about how the graph itself behaves. It’s actually really similar to how Maya’s graph works in DG mode, but unlike Maya’s graph it only needs to exist for nodes that are in the “critical path” of the user’s interactions. Currently the graph only supports manipulating one node at a time, so the critical path is whatever nodes depend on the one the user has selected.

criticalPath.png

The naive way to solve this would be to just start with the node being manipulated and work your way down, but this would break down fairly quickly. For one thing, you can have nodes in the critical path that depend on nodes that aren’t in the critical path. This is, for instance, true when manipulating the hand when the rig is in “suspend” mode. The elbow depends on the hand, but it also depends on the shoulder, which is not in the critical path of the hand.

Here's another case: a character grasping its head with its hand. I've paired the head and hand so they move together as one. Rotating from the torso, however, affects the elbow from two different directions--from the hand via the head, and from the shoulder (note the HUD telling you what rig interaction mode is currently active).

criticalPathHeadAndHand.png

Trying to evaluate these examples forward through the graph would end up getting you old data, or nonexistent data. Basically, you need a way to know what has or hasn't already been evaluated, and base your order of evaluation on that. In other words, just like Maya's graph, nodes need to be able to be "dirty" or "clean" so you can know if they're safe to pull data from to calculate other nodes. And the simplest way to arrange evaluation around that question is to start at the end of the "tree" of nodes and work backwards through the graph.

Let’s take a look at the code for the ephemeral rig node class:

ephNode.PNG

A bunch of this code is about finding connections through the graph and building constraints, but for the moment what we care about is just the eval() method of the node.

nodeEval.PNG

Note that it tells its drivers to evaluate before it itself does. And, because all nodes in the graph have an eval method, the nodes on which it depends will tell their drivers to do the same. So first the evaluation requests will cascade up the graph until they hit something that doesn't require any drivers, i.e. the node the user is controlling or a "dummy node" that isn't in the critical path (like the shoulder in our arm example). Then the evaluation itself cascades back down the graph, marking the nodes clean as it goes. When there are multiple branches to the graph, the evaluation request cascade will stop when it encounters a clean node and use that node's results without evaluating, preventing nodes from being needlessly evaluated twice. To figure out what's in the critical path in the first place, the graph builds itself by looking at message connections I've placed in the scene.
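
To make the shape of that concrete, here's a deliberately tiny sketch of the same idea--pull-based evaluation with dirty/clean flags. The names and the "math" are placeholders, not the actual ephNode implementation:

```python
# Minimal sketch of the dirty/clean cascade--not the real ephNode class.

class SketchNode(object):
    def __init__(self, name, drivers=None):
        self.name = name
        self.drivers = drivers or []   # nodes this node depends on
        self.dirty = True              # True means "needs re-evaluation"
        self.value = None

    def eval(self):
        # A clean node's cached result is safe to reuse, so the request
        # cascade stops here instead of evaluating the node twice.
        if not self.dirty:
            return self.value
        # Ask every driver to evaluate first; requests cascade up the graph
        # until they hit a node with no drivers (the manipulated node, or a
        # dummy standing in for something outside the critical path).
        driverValues = [driver.eval() for driver in self.drivers]
        # Placeholder for the real constraint math: just record what drove us.
        self.value = (self.name, driverValues)
        self.dirty = False             # mark clean on the way back down
        return self.value


# The suspended-arm case: the elbow depends on both the hand (being
# manipulated) and the shoulder (outside the critical path, so already clean).
shoulder = SketchNode("shoulder")
shoulder.dirty, shoulder.value = False, ("shoulder", [])
hand = SketchNode("hand")
hand.dirty, hand.value = False, ("hand", [])
elbow = SketchNode("elbow", drivers=[shoulder, hand])
print(elbow.eval())
```

Here the shoulder and hand start out clean, so the elbow's eval() just pulls their cached results and marks itself clean afterward.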

These connections simply tell the ephemeral rig system what types of graph could be built--they do nothing else. When a node is selected, the ephemeral rig system looks at these connections and, depending on the current settings each node has for what its relationships to other nodes should be, builds the graph anew. Here's the code for doing so.

walkNodes.PNG

Basically what's happening here is that the function starts with the node the user currently has selected, and recursively walks down the connections to find all the nodes that this node could affect. At each step, those possible connections are filtered to get only the ones relevant to the current interaction.

For instance, each node also has its own current constraint type, which is stored as a string attribute on the transform node. When I switch to a different interaction mode, this string is set for the nodes in the limb I have selected. Holding down 'Z' to switch into FK mode while manipulating an arm, for example, would set all its ephConstraintType attributes to "forward." Then, when the graph is rebuilt, it will filter the connections used to find the critical path down to just "forward" connections.
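
As a rough, stubbed-out illustration--in the real system the possible connections are message connections in the scene and the filter reads each node's ephConstraintType attribute, but here they're just dictionaries--the walk looks something like this:

```python
# Stubbed-out sketch of the critical-path walk. The dictionaries stand in for
# scene connections and per-node constraint-type attributes.

possibleConnections = {            # node -> nodes it could drive
    "chest": ["shoulder"],
    "shoulder": ["elbow"],
    "elbow": ["hand"],
}
constraintType = {                 # what each node is currently set to
    "chest": "forward",
    "shoulder": "forward",
    "elbow": "forward",
    "hand": "forward",
}

def walkCriticalPath(nodeName, mode, visited=None):
    """Recursively gather every node the selected node could affect, keeping
    only connections that match the current interaction mode."""
    if visited is None:
        visited = set()
    visited.add(nodeName)
    for drivenName in possibleConnections.get(nodeName, []):
        if drivenName in visited:
            continue                                  # don't double back
        if constraintType.get(drivenName) != mode:
            continue                                  # filtered out for this mode
        walkCriticalPath(drivenName, mode, visited)
    return visited

# Selecting the shoulder in "forward" mode pulls the elbow and hand into the
# critical path, but not the chest (nothing points back up the chain to it).
print(sorted(walkCriticalPath("shoulder", "forward")))
```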

This doesn't build the graph itself, though--it just finds the nodes in the critical path. To actually build the graph, each node looks back up the connections to find its drivers. These drivers may or may not themselves be in the critical path, and if they aren't, the node creates a "dummy" node for them. Dummy nodes have all the methods of an ephNode, so the whole group of nodes can be called together safely, but they don't actually do anything. I already know they aren't in the critical path, so they won't be affected by the user and are always clean.
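
In sketch form, a dummy is just the null-object pattern: same interface, no work. (Again, illustrative only--not the real class.)

```python
# Illustrative stand-in for a driver outside the critical path: same interface
# as a real node, but it never needs evaluating.
class DummySketchNode(object):
    def __init__(self, name):
        self.name = name
        self.dirty = False   # always clean--the user can't affect it

    def eval(self):
        # Nothing to compute; in the real system this would just report the
        # control's transform as it already exists in the scene.
        return (self.name, [])
```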

I also have it filter the drivers to detect and prevent circularities. The "pair" constraint type has connections that are always two-way, so it's necessary to choose a single path through the pair and prevent the graph from doubling back on itself. Since the graph is rebuilt every time you select a different control, this still produces seemingly circular behavior--i.e. you can manipulate the pair from either side--as it will find a different path through the pair each time.

nodeFilter.PNG

(Yes, I know that isNotDrivenByNode() uses an if/else when I could perfectly well use a single not to do the same thing in one line. Formulating it this way is easier for my brain to reason about. I'm not sorry.)
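
The general idea of that filter boils down to something like this (a simplified sketch, not the actual code in the screenshot):

```python
def isNotDrivenByNode(candidateDriverName, drivenNodeNames):
    """Sketch version: reject a candidate driver that this node already
    drives, so a two-way "pair" connection can't double back on itself."""
    if candidateDriverName in drivenNodeNames:
        return False
    else:
        return True


def filterDrivers(candidateDriverNames, drivenNodeNames):
    # Keep only the candidate drivers that won't create a circularity.
    return [name for name in candidateDriverNames
            if isNotDrivenByNode(name, drivenNodeNames)]
```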

Note that I’m being very careful when choosing variable names to create an obvious division between strings on the one hand, and actual nodes (which may be MObjects, EphNodes, or occasionally PyNodes) on the other. Every time I’m using a string to refer to a Maya node (as you would with the commands module) I use a variable like “nodeName,” never “node.” When mixing cmds and Open Maya 2 in the same code base this distinction is very important!
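
A trivial example of the convention in action (this assumes a running Maya session with something selected; the control name in the comment is made up):

```python
import maya.cmds as cmds
import maya.api.OpenMaya as om2

# cmds-land: Maya nodes are just strings, so the variable name says so.
nodeName = cmds.ls(selection=True)[0]          # e.g. "arm_ctrl"

# Open Maya 2-land: the same node as an actual MObject.
selectionList = om2.MSelectionList()
selectionList.add(nodeName)
node = selectionList.getDependNode(0)          # an MObject, not a string

# Round-tripping makes the distinction obvious.
print(nodeName + " -> " + om2.MFnDependencyNode(node).name())
```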

Just remember, the map is not the territory, and the tap is not meritorious! Little Shaper humor for you there, sundog!

Rigs are software

codeDemonstration.png

There are a bunch of new ephemeral rig tests at the bottom of this post. If you’re here to see cool rigging and don’t want to read a fairly long rant, scroll down there. Then scroll back up because you should read my rant anyway!

Years ago--I guess it would have been some time in 2007--Anzovin Studio was working with a company called Digital Fish on rigging tools for their animation package Reflex. I’ve probably mentioned Reflex here before--the fact that it was never released publicly is pretty good evidence that God is dead. You might have heard of Digital Fish, since one of the things they do these days is maintain OpenSubdiv.

Reflex did not require you to rig with its node graph, and indeed did not (at that time) provide any GUI tools for rigging at all. Instead you'd code rigs in a domain-specific language they'd developed for that purpose. What you could do with that language was extremely open, including defining your own deformers as needed. Coming from a Maya-by-way-of-Lightwave-and-Animation:Master TD background, not a programming background, this sounded insane to me. And indeed, it probably contributed to Reflex's slow acceptance and eventual dormancy--but not because it was a bad idea. I've come to believe that Reflex's rigging-as-programming approach was in fact precisely the right idea. It was just ahead of its time, and most TDs, including me, weren't ready to hear it.

Well get ready, because much like the Master of Magnetism was eventually acknowledged to have Made Some Valid Points, we’re all going to have to admit that Reflex Was Right.

magnetoWasRight.jpg

The fact is that rigging is programming. It wasn’t necessarily meant to be. I recently encountered someone I hadn’t spoken to for years. He hadn’t had any real contact with the industry since the late 90s, and he asked me if I still did “boning.” (No, seriously, this was an actual term that people used, I’m not making that up.) Go back far enough in time, and rigging really is basically about placing bones and defining deformation and not much else.

But obviously rigging today is nothing like that. Even the simplest modern rig contains a great deal of internal logic about what drives what, by what method, under what conditions--and whether or not you are actually writing any code, that is programming. To be clear, I'm not referring here to scripted auto-rigging. I mean the rig itself is a program. A great deal of the problem with scripted auto-rigging tools--and despite being the designer of a fairly popular Maya autorig tool, I have begun to regard the whole "auto-rigging" concept with suspicion--is that it's a program that exists only to generate another program. You might even say that an auto-rig tool compiles to a rig, which would be fine except that the auto-rig program is frequently more complex than the rig it's supposed to generate would be if expressed in code, and that's not the direction that's supposed to go.

This idea that rigs are software isn't new, or something I made up--it's becoming an increasingly common view. Raffaele Fragapane's Cult of Rig takes that view--part of what makes his approach so interesting is that he's applying programming concepts like encapsulation to the Maya node graph. Cesar Saez has a great article on how the TD world is bifurcating into people who are fundamentally artists and people who are really software engineers.

Probably to some of the people who read this blog, the idea of just coding a rig from scratch sounds terrifying. The good news is that it actually isn't! What surprised me about this project is that it was much easier than I'd thought--considerably easier than my earlier, hacky implementation of ephemeral rigging that attempted to get Maya to do a bunch of the work for me. Once again, we see that the supposedly more intuitive, "user-friendly" approach turns out to be much more work than just buckling down and doing things the "hard" way.

That said, thinking of rigging as programming is more a point of view than a specific practice--it doesn't necessarily imply that you have to write your rig in Python the way I'm doing now. However, creating your "rig program" purely through the Maya node graph locks you into a very specific idea of how the rig can evaluate. I'm not arguing that node graphs are inherently bad--in fact I chose to write my new ephemeral rig system as a graph with nodes, specifically because it's an easy way to figure out the correct order to evaluate things in. But Maya generally expects everything to evaluate through its graph, even when that's counterproductive. How can you tell that it's counterproductive? Look at how often Maya breaks its own rules.

IK handles are a perfect example. There’s nothing stopping you from making an IK handle that would evaluate through the Maya node graph as long as it isn't cyclic, but they wanted their handle to have two-way interactions with the joints that it drives. So they completely broke Maya’s basic model of scene evaluation, to the eternal consternation of TDs who thought they could reason about the graph by tracing connections (suckers!). Maya IK handles have worked this way since Maya came out in 1998. That’s how long it took Alias/Wavefront to give up on Maya’s scene graph model and just start special casing things—no time at all. They couldn’t get a single release of the product out before doing so.

So how does my node graph differ from Maya's? Well, for one thing it's vastly simpler and does an extremely specific task, instead of trying to be the basis for an entire application. To be fair, if I were trying to write a node graph that could support that load, I probably would have failed miserably, since I'm not actually a software engineer!

Its very lightweight-ness is both the reason I could write it and its purpose--it's so simple that I can destroy and recreate it in different forms as needed without incurring a significant performance penalty or pulling the rug out from under some other aspect of the scene. I also make no assumption that the graph is the only way to evaluate transforms in the ephemeral rig system. It's used to correctly order the evaluation of transforms when that order matters, and ignored when it doesn't.

Here are a couple more examples.

Here I’m doing a reverse foot, ephemeral rig style. Once again, rebuilding the graph lets me switch behaviors in seemingly circular ways easily. Of particular note here is that nothing really needs to change much in order to allow for "backwards kinematics"--I don’t have special controls or attributes. It’s just manipulating the same controls but tracing a different set of connections to build the graph. Any set of controls could be set up to behave that way, just by making appropriate connections that can be followed by the ephemeral DG.

And here’s a tail, showing just how useful the ephemeral rig behavior is at posing arbitrary numbers of controls.

Next time we’ll get into the code, and see how the ephemeral rig DG operates at a low level.