First flight

flight.png

Here’s the very first animation test using the full ephemeral rig system!

And here’s the second, recorded for your edification.

Animating these with the system was a blast—posing was just as fast as I’d hoped! In particular, you can see how easy it is to manipulate the tail, casually switching control modes as needed. It also revealed some areas I want to improve. Only allowing zeroing in forward mode, for instance, really isn’t as convenient as I’d hoped, so I’ll need to unlock that for other modes and figure out how to best present those options to the animator.

I’ve hidden every aspect of the interface here that isn’t relevant to moving the rig around or scrubbing the timeline. I’m using Christoph Lendenfeld’s onion skin tool until I have a chance to reintegrate the 3D onion skins into the new system. Also, full disclosure—not every control in this rig is ephemeral. Specifically, fingers and face controls still use a conventional hierarchy, though I’m excited to start rigging faces ephemerally.

Finally, here’s a clip of Jaaaaaaaaames Baxter talking about animation technique, which I think perfectly encapsulates the workflow I want to enable for CG.

T-45 minutes and counting

I’ve now added all the features the ephemeral rig system needs to actually be used for animating a real shot, so that’s what I’ll be doing next. At 2287 lines, it is by far the largest programming project I have ever personally completed. Here’s a few words on those last, crucial features.

One obvious question that comes up about ephemeral rigging is how you zero things. Normally, you’d zero things by putting zeroes in every channel. (Except scale, or any other attribute that multiplies something else. We’ve all made that mistake at least once.)

For ephemeral rigging, this makes no sense. There is no “zeroed” space for anything to return to, except the actual center of the scene. Indeed, I intend to animate with an ephemeral rig while keeping the channel box entirely hidden! It serves no useful purpose in the interpolationless, ephemeral context.

But we can’t do away with the concept entirely. It will still be necessary to be able to return controls to some sort of “default” state. Without a meaningful parent space, the only kind of “default” that makes any sense is for ephemeral controls to be able to default to a specific relationship to each other.

Allow me to demonstrate.

To make this possible without parent spaces, I have each control store its default TRS values on the node as extra attributes. Because everything in the ephemeral rig is in the same space, pulling these values and building a matrix out of them is precisely the same as getting the world matrix of the node when it’s in its default position. I can therefore get those values at any time and find out where the node should be relative to any other node by comparing its default matrix to the default matrix of that node.
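In sketch form, it looks something like this (defaultTranslate/defaultRotate/defaultScale are stand-ins for whatever the extra attributes end up being called, and this isn’t my actual implementation):

```python
# Sketch: rebuild a control's stored default world matrix from extra attributes,
# then compare two defaults to get the offset between them.
import math
import maya.cmds as cmds
import maya.api.OpenMaya as om2

def default_matrix(node_name):
    t = cmds.getAttr(node_name + '.defaultTranslate')[0]
    r = cmds.getAttr(node_name + '.defaultRotate')[0]      # assumed stored in degrees
    s = cmds.getAttr(node_name + '.defaultScale')[0]
    xform = om2.MTransformationMatrix()
    xform.setTranslation(om2.MVector(*t), om2.MSpace.kWorld)
    xform.setRotation(om2.MEulerRotation(*[math.radians(v) for v in r]))
    xform.setScale(s, om2.MSpace.kWorld)
    return xform.asMatrix()

def default_offset(node_name, driver_name):
    # Where the node sits relative to its driver when both are in their defaults.
    return default_matrix(node_name) * default_matrix(driver_name).inverse()
```

Multiplying that offset onto the driver’s current world matrix then gives you the zeroed position for the control.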

ephDefault.png

In this case, I’ve implemented zeroing through forward mode, i.e. zeroing a control will return it to its default relationship to its forward driver, and take its “child” controls with it. In theory there’s no particular reason that zeroing must be limited to the forward relationship. You could zero backwards, or sideways, or whatever you want. But figuring out how to make this accessible to the user in a clear way is tricky, so I’ve fallen back on the most basic functionality for the moment. I expect this will be the most common use of zeroing, in any case--while it’s completely essential to be able to zero out a control in some way, I don’t actually anticipate using it all that much.

It’s worth noting that in order to zero these controls, I actually have to build a whole ephemeral graph. Other controls may depend on them, and the controls being zeroed themselves must be zeroed in the correct order if they depend on each other. Because this is basically similar to the breakdown graph (i.e. multiple controls can be used to start tracing the graph), I’ve made this a special case of the breakdown graph build.

Another new feature I’ve just added is “dummy” controls. These are temporary controls that allow you to pivot a control from any location, without actually adjusting the pivot of anything.

One thing I greatly dislike about Maya’s transforms is the way pivot offsets get in between the ideally one-to-one relationship between its TRS values and its matrix. There’s a reason why we generally “buffer” a control instead of freezing transformations--adding an additional transform to cleanly change a node’s parent space is preferable to dealing with the pivot gunk that attaches itself to a transform like a barnacle at the slightest opportunity. Basically, pivots are evil and I hate them.

That said, you obviously want to have pivot-like behavior, even if no actual pivots are used in the Maya transform sense. In the ephemeral rig system, this is actually rather easy to do--since there is already a concept of “pairing” nodes to allow for temporary connections between ephemeral controls, a “pivot” is simply an additional control that gets paired with whatever control you are using.

To keep with my philosophy that the ephemeral rig system never changes the Maya node graph (other than creating message connections that have no effect on the Maya scene) and does all ephemeral behavior internally, this control is not created or added as needed--it already exists in the scene, just hanging out waiting to be used. When you’re done using this “dummy” to pivot a control, it simply vanishes from view until needed again.

Finally, I’ve added a little HUD to the system that hovers around in the corner of the viewport, giving you an indication of which interaction mode is currently active. This allows me to shift changing the interaction mode back to hotkeys, which is much smoother than using the menu all the time (although I’ve left the options in the menu in case I ever need them there).

Like most of the GUI I have here, these are all represented by meshes in the scene. I know that using meshes to create GUIs is wrong, but I just can’t stop.

What even is animation?

backtothefuture.png

So, this animation thing. What is it, exactly? I mean, like, on a philosophical level? Time for some navel gazing!

Animation is difficult to define, but most definitions I’ve seen rely on the concept of frame-by-frame creation or manipulation. Something recorded from the real world in real time is not animation--anything created frame-by-frame is.

I think this definition made sense before the advent of CG, but it makes very little sense now. Is a VFX shot of an explosion making use of fluid simulation, and containing no characters, animated? By this definition, it is. Is a shot that uses only motion capture data, with no intervention by an animator, animation? (I know that doesn’t happen much in real life, but bear with me.) By this definition, it is. But neither of those scenarios describes something much like the animation an animator does.

Conversely, is motion graphics animation? This definition includes that too, but in that case I think it’s clearly correct--a motion graphics artist is an animator, just not a character animator. There is something similar between the processes a character animator and a motion graphics animator use that is fundamentally different from a shot that relies purely on simulation or mocap. The conventional definition fails to “cut the problem at its joints,” and leads to a lot of misunderstanding about what animation is or isn’t good for, and how it can be used.

I think this all becomes a lot clearer if you abandon the “frame-by-frame” definition and look at animation as just one method of creating motion data. I propose that there are three main methods of authoring motion:

motionTypes.png

Recorded motion: motion recorded from a performance in real time. Performance capture, puppeteering, and live action film are all methods of recording motion.

Generated motion: motion created algorithmically from a set of starting conditions. Unlike the other two, generated motion only exists in CG. This includes all simulation and most procedural animation.

Animated motion: motion defined directly by a human artist. This includes CG animation (character animation or otherwise), but also drawn animation and stop motion.

These three methods of generating motion are very different from each other in terms of how they relate to time. Recorded motion is, of course, authored in real time. Generated motion may or may not be real-time. It does, however, have an “arrow of time,” albeit one imposed by a simulation’s reliance on prior states rather than the second law of thermodynamics.

Animated motion alone allows independence from time’s arrow.* An animator builds up a shot from its “bones”—usually storytelling poses, but this applies even to the “layered” animation approach—in a way completely at odds with the basic process of either recorded or generated motion. This is both animation’s great strength (the artistic possibilities offered by this way of looking at motion) and its great weakness (it’s really goddamn time-consuming).

Most shots in a conventional CG production process will use some combination of these three methods. Keyframed animation will be passed to the FX department for cloth and hair sim. Motion capture data will be adjusted and reworked by an animator. But because the processes and basic relationship to time used by each motion creation method are incompatible, using more than one will force your pipeline into an exceptionally rigid configuration.

Do you discover, after simulating a character’s clothing, that its silhouette no longer reads? You must exit the simulation process and return to the animation process—where you can no longer see the effects of your simulation. Do you discover, while reworking a motion captured performance, that it must be significantly different to fulfill the needs of the project (or the whims of the client)? Your choices are either to turn it into animated motion, or to return to the motion capture stage and throw out any work you’ve done up to that point, since a process that is not time-independent cannot be easily used to modify existing motion.

Recorded and generated motion might conceivably be made compatible in terms of process if the generated motion was calculated in real time as the motion was recorded, but neither can be made compatible with animated motion by the very nature of the processes involved. You can’t run a simulation backwards.** The real world, meanwhile, is so famously strict about its arrow of time that reversing its direction requires violating fundamental physical laws, usually the purview of a Doctor of some sort (notable Doctors with experience in this sort of thing include Emmett Brown and “just The Doctor, thanks,” although I understand that they have beef).

Interestingly, this isn’t true of many other parts of the CG production process, even though they are not used to create motion. It’s entirely possible, for instance, to animate and light a shot concurrently, updating the data in the lighting file as new animation revisions become available. The only reason we do not generally do this in the other direction, pushing lighting information to animation scenes, is just that since most lighting and shading is not intended for real-time use it wouldn’t be much use to an animator. That’s a technological limitation, not an inherent consequence of incompatible processes, and it’s one that isn’t even that hard to bridge: many studios have pipelines that use final rendered assets and their actual renderer for playblasts. Of course, the very best case scenario would be real-time rendering in-viewport.

Similarly, modeling and rigging processes do not produce the same kind of hard incompatibility as the various processes associated with motion authoring. Certainly, most riggers would prefer to have a model locked before rigging begins, but this is more of a bulwark against careless modelers and indecisive directors than an inherent incompatibility of processes—there is no reason one could not rig a base mesh while a modeler continues to work on surface detail, assuming one trusted the modeler not to make proportional changes that would cause major rig revisions (which is a very big assumption). Since I often act as modeler, rigger, and animator, I will often make modeling changes in situ.

Pipeline implications aside, the different methods of motion authoring are also fundamentally good for different things. This may seem obvious--no one tries to motion capture a dragon, simulate a main character’s performance, or animate realistic clothing behavior--but I don’t think that the differences are always fully appreciated. Specifically, there is a reason why animation lends itself so readily to comedy and action genres, and has such difficulty with subtlety.

Human perception and understanding of the actual behavior of the world around us is awful. Half the information we think we have about what happens around us is just bullshit our brains make up. This is terrible for pretty much everything we do, except art. It’s great for art, because it’s possible to appeal to those skewed expectations to create artistic effects that cannot be perceived in the real world, because they don’t actually happen.

For animation, this means appealing to human cluelessness about physics. I’m not talking about the classic “cartoon physics” cliches--walk out over a cliff and don’t fall till you look down etc--but something much more elemental about how movement is portrayed. For instance, “hang time” at the top of a character’s leap looks great to the human eye, even though the way it’s usually portrayed in animation is flat-out physically impossible. Animation can produce aesthetic effects that cannot be recorded and would be exceedingly difficult to generate, precisely because the direct, time-independent control of every aspect of movement by a human artist allows for the creation of movement that is wrong but feels right.

Conversely, aesthetic effects that rely on a great deal of fidelity to real life are precisely what animation struggles with. I want to animate clothing because I intend to animate it in a highly stylized manner--hand animating realistic clothing would be completely insane. At the far end of the spectrum you get something like a photorealistic face. Ironically, that’s pretty much the one thing we are good at perceiving, and animating one successfully is so incredibly difficult that I don’t think anyone has ever actually succeeded in doing so, even once, to this very day.

It will not surprise readers of this blog that all my interest is in animated motion, and that I have little use for the other two. Their incompatibilities with the process of animation make them a bad choice for the kind of fast production I’m interested in. However, there’s some question about whether these three categories fully encompass what’s possible. Not all procedural animation techniques necessarily have an “arrow of time,” and there is some possibility of developing some sort of “assisted animation” process where time-independent procedural techniques are used by an animator while animating. Better automatic inbetweening through an ML-assisted breakdown tool, for instance, is something Tagore Smith and I have discussed a bit in the past, and there may be some real potential there to speed up the animation process. But the potential for harmony between algorithmic and animated processes remains largely untapped. For the moment, I intend to deal with the problem by telling all procedural and simulated motion generation methods to keep their damn dirty hands off my characters.

* Stop motion animation seems like a good counterargument to my definition here--doesn’t it always have to proceed forward frame by frame, and doesn’t that give it an inherent time arrow just like generated and recorded motion? My answer would be that it still falls into the category of animated motion since arcs, poses, and performance details can all be decided on ahead of time with precision (even if they often aren’t)--indeed, I understand it’s quite common at studios like Laika for animators to “pose block” a stop motion shot for approval, and then use that as a skeleton to build the final shot on. It is a bit of a grey area, though.

** Some may take issue with my contention that simulation can’t be defined in a time-independent way, since simulations can have goals. While this does allow you to define what some aspects of the simulation will look like at a particular point in time, I don’t think it’s the same thing as the time-independence of the animation process, since you still can’t actually know how your simulation will reach that goal until you run it.

I have a new website

It's at https://www.rafanzovin.com!

In the process of prepping material for it, I gathered some stuff that had been kicking around on the web and consolidated some of it on my Vimeo page. For instance, here's a bunch of holiday card animations we sent out to people from Anzovin Studio in 2016:

And here's a promotional piece I animated while at Doodle Pictures (which is now part of Atwater Studios):

Both used interpolationless animation and partially ephemeral rigs (using the manipulator-based Phantom Tools system described in the previous post). While most of what I want to do would use very flat NPR rendering, using full rendering on the pirate piece ended up working out pretty well even though it has a variable pose rate--it gives it a bit of a stop-motiony feel.

 

Breakdowns and autokeying

breakdance.png

Ah, the life of an amateur programmer, so very full of backtracking because you did it wrong the first time (actually, as I understand it that might be all programmers). Now that I’ve had time to do the necessary refactoring, the ephemeral rig system supports being used to make breakdowns.

Doing breakdowns was a big part of the reason I needed an ephemeral rig system in the first place. The system we used on the Monkey test and New Pioneers, which we called Phantom Tools, had some limited ephemeral behavior built into manipulators. That made it possible to pose with a “broken” rig without having to move each part individually, though it was nowhere near as flexible as full ephemeral rigging. Its biggest limitation, however, was that, being built into manipulators, it was completely separate from our breakdown tool. So breakdowns between poses that were significantly different from each other would tend to collapse, as there was no rig behavior in place while the breakdown was created.

The ephemeral rig graph, however, can now be run in either “control” or “breakdown” mode. In control mode the graph is activated by a callback placed on the control node the user is currently manipulating, the same as I’ve been showing in the last few months of posts. In “breakdown” mode, on the other hand, the callback is instead placed on a slider object that can be used to slide the character between adjacent poses.

A note here about UI: so far, everything I’ve done for the UI on the ephemeral rig system (in-context menu aside) has been done with actual meshes in the scene. This is a really stupid way to do UI, but I ended up being backed into doing it that way to avoid even bigger headaches.

The selection map, for instance, could in theory be easily replaced by any one of many commercially available or free picker tools, and it was certainly my original plan to use mgPicker or a similar tool rather than continuing to roll my own. The problem with this is that I needed to do some pretty nonstandard stuff. For example, I wanted to be able to add stretch meters that would change color as the character was manipulated, to give the user insight into how far off-model they were pulling a pose. It was necessary for this to update as the scene evaluated to be a useful indicator, and the easiest way to do that turned out to be just making it a part of the scene. No doubt I could have rolled my own Qt-based interface that would have accomplished that goal, but that’s more work than I wanted to put into the picker.

The breakdown slider was originally going to be an exception to this: it’s easy enough to make a slider in a custom window that runs a command when dragged, and it’s much cleaner than having something that’s actually hanging around in the scene. The problem with this turned out to be related to how I have been running the graph and doing auto-keying.

As you may or may not recall from my earlier experiments, I’m handling auto-keying rather differently from the conventional Maya auto-keying. I want to treat poses as if they were drawings in a 2D animation package like Toon Boom, meaning that instead of keys on specific frames you have poses with duration. So if I go modify the pose on a frame that doesn’t happen to be its first frame (where the keyframe actually is), that doesn’t matter. I’m still modifying that pose.

Since I’m not relying on Maya’s auto-keying, I’ve needed to implement my own, and that means knowing when the user has just done something that requires setting another key. To do this, I have a global option called “doAutoKey”, and every time the ephemeral callback fires it sets this variable to True.

ephCallback.PNG

A brief note on coding style here--setting values by using get() and set() methods isn't considered to be good coding practice in Python and I really shouldn't be using them. The only reason they're present here is that I have a lot of global options I need to set and just setting them directly introduces problems with scope that I didn't know how to fix when I wrote this bit. So I ended up falling back on that particular practice because I was already familiar with it from pyMEL (which actually has kind of a reasonable excuse for that behavior that doesn't apply here) and I was in a hurry. It's not ideal and at some point I'm going to go back and purge it from my code.

Then when the user stops, I have another callback firing on each idle cycle. If the autokey option is set, it performs the autokey, and then turns it off so that subsequent idle cycles will not autokey again until the user activates it by interacting with the rig. In the old system I implemented this with a scriptJob, which was ugly. The new system just uses an idle callback, which is much cleaner.

idleCallback.PNG
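Roughly, the shape of it is this (perform_autokey() is a stand-in for the real keying logic, and a plain dict stands in for the global option store):

```python
# Sketch: the ephemeral callback flags that a key is needed; an idle callback
# performs the autokey once the user stops interacting, then clears the flag.
import maya.api.OpenMaya as om2

OPTIONS = {'doAutoKey': False}

def on_ephemeral_edit(*args):
    # Fired by the ephemeral callback whenever the user moves the rig.
    OPTIONS['doAutoKey'] = True

def on_idle(*args):
    # Runs every idle cycle, but only keys if the user just changed something.
    if OPTIONS['doAutoKey']:
        perform_autokey()              # stand-in: keys the current pose
        OPTIONS['doAutoKey'] = False   # don't key again until the next interaction

idleCallbackId = om2.MEventMessage.addEventCallback('idle', on_idle)
# cleaned up later with om2.MMessage.removeCallback(idleCallbackId)
```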

In addition to performing autokey if necessary, the idle callback also builds the ephemeral graph and callback if it doesn't already exist. A callback on time change kills the graph:

onscrub.PNG

...and then, when the user has alighted on a frame, the next idle cycle triggers the idle callback, which sets the stored frame (the one the eph callback function checks to determine whether it should run) to the current frame, and then builds the graph again for this new frame. (Building the graph is one of the things that the setManipModes() function does, in addition to setting the manipulation modes for the character's nodes based on the current selection.) Basically it's the same concept as my old system, except it doesn't have to check the scene for some attribute that sets the current pose, and instead just handles it all in code (and all scriptJobs have been replaced by callbacks). It's vastly simpler and less brittle.
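The scrub handling then looks something like this (a sketch extending the idle callback above, with kill_graph(), graph_exists(), and setManipModes() standing in for the real functions):

```python
# Sketch: a timeChanged callback tears the ephemeral graph down while scrubbing,
# and the idle callback rebuilds it once the user settles on a frame.
import maya.cmds as cmds
import maya.api.OpenMaya as om2

def on_time_changed(*args):
    kill_graph()                           # stand-in: remove eph nodes + callback

def on_idle(*args):
    if OPTIONS['doAutoKey']:
        perform_autokey()
        OPTIONS['doAutoKey'] = False
    if not graph_exists():                 # stand-in check
        OPTIONS['currentFrame'] = cmds.currentTime(query=True)
        setManipModes()                    # rebuilds the graph for the new frame

timeCallbackId = om2.MEventMessage.addEventCallback('timeChanged', on_time_changed)
```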

This worked great for the ephemeral rig interaction, but when I tried to drive it with a GUI slider it fell apart completely. Turns out that interacting with a GUI slider does not prevent Maya from firing idle callbacks, causing the system to interrupt the user’s interaction to set keys! This actually makes sense, because unlike a manipulator, which interacts directly with Maya’s graph, the slider just executes a command whenever the user changes its value, and there’s no particular reason to assume it would execute those commands fast enough that Maya wouldn’t have time to do idle cycles in between (and indeed, if you use the Shape Editor or any other built-in slider-based interface in Maya, you will very much see idle cycles happening all the time during manipulation).

With some additional work it would have been possible to get around this. For instance, I could have created the slider in Qt and turned the idle callback on and off based on mouse events. But since I already had a whole bunch of machinery set up to have things work correctly when being driven by a transform node for the ephemeral interaction, I decided it was just easier (for this prototype, at any rate) to use the system I already had set up and make the slider an object in the scene.

Building the graph in breakdown mode has a few differences from building it in control mode. Unlike building a control graph, which has a specific point where you can start tracing the graph from (the node the user is manipulating), the breakdown graph does not necessarily have a clear point of entry, or may have multiple points (as it’s possible to break down specific nodes while the rest of the body reacts normally). In fact, it's possible to have multiple "islands" in the breakdown graph that do not even relate to each other.

The ephemeral graph building logic turned out to be surprisingly robust here--it’s possible to throw pretty much any set of nodes, connected or otherwise, at the graph, and they’ll organize themselves appropriately and establish which ones should be driving which. The main thing I had to add was a new type of constraint, a “NoConstraint,” so that nodes that do not have any drivers can still operate in the graph. This is never needed when building the control graph, because only the critical path is ever built, ensuring that the only node that does not have an input is the control node being manipulated by the user.
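In skeleton form, the idea isn’t much more than this (a sketch, with worldMatrix standing in for however the node exposes its current matrix):

```python
# Sketch: a "constraint" that does nothing, so a driverless node can still sit
# in the graph and be evaluated like everything else.
class NoConstraint(object):
    def __init__(self, node):
        self.node = node          # an EphNode with no drivers

    def evaluate(self):
        # Nothing drives this node; its "result" is wherever it already is.
        return self.node.worldMatrix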

breakdownDG.png

Creating the actual breakdown behavior depended on adding the ability for each node to know what its past and future matrices are, in addition to its current matrix. Because all the nodes are in world space (or at least in the same space) this wasn’t too difficult, as I can get their “world space” values right off their keyframes. Maya doesn’t usually evaluate the graph on frames other than the current frame, but you can ask it for the whole timeline’s worth of keyframes at any time. This is another big advantage of keeping all the control rig behavior outside the Maya graph--you can treat the actual world space location of each control throughout the shot as known information you can look at at any time, not something that must be evaluated before it can be known.

Here's how I look at the keys associated with a given node and figure out which ones represent the current, past, and future poses from the current time:

localPoses.PNG

Note that this is a bit more complex than just using the findKeyframe command to find next or previous keys, because, since I'm treating them as poses rather than keyframes, the "current pose" may or may not actually be on the current frame.
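The pose lookup boils down to something like this (a sketch, using translateX's keys as a stand-in for however you choose to query the pose keys):

```python
import maya.cmds as cmds

def local_pose_times(node_name, attr='translateX'):
    # All key times on this channel, treated as the start frames of poses.
    times = sorted(cmds.keyframe(node_name + '.' + attr, query=True, timeChange=True) or [])
    now = cmds.currentTime(query=True)
    earlier = [t for t in times if t <= now]   # the current pose starts at or before now
    later = [t for t in times if t > now]
    current = earlier[-1] if earlier else None
    past = earlier[-2] if len(earlier) > 1 else None
    future = later[0] if later else None
    return past, current, future
```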

Finally, a word to the wise: if you ask om2 what a transform node’s rotate values are, it will quite naturally and correctly give you radians, as God intended. But the keyframes? They are probably in degrees, also called the Devil’s Unit. Mark this warning well, lest you be deceived into feeding this unholy unit to a function designed to accept only pure and immaculate radians. It’s extremely confusing.
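For the record, the trap looks something like this (pCube1 here is just a stand-in node):

```python
import math
import maya.cmds as cmds
import maya.api.OpenMaya as om2

sel = om2.MGlobal.getActiveSelectionList()
fn = om2.MFnTransform(sel.getDagPath(0))
print(fn.rotation())     # MEulerRotation, in radians

# Keyframe queries on rotate channels come back in degrees--convert before mixing the two.
deg = cmds.keyframe('pCube1.rotateX', query=True, valueChange=True) or []
rad = [math.radians(v) for v in deg]
```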
 

How to interact with an ephemeral rig

puppet.png

As I’ve implemented the new ephemeral rig system, I’ve thought a lot about what the best way to interact with it is. While the system supports setting any given control to be driven by any other control using any of the constraints I’ve implemented (more on that later), defining these on a case-by-case basis is not a great way to interact with a character. As I discussed in an earlier post, The Animation Core Loop, fiddling with settings isn’t what you want to be thinking about when you’re animating a character, and setting interactions on a control-by-control basis adds too many extra steps to the “core loop” of animation.

One way to handle this would be to have a bunch of presets, like different “rigs” you can switch the character to, but this doesn’t really take advantage of the flexibility of ephemeral rigging. So I’ve been working on trying to figure out an interaction scheme that has a limited number of settings to mess with, but gives you a wide range of possible rig behaviors out of those limited settings. What I have right now is what I’m calling “directional manipulation.”

In this model, you have three primary interaction modes, “Forward,” “Backward,” and the default. When in a default state, the rig supports mostly free interaction, except for knees and elbows, which may or may not be “suspended,” my non-IK version of IK. Forward drives things down the chain from the current control--in the simplest case, this is just FK--and backward drives things up the chain. You can use both at once to drive both sides. The system can act on a limb or a character level, and has a couple of other settings--like rotation isolation, and suspension--that can be turned on or off. You can get a surprising amount of rig behavior out of this basic idea. For instance, a reverse foot is simply manipulating the foot backwards in limb mode--something you can just as easily do with a spine to swing the hips around.

It unifies a lot of rig interaction ideas into a few manipulation settings, which I think is the best way to think about interacting with the rig--not “this is what this control is set to” but “this is how I want to interact right now.” In my previous videos, I used hotkeys to activate or deactivate the various manipulation options. Now I’m experimenting with using a custom marking menu. I’m not sure which will be better in practice--some experimentation while animating will be necessary to reveal that.

It’s not perfect yet--for instance, Backwards works perfectly well on knees and elbows, but doesn’t do anything all that useful!

You’d probably want to have it move the torso in this case, which is completely doable but means implementing a special case so the torso can recognize which of its possible children it should be attaching itself to, much like the system I already have to make sure the graph passes through paired controls in the right direction. Since this is an unusual case, it’s not a top priority for me right now, but it’s how I think things should work eventually.

There are also three other settings that globally affect the behavior of all controls, however they are being interacted with. “Suspended” turns on and off the suspend behavior of elbows, knees, and the intermediate tail controls.

“Rotation Isolation” globally affects whether certain controls--such as hips, shoulders, and head--maintain their orientation when affected by other controls.

Finally, “Head Free” is my one concession to a control-specific setting. Sometimes you’d want the head to move with the torso, other times you’d want it to be free. But unlike the “Suspended” mode, which affects a bunch of different controls, there’s only one head.

This isn’t really a finished concept--as I animate with this system, I hope I’ll be able to find an even simpler scheme for globally affecting the rig that requires fewer settings, but this already feels a lot smoother to interact with than my earlier attempts.

How to construct a node graph

So there are a bunch of pieces to how the ephemeral rig graph works, but I figure it’s best to start by talking about how the graph itself behaves. It’s actually really similar to how Maya’s graph works in DG mode, but unlike Maya’s graph it only needs to exist for nodes that are in the “critical path” of the user’s interactions. Currently the graph only supports manipulating one node at a time, so the critical path is whatever nodes depend on the one the user has selected.

criticalPath.png

The naive way to solve this would be to just start with the node being manipulated and work your way down, but this would break down fairly quickly. For one thing, you can have nodes in the critical path that depend on nodes that aren’t in the critical path. This is, for instance, true when manipulating the hand when the rig is in “suspend” mode. The elbow depends on the hand, but it also depends on the shoulder, which is not in the critical path of the hand.

Here’s another case: a character grasping its head with its hand. I’ve paired the head and hand so they move together as one. Rotating from the torso, however, affects the elbow from two different directions--from the hand via the head, and from the shoulder (note the HUD telling you what rig interaction mode is currently active).

criticalPathHeadAndHand.png

Trying to evaluate these examples forward through the graph would end up getting you old data, or nonexistent data. Basically, you need a way to know what has or hasn’t already been evaluated, and base your order of evaluation off of that. In other words, just like Maya’s graph, nodes need to be able to be “dirty” or “clean” so you can know if they are safe to pull data from to calculate other nodes. And the simplest way to arrange evaluation around that question is to start at the end of the “tree” of nodes and work backwards through the graph.

Let’s take a look at the code for the ephemeral rig node class:

ephNode.PNG

A bunch of this code is about finding connections through the graph and building constraints, but for the moment what we care about is just the eval() method of the node.

nodeEval.PNG

Note that it tells its drivers to evaluate before it itself does. And, because all nodes in the graph have an eval method, the nodes on which it depends will tell their drivers to do the same. So first the evaluation requests will cascade up the graph until they hit something that doesn’t require any drivers, i.e. the node the user is controlling or a “dummy node” that isn’t in the critical path (like the shoulder in our arm example). Then the evaluation itself cascades back down the graph, marking the nodes clean as it goes. When there are multiple branches to the graph, the evaluation request cascade will stop when it encounters a clean node and use that node’s results without evaluating, preventing nodes from being needlessly evaluated twice. To figure out what’s in the critical path in the first place, the graph builds itself by looking at message connections I’ve placed in the scene.
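Stripped of everything else, the evaluation logic looks roughly like this (a sketch, not the real class, with computeFromDrivers() standing in for the actual constraint math):

```python
class EphNode(object):
    def __init__(self, name):
        self.name = name
        self.drivers = []          # upstream EphNodes (or dummy nodes) feeding this one
        self.dirty = True
        self.worldMatrix = None

    def computeFromDrivers(self):
        # Stand-in for the real constraint math: in the actual system each
        # driver/constraint pair contributes to this node's new world matrix.
        return self.worldMatrix

    def eval(self):
        if not self.dirty:
            return self.worldMatrix                    # already clean: reuse the result
        for driver in self.drivers:
            driver.eval()                              # the request cascades up the graph
        self.worldMatrix = self.computeFromDrivers()   # evaluate once drivers are clean
        self.dirty = False                             # marked clean on the way back down
        return self.worldMatrix
```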

These connections simply tell the ephemeral rig system what types of graph could be built--they do nothing else. When a node is selected, the ephemeral rig system looks at these connections and, depending on the current settings each node has for what its relationships to other nodes should be, builds the graph anew. Here’s the code for doing so.

walkNodes.PNG

Basically what’s happening here is that the function starts with the node the user has currently selected, and recursively walks down the connections to find all the nodes that this node could affect. At each step, those possible connections are filtered to get only the ones relevant to the current interaction.

For instance, each node also has its own current constraint type, which is stored as a string attribute on the transform node. When I switch to a different interaction mode, this string is set for the nodes in the limb I have selected. Holding down ‘Z’ to switch into FK mode while manipulating an arm, for example, would set all of its ephConstraintType attributes to “forward.” Then when the graph is rebuilt it will filter the connections used to find the critical path to just “forward” connections.
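As a sketch, the walk is basically a recursive filter over those message connections (the message attribute naming here is a stand-in for my actual convention):

```python
import maya.cmds as cmds

def walk_critical_path(node_name, visited=None):
    # Collect every node downstream of the selected control, following only
    # the connections that match each node's current constraint type.
    visited = visited if visited is not None else set()
    visited.add(node_name)
    mode = cmds.getAttr(node_name + '.ephConstraintType')     # e.g. "forward"
    driven = cmds.listConnections(node_name + '.' + mode + 'Out',   # hypothetical attr name
                                  source=False, destination=True) or []
    for child in driven:
        if child not in visited:
            walk_critical_path(child, visited)
    return visited
```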

This doesn’t build the graph itself though--it just finds the nodes in the critical path. To actually build the graph, each node looks back up the connections to find its drivers. These drivers may or may not themselves be in the critical path, and if they aren’t the node creates a “dummy” node for them. These dummy nodes have all the methods of an ephNode so that the whole group of nodes can be called together safely, but they don’t do anything. I already know a dummy node isn’t in the critical path, so it will not be affected by the user and is always clean.

I also have it filter the drivers to detect and prevent circularities. The “pair” constraint type has connections that are always two-way, so it’s necessary to choose a single path through the pair and prevent the graph from doubling back on itself. Since the graph is rebuilt every time you select a different control, this still produces seemingly circular behavior--i.e. you can manipulate the pair from either side--as it will find a different path through the pair each time.

nodeFilter.PNG

(Yes, I know that isNotDrivenByNode() uses an if/else when I could perfectly well use a single not to do the same thing in one line. Formulating it this way is easier for my brain to reason about. I’m not sorry.)

Note that I’m being very careful when choosing variable names to create an obvious division between strings on the one hand, and actual nodes (which may be MObjects, EphNodes, or occasionally PyNodes) on the other. Every time I’m using a string to refer to a Maya node (as you would with the commands module) I use a variable like “nodeName,” never “node.” When mixing cmds and Open Maya 2 in the same code base this distinction is very important!
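The habit looks like this in practice (a trivial example):

```python
import maya.cmds as cmds
import maya.api.OpenMaya as om2

nodeName = cmds.ls(selection=True)[0]    # a string: fine for cmds
sel = om2.MSelectionList()
sel.add(nodeName)
node = sel.getDependNode(0)              # an MObject: what om2 functions want
```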

Just remember, the map is not the territory, and the tap is not meritorious! Little Shaper humor for you there, sundog!

Rigs are software

codeDemonstration.png

There are a bunch of new ephemeral rig tests at the bottom of this post. If you’re here to see cool rigging and don’t want to read a fairly long rant, scroll down there. Then scroll back up because you should read my rant anyway!

Years ago--I guess it would have been some time in 2007--Anzovin Studio was working with a company called Digital Fish on rigging tools for their animation package Reflex. I’ve probably mentioned Reflex here before--the fact that it was never released publicly is pretty good evidence that God is dead. You might have heard of Digital Fish, since one of the things they do these days is maintain OpenSubdiv.

Reflex did not require you to rig with its node graph, and indeed did not (at that time) provide any GUI tools for rigging at all. Instead you’d code rigs in a domain-specific language they’d developed for that purpose. What you could do with that language was extremely open, including defining your own deformers as needed. Coming from a Maya-by-way-of-Lightwave-and-Animation:Master TD background, not a programming background, this sounded insane to me. And indeed, it probably contributed to Reflex’s slow acceptance and eventual dormancy--but not because it was a bad idea. I’ve come to believe that Reflex’s rigging-as-programming approach was in fact precisely the right idea. It was just ahead of its time, and most TDs, including me, weren’t ready to hear it.

Well get ready, because much like the Master of Magnetism was eventually acknowledged to have Made Some Valid Points, we’re all going to have to admit that Reflex Was Right.

magnetoWasRight.jpg

The fact is that rigging is programming. It wasn’t necessarily meant to be. I recently encountered someone I hadn’t spoken to for years. He hadn’t had any real contact with the industry since the late 90s, and he asked me if I still did “boning.” (No, seriously, this was an actual term that people used, I’m not making that up.) Go back far enough in time, and rigging really is basically about placing bones and defining deformation and not much else.

But obviously rigging today is nothing like that. Even the simplest modern rig contains a great deal of internal logic about what drives what, by what method, under what conditions--and whether or not you are actually writing any code, that is programming. To be clear, I’m not referring here to scripted auto-rigging. I mean the rig itself is a program. A great deal of the problem with scripted auto-rigging tools--and despite being the designer of a fairly popular Maya autorig tool, I have begun to regard the whole “auto-rigging” concept with suspicion--is that it’s a program that exists only to generate another program. You might even say that an auto-rig tool compiles to a rig, which would be fine except that the auto-rig program is frequently more complex than the rig it’s supposed to generate would be if expressed in code, and that’s not the direction that’s supposed to go.

This idea that rigs are software isn’t new, or something I made up--it’s becoming an increasingly common view. Raffaele Fragapane’s Cult of Rig takes that view--part of what makes his approach so interesting is that he’s applying programming concepts like encapsulation to the Maya node graph. Cesar Saez has a great article on how the TD world is bifurcating into people who are fundamentally artists and people who are really software engineers.

Probably to some of the people who read this blog, the idea of just coding a rig from scratch sounds terrifying. The good news is that it actually isn’t! What surprised me about this project is that it was much easier than I’d thought--considerably easier than my earlier, hacky implementation of ephemeral rigging that attempted to get Maya to do a bunch of the work for me. Once again, we see that the supposedly more intuitive, “user-friendly” approach turns out to be much more work than just buckling down and doing things the “hard” way.

That said, thinking of rigging as programming is more a point of view than a specific practice--it doesn’t necessarily imply that you have to write your rig in Python the way I’m doing now. However, creating your “rig program” purely through the Maya node graph locks you in to a very specific idea of how the rig can evaluate. I’m not arguing that node graphs are inherently bad—in fact, I chose to write my new ephemeral rig system as a graph with nodes, specifically because it’s an easy way to figure out the correct order to evaluate things in. But Maya generally expects everything to evaluate through its graph, even when that’s counterproductive. How can you tell that it’s counterproductive? Look at how often Maya breaks its own rules.

IK handles are a perfect example. There’s nothing stopping you from making an IK handle that would evaluate through the Maya node graph as long as it isn't cyclic, but they wanted their handle to have two-way interactions with the joints that it drives. So they completely broke Maya’s basic model of scene evaluation, to the eternal consternation of TDs who thought they could reason about the graph by tracing connections (suckers!). Maya IK handles have worked this way since Maya came out in 1998. That’s how long it took Alias/Wavefront to give up on Maya’s scene graph model and just start special casing things—no time at all. They couldn’t get a single release of the product out before doing so.

So how does my node graph differ from Maya’s? Well, for one thing it’s vastly simpler and does an extremely specific task, instead of trying to be the basis for an entire application. To be fair, if I was trying to write a node graph that could support that load, I probably would have failed miserably, since I’m not actually a software engineer!

Its very lightweight-ness is both the reason why I could write it and its purpose—it’s so simple that I can destroy and recreate it in different forms as needed without incurring a significant performance penalty or pulling the rug out from under some other aspect of the scene. I also make no assumption that the graph is the only way to evaluate transforms in the ephemeral rig system. It’s used to correctly order evaluation of transforms when it’s important to do so, and ignored when evaluation order is not important.

Here’s a couple more examples.

Here I’m doing a reverse foot, ephemeral rig style. Once again, rebuilding the graph lets me switch behaviors in seemingly circular ways easily. Of particular note here is that nothing really needs to change much in order to allow for "backwards kinematics"--I don’t have special controls or attributes. It’s just manipulating the same controls but tracing a different set of connections to build the graph. Any set of controls could be set up to behave that way, just by making appropriate connections that can be followed by the ephemeral DG.

And here’s a tail, showing just how useful the ephemeral rig behavior is at posing arbitrary numbers of controls.

Next time we’ll get into the code, and see how the ephemeral rig DG operates at a low level.

An exemplary stylized shading example

We interrupt your regularly scheduled discussion of ephemeral rigging to bring you this awesome example of stylized NPR rendering. This piece really exemplifies what I love about the possibilities of a "flatter" style, even more than the Edge of Spider Verse trailer.

The only thing I might criticize about it is that the animation feels a little too smooth to me. It's extremely well done, both in terms of performance and in terms of graphical arcs/poses, but I feel like this kind of look almost calls out for a variable pose rate to feel stylistically cohesive. But that's also an opinion formed by a history of watching traditional animation--this looks so good that I'm willing to be convinced that a full pose rate can work with a stylized look.

Ephemeral Rig Mark 2

frankenstein.png

Last time, I talked about some of the issues I had with the existing ephemeral rig system, and that I planned to rebuild it. I didn’t talk about how I planned to rebuild it in any detail, because if I did there was a chance I was going to make a complete fool of myself. Luckily, that’s not what happened, so now you get to hear about how I’m writing my own rigging system, with its own dependency graph and its own constraints, that only connects to the Maya scene graph at specific points. And how this was actually much easier than it sounds.

First of all, take a gander at this:

Of particular note are the paired hand and prop controls. The old system would have allowed you to attach the hand to the prop, or the prop to the hand, but it would not have allowed the seemingly two-way connection you see here. I say seemingly because there isn’t actually any cycle-breaking going on here. What’s happening is that the dependency graph that exists when you select the hand--representing all the rigging behavior in the control rig, including hierarchy--and the graph that exists when you select the prop are two entirely different graphs. I simply have it rebuild itself from scratch any time anything changes. It will rebuild if you look at it wrong. It will rebuild if you sneeze. But it builds so fast--16 milliseconds for this three-control rig--that you’d never notice. This means that instead of hacking around with reparenting or constraining controls in Maya, I can just have the graph remake itself to work however it needs to work at this particular moment.

This diagram may make the behavior more clear:

ephDG.png

Conceptually, this is very similar to how the old system behaved: there’s a deformation rig that has keyframes, and a control rig that does not and is only used to manipulate the deformation rig. Previously, however, while the connection between the control rig and deformation rig was ephemeral, the control rig still existed in Maya as Maya transforms being evaluated through the Maya DAG and DG, each with its own callback to pass its data onto the deformation rig ephemerally.

Now there’s only one callback. It pulls data only from the node the user is currently manipulating, and all other rig behavior is evaluated by the ephemeral DG’s own nodes, with no connection at all to Maya’s evaluation. Then it pushes its data back to the Maya scene. And that’s the only other place it touches the Maya DG at all. It’s fast, too--evaluating this three-control rig takes a little less than a single millisecond, despite running a node graph that was written in Python, not exactly a language known for its performance.
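In rough outline, that single callback looks something like this (a sketch: ephGraph, graph.evaluate(), and graph.pushToMaya() are stand-ins for the real objects and methods):

```python
import maya.api.OpenMaya as om2

def on_control_changed(msg, plug, otherPlug, clientData):
    # Only respond to actual attribute edits from the manipulator.
    if msg & om2.MNodeMessage.kAttributeSet:
        graph = clientData      # the already-built ephemeral graph for this control
        graph.evaluate()        # run the ephemeral nodes in dependency order
        graph.pushToMaya()      # write the results back onto the deformation rig

sel = om2.MGlobal.getActiveSelectionList()
controlNode = sel.getDependNode(0)
callbackId = om2.MNodeMessage.addAttributeChangedCallback(controlNode, on_control_changed, ephGraph)
```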

This solves a huge number of problems. Remember all the hackery I had to engage in to get the ephemeral rig behavior to work when the control nodes were in Maya? Well I don’t have to do that anymore. No more special attribute to tell you whether or not you are on the current pose. Now I just kill the graph the moment the playhead leaves the current frame, and create it again the moment it stops. I mean, imagine deleting nodes and connections in Maya while in the process of scrubbing! But with my own nodes and DG, I can have them do whatever I want.

Similarly, the old system presented endless problems with undo. Because changing modes meant actually changing the Maya scene graph, getting it back into the right configuration to undo any given change was becoming a nightmare. While I don’t have undo fully working yet for the new system--you may notice that’s one thing I don’t do in the video!--tests suggest it will be far easier to implement for this system, since the graph has no persistent configuration in the first place, and there is no longer any concept of a “matchback” from the deformation rig to the control rig that would have to be triggered correctly.

I’ll be going through more details about how this system works in the next few posts.

 

The animation "core loop"

hamsterwheel.png

Game designers have a concept called a “core loop.” That’s the loop of things that your player will be doing over and over again during gameplay. For instance, the core loop for a shooter might be something like:

coreLoop-01.png

Game designers spend a lot of time trying to make the core loop fun, because no matter how many higher-level goals you build onto your game, if your game is a shooter then what you will actually be doing moment-to-moment is shooting stuff, so if that’s not fun you don’t have much of a game.

I think the concept of a core loop is useful for describing a lot of activities, especially artistic ones. The core loop for drawing might look like this:

coreLoop-02.png

Just as in game design, there are larger loops around that core loop for higher-level goals--in this case your higher-level goals might be things like composition, character design, and perspective. But you’re going to achieve all of those things by drawing a whole lot of lines through the core loop.

For me, at least, the constant cycling through steps of creation and evaluation is really central to creating anything, certainly for animation. Ideally, the animation core loop would look like this:

coreLoop-03.png

But it frequently looks more like this:

If you want to animate really fast, then getting this core loop to be as fast as possible is the number one priority, which means kicking out those extraneous steps as much as possible. You can mitigate the second extraneous step by relying on interpolationless animation, or push it to a different, less creatively important step of the process by using a blocking-plus approach and worrying about interpolation later, but that still leaves the cognitive load of thinking about how the motion must be expressed through the rig. Thinking about this has made me reconceive how I’m building the ephemeral rig system, and potentially I may end up having it work quite differently than it does now.

Right now I can take advantage of the ephemeral nature of the rig to make it reconfigurable. To do that, I’ve made a little in-context menu that lets you switch a control or limb to different preset rig states. You can also manually attach nodes together in any order.

But I’ve come to the conclusion that this is still too much effort. Configuring the rig isn’t as bad as thinking about a conventional rig and all of its interpolation issues, but it’s still adding an additional step to the loop. You want to just go in and make the changes you want, not configure something first.

One thing I’m exploring now is having the interaction work more like manipulators than like configuring a rig. I wouldn’t actually be authoring new manipulators, but rather having different manipulation modes that affect the entire rig at once, so that you can switch back and forth between interaction modes with different modifier keys. To do that, I’m going to have to rebuild the system, because right now switching modes on the entire rig is much slower than it should be.

There are a bunch of reasons for that; relying too much on pyMEL is one of them. But in the process of figuring out how to create a more efficient system, I’ve realized that my attempt to hack the Maya scene graph to do what I want with just a little bit of help from om2 is not going to be the most effective way of achieving this, as I keep running into more and more issues reconciling what I’m doing with the rest of Maya’s stuff--managing undo, for instance, is becoming a real mess. In other words, I need to stop thinking like a stereotypical “TD,” hacking together a system out of whatever pieces are already lying around in Maya, and think like a developer, willing to implement my own rigging behavior purely through code that will only pass information back to Maya in specific places. My initial tests of this idea suggest that it will actually be much simpler than the rather baroque system I ended up building around the last iteration of the ephemeral rig concept, and I’ll be documenting my experiments in future posts.

I’m also investigating the exoSwitch constraint, recently released by Tim Naylor and Andrea Maiolo. The exoSwitch constraint is a bi-directional constraint, allowing you to manipulate either side of the constraint connection, which, for one thing, would let you get some basic ephemeral-like behavior without having to build a whole system like I am. While I don’t know how it works under the hood, I’m guessing that it’s passing information behind Maya’s back in a way not dissimilar to the ephemeral rig concept, because that’s really the only way I can imagine getting this behavior working in Maya.

One really nice feature it has is the ability to automatically change the driver of a constraint system based on which control is selected.

Whether or not I end up replacing aspects of the ephemeral rig system with the exoSwitch constraint, making rig behavior dependent on context like what control is selected is a really good idea that I want to use to remove even more extraneous thought from that loop. I’ll post more about it once I have a chance to test it out more.

A Very Long Comment

books.png

Hey! It’s been a while since I’ve posted--I was working on a frankly unreasonable number of projects these last two months, some of which I hope to be able to show you soon, but it left me with very little time to add to this blog.

A couple of days ago, I was reminded I have to get back to this when I saw a comment come up on my last post “Action is his reward.” With permission, I’m reproducing it here:

I am rewarded by your enthusiasm and I can relate to most of the content that you produced for this blog post.
However, this project may not the best case for the perspective you are presenting, as it stands with today's technology trends and capabilities (perhaps limitations as well).
I hope some day, doing this style of work proves to be more cost effective, as I would love to see more of this style in hopefully even more ambitious productions.
Let me elaborate some other perspective that may explain my point better and hopefully have more people appreciate lesser understood details about what is presented in that teaser.
If you think about a team of people creating this whole thing from scratch and let's say during the process they might be using some techniques uniquely advantageous and otherwise impossible when not animating using computer aided techniques, you can appreciate making those techniques work as they work in traditional animation medium will pose its own challenges.
It is only fair if I gave two examples as well...
For example computer simulation of any kind is hard if not impossible with non-continuous representations of motion when they don't interpolate in a relatively plausible way.
Another example would be re-creating a traditional "looking" style, let alone being attempted at a scale like this, will just be a huge technical undertaking.

Now, I have a consistent problem where I open my mouth intending to add just a sentence to a conversation and a nine-volume encyclopedia pops out instead. Accordingly, my attempt to answer the poster succinctly turned into a post-long response that I decided might as well just be a post, so here it is!

Thanks for your comment! You may be right that Spider-verse isn’t the best example, and certainly I wouldn’t hold it up as an example of the kind of production I intend to create--just as a very good example of stylized CG. I suspect that rendering in a stylized way, and making this style work with their existing methods, was quite expensive for SPI! I recall an artist who worked on Paperman describing it as twice the work of ordinary CG. That's certainly a danger with stylized approaches--but I think it's an avoidable one.

The problem, it seems to me, is that you really can't approach this sort of production as if it were conventional CG, with a conventional methodology and pipeline, and expect to reap the cost benefits I think are potentially realizable with it. You'd have to treat this kind of production very differently.

For instance, you mention simulation as something that would be difficult with non-continuous motion, and you're quite correct. So simulation itself would be the first thing on the chopping block for the production, outside of the occasional FX shot. It's one of the many steps that gums up the works of CG production and prevents us from getting to that an-artist-can-sit-down-and-just-make-something state. Plus I generally don't like its results on an artistic basis (at least in this stylized context). When traditional animators animate clothed characters, the clothing takes part in the character's silhouette and becomes a part of the performance. They never had any difficulty animating cloth by hand.

Yes, I am actually claiming that hand-animating cloth would be faster than simulating it, and I know how insane that sounds from a conventional CG perspective. But stylization completely changes the game. Consider the monkey test I posted a few months back.

The monkey is unclothed, of course, but there are definitely parts of his body that require secondary animation, notably his hair tufts and ears. The hair tufts at least would most likely be simulated if this shot were approached in a conventional manner. The way I approached the shot was not only to animate them by hand, but to animate them from the very beginning--the very first key poses I put down already included the ears and hair tufts as an inherent aspect of those poses, already contributing to silhouettes and arcs. It’s pretty difficult to get an accurate idea of exactly what percentage of my time animating the shot was devoted to them, but I’m going to guess it was only a few percent.

This is only possible because the stylized look allowed me to ignore the “higher frequency” details that would be required for a fully rendered character, and I expect these same details would also be unnecessary for character clothing. I’m much more interested in character silhouettes than I am in wrinkles and clothing detail, so some simple secondary animation that’s really just part of the character’s pose would actually be more effective.

The idea here is that this isn’t just any form of stylization--it’s a specifically chosen set of stylizations that support each other in the goal of massively reducing the amount of work involved. And that means choosing subjects that work with the grain of those stylistic choices. For instance, you may be wondering how I’d approach a long flowing cape or a long coat. The answer is...I wouldn’t. I wouldn’t generally put characters in long coats or capes. There are about a million stories you could tell that don’t require anyone to wear a cape. Creating low-cost CG in this manner would be about making the design choices that let you get the most bang for your buck, production-value-wise, while maintaining the essentials of character animation--a very different goal than the one I suspect drives companies like SPI and Disney to create stylized CG.

This also applies to the NPR rendering. There are a lot of ways to approach this problem, and some may be very time consuming! The two-tone methods I’m using here aren’t, though. I was able, as an individual with some understanding of the problem but no custom tools, to sit down and do the shading for the monkey test without much trouble. Partly this is again a matter of choosing the most direct path to something that both looks good and is efficient to create. The simple two-tone shading in the monkey test carries far less detail than the more painterly frames from Spider-verse, but I think it wouldn’t have any difficulty supporting emotionally engaging characters or exciting action scenes.

That said, the efficiency of this process could be improved a lot, and there’s a lot of room for R&D here--there’s still a level of manual tweaking required that I’d like to get rid of, and the two-tone shapes could be improved. I’m hoping to tackle some of those problems this year.

There’s still the question of how that process, however reasonable on a small scale, would scale up to a large production like a feature film. In many ways, it may help to think of the look development for such a production as being less like a conventional film production pipeline, and more like a game. Ideally, except for certain FX shots, such a production would not even have a rendering/compositing stage--what you would see working on the shot would simply be the shot. It might be quite literally “in-engine” if using a game engine as the hub of production turns out to be the right way to approach it (this is something I’m getting more and more interested in). While this doesn’t remove all potential issues with scaling the approach to feature film size, I think it does drastically simplify the problem. Of course, we haven’t actually produced a long-form project using these techniques, and I’m sure there are going to be unforeseen roadblocks, so we shall see!

In any case, thanks again for your comment! I hope this illuminates how I envision this production process differing from the way I imagine Spider-verse is being made, and why I think the immense cost gains I’m claiming here are achievable.

Action is his reward

spidermen.png

So by now I bet every single person who reads this blog has already seen the Spider-Man: Into the Spider-Verse trailer, but here it is just in case:

I have nothing whatsoever to do with this production, but I am very, very happy to see this trailer, and I’m even happier to see its reception, which has seemed very positive. I’m happy to see it because it means that something I really want to see--bolder, more striking style in both visuals and motion in animated films--is now something that is exciting to mainstream filmgoers with no specific investment in animation as an artform.

It wasn’t that long ago that the conventional wisdom was that anything with a more stylized look would be rejected by the American filmgoing public at large, with the implication that the success of Pixar and CG features in general was because the fully rendered look gave adults “permission” to enjoy something as fundamentally kiddy as animation, and that the same audience wouldn’t show up for a “cartoon.” At one time I think there was actually some truth to this. But that time was something like fifteen years ago now, which is plenty of time for a new generation with an entirely different set of aesthetic prejudices to come into their own as a demographic. I hope that this is the first example of a major sea change in how animation is marketed and consumed.

This isn’t the first time someone’s made a CG animated superhero film, of course, but even The Incredibles was marketed as a Pixar film first and foremost, which is practically its own genre. Its trailers led with Mr. Incredible’s dad bod and the family/superhero dichotomy, emphasizing the film’s comedy elements ahead of its action-adventure elements. This trailer is all about how cool it would be to be a Spider-person who can dive from skyscrapers and bounce off cars, and it seems marketed to the same audience that would watch any live-action Marvel movie.

How it’s stylized is exciting too. It has a variable pose rate*, and what I understand from talking to people from SPI is that it’s largely interpolationless! It’s not by any means the first CG feature to use those techniques--much of the animation in the Peanuts and Lego movies has been interpolationless, and as I understand it, previous features done by SPI have had so much frame-by-frame tweaking that some shots might as well have been. But I think it may be the first to use them in quite this way, married to nonphotorealistic rendering and used to depict more human characters in a non-comedic context.

This is as good a time as any to talk about my ultimate goals with the tools and processes I’m discussing on this blog. Certainly, I’d like to promote better rigging and animation tools overall, but my long-term goal is to help create a cheaper, much more direct process for creating high production value animated content. The idea is to be able to create low-budget productions in the $10-$15 million range without sacrificing the things that are actually important about animation--the sense of irrepressible life that the best animation has, and its ability to depict almost any setting or narrative with beauty and economy. A lot of my ideas about the inefficiency of CG production were inspired by Keith Lango’s old blog (I can’t seem to find the posts anymore, though), and refined in collaboration with Chris Perry, who directed The New Pioneers.

I don’t think it makes sense to rely on the same processes for low-budget production as you’d use for a high-budget feature, or even a mid-budget CG feature like the Lego or Despicable Me films. I don’t think I’ve seen an example of a low-budget CG film--a film under $20 million--that manages to capture the things I care about in animation. I think a lot of this is attributable to the production process, because CG production, as traditionally constituted, is highly indirect. Exerting artistic control through that process takes a lot of time and effort. You can create well, but you can’t create well and quickly. And while I’m not aiming for truncated schedules at all--more on that below--the ability for individual artists to create finished work quickly is a cornerstone of the small team size I do envision.

So if low-budget, high production value CG is going to be possible, it’s going to mean coming up with a different production process, and finding a way to shear away everything that isn’t the essential artistic work of the process (which is more-or-less irreducible). To me, that essential artistic work comes down to design and performance. Design in the sense of character design, art direction, shot composition--all the things that make an image beautiful. And performance in the sense of character expression and the graphical qualities of movement, all the things that make animation meaningful and engaging.

Neither of these qualities necessarily relies on the CG production process, and in fact the process works against both of them. A concept artist can create a beautiful image extraordinarily quickly, and in fact concept art (in my experience) is frequently much more beautiful than the fully rendered CG it is meant to inspire. A drawn animator can rough in a great character performance very quickly as well (although getting that animation into a finished state will require a great deal more effort). In both cases the artist is creating directly, without the process and pipeline acting as a dead weight.

We won’t be able to remove the “process tax” entirely, but I think we will be able to reduce it massively by choosing processes and stylistic decisions that work together to make animation much more direct. Ephemeral rigging and interpolationless animation is just one part of that. The biggest savings is actually in environment creation. Background art for this production process would be created much as it is for drawn animation--it would be painted. With the right stylistic choices and some simple 3D geometry for perspective and projection, a background artist can do the work of multiple departments in a conventional CG pipeline. For The New Pioneers, art director Chris Bishop set the tone for the background art team by simply painting a lot of it himself. An individual can sit down and produce finished background art, rather than shepherding it through concept, asset creation, lookdev, layout, set dressing, and lighting, and that art in a final or near-final form is what animators work to.

Nonphotorealistic rendering of characters and other fully CG elements is also an important part of the process. For one thing, you’d quickly lose any advantage from painting the backgrounds if they had to match the look and feel of fully rendered characters. For another, the fully rendered look tends to demand a high degree of polish from animation--a level of polish drawn animators do not need to concern themselves with. Watch great drawn animation closely, and you’ll see lots of little imperfections--“hitting a wall,” wobbling, parts of the body freezing in place--that never bothered anyone, but would stick out like a sore thumb in most CG**. Why I think this is so probably requires a different blog post, but suffice to say that it seems possible to escape these issues by using a simplified rendering style and variable pose rate. This frees animators to focus on the performance questions that matter, rather than spending endless time on polish. In some cases this reduces the amount of time needed to animate a shot to a fraction of the time you’d normally need to complete it.

Nonphotorealistic rendering also makes it easier, at least in theory, for the production to be entirely real-time. Last year I did some animation for Zafari, a production using the Unreal Engine for lighting and rendering. The advantages to the production were pretty huge, but they didn’t extend to layout and animation, which still had to be done in Maya.

Unfortunately it’s going to be a long time, if ever, before VP2 can keep up with the Unreal Engine. But while I don’t know if you’ll ever see VP2 producing something like the Fortnite trailer, it may be possible to use it for my much less photoreal purposes. That’s something I’m going to turn my attention to once the ephemeral rigging system is battle-tested in a few actual productions. Regardless of how it’s achieved, fully real-time production at every stage is a big part of the process I’m visualizing.

Lastly, I imagine using these techniques to create productions with small teams, rather than try to do production more quickly with a conventionally sized team. There’s such a huge logistical overhead to the normal animation studio environment that I don’t think adopting faster techniques would necessarily create the cost decreases that I’d like to see. A drastically smaller team doing a production on a more conventional--or even relaxed!--schedule could create far greater efficiencies, and allow the best artists on the team to do a lot of work of their own, rather than spending all their time managing others. The contribution of any individual artist to the production goes up too, hopefully leading to more engaged artists putting more of themselves into the production. You can think of what I’m trying to create as being way out on the “cheap/good” end of the “fast/cheap/good” space, even though the way I’m trying to do it is by making the work of production faster.

A lot of these ideas are pretty speculative! New Pioneers proved that they could work, but also showed a lot of areas where further development is needed (which is basically what this blog is chronicling!).

I don’t want to make movies like an Iron Man, from inside an expensive, complicated machine that blasts every problem with the same overwhelming force. I want to animate like a Spider-Man, leaping from scene to scene with incredible speed, clobbering shots with my own spider-strength and using just the right amount of technological assistance to let me swing away and leave them webbed up in my wake.

*I’ve decided to start using the term “variable pose rate” rather than “variable frame rate” to describe the mix of 1s, 2s, and 3s that are frequently used in drawn animation, because “frame rate” has a specific, separate meaning. Something can have a variable pose rate, but a frame rate of 24fps. You could have two characters with different pose rates within the same shot--indeed, you’re very likely to!--so the term “variable frame rate” doesn’t really make sense.

**The exceptions to this are usually leaning into the imperfections as a deliberate stylistic choice, like the stop-motion-derived look of the Lego movies.


The absolute necessity of onion skinning

ghosts.png

One of the most frustrating things about CG animation tools is that--proprietary tools I may be unaware of aside--basically no one has reasonable onion skinning. This is something the CG animation world as a whole has kind of brushed off, but I think it’s critical. We need onion skinning tools--good ones. And we don’t have them.

First, another note about nomenclature! I’ve encountered animators and TDs who have no idea what I’m talking about when I use the term “onion skinning,” which is understandable because it’s a weird phrase. It refers to the fact that the skin of an onion is semitransparent, but since no one has ever (to my knowledge) animated by drawing on onions, we should really call it “tracing paper.” Some people call it “ghosting,” although I generally avoid that term because it’s easy to confuse with Maya’s “ghosting,” which is kind of an attempt at onion skinning that isn't usable in many real-world contexts (although I do like the implication that whenever we scrub the timeline we are murdering character poses, leaving only their wailing ghosts behind until, as the play head advances, they too are snuffed out forever).

After using even halfway-useful onion skinning tools, animating pose-to-pose without onion skinning feels like animating blind, with one hand tied behind my back. You can’t see what other poses look like while you work--all you can do is flip back and forth between poses and rely on a “mental frame buffer” to give you a vague sense of what they looked like. It’s hard to overstate just how much harder this is than it has to be.

So much of what makes character animation look good comes not just from the poses the character assumes on screen, but also the shapes it describes as it moves. Great drawn animators are absolute masters of this. Take a look at this bit from The Jungle Book:

Kaa’s coils look good on any given frame, but they also describe a complex, interrelated set of interesting temporal arcs as they move and flow over each other. It’s fantastically complicated if you trace any given section, and yet it unifies into a coherent, meaningful performance when viewed. (I don’t know who animated it, but I’m guessing either Milt Kahl or Frank Thomas--people who are greater Nine Old Men geeks than I am can probably correct me on that.)

Or take a look at this bit of animation from Tarzan:

His motion describes a bunch of interesting arcs as he moves.

tarzan_arcs.png

These arcs don’t track any specific part of the character, but rather the shapes its silhouette describes on screen. This is a big problem with the motion trail method of visualizing character motion, which can only follow a single control or point. Sure, it’s a lot better than the graph editor (which tells you very little about the character’s arcs as they will be perceived by the audience), but it’s an incomplete way of visualizing character motion.

Here's another bit from The Jungle Book. I’ve tried to overlay what a motion trail tracking his hand would show, as if the drawings had a wrist control the way a CG character would.

The results are all kinds of weird and jittery, and the little hook-arc at the end doesn’t make sense as an arc at all. But the motion looks perfectly smooth when viewed. I think that’s because what your eyes are perceiving isn’t really the position of the “wrist joint,” it’s the overall shape of the hand and arm. If we think of the arc as being based on that shape, where the “point” it tracks can shift around based on what’s leading the motion and where the silhouette is, we end up with something more like this:

Being able to see superimposed poses gives you a much fuller picture of how your animation will actually be perceived than any other method, and it lets you make accurate judgements about arcs while you pose, instead of requiring constant scrubbing and mental gymnastics.

Onion skinning has become such an essential part of my workflow that animating without it seems like insanity, but of course that’s exactly how 99% of CG animation is done. That's not surprising--writing an effective onion skinning tool for Maya turns out to be pretty difficult, and I'm not aware of any CG animation package that has ever shipped with onion skinning as a core feature (Digital Fish's late, lamented Reflex would have, if it had ever been released). Brian Kendall wrote an onion skin tool for Maya at Anzovin Studio, and it was a godsend for my animation workflow, but it was still an incomplete solution. What it did was to hardware render a frame to disk whenever you altered a pose, then display those frames over your viewport when you changed to a different pose.

This approach has a serious flaw--since it’s displaying frames that were rendered while other poses were current, it can’t handle camera movement. Any time there was a significant camera move in The New Pioneers, I’d have to create a number of stationary cameras placed along the moving camera’s path and view the onion skins through those. That’s not an insurmountable problem, but it does make the workflow clunky.

The tool was written for the default viewport, and has since been retired as VP2 has become the standard for Maya. Christoph Lendenfeld has developed an open source onion skin tool that works on similar principles but takes advantage of VP2. However, it suffers from the same problems.

The central issue is that you need some way to store the other poses you wish to display as onion skins, and storing them as images has inherent downsides. In some ways, storing them as meshes makes a lot more sense, but presents other problems. Maya does not provide any way to render a mesh as a true overlay on the rest of the scene. Sure, you can make a mesh semitransparent, but doing so will reveal internal geometry and intersections, plus it will intersect with the rigged mesh itself--not very useful for onion skinning purposes.

One way around this is to write your own shape drawing in VP2, but that opens a can of worms I'd rather not deal with. There are also a variety of potential ways around it with VP2 shaders, though. Kostas Gialitakis, one of the few people around with a solid understanding of ShaderFX, made a shader for me that uses multiple render passes to generate toon outlines, which are then pushed toward the camera in Z-depth so that they render on top of everything else in the scene. This is what I’m using for onion skinning right now; it works very well and handles camera movement perfectly.

This also displays a more advanced version of the system overall, including switching between different rig modes. Note that when I edit a pose, I don't have to edit it on its first frame--this is a system that, at least in terms of the face it presents to the animator, is truly pose- rather than keyframe-centric, and you can edit a pose on any frame of its duration without creating a new key.

Here's some of the code that runs the onion skin portion of the system:

onionSkinPoseGetter.PNG

This function refers to a bunch of stuff outside its own scope, so it might be a bit confusing. I've been going back and forth on whether I should be posting little code snippets like this or going over the code for the whole system instead, but I think posting the snippets is still the right way to illustrate specific concepts, even though they're obviously embedded in a system about which they make certain assumptions. For instance, this function is a method of an object with an "MFnMesh" attribute (the character's mesh), a "watchAttr" attribute, a list of "onionMeshes," and other methods that save and restore poses, and it refers to a module called "keyingUtils" that includes the poseBeginEndFrame() function I showed a few posts ago.
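
Since a screenshot is hard to read out of context, here's a simplified, hypothetical sketch of the idea rather than the code in the screenshot--just the general shape of it, assuming the attributes and helpers described above (watchAttr, onionMeshes, the save/restore-pose methods, and keyingUtils.poseBeginEndFrame(), whose signature I'm guessing at here):

    import maya.cmds as cmds
    import keyingUtils  # my module containing poseBeginEndFrame(), as referenced above


    class OnionSkinUpdater(object):
        """Hypothetical skeleton. The real object also holds an MFnMesh for the
        character and real implementations of the stubbed methods below."""

        def __init__(self, watchAttr, onionMeshes):
            self.watchAttr = watchAttr      # keyed attribute whose keys define poses
            self.onionMeshes = onionMeshes  # the pre-made onion skin meshes

        def updateOnionSkins(self):
            currentFrame = cmds.currentTime(query=True)
            # Assumed signature: returns the first and last frame of the current pose.
            poseStart, poseEnd = keyingUtils.poseBeginEndFrame(self.watchAttr, currentFrame)

            self.savePose()  # stash the deform targets' current values

            # Copy the poses on either side of the current one onto onion meshes.
            neighborFrames = [poseStart - 1, poseEnd + 1]
            for onionMesh, frame in zip(self.onionMeshes, neighborFrames):
                self.setTargetsToPoseOnFrame(frame)  # move the deform rig to that pose
                self.copyMeshToOnion(onionMesh)      # om2 mesh-data swap (sketched below)

            self.restorePose()  # put everything back where it was

        # Stubs for pieces that live elsewhere in the real system:
        def savePose(self): pass
        def restorePose(self): pass
        def setTargetsToPoseOnFrame(self, frame): pass
        def copyMeshToOnion(self, onionMesh): pass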

An argument could be made that I should have structured this in a more functional style in any case, passing everything a function needs as arguments and avoiding mutating state except where absolutely necessary, i.e. in the system's connection to Maya. That would certainly have made it easier to review this code in little pieces like this, at any rate! Also, I'd like to come up with a better way to show code snippets on this blog and not use screenshots like an idiot. We're all just going to have to learn to live in this cruel, indifferent world.

In any case, I’m not actually saving a mesh for each pose here--that would quickly balloon the scene to an unreasonable size! Instead, I have four onion skin meshes already in the scene, and I simply swap in the correct mesh data using Open Maya 2. Because the deform rig targets--the ones the ephemeral rig is pushing its matrices to--are all in world-equivalent space, I can figure out what any given pose looks like without ever actually going to that frame, just by looking at the keyframes for each target. So I simply move the deformation rig to the position of the other pose, and then use Open Maya 2 to grab its mesh data.
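
The core of that mesh-data swap is just reading the deformed points off the character mesh and pushing them onto an onion skin mesh. A minimal om2 sketch, with hypothetical node names, looks something like this:

    import maya.api.OpenMaya as om2

    def copyMeshPoints(characterMeshName, onionMeshName):
        """Copy deformed vertex positions from the character mesh onto an onion
        skin mesh. Assumes both meshes share the same topology; the names passed
        in are made-up examples."""
        sel = om2.MSelectionList()
        sel.add(characterMeshName)
        sel.add(onionMeshName)

        characterPath = sel.getDagPath(0)
        onionPath = sel.getDagPath(1)
        # If we were handed transforms, step down to the mesh shapes.
        if not characterPath.hasFn(om2.MFn.kMesh):
            characterPath.extendToShape()
        if not onionPath.hasFn(om2.MFn.kMesh):
            onionPath.extendToShape()

        # World-space points of the character in the neighboring pose...
        points = om2.MFnMesh(characterPath).getPoints(om2.MSpace.kWorld)
        # ...pushed straight onto the onion skin mesh.
        om2.MFnMesh(onionPath).setPoints(points, om2.MSpace.kWorld)

    # e.g. copyMeshPoints("characterBody_geo", "onionMesh1")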

Right now there's a hitch just after you change frames as the onion skins are generated, but this doesn't seem to be caused by grabbing the mesh data--it actually seems to be pyMEL that's taking up the time setting attributes, since the setTargetsToPoseOnFrame() function used in this code is written in pyMEL. Once I rewrite that section with om2, it should happen so quickly that it is completely transparent to the user, so when you alight on a particular frame you simply get the onion skins you would expect, seemingly instantaneously.
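
To give a sense of what that rewrite means in practice, here's a sketch of setting a single attribute through an om2 MPlug instead of pyMEL--the node and attribute names are made up:

    import maya.api.OpenMaya as om2

    def setPlugValue(nodeName, attrName, value):
        """Set a float attribute via om2's MPlug, avoiding pyMEL's per-call
        overhead. The node and attribute names are hypothetical examples."""
        sel = om2.MSelectionList()
        sel.add(nodeName + "." + attrName)
        plug = sel.getPlug(0)
        plug.setDouble(value)

    # Roughly equivalent to pm.setAttr("spine_target1.translateX", 4.2):
    # setPlugValue("spine_target1", "translateX", 4.2)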

There is one significant issue with this approach though--the entire character must be one single mesh. Because VP2 shaders are only aware of the mesh they are currently rendering, characters composed of multiple meshes would reveal internal and overlapping geometry:

badOnionSkin.PNG

That’s fine for my purposes at the moment, since I can ensure that the characters I'm currently using the system with are made entirely of one mesh, but it isn’t a great long-term solution. In the future, I’m planning to use a combination of the shader with Christoph Lendenfeld’s techniques to create a truly comprehensive onion skinning solution.


The zBrush Analogy

sculpting.png

In discussing interpolationless animation techniques, an analogy I keep coming back to is zBrush and other sculpting apps vs subdivision and NURBs modeling. While subdivision surfaces almost completely took over from NURBs as the most common technique for DCC modeling in the early aughts for very good reasons, the two methods of modeling surfaces have a lot in common. They both create a surface out of a relatively small number of control points that the user can manipulate, and it is the job of the modeler to place these control points in the right relationship to create the desired surface.

At first blush this seems like an obvious good. Manipulating a surface from a limited number of points must be easier than dealing with a huge mess of polygons, right?

Nope!

It turns out that dealing with a whole bunch of dense data is frequently better--as long as you have the right tools to do it. Until zBrush came along, nothing did. But once it had a chance to refine its toolset and retopology became a common technique, the advantages were so tremendous that now zBrush is sometimes used for hard surface mechanical/vehicle modeling and even product design, areas where subdivision or NURBs modeling would have seemed like an obvious choice!

I think this shift suggests some fundamental ideas about the best ways to approach content creation. There is a tendency to assume that “non-destructive” or “procedural” methods will always be the more effective, creative technique, when in reality using them where they aren't appropriate can be crippling. For instance, digital painters frequently make use of layers and layer masks, a beneficial non-destructive workflow. But try telling a digital painter they have to make all their art by putting down Bezier control points to describe a brush stroke instead of using a Wacom to lay down pixels. Being infinitely tweakable in theory does not necessarily equal a better workflow in practice.

Any sort of non-destructive editing introduces an element of indirectness to content creation. Instead of editing a thing, you are editing a thing that makes the thing. Sometimes this is desirable. Bezier curves are frequently the right toolset for graphic/logo design because smooth and simple shapes with precisely defined curvature actually benefit from this indirectness. Tasks that require minute fine-tuning like compositing practically demand it.

There is an entirely different class of tasks, including much of painting, sculpting, and, I would argue, character animation technique, where indirectness can be disastrous. But there’s no zBrush for animation, no animation package built around manipulating dense animation data directly. The animation equivalent of subdivs/NURBs is all we have. That’s why the techniques I’m presenting here are currently only viable in certain stylistic contexts. Interpolationless animation is highly effective for the kind of cartoony, highly stylized animation I want to do. But it presents obvious problems if you’re doing more traditional, naturalistic CG!

In a future post, I’ll examine what an “animation zBrush” might look like.