The absolute necessity of onion skinning

ghosts.png

One of the most frustrating things about CG animation tools is that--proprietary tools I may be unaware of aside--basically no one has reasonable onion skinning. This is something the CG animation world as a whole has kind of brushed off, but I think it’s critical. We need onion skinning tools--good ones. And we don’t have them.

First, another note about nomenclature! I’ve encountered animators and TDs who have no idea what I’m talking about when I use the term “onion skinning,” which is understandable because it’s a weird phrase. It refers to the fact that the skin of an onion is semitransparent, but since no one has ever (to my knowledge) animated by drawing on onions, we should really call it “tracing paper.” Some people call it “ghosting,” although I generally avoid that because it’s easy to confuse with Maya’s “ghosting,” which is kind of an attempt at onion skinning that isn't usable in many real-world contexts (although I do like the implication that whenever we scrub the timeline we are murdering character poses, leaving only their wailing ghosts behind until, as the play head advances, they too are snuffed out forever).

After using even halfway-useful onion skinning tools, animating pose-to-pose without onion skinning feels like animating blind, with one hand tied behind my back. You can’t see what other poses look like while you work--all you can do is flip back and forth between poses and rely on a “mental frame buffer” to give you a vague sense of what they looked like. It’s hard to overstate just how much harder this is than it has to be.

So much of what makes character animation look good comes not just from the poses the character assumes on screen, but also the shapes it describes as it moves. Great drawn animators are absolute masters of this. Take a look at this bit from The Jungle Book:

Kaa’s coils look good on any given frame, but also describe a complex, interrelated set of interesting temporal arcs as they move and flow over each other. It’s fantastically complicated if you trace any given section, and yet unifies into a coherent, meaningful performance when viewed. (I don’t know who animated it, but I’m guessing either Milt Kahl or Frank Thomas--people who are greater Nine Old Men geeks than I am can probably correct me about that).

Or take a look at this bit of animation from Tarzan:

His motion describes a bunch of interesting arcs as he moves.

tarzan_arcs.png

These arcs don’t track any specific part of the character, but rather follow the shapes it makes on screen. This is a big problem with the motion trail method of visualizing character motion. Sure, it’s a lot better than the graph editor (which tells you very little about the character’s arcs as they will be perceived by the audience), but it’s an incomplete way of visualizing character motion.

Here's another bit from The Jungle Book. I’ve tried to overlay what a motion trail tracking his hand would show, as if the drawings had a wrist control the way a CG character would.

The results are all kinds of weird and jittery, and the little hook-arc at the end doesn’t make sense as an arc at all. But the motion looks perfectly smooth when viewed. I think that’s because what your eyes are perceiving isn’t really the position of the “wrist joint,” it’s the overall shape of the hand and arm. If we think of the arc as being based on that shape, where the “point” it tracks can shift around based on what’s leading the motion and where the silhouette is, we end up with something more like this:

Being able to see superimposed poses gives you a much fuller picture of how your animation will actually be perceived than any other method, and it lets you make accurate judgements about arcs while you pose, instead of requiring constant scrubbing and mental gymnastics.

Onion skinning has become such an essential part of my workflow that animating without it seems like insanity, but of course that’s exactly how 99% of CG animation is done. That's not surprising--writing an effective onion skinning tool for Maya turns out to be pretty difficult, and I'm not aware of any CG animation package that has ever been released with onion skinning as a core feature (Digital Fish's late, lamented Reflex would have had it, if it had ever been released). Brian Kendall wrote an onion skin tool for Maya at Anzovin Studio, and it was a godsend for my animation workflow, but it was still an incomplete solution. What it did was to hardware render a frame to disk whenever you altered a pose, then display those frames over your viewport when you changed to a different pose.

This approach has a serious flaw--since it’s displaying frames rendered on other poses, it can’t handle camera movement. Any time there was a significant camera move in The New Pioneers, I’d have to create a number of non-moving cameras along the path of the camera move to see onion skins from. That’s not an insurmountable problem, but it does make the workflow clunky.

The tool was written for the default viewport, and has since been retired as VP2 has become the standard for Maya. Christoph Lendenfeld has developed an open source onion skin tool that works on similar principles, but takes advantage of VP2. However, it also suffers from the same problems.

The central issue is that you need some way to store the other poses you wish to display as onion skins, and storing them as images has inherent downsides. In some ways, storing them as meshes makes a lot more sense, but presents other problems. Maya does not provide any way to render a mesh as a true overlay on the rest of the scene. Sure, you can make a mesh semitransparent, but doing so will reveal internal geometry and intersections, plus it will intersect with the rigged mesh itself--not very useful for onion skinning purposes.

One way around this is to write your own shape drawing in VP2, but that opens a can of worms I'd rather not deal with. There are also a variety of potential ways around this with shaders in VP2, though. Kostas Gialitakis, one of the few people around with a solid understanding of ShaderFX, made a shader for me that uses multiple render passes to generate toon outlines that are then pushed up to the camera in Z-depth so that they render on top of everything else in the scene. This is what I’m using to do onion skinning right now, and it works very well and handles camera movement perfectly.

This also displays a more advanced version of the system overall, including switching between different rig modes. Note that when I edit a pose, I don't have to edit it on its first frame--this is a system that, at least in terms of the face it presents to the animator, is truly pose- rather than keyframe-centric, and you can edit a pose on any of the frames of its duration without creating a new key.

Here's some of the code that runs the onion skin portion of the system:

onionSkinPoseGetter.PNG

This function refers to a bunch of stuff outside its own scope, so it might be a bit confusing. I've been going back and forth on whether I should be posting little code snippets like this, or going over the code for the whole system instead, but I think posting the snippets is still the right way to illustrate specific concepts, even though they're obviously embedded in a system about which they make certain assumptions. For instance, this function is a method of an object with an "MFnMesh" attribute (the character's mesh), a "watchAttr" attribute, a list of "onionMeshes," other methods that save and restore poses, and access to a module called "keyingUtils" that includes the poseBeginEndFrame() function I showed a few posts ago.

An argument could be made that I should have structured this in a more functional style in any case, passing everything a function needs as arguments and avoiding mutating state except when absolutely necessary, i.e. in the system's connection to Maya. That would certainly have made it easier to review this code in little pieces like this, at any rate! Also, I'd like to come up with a better way to show code snippets on this blog and not use screenshots like an idiot. We're all just going to have to learn to live in this cruel, indifferent world.

In any case, I’m not actually saving a mesh for each pose here--that would quickly balloon the scene to an unreasonable size! Instead, I have four onion skin meshes already in the scene, and I simply swap in the correct mesh data using Open Maya 2. Because the deform rig targets--the ones the ephemeral rig is pushing its matrices to--are all in world-equivalent space, I can figure out what any given pose looks like without ever actually going to that frame just by looking at the keyframes for each target. So I simply swap the deformation rig to the position of the other pose, and then use Open Maya 2 to grab its mesh data.
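
As a simplified sketch of what that mesh swap can look like (placeholder names, not the actual code from the system, and it assumes the character mesh and the onion skin mesh share identical topology):

import maya.api.OpenMaya as om2

def copyMeshToOnionSkin(characterMeshName, onionMeshName):
    # Look up DAG paths for both shapes so we can work in world space.
    sel = om2.MSelectionList()
    sel.add(characterMeshName)
    sel.add(onionMeshName)
    characterFn = om2.MFnMesh(sel.getDagPath(0))
    onionFn = om2.MFnMesh(sel.getDagPath(1))

    # Grab the deformed point positions and push them straight onto the
    # pre-existing onion skin mesh--no new meshes are created or stored.
    points = characterFn.getPoints(om2.MSpace.kWorld)
    onionFn.setPoints(points, om2.MSpace.kWorld)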

Right now there's a hitch just after you change frames as the onion skins are generated, but this doesn't seem to be caused by grabbing the mesh data--it actually seems to be pyMEL that's taking up the time setting attributes, since the setTargetsToPoseOnFrame() function used in this code is written in pyMEL. Once I rewrite that section with om2, it should happen so quickly that it is completely transparent to the user, so when you alight on a particular frame you simply get the onion skins you would expect, seemingly instantaneously.

There is one significant issue with this approach though--the entire character must be one single mesh. Because VP2 shaders are only aware of the mesh they are currently rendering, characters composed of multiple meshes would reveal internal and overlapping geometry:

badOnionSkin.PNG

That’s fine for my purposes at the moment, since I can ensure that the characters I'm currently using the system with are made entirely of one mesh, but it isn’t a great long-term solution. In the future, I’m planning to use a combination of the shader with Christoph Lendenfeld’s techniques to create a truly comprehensive onion skinning solution.

 

The zBrush Analogy

sculpting.png

In discussing interpolationless animation techniques, an analogy I keep coming back to is zBrush and other sculpting apps vs subdivision and NURBs modeling. While subdivision surfaces almost completely took over from NURBs as the most common technique for DCC modeling in the early aughts for very good reasons, the two methods of modeling surfaces have a lot in common. They both create a surface out of a relatively small number of control points that the user can manipulate, and it is the job of the modeler to place these control points in the right relationship to create the desired surface.

At first blush this seems like an obvious good. Manipulating a surface from a limited number of points must be easier than dealing with a huge mess of polygons, right?

Nope!

It turns out that dealing with a whole bunch of dense data is frequently better--as long as you have the right tools to do it. Until zBrush came along, nothing did. But once it had a chance to refine its toolset and retopology became a common technique, the advantages were so tremendous that now zBrush is sometimes used for hard surface mechanical/vehicle modeling and even product design, areas where subdivision or NURBs modeling would have seemed like an obvious choice!

I think this shift suggests some fundamental ideas about the best ways to approach content creation. There is a tendency to assume that “non-destructive” or “procedural” methods will always be the more effective, creative technique, when in reality using them when they are not appropriate can be crippling. For instance, digital painters frequently make use of layers and layer masks, a beneficial non-destructive workflow. But try telling a digital painter they have to make all their art by putting down Bezier control points to describe a brush stroke instead of using a Wacom to lay down pixels. Being infinitely tweakable in theory does not necessarily equal a better workflow in practice.

Any sort of non-destructive editing introduces an element of indirectness to content creation. Instead of editing a thing, you are editing a thing that makes the thing. Sometimes this is desirable. Bezier curves are frequently the right toolset for graphic/logo design because smooth and simple shapes with precisely defined curvature actually benefit from this indirectness. Tasks that require minute fine-tuning like compositing practically demand it.

There is an entirely different class of tasks, including much of painting, sculpting, and, I would argue, character animation technique, where indirectness can be disastrous. But there’s no zBrush for animation, no animation package built around manipulating dense animation data directly. The animation equivalent of subdivs/NURBs is all we have. That’s why the techniques I’m presenting here are currently only viable in certain stylistic contexts. Interpolationless animation is highly effective for the kind of cartoony, highly stylized animation I want to do. But it presents obvious problems if you’re doing more traditional, naturalistic CG!

In a future post, I’ll examine what an “animation zBrush” might look like.

Hacking the Maya Animation System

threeKeyframesInATrechcoat.png

The previous post focused on the “interact” mode of the ephemeral rig, and how node callbacks are used to create the ephemeral rig behavior. This post is about the “playback” mode, and how the system switches back and forth between the modes.

While the user is scrubbing or playing back, playback mode literally does nothing. The deformation rig (which will not normally be manipulated by the user) has stepped keyframes, and those keyframes drive its motion exactly as one would expect. In an ideal world, I’d use an animation system based on poses through time, rather than conventional keyframes, but while implementing that in Maya would probably be possible, it would be a pretty significant overhaul of the way Maya looks at animation, probably with unintended consequences for the animation workflow. So while the ephemeral rig presents to the animator as if it were simply poses through time, on the deformation rig keyframes do in fact exist--they are simply never set by the animator.

What playback mode does need to do is to recognize when the user has stopped on a given frame, so that it can conform the control rig (which is in whatever state the user left it in, probably not the same pose as the current frame) to the deformation rig before turning interact mode back on. To do that I ended up using Maya’s other way of triggering arbitrary code based on something changing in the scene, the rather justifiably maligned “scriptJob.”

ScriptJobs are...well, they’re very MEL. For instance, while you can pass a function to them to execute just like you can with callbacks, the scriptJob doesn’t let you pass any data through to the function. This necessitates using a function that doesn’t need to receive any arguments to know what to operate on, which is irritating and complicates the code.
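
If you do need to smuggle data through to the function, one option is to bake it into the callable ahead of time with a closure or functools.partial--just a sketch of the idea, with a hypothetical node name:

import functools
import pymel.core as pm

def onWatchAttrChange(watchAttr):
    # Whatever playback mode needs to do with the attribute it's watching.
    print(watchAttr.name() + " changed")

# Hypothetical: the watched attribute lives on some character node.
watchAttr = pm.PyNode("character1").attr("ephCurrentPose")

# scriptJob won't forward arguments to its callable, but functools.partial
# bakes them in ahead of time.
jobNum = pm.scriptJob(attributeChange=[watchAttr.name(),
                                       functools.partial(onWatchAttrChange, watchAttr)])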

Another common problem with scriptJobs is that they don’t fire while the Maya scene is evaluating--they fire the next time the system is idle. So you could never use them for the kind of thing I’m using callbacks for here--if you did, the deformation rig transform would only update AFTER you’d moved the control rig around and released the mouse.

In this case though, that turns out to be a hidden advantage. For playback mode, something that fires after the user does something (ie. changes the current time) but not while they are doing so is precisely what I needed. ScriptJobs are slow compared to callbacks, but they’re still fast enough that the scriptJob can fire after the user releases the mouse, conform the control rig to the deformation rig, and switch interact mode back on before the user can click on anything else.

After consulting with Brian Kendall, I ended up deciding to use a system where an attribute in the scene defines what the current pose is for the purposes of the ephemeral rig system. This attribute is step-keyed right along with the deformation rig, and on the pose that the animator is currently manipulating its value is 1. On all other poses its value is zero.

The ephCurrentPose attribute is always 1 on the current pose, and 0 on all others. When the value drops to 0 because you scrubbed past the current pose, it triggers a scriptJob that sets the new current pose to 1 after you release the mouse.

As you may recall from the previous post, each callback checks a plug called “watchPlug” to see if it should do the ephemeral matching:

def callbackFunc(msg, node, data):
    sourceMatrixPlug, targetPlugs, watchPlug, activePlug = data
    if watchPlug.asFloat() == 1 and activePlug.asFloat() == 1:
        matchUsingTransformMatrix(sourceMatrixPlug, targetPlugs)

When the value isn’t 1, the callback function does nothing, effectively disabling interaction mode. But this attribute is also being watched by a scriptJob, and when the attribute changes value--because the user has changed the current frame--it fires, causing the control rig matchback and resetting all the watchAttr’s keyframes to 0 except for the one associated with the current pose. Now we’re back to where we started, just on a different pose.

Here’s the code that creates the scriptJob:

# attributeChange expects the attribute name plus the callable; watchAttr here
# stands for the ephCurrentPose attribute described above
pa.scriptJob(compressUndo=True, attributeChange=[watchAttr.name(), onWatchAttrChange])

This calls a function that performs a "matchback" from the target to the control of each ephControl, simply setting the control to the transformation of the target using pyMEL:

def matchBack(self):
    self.ephControl.setRotation(self.target.getRotation(), space='world')
    self.ephControl.setTranslation(self.target.getTranslation(), space='world')
    self.ephControl.scale.set(self.target.scale.get())

And then resets the ephCurrentPose attribute so that the pose the timeline is now on becomes the current pose, thereby turning interact mode back on. To do that, I first need a way to recognize the extents of a pose from the keyframe information I have:

poseBeginEndCode.PNG

The formatting of my blog completely mangled this code, so I did a screenshot from Sublime Text instead--not the best way to share a code example, I know! Basically it first figures out if you are or are not on the first frame of a pose, then uses findKeyframe to get you the first and last frame. Then:

setCurrentPose.PNG
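
In rough form, the logic in those two screenshots amounts to something like this--a simplified sketch of what's described above, not the exact code in the screenshots:

import pymel.core as pm

def poseBeginEndFrame(animatedNode, frame):
    # If a key sits exactly on this frame, the pose starts here;
    # otherwise it started at the previous key.
    if pm.keyframe(animatedNode, query=True, time=(frame, frame), keyframeCount=True):
        begin = frame
    else:
        begin = pm.findKeyframe(animatedNode, time=(frame, frame), which='previous')
    # The pose runs until the frame before the next key.
    end = pm.findKeyframe(animatedNode, time=(frame, frame), which='next') - 1
    return begin, end

def setCurrentPose(watchAttr, frame):
    # Zero the watch attribute on every pose, then key it to 1 on the
    # first frame of the pose the timeline is currently sitting on.
    pm.keyframe(watchAttr, edit=True, absolute=True, valueChange=0)
    begin, end = poseBeginEndFrame(watchAttr, frame)
    pm.setKeyframe(watchAttr, time=begin, value=1)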

One advantage to watching an attribute, as opposed to having the scriptJob fire on frame change itself, is that anything that changes the timing of the system’s keyframes still functions. It’s possible that I may want scripts and tools that change keyframes, and potentially shift a different pose onto the current frame, without changing the current time. This system behaves correctly whether or not the current time has been changed--the only relevant question is whether or not the pose on the current frame is the previous current pose, or a different pose that requires a control rig conform and switch back to interact mode.

Of course, this only works if you have a way to bundle up all the keys associated with the deformation rig and the watchAttr, and ensure that any operations you do to the keys affect all of them at once, since, despite being keys, they're supposed to represent poses. Luckily, we do have such a method, though--just like scriptJobs, it's apt to my purpose but also weird and irritating: character sets.

Character sets are an outgrowth of the Trax editor. The Trax editor used to be Maya’s nonlinear animation tool. An ancient order of TDs sealed the Trax editor behind the Windows/Animation Editors menu, there to remain hidden for all time. Opening it will unleash its horror yet again upon an unsuspecting world, so you should probably not do that.

Character sets were the Trax editor's way of interacting with keyframes in Maya. They insert themselves between an animCurve node and the attribute it drives, with the intent to do a bunch of Trax editor-related stuff that I don't care about. What I do care about is that they allow you to have the timeline display the keys of an arbitrary set of attributes, instead of anything related to your selection. All keyframe operations--making keys, copying them, moving them around, etc.--happen to these attributes at once. In other words, they let us treat a whole bunch of unrelated keys as if they were indeed one pose.
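
Creating one is simple enough--a minimal sketch, with placeholder node names standing in for the actual deformation rig targets:

import pymel.core as pm

# Hypothetical: gather the deformation rig targets (which carry the pose keys)
# plus the node holding the ephCurrentPose attribute.
poseNodes = pm.ls('*_target', type='transform') + [pm.PyNode('character1')]

# One character set means the timeline shows all of these keys together, and
# copying, moving, or deleting keys affects the whole pose at once.
poseSet = pm.character(poseNodes, name='ephemeralPoseSet')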

A character set gracelessly inserts itself between a transform node and its keyframes.

I wish there were a way to do this that wasn't character sets, but I haven't found another method yet, short of writing my own timeline. After I get the ephemeral rig system rock solid, I'm thinking about trying to do my own timeline in Qt, and seeing if that's a reasonable thing to do. There are so many features--character-based tracks, markers, regular beat markers, the list goes on--that I'd like to have in a Maya timeline, and don't. But I guess that's going to have to wait a while.

Nuts and Bolts

robot.png

I really meant to have an initial version of the ephemeral rig-aware breakdown tool this week, but apparently life had other ideas! So instead we’re going to investigate how node callbacks work.

This post has code snippets, and anyone who finds them useful should feel free to swipe them for their own work. However, I am not a developer, so I make no claims as to the actual quality of this code. Basically, use it at your own risk.

First a couple of foundational ideas behind this particular ephemeral system. It actually uses two rigs, a deformation rig and a control rig. The deformation rig has keyframes but is never manipulated by the animator. The control rig can be manipulated but has no keyframes. Every control in the control rig has a precisely corresponding transform in the deformation rig.

Here you can see a deformation rig target (the locator) being synced up to a control ephemerally. The target transform has keys (they appear yellow in the channel box because I am using a character set) but no other incoming connections.

It also has, for lack of a better word, two “modes.” In “interaction” mode the control rig takes control of the deformation rig using node callbacks and the API. This mode allows the user to control the deformation rig ephemerally by manipulating the control rig. “Play” mode is active during playback, scrubbing, or at any other time the user is interacting with the timeline and changing the current frame. In play mode the control rig has no effect--very important, since it has no keys and therefore no animation! Instead, the deformation rig is allowed to play back normally, and then when the user stops changing the current frame the control rig is conformed to the current state of the deformation rig and interaction mode is reactivated.

A bunch of the complexity in the ephemeral system comes from the need to switch modes smoothly and automatically, so that the animator never needs to notice or care about them. But the two modes themselves are not actually that complicated. In this post we’ll look at how interaction mode works.

Dealing with node callbacks means we will need to get into the Maya API, something I hadn’t done personally before I began building this system, but that’s a lot less daunting than it used to be. OpenMaya 2 means you can code for the API with Python in a way that’s performant enough for this purpose, and basically treat it as just another way to script. Like many TDs of the old school, I’m not a real developer, and I have no experience with C++ or compilers, so this is pretty useful!

The API often requires you to jump through a bunch of hoops, frequently by creating a bunch of additional objects, in order to do anything. I tried to wrap this up as much as possible. For instance, here’s a function that gets an MObject for a node from the node’s name, and then one that gets a given plug from an MObject.

import maya.api.OpenMaya as om2

def getMObj(name):
    tempList = om2.MSelectionList()
    tempList.add(str(name))
    return tempList.getDependNode(0)

def getPlug(mObj, plugName):
    mfnDep = om2.MFnDependencyNode(mObj)
    return mfnDep.findPlug(plugName, False)

For those like me who come from a purely pyMEL/cmds module background, MObjects are objects that point to and manipulate Maya nodes. And for our purposes at least, plugs are basically synonymous with attributes. So if I wanted to, for instance, get the value of a float attribute through the API, I could do this:

attrValue = getPlug(getMObj("nameOfObject"), "nameOfAttribute").asFloat()

Which isn’t really all that much more complicated than the pyMEL...

pm.PyNode("nameOfObject").nameOfAttribute.get()

...that I might have used otherwise.

The biggest problem we'll face in understanding node callbacks is a lack of documentation. The reference docs for the Maya API are fine--if you want to find out what kind of methods are available on an MNodeMessage object, that's easy enough. But there's very little out there explaining how you'd actually use the object.

Most of what I know about the use of node callbacks I got from watching Cult of Rig, Raffaele Fragapane's rigging stream. Cult of Rig is worth watching for a lot of reasons; Raff (he has two fs, I have one) is really thinking about rigging in a much more structured and well-constructed way than most riggers are. But the most relevant point for me is that he actually uses node callbacks in a real-world situation, and explains why he is doing so.

The API lets you attach a node callback to a node in Maya. After that, whenever a specific event occurs--an attribute of the node changes, or the node is dirtied, for example--it will fire the callback. You can attach a function of your own to the callback, which it will run whenever it fires.

def createEphCallback(node, data):
    om2.MNodeMessage.addNodeDirtyPlugCallback(node, callbackFunc, data)

This creates a callback that fires whenever the node is dirtied, which includes when nodes further up the DAG hierarchy are manipulated--very important since we want the callback to fire no matter what the node is parented to. The data argument is whatever the callback function needs to receive to do whatever it does. The callback will automatically pass three arguments to the callback function--the data argument will be the third--and you need to write it to receive those arguments. You don’t necessarily need to do anything with the first two though.

def callbackFunc(msg, node, data):
    sourceMatrixPlug, targetPlugs, watchPlug, activePlug = data
    if watchPlug.asFloat() == 1 and activePlug.asFloat() == 1:
        matchUsingTransformMatrix(sourceMatrixPlug, targetPlugs)

Here, the data argument is a list that contains all the data I want the function to have--basically it’s what I would pass to the function as arguments if I was calling it normally. Since the callback will only pass one custom argument to the function, here I give it one list of all the arguments I want it to have and turn it back into individual variables on the other side.

Two of these are plugs, the value of which the function checks to see if it should do the ephemeral rig matching at all--this will be very important when we discuss the different modes the system works under. If the answer is yes, it uses the matchUsingTransformMatrix function to match the world space translate, rotate, and scale of the target object (the deformation rig node being controlled) to the world matrix of the source object (the corresponding control being manipulated by the animator). This matching is ephemeral because the function simply sets the plugs on the destination node to do this, without creating any connections in the node graph.

To get the appropriate data to pass into the callback function, I get the world matrix plug from an MObject:

def getTransformMatrixPlug(mObj):
    mfnDep = om2.MFnDependencyNode(mObj)
    return mfnDep.findPlug('worldMatrix', False).elementByLogicalIndex(0)

This matrix can be decomposed into translate, rotate, and scale values (going through a bunch of other API objects in the meantime):

def decomposeMatrix(matrixObj):
    mMatrix = om2.MFnMatrixData(matrixObj.asMObject()).matrix()
    transformMatrixObj = om2.MTransformationMatrix(mMatrix)
    translation = transformMatrixObj.translation(om2.MSpace.kWorld)
    radianRot = transformMatrixObj.rotation()
    scale = transformMatrixObj.scale(om2.MSpace.kWorld)
    return [[translation.x, translation.y, translation.z],
            [radianRot.x, radianRot.y, radianRot.z],
            scale]

Naturally, since it’s the control object’s world matrix, the TRS values will be in world space. For the moment that’s fine--at some point I will insert an additional step here to multiply this matrix by something first so that the whole system can be relative to something other than world space, but for the test I’m doing at the moment that’s not necessary.
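
If I did want that extra step, it might look something like this--a sketch of the idea, not something the system does yet--multiplying the control's world matrix by a reference node's worldInverseMatrix so the match happens in that node's space:

def getRelativeMatrix(controlWorldMatrixPlug, refWorldInverseMatrixPlug):
    # Both plugs hold matrix data; multiplying by the reference node's
    # worldInverseMatrix re-expresses the control's matrix in that space.
    controlMatrix = om2.MFnMatrixData(controlWorldMatrixPlug.asMObject()).matrix()
    refInverse = om2.MFnMatrixData(refWorldInverseMatrixPlug.asMObject()).matrix()
    return controlMatrix * refInverse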

I also need to be able to get, from the target object, the plugs to put these values on:

def getTransformPlugs(mObj):
    mfnDep = om2.MFnDependencyNode(mObj)
    transformAttrs = [['tx','ty','tz'], ['rx', 'ry', 'rz'], ['sx', 'sy', 'sz']]
    return [[getPlug(mObj, attrName) for attrName in listOfAttrs] for listOfAttrs in transformAttrs]

Note that it returns the plugs as a list of lists. This will be important in a moment when I set the plugs.

The callback function passes the destination plugs and the source matrix plug to matchUsingTransformMatrix, which then decomposes the matrix and sets the destination plugs:

def matchUsingTransformMatrix(sourceMatrixPlug, targetPlugs):
    sourceVals = decomposeMatrix(sourceMatrixPlug)
    for sourceXYZVals, targetXYZPlugs in zip(sourceVals, targetPlugs):
        for sourceVal, targetPlug in zip(sourceXYZVals, targetXYZPlugs):
            targetPlug.setFloat(sourceVal)

It uses nested for loops like that because what decomposeMatrix returns is a list of three lists (translate, rotate, scale), each list containing three values (x, y, z). The getTransformPlugs function returns the plugs in the same format so that they can be zipped together cleanly. (I’m pretty sure there’s a more elegant way to do this than using a nested for loop, but I couldn’t be bothered to figure one out when I wrote this.)
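
For what it's worth, one such alternative--equivalent in behavior, just flattening both structures into a single loop--might look like this:

from itertools import chain

def matchUsingTransformMatrixFlat(sourceMatrixPlug, targetPlugs):
    # Flatten the [[t], [r], [s]] structure from decomposeMatrix and the
    # matching list-of-lists of plugs, then set everything in one pass.
    sourceVals = chain.from_iterable(decomposeMatrix(sourceMatrixPlug))
    flatPlugs = chain.from_iterable(targetPlugs)
    for sourceVal, targetPlug in zip(sourceVals, flatPlugs):
        targetPlug.setFloat(sourceVal)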

Last but definitely not least, you need a way to kill the callbacks, because if you don’t this will happen:

brooms.jpg

Luckily it is really easy to find out what callbacks you have on a given node and destroy them.

def killCallbacks(mObj):
    for cb in om2.MMessage.nodeCallbacks(mObj):
        om2.MMessage.removeCallback(cb)

Take that!

And fundamentally, that’s how interaction mode works.

How do we inbetween?

Variable frame rate lets us animate more like a speed painter, suggesting rather than creating detail.

When I talk about interpolationless animation, the first question I usually get is something along the lines of, "My God, you're not posing every single frame, are you?"

To answer that, I’m going to bring up an animation example I did with a slightly older system. This system is the same one used on The New Pioneers, which is maybe half-way to the fully ephemeral and interpolationless workflow I envision.

This was a test Chris Perry (the director of the New Pioneers piece) and I did for Vintata Animation Studio, using background art by Jeet Dzung & Ta Lan Hanh. I also produced an animation demo, showing the animation process that I was using at that time sped up ten times.

Yes, I’m posing every frame--but with caveats. I have the breakdown tool to help me make in-between poses quickly, though it doesn’t work well for all poses yet (that’s a big goal for the current ephemeral rig system). But I’m also leaning on the NPR-rendered look of the shot to let me animate using a combination of 1s, 2s, and 3s, as a traditional animator would.

This is hugely advantageous for a variety of reasons. Obviously, it lets me make fewer poses! But it also changes the way the audience perceives the motion.

A great deal of the time put into CG animation goes into “polish,” the point at which the overall performance is set and the animator goes in to remove little pops, smooth out discontinuities of motion, and add little overlaps and weight shifts. For most fully-rendered CG this is a necessity--not doing it results in motion that pokes you in the eye with something awkward right when you need the audience to pay attention to the character’s emotions. And yet drawn animation doesn’t have the same problem--lots of great drawn animation has pops and wobbles that would stand out like a sore thumb in CG, and yet look perfectly fine in the drawn context. A drawn animator can hold a frame. A CG animator has to do a moving hold just to prevent the character from dying on screen.

I’ve come to the conclusion that this is a combination of a more graphic, less specific look--line art being about as graphic as you can get!--and the use of what I’m calling “variable frame rate,” ie. the mix of 1s, 2s, and 3s commonly used by drawn animators. I think that the key here is supplying the audience’s eye with enough information to perceive the important parts of the shot, but withholding enough information to allow the audience’s mind to fill in the detail that isn’t important. It’s like a painter suggesting detail with a few brush strokes instead of painstaking photorealism--frequently the more painterly approach will actually be more beautiful than strict realism, but requires far less time and iteration (but potentially greater skill to pull off well).

That said, variable frame rate isn’t always inconsistent with fully-rendered CG: Blue Sky’s Peanuts movie has a stylistic context that allows for both!

To some, this may make the interpolationless approach seem very niche--naturally, most CG productions are not going to use the combination of NPR rendering and variable frame rate I’m using here! But I’m not convinced of this--I think that this is a consequence of the way CG tools have developed, and that an interpolationless workflow would actually be very effective for a wide variety of animated productions with the right tools. And while the current workflow would not be suitable for full-frame-rate, very nuanced work, I’ve found it entirely suitable for full-frame-rate action shots in this stylized context. Consider this shot from New Pioneers, done using full frame rate to accommodate the sweeping camera move.

I’m also using a “free” or “broken” rig, with only partial ephemeral features. Free rigging dates back to the Disney production Chicken Little in the early 2000s. The idea is that most controls just live in world space, and the animator is expected to place them wherever is appropriate for their pose directly. If you want an elbow to be somewhere, put it there. Don’t expect IK to do it for you!

I’m a big fan of free rigging, and I think it’s frequently the perfect manipulation method for the cartoony or semi-cartoony animation I want to do. Watching the video above sped up, you can see how my workflow feels more like sculpting the pose into place than adjusting an armature or puppet. But it does come at a cost. As you might expect, if you have to place everything where you want it every time, posing becomes much slower.

To combat this, Tagore Smith and Brian Kendall developed a “phantom manipulator” plugin for me, essentially an earlier implementation of ephemeral rigging in the form of Maya manipulators. You can see me using it several times in the video, where I grab several world-space controllers and rotate them as one. It made things a lot faster, but having the ephemeral behavior be part of a manipulator turned out to limit it. What if you wanted that behavior, but in the context of an entirely different manipulator--say, having the arms react with IK when rotating the torso? Building a manipulator-based ephemeral rigging system that could react in all the different ways an animator might want was difficult to plan for, which is why I’ve moved to the callback-based system I’m developing now.

It also has implications for the way the ephemeral rigging interacts with the breakdown tool. Here it doesn't, and all interpolation within the tool is done linearly in whatever space the control is in, which is frequently world space. Using the breakdown tool for bigger, wilder movements would tend to produce mangled breakdowns. The new system, on the other hand, will let you configure the rig however you want and will then use that configuration to make breakdowns. That's actually the part of the system I'm working on right now, and hopefully I'll have a new video to show of the new behavior next week.
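
To make "linear interpolation in whatever space the control is in" concrete, here's a toy sketch, with poses represented as plain dicts of per-control values purely for illustration:

def linearBreakdown(poseA, poseB, bias=0.5):
    # A breakdown pose made by linearly interpolating each control's values
    # in whatever space that control happens to live in.
    return {ctrl: [a + (b - a) * bias for a, b in zip(poseA[ctrl], poseB[ctrl])]
            for ctrl in poseA}

# A control halfway between two poses:
newPose = linearBreakdown({'hand_ctrl': [0, 0, 0]}, {'hand_ctrl': [10, 2, 0]}, bias=0.5)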

A brief comment on blocking and terminology

Before I go on, I wanted to drop a note about terminology here. I've used the terms "closely blocked" and "blocking plus" to describe a style of working that is antecedent to the approach I am proposing (or maybe that I will get around to proposing, as it seems like it is taking me many posts to describe the big picture!). To many animators, these concepts will be very familiar. But to some they'll be relatively new, and I've gotten the impression that not everyone knows what I'm talking about.

"Blocking" or "step-blocking" or "pose to pose" is the practice of creating key poses to "block out" the motion before turning it to spline interpolation, and then probably messing with it a whole bunch in the graph editor. It was introduced as a method of imposing some kind of structure on a shot from the beginning. If you started with splines from the get-go, you'd end up with something that needed a whole lot of iteration before it even got to the point where you could judge if it was working or not, which is a great way to animate yourself into a corner, or a padded cell.

(Not everyone necessarily agrees with me about this though--see comments to the previous post for a diametrically opposed point of view!).

So you'd first create a few step-keyed poses for the shot, and you could show it to the director and you'd be able to get some idea of whether you were going in the right direction. The problem is what happens next: you'd set all your keys to spline, and suddenly your nice, crisp blocking would turn into a horrific mess, which you then had to grovel through in the graph editor to fix into a presentable shot, a soul-killing process that turned bright-eyed young animators into hollowed-out shells. There were a few attempts made to come up with a systematic way to handle this problem, but they mostly resulted in motion that was pretty mechanical.

"Blocking plus," or "close blocking," or "pose and breakdown," or "really there isn't any agreed upon terminology" solves this issue by extending the blocking concept much further, allowing the animator to approach the shot as a series of step-keyed poses right up until final tweaking. Potentially you go down to using a pose on every other frame to every third frame, or even every frame for very fast motion. Creating so many poses sounds time-consuming, but if you have a breakdown tool it's actually a fairly fast process. If you have an onion skin tool it's even easier.

This allows you to do two important things. One is that you can watch something with a step-keyed pose on every other frame or so and actually understand the shot. It's not just the character popping between poses with no real idea of what the connective tissue will be: you can look at the shot and pretty much see the motion, and whoever has the authority in a given production can make effective judgements about it before it becomes difficult to edit.

The other is that, by nailing down the motion very closely before it's splined, you prevent interpolation from doing all that much violence to it. Now you can use the graph editor to tweak and finesse in a focused way, instead of trying to figure out how to get from pose A to pose B control by control. Blocking plus is, you might say, proto-interpolationless.

For a fascinating view of an animator becoming disenchanted with old pose to pose methods and discovering blocking plus, compare these two articles by Keith Lango. The second one was instrumental in my own discovery of blocking plus technique back in 2006.

What makes a rig "ephemeral?"

When there's danger in Node Graph City, the NGPD shines a node callback into the sky. As if from nowhere, Ephemeral Rig Man appears! When his work is done he vanishes...into the night!

Last time we talked about the problems caused by keyframe interpolation, and the benefits you can get by removing it altogether. But because we can't use Source Filmmaker for production--it's really not geared towards character animation--we're going to need to figure out how to get some of those benefits in Maya.

Some of those benefits are easy to get. Maya was designed for a series of separate animation curves for each attribute, each with its own keyframe placement and interpolation. But we don't need to think of them that way--we can make a "pose," with a keyframe on every attribute associated with the character, and choose to animate only with poses. "Breakdown" tools like Justin Barrett's The Tween Machine or our own Anzovin Breakdown Tool can be used to generate in-between poses easily. Indeed, this is a well-established workflow for blocking, even when the final animation will be splined and adjusted through the graph editor.
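
As a minimal sketch of what animating only with poses means in practice (the control names here are hypothetical), a full pose is just a stepped key on every keyable attribute of every control, all on the same frame:

import pymel.core as pm

def keyFullPose(controls, frame):
    # One stepped key per keyable attribute, all on the same frame,
    # so the character holds this pose until the next one.
    pm.setKeyframe(controls, time=frame, outTangentType='step')

keyFullPose(pm.ls('*_ctrl', type='transform'), frame=12)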

Some benefits are harder to get. Working with poses effectively really requires you to be able to see other poses while you are working, which is best done through some sort of onion skin tool in the manner of a 2D DCC app like Toon Boom. Maya is really not designed to do this, but there are a number of possible solutions--I'll go over onion skinning in a future post.

And some benefits seem, at first blush, impossible. In theory, a truly pose-based system would let you completely change the rig's behavior between poses. But even if you're thinking of your motion in poses, Maya isn't. It still thinks you have a bunch of animation curves driving a bunch of attributes that must remain consistent to allow for interpolation--actually changing the rig in an arbitrary way would completely destroy your motion. Maya is built to think about rigs as pre-defined little machines, and that's the opposite of what we want.

So we need a concept of "ephemeral" rigging--rig behavior that does not use the node graph! To Maya, ephemeral rigging is essentially invisible. It's triggered by some callback or manipulator, performs some rig behavior on the scene, and then vanishes, as mysteriously as it had appeared!

Here are the two initial tests I've put out for the ephemeral rig system I'm currently working on:

Note that the control rig has no keyframes, allowing you to arbitrarily change the rig. The "attach" command I'm using here just parents the control to whatever you want. That includes being able to completely reverse the hierarchy if desired, or parent controls to something external to the character. No special consideration is needed for "space switching," because the controls have no canonical space to begin with!

Using this system, you could decide you want the hand to be parented to the head. After adjusting the pose, you scrub over to another pose. When you do so, the hand is still parented to the head, but both the hand and the head controls have conformed themselves to the pose you've scrubbed over to. This allows you to configure the rig in any way you want without disturbing any pose, and then use that configuration to manipulate any pose you choose.

There are a number of ways to implement this. What I'm doing here uses API node callbacks that fire when the control rig is manipulated. When they do, they get the world transformation matrix of the control rig node being manipulated and use it to figure out what values the geometry rig needs to receive to match its pose. Since I am starting with the node's world-space matrix, hierarchy is entirely irrelevant.

When I've had a few more posts to lay down other aspects of my overall workflow, I'm going to circle back to ephemeral rigs and go into detail on how this system works, with code samples.

Trapped by Keyframe Interpolation!

spider.png

So...I'm going to be getting to the ephemeral rig stuff soon, but before I do I think I need to explain some other basic concepts, and some of the foundational ways I'm breaking with the established practices of CG animation, without which the ephemeral rig approach won't make much sense. Let's start with the big one, the most significant sacred cow I want to slay.

Keyframe interpolation.

Yes, all keyframe interpolation.

This isn't a spline-vs-linear thing. I think that the idea of persistent, always-on interpolation of keyframes of any kind was a bad idea from the start, and it's done a great deal of violence to the art of character animation. You don't see that many character modelers using NURBs, and we shouldn't be using function curves. But here we are.

To be fair, this isn't a completely new idea. People have been animating purely with step keys for a while, and lots of animators block till they have a full step-keyed pose on every other frame before hitting the dreaded spline button. That a technique that basically tries to put off using interpolation--the ostensible basis of computer animation!--until the last possible moment has become so common should tell us something. But I still don't think that the full horror wrought by the function curve is well understood.

I began to understand just how badly interpolation has screwed us when I started playing around with Source Filmmaker, Valve's machinima tool. Because it's primarily made to edit data captured from a game session, Source Filmmaker deals with animation as "samples" rather than keyframes. While they are not actually keyframes, thinking of them as keyframes that exist for all controls on every frame may help animators understand how they behave.

Source Filmmaker treats motion as samples, rather than keyframes. It's not really a character animation package, but it suggests possibilities that would be unthinkable with interpolated keyframes.

With a sample on every frame and no interpolation, you can do things that would be unthinkable in the context of conventional, interpolated keyframing. Perhaps you want to edit the motion of a character's hand for a portion of the shot during which the hand is on the character's head, and you want to edit the hand in the context of the head movement. You could mess around with constraints and space switching and manage a bunch of transitions between different states and controls.

Or...you could just parent the hand to the head! A system like Source Filmmaker already knows where the hand is on every frame of the entire shot, because there is no interpolation to make the motion dependent on context. So it can perfectly well just calculate a new position for the hand in the space of the head for every frame. You can modify the hand in head-space in whatever way you like, then just switch it to some other space whenever that is convenient. The motion will be precisely identical in any space you put it in.
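
The underlying math is nothing exotic. Here's a conceptual sketch--not anything Source Filmmaker actually exposes--of re-expressing per-frame world-space samples of the hand in head space, written with Maya's OpenMaya 2 matrix type:

import maya.api.OpenMaya as om2

def reparentSamples(handWorldMatrices, headWorldMatrices):
    # With a stored world matrix for every frame, "parenting" the hand to the
    # head is just per-frame matrix math--no interpolation to fight with.
    return [hand * head.inverse()
            for hand, head in zip(handWorldMatrices, headWorldMatrices)]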

The fact that we can't do this has led to runaway growth in rig complexity. Having a lot of different ways to manipulate a character is clearly desirable, but the need to make rigs accommodate different manipulation methods and also interpolate properly leads to rigs with a million switches and dials and blend controls and additive controls. Not only must the animator think about the important stuff that is actually the job of an animator--what is the character thinking, how will they express themselves--but also about how to use the overlapping controls of a modern rig to produce motion that won't turn into an uneditable mess when it's finally splined and you need to clean it up. But in an interpolation-less system...none of that matters! You can reparent controls however you like, change their pivot points, essentially just swap out one rig for another when desired. You are free to manipulate the character in whatever way makes the most sense at any given moment, and there are no consequences for interpolation, because there is no interpolation.

However, Source Filmmaker is not really a character animation tool. We can't just switch to it and get these benefits. Instead, we will need to figure out how to get the benefits of interpolation-less animation in Maya. Not to mention figure out how, in the absence of interpolation, we will generate and edit our inbetweens.

The very first post

Hey, it's a blog! This blog exists because I posted a rudimentary test I did of a new "ephemeral rig" technique, and so many people were interested in it that I thought an in-depth examination of my ideas would be something people might also be interested in. And because I'm working on something that might become a SIGGRAPH (or wherever) presentation anyway, and trying out some of these concepts in a public forum might help me whittle them down to the important bits.

I'll be expanding on the basic concepts in future posts, but the idea here is to approach CG animation in a way that reduces the complexity and time-consuming nature of the process and lowers its cost, while making the artist's contribution more direct and meaningful.

When I tell people I'm trying to make animation easier, they mostly at first assume I'm planning some sort of procedural system that tries to automate much of the animation process, but I'm actually doing the opposite. I'm trying to take out a lot of automation, and make the process of creating animation more direct. Within the right stylistic context, this can produce huge production speed gains.

The first real test of these ideas was on a production called The New Pioneers in 2016, directed by Chris Perry as a test for a television or film production.

I've been very gratified that The New Pioneers has been frequently mistaken for drawn animation. In fact all the character animation here is CG, and it was done with a tiny crew. I'd say I did about 60% of the animation myself, including some of the most difficult scenes--Mynn running up the tower, throwing her spear, and much of the monster--working part time over the course of about four months. The production as a whole took about five months, and never had more than four artists working at the same time. For a Cartoon Brew article last year, I put together this video showing how the process worked at that time:

Now, this was really just the first test of some aspects of the process I'm envisioning, and we hit plenty of snags and found plenty of areas where further research is needed. The animation quality isn't quite where it would need to be for a feature production, but it's a strong first step.

This blog is about how I'll take the next steps.