The Return of Just to Do Something Bad

[Image: llamas.png]

It has been quite some time since this blog was updated! One could perhaps be forgiven for assuming that, in the madness of this dread year 2020, I had met some strange and inexplicable fate--press-ganged into an intergalactic war, perhaps, or devoured by feral llamas. Well, I’m happy to report that all rumors of my demise are entirely erroneous. I have merely been ensconced in my laboratory, biding my time until I could unleash, upon an unsuspecting world, these strange and foreboding announcements.

To begin with, I’m pleased to announce that Epic Games--the proud creators of a powerful Engine with a startling ontological status--have seen fit to award Chris Perry and me a MegaGrant for the creation of a pipeline for nonphotorealistic animation production using Maya and Unreal.

[Image: Epic_MegaGrants_Recipient_logo.png]

This has been a dream of ours for quite some time, going back to our work on The New Pioneers test. That test demonstrated that rapid CG animation production using interpolationless animation, illustrated backgrounds, and stylized rendering was viable. It also demonstrated that it was a lot less rapid than it could be. Much of that came down to the need to shepherd each shot through a rendering and compositing process to create the final look. The renders themselves were fast (though less fast than you might think--we were using Mental Ray for the passes at the time), but the assembly and tuning of each shot still took much longer than it should have: the complex After Effects comps were far from real time themselves, with many passes per character that had to be managed properly for every shot.

None of this seemed like a technological necessity of any kind, given that game engines routinely crunch through far more complex rendering and compositing math for every frame of almost any game in real time than we needed to do here.

To give you some idea of what this process was like, here’s a video showing some of the passes rendered for the monkey test we completed after New Pioneers.

The process of assembling the final shot from these passes makes assumptions that are very alien to those made in any kind of conventional CG rendering, real time or otherwise. For instance, the monkey’s main light is occluded by a shadow that is projected from an entirely different vantage point than the light itself, and there is an additional shadow pass that doesn’t occlude any light at all--it’s just applied to darken certain areas. That’s because, when doing graphic-looking work like this, the light and shadow serve very different purposes. I’m angling the light to give the best two-tone, while I’m angling the shadows to silhouette different aspects of the monkey's body to make them read clearly (the arm against the body, for instance).
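To make the decoupling concrete, here’s a minimal sketch of that pass-assembly math. All names and constants are illustrative, not the production comp: the point is just that the two-tone comes from the light direction, while each shadow pass is a simple darkening mask that never has to occlude that light.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_pixel(normal, light_dir, shadow_masks,
                lit=1.0, dark=0.4, shadow_tone=0.6):
    """Toy per-pixel assembly: two-tone plus decorative shadow passes."""
    # Two-tone: a hard threshold on N.L, with the light angled purely
    # for the best two-tone read.
    tone = lit if dot(normal, light_dir) > 0.0 else dark
    # Each shadow pass just multiplies in a darkening value where its
    # mask is on. It is projected from its own vantage point and is
    # never consulted when computing the two-tone above.
    for mask in shadow_masks:
        tone *= shadow_tone if mask else 1.0
    return tone

# A fully lit pixel darkened by one purely decorative shadow pass:
print(shade_pixel((0, 0, 1), (0, 0, 1), [True]))  # 0.6
```

In a conventional renderer the shadow term would attenuate the light before the shading is computed; here the two are independent knobs, which is exactly what lets the shadow be angled for silhouette rather than for physical plausibility.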

Similarly, a surface’s normals are usually intended either to represent the surface as accurately as possible, or to represent a higher level of detail (as with a baked normal map). But to get a decent two-tone, we need the normals to represent a surface that’s simpler than the actual mesh. This is one of the areas of our process we’re planning to iterate on, because the methods we used on The New Pioneers and the monkey test still required some manual masking in the composite and we think better methods are possible.
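One way to picture “normals that represent a simpler surface” is to sample them from a proxy shape rather than the render mesh. The proxy-sphere below is purely illustrative (the post only says the normals should describe something simpler than the actual geometry), but it shows why the terminator stays clean: high-frequency detail on the real mesh can’t break up the two-tone.

```python
import math

def sphere_normal(point, center):
    """Normal of a simple proxy sphere: the direction from its center."""
    d = [p - c for p, c in zip(point, center)]
    length = math.sqrt(sum(x * x for x in d))
    return [x / length for x in d]

def two_tone(point, center, light_dir, lit=1.0, dark=0.4):
    # Threshold N.L using the proxy's normal, not the render mesh's,
    # so the light/dark break follows the simplified shape.
    n = sphere_normal(point, center)
    ndl = sum(a * b for a, b in zip(n, light_dir))
    return lit if ndl > 0.0 else dark

# A point on the light side of the proxy reads as lit, regardless of
# whatever bumpy detail the real mesh has at that point:
print(two_tone((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 1.0
```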

Unreal isn’t designed to do this kind of trickery out of the box, but it is far more easily extensible than any traditional DCC and renderer, and our initial tests have been very promising. As we continue to develop real-time techniques around NPR in Unreal, you can expect this blog to shift somewhat from focusing almost exclusively on ephemeral rigging and interpolationless animation to a bigger focus on NPR--which, as you may note from the logline, is something I’d always meant to discuss here in any case.

That said, there’s still going to be plenty of ephemeral rigging to show off too. The other announcement I’d like to make is that, over the past six months, Tagore Smith and I have been working on Mark 3 of the ephemeral system. The previous prototypes I’ve shown were experiments, with awkward edge cases that would have made them difficult to put into full production. Mark 3 is intended to be a robust, production-ready implementation of the system, without the issues with performance, parallel evaluation, and multiple selection that plagued Mark 2. Tagore is ensuring that this version has a well-engineered structure, so that it can be a stable basis on which to build future features. The core of the system has also been separated from Maya, meaning it should be much easier in the future to implement it for other “host” applications. We mean to make this version available commercially in some form, although the details are still being worked out.