Dec 14, 2009

Automatic Shape and Pose Extraction

I was happy to see that this blog is still getting updated, so I thought I'd share some cool and relevant research I learned about in my Computer Vision class. The researchers managed to construct a generic human model from the common features of several meshes, and can use it to extract movement from a series of photos and apply that movement to another model.

The results are really impressive. Their website is here, and there you can see some of the animations they created. If you're really interested, here's the paper I read:
http://www.cs.brown.edu/~ls/Publications/nips2007sigal.pdf

Nov 25, 2009

modeling fast

See below for a very intriguing, fast model-capture system.

ProFORMA: Probabilistic Feature-based On-line Rapid Model Acquisition

Here's the video demo:

May 8, 2009

Scripting Maya

0. Thoughts on Maya

It's easy to find learning a piece of software like Maya daunting. At most schools, it takes two full semester-long classes in 3D animation before someone really understands how to create 3D animation from beginning to end, and that doesn't even cover the advanced techniques and features buried inside of the program. Scripting something like Maya can seem an even more difficult task, as most of the time, we know Maya from its user interface, and not from the way it stores data internally.

What Maya does do for you is echo the API commands in its own scripting language, MEL. Maya's UI is written in MEL, at the same level I was working when I started writing my scripts. That means the source code to the Maya tools written in MEL provides a valuable learning resource for anyone trying to learn the ins and outs of the program. Autodesk doesn't, however, provide step-by-step tutorials or even a feature-by-feature explanation of the API; rather, Maya developers are expected to learn the program through a combination of its command reference and the UI source code.

In addition to MEL, Autodesk provides a binding into Maya for the popular programming language Python. If you've already begun scripting for Maya in Python, you probably already know that the weaknesses in the provided Python layer can make for some ugly code. The good news is that translating MEL into Python is fairly mechanical. The bad news is that it results in code like this:

curQual = cmds.modelEditor(currentPanel, q=True, rnm=True)

if curQual == 'base_OpenGL_Renderer':
    cmds.modelEditor(currentPanel, e=True, displayAppearance="smoothShaded",
                     displayLights="default", displayTextures=True,
                     rnm="hwRender_OpenGL_Renderer")
else:
    cmds.modelEditor(currentPanel, e=True, displayAppearance="smoothShaded",
                     displayLights="default", displayTextures=True,
                     rnm="base_OpenGL_Renderer")


Calling one function for everything relating to, say, 3D Paint or a model view can get confusing quickly, especially since you're either setting or getting flags that are explained only briefly in the documentation. To see how these functions are actually used, you need to dive into Maya's own MEL source and pick out the relevant bits from the code that builds the user interface. In practice, this means turning on "Echo All Commands" inside Maya, doing the thing you want to automate, grabbing the name of the UI call from the Script Editor, and grepping through the hundreds of thousands of lines of MEL lurking in Maya's /scripts folder. I think most Python hackers would agree that this development process and way of laying out the API is decidedly unpythonic (http://en.wikipedia.org/wiki/Pythonic#Programming_philosophy). Some projects exist, such as PyMEL (http://code.google.com/p/pymel/), that make Python syntax and MEL objects play more nicely with each other, but unfortunately we didn't have time to look into them for this class.
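For what it's worth, that grepping step is easy to script itself. Here's a minimal sketch in Python; the scripts path and the example command name are placeholders, so point them at your own install and at whatever command the Script Editor echoed:

import os

def grep_mel(command, scripts_dir='C:/Program Files/Autodesk/Maya2008/scripts'):
    # Search every .mel file under the scripts directory for a command name.
    for root, dirs, files in os.walk(scripts_dir):
        for fname in files:
            if not fname.lower().endswith('.mel'):
                continue
            path = os.path.join(root, fname)
            f = open(path)
            for i, line in enumerate(f):
                if command in line:
                    print('%s:%d: %s' % (path, i + 1, line.rstrip()))
            f.close()

# e.g. grep_mel('art3dPaintCtx')  # substitute whatever command you saw echoed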

With all of that said, once you get the hang of development in Maya, you quickly realize that you have a powerful 3D package at your command, one that lets you take advantage of some very advanced features without having to code a renderer for them yourself. In terms of time taken, using Maya's particle engine or 3D paint tool is undoubtedly faster than writing your own, and this class is all about speed, so learning how to manipulate these tools with code was necessary to the mission of the class.

When we first started talking about the kinds of tools a programmer might contribute to the class, a lot of my initial ideas centered around creating art programmatically. I thought about writing some code that laid out a virtual world based on a few basic objects that populate it, so that a program could be fed a few models of carnival booths and rides and then spit out an intelligently laid out scene. Another idea we talked about was taking a rigged model and generating a walking animation for it based on the layout of its limbs, similar to what happens in the game Spore.

We eventually decided to pursue a different path, and focus on improving the interaction between the animator and Maya. We talked a lot about how the animator can get the ideas in his or her mind into Maya as quickly and efficiently as possible, while still having creative control over what results.

1. 3d Paint Tool

We decided that one group would start exploring the possibility of 3D matte painting, which would greatly reduce the amount of world building and modeling necessary to create the scene for our movie, without a sacrifice in visual quality (provided the camera stayed put). I was tasked with creating a 3D paint tool to streamline the painting process in Maya.

This tool would include a "Prime" button that would let an artist prepare a mesh for 3d painting with a single click. Naturally, this requires a lot of steps to complete, and I ultimately decided to give the artist the option of turning off each step as needed. This saves the artist time in the beginning, but we realized that the real time saver would be minimizing the number of times the artist needs to go back to the Maya 3d Paint panel and change settings. For that reason, I added some hotkey management code that would swap out common single-button hotkeys for 3d Paint specific functions (e.g. C for choose color, G for grab color from screen). The animators really liked this, and said it specifically saved them a lot of time. Another feature we were interested in was moving the current 3d paint texture map to Photoshop. That eventually became the next tool in its own right.

One final thing I should mention about the 3d paint tool is that there was some trouble getting it working at first. One peculiarity of Maya 8.5 and 2008 is that they require all named commands to be linked to MEL code, even if the rest of your tool is Python. This has apparently been fixed in 2009 with the sourceType flag on the nameCommand command, but most of the computers in Hampshire's lab weren't running 2009, so I used a quick fix built on the MEL command python() to effectively jump from Python to MEL and back to Python. A little unusual, but it worked...
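For reference, a minimal sketch of that workaround looks something like this; the module and function names (my3dPaintTools.grab_color) are hypothetical stand-ins for wherever your own tool code lives, and Maya must already be able to import that module:

import maya.cmds as cmds

# The named command body is MEL, and that MEL uses python() to call back into Python.
cmds.nameCommand('paintGrabColorCmd',
                 annotation='3d Paint: grab color from screen',
                 command='python("import my3dPaintTools; my3dPaintTools.grab_color()")')

# Bind the named command to a single-key hotkey while the paint tool is active.
cmds.hotkey(keyShortcut='g', name='paintGrabColorCmd')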

Another problem with the 3d paint tool was its tendency not to restore the hotkeys once you were done using it. This was a nuisance, since the animators had to go into the hotkey editor and fix it themselves. The problem is that even though my script could restore the hotkeys, Maya didn't seem to provide a way to trap the event of the script ending. Strangely enough, it does provide events for when your tool window is minimized and restored (the minimizeCommand/restoreCommand flags on the window command), but even these weren't called at the appropriate time. I ended up adding a "restore all hotkeys" button to the window for the artists to press before they finished with the tool.
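A minimal sketch of that fallback button, where restore_all_hotkeys is a hypothetical placeholder for whatever code re-binds the hotkeys you saved before swapping them out:

import maya.cmds as cmds

def restore_all_hotkeys(*args):
    # Hypothetical: re-bind the hotkeys that were saved off before the paint tool swapped them.
    pass

win = cmds.window('paint3dHelperWin', title='3d Paint Helper')
cmds.columnLayout(adjustableColumn=True)
cmds.button(label='Restore all hotkeys', command=restore_all_hotkeys)
cmds.showWindow(win)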

2. Getting Renders out of Maya and Into Photoshop

This posed an interesting problem at first, as we weren't sure how to divide up the painting duty between Photoshop and Maya's 3D paint, or in which order they would be used. It turned out that the painters were mostly outlining in Maya and finishing up all of the detail painting in Photoshop, so it seemed necessary to write a tool that made exporting to Photoshop as quick as my previous tool had made managing a palette inside of Maya.

Since our artists were already dividing up the textures for 3D paint into categories and render layers in Maya, it seemed to make sense to develop a tool that could organize a camera, its materials, and its child render layers and textures all in one place, while pushing renders of each layer out to Photoshop. I found the render command in MEL, which I preferred over Maya's external renderer because it let us keep a scene file open as we rendered it and had built-in flags for rendering specific render layers one at a time. By default, it rendered to .iff with bad transparency. Maya provides a function called convertIffToPsd which I started using in the first version of my code, but Maya's generated .psd files didn't retain transparency as alpha (instead painting the empty void black), so we tried several different file formats. Maya's export to .png preserved transparency perfectly, so I devised a system to include such a file in a .psd by means of a smart object (Photoshop's term for an updatable reference to an image stored elsewhere) and a Photoshop actions file (.atn) to update it.

Getting Maya to spit out a .png as opposed to a .iff required some hunting through the MEL code as well. The trick is to call setAttr('defaultRenderGlobals.outf', 32), with 32 being Maya's special identifier for .png files. You may want to setAttr the value back to 7 (.iff, the default) after your call to render.
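In Python the round trip might look like this; the camera name and resolution are examples, and the format values are the ones mentioned above:

import maya.cmds as cmds

# Switch the default render image format to PNG, render, then restore IFF.
cmds.setAttr('defaultRenderGlobals.outf', 32)        # 32 = PNG
try:
    image = cmds.render('renderCam', x=1280, y=720)  # example camera and resolution
    print('Rendered to: %s' % image)
finally:
    cmds.setAttr('defaultRenderGlobals.outf', 7)     # 7 = IFF, the default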

I built an early version of this tool and demoed it for the class, but we ultimately decided we wanted to lose the organizational overhead and let the artists push out a single render reference image themselves that they could paint over, and “spray” the new paint over the old texture back in Maya.

Getting that single image out was surprisingly easy, and the script ended up being deployed as a windowless button that you could press and generate a quick render of the selected camera on your desktop.

3. Creating a Hypergraph Shader

Having created a series of paintings on top of a quick render, it became necessary to plan how to move the new paint back into Maya. Basically, we moved the organizational task of fitting the painting images in place from the "before" Photoshop stage to the "after".

The two animators, Taryn and Tatiana, who were painting in the class and provided commentary on the tools, took two different approaches. One chose to use a quad shading switch in Maya, which is a special shadingNode that acts as a kind of multiplexer for pixel values within a shader. Because we had so many paintings that would have to fill a single surface, we thought it would be a good idea to have the shader make an intelligent decision as to which psdNode to source the image from, instead of building a shader for each image.

We did run out of time at this point in the semester, so I was never able to build a final version of this script, but I did have an early version to show off on the last day. In that version, a painter would first select a camera and lock it into place, and then create some materials associated with that camera's shot, each of which already had a psdNode created for it in the Hypershade. The painter would then select the meshes they wanted to texture and assign them to a material (it's important to realize that the relationship between meshes and materials is many-to-one), which would then update the Hypergraph shader. Finally, when they were done, they could mash a final button which would connect the whole mess together through the quad shading switch. Seeing what this looked like when created manually made it seem like a process in dire need of speeding up.

The animators decided later in the class to move to multiple shaders for other reasons, which does eliminate the problem of creating the quad shading switch, although a tool to create multiple shaders could certainly have been just as handy, and that is most likely what I would have started on next if I had more time.

4. Conclusion

I hope that the code I've written as part of this class becomes helpful to CG students here at Hampshire and animators anywhere that might be reading this blog. Improving the interaction between technically minded artists and necessarily complicated software is a rich area to explore, and I wouldn't be surprised if there were big gains in efficiency to be had in the process of streamlining the interface of a massive piece of software like Maya. As for me, I'm satisfied to have learned a lot more about the techniques and terminology surrounding computer graphics, and to have picked up Maya scripting skills along the way. I'm still impressed by the program's size and capability, but no longer intimidated, and plan on continuing to play around with Maya this summer.

Capturing Body Animation with One Camera (AND CHEAP!)

Through this process, you are able to capture the timing, poses, weight, etc. of a character with a cheap, homemade MOCAP suit and a digital camera. You will need some tight fitting clothing (to make the suit), paper, a marker, cardboard (optional), a digital camera that has a "video" mode, and an animation package (I will be explaining this in Maya's terms, but the process should cross software boundaries).

You first need a MOCAP suit. Mine was modeled after the iMocap suits used in Pirates of the Caribbean and Iron Man. For reference, you can look below at what I was wearing. The tight fitting clothes are so the tracked points that you put on the suit will not drift around (they will move with you exactly). To get a nice tracked point on a suit, you will want to put a dot (slightly smaller than a quarter) on a small piece of white paper and staple this to your clothing. I am sure there are better ways of attaching these, but this method is cheap, fast, and easy. You want the dots where your joints would be rotating. I did a test of one arm moving, so I had hip, chest, shoulder, elbow, wrist, and gun track points. The cardboard is for creating loops of track points (say for around your chest or arm) so you can almost guarantee a good track point. When creating these, alternate between a white background with a black dot and a black background with a white dot so the tracker doesn't get confused about which dot it is supposed to be tracking.

After you have the suit, you will need to take the movie of your actions (act out what you want to be captured). The whole idea of this process is that acting is the easy part, it comes naturally, and that is where the speedup should happen. We are acting with our bodies and not a mouse and keyframes. So, put your camera on a tripod or steady surface and make sure it doesn't move while filming. You will also want to have a nicely lit room. You might see in my example that I am in a dorm room with a couple of lamps pointed directly at me to light up the tracks (once again, to make the computer's job easier). With your suit on, press record and start acting. Right before you start acting, you might want to hold a stretch position where your arms, legs, and torso are fully extended. This will be used later in jointing. I have found that a digital camera works great. Of course an HD cam would provide a faster frame rate and an overall higher resolution.

Once you have your movie and are ready to take it into Maya, convert it to an image sequence so Maya can process it correctly in its different stages (you can't have an image plane with a movie file, and Maya Live won't track movies either, only image sequences). You first want to start up Maya Live by creating a new track-solve. You can import your image sequence and start tracking. There are tutorials on how to track things with Maya Live on the web, so I won't go too in depth here. The basic idea is: make a track point, move it to where the mocap point is, press "track", and whenever it fails, keyframe its location through the trouble spots of the video. If the track is lost for a long time, you can keyframe it through its entire animation or find another track point to help. For example, I had 2 track points on my gun. One was on the top, one was on the front. When my gun is down, I can track the top point, but as soon as the gun faces the camera, I am able to move the track down to the "front" track point and automatically track that through the duration of its visibility. Once you have tracked all of your points, you are left with locators. Here's the funny thing: the locators' X and Y translate attributes never change. In Maya, you have to connect into the locator's "location" attribute instead.

To be able to work with the tracked points, make a sphere (or some geometry that you can connect up to the locators). You can write an expression or use the relationship editor, but you should have the sphere's X and Y translate equal to the tracked locator's X and Y location. Press play. You should see the sphere dancing around the origin, but not up where your track is. You will have to place the sphere under the tracked locators' group in the Outliner to achieve the same scaling that is happening with the translations. With that done, you should see the sphere have the same X and Y location as the locator. You should repeat this process for all of your tracked points.
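Here's a hedged sketch of that hookup for one locator. The locationX/locationY attribute names follow the description above, so double-check them in the Connection Editor (they may differ by Maya version), and the 'trackedPoint*' name pattern is just an example:

import maya.cmds as cmds

def attach_sphere(locator):
    # Make a small sphere that follows a tracked locator in X and Y.
    sphere = cmds.polySphere(radius=0.1, name=locator + '_vis')[0]
    cmds.connectAttr(locator + '.locationX', sphere + '.translateX')
    cmds.connectAttr(locator + '.locationY', sphere + '.translateY')
    # Parent the sphere under the locator's group so it picks up the same scaling.
    parents = cmds.listRelatives(locator, parent=True)
    if parents:
        cmds.parent(sphere, parents[0])
    return sphere

# e.g. for loc in cmds.ls('trackedPoint*', type='transform'): attach_sphere(loc)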

What you should have now are some moving spheres in X and Y space. Find the frame where you had your stretch pose. Here you will want to start placing joints (and they should correspond to the tracked spheres, so you can snap them). With your body still in the stretch pose, you now add an IK handle to each joint. For example, you want one from the hip to the chest, the chest to the shoulder, the shoulder to the elbow, etc., until every joint has an IK handle associated with it. You will then (still on the stretch frame) point-constrain the IK handles to the tracked spheres. You will get a crazy animation here. The IKs are trying to reach the balls, so they rotate the joints in funny ways to get there. The only keyframing that you have to do in the whole process comes next.
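Before the keyframing step, here's a rough sketch of the IK setup just described; the joint and sphere names are placeholders for your own chain:

import maya.cmds as cmds

# One single-chain IK handle per bone segment, point-constrained to its tracked sphere.
# Each entry is (parent joint, child joint, tracked sphere) -- placeholder names.
segments = [('hip_jnt',      'chest_jnt',    'chest_sphere'),
            ('chest_jnt',    'shoulder_jnt', 'shoulder_sphere'),
            ('shoulder_jnt', 'elbow_jnt',    'elbow_sphere'),
            ('elbow_jnt',    'wrist_jnt',    'wrist_sphere')]
for start, end, sphere in segments:
    handle = cmds.ikHandle(startJoint=start, endEffector=end,
                           solver='ikSCsolver', name=end + '_ik')[0]
    cmds.pointConstraint(sphere, handle)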

Start with the root and work your way down the chain of joints. You will be keyframing the Z location of the IK handles. You want to move the IK handles until the joint (which has a fixed length, found with the stretch pose) can actually reach its IK goal. For each IK on every frame, there are 2 Z solutions: the bone can bend forward or backward to reach the proper X and Y position. As an animator, you will be able to tell that your elbow bends toward the camera and not backward. Go through each IK handle and keyframe the correct Z position so the IK and bone ends match up. Then press play. What you should end up with is a bone structure that moves along with your motions. The timing should be correct and the poses should be correct, but how can you use this?

If you have a set of bones that follow your body through 3D space, you can make other bones move like them. Say Woody has a very narrow chest and long arms (which he does) and you want your bones to move him. You want your bones' rotations to control his bones' rotations. In Maya that is called an orient constraint: one bone will orient itself to another. No matter the difference in length, the angles of the bones should be correct. This can also be achieved with an expression, simply setting the rotations of Woody's bones equal to the rotations of the tracked bones.
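A sketch of that retargeting step; the joint names are placeholders, and you can drop maintainOffset if the two rest poses already line up:

import maya.cmds as cmds

# Drive the character's joints from the tracked joints with orient constraints.
pairs = [('track_chest_jnt',    'woody_chest_jnt'),
         ('track_shoulder_jnt', 'woody_shoulder_jnt'),
         ('track_elbow_jnt',    'woody_elbow_jnt'),
         ('track_wrist_jnt',    'woody_wrist_jnt')]
for source, target in pairs:
    cmds.orientConstraint(source, target, maintainOffset=True)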

There will be weight painting issues which you cannot dodge with this process, but I still find that it is a faster way of getting baseline animation with correct timing and poses. The idea here is that you can keep working on this animation: bake it down so it has no connections, and then you can clean it up even more by hand.
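That baking step could look something like this; the frame range and the joint name pattern are examples:

import maya.cmds as cmds

# Bake the constrained rotations down to plain keyframes, then delete the
# constraints so nothing live is left driving the character's joints.
woody_joints = cmds.ls('woody_*_jnt', type='joint')          # placeholder pattern
cmds.bakeResults(woody_joints, time=(1, 240), simulation=True,
                 attribute=['rotateX', 'rotateY', 'rotateZ'])
cmds.delete(woody_joints, constraints=True)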


I hope this made sense and that it helps someone who wants to animate faster. If anything isn't clear, feel free to contact me; I would love to help update the blog and answer specific questions about this process. My e-mail is imk07@hampshire.edu.

May 7, 2009

Facial Animation using 2D Tracking

For the better part of the semester, I have been working with techniques of facial animation using two-dimensional motion capture. While figuring out the primary technical obstacles took only about two weeks, the bulk of the time Shane and I spent on this went towards figuring out how to make Maya Live work best for us, so that not only would we spend as little time as possible getting the results we wanted, but we could also put the facial performance in the context of the short we set out to make.

For tracking software, we chose to use Maya Live because, despite the advantages Boujou presented in user-friendliness and automation, Maya Live was ideal for the two-dimensional tracking we sought to do. What Maya Live does is put your video, in the form of an image sequence, onto an image plane. On the first frame, you create tracking points and move them onto whatever marks you've made in the video that you want the program to follow. What you come out with, in the end, is a group of locators that move along the X and Y axes with the tracking points in the video.

Our first attempt at using Maya Live was very rough, but yielded the solutions to our most fundamental problems. I started by sticking pieces of Scotch tape, which I had darkened with permanent marker, to parts of my face I felt would be important to animate for a full facial performance, such as the eyebrows, lips, cheeks, and eyelids. I shot myself on a relatively low-quality DV camera with no light source other than my window and the light in my room. I said the line "just one last thing, and then it's done" several times, and also made some exaggerated faces for the purpose of testing the limits of the software. The limits of the software ended up being tested elsewhere. The quality of both the tracking points I put on my face and the lighting I used proved to be too low to be entirely cooperative with the software once tracking began. Because the markers were Scotch tape, their reflectivity made them appear white at times, which caused the tracker to fail to recognize them and lose them altogether. Because the lighting was poor and the marks I put on my face were dark, many of them, particularly those on my lower lip and under my eyes, would get lost in the shadows on my face. It became clear that recording footage for motion tracking was not just a matter of putting the points in the right place and getting a good performance, but of making the points clearly distinguishable no matter what position your face is in.

These shortcomings didn't make tracking points impossible, just more time-consuming. Instead of being a matter of placing the points on Frame 1 and clicking "Start Track," tracking became a routine of finding where Maya lost the tracking point, manually placing the point where we knew it should be, and continuing to track it until it was lost again. Repeat. We soon realized that we could make this process at least a bit faster by setting the tracking to "Bidirectional" instead of just "Forward." Ultimately, 80 percent of the time we spent tracking footage went to filling in the holes Maya left open, compensating for the failures of my source footage.

Once we had the points completely tracked, it became apparent that we couldn't directly use the locators Maya gave us for the facial animation, because parenting them to polygons or joints wouldn't influence their movement. This was because the locators were keyed not through a translation channel, but through a "location" channel. So, what we had to do was create as many polygon primitives as there were locators and use the Connection Editor to connect each locator's "Location X" and "Location Y" channels to the "Translate X" and "Translate Y" channels, respectively, of a polygon primitive. One mistake we made in our first attempt at this was placing the polygons approximately where the locators were. While this still resulted in the polygons moving, the result was far from the actual movement of the locators. It turned out that the polygons needed to start out at 0 in order for the connection to truly work, so this meant having them start out at the origin; making the connection would then automatically put them in the right place.

To turn these floating spheres into a facial animation, we placed an equal number of joints on a rough model of Woody's face, in about the same places on his face as the dots were on mine. The joints were then each made children of their respective polygon and smooth bound to the face. Despite the fact that no additional weight painting was done, the result was an animation that resembled my own facial performance more than we expected.
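A rough sketch of that joint placement and bind, assuming the driven polygons share a naming pattern; every name here (facePoly_*, faceJoint_*, woodyFaceMesh) is a placeholder:

import maya.cmds as cmds

# One joint per driven polygon, snapped to it, parented under it, then the whole
# set smooth bound to the face mesh.
face_joints = []
for poly in cmds.ls('facePoly_*', type='transform'):
    cmds.select(clear=True)   # so the new joint isn't parented to the previous one
    pos = cmds.xform(poly, query=True, worldSpace=True, translation=True)
    jnt = cmds.joint(position=pos, name=poly.replace('facePoly', 'faceJoint'))
    cmds.parent(jnt, poly)
    face_joints.append(jnt)
cmds.skinCluster(face_joints + ['woodyFaceMesh'], toSelectedBones=True)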

Knowing what we now knew, the next step was to figure out the best places to put tracking points on my face in order to get the kind of facial performance Woody gave in the Toy Story movies, and the best method of putting those points on my face, to say nothing of the lighting.

I was able to improve the lighting, and thus the visibility of the points, simply by repositioning my face in relation to my light sources and boosting the exposure on the camera I was using. If I needed to, I also adjusted the brightness and contrast in Final Cut.

The two main problems with the Scotch tape markers were that they were shiny and that they were irregularly shaped and sized; they were often too big to be useful tracks. So from then on, I used a Sharpie pen to put dots on my face. Once Shane made a "map" of where the tracking points should go on the Woody face, I was able to put dots on my face to correspond with them.

This was the first of what became 3 retakes of facial footage, each time trying to solve a different problem. The changes, as well as new problems presented, can be summarized thusly:

  1. Though the lighting was better and the dots were more evenly sized and shaped, I put too many extra dots on my face and some of them appeared to mash together, and as a result they sometimes confused the Maya tracker. Also, some dots were still too large or too small, resulting in the same problem. Because I used a black Sharpie, the problem of points getting lost in the shadows on my face was still present, though there were fewer shadows thanks to the improvement in lighting.
  2. At Shane’s suggestion, I used a red marker this time to make the dots more visible. However, because the lighting and exposure was substantially better in this take than in previous ones, the need for non-black points was not nearly as great, and ironically some points were lost, now because their color became hard to distinguish from my face instead of the shadows. At this point, it had become increasingly apparent that we needed a “fixed” point on my head to track that only moved with my whole head, without influence from the movement of my eyes or mouth.
  3. I returned to using a black Sharpie, and in marking my face tried to make the dots as "medium" as possible, but err on the small side. My primary goals in marking myself were to keep any dots from being too close together and to make sure the placement was as symmetrical as possible. My solution for a "control" point was to take my headphones and tie 4 aluminum armature wires around the band, having the last 2 or 3 inches stand straight up, ending in a small loop. I then put a small ball of red clay at the tip of each wire to give the ends distinct points that could be tracked. While this last take was far from perfect (mainly, some lower lip dots got obscured), this was the footage we wound up using for the remainder of the semester.

Now that we had pretty much worked out all of the problems we faced with the process of tracking points and getting a CG face to animate the way we wanted, the final challenge was to have the face animate both on the full body rig and in the context of Woody's body animation, mainly Woody lifting his head and the body animation that follows his line.

The problem I faced in allowing Woody's face to animate along with the rest of his body was that, in its current state, if I moved the head at all, the joints would stay in place and, being skin-bound, keep parts of the head with them, stretching the mesh in undesirable ways. Before I touched the full body rig, I tried this with just the head. What I did first was put all of the joints into a group and set the pivot point to the same place as the head pivot. I then parent constrained the translation of each of the joints to their respective polygon, and I parent constrained the rotation of the group to the head control. This allowed me to animate Woody's head, and the joints would not only stay firmly on his face, but they animated his face the same way with his head facing down as with any other direction.

The last problem was that the full body rig already had a skin cluster, so binding more joints to the face was impossible. The solution was a simple one: instead of smooth binding the joints to the face, I made them influence objects. This way, they could behave the same way they always did, except they didn't interfere with the pre-existing skin cluster. Other than this difference, the rest of the procedure was the same.
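A hedged sketch of the full setup, combining the grouping, the split parent constraints, and the influence objects; all node names here (faceJoint_*, facePoly_*, head_ctrl, bodySkinCluster) are placeholders for whatever your rig actually calls them:

import maya.cmds as cmds

joints = cmds.ls('faceJoint_*', type='joint')        # placeholder name pattern

# Group the face joints and move the group's pivot to the head control's pivot.
grp = cmds.group(joints, name='faceJoints_grp')
head_pivot = cmds.xform('head_ctrl', query=True, worldSpace=True, rotatePivot=True)
cmds.xform(grp, worldSpace=True, pivots=head_pivot)

# Translation comes from each joint's tracked polygon, rotation from the head control.
for jnt in joints:
    poly = jnt.replace('faceJoint', 'facePoly')
    cmds.parentConstraint(poly, jnt, skipRotate=['x', 'y', 'z'])
cmds.parentConstraint('head_ctrl', grp, skipTranslate=['x', 'y', 'z'])

# Add the joints as influence objects so the rig's existing skinCluster stays intact.
for jnt in joints:
    cmds.skinCluster('bodySkinCluster', edit=True, addInfluence=jnt, weight=0.0)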

All things considered, this is without a doubt a much faster method of facial animation and lip sync than the traditional keyframe-based method. Creating the actual performance is done more or less in real time, and making the dots trackable is just a matter of placement and contrast. Once you understand what you are doing, the most time-consuming parts of the in-computer process are more tedious and repetitive than anything else. While this technique is obviously limited to the expressive abilities of the real human face, the amount of time it takes to get that level of performative quality and nuance this way versus the traditional way is incomparable.

Painting Your 3d World with a Camera Projection!

  1. To begin painting a world from a camera projection in Maya, you first need to create the camera. Make sure the camera you create is either a duplicate of the camera for the shot you are painting or a similar camera that catches all the objects you need to paint. Make sure the settings on your camera correspond to your render settings and your shot camera; they need to be identical except for the location. Lock your projection camera. Once everything is set, render an image from your projection camera. You can add any lights and shadows you would want to have for a reference image. Open the image in Photoshop.

  2. Once you are in Photoshop with your rendered image, there are a few things you need to do before you begin painting: create Photoshop groups for each object you are painting in your scene. For example, if you are painting a kitchen, you will probably have modeled a sink, stove, fridge, walls, and floor. In Photoshop you should have a group for each one of those objects, with the name of the group corresponding to the object. Once this is done you can begin painting each object. Paint them with all the information you want to see in your final scene; that may include lights, shadows, reflections, and texture.

  3. This step is the tricky part, because we are going to build the shader you are going to use in Maya to project your world. You should begin by creating a surface shader for each projection camera you created. Name the shader after the camera it corresponds to in order to avoid confusion. Apply the shader to the objects you want it to paint. The next step is to create 2 projection nodes in the Hypershade. One is for the color you painted and one is for the alpha, if you have any (if you do not have alpha you do not need the second projection node).

    Name the projections so you will remember what each one corresponds to. Example: your shader is called kitchenShader and your projection camera is called kitchenProjection. You should name your projection nodes something to the effect of kitchenColor and kitchenAlpha. Trust me, keeping the names cohesive is very important. For both of these nodes, set the projection type to perspective, and under Camera Projection Attributes link it to your projection camera.

    Next you want to create a PSD node for the Photoshop group that you want to apply to your object in Maya. Once the node is created, go to the file attributes of your PSD node and click the folder icon next to Image Name to load your Photoshop file. Now set Link to Layer Set from Composite to whatever layer set you want applied to the object.

    Now for the very tricky part. You are going to need to create 2 more nodes: a Multiply/Divide and a Reverse. These are so we can get the alpha channel you created in your Photoshop document working on the surface shader. In your Hypershade you should now have 2 projection nodes, a PSD node, a Multiply/Divide node, a Reverse node, and your surface shader node.

    The first thing to do is to link the outColor of your PSD node to your Color projection node's image input.  After that link your PSD node's outAlpha to your Alpha Projection node's imageR input.  

    Next link your color projection's outColor to input1 of your Multiply/Divide node. Then link your alpha projection's outColorR to the input2X, input2Y, and input2Z of your Multiply/Divide node.

    After this, link your Multiply/Divide node's output to your surface shader's outColor.

    Now take your alpha projection node's outColorR and link it to your reverse node's inputX.

    Then take your Reverse node's outputX and link it to your surface shader's outTransparencyR, outTransparencyG, and outTransparencyB.

That is all.  Whew.
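For reference, here is a hedged script sketch that builds the same graph. The node and file names follow the kitchen example above, and a couple of details are assumptions worth verifying in your Maya version: the perspective value of the projection node's projType enum, and connecting the camera shape's message into linkedCamera (the same thing the Attribute Editor does when you link the camera by hand). The Link to Layer Set step is left to the Attribute Editor as described above.

import maya.cmds as cmds

shader     = cmds.shadingNode('surfaceShader',  asShader=True,  name='kitchenShader')
color_proj = cmds.shadingNode('projection',     asTexture=True, name='kitchenColor')
alpha_proj = cmds.shadingNode('projection',     asTexture=True, name='kitchenAlpha')
psd        = cmds.shadingNode('psdFileTex',     asTexture=True, name='kitchenPSD')
mult       = cmds.shadingNode('multiplyDivide', asUtility=True, name='kitchenMult')
rev        = cmds.shadingNode('reverse',        asUtility=True, name='kitchenReverse')

cmds.setAttr(psd + '.fileTextureName', 'sourceimages/kitchen.psd', type='string')

for proj in (color_proj, alpha_proj):
    cmds.setAttr(proj + '.projType', 8)   # 8 = perspective (check the enum in your version)
    cmds.connectAttr('kitchenProjectionShape.message', proj + '.linkedCamera')

cmds.connectAttr(psd + '.outColor',        color_proj + '.image')
cmds.connectAttr(psd + '.outAlpha',        alpha_proj + '.imageR')
cmds.connectAttr(color_proj + '.outColor', mult + '.input1')
for axis in ('X', 'Y', 'Z'):
    cmds.connectAttr(alpha_proj + '.outColorR', mult + '.input2' + axis)
cmds.connectAttr(mult + '.output',          shader + '.outColor')
cmds.connectAttr(alpha_proj + '.outColorR', rev + '.inputX')
for channel in ('R', 'G', 'B'):
    cmds.connectAttr(rev + '.outputX', shader + '.outTransparency' + channel)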

World Building With 3D Paint and Projected Paint

Exploring methods of world building for computer animation has been my primary focus this semester. As a group, we decided that some goals for a successful world included evidence of a past and history. The created world must also be cohesive within itself. Each world has its own set of guidelines to follow, whether it is color scheme, physics of motion, or something else. All of these components help to make a world believable.

Our intentions as a class were to find a way to create this world quickly and easily, but still with a high quality result. We decided that painting was the best way to do this. Skilled painters can create compelling and detailed scenes very quickly, so we began by exploring how to integrate traditional painting with a computerized 3D world.

Using Maya's 3D paint proved to be a good way to paint a layout on a character or object, but did not allow for much detail when painting, as the paint tool was fairly simple. Issues arose where areas of the mesh met, and painting straight lines accurately was extremely challenging. 3D paint offered a lot of exciting options such as painting transparency, luminance, or even displacement, but since each property needed to be painted over again from scratch, it was a difficult process and not as intuitive as "real" painting.

After a suggestion from Jeremy, we began to explore camera projections as a source of paint. I worked on creating a shader that would project a painting drawn in Photoshop onto objects within the scene. The painting would be based off of a rendered frame and then taken back into Maya.

At first I was creating shaders that preserved the original 3D paint, which also allowed for a method in which the painter could use both 3D paint and then a projection for more detail. This involved a layered shader, and after some trial and error we successfully created a shader that preserved both paint layers as well as the transparencies between them so that they overlapped correctly.

An early problem with projected paint that caught light was that paint did not translate well between objects. Our example was grass painted on the ground that overlapped a booth: the two received light at different angles, and the scene instantly revealed that the grass was painted on. To avoid this problem we began painting light and shading into the painting and having 3D lights not affect the object's texture. Another issue with projected paint was that paint did not stick to moving objects. This was easily solved by using texture reference objects, which "glue" the paint on at a specific frame.

As the semester advanced, there was a clear continual shift towards "true" painting with shading, lighting, color, texture, and even shape information all within the painting itself. We moved from 3D painting to projected paint that was lit with lights in Maya, to painting the light on surface shaders which would catch no light at all from Maya. I think this is a result of painting traditionally being easy, intuitive, and fast - especially for a professional painter.

For a short while I explored the use of a quad shading switch. Since each object needed a separate shader with only slight differences from the others (different layers within the same PSD file), it seemed logical to use a quad switch so that all of the objects utilizing the same PSD file could use the same texture. This seemed to be a brilliant plan until it was discovered that linking objects to PSD layers with the switch was time-consuming and confusing at times, especially when texture references were being used. Perhaps using individual shader materials for each object or one material with a shader switch is a decision the individual painter should make, as everyone works differently, and some might find one way easier than the other.

I think projecting a complete painting is a great way to make a computer generated world look believable. With a talented painter, I think there are endless possibilities to making a scene detailed and compelling in a short amount of time. Based on my own skills at painting, I think I preferred when Maya took my painting and handled the perspective and lighting for me. However, this method has some obvious weaknesses, including not knowing how your paint will look once in Maya, and the issue with the grass. I imagine that in a professional setting a painter would feel more comfortable painting the light in, in which case they can use the latter method of surface shaders which proved to be successful. All new techniques and methods take time to get used to, but I feel that projecting paint from photoshop in this way could speed up the entire process once it becomes familiar.

May 2, 2009

On Worldbuilding

Over the semester I've been using various techniques to produce something like a compelling world for our action to take place in. The aspects of a compelling world that we came up with in our first few meetings were history, continuity, and believability. We decided that painting is the most direct way of getting these ideas across.

First attempts with 3D paint hardly addressed this. 3D paint is great for roughing in the concept, but it is messy both in execution (you have to have pretty high-res maps to get good detail, and even then it is hard to pull off) and file management (it makes new copies of the 3D paint texture each time the project folder is changed). I experimented in making cubes into different shapes by using the transparency aspect of 3D paint. It works reasonably well, but really only for (far) background objects. In the end, 3D paint seems really only good for touch-ups.

Our next foray into painting a world in 3D was using paint projected from a camera in perspective. The point of this was to allow the artist to paint a scene as a whole (say, projecting paint for each of the booths) and in the context of the shot. Time would not be spent on background objects and foreground objects would be adequately painted. Small details inside the booths would not be modeled, but painted onto planes which should catch the paint at different depths. I used a camera copied directly from the main camera. It wasn't moved back to capture more of the surroundings.

Taryn painted a booth, which I lit in Maya to see how the paint held up. Lighting the scene opened up some sticky problems, as Taryn had painted some grass in front of the booth that when lit didn't look as if it was actually in front of the booth. Placing a plane here wouldn't work because it would still interact incorrectly with the lighting. Also, the objects in the scene looked very different from the initial painting.

My painting process went along these lines:

I built some more detailed models of the surrounding booths (as we decided that if we were lighting the scene, better geometry means better light, though they were still fairly rough).

Though the scene was already lit, I took a render of the scene without the lighting and painted over it, then rendered the new paint with a shader that would catch light (a Lambert in this case). This first step used a different PSD file texture for each object; though they all used one PSD file, each pointed to a different layer set. This approach means you can paint over the edge of the objects in your scene, and it shouldn't show up when rendered.

I took the lit image back into Photoshop to paint details according to the light, then applied that paint back into the scene as a surface shader. This last step allowed me to paint over the boundaries of the initial objects and add in transition details (like the grass). The last surface shader could pretty much be one surface shader, applied to all objects in the scene, using a PSD file texture set to Composite. The only issue with using surface shaders is that they don't compute transparency properly, but for my approach this wasn't really an issue.

The downside to working this way is that I have to paint in perspective, which is one of the things that Maya is great at and that I am not so great at. The same goes for painting in light. The resulting image is an odd mix of handwork and mathematics. I personally don't think that my painting skills can make something look polished in 3D.

Apr 23, 2009

Render Settings and the Paint Pipeline



This is the image I rendered using surface shaders. The whole image was rendered out, taken into photoshop and painted on. That was then sent back into Maya and then rendered again. Default shading sampling of 2 to 8 samples, and multipixel filter.




Multipixel filter, shading rate of 1 sample per pixel.



Lambert rendered with no multipixel filter and shading samples at 1.


Surface shader, no filter and shading samples at 1, render used as background. This is to show if there is any loss when an image created with no filter is used again in the render.


This is the image with multipixel filtering, after I have gone back and made sure when I iterate in photoshop, that I retain the crispness of my painted image.

Basically, regardless of how the image is ultimately rendered, when working in the paint pipeline it is important to render images that will be taken into Photoshop as crisply as possible, so as to lessen the degradation of the image as it is bounced back and forth in the lighting stage. When rendering for use in Photoshop, do not use multipixel filtering, and set shading sampling to 1 sample per pixel. The final render can use multipixel filtering.
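In script form, the two setups might look like this; the attribute names are for the Maya software renderer's defaultRenderQuality node, so verify them in the Attribute Editor for your version:

import maya.cmds as cmds

def crisp_for_photoshop():
    # Settings for renders that will be painted over in Photoshop.
    cmds.setAttr('defaultRenderQuality.useMultiPixelFilter', 0)  # no filtering
    cmds.setAttr('defaultRenderQuality.shadingSamples', 1)       # 1 sample per pixel
    cmds.setAttr('defaultRenderQuality.maxShadingSamples', 1)

def final_render_quality():
    # Restore filtering and the default 2-8 shading samples for the final render.
    cmds.setAttr('defaultRenderQuality.useMultiPixelFilter', 1)
    cmds.setAttr('defaultRenderQuality.shadingSamples', 2)
    cmds.setAttr('defaultRenderQuality.maxShadingSamples', 8)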

Apr 8, 2009

Face to The Face Machine Test

So it's kind of coming down to the fact that I know very little about rigging a face, so I am looking for work people have done already. I went to Anzovin Studio's website and downloaded the demo face for The Face Machine. So, without retracking another video (Steve and I are having FTP server issues), I just linked some points up from my last track to a scaled-up Face Machine face. I feel like this has the most potential so far.


A Facial Rig Video

So I found this facial rig video on YouTube that does not look like it uses blend shapes. Of course there is no tutorial, so I have no idea how they did it, but I imagine ours will be similar.


Apr 6, 2009

boujou Tracking

So the automatic tracking in Boujou is pretty good, but not perfect. It is great for attempting to track everything and anything in the frame (or the mask field) by attempting tons of points. The problem occurs when most of these points only track for a few frames. A movie 600 frames in length would have about 19,000 tracked points. One way to mend this is to stitch tracks together. If you are attempting to track a specific point, chances are that it has been tracked for 95% of the movie. There are multiple tracks that make up the complete path of the point, and it is quite easy, if a little time consuming, to stitch tracks together. Though time consuming, these composite tracks are regarded as more accurate than the others, so you can track the features again more accurately once you have composited tracks.

But how can we use these points?

There are 3 ways to export usable data from Boujou.
First, Export Feature Tracks: this creates a .txt file that has, for each track_id, the frames in which it is active and its X and Y position. Here is some sample data:

# track_id view x y
auto_18604 0 656.968 25.4732
auto_18604 1 669.579 16.3805
auto_18604 2 671.129 15.7436
auto_18604 3 671.787 16.1741
auto_18605 0 213.145 193.769
auto_18605 1 213.142 194.029
auto_18605 2 212.903 194.239
auto_18605 3 215.941 197.151

Above are the tracks for points "auto_18604" and "auto_18605". Both are visible in frames 0, 1, 2, and 3. For each of those frames, there is the X and Y position.

This method can also export the data for manually placed locators that track user defined points.
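Here's a small sketch of reading that file back in Python; the file layout follows the sample above, and the commented keyframing lines are just one example of what you might do with the data inside Maya:

from collections import defaultdict

def load_tracks(path):
    # Parse Boujou's "Export Feature Tracks" text file into
    # {track_id: [(frame, x, y), ...]}.
    tracks = defaultdict(list)
    for line in open(path):
        line = line.strip()
        if not line or line.startswith('#'):
            continue                      # skip blanks and the header comment
        track_id, frame, x, y = line.split()
        tracks[track_id].append((int(frame), float(x), float(y)))
    return tracks

# Example: key a Maya locator from one track (maya.cmds assumed; names are examples).
# for frame, x, y in load_tracks('tracks.txt')['auto_18604']:
#     cmds.setKeyframe('trackLoc', attribute='translateX', time=frame, value=x)
#     cmds.setKeyframe('trackLoc', attribute='translateY', time=frame, value=y)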


Method #2, Export Camera Solve:
Even if you want 2D data, not 3D, you could run a camera solve so that the points it generates will last throughout the entire duration of the clip. You will not lose any work you put into the feature tracks (such as joining tracks together for a more accurate and longer solve), because these "gold tracks" heavily influence the camera solve it puts out.
Another plus of exporting a camera solve is that it can be exported as a .ma file to import directly into Maya.

Method #3, Export Tracks to Shake:
This process will simply take all of your feature tracks (target tracks and locators as well, if you have them) and export them into Shake as simple 2D movement. I haven't explored this option fully yet, but I think it has potential.

Apr 4, 2009

Reference Images

I found a gazillion reference pictures for the fairground. Most of them have similar lighting (cloudy sky around dusk with lights from the rides), but they vary enough that we should probably pick just one or two to work from; I just wasn't sure which. I put them all in a folder called "Reference Images" in group storage. And there are even more here: http://www.flickr.com/photos/10thavenue/sets/617252/

And here are a few from the folder that I thought might work well.




Apr 1, 2009

Steve's Face to Woody Rig Unweighted

Besides a few points that didn't track well, everything seemed to work pretty well. There are some peculiar things, though. The track points eventually seem to lag behind the movement on Steve's face; the points do the movement, but moments later than they should. No idea what is causing this. Steve also has way more dots on his face, but I don't know if tracking all of them is needed.


Mar 31, 2009

Woody Rig Ideas

So this is pretty much the idea I came up with for the placement of joints on our mesh and dots on Steve's face. I based these on both other motion capture systems and watching Toy Story. The uppermost dots will be used to define the top of the head (I don't expect much movement up there), the purple lines out of the ears will define the orientation of the head, and the lowermost dots will control jaw movement. The dot on the nose defines where the nose is. Dots above the eyes control the brows, and those below define the eye area. The dots in the cheeks will control puffiness/creases in the cheeks.

One thing about mocapping Woody's facial movement is that in Toy Story his facial movement above the mouth and below the brow ridge is very limited. In fact, all of the characters in the movie carry the same limitation. For the human characters, I would assume the animators would have liked to have more facial deformation, but I am unsure about the toy characters. Woody's head is probably made of plastic, so one could assume they did not intend to have much facial deformation. Or it could simply be that they did not yet have the technology and know-how to perform that much facial deformation.

Another important thing I noticed was that Woody's brow ridge never really got much bigger or smaller; again, there was little deformation there. However, the eyebrows themselves were what really defined his expressions. Overall I don't know if these observations make what we are doing easier or harder. I guess we will find out when we try it.

Mar 26, 2009

Joints parent constrained to locators, no weighting other than automatic. First test!


Mar 24, 2009

Face Mocap!

Mar 11, 2009

Poses Video

So obviously there are some problems with the sound syncing up with the poses. I imported the image sequence, but I must have exported it out at the wrong fps (I did 29.97, but I now think I should have done 24). I imported the sound at the end, which was stupid of me, but I was pretty confident the video was 29.97 fps.

Getting the image sequence to work in Maya took a lot of troubleshooting. Perhaps I was just not familiar with the process, but the only way it will work is with the names set up like so: name.frame#.extension. The problem with this was that I could not export the frames out of QuickTime with a period in the name. I found a file renamer eventually, but it took me a while to figure out that this was the problem.
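A quick sketch of that renaming fix in Python, assuming the frames come out of QuickTime as something like name_0001.ext and need to become name.0001.ext:

import os
import re

def rename_frames(folder):
    # Rename 'shot_0001.jpg' style frames to the 'shot.0001.jpg' pattern Maya expects.
    for name in sorted(os.listdir(folder)):
        match = re.match(r'(.+?)[ _-](\d+)\.(\w+)$', name)
        if match:
            base, frame, ext = match.groups()
            new_name = '%s.%s.%s' % (base, frame, ext)
            os.rename(os.path.join(folder, name), os.path.join(folder, new_name))

# e.g. rename_frames('/path/to/exported/frames')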

As far as the speed of doing the actual poses, it was faster than not having a reference video. I feel like we aren't going to see a significant increase in the speed of the animation, though. It took about an hour and a half to two hours to put all the poses in, and these poses aren't even as accurate as they could be; they would definitely need more finessing if this were a final product.

Anyways here is the result: