You really do start to realise how big a vfx project is once you get to the compositing.
Take shadows, for example. This became (and continues to be) perhaps the biggest part of the post processing. I already knew I would have to build any object in 3D if it needed to receive a shadow, but, never having done a project like this from start to finish, there were several things I didn't realise until I came to the compositing:
There were a lot more objects in the scene once I really started to look. Originally I just thought I had to build the main buildings and surfaces. But no, as I looked closer, railings, people, buses, taxis...the more I looked the more I found. Luckily most of this didn't require any new techniques, just more modelling and quite a bit more animation (more on animation in a separate post, later).
Most of the objects could be dealt with quite easily as they're stationary. There was, however, one pedestrian walking across the street who was close enough to the camera that the shadow couldn't simply pass over them flatly; for realism it had to match the contours of their body.
I already had a person armature and model from the people I had made to sit on the rollercoaster, so it wasn't much of a problem to re-use this model and animate the armature to match the person in the footage. It was a bit tedious, and if you saw the 3D view from any perspective other than the camera you might think this person didn't have a lot of bone structure, with knees and elbows bending in directions that would make you wince. Nevertheless, from the camera view the little 3D figure appears to cross the road matching his real-world counterpart fairly well, with shadows falling over his body instead of being flat, as some of the more distant people can afford to have.
If mid-ground people weren't annoying enough to recreate, then boy was I in for a treat when I noticed all the foreground people, who had more detail than my generic 3D human could handle.
When we originally filmed, it took several takes to find one that didn't have cyclists or buses going right through the middle of the shot, but even so, it was impossible to film on a busy street without there being some pedestrians and vehicles in the way.
Any person that appears in front of the shadow means that part of the shadow has to be removed. The only way to really deal with that was masking, so I had to painstakingly mask the 3 or 4 people walking across the shot frame by frame. Frame. By. Frame. This was not enjoyable, even though it was a fairly short sequence. But once complete, a simple 'Math' node could add the white mask back over the top of the shadow pass, cancelling out the shadow in one fell swoop.
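The node setup itself is simple. As a sketch, combining the white roto mask with the shadow pass amounts to a per-pixel maximum (one plausible setting for that Math node; here images are just flat lists of floats, an illustration rather than the actual node tree):

```python
def cancel_shadow(shadow_pass, person_mask):
    """Combine a shadow pass with a white roto mask so that masked
    (foreground-person) pixels carry no shadow.

    Both inputs are flat lists of floats in 0..1: the shadow pass uses
    1.0 for 'no shadow', and the mask uses 1.0 inside the rotoscoped
    person. Wherever the mask is white, the shadow is forced back to
    white, i.e. cancelled.
    """
    return [max(s, m) for s, m in zip(shadow_pass, person_mask)]

# A 1x4 strip: shadow over the middle two pixels, person over the last two.
shadow = [1.0, 0.4, 0.4, 1.0]
mask   = [0.0, 0.0, 1.0, 1.0]
print(cancel_shadow(shadow, mask))  # [1.0, 0.4, 1.0, 1.0]
```

The shadow survives only where the mask is black, which is exactly the "one fell swoop" effect over the whole frame.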
These two previous points are annoying, but at least I was already aware of those issues, even if I wasn't completely aware of the scale at which they would need to be dealt with. Which left the detail I hadn't been aware of:
The footage has existing shadows. This sounds obvious. And it is! Of course there are shadows in the footage; things in the real world cast shadows. But what was less obvious is that I would somehow have to stop the CG shadows from overlapping them. If they overlapped there would be a dark patch: you can't cast a shadow onto something that's already in shadow without it getting darker than it should be, so the CG shadows have to stop very precisely at the exact point the real shadows begin.
This started to worry me slightly; I'd missed something which seemed obvious, and I wasn't immediately sure how to solve it...
I realised that unless I wanted to try and roto (mask) the shadows very precisely I would have to attempt to use a technique similar to how I'd extracted the ripples from the river. My thought was that I could create a rough mask around the area in the footage with the existing shadow and then isolate the darkest parts of that area, which should be the shadows, with colour correction. For the most part this does seem to work, there's still a bit of fine tuning to do at the exact join between real and CG, but the vector blur helps to, well, blur that line.
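As a rough sketch of the idea (the threshold and softness numbers here are made up for illustration, not taken from the real node setup), isolating the darkest pixels inside a hand-drawn rough mask might look like:

```python
def shadow_matte(luma, rough_mask, threshold=0.25, softness=0.1):
    """Isolate the darkest pixels inside a rough mask as a matte of the
    real shadow.

    luma is per-pixel luminance (0..1); rough_mask is 1.0 inside the
    hand-drawn region around the existing shadow. Pixels darker than
    `threshold` get a full matte value, fading to zero over `softness`
    so the join isn't a hard line.
    """
    matte = []
    for l, m in zip(luma, rough_mask):
        # 1.0 when well below the threshold, ramping down to 0.0 above it
        k = min(max((threshold + softness - l) / softness, 0.0), 1.0)
        matte.append(k * m)
    return matte

# A dark shadow pixel, a penumbra pixel, and a sunlit pixel:
print([round(v, 3) for v in shadow_matte([0.1, 0.3, 0.8], [1.0, 1.0, 1.0])])
# -> [1.0, 0.5, 0.0]
```

The resulting matte can then be used, like the roto masks above, to hold the CG shadow back wherever a real shadow already exists.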
A lot of work for just one part of the compositing and there's still lots more to do. Aside from compositing though, putting the people on the rollercoaster was a fun little exercise so I'll try and cover that next time.
As the rollercoaster project continues, this blog post will look at a technique I used to displace the reflection of the rollercoaster in the river.
The river, where the rollercoaster is being reflected, is obviously not a flat surface in real life, so that surface needs to be recreated in some way so our reflection also isn't completely flat. There are a couple of options at this point, and they separate into either using geometry to displace the reflection during the render or creating the effect in the compositor as a post-process.
The geometry method could either be a displacement modifier or an ocean modifier. I only tried the ocean modifier, and while it can recreate the surface well, the amount of geometry that is needed for the small ripples took far too long to load into memory when rendering. The displacement modifier would have the same issues.
The second option, and the one I had already thought would be better for render-time reasons, is to displace the reflection in the compositor. I don't know at what point I checked whether a displace node existed (a key ingredient for this to work), but I already knew the technique I wanted to use. It was a technique I had seen demonstrated in Nuke, where colour correction nodes are used to isolate the highlights on the surface of the water, and these become a factor for the displacement, because each highlight represents the top of a ripple.
It's not a method I've seen widely documented apart from the original place I saw it, unless it generally goes by another name. I tend to call it matte extraction, a matte being a mask. Knowing the technique, I just had to put it into practice, and this is one of those rare occasions where nothing went wrong.
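To make the idea concrete, here's a tiny stand-in for the two stages, highlight isolation and displacement, written over plain Python lists rather than real image buffers; the threshold and strength values are illustrative assumptions:

```python
def extract_highlights(luma, threshold=0.8):
    """Matte extraction: keep only pixels brighter than `threshold`,
    rescaled to 0..1. Each surviving highlight marks a ripple crest."""
    return [max(0.0, (l - threshold) / (1.0 - threshold)) for l in luma]

def displace_row(row, matte, strength=2):
    """A 1-D stand-in for the compositor's Displace node: shift each
    pixel sideways by matte * strength pixels."""
    out = []
    for x in range(len(row)):
        offset = int(round(matte[x] * strength))
        # Clamp to the row edge rather than wrapping around
        out.append(row[min(x + offset, len(row) - 1)])
    return out
```

In the real node tree the matte drives the Displace node's inputs over the reflection render, but the principle is the same: brighter ripple crests push the reflection further out of place.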
On the left you can see the ripples I extracted from the footage using colour correction and on the right you can see a reflection that has been displaced (albeit very faintly at this resolution). I'm currently isolating the water in the original footage with a mask, which isn't animated, so I'll probably try and create another render layer which can be used to do this automatically. I don't exactly enjoy animating masks.
It's not perfect at the minute and will no doubt go through a lot of fine tuning before it looks right, and of course this was without the motion blur applied to it.
The same thing has to also be done for any shadows that appear in the reflection, such as the shadows cast onto the reflected bridge, but it's easy to apply the effect again by grouping the nodes and creating another instance.
It's a catchy post title I know.
Technically this is the first post on my own site about the vfx rollercoaster project I'm working on, so if you want to catch up have a look at some of the previous posts that I ported over from my other site.
At the end of the last blog post I said something about using an inverted camera so that vector blur would work for reflections. I did indeed get this to work and seeing as I can't find much mention of either an alternative method or this method, I'll explain a little further.
So in the third shot of the vfx sequence, the rollercoaster loops around the bridge before going off into the distance. It's going quite fast, so there will be motion blur, and since we're using Cycles in Blender I could probably use true 3D motion blur; but aside from the fact I've never got it to work, I also want to cut down the render time, so I'm using vector blur in the compositor. This all works fine, no problem there. The issue is that said bridge is over water, meaning there should be a reflection of the rollercoaster... but if the rollercoaster is being vector blurred, then so should the reflection.
But that's the thing: the vector blur works by using a vector pass, which holds the motion of the object. A reflection, or rather the object doing the reflecting, doesn't move itself; only what it reflects moves. So if you looked at the vector pass you wouldn't see any motion data for the reflection. A vector pass has to see the moving geometry directly, not indirectly via reflections.
Here's where the inverted camera comes into play. We're going to use a second camera that completely bypasses the object doing the reflecting and renders the geometry itself. It's a little complex, so I've put together a crude drawing.
The red camera is the original, motion-tracked camera. The green line represents the object that we've been using to reflect the rollercoaster (the river). The blue camera is a new, second camera, and from the little blob on the bottom of the camera we can see it's pointing downwards, whereas the red camera is pointing upwards. This second camera was created by duplicating the original camera, parenting it to an empty, and then flipping the empty on several of its local axes until it's upside down. You've then just got to move the empty down so that the geometry is in roughly the right place below the bridge.
The white dotted line represents a pixel: the red camera sees it via the reflection object (green line), but the blue camera can see it directly (when the reflection object isn't rendered).
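The flipped camera is really just the original camera mirrored in the water plane. Assuming a horizontal water surface and a simple position-plus-view-direction camera model (the numbers below are illustrative, not from the actual scene), the maths is just a reflection of the z components:

```python
def mirror_camera(position, direction, water_z=0.0):
    """Mirror a camera in a horizontal water plane at height water_z.

    Returns the position and view direction of the 'inverted' camera:
    the camera height is reflected through the plane and the vertical
    component of the view direction is flipped. This is the same
    planar-reflection trick the duplicated-camera-plus-flipped-empty
    setup achieves by hand in the viewport.
    """
    x, y, z = position
    dx, dy, dz = direction
    mirrored_pos = (x, y, 2.0 * water_z - z)
    mirrored_dir = (dx, dy, -dz)
    return mirrored_pos, mirrored_dir

# A camera 3 units above the water, looking slightly down:
pos, d = mirror_camera((0.0, -10.0, 3.0), (0.0, 1.0, -0.2), water_z=0.0)
print(pos, d)  # (0.0, -10.0, -3.0) (0.0, 1.0, 0.2)
```

The mirrored camera ends up below the water plane looking slightly up, seeing the 'reflection' geometry directly, which is exactly why it produces a usable vector pass.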
Below is an example of this in action. On the left is a viewport-rendered view showing the reflection created by a flat plane. On the right is the view of the flipped camera; I disabled the rendered view to make it a bit easier to see that the geometry is now on the bottom, and I used the image on the left as a reference when moving the empty down so the geometry lines up with the reflection it's replacing. Success! We now have a camera that sees the 'reflection' geometry directly and will generate a vector pass.
One thing I noted is that once the reflection has been distorted by the water ripples (saving that for another post), the reflection, and its vector blur, are a lot fainter than I thought they would be. It started to make me think that the whole creation of the inverted camera was a bit pointless if you couldn't see the effect much, but I think it must make some difference.
It's actually still useful, because I use the same technique for the shadow pass that I composite over the footage: as with the reflections, the shadow pass wasn't picking up shadows of the reflected geometry, so this solves that too.
The only downside at the minute is that Blender can't render multiple cameras per frame, so you have to render out the reflection camera first (as .exr files, to preserve the vector pass accurately) and then composite it in (which rather cancels out the 'saves render time' reason). This may well change with the multi-view project that is being developed at the minute, but I'm not sure if that will allow different render layers to be visible per camera, so it may not help.
This is just a quick post (I hope) with a tip on how to render a different background when using HDRI lighting in Blender 3D. As you can see from the images on my gallery I generally just render a single object and don't really make a scene for it. The downside of this is that if the object is reflective it doesn't have anything to reflect in the empty scene and looks plain and unrealistic.
Like others, I get around this problem by using HDRI maps to provide fake reflections. The problem then is that the HDRI image appears in the background, which can take focus away from the subject. To get around that you can make a plane in the background which blocks out the HDRI image, but it often has to be quite large to fill the whole camera background.
A workaround I use is Blender's compositor, compositing in a different background from a second scene in the blend file. Of course this could be done afterwards if you just rendered a PNG with a transparent background, but if you're not intending to do any post work anyway you might as well do it in Blender. For example, I normally use a simple blended background set up in the 'World Settings', giving a gradient effect, which I would rather have than the HDRI background while still keeping the reflections the HDRI provides.
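For anyone curious what that composite boils down to, the Alpha Over operation is essentially a per-pixel mix by the foreground's alpha. This sketch assumes straight (unpremultiplied) alpha, whereas Blender renders are normally premultiplied, so treat it as the idea rather than the exact node behaviour:

```python
def alpha_over(fg_rgba, bg_rgb):
    """Composite a rendered foreground (with alpha) over a background,
    per pixel: out = fg * a + bg * (1 - a). Where the object render is
    transparent, the gradient background from the second scene shows
    through; where it's opaque, the HDRI-lit object wins."""
    out = []
    for (r, g, b, a), (br, bgr, bb) in zip(fg_rgba, bg_rgb):
        out.append((r * a + br * (1 - a),
                    g * a + bgr * (1 - a),
                    b * a + bb * (1 - a)))
    return out

# One opaque red object pixel and one fully transparent pixel,
# over a blue gradient background:
print(alpha_over([(1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.0, 0.0)],
                 [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]))
```

So the HDRI still lights and reflects in the object, but never reaches the final background.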