RAY MAIRLOT FREELANCE 3D ARTIST
  • Gallery
  • Blog
  • Store
  • Contact
  • About

Animation Nodes: An Experiment

24/11/2017


 
I recently made my first foray into learning probably one of the most advanced Blender add-ons in existence: Animation Nodes (AN from here on). And by 'learning' I mean I've done one project in it and don't know when I'll do another.

Here is my first, and possibly last, test:

My first little test in #AnimationNodes in #b3d: pic.twitter.com/QtNEKGNRPI

— Ray Mairlot (@RayMairlot) October 13, 2017

Now I have the rather unenviable task of trying to explain how I did it. Because I said I would and it seemed like a good idea at the time and now it's two thousand and ninety-three words later and I've come back up to the top of this post to write about how long it's taken and how I regret saying I would explain it.

However, despite the time it's taken to write this, this isn't a step-by-step tutorial and will probably take some investigation of your own to get a similar result. I merely nudge you towards the door, you must stumble your own way through it.

There are 3 main parts to re-creating the effect:

  1. Filling an object with other objects.
  2. Scaling the objects with AN.
  3. Creating the 'wave' effect with AN.

When I first started this mini-project I wasn't even intending to use AN, I was just trying to see if it were possible to fill an object with lots of other objects. It was only after looking at the sphere-filled torus that I wondered if I could animate it nicely.

My point is, at least in this version, the generating of the spheres to fill the torus was done beforehand. I would eventually like to try doing this step more procedurally as well, but for now this is a much more manual process.

Now, if you are ready to have some learning flung at you, proceed to the first part.


The Filling The Torus With Spheres Part


I'm not going to go into mega-detail for this part (or indeed any part), but the basic process is to fill a torus with a particle system of spheres and then use a rigid body system to allow the spheres to settle into non-intersecting positions:

  1. Fill an object with particles by using an 'Emitter' type particle system set to volume and a sphere as the Dupli Object (with particles set to start and end on frame 1).
  2. Convert the particle system to separate objects (Shift + Ctrl + A > Make Duplicates Real), make them all single users (U > Object & Data) and apply rotation and scale (Ctrl + A).
  3. Set the new spheres to rigid bodies.
  4. Make the torus a passive rigid body set to Mesh and flip its normals so the rigid body system knows your intention is for the rigid bodies to collide with the inside of the mesh.
  5. Turn off gravity (set gravity values to '0' in the Scene tab in the Properties Editor) and run the simulation.
  6. Discard any spheres that escaped the torus. They are dead to us now.
  7. On a frame where the spheres have settled, select all spheres and apply the Visual Transform (Ctrl + A > Visual Transform). This sets the spheres' positions to their positions as calculated by the rigid body simulation.
  8. Remove the rigid body settings from the spheres.
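Incidentally, the 'fill a volume with points' idea at the heart of step 1 can be sketched outside Blender entirely. Here's a rough plain-Python version using rejection sampling against the torus's implicit equation (my own toy sketch, not what Blender's particle system actually does, and it ignores the sphere radii and the rigid-body settling):

```python
import math
import random

def inside_torus(p, major_radius=1.0, minor_radius=0.25):
    """True if point p = (x, y, z) lies inside a torus centred at the
    origin and lying in the XY plane."""
    x, y, z = p
    # Distance from the torus's circular spine to the point.
    ring_dist = math.hypot(math.hypot(x, y) - major_radius, z)
    return ring_dist <= minor_radius

def fill_torus(count, major_radius=1.0, minor_radius=0.25, seed=0):
    """Rejection-sample 'count' random points inside the torus volume."""
    rng = random.Random(seed)
    bound = major_radius + minor_radius
    points = []
    while len(points) < count:
        # Sample inside the torus's bounding box, keep only the hits.
        p = (rng.uniform(-bound, bound),
             rng.uniform(-bound, bound),
             rng.uniform(-minor_radius, minor_radius))
        if inside_torus(p, major_radius, minor_radius):
            points.append(p)
    return points

centres = fill_torus(100)
```

In Blender terms, each returned point would become the centre of one duplicated sphere before the rigid body pass nudges them apart.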

After doing this project I actually found someone had already done nearly the exact same steps (explained in far more detail) in their own YouTube tutorial here.


The First Animation Nodes Part


This is the part where the real 'fun' begins:

#AnimationNodes is great, but it does sometimes (at least at the beginning) feel like building your own brick wall to bang your head against.

— Ray Mairlot (@RayMairlot) October 13, 2017

When I first made this it wasn't immediately obvious where to start. Partly, that's because with AN, nothing is obvious, immediately or indeed ever. Nevertheless, I had heard enough about AN to know that 'Subprograms' might lead me to the effect I wanted. So there too, is where we will begin.

Here is the first section of AN node setup, which deals with scaling the objects:

Picture

A key concept of Animation Nodes is to think about repeatable chunks of work. Instead of having to animate each object individually, we create a chunk of work (or in AN terms, a 'Subprogram') that takes in one object and animates it. We can then take that chunk of work and repeat it for as many objects as we need.

The 'Loop Input' node is connected to the chunk of work and the 'Invoke Subprogram' node calls that chunk of work as many times as needed.

How does the 'Invoke Subprogram' node know how many times to run the 'Subprogram'? Well, that's what we'll set up first.

After adding an 'Invoke Subprogram' node and choosing 'New Subprogram' > Loop, we need to change the 'Loop Input' node to accept a different input. Ideally, we need it to take in a list of objects and perform an operation on each one. Luckily, that's as easy as clicking the 'New Iterator' button on the node and choosing 'Object List' from the list of available inputs.

You'll now see that the 'Invoke Subprogram' node has changed its input from being 'Iterations' to 'Object List'. Far more useful. The 'Loop Input' node now knows that I will be passing it a list of objects and that I want to run the 'Subprogram' once for each object in the list.

Picture

As for the list of objects, we can use a regular Blender object group: select all your spheres, or whatever confounded shape you've used, add them to a group (Ctrl+G) and name it something completely irrelevant, like 'Group.001'. You know, something that will be completely unintelligible when you come back to this project in a few months. I've been silly and named my group 'Objects in Torus', which almost sounds like a useful name. I'll change it later.

So to summarise, the 'Invoke Subprogram' node will be passed a group of objects and it's going to loop through the list of objects and animate them depending on what is connected to the 'Loop Input' node.
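If it helps, the whole 'Loop Input' / 'Invoke Subprogram' arrangement behaves much like an ordinary for-loop. Here's a rough Python analogy (entirely my own stand-in code - the dicts and function names are made up for illustration; real AN works on actual Blender objects, not dicts):

```python
def scale_object(obj, scale):
    """The 'chunk of work' (the Subprogram): animate one object.
    Here it just sets a uniform scale on a stand-in dict."""
    obj["scale"] = (scale, scale, scale)

def invoke_subprogram(object_list, scale):
    """'Invoke Subprogram' with an 'Object List' iterator: run the
    chunk of work once per object in the list."""
    for obj in object_list:  # the 'Loop Input' node's iterator
        scale_object(obj, scale)

# The 'group' of spheres, as plain dicts for this sketch.
group = [{"name": f"Sphere.{i:03d}"} for i in range(3)]
invoke_subprogram(group, 0.5)
```

The 'Object' output of the 'Loop Input' node corresponds to the loop variable `obj` above: a different object from the list on each pass.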

Add an 'Objects from Group' node, choose the group you just created in the node's drop-down list and then connect its single output to the 'Invoke Subprogram' node's single input, like so:

Picture

That unnerving, unusual feeling, which you're hopefully feeling if my writing has served its purpose, is the feeling of starting to understand something, of making progress. A quite unusual feeling when first working with AN. Make the most of it, because it probably won't last long.

Now that we're onto the actual animation, we want to be able to pull an individual object from that list. We already set up the iterator, and when we did this the 'Loop Input' node got a new output called 'Object'. We can now add any nodes we want to animate that individual object and the animation will get applied to all objects in our group, because the 'Object' output changes to a different object in the group each time the 'Subprogram' runs.

Here are the nodes, connected to the 'Loop Input' node, that will actually animate each individual object:

Picture

The basic idea of the animation is this: look at the location of two empties (one on either side of the torus) and, depending on how close each empty is to the individual object, scale it somewhere between 0 and its original size. The empties are then animated closer to and further away from the object, which means the objects individually scale up or down.

To summarise the node setup above:
  1. Read in the 'Falloff' (a bit like Blender force fields, where there is an area around an empty in which objects are affected) - in this case we're specifically looking at the 'z' or '-z' direction of each empty and combining the two falloffs (so that both are considered). I think the falloff range is 0-1, so multiplying them together actually works better than adding, as the maximum value will then never exceed 1 (1 x 1 = 1, obvs).
  2. Use the 'Evaluate Falloff' node to compare the current object's location and its distance to each of the empties (each object is only ever in range of one empty).
  3. Use the output of this calculation as the scale value for the object. As the objects have their scale applied, when the falloff is 1 the objects will be at full size, otherwise they will be scaled down.
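In plain numbers, that works out to something like the sketch below. I'm guessing at the exact falloff shape (a linear one here), and I've reduced everything to the 'z' axis, but the key point - multiplying the two 0-1 falloffs so the result never exceeds 1 - carries over directly:

```python
def directional_falloff(empty_z, point_z, size=2.0, sign=1):
    """Stand-in for an 'Object Controller Falloff' in directional mode:
    full strength (1.0) behind the empty, fading linearly to 0.0 over
    'size' units along +z (sign=1) or -z (sign=-1)."""
    distance = sign * (point_z - empty_z)
    return max(0.0, min(1.0, 1.0 - distance / size))

def evaluate_scale(object_z, empty_a_z, empty_b_z, size=2.0):
    """Scale for one object: multiply the two falloffs, so the result
    can never exceed 1 (1 x 1 = 1, as noted above)."""
    falloff_a = directional_falloff(empty_a_z, object_z, size, sign=1)
    falloff_b = directional_falloff(empty_b_z, object_z, size, sign=-1)
    return falloff_a * falloff_b
```

With the empties far away on either side the object stays at full scale; as one empty's falloff starts to cover the object, the product (and therefore the scale) drops towards 0.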

On the left in the screenshot below, you can see the two empties selected, whose falloffs are compared to each object's location:

Picture

They're parented to a larger empty which is animated diagonally along the local 'z' axis as shown above. This is how the effect happens diagonally, as the 'Object Controller Falloff' nodes are set to read the local 'z' and '-z' axes respectively.

You can see on the right (above) a representation of the falloff, where in the centre it's lighter and the objects are full size, but if the empties move diagonally up and right, the imaginary dark falloff area will also move, start to cover the torus and cause the spheres to scale down.


Creating the 'Wave' Effect


The third and thankfully final part (come on, we're nearly at the end now) is to create the billowing wave effect. Here's the node network for the wave effect (I've removed the nodes that handle the scaling for the minute, just to make things a bit simpler):

Picture

The basic idea is that we are taking the original location of the object and using a 'sin' wave to animate it back and forth on the 'y' axis, adding in a bit of randomness for good measure along the way. The node setup above equates to this equation:

    sin(xLocation - ((time + TimeOffset)/Speed)) * RandomNumber * Strength + OriginalLocation

For those that don't know, the 'sin' function produces a wave depending on the number fed to it. Seeing as we want an animated wave, we need to pass in a constantly changing value, so we pass in the time value ('Time Info' node), offset it so the wave starts on the frame I want (the 'Math' node adding 40 to offset by 40 frames), slow the wave down a bit (the 'Math' node dividing by 5) and finally subtract the result from the object's original 'x' location so that each object gets a slightly offset sin value (otherwise they would all move the same amount).

After that we multiply in a bit of randomness between 0.5 and 1 so they each move a bit more individually, turn down the overall effect by multiplying by a number less than one, add it to the original 'y' value (so they start from their original 'y' positions), combine the 'y' location with the original 'x' and 'z' locations, before it's finally used by the 'Object Transforms Output' node which sets the location for the current object. Phew.
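That whole node chain boils down to a few lines of maths. Here it is as a Python sketch (the 40, 5 and 0.5-1 values are the ones from my setup above; the function name, the default strength and the per-object seed handling are my own invention):

```python
import math
import random

def wave_y(original_x, original_y, frame,
           time_offset=40, speed=5, strength=0.3, seed=0):
    """y location for one object on a given frame:
    sin(x - ((time + offset) / speed)) * random * strength + original y."""
    # Each object keeps the same random factor on every frame,
    # hence deriving it from a fixed per-object seed.
    randomness = random.Random(seed).uniform(0.5, 1.0)
    phase = original_x - (frame + time_offset) / speed
    return math.sin(phase) * randomness * strength + original_y
```

Run per object per frame, this gives each sphere its own gently offset bob on the 'y' axis while 'x' and 'z' stay untouched.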

It's possible, likely even, that this equation could be simplified, but such was the joy of finally getting something which resembled a result that I couldn't bear to touch anything and accidentally break it.


Add it all together


Now we've calculated the location and the scale we can add all the nodes together, producing this wonder:

Picture
Look at that big, lovely, seamless screenshot. It's almost as if I used some automated method to capture it.

Really, I should have added in a few more images, gifs or some other animated doodahs to break up those chunks of text, but golly gosh, I really couldn't be bothered; it took long enough just to write all of this without faffing about making nice illustrative, understandable and helpful images.

So there we have it, a completely clear, 100% explained, nothing vague, detailed, step-by-step guide of how I did it. Apart from all the parts I skimmed over and left to you, the avid reader, to figure out for yourselves. Which, in fairness to me - and I have been assured that I definitely deserve fairness - is only really the animating of the empties and the adding and connecting of the nodes.

Well done me. And well done to you for getting this far. But mainly well done me. Because let's be honest, it's me that's done most of the work in this transaction. I've had to use all 10 fingers to type, you've just had to use two eyes. Or whatever number of eyes you have available to you. Either way, regardless of the number of operational eyes, 10 fingers is more. No one's got 10 eyes.

Unless you're reading it in a group.

Bugger.


Ray.

All My Time Is Gone And Other Stories

23/5/2016


 
It's been 4 weeks since I last wrote here, leaving The Internet to descend into madness as it tries to comprehend a world without regular blog posts from me. "When will the literary drought end?", The Internet cries into the dark. "When?!", it cries again, worried that no-one heard the first time. Fear not, I have heard you. The drought/darkness (delete as appropriate) is over. I have returned, albeit briefly, to quench your thirst for ramblings, quenches and of course, thirsts, or my name's not Ray 'The Thirst-Quencher' Mairlot*.

*I will continue to proclaim that is my name up to, but not beyond, the point of being asked to prove it.

While once my time was abundant, now, my time is taken up by (and I'm happy to say, will continue to be taken up by) freelance, but I did manage to steal a few hours away to work on a small script at the weekend. Or one of the weekends. I forget which one and it's not really important to telling you what I worked on. What I'm trying to say is, it's an extraneous detail that doesn't deserve to be expanded on. Let's just say a weekend and be done with it. Embrace the ambiguity.

The script I made is currently a standalone script, but if it proves to be worthwhile it will be packaged up as part of Animated Render Border (my add-on on the Blender Market), upgrading it to its third and probably final version. What always improves something? More of that thing! In this case, that means more render borders, i.e. being able to set multiple regions of the image to render, instead of just one.

My test was successful as the image below shows; two borders are set using a temporary UI and then rendered into one image:
Picture
There are a few hurdles before I can say it will be definitely released, such as trying this out on larger scenes. Essentially, the script renders the frame twice and then*** combines the results, so if a frame takes a long time to build the BVH or do some volume pre-processing then any time saved by doing a border render might be lost by having to do this pre-processing twice.

***30th April! I remembered, that's when I made the script. Thank goodness. Anyone who was worried about the lack of detail before can now calmly recede from the depths of ambiguity, back into the comfort of specificity.
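For the curious, the 'combine the results' step is conceptually just pasting each border's pixels into a blank frame at its offset. Here's a toy sketch (nothing like the add-on's real code, which works on actual render results - the dict-of-offsets format and single-number 'pixels' are purely illustrative):

```python
def composite_borders(width, height, renders, background=0):
    """Paste several partial 'border renders' into one final frame.
    'renders' maps an (x, y) offset to a 2D list of pixel values."""
    image = [[background] * width for _ in range(height)]
    for (offset_x, offset_y), pixels in renders.items():
        for row, line in enumerate(pixels):
            for col, value in enumerate(line):
                image[offset_y + row][offset_x + col] = value
    return image

# Two 2x2 border renders pasted into an otherwise blank 6x4 frame.
frame = composite_borders(6, 4, {
    (0, 0): [[1, 1], [1, 1]],
    (4, 2): [[2, 2], [2, 2]],
})
```

Everything outside the two borders stays at the background value, which is exactly why rendering only the borders can save time.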

When will I get to work on it again? I don't know. Will I probably start another experimental script before finishing this one? Yes, it's more than likely. But, for now, it's back to having no time, which is really no complaint at all, because I can attest to the fact that getting paid to do something you enjoy is far better than not getting paid.

And with that, perhaps somewhat abruptly, the end.

Ray.

Modelling The Forth Arm

23/4/2016


 
In my last post I noticed that the forearm models looked a little less than perfect. The forearms were one of the first parts of the model I made and originally I really wanted to make sure they conformed to the reference images I had. Coming back to them now, I think I made them conform a bit too much. Even though they appeared to fit the references, they were a weird shape when viewed from the top. I thought it would be better to reshape them to something more logical even if they didn't match the reference images as well:
Picture
As it turns out, having updated the models to a better shape, they do still manage to fit the reference images somehow. It makes sense that the more logical shape is the correct shape, so it's reassuring that the references seem to confirm that.

I have a slight worry that this project is a bit like 'Painting the Forth Bridge', in that once I finish one part enough time will have passed that another part will seem outdated or messy enough to need re-doing. I don't intend to redo a lot more of it, though I think some of the chest panels need refining.

I actually have some freelance work over the next few weeks; I'm not sure how much of my time it will take up, but it likely means less work done on this project for a while.

Such is life.

Ray.

The Shrinkwrap Modifier: A Hard-Surface Modeller's Best Friend

10/4/2016


 
This post is mainly about using the Shrinkwrap modifier for modelling, which is below, but there's also a quick update on some of my projects right at the end.

My Favourite Modifier


Before I started the 'Heartbreaker' project I probably wouldn't have said that the Shrinkwrap modifier is one of my favourite modifiers in Blender (not that anyone had actually asked me, or likely ever would). Maybe in the top 10, but only just. I would probably have gone for one of the classics, like the Subsurf or the Mirror; you just can't go wrong with those two. However, that's all changed. If anyone ever asks me*, I will say my new favourite modifier, at least regarding modelling - which is what I'm doing most of the time - is the Shrinkwrap. It has become my go-to, problem solving, reliable friend.

*Which they won't.


Are You Insane? And What Does The Shrinkwrap Modifier Even Do?


No, I am not. A valid question (the second one), thank you (me) for asking. In its simplest form, the Shrinkwrap modifier is tasked with snapping the current object onto the surface of another object. It also has the ability to only snap specific vertices if you specify a vertex group.

Here we see a simple subdivided plane being shrinkwrapped to the surface of a sphere:
Picture
A simple, but I'm sure you'll agree, *Powerful* example.
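For a sphere target, the maths behind that snapping is simple enough to sketch: move each vertex to the nearest point on the sphere's surface. The plain-Python version below is just an illustration of the idea - the real modifier handles arbitrary meshes and has several snapping modes:

```python
import math

def shrinkwrap_to_sphere(vertex, centre=(0.0, 0.0, 0.0), radius=1.0):
    """Snap one vertex to the nearest point on a sphere's surface."""
    offset = [v - c for v, c in zip(vertex, centre)]
    length = math.sqrt(sum(o * o for o in offset))
    if length == 0.0:
        # Degenerate case: the vertex sits at the centre, so pick a pole.
        return (centre[0], centre[1], centre[2] + radius)
    scale = radius / length
    return tuple(c + o * scale for c, o in zip(centre, offset))

# A 5x5 grid of 'plane' vertices hovering above a unit sphere.
plane_verts = [(x * 0.5, y * 0.5, 2.0) for x in range(-2, 3) for y in range(-2, 3)]
snapped = [shrinkwrap_to_sphere(v) for v in plane_verts]
```

Every snapped vertex ends up exactly one radius from the centre, which is the subdivided-plane-onto-sphere picture above in number form.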
I think the Shrinkwrap modifier was probably first created as a retopology tool, the snapping allowing you to easily create new, low-poly geometry, over your high-res model, without having to constantly think about manual snapping. Considering this, I'm not sure that 'Shrinkwrap' is actually the best name for it; maybe 'Snap' would have been better. The Snap modifier has a ring to it. But then, who am I to start renaming things? Sure, I was technically the best renamer in the area of Greater London in the years 1993 - 1998*, but I have no official certification for that, so I'll leave actual naming and any subsequent renaming to those that do.

*I retired from the gruelling world of competitive renaming undefeated and vowed never to return, due to the physical stress it caused my body.

How Does Retopology Tie-In To Hard-Surface Modelling? Are You Sure You're Not Insane?


It's not so much that retopology fits into hard-surface modelling; it's more that some hard-surface modelling scenarios and retopology share some common needs. Also, please stop asking if I'm insane.

There are two scenarios that can be very time-consuming when modelling:

  1. You have several meshes that all need to conform to the same profile (meaning surface curvature).
  2. You need to edit or add additional details to a curved surface.

This is very similar to what retopology requires, and both of these problems happen to play to the Shrinkwrap modifier's strength: conforming vertices to a specific surface.

That's All Well And Good, But Show Me Some Specific Examples


Please don't be so demanding. I've got some examples from 'Heartbreaker', the project I just literally won't shut up about.

The 'Heartbreaker' Iron Man suit has a tendency to have many separate panels that all conform to the same profile. Below, on the left, is the forearm, which is made of many pieces. They all have the same bulge and crease going through them, which would be time-consuming to model manually. Instead, I built one continuous mesh to describe the profile I want my pieces to conform to, shown on the right. All the pieces on the left conform to the profile of the mesh on the right (the Shrinkwrap target):
Heartbreaker also has many examples of detailing cut into curved panels. Cutting into curved surfaces is notoriously difficult as any sharpening edge loops on those details end up causing undesirable pinching, particularly at corners. It's also very difficult to perfectly maintain a curved surface while adding in new geometry.

Here, the head remains perfectly curved despite having cut details into it, thanks to good ol' Shrinkwrap:
Picture
Here is a wider view of the top of the head on the left, with its Shrinkwrap counterpart on the right:
The 'eyebrows' in the above image were excluded from the Shrinkwrap by adding all vertices apart from the eyebrow to a vertex group and selecting it on the Shrinkwrap modifier.

The Process


  1. Model a simple mesh that describes the curvature you want your mesh(es) to follow*.
  2. Add a Shrinkwrap modifier to the mesh(es) you want to conform to your Shrinkwrap object.
  3. Select the Shrinkwrap object as the 'Target' on the Shrinkwrap modifier.
  4. If you only want to Shrinkwrap part of your mesh, then create a vertex group that contains all the vertices you want to affect and select that vertex group on the modifier.
  5. If you want, repeat the previous steps to add multiple Shrinkwraps.

*If I've already started modelling something, but decide I need a Shrinkwrap, I will sometimes duplicate the object I'm modelling, simplify it, and use it as the Shrinkwrap object.

I also like to change the 'Maximum Draw Type' to 'Wire' in the 'Display' panel of the 'Object' tab in the 'Properties Editor' for the Shrinkwrap object so you can see the object you are modelling as well (shown below). Also, you may find it useful to turn on 'Draw All Edges', also in the 'Display' panel, and 'Optimal Display' on the Subsurf modifier, if you're using one.
Picture
'Maximum Draw Type' set to 'Wire', 'Draw All Edges' and 'Optimal Display'.


When One Shrinkwrap Just Isn't Enough


Sometimes, creating a good enough Shrinkwrap object would be as complicated as modelling the original object, so not only will I sometimes use several Shrinkwrap objects (instead of one big one), but some of those Shrinkwrap objects have Shrinkwrap modifiers themselves. It's like needing to build scaffolding to be able to build more complicated scaffolding to be able to build the final object.

Here, for example, is the chest piece from Heartbreaker. It's probably the single most complicated piece in the whole suit, shown with all 9 of its Shrinkwrap objects:
Picture
The key to this is vertex groups. Each Shrinkwrap object above is responsible for only part of the mesh.

Nobody's Perfect


Despite the unrivalled awesomeness of the Shrinkwrap modifier, it does have a problem, but it can be worked around quite easily.

When you're moving the vertices of an object that is Shrinkwrapped, what you appear to see is the vertices moving along the surface of the Shrinkwrap object. What's actually happening is that you're moving the vertices in 3D space. Because of this, the Shrinkwrap modifier can have a hard time determining which part of the surface the vertices should be on if they are actually very far away from it in 3D space (which can happen when editing the mesh). The vertices will snap to the surface, but might not move smoothly. A quick fix for this is to press the 'Copy' button on the Shrinkwrap modifier to create a duplicate and then press 'Apply' on the copy. This applies the effects of the modifier, leaving the vertices close to the surface again and allowing smooth movement.

The same 'Copy' and 'Apply' process also needs to be done if you find loop-cut-and-slide, vertex-sliding or other modelling tools start to position vertices weirdly. For example, with vertex-sliding, the vertices will slide dependent on where they are in 3D space, which isn't necessarily where they visually appear to be.

One other thing is that the Shrinkwrap modifier should probably be before any Subsurf modifiers you might have on the object. If you have a Subsurf first then you're giving the Shrinkwrap more vertices to play with than actually exist. This will only be problematic if you want to eventually apply all the Shrinkwraps on the object though.

The End, Finally


You may think using this technique only applies to specific objects or tasks, but I have found myself using it on many projects and I now can't do without it, so give it a try and let me know how it goes.

As a reward for getting to the end of this post (if you just scrolled all the way down without reading DO NOT LOOK BELOW AT THE REWARD, IT IS NOT FOR YOU), here is the entire 'Heartbreaker' suit with all 113 of its glorious Shrinkwrap guides:
Picture
Embrace the Shrinkwrap modifier, love it, cuddle it, hold it at night, whisper sweet nothings into its ear and thank the gods it exists.

A Quick Update On Other Projects


I haven't got much done regarding 'Heartbreaker' so no update on that, other than I'm now working on fixing the upper-back, which I think is the last major part to be redone.

A quick update on 'Selective-Unhide', my unhiding add-on, is that it now supports not just Object Mode, but also Armature Edit Mode and Armature Pose Mode for hidden bones and bone groups and Mesh Edit Mode for hidden vertices:
Picture
Still a few things to do on it, I think, but they're relatively minor.


Ray.


Making Headway

1/4/2016


 
Despite promising to cover some of the modelling processes I use for the 'Heartbreaker' project, I'm just doing a short post today. Hard surface modelling techniques can wait until I have time (and/or inclination) to do a proper write up.

In my ongoing modelling odyssey* the Heartbreaker project continues, today with the 'finishing' of the head. I say 'finishing' as there are still a few things to do, like a few interior panels that lie behind the exterior panels, but essentially I have finished the main modelling.

Below is the comparison between the old head and the new one. Basically every piece was taken back to a basic stage to be rebuilt or just finished according to some better (or what I believe to be better) reference images.

*Arguably comparable in terms of epicness to The Odyssey by Homer. Not that Homer, the other one, the non-doughnut eating one. Maybe that's unfair. Who am I to say Homer didn't like doughnuts? Maybe he loved them. Maybe referencing different Homers by their doughnut preference was ill-advised and I simply should have been more specific, or, in reality, maybe there aren't as many similarities between this modelling project and an 8th century, ancient Greek poem as I thought...
Picture

Below is a still of the new version of the head as well as the back:

Picture

I hope you liked the title of this post: 'Making Headway'. It was a pun, because this post has been about the head I've been modelling and because of the progress or 'headway' I've been making. Puns really do give us the best of both worlds: they're fun and informative.

Ray.

Iron Man - Heartbreaker

17/3/2016


 
Finally! I worked around the rendering issues mentioned previously, so I can finally reveal what I've been working on: the Iron Man 'Heartbreaker' suit.

Picture
Proof that I actually was working on something. Ha!

I'm quite pleased with the way the renders turned out. I've been working on this so long that all I see is what has to be fixed, but seeing some renders and hearing some feedback lets me know I'm on the right track*.

*Keep reading for a great continuation of this train based metaphor.


Interestingly (not my opinion, an actual certified Very Interesting Thing™), the rim lighting on the models is actually from the material, not from physical lights. Partially, that's because I think you get more control, but really it's because I've never been able to get completely satisfactory results from trying to set up rim lighting. This setup uses the normal of the faces to determine whether it should be highlighted or not. Here is a (simplified) screenshot of it:

Picture
Click to enlarge. Or don't. The choice is literally yours.
The normal is manipulated with the 'Normal' node before being sharpened by the colour ramp. This mixes between the glossy base material and the white highlight. To get a really bright highlight I actually use an emission shader, but to make sure it doesn't cast light onto itself I use a 'Light Path' node so only the camera sees the emission, the objects in the scene just see black (the empty 'Shader' input).

This is a simplified setup, so it just shows the right-side rim lighting. If you want other sides of the model to be highlighted, duplicate the 'Normal' and 'ColorRamp' nodes, adjust the normal direction and add them to the other rim nodes with a 'MixRGB' node set to 'Add'.
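The underlying maths is just a 'facing ratio': dot the surface normal against a direction, then sharpen the result with the colour ramp. Here's a rough numeric sketch of what I believe those nodes compute (the linear ramp and the 0.7-0.9 thresholds are my own stand-ins, not the actual node settings):

```python
def rim_mask(normal, rim_direction=(1.0, 0.0, 0.0),
             ramp_start=0.7, ramp_end=0.9):
    """How strongly a face is rim-lit: dot the (unit) normal against
    the rim direction, then sharpen with a linear 'colour ramp'."""
    facing = sum(n * d for n, d in zip(normal, rim_direction))
    # Linear ramp: 0 below ramp_start, 1 above ramp_end.
    t = (facing - ramp_start) / (ramp_end - ramp_start)
    return max(0.0, min(1.0, t))
```

Faces pointing at the rim direction return 1 (full emission highlight); faces pointing elsewhere return 0 (plain glossy base), with a narrow sharpened transition between.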

See I told you it was interesting, and yet you (probably) resisted believing me. Hopefully you will trust me a bit more in future. If not, things are going to get pretty embarrassing for you when I continue to show you Interesting Things™. Let's avert this embarrassment by jumping aboard this train of Trust and riding out this analogy right to the end, together.

Now that all that train business is dealt with I can get on with my work, so that by next week I will be able to show off a new screenshot.

I guess that's the end of the line for this post (THE TRAIN FUN NEVER ENDS).


Ray.

All images and videos copyright © Ray Mairlot, 2021.
None of the content on this site is to be used without my permission.