I recently made my first foray into learning probably one of the most advanced Blender add-ons that exist: Animation Nodes (AN onwards). And by 'learning' I mean I've done one project in it and don't know when I'll do another.
Here is my first, and possibly last, test:
Now I have the rather unenviable task of trying to explain how I did it. Because I said I would and it seemed like a good idea at the time, and now it's two thousand and ninety-three words later and I've come back up to the top of this post to write about how long it's taken and how I regret saying I would explain it. However, despite the time it's taken to write this, this isn't a step-by-step tutorial and will probably take some investigation of your own to get a similar result. I merely nudge you towards the door; you must stumble your own way through it. There are 3 main parts to re-creating the effect:

1. Filling the torus with spheres
2. Scaling the spheres with Animation Nodes
3. Creating the 'wave' effect
When I first started this mini-project I wasn't even intending to use AN; I was just trying to see if it were possible to fill an object with lots of other objects. It was only after looking at the sphere-filled torus that I wondered if I could animate it nicely. My point is, at least in this version, the generation of the spheres to fill the torus was done beforehand. I would eventually like to try doing this step more procedurally as well, but for now this is a much more manual process. Now, if you are ready to have some learning flung at you, proceed to the first part.

The Filling The Torus With Spheres Part
I'm not going to go into mega-detail for this part (or indeed any part), but the basic process is to fill a torus with a particle system of spheres and then use a rigid body system to allow the spheres to settle into non-intersecting positions:
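The original post showed this step with images, and in practice it was all done through the UI, but as a very rough sketch of the same first step in Python (the object names, particle count and 2.7x-era property names are assumptions, not the actual setup):

```python
import bpy

# Hedged sketch: scatter sphere instances through the torus's volume with a
# particle system. Afterwards you would make the duplicates real
# (Object > Apply > Make Duplicates Real) and add a rigid body to each sphere
# so they can settle into non-intersecting positions.
torus = bpy.data.objects['Torus']          # placeholder names
sphere = bpy.data.objects['Sphere']

mod = torus.modifiers.new(name="Fill", type='PARTICLE_SYSTEM')
settings = mod.particle_system.settings
settings.count = 300                       # arbitrary example count
settings.emit_from = 'VOLUME'              # emit inside the torus, not on its surface
settings.render_type = 'OBJECT'
settings.dupli_object = sphere             # 2.7x name; 'instance_object' in 2.8+
settings.physics_type = 'NO'               # the rigid body sim handles the settling
settings.frame_start = 1
settings.frame_end = 1                     # emit everything on frame 1
```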
After doing this project I actually found someone had already done nearly the exact same steps (explained in far more detail) in their own YouTube tutorial here.

The First Animation Nodes Part
This is the part where the real 'fun' begins:
When I first made this it wasn't immediately obvious where to start. Partly, that's because with AN, nothing is obvious, immediately or indeed ever. Nevertheless, I had heard enough about AN to know that 'Subprograms' might lead me to the effect I wanted. So there, too, is where we will begin. Here is the first section of the AN node setup, which deals with scaling the objects:

A key concept of Animation Nodes is to think about repeatable chunks of work. Instead of having to animate each object individually, we create a chunk of work (or in AN terms, a 'Subprogram') that takes in one object and animates it. We can then take that chunk of work and repeat it for as many objects as we need. The 'Loop Input' node is connected to the chunk of work and the 'Invoke Subprogram' node calls that chunk of work as many times as needed.

How does the 'Invoke Subprogram' node know how many times to run the 'Subprogram'? Well, that's what we'll set up first. After adding an 'Invoke Subprogram' node and choosing 'New Subprogram' > Loop, we need to change the 'Loop Input' node to accept a different input. Ideally, we need it to take in a list of objects and perform an operation on each one. Luckily, that's as easy as clicking the 'New Iterator' button on the node and choosing 'Object List' from the list of available inputs. You'll now see that the 'Invoke Subprogram' node has changed its input from 'Iterations' to 'Object List'. Far more useful. The 'Loop Input' node now knows that I will be passing it a list of objects and that I want to run the 'Subprogram' once for each object in the list.

As for the list of objects, we can use a regular Blender object group, so take all your spheres, or whatever confounded shape you've used, add them to a group (Ctrl+G) and name it something completely irrelevant, like 'Group.001'. You know, something that will be completely unintelligible when you come back to this project in a few months. I've been silly and named my group 'Objects in Torus', which almost sounds like a useful name. I'll change it later.

So to summarise, the 'Invoke Subprogram' node will be passed a group of objects and it's going to loop through the list of objects and animate them depending on what is connected to the 'Loop Input' node. Add an 'Objects from Group' node, choose the group you just created in the node's drop-down list and then connect its one output to the 'Subprogram's one input, like so:

That unnerving, unusual feeling, which you're hopefully feeling if my writing has served its purpose, is the feeling of starting to understand something, of making progress. A quite unusual feeling when first working with AN. Make the most of it, because it probably won't last long.

Now we're onto the actual animation, and we want to be able to pull an individual object from that list. We already set up the iterator, and when we did this the 'Loop Input' node got a new output called 'Object'. We can now add any nodes we want to animate that individual object and the animation will get applied to all objects in our group, because the 'Object' output changes to a different object in the group each time the 'Subprogram' runs. Here are the nodes, connected to the 'Loop Input' node, that will actually animate each individual object:

The basic idea of how it animates each object is that it looks at the location of two empties (one either side of the torus) and, depending on how close the empties are to the individual object, scales it somewhere between 0 and its original size.
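If it helps to see that loop written out, here's a very rough Python equivalent of the idea. It is not the actual node network: it uses a simple distance-based falloff rather than AN's directional falloff nodes, and the group name, empty names and falloff distance are all assumptions.

```python
import bpy
from mathutils import Vector

# Hedged sketch of the 'Subprogram' loop: for every object in the group,
# scale it between 0 and full size depending on how close the control
# empties are. Uses the 2.7x-era group API and made-up names.
group = bpy.data.groups['Objects in Torus']
empties = [bpy.data.objects['Empty.L'], bpy.data.objects['Empty.R']]
falloff_distance = 2.0                         # arbitrary example radius

for obj in group.objects:                      # one 'Subprogram' run per object
    # Distance from this object to the nearest empty (world space, since the
    # empties are parented to a controller empty)
    dist = min((obj.location - e.matrix_world.translation).length for e in empties)
    # Near an empty -> scale towards 0, far away -> original (full) size
    factor = min(dist / falloff_distance, 1.0)
    obj.scale = Vector((factor, factor, factor))
```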
The empties are then animated closer to and further away from the objects, which means the objects will individually scale up or down. To summarise the node setup above: the two 'Object Controller Falloff' nodes read the empties' positions, each falloff is compared with the individual object's location, and the result scales the object somewhere between 0 and its original size.
On the left in the screenshot below, you can see the two empties selected, whose falloffs are compared to each object's location:

They're parented to a larger empty which is animated diagonally along the local 'z' axis as shown above. This is how the effect happens diagonally, as the 'Object Controller Falloff' nodes are set to read the local 'z' and '-z' axes respectively. You can see on the right (above) a representation of the falloff, where in the centre it's lighter and the objects are full size, but if the empties move diagonally up and right, the imaginary dark falloff area will also move, start to cover the torus and cause the spheres to scale down.

Creating the 'Wave' Effect
The third and thankfully final part (come on, we're nearly at the end now) is to create the billowing wave effect. Here's the node network for the wave effect (I've removed the nodes that handle the scaling for the minute, just to make things a bit simpler):

The basic idea is that we are taking the original location of the object and using a 'sin' wave to animate it back and forth on the 'y' axis, adding in a bit of randomness for good measure along the way. The node setup above equates to this equation:

sin(xLocation - ((time + TimeOffset) / Speed)) * RandomNumber * Strength + OriginalLocation
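Written out as a hedged Python sketch (the time offset and speed mirror the values described below; the random factor and strength values are assumptions, standing in for AN's per-object random number and the overall 'turn it down' multiplier):

```python
import math

# Hedged sketch of the wave expression for a single object on a single frame.
# time_offset (40) and speed (5) match the values described in the text;
# random_factor stands in for the per-object random number between 0.5 and 1,
# and strength is the overall multiplier (a number less than one).
def wave_y(frame, original_x, original_y,
           time_offset=40, speed=5, strength=0.5, random_factor=0.75):
    return (math.sin(original_x - (frame + time_offset) / speed)
            * random_factor * strength + original_y)

# Example: the 'y' position of an object starting at (x=1.0, y=0.0) on frame 60
print(wave_y(60, 1.0, 0.0))
```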
For those that don't know, the 'sin' function produces a wave that varies with the number fed to it. Seeing as we want an animated wave we need to pass in a constantly changing value, so we pass in the time value ('Time Info' node), offset it so the wave starts on the frame I want (the 'Math' node adding 40, to offset it by 40 frames), slow the wave down a bit ('Math' node dividing by 5, the 'Speed' in the equation) and finally we subtract the object's original 'x' location so that each object gets a slightly offset sin value (otherwise they would all move by the same amount).
After that we multiply in a bit of randomness between 0.5 and 1 so they each move a little more individually, turn down the overall effect by multiplying by a number less than one, add it to the original 'y' value (so they start from their original 'y' positions) and combine the new 'y' location with the original 'x' and 'z' locations, before it's finally used by the 'Object Transforms Output' node, which sets the location for the current object. Phew. It's possible, likely even, that this equation could be simplified, but such was the joy of finally getting something which resembled a result that I couldn't bear to touch anything and accidentally break it.

Add it all together
Now we've calculated the location and the scale, we can add all the nodes together, producing this wonder:

Really, I should have added in a few more images, gifs or some other animated doodahs to break up those chunks of text, but golly gosh, I really couldn't be bothered; it took long enough just to write all of this without faffing about making nice illustrative, understandable and helpful images. So there we have it, a completely clear, 100% explained, nothing vague, detailed, step-by-step guide of how I did it. Apart from all the parts I skimmed over and left to you, the avid reader, to figure out for yourselves. Which, in fairness to me - and I have been assured that I definitely deserve fairness - is only really the animating of the empties and the adding and connecting of the nodes. Well done me. And well done to you for getting this far. But mainly well done me. Because let's be honest, it's me that's done most of the work in this transaction. I've had to use all 10 fingers to type, you've just had to use two eyes. Or whatever number of eyes you have available to you. Either way, regardless of the number of operational eyes, 10 fingers is more. No one's got 10 eyes. Unless you're reading it in a group. Bugger.

Ray.
It's been 4 weeks since I last wrote here, leaving The Internet to descend into madness as it tries to comprehend a world without regular blog posts from me. "When will the literary drought end?", The Internet cries into the dark. "When?!", it cries again, worried that no-one heard the first time. Fear not, I have heard you. The drought/darkness (delete as appropriate) is over. I have returned, albeit briefly, to quench your thirst for ramblings, quenches and of course, thirsts, or my name's not Ray 'The Thirst-Quencher' Mairlot*.

*I will continue to proclaim that is my name up to, but not beyond, the point of being asked to prove it.

While once my time was abundant, it is now taken up by (and I'm happy to say, will continue to be taken up by) freelance work, but I did manage to steal a few hours away to work on a small script at the weekend. Or one of the weekends. I forget which one and it's not really important to telling you what I worked on. What I'm trying to say is, it's an extraneous detail that doesn't deserve to be expanded on. Let's just say a weekend and be done with it. Embrace the ambiguity.

The script I made is currently a standalone script, but if it proves to be worthwhile it will be packaged up as part of Animated Render Border (my add-on on the Blender Market), upgrading it to its third and, probably, final version. What always improves something? More of that thing! In this case, that means more render borders, i.e., being able to set multiple regions of the image to render, instead of just one. My test was successful, as the image below shows; two borders are set using a temporary UI and then rendered into one image:

There are a few hurdles before I can say it will definitely be released, such as trying this out on larger scenes. Essentially, the script renders the frame twice and then*** combines the results, so if a frame takes a long time to build the BVH or do some volume pre-processing then any time saved by doing a border render might be lost by having to do this pre-processing twice.
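As a rough sketch of that render-more-than-once idea (not the actual add-on code; the border values and file paths are made-up examples, and combining the passes into one image is left out):

```python
import bpy

# Hedged sketch: render the same frame once per border region, saving each
# pass to its own file. Combining the passes into a single image is a
# separate step, omitted here.
borders = [
    (0.05, 0.40, 0.55, 0.95),   # (min_x, max_x, min_y, max_y) in 0..1 space
    (0.60, 0.95, 0.10, 0.45),
]

scene = bpy.context.scene
scene.render.use_border = True
scene.render.use_crop_to_border = False   # keep full-frame output so the passes line up

for i, (min_x, max_x, min_y, max_y) in enumerate(borders):
    scene.render.border_min_x = min_x
    scene.render.border_max_x = max_x
    scene.render.border_min_y = min_y
    scene.render.border_max_y = max_y
    scene.render.filepath = "//border_pass_%d.png" % i
    bpy.ops.render.render(write_still=True)
```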
***30th April! I remembered, that's when I made the script. Thank goodness. Anyone who was worried about the lack of detail before can now calmly recede from the depths of ambiguity, back into the comfort of specificity.

When will I get to work on it again? I don't know. Will I probably start another experimental script before finishing this one? Yes, it's more than likely. But, for now, it's back to having no time, which is really no complaint at all, because I can attest to the fact that getting paid to do something you enjoy is far better than not getting paid. And with that, perhaps somewhat abruptly, the end.

Ray.

In my last post I noticed that the forearm models looked a little less than perfect. The forearms were one of the first parts of the model I made and originally I really wanted to make sure they conformed to the reference images I had. Coming back to them now, I think I made them conform a bit too much. Even though they appeared to fit the references, they were a weird shape when viewed from the top. I thought it would be better to reshape them to something more logical even if they didn't match the reference images as well:

As it turns out, having updated the models to a better shape, they do still manage to fit the reference images somehow. It makes sense that the more logical shape is the correct shape, so it's reassuring that the references seem to confirm that.
I have a slight worry that this project is a bit like 'Painting the Forth Bridge', in that once I finish one part enough time will have passed that another part will seem outdated or messy enough to need re-doing. I don't intend to redo a lot more of it, though I think some of the chest panels need refining. I actually have some freelance work over the next few weeks; I'm not sure how much of my time it will take up, but it likely means less work done on this project for a while. Such is life.

Ray.

This post is mainly about using the Shrinkwrap modifier for modelling, which is below, but there's also a quick update on some of my projects right at the end.

My Favourite Modifier

Before I started the 'Heartbreaker' project I probably wouldn't have said that the Shrinkwrap modifier is one of my favourite modifiers in Blender (not that anyone had actually asked me, or likely ever would). Maybe in the top 10, but only just. I would probably have gone for one of the classics, like the Subsurf or the Mirror; you just can't go wrong with those two. However, that's all changed. If anyone ever asks me*, I will say my new favourite modifier, at least regarding modelling - which is what I'm doing most of the time - is the Shrinkwrap. It has become my go-to, problem-solving, reliable friend.

*Which they won't.

Are You Insane? And What Does The Shrinkwrap Modifier Even Do?

No, I am not. A valid question (the second one), thank you (me) for asking. In its simplest form, the Shrinkwrap modifier is tasked with snapping the current object onto the surface of another object. It also has the ability to only snap specific vertices if you specify a vertex group. Here we see a simple subdivided plane being shrinkwrapped to the surface of a sphere:

I think the Shrinkwrap modifier was probably first created as a retopology tool, the snapping allowing you to easily create new, low-poly geometry over your high-res model without having to constantly think about manual snapping. Considering this, I'm not sure that 'Shrinkwrap' is actually the best name for it; maybe 'Snap' would have been better. The Snap modifier has a ring to it. But then, who am I to start renaming things? Sure, I was technically the best renamer in the area of Greater London in the years 1993 - 1998*, but I have no official certification for that, so I'll leave actual naming and any subsequent renaming to those that do.

*I retired from the gruelling world of competitive renaming undefeated and vowed never to return, due to the physical stress it caused my body.

How Does Retopology Tie In To Hard-Surface Modelling? Are You Sure You're Not Insane?

It's not so much that retopology fits into hard-surface modelling; it's more that some hard-surface modelling scenarios and retopology share some common needs. Also, please stop asking if I'm insane. There are two scenarios that can be very time-consuming when modelling:

1. Getting several separate pieces to conform to the same underlying surface or profile.
2. Cutting details into a curved surface without losing its smooth curvature.
This is very similar to what retopology requires, and these two problems both happen to be the Shrinkwrap modifier's strength: conforming vertices to a specific surface.

That's All Well And Good, But Show Me Some Specific Examples

Please don't be so demanding. I've got some examples from 'Heartbreaker', the project I just literally won't shut up about. The 'Heartbreaker' Iron Man suit has a tendency to have many separate panels that all conform to the same profile. Below, on the left, is the forearm, which is made of many pieces. They all have the same bulge and crease going through them, which would be time-consuming to model manually. Instead, I built one continuous surface to describe the surface I want my mesh to conform to, shown on the right. All the pieces on the left conform to the profile of the mesh on the right (the Shrinkwrap target):

Heartbreaker also has many examples of detailing cut into curved panels. Cutting into curved surfaces is notoriously difficult, as any sharpening edge loops on those details end up causing undesirable pinching, particularly at corners. It's also very difficult to perfectly maintain a curved surface while adding in new geometry. Here, the head remains perfectly curved despite having cut details into it, thanks to good ol' Shrinkwrap:

Here is a wider view of the top of the head on the left, with its Shrinkwrap counterpart on the right:

The 'eyebrows' in the above image were excluded from the Shrinkwrap by adding all vertices apart from the eyebrows to a vertex group and selecting it on the Shrinkwrap modifier.

The Process
*If I've already started modelling something, but decide I need a Shrinkwrap, I will sometimes duplicate the object I'm modelling, simplify it, and use it as the Shrinkwrap object. I also like to change the 'Maximum Draw Type' to 'Wire' in the 'Display' panel of the 'Object' tab in the 'Properties Editor' for the Shrinkwrap object, so you can see the object you are modelling as well (shown below). Also, you may find it useful to turn on 'Draw All Edges', also in the 'Display' panel, and 'Optimal Display' on the Subsurf modifier, if you're using one.
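For anyone who prefers to see the setup spelled out, here's a minimal Python sketch of the kind of Shrinkwrap arrangement described above (2.7x-era property names; the object names and the vertex group name are placeholders):

```python
import bpy

# Hedged sketch: shrinkwrap a detailed panel onto a simpler 'profile' mesh,
# optionally limiting the snapping to a vertex group, and display the target
# as a wireframe so the panel stays visible. Names are made up.
panel = bpy.data.objects['ForearmPanel']
target = bpy.data.objects['ForearmProfile']

mod = panel.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
mod.target = target
mod.wrap_method = 'NEAREST_SURFACEPOINT'   # snap each vertex to the target surface
mod.vertex_group = 'conform_verts'         # optional: only these vertices get snapped

# The display tweaks mentioned above (2.7x property names)
target.draw_type = 'WIRE'                  # 'Maximum Draw Type' -> 'Wire'
target.show_all_edges = True               # 'Draw All Edges'
```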