During the Coronavirus lockdown I've been trying to improve my understanding of procedural materials in Blender. During one of my experiments I remembered a video I had seen by the excellent Daniel Shiffman on the Coding Train YouTube channel about '10 Print' and I wondered if I could recreate this in Blender.
The idea is pretty simple: for each cell in a grid you generate a random number and, based on that number, you either display a forward slash or a backward slash, creating a maze-like image. Below is my result, along with an animated reveal effect.

I won't go into the details of how I made it because I made it in a very non-linear way: I started with something more complex than it needed to be, but that I understood pretty well, then simplified it further and further until the end result was something quite compact, but that I understand less. There are some screenshots below which should help, along with the blend file, with fairly good labels on everything.

You are free to use the contents of the blend file in commercial or non-commercial work as long as you don't distribute or sell the contents as-is. Please also provide attribution if you use it. Ray.
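As an aside, the 10 Print rule itself - flip a coin per cell and draw '/' or '\' - fits in a few lines of plain Python. This is just the classic algorithm, nothing Blender-specific:

```python
import random

def ten_print(width, height, seed=None):
    """Build a 10 Print maze: each cell is a randomly chosen slash."""
    rng = random.Random(seed)  # seeded so the same maze can be regenerated
    rows = []
    for _ in range(height):
        rows.append("".join(rng.choice("/\\") for _ in range(width)))
    return "\n".join(rows)

print(ten_print(40, 10, seed=1))
```

The Blender version does the same thing, only the "cells" are faces and the "slashes" are geometry driven by the material's random values.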
A while ago I was doing a series of repetitive actions in Blender - not an uncommon occurrence - and as usual, I began to think of a Python script I could write to do the task for me... But I write a lot of scripts. And the add-ons I write are often project-specific and therefore limited in use. Plus, this task was about manipulating objects and their data and applying modifiers and joining objects and moving them and setting their origins and removing doubles and and...and that sounded like a terrible amount of effort.

I thought to myself that it would be so much simpler if I could simply take the actions I was already doing - which were pretty straightforward when done through the UI - and just record them. Other software has this functionality: Word and a few of the other Office products have 'Macros', while Adobe's offering is 'Actions'. Both produce the same result: record a sequence of tasks, save it and then play those actions back any time on the selected content.

I looked into whether Blender had any similar functionality. I knew it didn't really have anything like that built in. Common advice is to pull down the Info Editor window, copy the actions that are listed there, paste them into the Text Editor and run them. This does technically work, but the workflow is itself repetitive. There's even an add-on that comes with Blender called 'Macros Recorder', which automatically records a series of steps to a text file and inserts them into a script ready to be run. But that add-on doesn't have the nicest of UIs and at best I would describe it as 'cryptic', even though, once worked out, it is quite simple to use. Nevertheless, when comparing these options* with Word's macros or Photoshop's actions, they fall somewhat short, which tempted me to make my own add-on.

*I've just found another add-on, here, which offers slightly more features, but still relies on the copy-from-the-Info-Editor workflow.
Now, you may be thinking "But weren't you complaining earlier that you write a lot of add-ons?" and the answer is "Yes", or at least "Yes, but..." I do write a lot of add-ons and I have, in the past, sometimes got caught up in writing an add-on as a bit of a distraction instead of just getting on with whatever task the add-on is meant to be helping with. But if the add-on is helpful to many projects instead of just one, and if I can see that it might be popular and useful enough to potentially sell, then I give in and allow myself a bit of a play with some code. The result of this playing is Macro Maker, the add-on I wrote: Not exactly the most exciting screenshot, I'll give you that, but it gets across the basic idea. So far the add-on has the following functionality:
One important thing to note is that it's only possible to detect 'actions' i.e. Blender's operators - basically anything that's a button or a tool from a menu. It can't automatically detect property changes. So you could add an Array modifier, but you couldn't set its Count property. I've got around this by providing an entry in the right-click context menu for properties called 'Add Property to macro': This adds the property to the macro. I can understand that might seem a bit fiddly, but I don't think there's a way around it. Here's the most up-to-date screenshot of the UI, showing a few recorded macros: However - and this is a rather large 'however' - I'm not going to do any more work on it, at least not for the minute. It works pretty well, I'm able to use it for some tasks, but it still needs a lot of work. For example, I can't:
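To make the operator-vs-property distinction concrete, here is a language-level sketch of how a recorded macro might be stored and replayed. Every name here is hypothetical - this is not Macro Maker's actual code, just the shape of the idea:

```python
# Hypothetical sketch: a macro is an ordered list of recorded steps.
# Operator steps replay a callable; property steps replay an assignment.

def make_macro():
    return []

def record_operator(macro, func, *args):
    # Operators (buttons, menu items) can be detected automatically.
    macro.append(("operator", func, args))

def record_property(macro, target, attr, value):
    # Property changes can't be detected automatically, so they are
    # added explicitly (the 'Add Property to macro' menu entry).
    macro.append(("property", target, attr, value))

def replay(macro):
    for step in macro:
        if step[0] == "operator":
            _, func, args = step
            func(*args)
        else:
            _, target, attr, value = step
            setattr(target, attr, value)
```

The key point this illustrates: the two kinds of step are recorded through different routes, but replay is a single ordered pass over the list.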
There are also a few technical things that need to be changed, but I think for the most part it would just take time. As for why I'm not continuing to work on it, the answer is 2.8. Blender is undergoing heavy development - heavy enough that the very specific Python APIs I'm using might not even exist when 2.8 is finished (I'm already having to prepare for the fact that my existing add-ons, including the one I sell, will have to be updated, as some critical parts of the API are changing) - so I'm going to wait until 2.8 is out before I work on it again.

There is also a different problem which makes me slightly wary of further development. From talking to a few people on Twitter about the add-on, I'm worried people haven't thought about how macros actually work (in any software, not just my add-on) and that they may have difficulty adapting their current way of thinking when trying to use the add-on. Let's say I want to record a macro that takes the selected vertices in Edit Mode, adds them to a new Vertex Group, adds a new Mask modifier and assigns that newly created Vertex Group to the modifier. The steps would look something like this (having already selected the vertices):
This looks pretty straightforward, but there are actually a few problems that would stop the macro working under some (maybe most) circumstances. The macro would work fine up until step '4': "Choose the new Vertex Group". It sounds simple, but how would the macro know which modifier to add the Vertex Group to? I don't have this part of property recording working yet, but when trying to choose the Vertex Group, it would likely look for a Mask modifier with the same name as when the macro was recorded. So what happens when the macro is run a second time, or if there's already a Mask modifier? The new modifier would have '.001' appended to the end of its name and the macro, not being able to find a Mask modifier with the correct name, would fail. The important thing to realise is that when dealing with lists, the add-on will likely look for something with the same name each time, whether that's a modifier in a list of modifiers or a Vertex Group in a list of vertex groups. Potentially, this could be fixed by having an option, when recording a property, to always look for the property on the last item in the list. I'll only know if this is feasible once I resume development, and even then I have a feeling it wouldn't solve all the problems.

So, to conclude, I'll start working on Macro Maker again when 2.8 is released and I'll see if the APIs I need are still intact. There's also the problem that the toolbar as it exists now, where the add-on is placed, doesn't exist in 2.8, but I've heard that the developers are going to make sure the new UI supports 'UI-heavy' add-ons by updating the UIs of the add-ons that are distributed with Blender, so hopefully there will be a solution for add-ons that require constantly visible columns for their layouts.
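The '.001' problem can be demonstrated with a few lines that mimic Blender's unique-name suffixing (a simplified model of the behaviour, not Blender's real naming code):

```python
def unique_name(base, existing):
    """Mimic Blender's naming: 'Mask', then 'Mask.001', 'Mask.002', ..."""
    if base not in existing:
        return base
    i = 1
    while "%s.%03d" % (base, i) in existing:
        i += 1
    return "%s.%03d" % (base, i)

modifiers = []
modifiers.append(unique_name("Mask", modifiers))  # first run adds 'Mask'
modifiers.append(unique_name("Mask", modifiers))  # second run adds 'Mask.001'

# A macro that recorded the name 'Mask' now finds the *old* modifier,
# while picking the last item in the list finds the one just added.
print(modifiers)       # ['Mask', 'Mask.001']
print(modifiers[-1])   # 'Mask.001'
```

Which is exactly why a look-up-by-recorded-name strategy breaks on the second run, and why "always use the last item in the list" is the tempting (if incomplete) fix.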
In the wake of ceasing to work on this add-on, I've been working on a personal Lego project, which I'll probably talk about in the next post, but judging by my past release schedule for blog posts, that won't be for another 6 months. Ray.
I recently made my first foray into learning probably one of the most advanced Blender add-ons in existence: Animation Nodes (AN from here on). And by 'learning' I mean I've done one project in it and don't know when I'll do another.
Here is my first, and possibly last, test:
Now I have the rather unenvious task of trying to explain how I did it. Because I said I would and it seemed like a good idea at the time and now it's two thousand and ninety-three words later and I've come back up to the top of this post to write about how long it's taken and how I regret saying I would explain it. However, despite the time it's taken to write this, this isn't a step-by-step tutorial and will probably take some investigation of your own to get a similar result. I merely nudge you towards the door, you must stumble your own way through it. There are 3 main parts to re-creating the effect:
When I first started this mini-project I wasn't even intending to use AN, I was just trying to see if it were possible to fill an object with lots of other objects. It was only after looking at the sphere-filled torus that I wondered if I could animate it nicely. My point is, at least in this version, the generating of the spheres to fill the torus was done beforehand. I would eventually like to try doing this step more procedurally as well, but for now this is a much more manual process. Now, if you are ready to have some learning flung at you, proceed to the first part.

The Filling The Torus With Spheres Part
I'm not going to go into mega-detail for this part (or indeed any part), but the basic process is to fill a torus with a particle system of spheres and then use a rigid body system to allow the spheres to settle into non-intersecting positions:
After doing this project I actually found someone had already done nearly the exact same steps (explained in far more detail) in their own YouTube tutorial here.

The First Animation Nodes Part
This is the part where the real 'fun' begins:
When I first made this it wasn't immediately obvious where to start. Partly, that's because with AN, nothing is obvious, immediately or indeed ever. Nevertheless, I had heard enough about AN to know that 'Subprograms' might lead me to the effect I wanted. So there, too, is where we will begin. Here is the first section of the AN node setup, which deals with scaling the objects:

A key concept of Animation Nodes is to think about repeatable chunks of work. Instead of having to animate each object individually, we create a chunk of work (or in AN terms, a 'Subprogram') that takes in one object and animates it. We can then take that chunk of work and repeat it for as many objects as we need. The 'Loop Input' node is connected to the chunk of work and the 'Invoke Subprogram' node calls that chunk of work as many times as needed.

How does the 'Invoke Subprogram' node know how many times to run the 'Subprogram'? Well, that's what we'll set up first. After adding an 'Invoke Subprogram' node and choosing 'New Subprogram' > 'Loop', we need to change the 'Loop Input' node to accept a different input. Ideally, we need it to take in a list of objects and perform an operation on each one. Luckily, that's as easy as clicking the 'New Iterator' button on the node and choosing 'Object List' from the list of available inputs. You'll now see that the 'Invoke Subprogram' node has changed its input from 'Iterations' to 'Object List'. Far more useful. The 'Loop Input' node now knows that I will be passing it a list of objects and that I want to run the 'Subprogram' once for each object in the list.

As for the list of objects, we can use a regular Blender object group, so take all your spheres, or whatever confounded shape you've used, add them to a group (Ctrl+G) and name it something completely irrelevant, like 'Group.001'. You know, something that will be completely unintelligible when you come back to this project in a few months.
I've been silly and named my group 'Objects in Torus', which almost sounds like a useful name. I'll change it later. So to summarise, the 'Invoke Subprogram' node will be passed a group of objects and it's going to loop through the list of objects and animate them depending on what is connected to the 'Loop Input' node. Add an 'Objects from Group' node, choose the group you just created in the node's drop-down list and then connect its one output to the 'Subprogram's one input, like so:

That unnerving, unusual feeling, which you're hopefully feeling if my writing has served its purpose, is the feeling of starting to understand something, of making progress. A quite unusual feeling when first working with AN. Make the most of it, because it probably won't last long.

Now we're onto the actual animation, we want to be able to pull an individual object from that list. We already set up the iterator, and when we did this the 'Loop Input' node got a new output called 'Object'. We can now add any nodes we want to animate that individual object and the animation will get applied to all objects in our group, because the 'Object' output changes to a different object in the group each time the 'Subprogram' runs. Here are the nodes, connected to the 'Loop Input' node, that will actually animate each individual object:

The basic idea of how it's animating the object is that it looks at the location of two empties (one either side of the torus) and, depending on how close the empties are to the individual object, it should be scaled somewhere between 0 and its original size. The empties are then animated closer to and further away from the object, which means the objects will then individually scale up or down. To summarise the node setup above:
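The scale-from-empties idea just described can also be sketched numerically. The exact falloff curve here is my assumption (a simple linear falloff), not a readout of the actual node settings:

```python
def falloff_scale(obj_pos, empty_pos, radius, original_scale):
    """Scale an object between 0 and its original size based on how close
    a controller empty is: inside the falloff radius the object shrinks,
    reaching 0 at the empty itself (linear falloff assumed)."""
    distance = sum((a - b) ** 2 for a, b in zip(obj_pos, empty_pos)) ** 0.5
    factor = min(distance / radius, 1.0)  # 0 at the empty, 1 outside the radius
    return original_scale * factor
```

So an object sitting right on the empty vanishes, one outside the radius keeps its full size, and animating the empty past the torus sweeps that shrink-and-grow zone across all the spheres.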
On the left in the screenshot below, you can see the two empties selected, whose falloffs are compared to each object's location: They're parented to a larger empty which is animated diagonally along the local 'z' axis as shown above. This is how the effect happens diagonally, as the 'Object Controller Falloff' nodes are set to read the local 'z' and '-z' axes respectively. You can see on the right (above) a representation of the falloff, where in the centre it's lighter and the objects are full size, but if the empties move diagonally up and right, the imaginary dark falloff area will also move, start to cover the torus and cause the spheres to scale down.

Creating the 'Wave' Effect
The third and thankfully final part (come on, we're nearly at the end now) is to create the billowing wave effect. Here's the node network for the wave effect (I've removed the nodes that handle the scaling for the minute, just to make things a bit simpler): The basic idea is that we are taking the original location of the object and using a 'sin' wave to animate it back and forth on the 'y' axis, adding in a bit of randomness for good measure along the way. The node setup above equates to this equation: sin(xLocation - ((time + TimeOffset)/Speed)) * RandomNumber * Strength + OriginalLocation
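Translated out of nodes, that equation behaves like this. The default values below follow the post where it gives them (an offset of 40 frames, a divide by 5); the Strength and RandomNumber defaults are stand-ins:

```python
import math

def wave_y(x_location, original_y, frame,
           time_offset=40, speed=5, strength=0.3, random_factor=1.0):
    """The node network's 'y' animation as a single expression:
    sin(x - (time + offset)/speed) * random * strength + original_y."""
    return (math.sin(x_location - (frame + time_offset) / speed)
            * random_factor * strength + original_y)
```

Note that when x_location equals (frame + time_offset) / speed, the sine term is zero and the object sits exactly at its original 'y' - which is why each object's own 'x' location shifts its phase in the wave.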
For those that don't know, the 'sin' function produces a wave depending on the number fed to it. Seeing as we want an animated wave we need to pass in a constantly changing value, so we pass in the time value ('Time Info' node), offset it just so the wave starts on the frame I want (the 'Math' node adding 40 to offset by 40 frames), turn down the effect a bit ('Math' node dividing by 5) and finally we subtract the object's original 'x' location so that each object gets a slightly offset sin value (otherwise they would all move the same amount).
After that we multiply in a bit of randomness between 0.5 and 1 so they each move a bit more individually, turn down the overall effect by multiplying by a number less than one, add it to the original 'y' value (so they start from their original 'y' positions) and combine the 'y' location with the original 'x' and 'z' locations, before it's finally used by the 'Object Transforms Output' node, which sets the location for the current object. Phew. It's possible, likely even, that this equation could be simplified, but such was the joy of finally getting something which resembled a result that I couldn't bear to touch anything and accidentally break it.

Add it all together
Now we've calculated the location and the scale we can add all the nodes together, producing this wonder: Really, I should have added in a few more images, gifs or some other animated doodahs to break up those chunks of text, but golly gosh, I really couldn't be bothered; it took long enough just to write all of this without faffing about making nice illustrative, understandable and helpful images.

So there we have it, a completely clear, 100% explained, nothing vague, detailed, step-by-step guide of how I did it. Apart from all the parts I skimmed over and left to you, the avid reader, to figure out for yourselves. Which, in fairness to me - and I have been assured that I definitely deserve fairness - is only really the animating of the empties and the adding and connecting of the nodes. Well done me. And well done to you for getting this far. But mainly well done me. Because let's be honest, it's me that's done most of the work in this transaction. I've had to use all 10 fingers to type; you've just had to use two eyes. Or whatever number of eyes you have available to you. Either way, regardless of the number of operational eyes, 10 fingers is more. No one's got 10 eyes. Unless you're reading it in a group. Bugger. Ray.

Over the past few months I found the need to create two more add-ons for Blender 3D: 'Timecode' and the aptly, but rather unexcitingly named 'Move Render Layers'.

Timecode

'Timecode' is a small add-on that adds the ability to navigate the timeline by inserting a timecode into the Timeline Editor's header (in the form of HH:MM:SS:FF): As part of the add-on there is also a small label that appears above the current keyframe/selected-object label, showing the current timecode: Note: One thing I still have to do is get the Timecode label to shift over when the 3D View Toolshelf is open. Currently, the label will be covered when the toolshelf is open. You can get Timecode, here.
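For the curious, the frame-to-HH:MM:SS:FF conversion at the heart of an add-on like this is pleasingly simple. This is a generic sketch, not Timecode's actual source:

```python
def frame_to_timecode(frame, fps):
    """Convert an absolute frame number to an HH:MM:SS:FF string."""
    total_seconds, frames = divmod(frame, fps)   # whole seconds + leftover frames
    minutes, seconds = divmod(total_seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return "%02d:%02d:%02d:%02d" % (hours, minutes, seconds, frames)

print(frame_to_timecode(90, 24))   # 00:00:03:18
```

Going the other way (timecode string back to a frame number, for typing into the header) is the same arithmetic in reverse.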
Move Render Layers

I've always been slightly annoyed that, unlike the majority of the other lists in Blender, the render layer list cannot be re-ordered, and so eventually, after nurturing that annoyance, it bore fruit in the form of two little arrows: Yes, this is perhaps the epitome of the 'First World Problems' meme, but you just wait till you're working on a complex project with one shadow render layer at the top of the list and one at the bottom, both of them destined to remain separate, and then we'll see who's laughing.
It will be me. "Ha, ha, ha ha, ha," I will say, actually saying the words instead of laughing. And you will say "How did you get in my house?" and I'll say "That's not important." Then, I'll reach down (because you'll have already fallen to your knees in despair), cup your cheek with my hand, softly brush away your tears with my thumb and say "No, I was just joking, that was my fake laugh. You can still use my add-on". And you'll get up, feeling slightly bashful that you got so upset, wiping away the remainder of your tears (because I did a bad job of wiping them away and really just smeared them) and I'll give you a little reassuring touch on the shoulder to say "It's ok." We'll make eye contact, break into smiles and walk off into the sunset. And it will be weird, not least because it wasn't even close to sunset when I first appeared and you didn't really even want to leave your house, but you accept this is a new future where all kinds of things are possible, like moving render layers up and down. Alternatively, if you would prefer not to enact that little tête-à-tête you can download the add-on, now, from here. Ray.

A few months ago I had a silly idea, and as with all my silly ideas I spent far too much time on it. I knew it was silly. I knew because whenever I thought about it I sniggered to myself. I also knew that if I worked on it the pay-off would be relatively small (compared to the work it would require), but I couldn't resist. I can't remember what made me think of the idea; maybe it was that I had been working with Modal Operators in Blender - tools that allow continuous execution and interaction from the user instead of the more common single-run tools. A modal operator allows you to listen for certain events, run code continuously and update the interface while it's running. These three ingredients are also necessary for something else. Games. I wondered if it would be possible to build a game utilising parts of Blender's interface.
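Stripped of the Blender specifics, those three ingredients map onto a plain event loop. Here is a conceptual sketch in ordinary Python - everything in it is hypothetical and illustrative, not the game's code:

```python
def run_modal(events, state):
    """A modal operator in miniature: consume events one at a time,
    update state continuously, and 'redraw' after every step."""
    frames = []
    for event in events:
        if event == "ESC":            # 1. listen for certain events
            break
        state["ticks"] += 1           # 2. run code continuously
        frames.append(dict(state))    # 3. update the interface (here: record a redraw)
    return frames
```

In Blender the loop is driven by the window manager feeding events (including timer ticks) into the operator's modal method, rather than by iterating a list, but the shape is the same - which is why a modal operator can host a game at all.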
So that's what I set about doing. To quote from the project's eventual 'Readme' on Github:
The game is available on Github here, along with instructions on how to run it yourself. I released it, awaiting the imminent deluge of favourites, retweets and messages such as "You're so clever!" and "You're amazing!". Some people ask me how I stay so modest. Unfortunately I can't explain, it's just one of my many, many natural talents. However, that didn't happen (the social media frenzy, not the modesty. I am always modest). I posted it on Twitter, got a few favourites, waited a little while and then quietly came to the conclusion that perhaps it wasn't as funny as I had first thought and was suddenly self-conscious of the amount of time I had misguidedly sunk into this project. You see, yes, I may get silly ideas, but these small projects, while enjoyable for me, are essentially done to get my name out there a bit more. If I'm not getting any monetary value from a project I at least hope to get some promotional value from it. While I had hoped Node Pong would be popular, I had also experienced a slight numbness to it as the project neared its end. Now, yes, I would later conclude that at least part of that numbness was my leg falling asleep, but it was also the novelty of the original idea wearing off. It had become normal to me and it suddenly seemed quite reasonable that maybe it seemed normal to other people too, leading to its mediocre reception. And then, some 4 hours later...I got a trickle of favourites and retweets. And then I got more. And soon the trickle became a river and the river a flood, but the nice type of flood, not the killing type of flood. Potamology aside, soon the most well known members of the Blender community were commenting on, favouriting and re-sharing my silly little idea. It turns out it was funny and maybe a little clever too (more modesty, it is in fact very clever). Several weeks later it now stands as the most popular and far-reaching thing I have done.
I'm sure there's some valuable lesson to be learnt here about building things for yourself instead of requiring the validation of other people, or about continuing with projects even when you become numb to them, but I haven't really got time to make those points. I have other silly ideas to pursue. Ray.

During the freelance work I've been doing for the last couple of months I had to do a lot of long renders of animations. When I'm doing a really long render I tend to use the command line to do a 'background' render so that Blender's UI doesn't have to be visible (which apparently saves a bit of memory), and as with most things recently, that caused me to write another add-on...

The Manual Way

Before I get onto the add-on I'll take you through the manual rendering process I used to follow. To do a 'background' render (more commonly referred to as a 'command line' render) you open up a Command Prompt (or something similar for non-Windows users), navigate to the Blender installation directory and use something like this: To break things down, that's:
If I want to render multiple files then I would create a Windows Batch file with the following commands (I've simplified the paths just so they would fit on the page nicely):
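The batch file's contents aren't reproduced here, but the same queue can be sketched from Python with subprocess. The Blender path and blend files below are hypothetical placeholders, while -b (run without the UI) and -a (render the full animation) are Blender's real command-line flags:

```python
import subprocess

BLENDER = r"C:\Blender\blender.exe"                          # hypothetical install path
JOBS = [r"C:\work\shot_01.blend", r"C:\work\shot_02.blend"]  # hypothetical files

def build_command(blend_file):
    # -b: 'background' render without the UI, -a: render the whole animation
    return [BLENDER, "-b", blend_file, "-a"]

for job in JOBS:
    print(" ".join(build_command(job)))
    # subprocess.run(build_command(job))  # uncomment to actually render the queue
```

Each file renders to completion before the next starts, which is exactly the behaviour the double-clicked batch file gives you.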
I would then just double-click the Batch file, which would automatically open a command prompt and start rendering. Navigating to Blender's folder to open a command prompt, or creating a batch file with updated filepaths and parameters, each time I wanted to render became a bit time-consuming - I wasn't even doing that many renders at that point - so I decided it might be worthwhile (and fun) to create an add-on with a UI to handle this for me. Out of the primordial code came:

Batch Render Tools

Features:
It's available to download from Github, here. One caveat is that at the minute it's still Windows-only. The 'readme' over on Github is quite extensive (and if I say so myself, quite excellently formatted) so head over there if you want to know how to use every little feature, and given the amount of time it took to create that 'readme', I really suggest you do. 'Batch Render Tools' also has a small secondary panel which serves as a shortcut for opening a Command Prompt in the Blender installation directory, which I also find quite useful: