We often use the term 3D rendering to describe the entire process behind producing 3D artwork, which serves us well when trying to communicate what we do concisely to the layperson. Still, for those in the early phases of learning 3D, making some distinctions can do wonders for their progress.
In my early years of studying 3D, for example, I put most of my focus on learning to make models. As far as I was concerned, 3D rendering just turned those models into images. Later on, I became more acquainted with the concepts of texturing, lighting, and composition, and while my renders noticeably improved, I soon reached a plateau. No matter how detailed my models and textures were, the final image lacked something. I would see other renders online that featured simple subject matter and were still more visually impressive than any character render I ever made. Eventually, I realized the missing ingredient was somewhere in the domain of rendering.
Just as a traditional artist understands the term rendering as the careful articulation of light on form, texture, and detail, so too must we regard rendering not only as a computational task for our render engines but as the stage where we work to bring all of our elements together into a cohesive and believable image (or sequence). In this article, we’ll go over both sides to better understand this sometimes underexamined stage (at least among beginners) of 3D production.
How does 3D rendering work? (an extremely oversimplified and reductive explanation)
There are different methods by which our 3D programs create the final image. Some are deprecated, while others continue to be developed because of the advantages they offer to the different kinds of media 3D applies to. The two most prominent methods today are:
Offline path tracing - where an algorithm simulates the paths of light rays through a scene, estimating the total light transport through a mathematical process called Monte Carlo integration. As each ray hits an object's material surface, the surface properties determine how much light continues on toward the camera. This is an oversimplification of the process, but the advantage of this method is that it naturally recreates the physical effects of light, such as indirect lighting, soft shadows, ambient occlusion, volumetrics, and more.
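To make the integration idea concrete, here is a minimal sketch (not any particular engine's code) that Monte Carlo estimates the light leaving a single diffuse surface point lit by a uniform sky. The function names and the uniform-sky setup are illustrative assumptions; real engines trace full paths through the scene.

```python
import math
import random

def sample_hemisphere(rng):
    """Uniformly sample a direction on the unit hemisphere (z >= 0)."""
    z = rng.random()                     # cos(theta), uniform in [0, 1)
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def shade_point(albedo, sky_radiance, samples, seed=0):
    """Monte Carlo estimate of outgoing radiance at a Lambertian surface
    point under a uniform sky: Lo = integral of (albedo/pi) * Li * cos(theta).
    Uniform hemisphere sampling has pdf 1/(2*pi)."""
    rng = random.Random(seed)
    total = 0.0
    pdf = 1.0 / (2.0 * math.pi)
    for _ in range(samples):
        x, y, z = sample_hemisphere(rng)   # z is cos(theta)
        brdf = albedo / math.pi
        total += brdf * sky_radiance * z / pdf
    return total / samples
```

With a uniform sky of radiance 1, the estimate converges to the surface albedo; fewer samples give the same answer on average, just with more noise, which is exactly the noise/sample-count trade-off discussed later.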
Real-time - where every object in the scene is broken down into triangles, which are projected onto the screen and rasterized into the pixels you see. This method is used in video games and similar media, where what is shown on the screen must update immediately after an event, such as moving a character in a game world. With today’s technology, real-time rendering can incorporate ray tracing to achieve a level of accuracy previously seen only in offline rendering. However, some effects still need to be faked, whereas in path tracing render engines they come naturally.
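The core of rasterization, the coverage test that decides which pixels a triangle touches, can be sketched in a few lines. This is an illustrative toy in plain Python, not how any GPU pipeline is literally written; GPUs run this test massively in parallel in hardware.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: positive when point P is to the left of edge A->B."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Return the set of (x, y) pixels whose centers fall inside a
    counter-clockwise 2D triangle -- the basic coverage test of rasterization."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5   # sample at the pixel center
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.add((x, y))
    return covered
```

From the covered pixels, a real pipeline would interpolate vertex attributes and run a shader per pixel; the point here is only that the whole image is built from per-triangle, per-pixel tests rather than simulated light paths.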
It's important to see projects to completion, learn from them and move on, but waiting hours on end to see what the project will look like will make it harder to do that. Keeping up motivation can also be extremely difficult over several iterations of the final stretch of a project, where we need to render at a quality close to our final render settings. We all have different interests, and no subject matter should be considered superior to others, but I think it would be fair to say that many artists lose sight of their initial goals and stay in their comfort zones mainly because of render times. Knowing some optimization techniques can mitigate that.
Today, offline render engines are much faster, and real-time rendering almost bypasses the problem entirely. However, optimizing scenes is still crucial to professionals and students alike. We won't go into much detail on scene optimization in this article, but here are some basic considerations that can save hours of waiting for renders to finish:
A compelling image tells a story; to the benefit of all visual artists, the best stories are never overstated. When planning scenes, always ask yourself whether each element is necessary to what you want to communicate. This will allow you to focus on what people will really be looking at in your images and in some cases, even give you a better understanding of what your image is really about.
(Hopefully) This render depicts a monarch and his vizier receiving a guest in perhaps not the highest spirits. Compositing a shadow in the foreground not only made the scene less render-intensive than if an actual extra character were in its place but also made for a cleaner composition.
Every polygon, image texture, modifier, HDR map, simulation, and particle system (to name a few factors) increases the memory your machine consumes when executing a render. Determine which elements the viewer of your work will focus on, which ones fade into the distance or are partially obscured, and simplify the latter.
This can mean decimating their geometry to produce low-poly versions, using smaller (or fewer) image textures, or reducing the number of instances generated by particle systems. You may find that your scenes render much faster without any real cost to the overall quality and detail in the image. This also affects, to an extent, how smoothly you can navigate your project in the viewport, which is just as important when iterating on a project.
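As a back-of-the-envelope check on how much a texture downgrade buys you, here is a small, hypothetical helper that estimates the uncompressed memory footprint of an image texture (the 4/3 mipmap factor is a common approximation, not a figure from any specific engine):

```python
def texture_mb(width, height, channels=4, bytes_per_channel=1, mipmaps=False):
    """Approximate uncompressed in-memory size of an image texture in MB.
    A full mipmap chain adds roughly one third on top of the base level."""
    base = width * height * channels * bytes_per_channel
    total = base * 4 / 3 if mipmaps else base
    return total / (1024 * 1024)

# An 8-bit RGBA 4096x4096 texture vs. a 1024x1024 stand-in:
full = texture_mb(4096, 4096)    # 64 MB
small = texture_mb(1024, 1024)   # 4 MB
```

Halving a texture's resolution quarters its memory use, so dropping from 4K to 1K is a sixteen-fold saving per texture, which adds up quickly across a full scene.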
On the left is a bust rendered with 4096x4096 textures and a polycount of 8786 quads.
On the right is a version of that bust with roughly half the polycount and 1024x1024 textures. While there are some visible changes in quality, the bust on the right would more than suffice as a substitute in a shot where this asset is not the point of focus.
Most render engines have presets for the number of samples calculated when rendering your scene. The more samples, the cleaner the render. However, with the modern denoising features available in most renderers, you may be rendering more samples than you need. Try incrementally decreasing your sample count after your first render until you find a value that significantly lowers render times with no real loss in image quality.
Denoising was applied in both cases. Halving the sample count hardly makes a difference to the quality of the image. Denoisers are great for early to mid-iterations of a project but should be used carefully for the final render since some optical effects can be lost.
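Why does halving the samples cost so little? The noise in a Monte Carlo estimate falls roughly with the square root of the sample count, so quadrupling samples only halves the noise. The toy experiment below (a stand-in "pixel" averaging random light contributions, not a real renderer) demonstrates that relationship:

```python
import random
import statistics

def render_pixel(samples, rng):
    """Toy 'pixel': the Monte Carlo mean of a random light contribution.
    The true value is 0.5; fewer samples means a noisier estimate."""
    return sum(rng.random() for _ in range(samples)) / samples

def noise_level(samples, trials=2000, seed=1):
    """Standard deviation of the pixel estimate across many trials."""
    rng = random.Random(seed)
    return statistics.pstdev(render_pixel(samples, rng) for _ in range(trials))
```

Measuring `noise_level(64)` against `noise_level(256)` shows roughly a 2x noise reduction for a 4x render-time cost, which is why a denoiser applied to a moderate sample count is so often the better bargain.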
This video by Decoded perfectly illustrates the importance of allocating time and work to the elements in your scene according to their prominence in the final result (and more!).
Online render farms are essential for meeting deadlines and for big animated projects. They render a project’s frames simultaneously across a network of powerful computers, which usually means you can get hundreds of frames rendered in the time it takes a single frame to finish on your own machine.
While using render farms cost-effectively is a skill in itself, some cloud rendering services, including ours, come with a dedicated support team of specialists to help you every step of the way. If you or someone you know could use some extra rendering power, check out our online render farm!
If you want a brief primer on the benefits of render farms, take a look at this article.
With even a basic understanding of reducing render times, we can render tests more freely and spend more time evaluating the resulting image.
Strictly speaking, very few creative aspects of 3D rendering belong to this stage of the production pipeline alone. When we iterate on a project, we use our renders to evaluate the work and find what to improve, and often that will involve other parts of the pipeline: some tweaks to a model, adjustments to lighting, and so on. But we can argue that once we’re at the rendering stage, we shift our focus from developing the individual elements in our images to how they come together to form the whole of the work.
We might adjust our global exposure or the amount of motion blur in a moving shot, try working in different color spaces, or weigh up optical effects that contribute to the realism of a scene (like caustics).
We can also set up render passes to isolate different light contributions, which we can then adjust individually in compositing, opening the door to stylistic choices.
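A sketch of the idea, under the simplifying assumption of purely additive passes for a single pixel value (real compositing packages work per-channel across whole images, but the arithmetic is the same):

```python
def composite(passes, gains=None):
    """Recombine additive light passes (e.g. diffuse, glossy, emission)
    into a final pixel value, optionally scaling each pass by a gain --
    the kind of per-pass grading that compositing tools allow."""
    gains = gains or {}
    return sum(value * gains.get(name, 1.0) for name, value in passes.items())

pixel = {"diffuse": 0.40, "glossy": 0.15, "emission": 0.05}
beauty = composite(pixel)                    # the unmodified render
punchy = composite(pixel, {"glossy": 1.5})   # boost the specular highlights
```

Because the passes sum back to the beauty render, we can brighten just the glossy contribution, cool down only the emission, and so on, without re-rendering the scene.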
Only by seeing the final render do we become aware of the subtler opportunities for improvement available to us, and so ultimately, the 3D rendering phase is where we get to art direct ourselves. Personally, I find going through a list of key considerations is a great start to uncovering said opportunities:
Whether or not photorealism was the goal, if viewers can’t lose themselves in an image, the culprit is usually a discrepancy from what we perceive in the natural world. Make sure everything is scaled correctly, textures have the appropriate fidelity relative to their distance from the camera, and the objects in the scene are affected by their environment.
The render above, for instance, uses objects that were not true to real-world scale, while the lights and cameras were. The textures don’t seem affected by the environment in any way (everything is too clean). As a result, the subjects look more like action figures than living things.
An arresting render is more than its subjects. A palpable tone, mood, or narrative makes even a simple product shot captivating. Our layout, color choices, and contrast heavily determine whether we see more than what is shown.
In this render, we see a portrait shot of a woman situated somewhere dim with a blank expression.
With some camera adjustments, lighting changes, and a vignette, we see more of her weathered face and piercing eyes, which might at least convey a sense of mystery and melancholy.
Unfortunately, the end result feels a little overdone, and a lot of the soft quality of the skin from the previous version was lost, which brings us to…
Working on a project for prolonged periods can numb the senses and make it easy to overstate things. Our pride in how well we modeled a certain prop might affect how much attention we bring to it to the detriment of the whole scene. A brilliant 3D artist once told me if you feel like you hit the right number for a shader value, light intensity, or exposure setting, reduce it by 0.5. Those words will serve you well!
So what is 3D rendering? It is the processing of data into pixels, the process of instructing a machine to process that data such that a balance is achieved between efficiency and quality, and the subtle art of fine-tuning a project to completion through the iterative analysis of that data. The more often we dedicate time to these aforementioned processes, the faster we can progress toward doing the best work we can.