It would be wrong to cover the power of the GPU without explaining just what it is. Initially, the meaning of this acronym can be a bit of a mystery, conjuring up images of complicated machinery – unless, of course, your knowledge set is that way inclined. GPU stands for Graphics Processing Unit, as opposed to CPU, or Central Processing Unit (more on that later). If I said the words "graphics card", this would provide an abundance of clarity to any dedicated PC gamer – of which I’m not, PC “master race” be damned. No, I was unfortunately led to believe Mac was far superior for 3d animation, and nearly shed a tear when a friend said they’re gonna get Borderlands 3 on PC. Technically a graphics card is not the GPU itself; rather, it may include one or more GPUs as components. Still, visualizing those semi-futuristic rectangular things with the dual (or even triple!) fans gives you a good idea of what we’re getting at.
While it’s true that an individual CPU core of a computer or games console is good at performing a large variety of tasks sequentially very fast (making me a nice cup of tea is sadly not one of them – yet), and a single core of a GPU is slower and simpler than those of a CPU, this is made up for by the sheer volume of GPU cores and their ability to work in parallel. For example, a top-notch 18-core Intel i9-7980XE CPU will set you back at least £1,785 (US$1,843), while the 4,352-core self-described “ultimate gaming GPU”, the Nvidia GeForce RTX 2080 Ti, costs £1,099 ($1,199) – roughly a third cheaper for over 241× the cores. This quantitative approach becomes all the more valuable when you consider the technological limits of improving an individual core; you can only do so much.
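To make that contrast concrete, here’s a toy Python sketch – illustrative only, with made-up function names, and a four-worker process pool standing in for the thousands of hardware cores a real GPU has. The point is that per-pixel work is independent, so it doesn’t matter whether one fast core grinds through it in order (the CPU style) or many slower workers split it up (the GPU style):

```python
from multiprocessing import Pool

def shade_pixel(i):
    # Stand-in for per-pixel work: each pixel is computed independently
    # of every other, which is exactly what makes it parallelizable.
    return (i * i) % 255

if __name__ == "__main__":
    pixels = range(10_000)
    # Sequential: one fast core handles every pixel in turn (CPU style).
    cpu_style = [shade_pixel(i) for i in pixels]
    # Parallel: several workers each take a slice (GPU style, in miniature).
    with Pool(4) as pool:
        gpu_style = pool.map(shade_pixel, pixels)
    # Same image either way; only the wall-clock time differs.
    assert cpu_style == gpu_style
```

Python processes are a crude analogy, but the shape of the trade-off is the same: when the work splits cleanly, throwing more (simpler) workers at it wins.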
With the word simulation soon comes the idea that we are living in one; we have The Matrix (1999) to thank for that. We’re not talking about that kind of simulation, but the at-times-beautiful visualizations hiding mathematical processes that range from the relatively simple to the mind-bogglingly complex. The most famed simulations replicate the interactions of celestial objects such as stars, galaxies, or black holes. Via supercomputers, they allow us to speed up the colossal timescales of the universe and view sections of it as they once were, or as they will one day become. Of course, you don’t need a supercomputer to perform a simulation of your own; after all, most 3d software comes with the ability built in to some degree.
By messing around with the properties and interactions of a bunch of floating balls or dots, you can simulate all four of the classical elements and beyond. If you want to create as high-quality a simulation as reasonably possible, you’re gonna need a lot of power and some damn good software. This is where plugins such as Phoenix FD for Maya & 3ds Max (£40 per month) or Trapcode Form for After Effects ($199) come in.
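To show how simple the core idea is, here’s a minimal, hypothetical Python sketch of those “floating balls” – nothing like what Phoenix FD or Trapcode Form actually solve internally, just the basic update-every-particle loop that all such simulations build on:

```python
# A bare-bones particle step: dots pulled down by gravity, bounced off
# the floor. Each particle is (height, vertical velocity).
GRAVITY = -9.81   # m/s^2, acting on the y axis
DT = 1.0 / 30.0   # one step per frame at 30 fps

def step(particles):
    """Advance every (y, vy) particle by one frame."""
    out = []
    for y, vy in particles:
        vy += GRAVITY * DT       # gravity changes the velocity...
        y += vy * DT             # ...and velocity changes the position
        if y < 0.0:              # hit the floor: bounce, losing some energy
            y, vy = 0.0, -vy * 0.5
        out.append((y, vy))
    return out

particles = [(5.0, 0.0), (2.0, 1.0)]  # drop two balls from different heights
for _ in range(30):                    # simulate one second of falling
    particles = step(particles)
```

Swap in millions of particles, fluid pressure, and collisions between them, and you can see why the embarrassingly parallel GPU is the natural home for this kind of work.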
If you want your simulation to look not only great but render as fast as possible, you’re gonna need GPU power. Don’t just take my word for it; a 2017 paper by members of the NASA Ames Research Center, NIO & the USRA states “The GPU-based implementation provides a significant advantage and scales much better than the purely CPU-based implementation. With a small number of samples…the CPU-only approach is preferred. For larger numbers of samples, the GPU approach provides an order of magnitude improvement.” Think of the GPU as an army of master swordsmen, whereas the CPU is a trained assassin capable of dispatching foes one by one with a variety of weapons.
With the power-devouring waiting game that rendering usually is, real-time rendering may sound like nothing more than a pipe dream. But it’s actually something many of us see regularly and take entirely for granted. Real-time rendering is central to today’s videogame industry, and it relies heavily on GPU power; the visual splendor of today’s games demands it. Everything from the ever-changing XYZ position of your character, to the lighting, textures, and various other simulations such as fire and gravity, needs to be re-rendered time and time again within fractions of a second. As for real-time previz, you may be surprised to know it has been around since at least 2006, when it was first developed by ILM.
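As a rough sketch of what “within fractions of a second” means in practice, here’s a hypothetical Python frame loop. The names and numbers are illustrative, and a real engine hands the heavy lifting to the GPU rather than sleeping in Python – but the budget arithmetic is the same: at 60 fps, everything must be redone in about 16.7 ms:

```python
import time

def render_frame(frame, state, dt):
    # Stand-in for re-rendering everything: character position, lighting,
    # textures, fire, gravity... advanced here by one time step.
    state["x"] += state["speed"] * dt
    return f"frame {frame}: character at x={state['x']:.2f}"

def game_loop(frames=3, fps=60):
    """Re-render the whole scene once per frame, inside a fixed time budget."""
    budget = 1.0 / fps                  # ~16.7 ms per frame at 60 fps
    state = {"x": 0.0, "speed": 3.0}    # character moving at 3 units/s
    for frame in range(frames):
        start = time.perf_counter()
        render_frame(frame, state, budget)
        # Sleep away whatever is left of this frame's budget; if rendering
        # overruns it instead, the player sees the frame rate drop.
        leftover = budget - (time.perf_counter() - start)
        if leftover > 0:
            time.sleep(leftover)
    return state["x"]
```

Run `game_loop(frames=60)` and the character ends up 3 units along after one (real) second – the whole scene having been rebuilt 60 times along the way.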
Now here’s where real-time previz comes in. Because of the nature of animation production, how closely the previz resembles the final product depends on time constraints, budget, and team size. While movie studios can afford to create action-packed & easily interpretable previz, the same can’t necessarily be said for a potentially struggling freelance 3d artist.
Being able to see a higher-quality version of your previz in the viewport of one window while you’re making adjustments in the other is a game-changer for not only productivity but clarity too. After all, the better your previz, the more chance you have of an interested party getting on board with your project.
The benefits of GPU rendering are similar to some of the benefits of using a render farm. First and perhaps foremost it saves time. After all, handling complicated graphics processing is the name of the GPU game. As Workstation Specialist helpfully points out, GPUs are designed “to render on specific render software packages available in the market today such as, NVIDIA’s IRay, Chaos Group’s VRay RT, Otoy’s OctaneRender; and Maxwell Render” which is comforting to know if you’re considering buying a new graphics card for rendering purposes.
So while your computer’s CPU is straining to keep your web browser from collapsing under the weight of far too many tabs (guilty as charged), your GPU doesn’t give a damn and renders those frames at comparatively lightning speed. That said, if you want to binge-watch a series while you wait, you’re better off using another device. You could also try a very interesting middle ground: hybrid rendering, which puts both your CPU and GPU to work. If this isn’t possible, or you are looking to be as cost-effective as possible, you may want to consider using a render farm instead, which, as paradoxical as it may sound, is a far more cost-effective solution than cashing out on a brand spanking new graphics card or CPU.
Particle simulations partly fit into this category, though it depends on what you’re going for; this could be a simple spilling glass of milk or, say, the usual colossal tsunami destroying Hollywood’s favorite (or least favorite?) city. As 3d designers, we have become incredibly sensitive to viewport lag. To avoid our chosen software crashing, we waste time carefully watching our polygon counts and “fake” as much of the complex geometry as possible via texture maps, spending hours upon hours dutifully plugging them in and hoping for the best, just so our computer doesn’t melt. Just imagine sculpting something incredible in Mudbox and being able to render it as is; with GPU power, that dream can be a reality.
For this one, we’ll be focusing on the award-winning Substance Painter by Allegorithmic (no affiliation), which was acquired by Adobe earlier this year. This software streamlines the massive undertaking that is texturing, rendering, and effects application via a physically based rendering (PBR) workflow. Just what is all this exactly? Well, SpeedTutor has got you covered.
With SP’s high importance to the likes of AAA game developers, it’s clear why many in 3D production would want to get in on it. Painting textures directly onto your model is a far cry from the dull and repetitive process that is UV unwrapping. It’s a beautiful emulation of reality, the pride and joy of any digital Banksy. All of this is made more tempting by its four license tiers. The Indie license alone covers any revenue below $100K (~£77.3K) for as little as $19.90/mth or $239/yr (~£15.40/mth, ~£185/yr), and let’s not forget the discounted Education license. If you’d like to see if your computer can survive, you can download a 30-day free trial. Unfortunately, it will be of no surprise to hear that SP is extremely GPU intensive. Not being able to run this arguably beautiful software can be absolutely heartbreaking for any freelancer, from newbie to veteran; but don’t worry, we’ve got your back.
By now there should be little doubt in your mind of the immense benefits of GPU-powered 3d production. But don’t worry if you can’t afford to invest in a whole new graphics card, or you’re in the minority of Mac users seemingly forever stuck with a GPU far past its prime (such as my defunct 2013 NVIDIA GeForce GT 750M) and can’t afford an eGPU: GarageFarm.NET has created an alternative that is both cheap and powerful, a GPU-server rental service that goes by the name of Xesktop, at your disposal for GPU 3d rendering, processing Big Data, or any task that can benefit from parallel processing.
Besides that, GarageFarm.NET has evolved to support both CPU AND GPU rendering. This means that if you’ve benefited from a good GPU in your own setup, or over at one of Xesktop’s servers, you can take things a step further by sending your project over to the farm and leveraging GPU render speed across multiple powerful nodes, each fitted with 8x Tesla K80 cards and 128 GB of RAM.
Credits: By Danny Rollings