Over the past decade, GPU rendering has become a staple of artists' workflows in almost every 3D pipeline, and Maya is no exception. But as with any increasingly popular method, the question remains: is GPU rendering worth the cost and effort? In this article, we look at GPU rendering for Maya users to see how it compares to CPU-based workflows and whether it's genuinely worth the investment.
To put it simply, GPU rendering has become a staple for Maya users thanks to its speed and rapid feature development. With modern GPUs supporting AI frameworks and accelerating everything from viewport rendering to tools like XGen and Nvidia’s OptiX Denoiser, it's difficult to ignore the possibility that GPU rendering may permanently replace CPU-driven workflows in Maya. That said, if you regularly render complex scenes with many large texture maps, the amount of VRAM a GPU has should be a top priority.
While this article focuses on the viability of GPU rendering for everyday use on local machines, many of the considerations discussed here also apply to using GPU nodes on third-party cloud render services. If you need an online render farm service, or would like to make some comparisons yourself, sign up on our Maya render farm! New users get $50 of render credits to test the service, free of charge and with no commitment required.
So what makes GPU rendering so popular? For starters, speed. GPUs are massively parallel processors, which makes them exceptionally well suited to rendering complex ray-traced environments. CPUs, by contrast, are optimized for general-purpose work and can't match that ray-tracing throughput. This means that if your goal is fast render times, a high-end GPU delivers far more bang for your buck than a high-end CPU in the same price range.
Additionally, over the past five years, GPU rendering has outpaced CPU-based rendering in feature development. With a modern GPU, you can take advantage of new tools like the ray-tracing optimizations Nvidia has brought to its RTX-series GPUs. GPU rendering is no longer on the back foot; it is now at the forefront of rendering advancements. And with the speed and availability of GPU render farms, render times can be cut enough to meet even tight deadlines.
These days, everything from viewport rendering to tools like XGen and Nvidia’s OptiX Denoiser can take advantage of GPU acceleration, speeding up not only your render times but the creation process too. Modern Nvidia and AMD cards also support many AI frameworks that are slowly being integrated into Maya, powering features like AI-driven denoising, texture creation, and physics simulations. These factors make it difficult to ignore the possibility that GPU rendering may permanently replace CPU-driven workflows in Maya.
One consideration that has made many 3D artists question the viability of GPU rendering is the memory limitation of consumer GPUs. Unlike with CPU rendering, you can't simply purchase more memory when needed, since VRAM (video memory) is built directly onto almost all GPU cards. In the past, consumer and prosumer GPUs from Nvidia and AMD shipped with as little as 4-8GB of VRAM, putting large scenes that demand more on-the-fly memory out of reach. However, with advances in GPU offerings, this issue has slowly become less pressing. Modern GPUs like the RTX 3090 come with up to 24GB of VRAM, and server-grade setups can pool memory across multiple GPUs for effectively far larger capacities.
If you're in the market for a GPU for rendering in Maya, the amount of VRAM it has should be a priority if you regularly render complex scenes with many large texture maps. The 12-24GB available on high-end consumer GPUs is almost always enough if you're working with lighter textures or smaller scenes. However, high-end VFX or studio-grade shots can require huge amounts of VRAM, often in the 32-64GB range, and that capacity is available almost exclusively on server-grade GPUs. Because these cards can easily creep into the quadruple-digit price range, GPU-accelerated render farm services are far more cost-effective than building your own render server.
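To get a feel for how quickly texture maps eat into a VRAM budget, here is a rough back-of-the-envelope sketch. The formula and the 33% mipmap overhead are illustrative assumptions; real engines use texture compression, mipmap streaming, and out-of-core caching, so treat the result as a ballpark, not what your renderer will actually allocate.

```python
# Rough VRAM estimate for a scene's texture set (illustrative only).
# Assumes uncompressed 8-bit RGBA textures plus ~33% mipmap overhead.

def texture_vram_gb(width, height, channels=4, bytes_per_channel=1, mip_overhead=1.33):
    """Approximate GPU memory for one texture, in gigabytes."""
    return width * height * channels * bytes_per_channel * mip_overhead / (1024 ** 3)

# A hypothetical look-dev scene with fifty 4K maps and twenty 8K maps:
total = 50 * texture_vram_gb(4096, 4096) + 20 * texture_vram_gb(8192, 8192)
print(f"~{total:.1f} GB of VRAM for textures alone")  # ~10.8 GB
```

Even this modest texture set approaches the 12GB floor of high-end consumer cards before geometry, volumes, or frame buffers are counted, which is why heavy production scenes push into server-grade territory.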
Auxiliary questions aside, the top consideration for 3D artists when picking a GPU is almost always speed. So how do GPUs compare to CPUs? The comparison can be hard to quantify, but benchmarks pitting standalone CPU rendering against standalone GPU rendering give a rough estimate. According to tests by Puget Systems in V-Ray comparing Nvidia Titan GPUs to AMD Threadripper CPUs, best-in-class GPUs can beat best-in-class CPUs by reducing render times by a conservative 20-40%. Combine this with the new features modern GPUs offer, and it's difficult to see CPU-only rendering staying around for long in the professional space.
The same can be said for many render farm offerings. Although CPU rendering is still a popular choice for render farms, GPU-accelerated farms can offer much quicker speeds than traditional ones. As we’ve shown in the past here at GarageFarm, GPU render nodes can bring a single frame's render time down from 6:29 to a mere 1:20. On a large-scale animation, this adds up to savings of hours or even days on heavy scenes.
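The per-frame times quoted above translate into large savings over a full animation. A quick sanity check (the 5-minute, 24 fps animation is a made-up example, not one of our benchmark scenes):

```python
# Compare the per-frame times quoted above (6:29 CPU vs 1:20 GPU)
# across a hypothetical 5-minute animation at 24 fps.

def to_seconds(mmss):
    """Parse a 'M:SS' time string into seconds."""
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

cpu, gpu = to_seconds("6:29"), to_seconds("1:20")
speedup = cpu / gpu                      # ~4.9x faster per frame
frames = 24 * 60 * 5                     # 7,200 frames
saved_hours = frames * (cpu - gpu) / 3600
print(f"{speedup:.1f}x faster, saving ~{saved_hours:.0f} machine-hours over {frames} frames")
```

Those savings are in total machine-hours; a farm spreads them across many nodes at once, which is how day-scale renders compress into hours of wall-clock time.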
So, you're convinced. GPU rendering is the future. But now what? There are dozens of GPUs on the market to choose from, each with a unique feature set and limitations. With new players like Intel entering the GPU market and supply chain issues causing shortages, it's never been harder to pick the right GPU.
If you regularly render large or complex scenes, a GPU render farm is the best option. Oftentimes, enterprise-grade GPUs are the only cards that can meet the requirements of these scenes without falling back on slower CPU-based rendering. For artists, a render farm provides the means to render enormous scenes at a far more reasonable price than purchasing a high-end Nvidia Quadro or AMD PRO series card. If you decide to rely on cloud rendering, you likely still need a GPU for test renders and the creation process, but it won't need to handle the heavy workload and memory demands of final renders and animations.
Even for less demanding projects, the speed of a render farm can be essential to meet deadlines. Though GPUs are extremely fast at what they do, even the best GPU can’t compete with the resources of a full-fledged render farm when it comes to speed.
In the mid-range, Nvidia's A-series cards offer up to 48 GB of VRAM and performance that is nearly unmatched in the GPU market. These cards are expensive but offer the best performance short of server-grade hardware.
If you mainly work on abstract scenes or smaller projects requiring less than 24GB of VRAM, then the latest Nvidia RTX 4000 series or AMD RX 6000 series is a fantastic choice. Nvidia is currently at the cutting edge of GPU development, with cards that support AI creation tools and accelerated ray tracing. On the other hand, AMD cards use the RDNA 2 architecture to offer hardware-accelerated ray tracing and a host of other optimizations. However, it's worth being cautious about AMD cards depending on which render engines you use day-to-day.
Many render engines lack support for GPU rendering on AMD-branded cards. For example, V-Ray and Arnold for Maya lack support for AMD cards altogether and instead require Nvidia's CUDA technology. Redshift for Maya does support some AMD GPUs on macOS but not on Windows or Linux. In the end, it's essential to check the requirements of your chosen render engine to be sure the GPU you pick is supported. If you often switch or change up your workflow, an Nvidia GPU may be the safest bet, as almost all modern render engines support RTX and A-series cards.
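One way to keep track of this is a simple compatibility table you check before committing to a card. The table below is an illustrative sketch reflecting the support described above, not any engine's actual API; always verify against your render engine's current documentation before buying.

```python
# Illustrative engine -> platform -> supported GPU vendors lookup.
# Entries mirror the support described in the article and may change
# with new engine releases; treat this as a sketch, not a source of truth.

GPU_SUPPORT = {
    "vray":     {"windows": {"nvidia"}, "linux": {"nvidia"}, "macos": set()},
    "arnold":   {"windows": {"nvidia"}, "linux": {"nvidia"}, "macos": set()},
    "redshift": {"windows": {"nvidia"}, "linux": {"nvidia"}, "macos": {"nvidia", "amd"}},
}

def is_supported(engine, platform, vendor):
    """Return True if the engine's GPU mode supports this vendor on this platform."""
    return vendor in GPU_SUPPORT.get(engine, {}).get(platform, set())

print(is_supported("redshift", "macos", "amd"))  # True
print(is_supported("vray", "windows", "amd"))    # False
```

The takeaway is less the code than the habit: check engine, OS, and vendor as a triple, since support for one combination says nothing about the others.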
As modern 3D graphics become increasingly commonplace, rendering technology will only become harder to predict. This is why buying a mid-range GPU for day-to-day work and utilizing a render farm for larger projects is a solid choice. With a cheaper GPU, you may run into bottlenecks during the lookdev and creation process; with a top-of-the-line card, you run the risk of investing heavily in technology that may change in only a few months or years. At the end of the day, it's your specific needs and Maya workflows that determine the right GPU for you, but I hope this article has given you the tools you need to make the most of GPU rendering for Maya.