In 2022, everybody and their grandmas switched their profile photos to artsy renditions of their faces. From Chance the Rapper to Lebron James, these colorful and flattering avatars flooded social media seemingly overnight.
Many were delighted. Many others were up in arms.
Why? Because the profile photo artworks were generated by artificial intelligence (AI) through an app called Lensa. The images sparked conversations (even outrage) about whether these AI platforms plagiarized the artists on whose artworks the AI was trained, and whether software like Lensa, Dall-E, and Jasper will eventually (some say soon) kick artists out of their jobs.
Now, whichever side you may land on regarding this debate, one thing is undeniable: AI is disrupting almost every field of human activity, from dating apps to smart homes, stock trading to chess playing, launching marketing campaigns to solving climate challenges. And as the popularity of Lensa and text-to-image platforms has made clear, the creative arts are not exempt from the coming of AI.
This includes what until now has been the human labor- and talent-intensive field of creating 3D graphics. What’s AI got to do with 3D?
AI, by its very nature (weird to use the word “nature” for something artificial, but here we are; I digress), can be trained to do pretty much anything. With that, its impact on 3D can range from automating specific parts of the 3D workflow to potentially taking over the whole workflow itself. And one of the more exciting (or scary, depending on who you ask) areas of AI development regarding 3D is its application to modeling.
Modeling is the first part of the 3D workflow, and it deals with creating a three-dimensional representation of an object or surface. It is painstaking and laborious, and takes a long time to master. 3D modeling artists fall somewhere between fine artist and tech wiz: they must have a good grasp of anatomy (when modeling characters) or architecture (when dealing with spaces and structures) while also being masters of software tools, commands, and parameters.
But imagine being able to take a short video of an object with your phone camera, feed that video into software, and get a working 3D model of that object in a matter of minutes. This is exactly what Nvidia’s Instant NeRF technology does. In fact, Instant NeRF doesn’t stop at 3D modeling – it goes all the way to outputting a completely textured, lit, and rendered 3D scene from just a small set of static images or a short video shot with your phone.
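To give a flavor of how this works under the hood: a NeRF represents a scene as a function that maps any 3D point to a color and a density, and it forms each pixel by accumulating those values along a camera ray (a technique called volume rendering). The sketch below is a toy illustration only – it swaps the trained neural network for a hypothetical hard-coded sphere so the accumulation step can run on its own:

```python
import numpy as np

def toy_field(points):
    """Stand-in for NeRF's trained network: maps 3D points to
    (RGB color, density). Here, a hard-coded opaque sphere of radius 0.5."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 0.5, 10.0, 0.0)           # dense inside the sphere
    color = np.tile([1.0, 0.4, 0.2], (len(points), 1))  # constant orange
    return color, density

def render_ray(origin, direction, n_samples=64, near=0.0, far=2.0):
    """Volume rendering: accumulate color along a ray, weighting each
    sample by how much light survives to reach it (transmittance)."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    color, density = toy_field(points)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)               # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)        # final pixel RGB

# A ray shot straight through the sphere picks up (almost all of) its color.
pixel = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
print(pixel)
```

Instant NeRF’s contribution was making the training of that scene function fast enough to feel interactive; the rendering math above stays essentially the same.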
Yes, AI just did that.
Amazing? Terrifying? Again, depends on who you ask. But even as we are just in the early days of AI, the fact is it can already do this. It is here. And it will only get better, faster, cheaper.
Let’s now jump to the last part of the 3D workflow – rendering – and see what impact AI has on it. Rendering is the part of the process where the 3D scene you’ve meticulously modeled, textured, and lit is converted into 2D so that it can be viewed properly as an image or video on a 2D screen like your phone, TV, or a cinema screen.
This description may sound simple, but a lot goes into the process – and so there are many ways AI can be applied.
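At its geometric core, that 3D-to-2D conversion is a perspective projection: each point in camera space lands on the image plane by dividing its x and y coordinates by its depth. A minimal sketch (real renderers layer light transport, materials, and anti-aliasing on top of this):

```python
import numpy as np

def project(point_3d, focal_length=1.0):
    """Pinhole perspective projection: a 3D point in camera space
    maps to 2D image coordinates by dividing by its depth."""
    x, y, z = point_3d
    assert z > 0, "point must be in front of the camera"
    return np.array([focal_length * x / z, focal_length * y / z])

# Two points at the same (x, y) but different depths: the farther one
# lands closer to the image center -- perspective foreshortening.
near = project(np.array([1.0, 1.0, 2.0]))  # -> [0.5, 0.5]
far = project(np.array([1.0, 1.0, 4.0]))   # -> [0.25, 0.25]
print(near, far)
```

Everything expensive about rendering – shadows, reflections, global illumination – is about deciding what color each of those projected points should be, which is why the step eats so much compute.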
For example: let’s say you have a low-resolution, low-frame-rate animation that you want to bring up to 4K. How would you go about it? Traditionally, you’d go back to the original project file in the 3D software and re-render the whole thing at the higher resolution you want. But with AI, you can do it without having to access the original project at all. Tools like Flowframes use AI to generate the in-between frames that raise a clip’s frame rate – a process called interpolation – while AI upscalers (Topaz Video AI, for example) raise the resolution itself, a process often called super-resolution.
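To make the idea of interpolation concrete: new in-between frames are synthesized from their neighbors. The naive sketch below just blends adjacent frames 50/50 – tools like Flowframes instead use learned motion estimation so moving objects don’t ghost – but the shape of the problem is the same:

```python
import numpy as np

def interpolate_frames(frames):
    """Double a clip's frame rate by inserting an in-between frame
    after each adjacent pair. This naive version is a 50/50 pixel
    blend; AI interpolators predict motion instead, so the midpoint
    frame shows objects halfway along their paths."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a.astype(np.float32) + b) / 2.0)  # synthesized midpoint
    out.append(frames[-1])
    return out

clip = [np.zeros((4, 4)), np.full((4, 4), 100.0)]  # two tiny grayscale frames
smooth = interpolate_frames(clip)
print(len(smooth))      # 3 frames now
print(smooth[1][0, 0])  # 50.0 -- the blended midpoint
```

Because interpolation only needs the finished frames, it works on any footage, which is exactly why no access to the original 3D project is required.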
Because AI can do this, it opens up possibilities for 3D artists and firms looking to be more efficient. They can render frames quickly at lower resolutions and frame rates (perhaps on their own computers), then upscale and interpolate with AI afterward.
So what does this mean for render farms?
Throughout history, there have always been winners and losers every time new technology gains mainstream adoption. Writing and publishing made town criers irrelevant. Cars relegated horses to farms and races. Mobile phones killed the jobs of pager operators. The internet led to the decline of print news, brick-and-mortar bookstores, and encyclopedias.
But there have also been technologies that led to the evolution of jobs and roles, not necessarily to their eradication – bank tellers remained even when ATMs arrived, referees are still on the field even with VAR, online news is still written by professional journalists (though, yes, plenty of “news” is now published by non-journalists).
And this is how, I think, AI will impact cloud render services – evolve them, not end them.
Sure, AI can help individual 3D artists and firms render on their own (as described earlier), but it can also help render farms optimize their operations, bringing speeds up and costs down to the point where rendering locally becomes simply impractical and inadvisable.
The hardware side of render farms is what will keep them a relevant service for 3D artists and firms. AI is software and, therefore, can only offer software-side improvements to the workflow. Rendering, because it is by nature computational, will always require beefy hardware to be efficient. More CPU cores and more GPU memory will always mean faster, better rendering. And as 3D technology advances, scenes will become more and more complex, demanding more and more hardware muscle.
Different 3D artists will, of course, feel the impact of AI on rendering differently. It is possible that for students or freelancers working on relatively smaller projects, AI may advance well enough to make local rendering practical. But for production-level projects, where artists often operate at the cutting edge and constantly push past it, the demand for powerful and efficient rendering solutions will likely always point to render farms. And since AI can also enhance render farms themselves – optimizing project sequencing, troubleshooting frames, assigning render nodes, enabling more advanced pricing schemes, and so on – it can push farms to the point where rendering with one becomes a no-brainer.
In short, AI is simultaneously threatening and beneficial to render farms.
Make no mistake, AI is here. While I can see how it might lead to lessening the number of 3D artists needed to complete projects, AI won’t completely replace human talent, vision, and originality in 3D.
On a similar note, AI will bring both disruptions and improvements to 3D render farms. Future advancements in AI might mean fewer small projects being beamed up to cloud farms for rendering. But AI itself can also improve how render farms operate, making them more efficient and keeping them relevant, especially for larger-scale projects. AI has the potential to advance render farm operations to the point that everyone and their grandmas are rendering with farms online.