elevAIte

↝      Description

While I don't think AI is going to replace CG completely anytime soon, it is an incredibly powerful tool to have in your arsenal: not replacing 3D renders, but aiding and enhancing the process.

Creating intricate backgrounds for 3D scenes tends to be a laborious process, especially when the client imposes further restrictions. This is where I helped out: I crafted two workflows, one for generating a background that fits the scene and its style, and another to 'enhance' the details of the final render - especially noticeable in the foliage and the water, both of which can be quite tricky to get right in pure 3D. This allowed the client to choose from a palette of backgrounds and to implement revisions quickly and efficiently.
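
To give a rough idea of the 'background' half of that workflow: below is a minimal inpainting sketch using the diffusers library. The actual pipeline lived in a ComfyUI graph, so the checkpoint, file names, and prompt here are purely illustrative stand-ins.

```python
import torch
from diffusers import StableDiffusionXLInpaintPipeline
from PIL import Image

# Any inpaint-capable checkpoint works; SDXL base is used here as a stand-in.
pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("render.png").convert("RGB")       # the raw 3D render
mask = Image.open("background_mask.png").convert("L")  # white = background to replace

# High strength lets the model repaint the masked background freely,
# while the unmasked foreground stays locked to the render.
result = pipe(
    prompt="lush forest background, soft morning light, matching the scene's style",
    image=render,
    mask_image=mask,
    strength=0.99,
    num_inference_steps=30,
).images[0]
result.save("render_with_background.png")
```

Swapping the prompt (and seed) is what yields the palette of alternative backgrounds for the client to pick from.
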
Another thing to note is that this process was done with print-ready resolution in mind: that is to say, 8k x 8k pixels.
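
And a hedged sketch of one way to reach that resolution: a conventional upscale followed by low-strength img2img refinement on overlapping tiles, which is what re-synthesises fine detail in foliage and water. Again, the real workflow was a ComfyUI graph; tile size, overlap, prompt, and the naive paste-blend below are illustrative simplifications.

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = Image.open("render_with_background.png").convert("RGB")
image = image.resize((8192, 8192), Image.LANCZOS)  # conventional upscale to print size

TILE, OVERLAP = 1024, 128
STEP = TILE - OVERLAP

def positions(size):
    # Tile origins, clamped so the last tile ends exactly at the image edge.
    xs = list(range(0, size - TILE, STEP))
    xs.append(size - TILE)
    return xs

out = image.copy()
for y in positions(image.height):
    for x in positions(image.width):
        tile = image.crop((x, y, x + TILE, y + TILE))
        # Low strength: re-synthesise fine detail without changing the layout.
        refined = pipe(
            prompt="highly detailed foliage and water, crisp and sharp",
            image=tile,
            strength=0.25,
        ).images[0]
        out.paste(refined, (x, y))  # naive blend; production graphs feather the seams
out.save("render_8k.png")
```
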

↝      Tools used

ComfyUI & other AI tools

Enhance!

Close-up: raw render vs. inpainted background & upscaled result

Various inpainted backgrounds for the scene

Create!

Various generated characters using my fine-trained model

Another idea I pursued for this project was whether it was possible to delegate quick concept designs of a given - branded - mascot to AI. I was handed only 8 images of the character in various costumes, and I did my best to fine-train a model with those so it could generate new ones. The results were all we could have asked for: the character was recognizable; the 'costume' you prompted for could be nearly anything - again, there were only 8 original images, whose costumes I deliberately did not regenerate here - and the style fit the branding perfectly.
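
For context, fine-tuning on so few images is typically done DreamBooth/LoRA-style, pairing the reference images with a rare trigger token. Below is a minimal sketch of how such a model might be used at inference; the base model, LoRA file name, and trigger token are all hypothetical stand-ins, since the actual model and branding belong to the client.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA trained on the 8 reference images of the mascot.
pipe.load_lora_weights("mascot_lora.safetensors")

# "sksmascot" stands in for whatever rare trigger token the LoRA was trained on.
image = pipe(
    prompt="sksmascot mascot wearing an astronaut costume, clean brand illustration style",
    num_inference_steps=30,
).images[0]
image.save("mascot_astronaut.png")
```
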

Another thing to note is bias: you might notice the character with the pink, liquid cape. AI tends to render liquids blue, especially when the training data depicts a blue character. However, the model performed exceptionally well in this regard, and liquid capes could be generated in any color.


Now, AI image generation is a little beast to handle. And while it has gotten a ton easier since the days of Disco Diffusion in early 2022, it still struggles to place objects in the scene correctly based solely on the prompt. What I did below was to create a workflow in which you can sketch out, in different colors, where each object should go in the image. This isn't perfect by any means, but it helps tremendously in the layout phase - especially when you're working with a client who has a very specific vision in mind.
If you add a depth pass from a quick 3D blockout, you can get pretty consistent yet varied results for the kind of layout you have in mind.
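
A hedged sketch of what that looks like outside ComfyUI: two ControlNets run side by side, one reading the color-coded layout sketch (a segmentation-style net stands in for it here) and one reading the depth pass from the blockout. The model IDs are the public SD 1.5 ControlNets; the file names and prompt are illustrative.

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

# One net reads the color-coded layout sketch, one reads the blockout's depth pass.
seg_net = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
depth_net = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[seg_net, depth_net],
    torch_dtype=torch.float16,
).to("cuda")

sketch = Image.open("layout_sketch.png").convert("RGB")  # colors mark object placement
depth = Image.open("blockout_depth.png").convert("RGB")  # depth pass from the 3D blockout

image = pipe(
    prompt="forest clearing, waterfall on the left, large rock in the foreground",
    image=[sketch, depth],
    # Let the sketch dominate the layout; the depth pass refines it.
    controlnet_conditioning_scale=[1.0, 0.6],
).images[0]
image.save("layout_draft.png")
```

Dialing the two conditioning scales against each other is the knob here: weight the sketch higher for strict placement, or the depth pass higher when the blockout geometry matters most.
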

Sketches and their results. Note the added depth pass on the last one.
