AI-generated art has exploded all over Twitter, LinkedIn and wherever you get your games industry news. With its ability to create polished content quickly and cheaply, generative AI has filled social media with bold statements and striking imagery that make the case for it replacing your art team. Yet, as amazing as individual pieces of content from ChatGPT, Stable Diffusion, Midjourney and others are, the present reality is not as clear-cut as a viral social media post may lead you to believe.
At Deconstructor of Fun, we pride ourselves on providing expert opinions from experienced industry veterans. As such, we wanted to share our findings on the strengths and weaknesses of AI art generation in its present form for creating game assets.
For the sake of this article, we will focus on Midjourney, the generative art project that has most captured the web’s imagination with the amazing images it is able to produce based on short text prompts.
What Works – Concept & Testing Phase
Imagine it’s your first day on a brand new game project, and you have a vision for a game starring a Psylocke-inspired cyberpunk ninja straight out of ’90s X-Men comics. You want to test the viability of this art style and main character for your new ARPG. Further, let’s imagine you have some budget, but no internal resources to produce concept art for a CPI test and a subjective internal review. You might solve this problem by:
- Writing a brief on what was needed
- Creating a Pinterest board to capture inspiration
- Searching for contract artists who produce similar work
- Contacting several until one is hired
- Iterating with them until you have a piece of key art you’re happy with
- Conducting CPI tests and a subjective review
It’s not rocket science, but this process will undoubtedly take several weeks and a meaningful amount of cash. But along comes Midjourney with its promise of text-to-image generation. So instead, you get set up with the Midjourney bot and prompt:
/imagine a cyberpunk female ninja fires a powerful energy punch on a white background + Jim Lee style
A few minutes later you have this stunner. You have the key art you need and are ready to run your first CPI tests. Within a week or two, you have all the data you need to determine if this is your main character, or if more concepting is needed.
This is an ideal use of the Midjourney tool. You can use it to quickly generate 2D concept art for a variety of game or marketing uses.
Now let’s say you know the type of character you want, but wish to test several different rendering styles. With just a little tweaking to your prompt, you are able to quickly create additional takes on your cyberpunk ninja warrior.
It is not a scientifically perfect A/B test, as there are meaningful differences in character design between these three takes. But the results are quite good. In just a few minutes, you have enough raw material to run CPI tests and measure if any of these styles resonate with target players.
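The prompt-tweaking workflow above can be sketched as a tiny script. The helper function and the style list below are our own illustration, not part of any Midjourney API; the generated prompts would still be pasted into the Discord bot by hand.

```python
# Hypothetical helper (our own illustration, not a Midjourney API):
# builds one /imagine prompt per rendering style, so each style can be
# CPI-tested against the same base character description.
def style_variants(base_prompt, styles):
    """Return a list of /imagine prompts, one per rendering style."""
    return [f"/imagine {base_prompt} + {style} style" for style in styles]

prompts = style_variants(
    "a cyberpunk female ninja fires a powerful energy punch on a white background",
    ["Jim Lee", "Pixar", "ukiyo-e"],  # example styles, chosen arbitrarily
)
for p in prompts:
    print(p)
```

The point is less the code than the cost model: varying one suffix in a template is seconds of work, where commissioning three artists for three style takes would be weeks.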
For both of these cases – Concept Art and Concept Testing – Midjourney feels like a turbocharged image search. It makes it far easier to translate what’s in your imagination into something you can center a conversation around.
Here we feel compelled to point out a difference between Midjourney and one of its main competitors, Stable Diffusion. Using Stable Diffusion, we weren’t able to generate character art that comes close to Midjourney’s level of polish. However, the Midjourney art leaves us feeling a little… unsettled. The work feels so close to finished, polished concept art that we can’t say whether the algorithm truly authored it, or whether it lifted someone else’s work off ArtStation and made a few minor tweaks.
A reverse image search of the images in this article did not turn up any results, but the use of these images certainly opens up many legal and ethical questions. We are not legal experts, and time will tell how all of this shakes out in the courts.
Needs Improvement – Game Assets
Now let’s imagine your cyberpunk ninja performed great in those CPI tests, and you want to turn her into in-game assets. This is where the promise of generative art crashes against reality.
Above, you can see an attempt to put your character into another pose. Though the two characters are similar, they are clearly different people wearing different outfits. Even if you were making a static, visual novel game, this is not a tool that can create all the game assets you need.
But more likely, you want to do more with your character than merely put her into a static image. You want to animate her. You want different facial expressions, poses, outfits, gear, etc. You want her to look proportional to other characters and in-world objects. And yet your Midjourney art doesn’t even have legs, let alone a PSD with properly named layers ready to be animated with a Spine skeleton.
All of this will require real artists. You can’t simply write prompts into Midjourney and let the algorithm take care of the rest. Final game art your engine can ingest and animate is so much more complicated than an unlayered piece of concept art.
These tools, or others on the horizon like Leonardo.ai, Scenario.gg or countless other start-ups that are just getting off the ground, may someday be able to provide “text to final game assets.” It is a beautiful dream, and with enough time and investment, even an attainable one. And in this dream, you will still need hard-working, talented, creative people at many points in the artistic process. They will just be using a new tool – AI generation – to magnify their creativity and output.
Playing with these new generative AI tools has been some of the most fun we’ve had with game development in the past few years. Just ask the algorithm for a big furry monster with wheels for legs eating a popsicle, and what you get back will surely put a smile on your face.
But developing an algorithm that can churn out assets for your game with consistency and quality is no simple task. And the current, public tools aren’t there yet. After spending time playing with them, this is what we imagine the future of generative AI will look like:
- You will pay a provider to set up and train a custom, enterprise instance of their tool inside your infrastructure
- You will work with the provider to train up the instance using materials created by your art team
- Your art team will then use this custom-trained AI to augment their work
- You will not be able to replace your artists, only the more menial aspects of their work. Net, these tools will let you get more, higher-quality work out of fewer people (and at an overall lower cost)
All in all, these tools are incredibly fun and powerful, and we can’t wait for their capabilities to catch up to the hype.