The mobile games industry has proven to be a keen adopter of AI models, and the use cases for AI continue to grow day by day. Having shown what they can do for game development, these tools are now finding their place in the broader marketing context too.
At the heart of the AI revolution are neural networks. By teaching computers to process data in a way that resembles human thinking, and therefore feels more authentic, they are now expanding AI’s role beyond asset creation and coding.
In this guest post, Playkot’s marketing production lead, Igor Mosin, details the studio’s practical experiences using AI tools to boost productivity in mobile games marketing. Mosin shares various case studies on how Playkot has utilised neural networks and offers advice on how you can do the same.
We started experimenting with neural networks in marketing in 2023. We tried using Midjourney to generate ideas for the icons of Spring Valley, our mobile farming game with expeditions. At the time, there was a big trend around the “Wednesday” series, so we quickly created a concept icon in that style: using prompts to find a suitable image, polishing imperfections and refining the icon. It went very well. Then we tried the same approach with a Pedro Pascal icon, but it didn’t resonate.
This taught us that when a trending story emerges, we can quickly adapt it to our project, test it, and get results. This opened up a vast field for research using fast situational trends related to shows, movies or events. We then began to expand the use of neural networks in our marketing tasks, which has helped us to significantly increase speed and efficiency. It’s worth noting that, for now, we only use neural network output as a basis for photobashing and further refinement.
Our neural network toolbox
In marketing, we constantly work with Midjourney, Stable Diffusion and neural networks that improve image quality. We recently started experimenting with scenarios for in-game objects in creatives. Neural networks help us with several tasks:
- Production of static creatives: We quickly generate multiple variations of icons, screenshots, banners, and other static assets and compile them into a photobash. This has become our new reality.
- Background enhancement and rework: Sometimes, we generate parts of locations in screenshots, LiveOps cards, and features. This helps us produce more material faster and leaves time for creativity: the artist only focuses on polishing the creative or developing the main character.
- Upscale and resize: Suitable for various elements, from textures to old low-resolution art. For example, it’s easy to turn a square source image into a full-screen, 2K resolution, ultra-wide art piece – extend the background, upscale and crop to the desired resolution and proportions. Or quickly change a detail in an old creative: replace a crow with a seagull, for example. ControlNet in Stable Diffusion helps here, as do Midjourney’s image-reference features (see the sketch after this list).
- Improve video quality: For example, you can create a video that shows 3D characters up close based on the texture of low-poly models that were originally unsuitable for marketing purposes. This helps the marketing team to create interesting material featuring our 3D characters without wasting the artists’ time.
- Development of in-game objects: We use models trained on our art to quickly generate environmental objects for creatives that we need but are not currently in the build.
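To make the upscale-and-resize item above more concrete, here is a minimal Python sketch of the “square source into ultra-wide art” workflow using the open-source diffusers library: pad the source onto a wider canvas, let a Stable Diffusion inpainting model fill the new background, then upscale and crop. The model ID, file names and resolutions are illustrative assumptions, not Playkot’s actual pipeline.

```python
# A sketch of the square-source-to-wide-banner workflow, assuming the diffusers
# library and an illustrative inpainting checkpoint (not Playkot's actual pipeline).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

source = Image.open("icon_512.png").convert("RGB")   # hypothetical square source art
canvas = Image.new("RGB", (1024, 512))                # wider target canvas
canvas.paste(source, (256, 0))                        # centre the original

# White areas of the mask are repainted by the model; the original stays untouched.
mask = Image.new("L", (1024, 512), 255)
mask.paste(Image.new("L", (512, 512), 0), (256, 0))

wide = pipe(
    prompt="lush farm valley background, sunny, stylised game art",
    image=canvas,
    mask_image=mask,
    width=1024,
    height=512,
).images[0]

# Upscale and crop to the final proportions (a dedicated upscaler would do better).
banner = wide.resize((2560, 1280), Image.LANCZOS)
banner.save("banner_ultrawide.png")
```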
Let’s take a look at some case studies within these tasks and see how neural networks can help.
Rapid hypothesis testing on trends
Our initial experiment with the style of the “Wednesday” series and Pedro Pascal paved the way for others.
In marketing, we are not limited by the game’s aesthetics: we can create creatives in different styles and take advantage of current trends. Midjourney produces high-quality results and seems to capture popular user queries and trends well. We constantly test different hypotheses and pivot quickly if the aesthetics become outdated or the creative metrics fail to deliver the desired result.
Let’s say a new Disney movie comes out. However, our game has an entirely different aesthetic, and we do not have the resources to model, render, colour, and perform numerous other tasks for a character in that style. Using neural networks, we could quickly style Tiffany and Madeline from Spring Valley as Disney princesses and test this hypothesis before moving on. Then, we can explore how other stylisations, such as anime, Desperate Housewives, or other, even relatively old, trends, will perform. For this, we use Stable Diffusion, which is trained on our assets.
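As a rough illustration of how such a style hypothesis can be tested with a Stable Diffusion model adapted to your own art, here is a hedged sketch using the diffusers library with a style adapter (LoRA). The LoRA file, prompt and settings are hypothetical stand-ins; Playkot’s own trained model and workflow are not public.

```python
# A hedged sketch: testing a trend-styled character with Stable Diffusion plus a
# style adapter (LoRA) trained on your own art. File names and prompt are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Hypothetical LoRA trained on the studio's character art.
pipe.load_lora_weights("./loras", weight_name="spring_valley_style.safetensors")

image = pipe(
    prompt="Tiffany as a fairytale princess, ballgown, storybook illustration style",
    negative_prompt="blurry, deformed hands, extra limbs",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("tiffany_princess_concept.png")
```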
Quickly adapt art and characters
With models trained on art from our games, we can quickly generate different elements for creatives not yet in the build and expand our hypotheses without limitation.
For example, we had success with the hook (the first three seconds) of a video featuring the gardener and Nalani, two other characters from Spring Valley. We decided to create a banner based on the video and quickly generated the necessary version from a screenshot using Midjourney and Stable Diffusion, refined it and tested it. We got a high number of views, so we decided to keep working with it and look for a new twist, possibly with a spicier storyline. Previously, we would have had to redraw the required frame from the video for such a task.
Sometimes, we need to position a character in a specific pose for a creative that is not available in the existing art. In this case, we rough out the necessary composition as a 3D scene and then generate the required image with ControlNet, using that scene as a reference. Here’s an example from the marketing material for another of our games, Age of Magic:
The approach also works in more complex cases, such as building a detailed UI or rendering a specific item. The designer uses prompts and references to create the composition with ControlNet, refines the resulting preview with Midjourney or Stable Diffusion, and achieves the desired result in a controlled way. For example, we created a water pistol using Stable Diffusion and ControlNet for the video with the gardener and Nalani from the example above.
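Here is a minimal sketch of that “3D blockout to ControlNet” step, assuming the diffusers library and a depth ControlNet: a depth map exported from the rough 3D scene constrains pose and composition while the prompt sets the style. Model IDs, file names and the prompt are illustrative, not the studio’s actual setup.

```python
# A minimal sketch of the 3D-blockout-to-ControlNet step, assuming the diffusers
# library and a depth ControlNet; model IDs, files and prompt are illustrative.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Depth map exported from the rough 3D scene fixes the pose and composition.
reference = Image.open("blockout_depth.png").convert("RGB")

image = pipe(
    prompt="character aiming a bright plastic water pistol, summer garden, game key art",
    image=reference,
    num_inference_steps=30,
    controlnet_conditioning_scale=1.0,
).images[0]
image.save("controlnet_preview.png")
```

Depending on how tightly the composition needs to be held, the depth map could be swapped for canny edges or an OpenPose skeleton.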
Empowering professionals
There was a time when the marketing team was down to one artist out of two. The designer had to take on some of the art tasks themselves, using neural networks to create artwork for creatives and adding their own touch-ups. The experiment’s success was evident from the art director’s reaction: she was surprised that a graphic designer could produce such a high-quality result in just three days. Here is the art:
We certainly haven’t abandoned the role of the second artist. However, this case helped us see how AI tools can augment the skills of professionals. A designer can take over a senior specialist’s task that requires artistic expertise and complete it relatively quickly. Or a designer who previously needed the help of a 3D artist to render for a marketing creative can now do it independently, without diverting product specialists for such a minor task.
Not only can we produce more creative variations in the same amount of time, but we can also produce a higher quality option in less time. For example, using neural networks, a designer can quickly sketch five scenes for videos or banners to review with the producer and decide which option to develop further. This significantly speeds up and improves the process, as previously, only one concept could be sketched in the same amount of time.
Speed up text translation
Our players speak at least 13 languages, so we localise creatives for different regions and countries. To speed up the process, we integrated the GPT-4 API. Now, instead of using the localisation department’s resources for the entire translation cycle, we ask them to review the finished versions, which is much easier and faster.
For example, suppose we need to localise a video into ten languages. In that case, we insert the original text into a special spreadsheet with configured automation connected to GPT-4 and specify the translation context, such as “advertising style”. That’s it. If a voiceover is required, we pass the finished versions to another neural network, such as Murf, Vocemaker or ElevenLabs, which excel at this task. Music can also be generated in AI Test Kitchen or SUNO. And here we have a ready-made creative that we produced quickly, saving the resources of another team.
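As an illustration of the translation step described above, here is a hedged sketch of calling the GPT-4 API with an “advertising style” context and collecting drafts for several languages. The prompt wording, helper function and language list are assumptions rather than Playkot’s actual spreadsheet automation.

```python
# A hedged sketch of the translation step: ask GPT-4 for ad-style drafts in each
# target language, then hand the drafts to localisation for review. The prompt
# wording, helper and language list are assumptions, not Playkot's automation.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
LANGUAGES = ["German", "French", "Spanish", "Japanese", "Korean"]

def translate(text: str, language: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You translate mobile game ad copy. Keep it short and punchy, "
                           "in an advertising style. Return only the translation.",
            },
            {"role": "user", "content": f"Translate into {language}: {text}"},
        ],
    )
    return response.choices[0].message.content.strip()

drafts = {lang: translate("Dig or pour? You decide!", lang) for lang in LANGUAGES}
```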
Marketing creatives are a ‘perishable’ product: the life of a banner, video or other format is usually short. For example, we do a lot of ASO experiments with different text, and it’s not very sensitive to the quality of the translation: it could be variations of “Dig or Hit” or “Dig or Pour”. It seems impractical to divert another team to localise text for such A/B testing. Here’s an example of a creative we did this way:
Here’s an example where neural networks helped solve several tasks simultaneously for an A/B test. The girl’s head was created from an image generated by one neural network and then animated by another. A third neural network helped create the text and subtitles, while a fourth provided the voiceover.
About A/B testing
Marketing is about creativity and experimentation because the outcome depends on many factors: timing, placement, quality, trends, context, and audience. That’s why our team often works with MVPs and then refines them. It’s faster and cheaper and saves our resources compared to creating the perfect creative right away. And with neural networks, we can do a lot more A/B testing than before.
For example, for one of the tasks for Age of Magic, we used neural networks to get 40 icons in 40 minutes. Some were filtered out, some we refined, and we ended up with five icons – five visual concepts to test – quickly and without wasting the artist’s resources.
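A minimal sketch of this kind of batch exploration, assuming Stable Diffusion via the diffusers library: generate many variants of one icon idea with different seeds and cherry-pick the strongest for refinement. The prompt, model and counts are illustrative, not the actual Age of Magic setup.

```python
# A sketch of batch icon exploration along the lines of "40 icons in 40 minutes":
# many seeds, one prompt, cherry-pick afterwards. Prompt, model and counts are
# illustrative assumptions rather than the actual Age of Magic setup.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

os.makedirs("icon_variants", exist_ok=True)
prompt = "mobile game app icon, heroic orc warrior portrait, bold colours, centred"

for seed in range(40):
    generator = torch.Generator(device="cuda").manual_seed(seed)
    icon = pipe(prompt, num_inference_steps=25, generator=generator).images[0]
    icon.save(f"icon_variants/orc_{seed:02d}.png")
```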
We often A/B test our creatives in different styles – for example, a particular animation genre or visual trend that is currently popular with a particular audience segment (such as players from China). We try different colour and composition solutions. Regardless of the variation, we can generate creative where characters are positioned in the same pose, angle and scale. For example, we can get several banners with the same composition but with different characters: an orc, an elf, a mermaid or a warrior.
Creative A: AI tools helped create the opening, and for the rest we used our usual 3D pipeline
Creative B: Created using AI tools
Typically, in marketing, it takes three days to develop an icon, but with neural networks we can create six in different styles in two days, which is nine times faster. The same goes for developing banners, screenshots and backgrounds. It’s difficult to give specific figures for standard speed-ups because each case is unique. However, we can confidently say that neural networks save 30-50% of our time and resources, allowing us to run more A/B tests faster.
Our plans for using neural networks in marketing
- Automatic generation of locations for videos so that artists don’t have to assemble them manually. Such a tool already exists in Unreal Engine, and one is being developed in Blender.
- AI mockups and AI character animation: This should help us produce animations that are smoother, cleaner and more visually appealing – and in less time.
- Capture motion and turn it into character animation in the engine. We are already using this approach with move.ai: we capture our movements on the phone, feed them into the Unreal Engine and quickly get unique animations. However, we expect AI tools to become more proficient in this area.
- Fast generation of multiple UI variants. This is useful for A/B testing videos and banners that contain game UI. For example, we’re not sure which colour solution to use, so we’re looking for the right option. Or we want to rebrand an old UI. These are rare cases at the moment, but we hope that neural networks will soon be able to help with more of them.
- Rendering. With the current state of AI tools, this is not easy to achieve, but we are actively exploring this direction.
- Music creation. Our R&D team is already actively testing AI music generators and sees potential for their use in marketing tasks.
We are constantly signing up for beta testing of new AI products to understand how they could potentially be useful in our marketing work.
Neural networks – primarily a tool
Our approach to using AI in marketing production is that neural networks are a new tool that the team uses alongside other solutions, not a robot that does everything for you.
We can create in Figma or Photoshop while quickly sketching ideas or searching for compositional cues in Midjourney, then use the resulting output as a basis for further work.
An important skill to develop when working with neural networks is prompting: how to compose a prompt, where to look for examples, and how certain words affect the output. It’s a bit like learning the interface and hotkeys in Photoshop when you first start working with it. With practice, prompting skills are honed through observation and hands-on experience: for example, on the Midjourney homepage, you can study examples of generations, the styles and prompts used, and the results they produce.
Understanding the risks is also crucial. The legal framework for neural networks is not yet fully developed, so we use them only for rapid iteration and testing, not as a complete result that can be integrated into our products as is. Generated art is still more of a narrow experiment. If we find that a certain concept works, we take it as a reference and rework it significantly, more than halfway. Only then does it take on a finished appearance suitable for further use in our work. Nevertheless, the use of AI tools always increases the speed and efficiency of our work.
Edited by Paige Cook