Neural networks and board games: savings on artists or a tool for creativity?


Prompt: A neural network draws an illustration for a board game box

Neural networks entered our lives not so long ago, but they are already being called the “killer of 1,000 professions”, and illustration is one of them. Since I’m both an illustrator and a board game publisher, I wanted to find out whether they could help me replace myself.

I have tried many generative neural networks, and I can say that they cannot completely replace the artist for me. Crafting a competent prompt and coaxing out a usable result often takes more time than drawing the thing myself would. Still, here are some applications I found not just useful: they made it into games planned for publication this year.

Creating illustrations for game cards

The main task here is to maintain a single style across all the dozens of cards you need, unless you are making a new version of Imaginarium. So to illustrate the cards of chemical elements and compounds in the game “Mendeleev’s Dream”, I chose to generate partial images rather than complete ones: asking a neural network to draw “an Indian yogi with a heart painted on his chest, sitting on an alloy car wheel with two burning sparklers in his hands” more often leads to the result shown below than to the desired one. Instead, I generated the individual assets I needed and then collaged them into a single picture.

Prompt: An Indian yogi with a heart painted on his chest, sitting on an alloy car wheel with two burning sparklers in his hands.

But I liked the result. I used Microsoft Designer (I have early access) and the Adobe Firefly beta, spent several days on prompt selection and “tweaking” the output, and finished everything by hand. Some parts of these illustrations are drawn by hand on a graphics tablet, some are generated by neural networks, and some are reworked from stock photographs. The picture below shows only six of the more than 40 illustrations. They sit on postage-stamp-sized cards, so even the standard generation resolution of 512×512 pixels was enough for me.

Neural network + manual drawing + drawing from ready-made photos and the final collage.

Refinement of game art

In the same game, a neural network was used to refine art originally made in vector graphics. I ran the chemical-group token art through Stable Diffusion (installed locally) with ControlNet to get more realistic images. The only problem is that my Radeon 6500 XT video card has only 4 GB of VRAM, so GPU generation did not work for me (on Windows with AMD cards it does work now, but only haltingly), and each image took 4-5 minutes to generate on the CPU.
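The post doesn’t show the exact settings, but as a rough sketch of this workflow, here is how one might run a token image through Stable Diffusion with ControlNet on the CPU using the diffusers library. The Canny-edge preprocessor, model IDs, file names, and prompt are my assumptions, not the author’s exact setup:

```python
# A minimal sketch of the ControlNet refinement step, assuming a
# Canny-edge ControlNet; model IDs, file names, and the prompt are
# placeholders, not the author's actual settings.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)

# Load a Canny-edge ControlNet and a Stable Diffusion 1.5 base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float32
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float32,
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# No .to("cuda") call: with a 4 GB AMD card the pipeline just runs on the CPU,
# which is where the 4-5 minutes per image come from.

# Turn the original vector token art into an edge map that pins down
# the composition for the new, more realistic render.
token = Image.open("token_halogens.png").convert("RGB").resize((512, 512))
edges = cv2.Canny(np.array(token), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Generate a realistic version that keeps the original shapes.
result = pipe(
    prompt="realistic glass flask with glowing gas, detailed, studio lighting",
    image=control_image,
    num_inference_steps=20,
).images[0]
result.save("token_halogens_realistic.png")
```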

On the left are the original token images; on the right, the results after “tuning” with Stable Diffusion

Making covers for board game boxes

This is where I went completely off the rails. I started by creating a cover for the new edition of the game “Don’t Short the Chain!”. When I began this research, there were few options: the freemium DALL-E 2, Midjourney, and Microsoft’s implementation of DALL-E in the form of Designer. I used the last one. I generated almost 100 different variations of images featuring a robot trying to fix itself (or a person who was supposed to do it).

Four concepts that I liked

After that, I chose one concept, upscaled it to a higher resolution with another neural network, finished it up for use as the box texture, and added the game’s graphic elements. The neural network also generated the texture for the nameplate on the top of the box, and in Adobe Firefly I generated the inscription “Sequential History” styled as wires (how I managed that with only Latin fonts supported is a separate life hack).
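The post doesn’t name the upscaling network used on the chosen concept. One freely available option is Stable Diffusion’s ×4 upscaler from the same diffusers library; here is a minimal sketch under that assumption, with a placeholder prompt and file name:

```python
# A minimal upscaling sketch; the author doesn't name the network used,
# so this assumes Stable Diffusion's x4 upscaler as one possible choice.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float32
)

# A 512x512 concept becomes 2048x2048, enough for print layout work.
concept = Image.open("cover_concept.png").convert("RGB")
upscaled = pipe(
    prompt="board game box cover, robot repairing itself",  # placeholder prompt
    image=concept,
    num_inference_steps=25,
).images[0]
upscaled.save("cover_concept_x4.png")
```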

Final result

The second use of neural networks for the same game was for a crowdfunding video. More precisely, for two of them: a sketch of an electrician was generated in Adobe Firefly (specifically a sketch, since Adobe’s neural network prohibits commercial use of its creations in their original form), then manually reworked and supplemented with missing elements such as wires and a proper tool belt, and finally animated with the D-ID neural network and voiced in the Clipchamp video editor.

My second cover run was for another game in development, this time “Golem Battle. Tournament on Ganymede”. The result can be seen below. It is actually the work of three neural networks: Microsoft’s implementation of DALL-E 2 plus Stable Diffusion, which produced the robots and the crystal in one robot’s hand, and Kandinsky 2.1, which generated the rising Jupiter, the starry sky behind it, and the rocky Ganymede in the foreground.

Combining neural networks and manual labor greatly reduces development time

Creating covers for games is now easier.
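Kandinsky 2.1, which painted the space backdrop for that cover, can also be driven from the same diffusers library. A minimal sketch, assuming the community checkpoint and a placeholder prompt rather than the author’s exact settings:

```python
# A minimal Kandinsky 2.1 text-to-image sketch via diffusers; the
# checkpoint and prompt are assumptions, not the author's settings.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float32
)

# Generate the space backdrop: Jupiter rising over rocky Ganymede.
backdrop = pipe(
    prompt="rising Jupiter over the rocky surface of Ganymede, starry sky, space art",
    negative_prompt="low quality, blurry",
    height=768,  # Kandinsky 2.1's native resolution
    width=768,
).images[0]
backdrop.save("ganymede_backdrop.png")
```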

This is just a small part of how I currently use neural networks. They genuinely save me time both as an artist and as a developer, from searching for ideas to polishing or styling the result of my work. But that doesn’t do away with drawing with a pen or even a pencil on paper. Below are a few more examples of works that were also “flavored” with neural network output.

Trying my hand at the playing field elements and components for the new “Golem Battle”. Work by DALL-E 2.

Stable Diffusion helped make the faces and hands of the “hard workers” more realistic. They never came out well for me.

Making social media posts is now several times faster. An astronaut from Adobe Firefly has been added to the previously shown art.
