Can Midjourney replace designers? Testing the neural network

Take a look at the article's cover. One part was drawn by a designer, the other was generated by the Midjourney neural network.

Many people now admire the quality of illustrations produced by neural networks, so we decided to run an experiment. Can a neural network illustrate texts at the level of designers? Maybe we can save them some time?

The test results, along with the answer to the cover puzzle, are below the cut.



Neural network for artists


Anyone can experiment with Midjourney without waiting for access to the service. Just join the project's Discord channel. Each user gets 25 free requests.

The competing project, DALL-E, takes the opposite approach: before using it, you have to submit a request and wait for a response from the developers. The wait can take over a month. But DALL-E is completely free.

Comparison of illustrations by DALL-E 2 and Midjourney. Source

It is difficult to say which neural network shows better results. Like DALL-E, Midjourney can draw not only amazing images but also incomprehensible and even frightening ones.

How to write a request for the neural network


When creating an illustration, a designer thinks about how best to combine the various elements specified by the author in the brief.

Midjourney works the same way. To get the desired result from the neural network, you need to phrase your request correctly. The Midjourney developers have published tips on how to communicate with the neural network properly. Let’s highlight the main ones.

Write like a child

The wording should be literal: no metaphors, euphemisms, puns, and so on.

Wrong: “Monkeys do business”

Right: “Monkeys are sitting in business suits”

It is better to write requests in English; Midjourney understands other languages worse.

Avoid negatives

Imagine that you need to choose one door out of a thousand, and behind the right one is a chest of gold. Nearby stands an “assistant” who knows exactly where the riches are hidden. You ask him which door to open to get rich, and he replies: “Definitely not 178.” Did his hint make the task any easier?

If you want Midjourney to draw an umbrella of any color except red, you might try using negation. But the developers warn that language models often ignore negative particles, conjunctions, and prepositions (“not,” “but,” “except,” “without”). If you need a blue umbrella, say so directly.

Forget about small details

They can overload the system. There is no need to describe the number of wrinkles on a monkey’s face; try to capture its features in a single word.


Image generation example

To generate an image, connect to the Discord channel, go to one of the newbies rooms, enter the /imagine command, and type your request. A selection of images will be ready within 10-30 seconds.

The process of generating an image for the query “dinosaur”
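As a sketch, the command for the “dinosaur” query above looks like this when typed into the Discord chat (room names and free-request limits may change over time, so treat this as an illustration):

```
/imagine prompt: dinosaur
```

After the bot finishes, the four-image collage appears under your message in the channel.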

When generation reaches 100%, the “U1, U2, U3, U4” and “V1, V2, V3, V4” buttons appear below the collage. The buttons in the first row are for upscaling, that is, improving the quality of a selected image. The buttons in the second row generate pictures “similar” to the selected image from the set.

Upscaling the fourth image

Variations of the fourth image

Midjourney Testing


We decided to check how well the neural network could handle illustrators’ tasks. A similar experiment was conducted by the team at SkillFactory: they tested whether DALL-E could help them do without expensive stock illustrations.

What mattered to us was not the drawing style but the composition Midjourney could come up with. We chose three articles from our blog whose covers had been drawn by designers and formulated requests for new covers. Let’s see what happened.

Rabbit hole

We recently published an article about a long search for and debugging of errors in object storage monitoring. Its cover metaphorically depicts a rabbit hole lined with lines of code, alerts, and various pictograms. The developer dug deep into abstractions, and the designer portrayed exactly that.

We tested several query options.

First request

Rabbit hole with Python program code

First, we tried to describe the general concept for the neural network.

Midjourney managed to replicate the perspective of the hole and even drew a rabbit. But otherwise the images have nothing in common with the original cover.

The neural network also interpreted the mention of the Python programming language in an amusing way: the illustrations contain textures resembling scales, and even snake eggs.

Second request

Python code in the rabbit hole

We added the rabbit explicitly. But the neural network went further and drew a creepy Luntik-like creature hatching from a snake egg.

Third request

Program code in the rabbit hole and rabbit

To keep Midjourney from generating more snake-like rabbits, we decided to drop the mention of Python and wrote it more simply: “program code.”

The neural network generated old CRT monitors (first and third pictures). And while the second picture shows something abstract, the fourth is a straight-up clone of the rabbit from Alice in Wonderland.

But where did the egg in the first image come from? Any guesses? Share your ideas in the comments.

Box with cats

The next stage of testing was generating a cover for an article about machine learning on GPUs in Managed Kubernetes.

The designer’s idea: pictures of cats, generated by a neural network on a GPU, fly out of a box symbolizing a Kubernetes container.

First request

Kubernetes container, photos with cats, machine learning, graphic processing unit

First, we decided to see what Midjourney would come up with if we simply listed the key elements separated by commas.

As expected, the neural network does not know what Kubernetes is and has never heard of containers in IT. The result: a container ship, a container terminal, some shelves, and a photo of an ordinary house cat.

Second request

Box of pictures with cats

Once we realized that Midjourney could not come up with the composition on its own, we made a simple request: “a box with pictures of cats.” This time there were no problems, if you don’t pay attention to the strange-looking cats.

Funnily enough, while working on the article, the author suggested adding cat artifacts: extra legs, strange tails, and so on. The designer doubted the idea, believing that modern neural networks no longer make such mistakes. It turned out otherwise: Midjourney drew cats without eyes.

Cloud on a plate

It seemed the neural network would not show anything better. Besides, this time we needed an illustration of a complex concept: sharing the power of a virtual CPU.

Blog article cover

The designers approached the task creatively: they drew a sliced cloud on a plate. But what would the neural network come up with?

Request

Virtual CPU, power sharing, cloud operations, shared line

It was pointless to describe the whole still-life idea: the request would come out vague and too long. So we again “fed” the neural network a plain sequence of key elements.

The result surprised us. The palette and graininess of some of the images are very reminiscent of the pictures drawn by Selectel designers.

True, only the first illustration has a meaningful composition. The cloud appears to rest on a square plate, which we didn’t even mention in the request.

Result

We wondered what would happen if we generated additional variants of the first picture. To do this, we clicked the V1 button.

The idea of a sliced cloud is conveyed especially accurately in the first illustration. After upscaling, we got an even more detailed version of it.

The result impressed us so much that we decided to turn the title picture into a little game with you. Here is the answer: the left side was generated by Midjourney, and the right side was completed by the designer.



Is the neural network a competitor? The lead designer’s opinion


A neural network can help with the search for concepts: it can suggest an option that steers the designer’s thinking in a non-trivial direction.

But illustrations still need to be created by designers. After all, more is invested in a person’s work than in a random machine drawing. We think through whole plots and metaphors that reinforce the company’s blog with meaning and beauty. So far, only designers can reflect the brand identity in an attractive way, says Alina Ekizashvili, Head of Design at Selectel.

It is still difficult to say whether neural networks will replace designers. You could teach Midjourney and DALL-E to make illustrations in a company’s style, but if the brand book changes, the neural networks would have to be retrained. That requires a dataset of examples, which someone still has to draw. Whether a profession will emerge at the intersection of design and data science is an open question.

For now, and in the foreseeable future, it seems Selectel will not need Midjourney as a freelancer. But if you need the service and have exhausted the free limit, leave a request in the comments and we will help.
