A couple of years ago a paper appeared on my reading list called "Progressive Growing of GANs for Improved Quality, Stability, and Variation". It describes growing generative adversarial networks gradually, starting from low-resolution images and adding detail as training proceeds. Many publications have covered this work, since the authors used their idea to create realistic and unique images of human faces.
Example images created by the GAN
Looking at these images, it seems that a neural network would need to learn a great many things to produce what this GAN produces. Some of them are relatively simple and easy to state, for example that both eyes should be the same color. But others are fantastically complex and very difficult to formulate: what details, exactly, are needed to tie eyes, mouth, and skin together into a coherent face? Of course, I am speaking of a statistical machine as if it were a person, and our intuition can deceive us; it may turn out that there are relatively few workable variations, and that the space of solutions is more constrained than we imagine. Perhaps the most interesting thing is not the images themselves but the uncanny effect they have on us.
Some time later, my favorite podcast mentioned PhyloPic, a database of silhouette images of animals, plants, and other life forms. That got me wondering: what would happen if you trained a system like the one described in the Progressive GAN paper on such a diverse dataset? Would you get many varieties of a few known kinds of animals, or many variations giving rise to a kind of speculative zoology driven by neural networks? However it turned out, I was sure I could get some good prints for my study wall out of it, so I decided to satisfy my curiosity with an experiment.
I adapted the progressive GAN code and trained the model for 12,000 iterations on Google Cloud (8 NVIDIA K80 GPUs) using the entire PhyloPic dataset. Total training time, including some mistakes and experiments, was 4 days. I used the final trained model to generate 50,000 individual images, and then spent hours looking through the results, categorizing, filtering, and matching them. I also edited some images slightly, flipping them so that all the creatures face the same direction (for the sake of visual satisfaction). This hands-on approach means that what you see below is a kind of collaboration between me and the neural network: it was a creative work, and I made my own changes to it.
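The generation step above amounts to repeatedly sampling latent vectors and running them through the trained generator in batches. Here is a minimal, hypothetical sketch of that loop; `generate` is a placeholder standing in for the real progressive-GAN generator (whose actual API differs), and the latent size, batch size, and output resolution are assumptions for illustration only.

```python
import numpy as np

LATENT_DIM = 512   # assumed latent size (progressive GAN commonly uses 512)
BATCH_SIZE = 64    # assumed batch size for generation

def generate(z):
    # Placeholder for the trained generator: maps a batch of latent
    # vectors to a batch of 128x128 single-channel "images". The real
    # model would run a forward pass of the trained network here.
    return np.zeros((z.shape[0], 128, 128, 1), dtype=np.float32)

def sample_images(num_images, batch_size=BATCH_SIZE, seed=0):
    """Draw `num_images` samples from the generator in batches."""
    rng = np.random.default_rng(seed)
    batches = []
    for start in range(0, num_images, batch_size):
        n = min(batch_size, num_images - start)
        # Latent vectors are drawn from a standard normal distribution.
        z = rng.standard_normal((n, LATENT_DIM)).astype(np.float32)
        batches.append(generate(z))
    return np.concatenate(batches, axis=0)

batch = sample_images(200)
print(batch.shape)  # (200, 128, 128, 1)
```

With a real generator plugged in, the same loop scaled to 50,000 samples would produce the kind of unfiltered image body described above, with each image then saved to disk for review.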
The first thing that surprised me was how aesthetically pleasing the results were. Much of this, of course, reflects the good taste of the artists who created the original images. But there were pleasant surprises too. For example, whenever the neural network enters a region of uncertainty, whether small features it has not yet mastered or flights of blurry biological fantasy, chromatic aberrations appear in the image. This is curious because the input set is entirely black and white, which means color cannot be the solution to any generative problem posed during training. Any color is a pure artifact of the machine's mind. Strikingly, one of the features that consistently triggers chromatic aberration is the wings of flying insects, which leads the model to generate hundreds of variations of brightly colored "butterflies" like those shown above. I wonder whether this could be a useful observation: if you train a model on black-and-white images only, while requiring full-color output, colored patches could be a useful way to highlight regions where the model fails to accurately reproduce the training set.
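That last idea can be made concrete with a simple metric of my own devising (not part of the original experiment): for an RGB image, a truly grayscale pixel has equal channel values, so the per-pixel standard deviation across channels measures how far the model has drifted into color. A sketch, using numpy:

```python
import numpy as np

def chroma_map(img):
    """Per-pixel deviation from grayscale for an RGB image in [0, 1].

    A grayscale pixel has R == G == B, so its channel standard
    deviation is 0; color artifacts show up as nonzero values.
    """
    return img.std(axis=-1)

def color_artifact_score(img, threshold=0.05):
    """Fraction of pixels whose chroma exceeds `threshold`."""
    return float((chroma_map(img) > threshold).mean())

# A perfectly grayscale image scores 0.0 ...
gray = np.tile(np.random.default_rng(0).random((16, 16, 1)), (1, 1, 3))
print(color_artifact_score(gray))  # 0.0

# ... while injecting a colored patch raises the score.
tinted = gray.copy()
tinted[:4, :4, 0] += 0.3  # push the red channel up in one corner
print(color_artifact_score(tinted) > 0)  # True
```

Thresholding such a map over generated output would flag exactly the regions, like the insect wings, where the model departs from its black-and-white training distribution.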
The bulk of the output is a huge variety of fully recognizable silhouettes: birds, various tetrapods, many small graceful carnivorous dinosaurs, lizards, fish, beetles, arachnids, and humanoids.
Once the familiar creatures run out, we encounter unfamiliar things. One of my questions was whether plausible body plans would appear for animals that do not exist in nature (perhaps hybrids of creatures from the input dataset). With a thorough search and a little pareidolia, I discovered hundreds of tetrapods, snake-headed deer, and other fantastic monsters.
Venturing even further into the unknown, the model produced strange abstract patterns and unidentifiable entities that nevertheless convey a certain sense of "aliveness".
What the images above do not convey is the sheer abundance of variation in the results. I printed and framed several of these image sets, and the effect of hundreds of small, detailed images sitting side by side at scale is quite striking. To give some sense of the scope of the full dataset, I include one example printout below: a random sample from the unfiltered body of images.