A couple of years ago, a paper appeared on my reading list called “Progressive Growing of GANs for Improved Quality, Stability, and Variation”. It describes growing generative adversarial networks gradually: training starts on low-resolution images, and the level of detail is increased as training progresses. The paper attracted a great deal of attention because the authors used their approach to create realistic and unique images of human faces.
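The core idea of the progressive schedule can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the resolutions and the linear fade-in function are my own simplifications of the general scheme (new, higher-resolution layers are blended in gradually rather than switched on all at once).

```python
# Sketch of a progressive-growing schedule: train at each resolution in
# turn, doubling from a small base up to the final size, and fade newly
# added layers in smoothly. Values here are illustrative assumptions.

def resolution_schedule(start=4, final=1024):
    """Yield the sequence of image resolutions used during training."""
    res = start
    while res <= final:
        yield res
        res *= 2

def fade_in_alpha(step, fade_steps):
    """Blend factor for a newly added layer.

    0.0 means only the previous (lower-resolution) output is used;
    1.0 means the new layer has fully taken over.
    """
    return min(1.0, step / fade_steps)

print(list(resolution_schedule()))
# [4, 8, 16, 32, 64, 128, 256, 512, 1024]
print(fade_in_alpha(500, 1000))   # 0.5, halfway through the fade-in
```

In the actual method, the generator and discriminator both grow in step with this schedule, and the blend factor mixes the upsampled old output with the new layer's output during the transition phase.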
Example images generated by the GAN
Looking at these images, it seems a neural network would need to learn an enormous amount to produce what this GAN produces. Some of the requirements are relatively simple and easy to state, for example, that both eyes should be the same color. But other aspects are fantastically complex and very hard to articulate: what exactly is needed to tie the eyes, mouth, and skin together into a coherent face? Of course, I am anthropomorphizing a statistical machine here, and our intuition can deceive us: it may turn out that there are relatively few workable variations, and that the space of solutions is more constrained than we imagine. Perhaps the most interesting thing is not the images themselves but the uncanny effect they have on us.
Some time later, my favorite podcast mentioned PhyloPic, a database of silhouette images of animals, plants, and other life forms. That got me wondering: what would happen if you trained a system like the one described in the Progressive GAN paper on such a diverse dataset? Would we get many varieties of a few recognizable kinds of animals, or many variations that amount to a kind of speculative zoology driven by neural networks? However it turned out, I was sure I could get some good prints for my study wall out of it, so I decided to satisfy my curiosity with an experiment.