The GAN That Knew Too Much

The Maxon Bollocks project was inspired by the techno-social moment we find ourselves in. The human species is at the frontier of an unprecedented age. Artificial intelligence and neural networks assist and drive many aspects of our lives. Today, machine intelligence is inseparable from our own and, in many ways, is on the precipice of surpassing us.

As a team of technicians, coders and creatives, we wanted to use blockchain and emerging technologies to create an artistic statement of this moment. It is a meditation on the value of art, aesthetics, ethics, and the limits of humanity. An experiment in creating and collecting ever-more-sophisticated NFT artworks using a GAN.

Maxon Bollocks is not only a fine art effort; it can also be referenced and used as an educational aid for understanding GAN production.

What is a GAN?

A GAN (or generative adversarial network) is a machine learning framework designed for generative modelling. That’s just a fancy way of saying that GANs are designed to create, rather than to analyze and categorize things. Most neural networks that we interact with on a daily basis are discriminative, that is to say, they are built for analyzing things: when your phone unlocks itself by recognizing your face, when an online shop learns your tastes from your previous purchases and searches, or when a self-driving car distinguishes between traffic lanes.

GANs, on the other hand, are generative – they specialize in producing believable outputs for whatever task they are built and trained for. The goal is to train the GAN on a set of inputs – for example, thousands of images of dogs – and have it end up producing new dog images that are similar to, but distinct from, the training images.

The GAN powering our project was designed to produce fine art images. It begins with a learning set of images that we use to ‘train’ the GAN. The GAN trains by learning the main characteristics of these images and then attempts to produce original artworks inspired by this set.

How does it work?

A GAN is composed of two competing sides constantly locked in a zero-sum game – hence the ‘adversarial’ part of the name. The two sides are called the Generator and the Discriminator: the Generator gradually learns to produce better and better results, while the Discriminator learns to judge the Generator’s outputs and indicate whether they are indistinguishable from the learning data set.

Practically, the Generator’s role is to learn how to ‘trick’ the Discriminator into thinking that it is producing good images. The Discriminator on the other hand gradually learns to judge the images produced by the Generator, and to determine if they correspond to the ones from the learning set.

Both sides start from a blank slate and are trained together, forever in conflict – one side (the Generator) always tries to convince the other that it is producing genuine images, while the Discriminator constantly judges these images for accuracy. By repeating the process enough times, the Generator eventually learns to produce images that pass the Discriminator’s test of genuineness.
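For readers who want to see this loop in code, here is a minimal sketch of one adversarial training step. It uses PyTorch with a toy fully-connected Generator and Discriminator; the layer sizes, learning rate and image dimensions are illustrative assumptions rather than the settings used for Maxon Bollocks.

```python
# Minimal GAN training step (PyTorch). Layer sizes, learning rate and image
# dimensions are illustrative assumptions, not the Maxon Bollocks settings.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: maps random noise to a flat "image" vector.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: maps an image vector to a single real/fake score (a logit).
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the Discriminator: judge real images as real, generated ones as fake.
    noise = torch.randn(batch, latent_dim)
    fake_images = G(noise).detach()          # detach so this step doesn't update G
    d_loss = loss_fn(D(real_images), real_labels) + loss_fn(D(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the Generator: try to make the Discriminator label its output as real.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(D(G(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Repeating this step over many batches is what lets the Generator gradually learn to pass the Discriminator’s test.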

In a way, a GAN mirrors the eternal conflict ingrained in human cognition – our self-reflection and self-awareness can be said to be at the root of our consciousness. While the mind is infinitely more complex than that, the similarity between the duality of the human spirit and the way GANs operate inspired our team.

This duality is built into the human condition. Our civilisation and growth have been powered by twin drivers: good and evil, yin and yang, us and them; to invent and destroy, build and tear down, create and criticise.

Naturally, this conflict has also driven art. New artists produce work that challenges the status quo, and the critics and collectors of the day decide if it qualifies as ‘real art’. The Generator and the Discriminator?

It made sense for us to bring these ideas together in an original experiment: a written narrative inspired and influenced by the constraints and potential of GAN technology. To tell the story of a digital artist.

How we used a DCGAN

Since GANs were first introduced in 2014, several variations and extensions have been implemented, with each bringing a different set of changes to the original design.

The Maxon Bollocks team created our images utilising a DCGAN (Deep Convolutional Generative Adversarial Network), a GAN variant in which the neural networks are structured and optimized for analyzing and generating images.

Convolutional networks are especially suited to computer vision applications, and their structure is often compared to the way the brain performs vision processing in the visual cortex.

While convolutional networks are traditionally used for image analysis, giving both sides (the Generator and the Discriminator) a deep convolutional structure suits the goal of the GAN very well – that is, to generate images and to judge them.
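As a rough illustration of what a ‘deep convolutional’ structure looks like in practice, below is a sketch of a typical DCGAN Generator and Discriminator pair in PyTorch. The 64×64 output size and channel counts follow commonly used DCGAN defaults and are assumptions for illustration, not our exact architecture.

```python
# Sketch of a typical DCGAN pair (PyTorch). Sizes follow common DCGAN defaults,
# not the exact Maxon Bollocks architecture.
import torch.nn as nn

latent_dim = 100  # size of the random noise vector fed to the Generator

# Generator: upsamples a (latent_dim, 1, 1) noise tensor to a 3x64x64 image
# using transposed convolutions.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0, bias=False),  # 1x1 -> 4x4
    nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),         # 4x4 -> 8x8
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),         # 8x8 -> 16x16
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),          # 16x16 -> 32x32
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),             # 32x32 -> 64x64
    nn.Tanh(),
)

# Discriminator: mirrors the Generator with strided convolutions, reducing a
# 3x64x64 image down to a single real/fake score.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1, bias=False),                      # 64x64 -> 32x32
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, 4, 2, 1, bias=False),                    # 32x32 -> 16x16
    nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 256, 4, 2, 1, bias=False),                   # 16x16 -> 8x8
    nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(256, 512, 4, 2, 1, bias=False),                   # 8x8 -> 4x4
    nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(512, 1, 4, 1, 0, bias=False),                     # 4x4 -> 1x1 score
)
```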

Why did we decide to use a DCGAN?

A DCGAN is designed specifically for image production, and we found it was exactly what we needed to produce the images that represent our story. In particular, we were excited about the capacity for customisation in how the network learns from its training set and, in turn, produces its images.

Starting from the standard DCGAN design, we ran extensive training with specific sets of images. This allowed us to incorporate multiple themes in each run, weighting the importance of the various themes differently.
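A hedged sketch of how such theme weighting could be set up: one image folder per theme, combined into a single training set in which each theme’s images are drawn in proportion to the weight assigned to it. The folder names, weights and 64-pixel resolution below are hypothetical, purely to illustrate the idea.

```python
# Sketch of weighting themed image sets within one training run (PyTorch/torchvision).
# Folder names, weights and resolution are hypothetical examples, not our real data.
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize(64), transforms.CenterCrop(64), transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

# One ImageFolder per theme (each folder is assumed to hold its images inside
# at least one subfolder, as ImageFolder expects), combined into a single dataset.
themes = {"theme_a": 0.5, "theme_b": 0.3, "theme_c": 0.2}   # per-theme importance
sets = [datasets.ImageFolder(f"data/{name}", transform=tfm) for name in themes]
combined = ConcatDataset(sets)

# Give every image a sampling weight proportional to its theme's importance.
weights = []
for weight, ds in zip(themes.values(), sets):
    weights += [weight / len(ds)] * len(ds)

sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
loader = DataLoader(combined, batch_size=64, sampler=sampler)
```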


What we tried to achieve with the GAN in relation to the story

To take full advantage of the DCGAN, we wanted to intertwine the story / narrative elements with the technical / graphic parts of the project.

With this in mind, we tried to mimic the behaviour of our narrator and his interaction with the ‘robot’, and to reflect the conflicts that arise between the two during the artistic creation process. The narrator ‘feeds’ the robot a broad range of images, projecting his desires, aspirations and fears onto it. Our vision was for the DCGAN and its adversarial nature to function for the artist the way a muse does.

Starting from a narrative sketch, the story of Maxon Bollocks and the AI ‘M’ was written in tandem with the art it produced. As the narrator and the AI grappled with themes chapter by chapter, we ‘fed’ the network image sets that reflected the conflicts developing in the story. In turn, as the pictures and artwork were produced, we incorporated the (often surprising) results into the narrative itself.

Because we wanted to represent the evolutionary / learning process of both the narrator and his AI, we used pictures produced at various learning stages of the DCGAN. As the story and the sophistication of the DCGAN progress, the graphic output becomes more impressive and more unpredictable. By following the narrative, the reader watches the growth of the digital artist over time, just as they would when reviewing the oeuvre of a human artist.
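To capture those learning stages, the usual approach – and the one sketched below, assuming the `generator` and `latent_dim` from the DCGAN sketch above – is to feed the same fixed batch of noise through the Generator at regular intervals during training and save the resulting image grid; early snapshots are crude, later ones increasingly refined.

```python
# Sketch: snapshot the Generator's output at regular intervals during training.
# Assumes `generator` and `latent_dim` from the DCGAN sketch above; the epoch
# count, snapshot interval and output paths are illustrative.
import os
import torch
from torchvision.utils import save_image

os.makedirs("snapshots", exist_ok=True)
num_epochs = 200
fixed_noise = torch.randn(16, latent_dim, 1, 1)   # reuse the same noise every time

for epoch in range(1, num_epochs + 1):
    # ... one epoch of adversarial training goes here ...
    if epoch % 10 == 0:                            # keep a snapshot every 10 epochs
        with torch.no_grad():
            samples = generator(fixed_noise)
        save_image(samples, f"snapshots/epoch_{epoch:04d}.png", nrow=4, normalize=True)
```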

What we created

We didn’t want the images in our Maxon Collection to represent a single theme; we wanted them to reflect the multiple themes of the story. So we fed the network a broad range of images based on public domain imagery, collaged works we created and other personal images, forming different concepts and creating visual representations of the conflict at the heart of the story.

In this way, the DCGAN helped drive the narrative in the direction it ‘wanted’ to go. The resulting pictures don’t always converge into a single topic but reflect the abstract clash of conflicting themes and desires in the story. We wanted to unlock the potential for abstract art made by a non-human creator, to give the machine room to dream. The result is the Maxon Bollocks NFT collection.

Read the Maxon Bollocks Story here.