Within the current Machine Learning Revolution, so-called convolutional networks (ConvNets) can now recognize objects within photographic images. This might seem like a trivial addition to already existing computer and online service capabilities, but it is a crucial step in how humans can interact with representation and depiction. The enhanced agency of differentiating a cow from a horse is rapidly developing into recognizing specific people and their moods, and into interpreting behaviour.
If we all collectively imagine a candle, what would come out? Thousands, millions of images of candles are photographed and archived, representing so many aspects of our lives: dinner conversations, social relationships, health, design, wealth, culture. Reverse engineering the so-called ‘neural networks’ that Facebook and Google use to recognize image content shows, however, what these networks understand of us so far: a cold, machine-like interpretation of what we prefer to see and what we depict when representing a restaurant, a fire, a refrigerator or a handkerchief. A collective visual consciousness learning to recognize the gradients, saliency, angles, curves and hues of every visual concept we can imagine. It shows what translates of our culture to machine understanding at this moment. Every day, new interpretation skills are outsourced to a neural network, and every month shows us new applications of creative labour taught to a machine. Security cameras recording to the cloud are feeding the network to learn what evil is, based on statistics. We are outsourcing judgement and prejudice to facts interpreted by rules. It is not our decision; it is the network's decision, based on learning from all the facts in the world.
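The ‘reverse engineering’ described above is commonly done by activation maximisation: starting from noise and nudging the input, by gradient ascent, until the network's score for one class is as high as possible — the resulting image is what the network ‘imagines’ that class to be. The sketch below is a minimal, hypothetical illustration of that principle using a toy linear classifier with random weights; it is not the actual model or method used by Facebook, Google, or Dullaart.

```python
import numpy as np

# Toy stand-in for a trained classifier: a frozen linear map from
# "pixels" to class scores. (Assumption for illustration only.)
rng = np.random.default_rng(0)
n_pixels, n_classes = 64, 10
W = rng.normal(size=(n_classes, n_pixels))

def class_score(image, class_idx):
    """Score the network assigns to `image` for one class."""
    return W[class_idx] @ image

def synthesise(class_idx, steps=200, lr=0.1):
    """Activation maximisation: gradient ascent on the input image."""
    image = rng.normal(scale=0.01, size=n_pixels)  # start from noise
    for _ in range(steps):
        # For a linear score, the gradient w.r.t. the input is simply
        # the class's weight row; a real ConvNet would backpropagate here.
        grad = W[class_idx]
        image += lr * grad
        image = np.clip(image, -1.0, 1.0)  # keep "pixels" in a valid range
    return image

img = synthesise(class_idx=3)
```

In a real pipeline the linear score is replaced by a deep network's class logit and the gradient is obtained by backpropagation; regularisers (blurring, jitter) are what give the published visualisations their painterly look.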
In a way these synthesised images, based on photographic representation, show a mechanised version of the collective consciousness of Western societies. These current state-of-the-art synthesised images look quite painterly, very surreal, copying the human sense of documentation and composition, making each visualised class a deadly but sympathetic rendering of the gestalt of the class in question.
This mechanical depiction of a concept questions our understanding of depiction in general, especially when positioned within a timeframe of rapid development of these convolutional networks. The sheer amount of money, education, talent and computing power that Facebook, Google, OpenAI and others are throwing at these developments does not suggest anything other than a future in which these techniques will be utilised and rapidly developed beyond the current painterly, even beautifully naive level.
Selected to show the trivial, banal technicalities of human life, and the convolutional network's vision of its own parts, the slightly naive yet cold and surreal depictions were sent by Dullaart to painting factories in Dafen Village, Shenzhen, China, to be translated into oil paintings on canvas, continuing the image automation process with outsourced human labour. The canvases, delivered by TNT Express, were treated with an automotive clear coat mixed with ghost pearls, normally used in car paint and product design, amplifying the mechanically attractive, adversarially authentic compositions.