AImoji: AI-generated Emoji

Generating new Emoji with Deep Learning

What happens when you train an AI system to create Emoji?

Using a Deep Convolutional Generative Adversarial Network (DCGAN) and a dataset of 3,145 individual, commonly used Emoji as input, we trained our model for 25 epochs to come up with new ones.
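
To give a sense of what such a setup involves, here is a minimal DCGAN sketch in TensorFlow/Keras. The project itself builds on Taehoon Kim's TensorFlow DCGAN implementation (credited below); the architecture, image size, and hyperparameters shown here are illustrative assumptions, not the exact AImoji configuration.

```python
# Minimal DCGAN sketch (TensorFlow/Keras). Architecture and hyperparameters are
# illustrative assumptions, not the exact AImoji configuration.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100   # length of the random noise vector fed into the generator
IMG_SIZE = 64      # emoji assumed to be resized to 64x64 RGB for training

def build_generator():
    # Upsamples a noise vector to a 64x64x3 image via transposed convolutions.
    return tf.keras.Sequential([
        layers.Input(shape=(LATENT_DIM,)),
        layers.Dense(4 * 4 * 512),
        layers.Reshape((4, 4, 512)),
        layers.Conv2DTranspose(256, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    # Downsamples an image to a single real/fake logit via strided convolutions.
    return tf.keras.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Conv2D(64, 5, strides=2, padding="same"), layers.LeakyReLU(0.2),
        layers.Conv2D(128, 5, strides=2, padding="same"), layers.LeakyReLU(0.2),
        layers.Conv2D(256, 5, strides=2, padding="same"), layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1),
    ])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(generator, discriminator, g_opt, d_opt, real_images):
    # One adversarial step: the discriminator learns to tell real emoji from
    # generated ones, while the generator learns to fool the discriminator.
    noise = tf.random.normal([tf.shape(real_images)[0], LATENT_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fakes, training=True)
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```

In practice one would build the two models, create Adam optimizers (the original DCGAN paper suggests a learning rate of 0.0002 and beta_1 = 0.5), and loop train_step over batches of the scraped Emoji for the desired number of epochs.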

The resulting faces and their expressions range from the expected (happy, sad, angry) and the merely weird-looking to the flabbergastingly horrifying.

Make sure to check out our Instagram page for upcoming results and variations.

In-between/artificial emotions

The animations show the training process and the transitional morphing between shapes, with every frame being a newly created Emoji.
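
A common way to produce such in-between expressions with a GAN is to interpolate between two points in the generator's latent space and decode every blend (the approach used by Neural Face, referenced below). Whether the AImoji animations were made this way or simply from successive training samples is an assumption here; the sketch reuses `build_generator` and `LATENT_DIM` from the sketch above.

```python
# Illustrative sketch: morph between two emoji by linearly interpolating their
# latent vectors. Each decoded blend is a new, never-before-seen in-between Emoji.
import numpy as np

def latent_morph(generator, steps=30):
    z_a = np.random.normal(size=(LATENT_DIM,)).astype("float32")
    z_b = np.random.normal(size=(LATENT_DIM,)).astype("float32")
    alphas = np.linspace(0.0, 1.0, steps, dtype="float32")
    zs = np.stack([(1 - a) * z_a + a * z_b for a in alphas])
    return generator.predict(zs)   # shape: (steps, 64, 64, 3), values in [-1, 1]

# Untrained weights here for illustration; in practice, pass the trained generator.
frames = latent_morph(build_generator())
```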

Further down are a video and a small interactive map for exploring the training process and its output: from vague, noisy blur to concrete forms and ever more distinct shapes and faces.

Trying to make (some) sense of the results

We tried to categorize some of the results. Most generated Emoji are a mixture of weird, new-looking expressions, but here are examples labeled Neutral, Horror, Happy, and Sad.

To be fair, it’s pretty difficult to draw the line since there’s a visual twist to nearly all of them. After all, this ambiguity is what makes the results so appealing to us.

The complete training process

Here are some samples from our output. Note the evolution in the beginning, from noisy nonsense to face-like structures. The video shows the complete training process: 25 epochs of training, which took 14 hours on a laptop without GPU acceleration. The video contains every 8th generated image, for a total of 10,040 frames.
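
For reference, a video like this could be stitched together from the sample images saved during training, keeping every 8th frame. The file paths and frame rate below are assumptions; the actual export pipeline is not documented here.

```python
# Hedged sketch: assemble a training video from saved sample images, keeping every
# 8th generated frame. Writing .mp4 requires the imageio-ffmpeg plugin.
import glob
import imageio

samples = sorted(glob.glob("samples/*.png"))        # images saved during training (assumed path)
with imageio.get_writer("training_process.mp4", fps=30) as writer:
    for path in samples[::8]:                       # keep every 8th generated image
        writer.append_data(imageio.imread(path))
```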

Interactive AImoji Map

The following map shows the first 10,000 images from the training process, arranged 100 per row (a sketch of how such a grid can be tiled together follows the list below).

  • Hover over the map to see bigger emoji.
  • Press and hold the left mouse button to zoom in a bit.
  • Double click to view the map full-screen. (Hotkey: F)
  • Right click to save an emoji. (Hotkey: S)
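
As a rough illustration of how the underlying grid image could be assembled from the saved samples (an assumption about the map's source image, not a description of the interactive viewer itself), a simple montage with Pillow might look like this; paths and tile size are placeholders.

```python
# Hedged sketch: tile the first 10,000 sample images into a grid, 100 per row,
# matching the layout of the map described above. Paths and tile size are assumptions.
import glob
from PIL import Image

TILE = 64          # assumed pixel size of each emoji tile
PER_ROW = 100      # 100 emoji per row, as in the map

paths = sorted(glob.glob("samples/*.png"))[:10000]
rows = (len(paths) + PER_ROW - 1) // PER_ROW
sheet = Image.new("RGB", (PER_ROW * TILE, rows * TILE), "white")
for i, path in enumerate(paths):
    tile = Image.open(path).convert("RGB").resize((TILE, TILE))
    sheet.paste(tile, ((i % PER_ROW) * TILE, (i // PER_ROW) * TILE))
sheet.save("aimoji_map.png")
```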

This work is part of the graphic concept and communication design for the upcoming exhibition “UNCANNY VALUES. Artificial Intelligence & You”, a project of the MAK Vienna in the context of the Vienna Biennale for Change 2019: Brave New Virtues. Shaping Our Digital World (29 May – 6 October 2019).

Code based on: TensorFlow implementation of DCGAN by Taehoon Kim
Dataset scraped from: Emojipedia

References:
Original DCGAN Paper
Original DCGAN implementation (Torch)
Neural Face by Taehoon Kim
Introduction to GANs (Chapter 1 of ‘GANs in Action’ by Jakub Langr and Vladimir Bok)
Convolutional Neural Networks (Chapter from ‘Machine Learning for Artists’ by Gene Kogan)
