A Generative Adversarial Network (GAN) was trained on a dataset of 512 fonts (as images) and used to generate 100 new font images. From these 100, 10 were selected to represent the full range of results, and on the basis of those 10 images fully functional fonts were then built in Glyphs.
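The adversarial training behind this pipeline can be sketched in miniature. The toy below is a hypothetical illustration, not the project's actual model: instead of font images, the "real" data is a 1-D Gaussian, the generator is a simple affine map, and the discriminator is a logistic classifier, trained with hand-derived gradients so the generator/discriminator tug-of-war is visible in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1), standing in for the font images.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0      # generator parameters
w, c = 0.0, 0.0      # discriminator parameters
lr, n = 0.02, 64

for step in range(3000):
    # --- discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    z = rng.normal(size=n)
    x_real, x_fake = real_batch(n), a * z + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. the logits
    g_real, g_fake = -(1.0 - d_real), d_fake
    w -= lr * np.mean(g_real * x_real + g_fake * x_fake)
    c -= lr * np.mean(g_real + g_fake)

    # --- generator step: non-saturating loss -log D(fake) ---
    z = rng.normal(size=n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    g_logit = -(1.0 - d_fake)           # dL/d(logit)
    a -= lr * np.mean(g_logit * w * z)  # chain rule through x_fake = a*z + b
    b -= lr * np.mean(g_logit * w)

# After training, the generator's output distribution should have
# drifted from mean 0 toward the real data's mean of 4.
fake_mean = float(np.mean(a * rng.normal(size=10000) + b))
print(round(fake_mean, 1))
```

In the actual project this loop runs over images with convolutional networks, but the structure is the same: the discriminator learns to tell real fonts from generated ones, and the generator learns to fool it.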
Machines, of course, do not speak our language, but are trained to decipher it.
But do the machines even understand what they decipher?
Part of my theoretical diploma project exploring GAN-based font creation and Optical Character Recognition / Machine Reading Comprehension technologies.