PACK ⬕ Machine Eyes
- This pack contains 9 VJ loops (6 GB)
Welcome to the uncanny valley! Here we have a selection of human eyes so that you can watch your audience right back from the front of the stage. Finally, the literal all-seeing machine eye. These videos are the result of training StyleGAN3 on a dataset of 217 images.
Machine learning has long intrigued me, since I've always been curious about different methods of interpolation. I find the results are often evocative and almost always different from what I initially anticipate. So naturally I've wanted to explore machine learning for art purposes and play with the line between realism and the uncanny. Yet the GPU requirements have been too heavy and the results too low-res, so I've been waiting for the tech to mature... And that time has finally arrived!
My mind really started reeling when StyleGAN2 was released, so I ran some experiments on the feasibility of training at home. But then I stumbled across Google Colab, and at first I thought it was really too good to be true... Cheap access to high-end GPUs? It felt like a sudden leap into the future. Utilizing a Tesla P100 GPU node on Google Colab, I would typically get interesting results after about 12 to 48 hours of retraining, since I'm after surreal and glitchy visuals rather than fully converged realism.
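For anyone who wants to try this, here's roughly what one of those Colab training cells looks like. Take it as a minimal sketch assuming the official NVlabs/stylegan3 repo; the dataset path, pretrained checkpoint, and hyperparameters below are placeholders, not my exact settings.

```python
# Minimal Colab sketch: fine-tune StyleGAN3 from a pretrained pickle.
# All paths and hyperparameters are placeholders.
!git clone https://github.com/NVlabs/stylegan3.git
%cd stylegan3
!python train.py \
    --outdir=/content/drive/MyDrive/training-runs \
    --cfg=stylegan3-t \
    --data=/content/datasets/eyes-512x512.zip \
    --gpus=1 --batch=32 --batch-gpu=4 --gamma=8.2 --mirror=1 \
    --resume=/content/pretrained/stylegan3-t-afhqv2-512x512.pkl \
    --snap=10 --kimg=2000
```

The --resume flag is what makes the 12 to 48 hour window realistic: you're retraining an existing model rather than starting from noise.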
I haven't seen much shared about training with really tiny datasets. I've found that datasets of 1,000 to 2,000 images end up with a decent amount of interpolative potential. In the 200 to 500 image range, though, I had to ride the line of avoiding mode collapse: the generated visuals would start to repeat themselves, so I'd overcome that by hand-selecting the seeds prior to rendering out the latent walk video and manually arranging the gems into a specific order (sketched below). Even this method fell apart with datasets of fewer than 200 images, so that was really the absolute minimum necessary, which I found surprising but perfect for my needs.
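To make that curation step concrete, here's the shape of the workflow in code. A sketch only: gen_images.py and the helper modules ship with the StyleGAN3 repo, but the checkpoint name and seed list are invented placeholders, and the latent walk is a simplified linear interpolation rather than my exact render script.

```python
# Step 1 (sketch): render one still per seed, eyeball the grid, and note
# the keepers. gen_images.py is part of the stylegan3 repo.
!python gen_images.py --network=network-snapshot-000400.pkl \
    --seeds=0-999 --trunc=0.7 --outdir=/content/stills

# Step 2 (sketch): a simple linear latent walk through the hand-picked
# seeds, wrapping back to the first so the clip cycles seamlessly.
import os
import numpy as np
import PIL.Image
import torch
import dnnlib, legacy  # helper modules bundled with the stylegan3 repo

device = torch.device('cuda')
with dnnlib.util.open_url('network-snapshot-000400.pkl') as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)

keepers = [4, 17, 62, 203, 311]  # placeholder seeds, curated by eye
zs = [np.random.RandomState(s).randn(1, G.z_dim) for s in keepers]
label = torch.zeros([1, G.c_dim], device=device)  # unconditional model

os.makedirs('/content/frames', exist_ok=True)
frame = 0
for z0, z1 in zip(zs, zs[1:] + zs[:1]):              # loop back to seed #1
    for t in np.linspace(0, 1, 60, endpoint=False):  # 60 frames per leg
        z = torch.from_numpy((1 - t) * z0 + t * z1).to(device)
        img = G(z, label, truncation_psi=0.7, noise_mode='const')
        img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
        PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save(f'/content/frames/{frame:06d}.png')
        frame += 1
```

Arranging the order of the keepers list is where the hand-curation pays off, since each neighbouring pair becomes one leg of the walk.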
In the beginning I was tinkering with a few Colab Notebooks to try and understand the basic pitfalls, but most people are using Colab for generating media from models that have already been trained. So a huge thanks goes out to Artificial Images for sharing their training-focused Notebooks, workshops, and inspiration. One workshop in particular was helpful in answering questions that I'd been wondering about but hadn't seen addressed elsewhere. Getting the StyleGAN2 repo running on Colab proved frustrating, until I realized that the StyleGAN3 repo includes support for both techniques and is a more mature codebase.
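In practice, that support for both techniques boils down to a single flag: train.py picks the architecture with --cfg. A sketch with placeholder paths:

```python
# One repo, both generations; only --cfg changes (paths are placeholders).
!python train.py --outdir=runs --data=eyes-512x512.zip --gpus=1 \
    --batch=32 --gamma=8.2 --cfg=stylegan2      # classic StyleGAN2
!python train.py --outdir=runs --data=eyes-512x512.zip --gpus=1 \
    --batch=32 --gamma=8.2 --cfg=stylegan3-t    # alias-free StyleGAN3
```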
Initially I was frustrated about being limited to 512x512, the resolution where the retraining times stay realistic. But then I did some uprez testing with Topaz Labs Video Enhance AI and the results blew me away. I was able to uprez from 512x512 to 2048x2048 and it looked sharp, with lots of enhanced detail.
Collecting, curating, and preparing my own custom image datasets took a solid 2 months. Then 1 month was dedicated to retraining, and finally 1 month to generating the latent walk videos and experimenting with compositing. That explains why I haven't released any packs recently... and it also means I have a bunch more machine learning packs coming up.
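For the curious, the last step of that dataset prep funnels everything through the repo's dataset_tool.py to produce the zip archive train.py expects. A one-liner sketch with placeholder paths:

```python
# Sketch: pack cleaned source images into a 512x512 zip for train.py.
!python dataset_tool.py --source=/content/raw-eyes \
    --dest=/content/datasets/eyes-512x512.zip --resolution=512x512
```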
Released January 2022