PACK ⬕ Mask Oblivion
- This pack contains 51 VJ loops (29 GB)
Just in time for Halloween. Let's get freaky!
With the recent release of Stable Diffusion, I can finally create large image datasets from scratch much more easily. I've done some experiments with DALL-E 2, but it's just too much of a slog to manually save four images at a time, even though DALL-E 2 follows text prompts more precisely.
So I downloaded NMKD Stable Diffusion GUI and started experimenting with text prompts that I found over on Krea.ai. From there I tweaked the prompt until I was satisfied it would consistently produce images with a similar composition. I then rendered out 4,129 images over a few overnight sessions, went through all of them by hand, and deleted any weird outliers. Having Stable Diffusion on my own computer is a game changer, since it means I no longer have to waste tons of time gathering and preparing images by hand.
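If you'd rather script the overnight batch render than click through a GUI, here's a minimal sketch of the same idea using the Hugging Face diffusers library. My renders actually came out of the NMKD GUI, and the prompt, output folder, and model ID below are just placeholders, not my real settings.

```python
# Minimal batch-generation sketch with Hugging Face diffusers.
# Assumptions: diffusers + torch installed, a CUDA GPU, placeholder prompt and paths.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "decaying masked figure, volumetric light, 35mm film"  # placeholder prompt
os.makedirs("dataset", exist_ok=True)

# One seed per file so every image in the overnight run is reproducible.
for seed in range(4129):
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"dataset/{seed:05d}.png")
```

From there the curation pass is the same: eyeball the folder and delete the outliers.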
After that I prepared the image dataset and retrained the FFHQ 512x512 model using StyleGAN2. Because Google Colab recently added compute credits, I was able to select the premium GPU class and get an A100 node for the first time, which meant the retraining finished overnight instead of taking several days. Then, as per my usual workflow, I rendered out videos as 512x512 MP4s.
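The retraining step is really just transfer learning from the pretrained FFHQ checkpoint. A rough sketch of how that looks with NVIDIA's stylegan2-ada-pytorch repo is below; the paths, kimg budget, and snapshot interval are assumptions rather than my exact settings.

```python
# Sketch of dataset prep + transfer learning, assuming stylegan2-ada-pytorch is cloned
# locally and you're running from its root. Paths and training length are placeholders.
import subprocess

# Pack the curated PNGs into the zip format the training script expects.
subprocess.run([
    "python", "dataset_tool.py",
    "--source=dataset",
    "--dest=datasets/mask_oblivion_512.zip",
], check=True)

# Resume from the pretrained FFHQ 512x512 weights so a single A100 can finish overnight
# instead of training from scratch for days.
subprocess.run([
    "python", "train.py",
    "--outdir=training-runs",
    "--data=datasets/mask_oblivion_512.zip",
    "--gpus=1",
    "--cfg=paper512",
    "--resume=ffhq512",   # pretrained FFHQ 512x512 preset
    "--kimg=1000",        # placeholder training budget
    "--snap=10",
], check=True)
```

The resulting network pickle is what I feed into my usual latent-walk video renders.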
An interesting development for me has been the ScaleUp plugin in After Effects. I've always had to pre-process the 512x512 MP4s in Topaz Labs Video Enhance AI to uprez them to 2048x2048 first. But ScaleUp produces very similar results, and I can start compositing experiments instantly in After Effects without needing to uprez at all. That saves me a substantial chunk of time, creative energy, and hard drive space. Skeletor would be proud.
Released October 2022