PACK ⬕ Cyborg Fomo
- This pack contains 78 VJ loops (61 GB)
The robots aren't coming; they're already here in our pockets. The machine learning revolution has only just begun.
I've long wanted to visualize a robot shaped like a chimpanzee. So for the "Chimpanzee" scenes I nailed down an accurate text prompt in Stable Diffusion v1.5, loaded up two instances of A1111, and had both of my GPUs render out a total of 24,955 images. Then I took this dataset and did some transfer learning using StyleGAN2 until 6988kimg, which is quite a bit more training than I normally need but I think it was required given the amount of variety in the robotic monkey faces. It converged rather nicely, although some strange small blob artifacts are occasionally visible, which I believe are the result of the model trying to learn the machinery details of the dataset. But I thought these blobs actually looked as if the tech was bubbling underneath and trying to escape. Diego added the Flesh Digression scripts to the StyleGAN3-fun repo, so those were fun to experiment with. I also tried transfer learning SG3, but the variety in the dataset proved too difficult, and I wished I had trusted my gut since SG3 takes twice as long to train.
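For the curious, here's a rough sketch of the dataset render, assuming the diffusers library rather than the two A1111 instances I actually ran; the prompt, paths, and per-GPU counts are stand-ins. The idea is simply to run one copy of the script per GPU:

```python
# Rough sketch, assuming diffusers (not the A1111 setup I actually used);
# prompt, paths, and per-GPU counts are stand-ins. Run once per GPU.
import os
import torch
from diffusers import StableDiffusionPipeline

device = os.environ.get("DEVICE", "cuda:0")  # e.g. DEVICE=cuda:1 for GPU two
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)

prompt = "robot chimpanzee face, intricate machinery, studio portrait"  # stand-in
count = 12_500                                 # roughly half of the ~25k total
offset = 0 if device.endswith("0") else count  # keep file names from colliding

os.makedirs("dataset", exist_ok=True)
for i in range(count):
    # No fixed seed: for a training dataset we want every image different
    image = pipe(prompt, guidance_scale=7.5).images[0]
    image.save(f"dataset/{offset + i:06d}.png")
```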
From there I thought it would be interesting to further explore the SD stop motion technique. I rendered a few of the SG2 Chimpanzee videos out to frames, explored some circuit board monkey text prompts, then fed the frames into SD v1.5 img2img with a Denoising Strength of 0.6. I think the jittery feeling of the stop motion matches the feeling of wild technological evolution really well, and I'm amazed by how well SD reacts to being fed imagery that is similar to the text input. I then took each of these stop motion videos into Topaz, interpolated from 30 to 60fps, and uprezzed to 2K.
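A minimal sketch of that per-frame img2img pass, again assuming diffusers instead of the A1111 webui I actually used (the frame paths and prompt are stand-ins):

```python
# Minimal sketch of the stop motion pass, assuming diffusers;
# frame paths and prompt are stand-ins.
import glob
import os
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "circuit board chimpanzee face, intricate wiring"  # stand-in

os.makedirs("sd_frames", exist_ok=True)
for i, path in enumerate(sorted(glob.glob("sg2_frames/*.png"))):
    frame = Image.open(path).convert("RGB").resize((512, 512))
    # Re-seed every frame so the imagined details stay consistent in
    # character; strength=0.6 leaves room to hallucinate the prompt
    # while still following the input frame
    gen = torch.Generator("cuda").manual_seed(1234)
    out = pipe(prompt=prompt, image=frame, strength=0.6,
               guidance_scale=7.5, generator=gen).images[0]
    out.save(f"sd_frames/{i:05d}.png")
```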
I'm enamored with the SD stop motion technique, so I grabbed some human faces and eyes videos from my prior packs and used them as fodder for creating some wild cyborgs and chrome-covered people. I had some trouble getting the model to represent all skin tones equally within a single text prompt, so instead I rendered out three different videos for black, brown, and white skin tones. I'm amazed by how reflective the chrome metal looks, which is bizarre, because what is it reflecting within the imagination of SD? Somewhere in the depths of all of those nodes it has some notion of what reflective metal typically looks like. Baudrillard would no doubt be rolling his eyes right now.
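The workaround itself is nothing fancy: the same stop motion pass, run once per prompt variant. Sketched with diffusers again (prompts and paths are stand-ins):

```python
# Per-skin-tone workaround sketch, assuming diffusers; one stop motion
# pass per prompt variant. Prompts and paths are stand-ins.
import glob
import os
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

PROMPTS = {  # stand-in prompt variants, one render per tone
    "black": "chrome cyborg portrait of a Black person, reflective metal",
    "brown": "chrome cyborg portrait of a brown-skinned person, reflective metal",
    "white": "chrome cyborg portrait of a white person, reflective metal",
}

for tone, prompt in PROMPTS.items():
    os.makedirs(f"chrome_{tone}", exist_ok=True)
    for i, path in enumerate(sorted(glob.glob("face_frames/*.png"))):
        frame = Image.open(path).convert("RGB").resize((512, 512))
        gen = torch.Generator("cuda").manual_seed(7)  # same seed across tones
        out = pipe(prompt=prompt, image=frame, strength=0.6,
                   guidance_scale=7.5, generator=gen).images[0]
        out.save(f"chrome_{tone}/{i:05d}.png")
```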
But I think the real gem of this pack is the "Implants" scenes, where people have installed all sorts of cameras, cell phones, wires, and such onto their faces. It was important to me for the faces to be smiling, because I often feel the rush of non-stop tech advancements and yet there is so little time to slow down. So for me the smile represents the overwhelming feeling of having to join the wave or be left behind, with the societal expectation to simply accept it and enjoy it. Again I injected some SG2 human faces into SD v1.5; with the Denoising Strength tuned just right, the model has enough room to imagine the text prompt while still following the input frames somewhat reliably. Sometimes more Denoising Strength is needed, sometimes less. I also rendered out some stop motion videos of human eyes that have camera lenses for irises, darting all around. I really enjoy this technique since happy accidents are at its core.
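Since the sweet spot varies from clip to clip, a quick strength sweep on a single frame is a cheap way to dial it in before committing to a full render. A sketch, once more assuming diffusers (the file name and prompt are stand-ins):

```python
# Strength sweep sketch, assuming diffusers; in A1111 the equivalent is
# nudging the img2img Denoising Strength slider. File name and prompt
# are stand-ins.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

face = Image.open("sg2_face_frame.png").convert("RGB").resize((512, 512))
prompt = "smiling person with cameras and wires implanted in their face"

for strength in (0.4, 0.5, 0.6, 0.7):
    # Same seed at every strength, so only the denoising amount changes
    gen = torch.Generator("cuda").manual_seed(42)
    out = pipe(prompt=prompt, image=face, strength=strength,
               guidance_scale=7.5, generator=gen).images[0]
    out.save(f"implant_strength_{strength}.png")
```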
For the "Wires" and "Circuits" scenes, these are Disco Diffusion experiments from before Stable Diffusion was even released. They have a very different flavor, more dreamlike and suggestive, which I find to be evocative and yet they were very difficult to find a reliable text prompt since I expect that those models received very limited training in both time and scope.
Why stop at the human face, eh? I took some videos from the Nature Artificial pack and did some more SD stop motion experiments. I think these were successful for the simple fact that circuit boards look similar to green leaves, and wires look similar to roots. So I'm able to leverage the fact that SD has studied every imaginable word, understands how to visualize it, and can therefore interpolate between almost anything... but it excels at interpolating between things that already look similar. Adding some heavy glow to these in AE gave the videos a wonderful electric feeling that I think further enhances the blend of mother nature combining with human science.
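AE's Glow effect has its own behavior, but if you want to approximate that heavy glow pass in code, the classic threshold-blur-screen bloom gets you most of the way there. A rough sketch assuming OpenCV and NumPy:

```python
# Rough bloom approximation of an AE-style glow pass; AE's Glow effect
# works differently, this is just the threshold + blur + screen idea.
import cv2
import numpy as np

def bloom(frame_bgr, threshold=200, blur_size=51, intensity=0.8):
    # Keep only the bright parts of the frame
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    bright = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    # Blur the bright areas so they spill past their edges
    glow = cv2.GaussianBlur(bright, (blur_size, blur_size), 0)
    # Screen-blend the glow back over the original frame
    f = frame_bgr.astype(np.float32) / 255.0
    g = glow.astype(np.float32) / 255.0 * intensity
    out = 1.0 - (1.0 - f) * (1.0 - g)
    return (out * 255).astype(np.uint8)

# Example: glow = bloom(cv2.imread("nature_frame.png"))  # file is a stand-in
```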
I have long wanted to visualize a robot with angular plastic forms and bright LEDs, but I had so much trouble with it. I finally nailed it down after much experimenting and lots of manually added parentheses to tell SD which words I wanted emphasized. So I rendered out 2,018 images and then did some transfer learning on SG2 until 3260kimg. We may not look too different from people a few decades ago, but the thoughts in our minds are certainly now digital.
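For anyone unfamiliar with the parentheses trick: in the A1111 webui, parentheses increase the attention paid to a word; `(word)` multiplies its weight by 1.1, nesting stacks the multiplier, and `(word:1.4)` sets it explicitly. The prompts below are illustrative stand-ins, not the exact ones I used:

```python
# Illustrative stand-in prompts showing A1111's attention emphasis syntax:
# "(word)" = 1.1x weight, "((word))" = ~1.21x, "(word:1.4)" = explicit weight.
prompts = [
    "robot, (angular plastic panels), ((bright glowing LEDs)), studio photo",
    "robot, (angular plastic panels:1.3), (bright glowing LEDs:1.5), studio photo",
]
for p in prompts:
    print(p)  # paste into the A1111 prompt box
```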
Released August 2023