Wednesday, October 26, 2022

new age dream 3 - CLIP guided RGB synthesis remix

I tried running the CMU 'Towards Real-Time Text2Video via CLIP-Guided Pixel Level Optimization' code using the 'new age dream 3' text key-frame prompts. The still frame above shows it's kind of a bust. And the video download was unplayable on a Mac, so double bust.

Especially in comparison to the direct RGB CLIP generative synthesis code I was running in late summer, shown below. This is using the same 'new age dream' text prompt key-framing used in the last post.

Now the second part of the experiment I was interested in running was to take the CMU direct RGB CLIP generative synthesis output and pump it into the U-net embedding in Stable Diffusion. Since the CMU video output was a total bust, I decided to do that experiment with my direct RGB CLIP generative output instead (the video above, time compressed by 6X), which is shown below.
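For anyone curious what the CLIP-guided pixel-level optimization in these experiments boils down to, below is a minimal sketch of the core loop: optimize an RGB image directly so its CLIP embedding moves toward the text prompt's embedding. It assumes PyTorch and the OpenAI clip package; the prompt, resolution, learning rate, and step count are illustrative placeholders, not the settings either program actually uses.

```python
import torch
import clip  # OpenAI CLIP: github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep everything fp32 so gradients flow cleanly

prompt = "a new age dream"  # hypothetical key-frame prompt
with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# The image itself is the only parameter being optimized.
pixels = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([pixels], lr=0.05)

# CLIP's published input normalization constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(300):
    opt.zero_grad()
    img_feat = model.encode_image((pixels.clamp(0, 1) - mean) / std)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = 1.0 - (img_feat * text_feat).sum()  # minimize cosine distance to the prompt
    loss.backward()
    opt.step()
```

Text prompt key-framing then amounts to re-running (or warm-starting) this loop per frame while interpolating text_feat between successive prompts.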
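I read 'pump it into the U-net embedding' as using each RGB synthesis frame as the init image for Stable Diffusion img2img, where the frame gets VAE-encoded into the latent that the U-net then partially re-denoises under the text prompt. Below is a hedged per-frame sketch using the Hugging Face diffusers pipeline; the model id, folder names, prompt, and strength are assumptions for illustration, not the actual settings used for the video above.

```python
from pathlib import Path
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model id
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a new age dream"   # hypothetical key-frame prompt
out_dir = Path("sd_frames")  # hypothetical output folder
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(Path("rgb_frames").glob("*.png")):  # hypothetical input frames
    init = Image.open(frame_path).convert("RGB").resize((512, 512))
    # strength sets how far SD repaints the frame: low keeps the RGB
    # synthesis look, high hands the image over to the diffusion model.
    result = pipe(prompt=prompt, image=init, strength=0.5,
                  guidance_scale=7.5).images[0]
    result.save(out_dir / frame_path.name)
```

The strength parameter is the interesting dial here, since it trades off temporal coherence inherited from the source video against how strongly Stable Diffusion restyles each individual frame.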