Sunday, September 18, 2022
stable diffusion video processing experiment 7
The first 2 examples below use the same source video as the previous post. I'm using the artwalk style prompting with a static giraffe story line for the content prompting. The strength increases from the first example to the second. The third one below takes the same prompting keyframes but feeds no source video into the U-Net latent; instead it interpolates the prompt text embeddings, which helps show how the video input is modulating the generative synthesis output. All three received additional Studio Artist enhancement processing.
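The keyframe-to-keyframe transition in the third example can be sketched as interpolation between prompt text embeddings. The snippet below is a minimal illustration, not the actual pipeline used here: the zero/one arrays are hypothetical stand-ins for CLIP text-encoder outputs, and plain linear interpolation is shown for clarity (spherical interpolation is often preferred in practice for diffusion latents and embeddings).

```python
import numpy as np

def lerp_embeddings(emb_a, emb_b, t):
    """Linearly blend two prompt text embeddings.

    t=0.0 returns emb_a, t=1.0 returns emb_b; intermediate t values
    give the in-between embeddings used for the transition frames.
    """
    return (1.0 - t) * emb_a + t * emb_b

# Hypothetical stand-ins for text-encoder outputs (shape: tokens x channels).
emb_a = np.zeros((77, 768), dtype=np.float32)
emb_b = np.ones((77, 768), dtype=np.float32)

# One interpolated embedding per output frame of the keyframe transition.
n_frames = 5
frames = [lerp_embeddings(emb_a, emb_b, i / (n_frames - 1))
          for i in range(n_frames)]
```

Each interpolated embedding would then be fed to the diffusion model in place of a single fixed prompt embedding, producing a smooth drift between the two keyframe prompts.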