Sunday, September 18, 2022

stable diffusion video processing experiment 7


The first 2 examples below use the same source video as the previous post. I'm using the artwalk style prompting with a static giraffe story line for the content prompting. The img2img generation strength increases from the first example to the second. The third one uses the same prompting keyframes but with no source video input to the U-Net latent; instead it interpolates the prompt text embedding between keyframes, which helps show how the video input is modulating the generative synthesis output in the first two. All of them include additional Studio Artist enhancement processing.
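For readers curious how the two modes differ in code, here is a minimal sketch using the diffusers library. This is not the pipeline actually used for these videos (and the Studio Artist post-processing is not shown); the model id, frame path, prompts, and strength values are all placeholder assumptions, chosen only to illustrate frame-conditioned img2img at varying strength versus prompt-embedding interpolation with no video input.

```python
# Hedged sketch, not the author's actual workflow. Model id, frame path,
# prompts, and strength values below are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "runwayml/stable-diffusion-v1-5"  # assumed SD 1.x checkpoint

# (a) Video-modulated synthesis: each source video frame seeds the U-Net
# latent via img2img; `strength` controls how much denoising is applied,
# so higher strength keeps less of the source frame.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id).to(device)
frame = Image.open("frames/frame_0001.png").convert("RGB")  # hypothetical path
prompt = "static giraffe, artwalk mural style"              # placeholder prompt
out_low = img2img(prompt=prompt, image=frame, strength=0.35).images[0]
out_high = img2img(prompt=prompt, image=frame, strength=0.7).images[0]

# (b) No video input: interpolate the text embeddings of two keyframe
# prompts and pass the blend to a plain text-to-image pipeline.
txt2img = StableDiffusionPipeline.from_pretrained(model_id).to(device)

def embed(p: str) -> torch.Tensor:
    # Encode a prompt into CLIP text embeddings for use as prompt_embeds.
    tokens = txt2img.tokenizer(
        p,
        padding="max_length",
        max_length=txt2img.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    return txt2img.text_encoder(tokens.input_ids.to(device))[0]

emb_a = embed("giraffe standing in a painted alley")  # keyframe prompt A
emb_b = embed(prompt)                                 # keyframe prompt B
t = 0.5  # interpolation position between the two keyframes
blend = torch.lerp(emb_a, emb_b, t)
out_interp = txt2img(prompt_embeds=blend).images[0]
```

In (a) the source frame steers the latent directly, so the video content survives in proportion to how low the strength is; in (b) there is nothing anchoring the latent to the video at all, and the only motion comes from sliding `t` between keyframe embeddings over time.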




