Monday, October 31, 2022

pst feind hort mit 3

Trying some minor modifications to the style-prompting side of the generative AI context, using the same 'pst feind hort mit' story line we've been using in previous posts, derived from the song titles on the WMF CD of the same name. Experiments like this make me think I need to add something that lets you interpolate between different style prompts, totally separate from the overall story-line text-prompt key-framing. That way you could take two of these, dial in something in between, and then run it with the story line.
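One way that style-prompt interpolation idea could work is a spherical interpolation between the two style embeddings. This is just a minimal sketch under my own assumptions (treating the style prompts as plain embedding vectors; the names and the slerp choice are mine, not the actual pipeline):

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical interpolation between two embedding vectors.

    Often preferred over plain lerp for text/latent embeddings,
    since it follows the arc between them rather than cutting
    through the interior of the hypersphere.
    """
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-6:  # nearly parallel: fall back to plain lerp
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# hypothetical embeddings for two different style prompts
style_a = np.random.default_rng(0).normal(size=768)
style_b = np.random.default_rng(1).normal(size=768)

# "dial in something in between", e.g. 30% of the way from A toward B,
# then run the result alongside the unchanged story-line prompting
blended_style = slerp(style_a, style_b, 0.3)
```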

Sunday, October 30, 2022

progressive fill experiment 3

progressive fill experiment 2

art strategy - progressive fill over several iterations

Using auto-selection of the unpainted space to progressively fill in the entire canvas over several iterations.
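The auto-selection loop could be sketched roughly like this (a pure-NumPy stand-in under my own assumptions: NaN marks unpainted pixels, a fixed fraction gets selected per pass, and a trivial random fill stands in for the actual generative painting step):

```python
import numpy as np

def progressive_fill(canvas, n_iters=4, frac=0.5, rng=None):
    """Fill an initially blank canvas over several iterations.

    Each pass auto-selects a subset of the still-unpainted pixels
    (NaN marks 'unpainted' here) and paints them, so the canvas
    converges toward full coverage.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(n_iters):
        unpainted = np.flatnonzero(np.isnan(canvas))
        if unpainted.size == 0:
            break
        k = max(1, int(frac * unpainted.size))
        chosen = rng.choice(unpainted, size=k, replace=False)
        # stand-in for the real painting/generative step
        canvas.ravel()[chosen] = rng.uniform(size=k)
    return canvas

canvas = np.full((8, 8), np.nan)
out = progressive_fill(canvas, n_iters=6)
```

With frac=0.5 the unpainted region roughly halves each iteration, which is what gives the "progressive fill" look over the run.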

pst feind hort mit 2

pst feind hort mit

The story-line text-prompt key-frames are based on the song titles from the WMF CD 'pst Feind, hort mit'. That CD came out during the GW Bush era in the USA, and dealt with the irony of Bush administration public-media slogans mirroring those used by the Nazis in WW2.

Latent diffusion totally mangles the words, which adds an additional level of ambiguity to the whole thing. It's like you are observing a parallel universe that somehow mirrors our own while still being some foreign entity.


The generative AI context paradigm that we've been prototyping here for a while now is focused on splitting the text prompting into two different components that can be independently manipulated: the underlying story line for the animation, and the styling characteristics of the animation. My first experiments with this particular story line are a good example of the style prompting capturing, or taking over, the actual story line.

A number of my recent posts have been exploring the stylistic components of the bay area figurative art movement. You can see in my first stab at this particular story line, directly above, that 'figurative' and 'movement' are taking over and twisting the story line of the rendered animation to a large extent. I had to switch things around to get it to move more in the actual story-line direction, which is an examination of the irony of political jingoism. I was really hoping to get that with a more painterly feel associated with the bay area figurative movement, but the first example above looks much more like political poster litho artwork than what I was originally shooting for.
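The story/style split, and the fix for the style taking over, could be sketched as blending a separately weighted style embedding into the story-line embedding, and dialing the style weight down when the style terms start dominating. A minimal sketch under my own assumptions (the actual conditioning path into the diffusion model is more involved than a single vector blend):

```python
import numpy as np

def combine_context(story_embed, style_embed, style_weight=0.5):
    """Blend an independently manipulable style embedding into the
    story-line embedding; lower style_weight keeps the story line
    from being overwhelmed by strong style terms."""
    mixed = (1.0 - style_weight) * story_embed + style_weight * style_embed
    # renormalize so overall conditioning magnitude stays comparable
    return mixed * (np.linalg.norm(story_embed) / (np.linalg.norm(mixed) + 1e-8))
```

The appeal of keeping the two components separate is exactly this knob: the same story line can be rerun with the style dialed anywhere from barely-there to fully dominant.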

Saturday, October 29, 2022

the rising - CLIP guided RGB synthesis remix

Same text prompting as the previous 'the rising' post, processed with CLIP-guided direct RGB generative image synthesis above. I then ran that (time compressed) through the stable diffusion U-Net latent input with the exact same text-prompting sequence below.
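Mechanically this kind of remix pass is img2img-style conditioning: each CLIP-synthesized RGB frame gets encoded to a latent, partially re-noised, and then denoised under the text prompts. A minimal sketch of the re-noising step (a NumPy stand-in under my own assumptions; the toy schedule and shapes are illustrative, not the actual Stable Diffusion code):

```python
import numpy as np

def renoise_latent(init_latent, strength, alphas_cumprod, rng):
    """Push an encoded frame part-way back up the diffusion noise
    schedule; higher strength = more freedom to repaint the frame."""
    n_steps = len(alphas_cumprod)
    t = min(int(strength * n_steps), n_steps - 1)
    a = alphas_cumprod[t]
    noise = rng.normal(size=init_latent.shape)
    return np.sqrt(a) * init_latent + np.sqrt(1.0 - a) * noise, t

# toy linear-beta schedule, just for shape
betas = np.linspace(1e-4, 0.02, 1000)
alphas_cumprod = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
frame_latent = rng.normal(size=(4, 64, 64))  # stand-in for a VAE-encoded frame
noisy, t_start = renoise_latent(frame_latent, 0.6, alphas_cumprod, rng)
# denoising would then run from step t_start back down to 0
# under the text-prompt conditioning
```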

the rising

kahului adventure 2

Recursive feedback of the previous frame's output (depth-map warp transformed) into the generative image synthesis U-Net latent, with a fixed random seed above and an incrementing random seed below.
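The feedback loop could be sketched roughly like this (a NumPy stand-in under my own assumptions: a crude horizontal depth-parallax shift stands in for the depth-map warp, and a trivial noise mix stands in for the U-Net step; the interesting part is fixed vs. incrementing seed per frame):

```python
import numpy as np

def depth_warp(frame, depth, max_shift=3):
    """Crude depth-parallax warp: shift each row's pixels
    horizontally in proportion to a per-pixel depth value."""
    h, w = frame.shape
    out = np.empty_like(frame)
    cols = np.arange(w)
    for y in range(h):
        shift = (depth[y] * max_shift).astype(int)
        out[y] = frame[y, np.clip(cols - shift, 0, w - 1)]
    return out

def synth_step(init_latent, seed):
    """Stand-in for the U-Net denoising step: mixes the warped
    previous frame with seed-dependent noise."""
    rng = np.random.default_rng(seed)
    return 0.8 * init_latent + 0.2 * rng.normal(size=init_latent.shape)

frame = np.random.default_rng(0).uniform(size=(32, 32))
depth = np.random.default_rng(1).uniform(size=(32, 32))

fixed, incr = frame.copy(), frame.copy()
for i in range(8):
    fixed = synth_step(depth_warp(fixed, depth), seed=42)    # fixed seed
    incr = synth_step(depth_warp(incr, depth), seed=42 + i)  # incrementing seed
```

With a fixed seed the noise contribution is identical every frame, so the feedback tends toward more coherent, locked-in motion; incrementing the seed injects fresh noise each frame and gives a more lively, shifting result.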


Interpolating the text-prompt key-frame latents below. All of the kahului adventure posts use the exact same set of text-prompt key-frames.
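A sketch of what interpolating text-prompt key-frame latents over an animation timeline might look like (hypothetical key-frame times and embeddings; plain linear blending between the two bracketing key-frames is my assumption):

```python
import numpy as np

def keyframed_embedding(t, key_times, key_embeds):
    """Linearly interpolate between the two prompt-embedding
    key-frames that bracket time t (clamped at the ends)."""
    if t <= key_times[0]:
        return key_embeds[0]
    if t >= key_times[-1]:
        return key_embeds[-1]
    i = np.searchsorted(key_times, t) - 1
    w = (t - key_times[i]) / (key_times[i + 1] - key_times[i])
    return (1 - w) * key_embeds[i] + w * key_embeds[i + 1]

# hypothetical: three prompt key-frames at t = 0, 5, 10 seconds
key_times = np.array([0.0, 5.0, 10.0])
key_embeds = np.random.default_rng(0).normal(size=(3, 768))
embed_at_7s = keyframed_embedding(7.0, key_times, key_embeds)
```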


Friday, October 28, 2022

kahului adventure - CLIP guided RGB synthesis remix

Same text prompting used in the previous 'kahului adventure' post, processed with CLIP-guided direct RGB generative image synthesis above. I then ran that (time compressed) through the stable diffusion U-Net latent input with the same text-prompting sequence below. I ran it twice to see the effect of different random seeds with the same dynamic U-Net latent input.

Thursday, October 27, 2022

Wednesday, October 26, 2022

new age dream 3 - CLIP guided RGB synthesis remix

I tried running the CMU 'Towards Real-Time Text2Video via CLIP-Guided Pixel Level Optimization' code using the 'new age dream 3' text key-frame prompts. The still frame above shows it's kind of a bust. And the video download was unplayable on a Mac? So, double bust.

Especially in comparison to the direct RGB CLIP generative synthesis code I was running here in late summer, shown below. This uses the same 'new age dream' text-prompt key-framing used in the last post.

Now, the second part of the experiment I was interested in running was to take the CMU direct RGB CLIP generative synthesis output and pump it into the U-Net embedding in Stable Diffusion. Since the CMU video output was a total bust, I decided to do that experiment with my own direct RGB CLIP generative output instead (the above video, time compressed by 6X), which is shown below.

new age dream 3

new age dream 2

Tuesday, October 25, 2022