Saturday, April 30, 2022

tWister

 


Another Gallery Show movie processing experiment. No Action Animate or Action Process with PASeqs here; it's all done just by Gallery Show.




time Square rally

 



GS Transition Run

 
Playing around with building transition effects directly in Gallery Show, so no Transition Contexts involved. It's flow transitioning back and forth between two different source image folders every 12 frames.
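Gallery Show itself is configured in the Studio Artist GUI, so there's no code to show for the real run, but the timing logic behind the effect can be sketched. This is a hypothetical illustration of the alternation, not Studio Artist's actual implementation; the folder names are placeholders.

```python
# Sketch of the transition-run timing: every 12 frames the active source
# folder flips, and a blend weight ramps 0 -> 1 within each cycle to drive
# the flow transition. Purely illustrative of the logic described above.

CYCLE = 12  # frames between source-folder swaps

def source_folder_for_frame(frame, folders):
    """Pick which source image folder feeds a given frame."""
    return folders[(frame // CYCLE) % len(folders)]

def transition_weight(frame):
    """Ramp from 0.0 toward 1.0 within each 12-frame cycle."""
    return (frame % CYCLE) / CYCLE

folders = ["sourceA", "sourceB"]  # hypothetical folder names
for frame in (0, 6, 12, 24):
    print(frame, source_folder_for_frame(frame, folders), transition_weight(frame))
```

The modulo arithmetic is all there is to it: integer division picks the folder, and the remainder drives how far along the current flow transition is.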

Generative Diffusion Zoomer

 

Experimenting with feeding the previous frame's output back as the init image in a generative diffusion model. The individual frames were then brought into Studio Artist and time expanded with some additional PASeq processing to generate this movie example. This continues the 'the brutality of war' text prompt experiments. Generating the frames on Colab takes forever.
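The feedback structure is simple even though the diffusion step is expensive. Here's a minimal runnable sketch of the loop; the actual runs used a latent diffusion notebook on Colab, so the diffusion call below is stubbed out with a cheap center-crop zoom (which is also roughly why the feedback produces a zoomer drift). The `fake_diffusion_step` name and zoom factor are placeholders, not the real pipeline.

```python
# Frame-feedback sketch: each output frame becomes the init image for the
# next step. The real img2img diffusion call is replaced by a stand-in
# zoom-and-resize so the loop structure runs anywhere.
from PIL import Image

def fake_diffusion_step(init_image, zoom=0.95):
    """Stand-in for an img2img diffusion call seeded with init_image.
    Crops toward the center, then resizes back up: a simple zoom drift."""
    w, h = init_image.size
    cw, ch = int(w * zoom), int(h * zoom)
    left, top = (w - cw) // 2, (h - ch) // 2
    return init_image.crop((left, top, left + cw, top + ch)).resize((w, h))

frames = []
frame = Image.new("RGB", (256, 256), "gray")  # stand-in for the first render
for i in range(24):                            # 24 feedback iterations
    frame = fake_diffusion_step(frame)         # previous output -> next init
    frames.append(frame)
```

Swapping the stub for a real latent diffusion img2img call (with a nonzero noise strength) gives the generative zoomer behavior: the model keeps reinterpreting its own output frame after frame.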

the Brutality of War

 


Two different generative image synthesis models here running the same guided drift experiment: Disco Diffusion above, VQGAN below. Both use CLIP-guided synthesis. Contrast that with the earlier post with the same title, which used latent diffusion with the LAION model.





Friday, April 29, 2022

rigGed

 




that Certain mood





GS Movie Experiment

 

Movie processing experiment using Gallery Show for the processing, as opposed to Action Process or Action Animate.



murky Vision





double Vision

 



Thursday, April 28, 2022

the Brutality of War

 

Blown up in Studio Artist with paint action sequence processing. From an animation, but Blogger is balking at the video upload for some reason.

Kailua Beach AI Dreamer

 

Used a latent diffusion generative ai algorithm to create a folder of low resolution images generated from 'kailua beach kite surfing' prompt variants. Then used that image folder as the source folder for Gallery Show in Studio Artist to generate high resolution painterly output.
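The prompt-variant batching step can be sketched like this. It's a hypothetical illustration, assuming the modifier strings and the `generate()` call, which stand in for whatever latent diffusion notebook actually renders each prompt; only the resulting image folder matters to Gallery Show.

```python
# Sketch of batching prompt variants into a folder that Gallery Show can
# then use as its source folder. Modifiers and generate() are placeholders.
import pathlib

base = "kailua beach kite surfing"
modifiers = ["", "at sunset", "oil painting", "aerial view"]  # assumed examples

# Build one prompt per variant.
prompts = [f"{base} {m}".strip() for m in modifiers]

out_dir = pathlib.Path("latent_diffusion_out")  # becomes the GS source folder
for i, prompt in enumerate(prompts):
    filename = out_dir / f"{i:04d}.png"
    # generate(prompt, filename)  # placeholder for the latent diffusion call
```

Pointing Gallery Show at `latent_diffusion_out` as a source folder then handles the high resolution painterly repainting of everything in it.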





Babbler

 

One interesting behavior with the LAION-400M latent diffusion generative ai model is that it generates very weird images when you give it a gibberish text prompt.  This one was prompted with 'arghhh'.  Who are these people anyway and why do they come up when you feed the model this text?

Other nonsense prompts give you results like this, where you get random people along with a mangled version of the gibberish text prompt built into the generated image.

Changing the gibberish prompt slightly can push you into a different space where you get mangled text mixed with the weird alien cartoony characters that are very characteristic of this particular model.

Using 'zappp' gives you mangled text built into a weird abstract design structure. Probing the boundaries of the system this way exposes, to some extent, what the synthesis algorithm is doing. It helps in this analysis to think of generative texture models: how they are built, what is going on under the hood, and what the resulting output looks like. It also gives you all kinds of clues for building alternate synthesis algorithms that are not neural net based but would give similar looking results.


 

Cats on Surfboards

 


Using a latent diffusion multi-modal generative ai algorithm to generate low resolution images of cats riding surfboards at the beach. Then put all of those in a folder that I used as my source folder for a Gallery Show run in Studio Artist, working with a set of paint action sequence presets to generate higher resolution painted images.

Wednesday, April 27, 2022

the Big slip

 


Paintings are derived from an art strategy that works with a folder of multiple images to emulate the segmentation texture fill-in you see in generative neural net renderings.



mental Foil wrap

 





MSG Drifter

 





Tuesday, April 26, 2022

the End of Time - a children's story

 


A few grabs from a generative ai guided drift session using latent diffusion with the LAION-400M public model.

The complete guided generative diffusion session, animated in Studio Artist.


Monday, April 25, 2022

new Order