A depository for John Dalton's personal artwork.
2 comments:
Very cool to say the least. Is there an SA training video for this effect, John, or is this one of your more experimental works?
We want to incorporate different generative ai image synthesis techniques into Studio Artist, so I have been experimenting with various approaches. Partly to figure out what options we should offer, partly to stay on top of where this rapidly advancing field is going, and partly to work out different workflow options we could offer within Studio Artist for working with this stuff.
All of the different approaches to generative ai image synthesis have very distinctive visual appearances.
This particular example uses stable diffusion, an open source approach to latent diffusion multi-modal image synthesis. You feed a text prompt into the algorithm and get an image out.
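To give a rough feel for what "latent diffusion" means, here's a toy numpy sketch of the iterative denoising idea: start from pure noise in a small latent array and repeatedly subtract a predicted noise term until a clean latent emerges. This is purely conceptual; real stable diffusion uses a trained U-Net noise predictor conditioned on the text prompt, plus a VAE decoder, and the `toy_denoise` function and shapes below are my own illustrative assumptions.

```python
import numpy as np

def toy_denoise(latent, target, steps=50):
    """Toy stand-in for a diffusion sampling loop: each step removes a
    fraction of the 'predicted noise' (here, just the distance from a
    known target latent, instead of a learned neural prediction)."""
    for t in range(steps):
        predicted_noise = latent - target           # a real model predicts this from the noisy latent + prompt
        latent = latent - predicted_noise / (steps - t)  # step the latent toward the clean image
    return latent

rng = np.random.default_rng(0)
target = rng.normal(size=(4, 8, 8))   # pretend 'clean' latent we want to reach
latent = rng.normal(size=(4, 8, 8))   # start from pure noise
out = toy_denoise(latent, target)     # converges onto the target latent
```

The point is just the shape of the algorithm: many small denoising steps on a compact latent representation, rather than painting pixels directly.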
The notion of story-based generative image synthesis is to build a story and then have the resulting generative animation tell that story over time. In this case the 'story' is a series of short text descriptions that keyframe over time.
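A minimal sketch of that keyframing idea, assuming a simple hold-until-next-keyframe scheme (actual animation pipelines often interpolate between prompt embeddings instead; the frame numbers and prompt texts below are made up for illustration):

```python
def prompt_for_frame(keyframes, frame):
    """Return the active text prompt for a given animation frame.
    keyframes: list of (start_frame, prompt) pairs sorted by start_frame.
    The prompt holds until the next keyframe's start_frame is reached."""
    active = keyframes[0][1]
    for start, prompt in keyframes:
        if frame >= start:
            active = prompt
        else:
            break
    return active

# A hypothetical 'story' as a series of short text descriptions over time.
story = [
    (0,   "a foggy street at dawn"),
    (120, "the street fills with neon light"),
    (240, "the scene dissolves into abstract color"),
]

current_prompt = prompt_for_frame(story, 150)  # prompt active at frame 150
```

Each rendered frame would then feed its active prompt into the image synthesis step, so the animation walks through the story as the keyframes advance.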
I'm running stable diffusion in Google Colab Pro in the cloud. The output in this post is the raw stable diffusion output (no Studio Artist processing). I'm still trying to figure out what does and doesn't work for this particular gen ai image synthesis technique (which was released very recently).
The 'SOMA Walk -SD remix 3' post from this morning takes the stable diffusion output for the same 'SOMA Walk' story (with a different style setup) and then runs it through Studio Artist for some additional processing.
I'm using the same stories I used for the recent CLIP guided direct RGB image synthesis experiments, to contrast that output with the stable diffusion output. Very different beasts for sure.
If you look at what happens visually in this particular approach, you could think about how to work with image folder or movie brushes in Studio Artist to copy the visual style to a certain extent. If I get some time I'll try that experiment. Your question got me thinking about it.