Tuesday, March 23, 2021

Every Point Release of Studio Artist V5.5 is Different

 

Just started adding some source data augmentation options to gallery show in Studio Artist V5.5.1.  Data augmentation is something you do when training neural nets to jigger the system in a useful direction: you are manipulating the prior inherent in the system.  And you can do the same thing to your collection of source images when running gallery show.
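Studio Artist's own augmentation options aren't spelled out here, but purely to illustrate the idea, here's a minimal Python sketch of augmenting a folder of source images before a gallery show run. The flip / rotate / color-jitter choices and the folder names are my own assumptions, not the actual V5.5.1 options.

    import random
    from pathlib import Path
    from PIL import Image, ImageEnhance, ImageOps

    def augment(img):
        """Apply a random combination of simple augmentations to one source image."""
        if random.random() < 0.5:
            img = ImageOps.mirror(img)  # horizontal flip
        if random.random() < 0.5:
            img = img.rotate(random.choice([90, 180, 270]), expand=True)  # right-angle turn
        # jitter color saturation a little, shifting the 'prior' baked into the source set
        return ImageEnhance.Color(img).enhance(random.uniform(0.7, 1.3))

    src_dir, out_dir = Path("sources"), Path("sources_augmented")  # hypothetical folders
    out_dir.mkdir(exist_ok=True)
    for path in src_dir.glob("*.jpg"):
        for i in range(3):  # three augmented variants per source image
            augment(Image.open(path)).save(out_dir / f"{path.stem}_aug{i}.jpg")

The augmented copies just get dropped into the collection the gallery show pulls from; the point is that the distribution of sources, not the algorithm itself, is what's being manipulated.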

This leads to an interesting observation I had already been thinking about: every point release of Studio Artist V5.5 is different. By different I mean that the underlying AI Generative Algorithms are different, because we're always tweaking them, enhancing them, and adding new things like data augmentation, new Smart Edits available to the generative preferences, etc. So every point release is going to have its own feel to it.

This is kind of like Edition Prints, except it's edition software.  Each point release is visually unique. You will want to collect them all.

Monday, March 22, 2021

Using the Paint Synthesizer Background Texture Module as a 'Texture Synthesis by Example' Engine

 

So who knows what I used for the image loaded into the Image Background Texture in the Studio Artist V5.5 paint synthesizer?  The answer is posted in the Studio Artist User Forum.

Here's the thing. Both procedural and neural 'texture synthesis by example' use a codebook.  And many of the best algorithms are really just slopping down sections of texture from the codebook, joining them together to build the macro texture from micro texture pieces.  And you can program the paint synthesizer to do that as well.
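None of this is the paint synthesizer's actual internal logic; it's just a rough Python sketch of the patch-stitching idea: grab random micro patches from an exemplar image (the codebook) and drop them onto a canvas to grow a macro texture. Real image-quilting algorithms pick patches that match their neighbors and blend along minimum-error seams; this sketch just slops them down, which is closer to the spirit of the description above. Patch size, overlap, and file names are arbitrary.

    import numpy as np
    from PIL import Image

    def quilt(exemplar, out_size=(512, 512), patch=48, overlap=8):
        """Grow a macro texture by pasting random micro patches from the exemplar."""
        H, W = out_size
        out = np.zeros((H, W, 3), dtype=np.uint8)
        eh, ew = exemplar.shape[:2]  # exemplar must be larger than the patch size
        step = patch - overlap
        for y in range(0, H, step):
            for x in range(0, W, step):
                # pull a random micro patch out of the 'codebook' (the exemplar image)
                sy = np.random.randint(0, eh - patch)
                sx = np.random.randint(0, ew - patch)
                tile = exemplar[sy:sy + patch, sx:sx + patch]
                h, w = min(patch, H - y), min(patch, W - x)
                out[y:y + h, x:x + w] = tile[:h, :w]
        return out

    exemplar = np.asarray(Image.open("texture_exemplar.jpg").convert("RGB"))
    Image.fromarray(quilt(exemplar)).save("synthesized_texture.png")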

Could it be better? Sure. When you do experiments like this, you can figure out what to add to make it better.  And i am open to your suggestions of course.

Sunday, March 21, 2021

AI Artist Paints What It Thinks a Deep Learning Neural Net Is Seeing in Its Internal Feature Space Representation

 

Continuing today's riff on faking the output of deep learning neural net feature space visualizations.  The AI Generative Artist did all of the work; I just set up a generative art strategy for it.

I guess we should also work on making it generate the generative strategies themselves.  A meta generative system.  Sure, why not.

Faking Feature Space Representations

 


So I was reading this multi-modal neuron feature representation paper (neural net stuff), and they were generating some wild output images. I wanted to know how they were doing their back-prop optimization from a feature vector in their neural net to the actual images they were showing.

And then I looked at the PyTorch code for that part, and realized you can get there in a way, way simpler fashion than what they are doing.  I spent five minutes in Studio Artist V5.5 dialing in something close. Their facets are better, but I think you could get very close just using existing Studio Artist features.
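For reference, the standard back-prop route from a chosen neuron to an image (activation maximization) only takes a few lines of PyTorch. This is a generic sketch, not the paper's code and not what Studio Artist is doing internally; the model, layer, and channel index are arbitrary stand-ins.

    import torch
    import torchvision.models as models

    model = models.resnet50(pretrained=True).eval()

    # capture the activation of one intermediate layer with a forward hook
    acts = {}
    model.layer4[2].register_forward_hook(lambda m, i, o: acts.update(out=o))

    unit = 123  # arbitrary channel index: the 'neuron' we want to visualize
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=0.05)

    for step in range(256):
        opt.zero_grad()
        model(img)
        # gradient ascent on the input: maximize the chosen channel's mean activation
        loss = -acts["out"][0, unit].mean()
        loss.backward()
        opt.step()

    # img now holds a (completely unregularized) visualization of what that unit responds to

The fancy part of published feature visualizations is all the regularization and augmentation wrapped around that basic loop, which is what makes their faceted results look so good.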

So this would be visualizing what a clown neuron represents. Here's another slightly different approach.




Thursday, March 4, 2021

City Lights from the Water

 

Studio Artist V5.5 did all of the work on this one (for the most part). I set up the generative strategy, then let it rip.  I hit the grab button when this one flashed by in an afternoon of automated gallery show runs that generated hundreds if not thousands of gallery-ready art images from my generative strategy.  All using the power of AI Generative Paint in Studio Artist V5.5.