Monday, April 5, 2021

Studio Artist V5.5 - What is it and Where is it Headed


So what is Studio Artist anyway?

It's a digital art program with a 20-year history at this point. When Studio Artist V1 was released at MacWorld New York in 1999 (is that a long time ago or yesterday, i get confused), it was a pretty radical rethink of what a digital paint program could be.  It was not Photoshop, it was not Painter, it was not Illustrator; it was its own thing, with its own internal structure, its own way of working, its own way of thinking about how to work.

It allowed you to work with still images or video, merging those 2 very separate application categories at the time into one unified system. It introduced the concept of a 'source' and a working 'canvas', a unique distinction at the time.  It introduced the concept of incorporating a human visual model (based on research into how the human visual cortex works at the neural level) into a graphics program for digital artists.  It borrowed conceptual ideas from music synthesis, music synthesizers, and digital audio software, and applied them to a digital art program for visual artists.  It introduced a new hybrid raster-vector model for digital paint effects (bezier paths that define a drawing path, raster paint nibs that are laid down on that editable vector paint path).


It also introduced what i call the first 3 levels of ways of working and interacting with digital imagery.

Level 1 is all about working at the pixel level.  Down in the basement.  Literally pushing pixels around on the canvas with an interactive stylus (wacom pen at the time) or a mouse cursor.

I would also consider the use of simple image processing effects (like an edge or blur filter) to be working at level 1 to some extent.  At least for the sake of this discussion.


Level 2 moves up a conceptual notch with intelligent automatic actions.  Intelligence based on the underlying human visual modeling built into the program.  That takes center stage in the Studio Artist paint synthesizer with its extensive use of visual attribute modulation internally.  But that intelligence is also distributed throughout the program in a wide variety of adaptive image processing effects, also based on visual attribute modulation.

Studio Artist is built from the ground up on the concept that the program tries to look at a visual 'source' just like a human artist would look at that 'source'.  That internal visual representation is then made available throughout the program, visually modulating what the paint synthesizer is doing and what adaptive image processing effects are doing.

You can think of the 'source' as a model the artist is painting, or a photograph of a scene the artist is reinterpreting in a painting.  The artist looks at the source to influence their work, but so does Studio Artist, using its internal human visual modeling to try to perceive the source in a similar way to how the artist perceives it and reacts to it.
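Studio Artist's visual modeling is proprietary and far more elaborate than this, but the basic mechanics of visual attribute modulation can be sketched in a few lines: derive simple attribute maps from the source, then let them drive paint parameters at each stroke position.  This is only an illustrative sketch; the file name, attribute choices, and parameter names here are made up, not Studio Artist's.

```python
import numpy as np
from PIL import Image, ImageFilter

source = Image.open("source.jpg").convert("L")     # hypothetical source image
luminance = np.asarray(source, dtype=np.float32) / 255.0
edges = np.asarray(source.filter(ImageFilter.FIND_EDGES), dtype=np.float32) / 255.0

def modulated_brush_size(x, y, min_size=2.0, max_size=24.0):
    """Bigger paint nibs in flat regions, smaller nibs near detailed edges."""
    detail = edges[y, x]            # 0 = flat area, 1 = strong edge
    return min_size + (1.0 - detail) * (max_size - min_size)

def modulated_paint_value(x, y):
    """Darker paint where the source is dark, lighter where it is bright."""
    return luminance[y, x]

# Example: sample the modulated parameters at one canvas position.
print(modulated_brush_size(5, 5), modulated_paint_value(5, 5))
```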

This process is not about the machine replacing the artist.  Far from it. 

Studio Artist always tries to incorporate the human artist into the automatic action creative loop. At whatever level of interaction the artist is comfortable with.  The artist can do all the work if they want to. Or the artist can manually manipulate the stylus, while Studio Artist intelligently assists in the manual drawing (intelligent assisted painting).  Or the artist can press the Action button, and Studio Artist does all of the work automatically (fully automatic action painting).  

But even in that last case of a fully automatic action, the artist is still involved in the loop, making decisions about what they like and don't like, how to proceed next, etc.


Level 3 in our taxonomy of ways of working i would define as working with scripts of actions (which could be manual or automatic actions).  Studio Artist includes a Paint Action Sequence (PASeq) Editor, which allows you to build more expansive visual effects composed of multiple actions that work in sequence.  Combining different kinds of brush sizes, different kinds of digital media emulation, different kinds of artistic techniques (charcoal, ink pen, watercolor, water or acid wash, canvas smear, etc) into a sequence of processing steps that work together in synergy to build up a final visual effect.

Since automatic actions intelligently analyze the 'source', they can be built using just one specific image as the source when they are initially created, but then applied to an infinite variety of different source images later, while achieving the same stylistic visual effect output for those new input images as was designed using the first source.
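The PASeq format itself is internal to Studio Artist, but the underlying idea of a reusable script of actions can be roughly sketched like this, with generic PIL filters standing in for real paint synthesizer and Ip Op actions, and hypothetical file names:

```python
# A conceptual sketch of the action-sequence idea (not the PASeq file format):
# record a list of steps once, then replay the same steps against any source image.
from PIL import Image, ImageFilter, ImageOps

def sketch_step(img):
    return img.filter(ImageFilter.CONTOUR)

def wash_step(img):
    return img.filter(ImageFilter.GaussianBlur(radius=3))

def posterize_step(img):
    return ImageOps.posterize(img, 3)

action_sequence = [sketch_step, wash_step, posterize_step]   # the 'script'

def run_sequence(source_path, out_path):
    canvas = Image.open(source_path).convert("RGB")
    for action in action_sequence:        # each action sees the evolving canvas
        canvas = action(canvas)
    canvas.save(out_path)

# The same sequence, designed on one source, applies to any number of others.
for name in ["portrait.jpg", "landscape.jpg"]:               # hypothetical files
    run_sequence(name, "styled_" + name)
```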

Intelligent actions meant that you could also work at level 3 building PASeq presets in Studio Artist, and then process movies with them, creating dynamic visual effects in the processed movie files that look like hand painted or hand animated art styles in motion.


That was a lot of innovation introduced in old SA V1.  Innovation that is still missing in other digital art programs 20 years later, to be honest.  I say that because in every other digital art program i sit down at, you start with a blank canvas, and then you have to do all of the work.  The other programs do nothing; you have to explicitly make everything happen in them yourself.

Here's the thing: creating something from nothing is a really hard process.  Especially if you have not taken the extensive time and training required to develop the muscular and neural motor skills required to draw well.  Some people are very good at it, but many, many more are not.

Now there's a direct analogy in the world of music composition. In the old days a composer would compose a piece of music (literally as marks on paper), then go into a very expensive recording studio and have a bunch of musicians hired at union scale wages play their piece of music for them, while a recording engineer paid by the hour recorded them playing the music, and then yet another person you had to pay would mix and master the recorded tracks into a finished piece of music.  The expense and organizational complexity of all of this was really prohibitive to musical creativity.

So the home recording studio revolution created by digital audio workstation software running on personal computer hardware (a revolution i was heavily involved in creating back in the day) was truly liberating for individual musicians and composers.  Because they could do all of the work themselves, on their kitchen table if they wanted to.

Studio Artist was trying to do a very similar thing in the visual art and digital video world.  So that if an individual had a great idea for an animated film, they could do it all themselves on their kitchen table.  As opposed to hiring a team of animators at great expense to make that happen.

David Kaplan pursued that very vision using Studio Artist to great effect, winning an award at the Sundance Film Festival for his feature-length Studio Artist animated film 'Year of the Fish'.  Literally created at his kitchen table in his apartment in NYC.


And the innovation continued over the years as new versions of Studio Artist were released.  Vector effects were introduced via the Vectorizer, new vector paint options in the paint synthesizer, and vector output from some Ip Op effects.  You could output these new vector effects as resolution-independent SVG files, an interesting alternative to traditional raster image file output.

Temporal image processing effects were introduced.  These were great for processing video with time based effects, but also opened up a whole new way of working called 'stack filtering'.  Stack filtering involves taking a collection of still images, and then using that collection of individual images as the input to temporal image processing effects.  The end results can be visually amazing.
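As a rough mental model (not Studio Artist's actual implementation), a simple stack filter amounts to computing a per-pixel statistic across a folder of same-sized still images.  Folder and output names below are hypothetical:

```python
# A minimal sketch of the stack filtering idea: treat a folder of still images as
# a temporal stack and compute a per-pixel statistic across it.
# Assumes all images in the folder share the same dimensions.
import glob
import numpy as np
from PIL import Image

frames = [np.asarray(Image.open(p).convert("RGB"), dtype=np.float32)
          for p in sorted(glob.glob("stack_images/*.jpg"))]
stack = np.stack(frames, axis=0)          # shape: (num_images, height, width, 3)

# Per-pixel median across the stack; mean gives ghostly blends, max gives trails, etc.
median_image = np.median(stack, axis=0).astype(np.uint8)
Image.fromarray(median_image).save("stack_median.png")
```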

MSG (modular synthesized graphics) was introduced in SA V2, and has been extensively expanded over the years, with over 600 individual image processing and generative modular effects that can be used to construct an infinite variety of different modular preset effects.  MSG can be totally generative (creating visual imagery from nothing but editable parameters), or it can process a source image into an effected output image, or a source can be used to modulate a generative process in some way internally.
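MSG's actual processor set is far larger and more sophisticated than this, but the modular idea itself can be sketched with a few toy modules, each with its own editable parameters, wired together into a 'preset'.  The module names below are invented purely for illustration:

```python
# A toy sketch of the modular idea behind MSG (not its actual processor set).
import numpy as np

def gradient_generator(width, height, angle=0.0):
    """Purely generative module: builds an image from parameters alone."""
    xs, ys = np.meshgrid(np.linspace(0, 1, width), np.linspace(0, 1, height))
    return np.cos(angle) * xs + np.sin(angle) * ys

def threshold_module(img, level=0.5):
    """Processing module: transforms whatever image is handed to it."""
    return (img > level).astype(np.float32)

def blend_module(img_a, img_b, mix=0.5):
    """Mixing module: combines two inputs under an editable mix parameter."""
    return (1.0 - mix) * img_a + mix * img_b

# A 'preset' is just a particular wiring of modules and parameter values.
gen = gradient_generator(256, 256, angle=0.8)
out = blend_module(gen, threshold_module(gen, level=0.4), mix=0.6)
```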

MSG presets can also be embedded into the paint synthesizer, providing a way to modularly expand different internal paint synthesizer components (like path start generation, path shape generation, brush load processing, source brush generation).  Studio Artist V5.5 also lets you embed Ip Op and Vectorizer effects directly into the paint synthesizer (once again super-charging what you can do with paint synthesizer presets).

Movie brushes (using a movie as a paint brush) were introduced fairly early on in Studio Artist history. Movie brush capabilities have expanded over the years with image folder brushes, as well as movie and image folder background textures.  These can be combined with visual attribute modulation to build photo mosaics and other visual effects from artist-curated sets of multiple visual images loaded into a digital paintbrush.
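As a hedged illustration of the photo mosaic idea (not how Studio Artist implements it), here is one naive approach: for each cell of the source, choose the brush image from a folder whose average color matches that cell best.  Folder and file names are hypothetical, and every brush image is simply resized to the cell size:

```python
import glob
import numpy as np
from PIL import Image

cell = 32
source = np.asarray(Image.open("source.jpg").convert("RGB"), dtype=np.float32)
brushes = [np.asarray(Image.open(p).convert("RGB").resize((cell, cell)), dtype=np.float32)
           for p in glob.glob("brush_folder/*.jpg")]
brush_means = np.array([b.reshape(-1, 3).mean(axis=0) for b in brushes])

h, w = (source.shape[0] // cell) * cell, (source.shape[1] // cell) * cell
mosaic = np.zeros((h, w, 3), dtype=np.uint8)
for y in range(0, h, cell):
    for x in range(0, w, cell):
        target = source[y:y + cell, x:x + cell].reshape(-1, 3).mean(axis=0)
        best = np.argmin(np.linalg.norm(brush_means - target, axis=1))
        mosaic[y:y + cell, x:x + cell] = brushes[best].astype(np.uint8)
Image.fromarray(mosaic).save("mosaic.png")
```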

Keyframe animation in the PASeq Timeline allows for the construction of interpolated hand painted strokes that then move dynamically over time in an animation, automatic dynamic visual transformations, morphing, warping, etc.  Bezier paths derived from automatic actions can also be embedded into single paint actions and then key-framed as well on the PASeq Timeline.
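The interpolation idea at the heart of key-framed paint strokes can be sketched very simply: store the bezier control points at two keyframes and blend between them for the in-between frames.  The control point values below are made up for illustration:

```python
import numpy as np

keyframe_a = np.array([[10, 200], [80, 60], [160, 60], [230, 200]], dtype=float)
keyframe_b = np.array([[10, 100], [80, 180], [160, 180], [230, 100]], dtype=float)

def stroke_at(t):
    """t=0 gives keyframe A, t=1 gives keyframe B, values in between morph the stroke."""
    return (1.0 - t) * keyframe_a + t * keyframe_b

for frame in range(5):
    print(stroke_at(frame / 4.0))   # control points for each in-between frame
```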


So Studio Artist has always been a generative digital art system from its very beginning.  And i want to draw a distinction with how the term 'generative art' is oftentimes used, because many artists use it to refer to digital art output from software coding systems created for artists like Processing or Open Frameworks, or perhaps from mucking about in neural net Colab notebooks these days.

I'm all for artists learning how to work with software code if they so desire. But my experience working with a lot of visual artists over the last 30 years indicates that many of them are not really interested in software coding.  It requires a certain level of 'left brain' analytical thinking they aren't necessarily comfortable with (especially within their working methodology which is much more intuitive or 'right brain' in nature).

These 'left brain' - 'right brain' metaphors are much overworked (and technically incorrect) at this point in time, but i think they are still useful on some level.  We have traditionally made the distinction between 'left brain' building your set of tool presets vs 'right brain' using your set of tool presets within the Studio Artist universe.  So you would set aside specific times to build working tools (left brain activity), and then use those tools creatively in your artwork (right brain activity).

The previous distinction between these 2 ways of working ('building the tools' and then 'using the tools') is a good lead-in to where Studio Artist is heading.  Both in terms of the new features we are introducing in Studio Artist V5.5, and where we see those new features heading in the future.  Because we believe we have staked out a whole new way for digital artists to work in Studio Artist V5.5. And we are going to fully develop that out and expand the nature of what it truly means in future Studio Artist releases.


Studio Artist V5.5 introduces 2 whole new higher levels to our taxonomy of 'levels of working' for digital visual artists.

The 4th level of working involves expanding the whole concept of what the 'source' even means in a digital art program.  Traditional digital art programs are very religiously rooted in the concept of working with a single image, a single movie file.  The single digital photo you took, and now want to enhance or effect in some way.  The single movie file that you want to effect in some way.  The empty blank canvas that you are supposed to turn into a sketch of a woman's face, or a bowl of fruit on a table.

So this new 4th level of working i call 'source abstraction'.  Abstracting and expanding the whole notion of what the source even means.  So that it is no longer tied to just a single image or movie file. It might be a representation of a collection of images, a collection of movies.  You might start to think of it as more like a data model, filled with visual attributes, all available for modulation within Studio Artist.
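One way to picture this kind of abstracted source is as a small data model of visual attributes summarized over a whole collection of images.  The sketch below, with a deliberately simple, made-up attribute set and a hypothetical folder name, is only meant to convey the flavor of the idea:

```python
import glob
import numpy as np
from PIL import Image

def abstract_source(folder):
    """Summarize a folder of images into one abstracted 'source' description."""
    attributes = []
    for path in glob.glob(folder + "/*.jpg"):
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
        attributes.append({
            "mean_color": img.reshape(-1, 3).mean(axis=0),
            "brightness": img.mean(),
            "contrast": img.std(),
        })
    return {
        "mean_color": np.mean([a["mean_color"] for a in attributes], axis=0),
        "brightness": float(np.mean([a["brightness"] for a in attributes])),
        "contrast": float(np.mean([a["contrast"] for a in attributes])),
    }

source_model = abstract_source("source_collection")   # hypothetical folder of images
```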

You can also start to think about 'style' mixing in the context of the 'source'.  Playing one abstracted 'source' off against another abstracted 'source'.


My personal viewpoint on the whole notion of what a source is, or even style for that matter, has evolved and expanded quite a bit over the years.  Just like our notion of what the source is and can do has expanded in Studio Artist V5.5.  With the new Gallery Show source options, and the new Load Style features.  Both of which are really just teasers for even cooler things to come, but still incredibly useful in their current limited forms.

It's fascinating when you start thinking about the 'source' for a piece of artwork as being a collection of images instead of a single image. It's very much like working with a database of visual attributes.  It's a very different way of thinking for most digital artists (at least that is my perception).  Different, but really expansive, liberating perhaps, definitely worth getting a grasp on, worth trying out and exploring.  It's new territory waiting to be charted out, waiting for you to find your own personal niche within it.

As i pointed out earlier, the Studio Artist universe dove into these waters when we got heavily into stack filtering. But the capability has always existed since V1 if you really wanted to explore it, via loading a movie (which could be any old collection of images as individual frames within the movie container), and then riffing with it while painting.


I've been very heavily involved in getting up to speed on the latest developments in deep learning neural nets during this last pandemic year.  Most people are probably unaware that i did neural net research in the 90s, including some very basic work on using convolutional neural networks for learning artistic stylistic transformations on images.  Not really practical back then to be honest, so we quickly moved on to adaptive fuzzy logic systems and other things that ran a lot faster on the ancient computer hardware available at the time.

But it's a very different world nowadays. So it's been fascinating to take my background in all things neural net and bring it up to date with the latest and greatest developments in the field. So i'm heavily influenced by what people have been doing in recent research areas like neural style transfer, generative models like GANs and VAE systems, etc.

And the notion of a database of visual images is very central to how these deep learning neural net systems work.  Generative imaging transformations based on the collective visual statistical properties of a collection of images. So all of that exciting new research work has heavily influenced my thinking, and is going to continue to do so in the future as we move forward.


This whole notion of source abstraction (expanding the whole notion of what a 'source' even means), is the new 4th level of working.  

And the 'build a tool' vs 'use a tool' tension discussed earlier is really what the new 5th level of working hopes to alleviate.

Through the use of AI Generative Systems.  Implementing automatic intelligent Generative Art Strategies.


The power of AI Generative Systems is that they can intelligently create presets for you on the fly. A literally infinite variety of them. And then they can intelligently work with multiple kinds of presets, working with them together in sequence to build more elaborate effects.  More elaborate generative strategies.

So it's like scripting, except Studio Artist builds the script for you, automatically, on the fly, and it can always be different if you want that, even as it follows a generative strategy that is constrained in some high level conceptual way.
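Conceptually, the difference from blind randomization can be sketched like this: the artist pins down high level constraints, and the system draws an endless stream of presets and multi-step scripts from inside those constraints.  The parameter names below are invented for illustration, not actual paint synthesizer parameters:

```python
import random

strategy = {                       # the high-level conceptual constraints
    "brush_size": (4, 40),
    "wetness": (0.6, 1.0),         # e.g. 'keep the paint wet'
    "color_jitter": (0.0, 0.3),
}

def generate_preset(constraints):
    """Draw one preset at random, but only from inside the allowed ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in constraints.items()}

def generate_script(constraints, steps=3):
    """An automatically built 'script': several generated presets applied in order."""
    return [generate_preset(constraints) for _ in range(steps)]

script = generate_script(strategy)
for i, preset in enumerate(script, 1):
    print("step", i, {k: round(v, 2) for k, v in preset.items()})
```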


So AI Generative Systems are the 5th level of working. Building on top of everything below it. Built on 20 years of development. So many old effects to rediscover. So many new effects to learn about and explore. Letting the program build artistic strategies and presets for you.

Or letting you define artistic strategies for it to explore for you automatically. Like that crew of really hard-working people famous artists use to crank out their work for them.


Gallery Show has expanded tremendously in Studio Artist V5.5.  It was originally conceived as a way for artists to create free-running dynamic visual art shows using Studio Artist in an actual art gallery (it was very lightly used for this task), but people quickly glommed onto it as a way to visualize what the factory preset collection did, to batch process images with custom PASeqs, or to riff random mutations of pre-existing presets.

In Studio Artist V5.5 you can use Gallery Show to build custom Generative Art Strategies.  Generative Art Strategies that can be fairly simple, or extremely elaborate.  The new Generative Paint options in Gallery Show basically allow you to control Studio Artist as it dynamically creates an infinite variety of custom automatically edited paint presets on the fly.  Exploring areas of the overall editable parameter space you might never get to through the use of old school manual preset editing.

We've also tried to bring many of the intelligent Gallery Show generative features up to the surface, providing ways to access them at the high level working interface of Studio Artist.  Through new smart Quick Edit command options for the paint synthesizer and the vectorizer.  With more to come in the future.


Old school preset editing involves going into the depths of the Editor for the specific operation mode (paint, vectorizer, ip op, etc), and manually editing individual parameter values by hand.  This extensive level of editing control over every effect available in Studio Artist has always been there, and is great from the standpoint of being able to tweak things to your heart's content. Dial in just the effect you want.

The other side of that coin is that you really need some level of conceptual understanding of what is going on under the hood to really be fluid at this low level of manual parameter editing for visual effects and digital paint.  And if you aren't carrying around a conceptual understanding of how the paint synthesizer works internally, how the vectorizer works internally, hand editing all of those parameters can be a very challenging left brain kind of task. A task that gets in the way of your right brain creative work.

At the same time, it has become quite apparent to us that increasing the level of internal visual attribute modulation inside of these areas of the program (i.e. expanding the set of editable parameters even more) leads to amazing visual output effects.  A whole new level of visual wow in the resulting art output.


So that is the dilemma.  Expanding the range of editable parameters increases the range of potential effects, and increases the resulting visual complexity of the effect output, pushing it to an amazing new level. But the program becomes more unwieldy to manually edit, even for experts like myself well versed in how it works internally.  

AI Generative Systems to the rescue.

Rather than having to manually adjust individual editable parameters, we've been adding all kinds of different intelligent Quick Edit commands. Smart edits that let you work at a higher conceptual level when editing an existing preset, or when creating new ones from scratch.

We've started this process in V5.5, but it's going to be an ongoing endeavor from now on.  More of them, and more places to use them.  You can run these smart Quick Edit commands manually.  But you can also use them to build automatic generative strategies that can run within Gallery Show in V5.5.


Let's say you want to use MSG Brush Load (did you even know it existed, or if you did, how to manually program it in the paint synthesizer?).  You can just dial up that generative paint option in Gallery Show, and Studio Artist will then riff out an endless variety of new, unique, most likely never-seen-before paint presets on the fly that are all using MSG brush load internally.

Studio Artist is automatically creating new presets for you on the fly.  But they aren't just total random mutations of the parameter space.  They are constrained in intelligent ways based on your personal specification.


Let's say you want to edit a paint preset to make it 'wet'.  Doing that manually could take a number of very specific edits within different control panels inside of the paint synthesizer Editor.  And that's for just one particular approach to building a wet paint.

Now you can just dial up that generative paint preference in Gallery Show. Or you can run a specific smart Quick Edit command to do it for you at the top level working interface.


Applying specific effects (paint, ip ops, vectorizer, whatever) to specific areas of the canvas while not processing the rest of the canvas with that effect is an important component of building an overall final art image.  Certainly not essential, but extremely useful.  A great creative tool to take advantage of in your work.

You could always do this via manual selection and subsequent manual masking of an effect in Studio Artist.  But Studio Artist V5.5 can do it for you automatically now. Intelligently deriving an automatic selection mask on the fly.  You can dial this in with all kinds of different variations in Gallery Show, or you can run specific variations of it up at the high level working interface.
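As a stand-in for whatever Studio Artist is doing internally, here is one very simple way an automatic selection mask could be derived: threshold a smoothed edge map of the source so that an effect only lands where there is visual detail.  File names are hypothetical:

```python
import numpy as np
from PIL import Image, ImageFilter

src = Image.open("source.jpg").convert("L")                  # hypothetical source
detail = src.filter(ImageFilter.FIND_EDGES).filter(ImageFilter.GaussianBlur(5))
mask = np.asarray(detail, dtype=np.float32) > 20.0           # boolean selection mask

effected = np.asarray(src.filter(ImageFilter.EMBOSS), dtype=np.uint8)
canvas = np.asarray(src, dtype=np.uint8).copy()
canvas[mask] = effected[mask]                                # apply the effect only inside the mask
Image.fromarray(canvas).save("masked_effect.png")
```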

Is the 'intelligence' associated with this perfect? No.  Is it useful? Yes.  Will it get better as we move forward into the future?  Absolutely.


But what about movie support?  Is that back?  Yes it is.  You can now work with movie file i/o again (as opposed to being forced to use folders of frame images like you had to in the 64-bit build of V5 on the mac).  You no longer need QuickTime for Windows if running on Windows.

Are all of the codec options you want available now?  Maybe not.  But we intend to expand those as we move forward.  Yes, we know you want ProRes.

We also want to add support for MNG in the future for movie brushes, since it provides a great open source container for lossless PNG images with embedded alpha channels.

Live video is also back in somewhat limited form. All of that will be turned on as we move forward, and we have some interesting ideas for expanding what people can do with it.  Multiple video sources being one of them.


What about other new stuff?  Well, there is quite a bit of it. Sprinkled all over the program.


There is a new positive/negative space visual attribute available throughout the program wherever visual attribute modulation is available.  It models the visual perception of positive and negative space in an image.  Since we now offer visual attribute modulation options for both the current source and the current loaded style, it's available for source and style.


We've built in a whole new conceptual notion of clipping an algorithm internally.   You can think of algorithm clipping as being like working with selection masking, except the processing algorithm is masked internally as a part of its underlying internal structure.  As opposed to after the fact, by masking what gets placed from the effect output into the working canvas, which is how canvas masking works.

This is currently available in the paint synthesizer and the Vectorizer.   But you can expect to see that expand to other areas of the program in the future.
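The distinction can be illustrated with a toy averaging effect (this is just a sketch, not Studio Artist code): canvas masking runs the effect everywhere and composites the result through a mask, while algorithm clipping hands the mask to the effect itself, so pixels outside the clip never enter the computation at all, which changes the result near the clip boundary.

```python
import numpy as np

def box_blur(img, radius=3):
    """Plain box blur over the whole image (float grayscale array)."""
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def blur_then_mask(img, mask, radius=3):
    """Canvas masking: blur everything, then keep the blurred result only inside the mask."""
    return np.where(mask, box_blur(img, radius), img)

def clipped_blur(img, mask, radius=3):
    """Algorithm clipping: the blur only ever averages pixels inside the clip."""
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window, inside = img[y0:y1, x0:x1], mask[y0:y1, x0:x1]
            out[y, x] = window[inside].mean()
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
# The two approaches differ most near the clip boundary.
print(np.abs(blur_then_mask(img, mask) - clipped_blur(img, mask)).max())
```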


The Transition Context features have been greatly expanded (did you even know Transition Contexts existed?).  There are more algorithm options, adjustments for them now, and the ability to route where the output of the Transition Context ends up.  You can route to the canvas, source, or style.

Transition Contexts automatically generate transition effects between different keyframes.  Each keyframe is associated with a specific image or movie file tied to that keyframe.  
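In its simplest possible form, a transition between two keyframe images is just a timed blend.  Transition Contexts offer much more interesting algorithms than this, but the sketch below (with hypothetical file names, and both keyframe images assumed to be the same size) shows the basic shape of the idea, along with the notion that each generated frame could be routed somewhere:

```python
import numpy as np
from PIL import Image

key_a = np.asarray(Image.open("keyframe_a.jpg").convert("RGB"), dtype=np.float32)
key_b = np.asarray(Image.open("keyframe_b.jpg").convert("RGB"), dtype=np.float32)

def transition_frame(t):
    """t runs from 0 (keyframe A) to 1 (keyframe B)."""
    return ((1.0 - t) * key_a + t * key_b).astype(np.uint8)

frames = 24
for i in range(frames):
    out = Image.fromarray(transition_frame(i / (frames - 1)))
    out.save(f"transition_{i:03d}.png")   # could be routed to canvas, source, or style
```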


Dual Mode Paint now lets you use the Vectorizer as a DualOp if you so desire.  We also introduced the concept of injecting time based modulation from Dual Mode Paint into an effect that overrides existing internal parameters.  This currently only works for Vectorizer Dual Ops, but is going to be expanded out to the other Dual Op choices in the future.

This override injection modulation feature is going to become very important as we develop the program into the future. Because it provides a way for other effects or signals to modulate an Op Mode effect without having to add endless internal parameters for those additional modulation options to a specific effect. 
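A rough sketch of the override injection idea (with invented parameter names, not actual Studio Artist parameters): an external time-based signal computes overrides that replace a preset's stored values at render time, so the effect itself never needs to grow a dedicated parameter for every possible modulation source.

```python
import math

preset = {"region_size": 20.0, "color_shift": 0.0}   # the effect's stored parameter values

def time_modulation(t):
    """External signal (here just a sine over time) mapped onto a parameter override."""
    return {"region_size": 20.0 + 15.0 * math.sin(2.0 * math.pi * t)}

def render_frame(t):
    params = dict(preset)
    params.update(time_modulation(t))    # the injected override wins over the stored value
    return params                        # a real renderer would draw with these values

for frame in range(4):
    print(frame, render_frame(frame / 4.0))
```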


We discussed Load Style in the initial discussion above.  Current support at V5.5 release is a loaded style image, but loaded style image folders are coming soon.

We also discussed Gallery Show in the initial discussion above. Many new Gallery Show features are available now.  New techniques, new paint transform and paint path options associated with paint based techniques, a whole new generative paint preference tab with associated options, new generative options associated with vectorizer techniques, new source options for gallery show, new automatic intelligent masking options for gallery show, new built in generative options for the Start and End cycle processing, a timer to skip to the next cycle if using paint techniques that would otherwise run a long time, and new automatic color palette generation options.

Many of the new gallery show generative options are now available in the high level working interface.  This includes the paint and vectorizer generative preference options which are currently available as QuickEdit commands in the Edit menu, automatic selection mask generation options available in the Canvas : Selection menu, paint draw and paint path generation options available in the Action : Art Strategy menu, and generative dissipative image processing available in Action : Art Strategy as well.

There is a new Gallery Show Toolbar you can use to control gallery show in the high level workspace interface.

There is also a new PowerTool Bar that currently supports some of the gallery show generative features when working in the high level workspace.  Our design specs have detailed descriptions of additional features that will be available in the PowerTool Bar in the future (like the ability to work with sets of PowerTool presets, and the ability to access QuickEdit commands directly in the PowerTool Bar).

There is a new M mutate button in the Preset Browser that lets you mutate new presets on the fly based on the currently loaded factory preset category.


We mentioned you can internally clip the vectorizer algorithm, which is currently available for region generation as well as region color modulation.  The old Draw control panel is split into 2 now, Draw Setup and Draw Apply.  There are a ton of new region modulation options in Draw Setup now.  There are new modulation options in the Draw Apply control panel for color gradients and cast shadows for vector rendering.

We mentioned you can internally clip the paint synthesizer algorithm. This is currently available in Paint Color Source as well as for brush Nib clip in Paint Fill Apply.  There are 2 new pen modes, freestyle multi-pen and freestyle region fill as brush.  You can embed vectorizer and ip op effects that generate vector output directly into a paint preset, and then use the resulting generated vectors for path sketching with paint or path start regionization.  You can choose whether the selection buffer is overwritten by individual regions when running path start regionization or not (it always did that before).  You can now choose whether the aspect ratio is maintained for resized image, image folder, or movie brush folder paint nibs, or whether they resize to fit like they used to.  You can choose whether the visual attribute used for path angle modulation is based on the source or style image (more style modulation is coming in the paint synth in the near-term future). There are new Brush Size modulation options.  Bezier paths can now store and restore brush size in addition to brush color.

We've got some experimental optical flow stuff going on in Temporal Ip Ops that is going to be cleaned up in the short term.

There are various new IpOp parameters and features hiding inside of some of the individual IpOp effects.

Studio Artist V5.5 is based on a totally different internal framework (different than V5).  This framework is very current with recent OS changes, so things like cosmetic issues on macOS Big Sur are gone.  It supports things like dark appearance display mode on the mac.  This change may seem trivial to you, but believe me, it was not trivial under the hood to implement.

There are various other new things i'm spacing on right now that i'll try to drop into this list as they come to mind later.

So, i hope this gives you some idea what the long awaited Studio Artist V5.5 release is all about.  And where we have pointed it for future Studio Artist development.  

We've also been working internally on V6 things as well, so that effort has been ongoing in the background and will now move to the forefront with the V5.5 release.
