The text prompt input to the multi-modal synthesis model is not changed at all for the duration of the animation. Some of the other parameters associated with the synthesis model are adjusted in a controlled fashion, dramatically changing the look and appearance of the output imagery as those changes occur.
2 comments:
I like the portrait of the young lady. What process did you use, John?
The animation was generated in Studio Artist using Transition Contexts in a PASeq.
The individual images key-framed in the Transition Context were generated using a prototype of a generative AI context, which is the mechanism we're working on to implement key-framable neural net image synthesis in a future version of Studio Artist.
The particular neural net synthesis algorithm being used is a variant of latent diffusion.
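To make the Transition Context idea above concrete, here is a minimal sketch of transitioning between two key-framed images, assuming the simplest possible case: a per-pixel cross-dissolve. Studio Artist's Transition Contexts support many transition types; this only illustrates the general principle, and the flat-list "images" are hypothetical.

```python
def cross_dissolve(img_a, img_b, t):
    """Blend two images (flat lists of pixel values) at time t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(img_a, img_b)]

key_a = [0.0, 0.5, 1.0]   # hypothetical key-framed image A
key_b = [1.0, 0.5, 0.0]   # hypothetical key-framed image B

# At t = 0 the output matches key frame A, at t = 1 key frame B,
# and intermediate t values produce the in-between animation frames.
midpoint = cross_dissolve(key_a, key_b, 0.5)
```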