Video experiment. Gladiators - Fight or Love. Result and Workflow.
I wanted to generate a sequence of images based on an image I had previously generated with Midjourney. From playing with Stable Diffusion's ControlNet, I knew it should be possible. Instead of showing just the finished product on its own, I made a process video showing the workflow, since many of these technical things are much easier to grasp with visuals.
Steps in short:
Text prompt with Midjourney
Stable Diffusion Control Net
CN SoftEdge HED
CN OpenPose
Remove unwanted artifacts e.g. generated watermarks with Photoshop (Content Aware Fill) or Lightroom (Heal)
Upscale with Topaz Gigapixel AI
Assemble image sequence to video in After Effects
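For anyone who prefers scripting over a GUI, the ControlNet part of the workflow (steps 2&ndash;4) can be sketched roughly like this. This is a minimal sketch assuming the Hugging Face `diffusers` and `controlnet_aux` packages, not the exact setup I used; the function name, seed range, and base model choice are illustrative.

```python
# Sketch of the ControlNet pass, assuming diffusers + controlnet_aux.
# Model IDs are the public lllyasviel ControlNet 1.0 checkpoints.
CONTROLNET_MODELS = {
    "softedge_hed": "lllyasviel/sd-controlnet-hed",    # SoftEdge HED
    "openpose": "lllyasviel/sd-controlnet-openpose",   # OpenPose
}

def generate_variations(source_image, prompt, mode="softedge_hed", n=8):
    """Generate n images that keep the source composition (needs a GPU)."""
    import torch
    from controlnet_aux import HEDdetector, OpenposeDetector
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Extract the conditioning map (edges or pose) from the Midjourney image.
    if mode == "softedge_hed":
        detector = HEDdetector.from_pretrained("lllyasviel/Annotators")
    else:
        detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
    condition = detector(source_image)

    controlnet = ControlNetModel.from_pretrained(
        CONTROLNET_MODELS[mode], torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Same conditioning map, different seeds: same composition and form,
    # varied colors and objects within those constraints.
    return [
        pipe(prompt, image=condition,
             generator=torch.Generator("cuda").manual_seed(seed)).images[0]
        for seed in range(n)
    ]
```

Feeding the resulting frames through the cleanup, upscale, and After Effects steps above is unchanged.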
This was mostly a test, so I didn't perfect each of the images. That's why some of the faces are bad. I am aware! If the image had depicted a single person facing the camera, the initial results would have been fairly good. I tested that. I didn't have many images posted online with only a single person in them, so I opted to use an image of a couple, even though the final results were less than ideal.
It's possible to do something similar with an image prompt in Midjourney, but the result won't be the same as with Stable Diffusion's ControlNet, which, as you can see, is able to produce images with the exact same composition and form, varying only the colors and the objects depicted within those constraints.