--excerpt from journal found in the ruins of the SanTokyo Temple of the Unnamed --
June 2: The child has been found, exactly where the King said it would be. He was taken from the back seat of the car just after the accident.
The ritual was started hours later. He was opened and the yellow sign was carved onto his heart and into his soul. He will be a force for the return of the Elder Gods.
-- excerpt ends --
Case Officer's notes: We suspect the child they are talking about is the boy now called only Soul Yellow. He's become a major threat, even to the Magical Girls who patrol SanTokyo. He fights as if driven by inner demons, and rends space and people apart with lightning and magical constructs. I have no idea what this 'sign' they carved into him was, but he's become a monster now.
>>><<<
Soul Yellow is a Dark Magical Boy and was sworn to the pantheon of the Elder Gods weeks after his birth. The contamination from the Yellow Sign has corrupted and changed the path of both his body and soul. He functions now as a minor avatar of the King in Yellow.
----------------
This will be the very very last image I post in 2024. Unsurprisingly.
Since I ended up having to use ControlNet Canny to get this image the way I wanted it, I've included the Canny control file. It was built with the same prompt. One of the models in this prompt made reproducibility impossible, almost as bad as if I'd used an Ancestral Sampler.
No idea why. My initial thought was that it was a LoRA, but removing the LoRAs didn't resolve the issue, at least not in all cases. It's nice to be able to reproduce an image predictably, but sometimes the models aren't predictably stable, or you're using an Ancestral Sampler, or there's an unresolved bug in the interface. I tend not to get terribly worked up if I can't get the system to reproduce an image every time.
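For anyone who wants the nuts and bolts: in a code-based workflow (I'm sketching with the diffusers library here, which is an assumption — it's not the UI I actually used, and the model name and prompt are placeholders), reproducibility mostly comes down to fixing the seed and avoiding ancestral samplers, which inject fresh noise at every step:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Placeholder checkpoint; the actual model/LoRA stack from this image isn't shown here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in a deterministic (non-ancestral) scheduler. Ancestral samplers add
# random noise at every step, so any tiny change cascades through the image.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# A fixed seed should give the same image on the same hardware and settings.
generator = torch.Generator("cuda").manual_seed(1234)
image = pipe("placeholder prompt", num_inference_steps=30,
             generator=generator).images[0]
image.save("render.png")
```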
In this case, I resolved the issue by finding a render generated from this prompt that had the right amount of dynamic tension in the layout and good overall attention flow, one that drew the eye well. I reduced it to a Canny control image (basically a white-on-black contrast outline) and used a Canny ControlNet to constrain the layout to something close to that image.
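If you were doing that step in code rather than a UI, a minimal sketch would look like this (diffusers plus OpenCV; the file names, thresholds, and prompt are placeholders, and the Canny thresholds usually need per-image tuning):

```python
import cv2
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Reduce the chosen render to a white-on-black edge outline.
render = cv2.imread("good_layout_render.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(render, 100, 200)  # thresholds are per-image tuning knobs
Image.fromarray(edges).save("canny_control.png")

# Feed that outline to a Canny ControlNet so new renders keep the same layout.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "placeholder prompt",  # same prompt as the original render
    image=Image.open("canny_control.png").convert("RGB"),
    num_inference_steps=30,
).images[0]
image.save("constrained_render.png")
```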
That allowed me to resolve much of the reproducibility issue, at least enough that I could then spend time dealing with actual detail issues, like teeth and eyes. Inpainting resolved those. Final details like rust, wounds, some background edits, glow effects, and cleanup of some Stable Diffusion artifacts were done in Photoshop.
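The inpainting step, sketched with diffusers' inpaint pipeline (again, an assumption about tooling — I worked in a UI, and the file names here are placeholders). The mask is white wherever the model is allowed to repaint:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("constrained_render.png").convert("RGB")
mask = Image.open("teeth_and_eyes_mask.png").convert("RGB")  # white = repaint

fixed = pipe(
    prompt="placeholder prompt focused on the masked detail",
    image=image,
    mask_image=mask,
).images[0]
fixed.save("detail_fixed.png")
```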
How much of this image is actually AI and how much is Photoshop? The amount of detail is borderline nonsensical. Did you add the depth of field effect manually via Photoshop?
It's a little hard to say exactly how much is Photoshop, and the inpainting system is often frustratingly stupid. It's really common to have to hand-draw what you want into the image, and then use img2img to refine that. I'd say a lot of the smaller details are hand-edited.
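The hand-draw-then-refine loop, if you want it in code form (a diffusers img2img sketch under the same tooling assumptions as above; strength is the knob that decides how much the model is allowed to repaint your drawing):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Rough hand-drawn correction painted over the render in an image editor.
init = Image.open("hand_drawn_fix.png").convert("RGB")

refined = pipe(
    prompt="placeholder prompt describing the corrected detail",
    image=init,
    strength=0.4,  # low strength: keep the drawn shapes, improve the rendering
).images[0]
refined.save("refined_detail.png")
```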
The Depth of Field, however, was defined in the initial prompt. That particular model seems to handle those sorts of effects really, really well. Like any other tool, a large part of the work of getting better-quality images out of it is understanding its strengths and weaknesses and working with them appropriately.