In 2010 I colored one of Shane Nelson's Shayla pics, and did simple lighting from each cardinal direction, so I could add a background last and edit the layers to match.
By 2013 this became various plans to swap colors around for smooth lighting from any direction. I started a few Taral Wayne pieces with "leftness" and "upness" layers, but never found a good method for using them.
In 2017 I finally gave up looking and just slapped something together in JavaScript. It was dead simple: replace each pixel's RGB value with the pixel at coordinates <R,G> in a 256x256 image. If that image was a mirrored sphere you'd get a decent statue. "Easy, Butterfly" is a clear example.
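The whole trick boils down to one loop. Here's a rough sketch of that lookup, assuming plain RGBA buffers like canvas ImageData; the names are illustrative, not the actual tool's code:

    // src: false-color input, probe: 256x256 lookup image, out: same size as src.
    // All three are assumed to be ImageData-style objects {width, height, data}.
    function probeLookup(src, probe, out) {
      for (let i = 0; i < src.data.length; i += 4) {
        const r = src.data[i];                    // red picks the probe column
        const g = src.data[i + 1];                // green picks the probe row
        const p = (g * probe.width + r) * 4;      // index into the 256x256 probe
        out.data[i]     = probe.data[p];
        out.data[i + 1] = probe.data[p + 1];
        out.data[i + 2] = probe.data[p + 2];
        out.data[i + 3] = src.data[i + 3];        // keep the original alpha
      }
      return out;
    }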
That tool improved incrementally, but always relied on sphere-like "probes." Making those and matching them to a scene was an ordeal - sometimes involving a command-line mod tool for Oolite just to convert cubemaps - and soft lighting never looked as good as shiny stuff.
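For what it's worth, the conversion itself isn't magic: for each probe pixel, reflect a view ray off a mirror ball and sample whichever cube face it points at. Here's a stand-in sketch of that idea - my own code, not the Oolite tool, and the face orientations follow one common convention that may not match anything else:

    // faces: {px, nx, py, ny, pz, nz}, each a square {width, height, data} RGBA buffer.
    // out: the 256x256 probe image to fill in.
    function cubemapToProbe(faces, out) {
      const size = out.width;
      for (let y = 0; y < size; y++) {
        for (let x = 0; x < size; x++) {
          // Map this pixel onto a unit sphere facing the camera.
          const u = ((x + 0.5) / size) * 2 - 1;
          const v = 1 - ((y + 0.5) / size) * 2;
          const d2 = u * u + v * v;
          if (d2 > 1) continue;                       // outside the mirror ball
          const nz = Math.sqrt(1 - d2);               // surface normal is (u, v, nz)
          // Reflect the view ray (0, 0, -1) off that normal.
          const rx = 2 * u * nz, ry = 2 * v * nz, rz = 2 * nz * nz - 1;
          const c = sampleCube(faces, rx, ry, rz);
          const o = (y * size + x) * 4;
          out.data[o] = c[0]; out.data[o + 1] = c[1];
          out.data[o + 2] = c[2]; out.data[o + 3] = 255;
        }
      }
      return out;
    }

    // Nearest-neighbor cubemap sample. Orientation conventions vary between tools,
    // which is exactly the kind of thing that bites you.
    function sampleCube(faces, x, y, z) {
      const ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
      let face, u, v, m;
      if (ax >= ay && ax >= az) { m = ax; face = x > 0 ? faces.px : faces.nx; u = x > 0 ? -z : z; v = -y; }
      else if (ay >= az)        { m = ay; face = y > 0 ? faces.py : faces.ny; u = x; v = y > 0 ? z : -z; }
      else                      { m = az; face = z > 0 ? faces.pz : faces.nz; u = z > 0 ? x : -x; v = -y; }
      const s = face.width;
      const px = Math.min(s - 1, Math.floor((u / m * 0.5 + 0.5) * s));
      const py = Math.min(s - 1, Math.floor((v / m * 0.5 + 0.5) * s));
      const i = (py * s + px) * 4;
      return [face.data[i], face.data[i + 1], face.data[i + 2]];
    }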
Some improvements begged for animation. I had a whole material system with multiple probes. It could handle backgrounds internally. I knew how to fake alpha-blending for translucent and reflective surfaces. Changing the probes changed the whole image, but I could not bring myself to manage all those frames by hand.
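For reference, the blend underneath that fake is basically the textbook "over" operator - this is a generic sketch, not necessarily what the tool actually did:

    // fg composited over bg, per pixel, using fg's alpha. Generic version.
    function over(fg, bg, out) {
      for (let i = 0; i < fg.data.length; i += 4) {
        const a = fg.data[i + 3] / 255;
        for (let c = 0; c < 3; c++) {
          out.data[i + c] = Math.round(fg.data[i + c] * a + bg.data[i + c] * (1 - a));
        }
        out.data[i + 3] = 255;                      // assume an opaque background
      }
      return out;
    }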
By 2018 I'd tried using Blender instead. In theory it could do this environment mapping in seconds. In theory. In reality I spent days fucking around with nodes and scenes and only managed to get lighting from one direction, sometimes, if it felt like it. It was beyond stupid. My false-color images for this and other Kandlin pieces sat around on my hard drive.
Finally, at the very end of 2019, I tried again, and got things almost-sorta-kinda working. It's still only halfway there. Blender does some clever things in a fake way that is nearly impossible to unfuck. It still flipped my normals around, multiple times, for no discernible reason. But it finally worked well enough for me to model a scene and fight Blender's usual nonsense instead of banging my head against step one.
This took a long time to render. Blender doesn't like my video card. But on the scale of how long I've been trying to do this simple trick that should be obvious but seems completely unprecedented... hours are moments.
So naturally InkBunny jerked me around all afternoon about the video format. In the end I gave up, uploaded it to GfyCat, and then realized I could upload GfyCat's version. Hey, guys? If your robot's going to be this particular about which goddamn files it will host... maybe transcode it yourselves.
Edit: Guess who bought a new comput-- god dammit, "no playable sources found?" InkBunny admins, y'all need to unfuck your video handling. Just inline <video> for whatever file gets posted and if people's browsers choke then boo hoo. Everything supports mencoder's default x264 AVIs. You know what's unsupported these days? SWFs!