Just because you think you know what you’re doing doesn’t mean you’ll actually get the result you expect. That gap is the basis for some great finds in science and technology – or for stuff that fills the waste bin of history.
Also, garbage in – garbage out.
Fighting the restrictions of a narrow depth of field, you’ll find all sorts of techniques, gear, software, etc. offering to help you overcome the physics of lens design and light paths. Focus stacking is popular – it has been since film days – and interesting in how it presumes to provide a substitute for our eyesight. To enable us, finally, to make an image that looks the way our eye really sees things.
Except it’s a lie.
Focus on reading the words on this screen. Now, using your peripheral vision (you know, the corner of your eye), “look” at something about a foot to the right or left of the screen, but don’t take your focus off the words on the screen. What can you really tell about the details of that object off to the side? A little blurry, isn’t it? Here’s a better test. While focusing on the words in the top part of a newspaper, “look” over the top at something in the distance, again using your peripheral vision. What kind of details are you making out in that distant subject? Not many, you say.
It seems our eyes have a depth of field as well, not only near to far but also around the circumference of our vision. So all those images of three-dimensional objects that are sharply focused in all dimensions don’t actually mimic our eyesight. They mimic the eyesight we wish to have.
Since they are artificial constructs, though, it means they lend themselves to interpretation and alternative versions. From whence comes art, human and otherwise.
Taking a lot of images of a subject, each slightly focused on a different plane of the subject, and then “stacking” them together in software can result in one of those 3D objects with all aspects in focus. Or it can result in what you see above. Does this ball of bands exist in some alternative dimension, some slightly askew reality that periodically bleeds over into ours when we’re not looking? Possibly, if you want to believe software programs have a mind of their own.
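For the curious, the merge step described above can be sketched in a few lines of code. This is a toy illustration, not the algorithm any real stacking program uses: it scores local contrast with a simple Laplacian-style measure and keeps, per pixel, the value from whichever frame is sharpest at that spot. The tiny “frames” are made-up grayscale numbers.

```python
def sharpness(img, x, y):
    """Local contrast at (x, y): center value minus the mean of its
    in-bounds 4-neighbours (a crude Laplacian)."""
    h, w = len(img), len(img[0])
    neighbours = []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < h and 0 <= ny < w:
            neighbours.append(img[nx][ny])
    return abs(img[x][y] - sum(neighbours) / len(neighbours))

def focus_stack(frames):
    """Merge frames by picking, per pixel, the frame with the highest
    local-contrast score -- i.e. the one in focus at that spot."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for x in range(h):
        for y in range(w):
            best = max(frames, key=lambda f: sharpness(f, x, y))
            out[x][y] = best[x][y]
    return out
```

Real stackers do this per region with much better sharpness measures, and crucially they align and geometry-correct the frames first – the step that went wrong in the image above.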
Your digital work is only as smart as your software. The image above is not digitally enhanced, at least not in the manner you’d expect. It’s the result of not flipping the right switch in my software, of selecting the wrong option (well, wrong unless I actually wanted this look to the image). Instead of asking the software to deal with the geometric distortions caused by moving the focal plane for each image in the stack, I simply told it to reposition each image so it lined up with the others in the stack. Except that’s not possible.
Each image in the stack is slightly different because when I refocused on a new part of the ball, all the other out-of-focus parts got slightly more out-of-focus. Each variation adds up. When the software was told to simply reposition all the images and line them up, it did the best it could, but in the face of an impossible task, decided to get creative. The algorithm essentially said, “I’ll line up these few areas I can work with but with the rest I’m taking a wild guess.” Apparently the wild guess included ignoring exposure information and color balance as well, resulting in the strange glow that appears to emanate from the ball itself.
The correct switch literally tells the software to correct for perspective shift, the phenomenon that occurs when I shift focal planes. See, the software is actually smart – it’s the operator that gives poor direction. You know, garbage.
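A rough illustration of why that switch matters: refocusing changes each frame’s effective magnification (photographers call this focus breathing), which is a big part of the geometric shift the software has to undo before the frames can line up. The sketch below, with hypothetical magnification numbers, rescales each frame about its centre to match the first frame using crude nearest-neighbour sampling; real stacking software performs a fuller perspective correction than this.

```python
def rescale(img, factor):
    """Scale a grayscale image about its centre by `factor`,
    using nearest-neighbour sampling to stay dependency-free."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Map each output pixel back to its source position.
            sy = round(cy + (y - cy) / factor)
            sx = round(cx + (x - cx) / factor)
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = img[sy][sx]
    return out

def normalize_stack(frames, magnifications):
    """Bring every frame to the magnification of the first frame,
    so the stacker can then line up matching details."""
    ref = magnifications[0]
    return [rescale(f, ref / m) for f, m in zip(frames, magnifications)]
```

Skip this normalization – or tell the software to skip it, as I did – and no amount of repositioning can make the frames agree.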
It does raise the question of what other types of images you can make this way and what they would look like.
Like this, I guess.
How would you like to see that coming your way while on a Yellowstone hike?