Wikipedia:Featured picture candidates/Photon noise

Photon noise
Voting period ends on 17 May 2011 at 12:00:02 (UTC)
 * Reason: High EV picture which provides a good illustration of its topic
 * Articles in which this image appears: Shot noise
 * FP category for this image: Featured pictures/Photographic techniques, terms, and equipment
 * Creator: Mdf


 * Support as nominator --Tomer T (talk) 12:00, 8 May 2011 (UTC)
 * Oppose Sorry. Any unbiased renderer with a random sampler produces such images, with a clearly better-looking end result. That means it isn't hard to create, and the subject could be something better. Why does it need to be black and white? --Niabot (talk) 12:04, 8 May 2011 (UTC)
 * Now Strong Oppose. The image isn't physically correct. Bright areas should show far more spots in the first place. Film has a logarithmic response, meaning there should be many more hits/photons inside bright areas than this image suggests. It also has to be noted that the real result would have many more photons hitting the actual film, meaning a matched mean exposure should be used for every step. This image mixes two effects, longer exposure and dots, and the result is actually wrong. It should be comparable to that of a physical renderer like LuxRender, which it isn't. An animated example of how it should look: File:Luxrender-Reihe-animiert.png (this is an APNG and may not animate in all browsers) --Niabot (talk) 13:35, 8 May 2011 (UTC)
 * Can anyone else who knows about this stuff confirm that it's wrong? Aaadddaaammm (talk) 20:44, 12 May 2011 (UTC)
 * Just look at the first image. It has bright spots. That would imply that the steps are normalized (first frame: extremely high ISO + very short exposure; last frame: low ISO + long exposure). But the mean brightness of the images differs, which does not illustrate the real effect well. All frames should have the same median brightness if the image is meant to illustrate noise reduction through longer exposure. Unbiased renderers, which simulate single light rays/photons for physically correct rendering, use the same method and are able to keep the brightness very stable while reducing the noise over time. --Niabot (talk) 21:07, 12 May 2011 (UTC)
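The stable-brightness behaviour described above can be sketched numerically. A minimal NumPy simulation (a sketch under illustrative assumptions; the frame size, seed, and photon means are not taken from the nominated image): dividing each simulated exposure by its expected photon count keeps the mean brightness of every frame near 1.0, while the noise falls as 1/sqrt(mean photons).

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_exposure(mean_photons, shape=(128, 128), rng=rng):
    """Simulate one exposure of a uniformly lit patch and divide by the
    expected photon count, so every frame has the same mean brightness
    (~1.0) regardless of exposure length."""
    return rng.poisson(mean_photons, size=shape) / mean_photons

for mean in (1, 10, 1000):
    frame = normalized_exposure(mean)
    # Brightness stays ~1.0; relative noise shrinks as 1/sqrt(mean).
    print(f"mean photons {mean}: brightness {frame.mean():.2f}, "
          f"noise {frame.std():.3f}")
```

This is the normalization an unbiased renderer effectively applies between progressive refinement passes: the estimate of each pixel's brightness is unbiased at every step, and only its variance decreases.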
 * Weak Support. I believe this image accurately illustrates the problem of shot noise. In the top-left image, each pixel is receiving 0.001 photons on average. If it were a 1 Mpix image, there would be ~1000 pixels with one photon and maybe a couple with two (these are the lottery winners :). It appears each image has its contrast adjusted so the pixel with the most photons is pure white; it's common to take this approach to try to discern the content of an image at the shot-noise limit. The image in the bottom right is receiving an average of 100,000 photons per pixel, so each pixel will collect between 0 and ~200,000 photons. Again, the pixel with the maximum photon count is set to pure white. My problem is that the shot-noise issue is usually encountered in science, for instance in astronomy, where the light levels entering a telescope from a faraway galaxy are so low that it's difficult to discern an image at reasonable exposure times. Why not use an image of a galaxy as the source? Nobody has shot-noise problems taking pictures of the beach; you just set your camera to something like 1/100 s and click: perfect image... Gabeguss (talk) 00:07, 13 May 2011 (UTC)

--Makeemlighter (talk) 02:23, 18 May 2011 (UTC)