Talk:Shadow mapping

praetor_alpha 17:33, 3 October 2005 (UTC)

Untitled
This statement is completely incorrect, so I removed it: "The rough edges of shadow maps are sometimes referred to as "shadow acne"." Shadow acne occurs when the depth bias is too small relative to the shadow map's depth precision, and a surface begins to self-shadow. It usually creates dark bands where there should be none. Pixelated shadow edges have nothing to do with "shadow acne".

-Josh
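Josh's point about self-shadowing can be illustrated with a small numeric sketch (the depth values, precision, and bias below are illustrative, not taken from any particular renderer):

```python
def quantize(depth, bits=8):
    """Simulate the shadow map's limited depth precision."""
    levels = (1 << bits) - 1
    return round(depth * levels) / levels

def in_shadow(fragment_depth, stored_depth, bias=0.0):
    """Standard shadow-map test: shadowed if the fragment is farther
    from the light than the depth the map recorded."""
    return fragment_depth - bias > stored_depth

# The same surface point, seen from the light: its true depth vs. what an
# 8-bit map stored.  Quantization can round the stored value *down*, so the
# unbiased test wrongly reports the surface as shadowing itself ("acne").
true_depth = 0.7302
stored = quantize(true_depth)                 # slightly below true_depth
acne = in_shadow(true_depth, stored)          # True: false self-shadowing
fixed = in_shadow(true_depth, stored, bias=1.0 / 255)  # False: bias fixes it
```

With no bias the surface fails its own depth test wherever quantization rounds the stored depth down; a bias of one quantization step suppresses it here, though too large a bias causes the opposite artifact (shadows detaching from their casters).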

I disagree about the merging with Shadow Maps for two reasons:


 * Shadow maps are static, while shadow mapping is dynamic. Shadow maps are loaded once and usually never again, while shadow mapping is constantly updating.


 * Shadow mapping involves comparing depth values, rather than just putting a brighter/dimmer shade on a polygon.


 * Oh, I see what you are trying to say; you mean light maps? Those are precomputed maps, but shadow mapping is dynamic and usually runs on the GPU.

I agree about merging with Shadow Maps since they are both the same, or even simply linking Shadow Maps to this page. It's just unfortunate that someone wrote a (small and inaccurate) article about static shadow mapping (or light mapping) and called it 'Shadow Maps', and someone else wrote a larger and more detailed one about dynamic shadow mapping and called it 'Shadow Mapping'. The underlying theory is always the same: working out shadows and storing them in some texture map, then recalling that information to reconstruct shadows. Whether you use depth maps or color maps, and do it dynamically or statically, it all falls under the loose description of shadow mapping in my opinion.

Jheriko 22:08, 22 January 2006 (UTC)

Shadow maps and shadow mapping are the same thing. There may at one point have been a static implementation of shadow maps, back in the '80s, but the term now means dynamic depth-texture-based shadows, usually calculated on the GPU, though I am sure Pixar has a CPU approach. -Josh

Wiki Education Foundation-supported course assignment
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): VjiaoBlack.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 09:08, 17 January 2022 (UTC)

More information needed to adhere to quality standards
Shadow mapping involves finding the world coordinates of a point, projecting it back into light space, and then performing a depth comparison. These steps, along with any mention of vertex/pixel shaders (which are extremely crucial, since current hardware allows everything to be performed in real time), are missing from this article. Until one of us updates it to reflect this, I am adding a stub/cleanup tag to this page. -- dormant25, Feb 8, 2006 Moreover, the information given is neither "highly technical" nor at an "amateur level" - it is more of a passing mention...


 * I noticed that too. The whole "pass two" step contains basically no information. "Some magic happens" would be almost as good. I'll see if I can learn what is going on there from the cited NVIDIA articles and incorporate it into the article. - Rainwarrior 17:45, 26 July 2006 (UTC)


 * Ahh, okay. The shadow map is a collection of depth values which will be used for a visibility test. When drawing a fragment, project that fragment's location into the light's space, and it will have a depth that can be compared against the shadow map. If it's further away than the depth in the shadow map, the fragment is in shadow. The actual matrix transformation will be something like a translation and rotation from the light to the eye (maybe a scaling as well; the light-view frustum doesn't have to be the same size), and the trickiest part is turning the projected coordinates into a depth value that you can compare against the shadow map (I haven't quite figured that part out yet). - Rainwarrior 18:25, 26 July 2006 (UTC)
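The test Rainwarrior describes can be sketched in a few lines. The orthographic light matrix, the tiny 2x2 depth map, and all values below are illustrative assumptions, not the article's actual implementation:

```python
def mat_vec(m, v):
    """Row-major 4x4 matrix times a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Orthographic view-projection for a directional light sitting on the z = 0
# plane looking down -z, with near = 0 and far = 2: clip z = -z_world - 1.
light_vp = [
    [1.0, 0.0,  0.0,  0.0],
    [0.0, 1.0,  0.0,  0.0],
    [0.0, 0.0, -1.0, -1.0],
    [0.0, 0.0,  0.0,  1.0],
]

SIZE = 2  # a tiny 2x2 shadow map, for illustration only
# First pass (stand-in): every texel holds the depth of an occluder at
# world z = -0.5, which remaps to depth 0.25 below.
shadow_map = [[0.25] * SIZE for _ in range(SIZE)]

def shadowed(world_pos):
    """Second-pass test: is this world-space point in shadow?"""
    x, y, z, w = mat_vec(light_vp, [*world_pos, 1.0])
    # Remap clip coordinates from [-1, 1] to [0, 1] (the scale-bias step).
    u, v, depth = (x / w + 1) / 2, (y / w + 1) / 2, (z / w + 1) / 2
    texel = shadow_map[min(int(v * SIZE), SIZE - 1)][min(int(u * SIZE), SIZE - 1)]
    # Shadowed if the fragment is farther from the light than the occluder.
    return depth > texel

# A point at world z = -1 lies behind the occluder at z = -0.5: shadowed.
# A point at world z = -0.25 lies in front of it: lit.
behind = shadowed((0.25, 0.25, -1.0))   # True
in_front = shadowed((0.25, 0.25, -0.25))  # False
```

A real implementation would use the composed light view and projection matrices in place of `light_vp`, and would add a depth bias to the comparison to avoid acne; both are omitted here to keep the sketch minimal.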


 * I rewrote that section of the article, and cleaned up the rest as best I could. How is it now? - Rainwarrior 19:30, 30 July 2006 (UTC)

Edits by Praetor Alpha, 20:32, 13 August 2006
I have several comments about this edit.


 * 1) I think "cross multiplied" is the wrong term (isn't cross multiplication a technique for dealing with fractions in equations?). It is simply the ordinary matrix product of these matrices.
 * 2) It must also be clarified what $$modelview_{light}$$ is. There is currently no explanation. The scale-bias matrix is also completely unexplained. It maps -1 to 1 onto 0 to 1, but it really isn't a part of the "shadow mapping" technique per se. It's just necessary in many hardware implementations. I think this extra matrix should be left out of the equation, and instead mentioned in a footnote that explains its purpose.
 * 3) I don't think it's really necessary to speak of "three passes" in this article. Whether to use two or three passes is really up to the person implementing it. (3 passes aren't inherent in the technique, nor are they the most efficient or common way to do this these days.) We should describe the process in terms of what needs to be done, rather than how to do a particular implementation of it. It deserves mention that there are two and three pass techniques, but probably after the description of the basic principles.
 * 4) Not all lighting models "should" have an ambient term. There are many types of environments in which zero ambient lighting is very appropriate (space simulations, for instance), and there are global illumination techniques which very much should not have an ambient term.
 * 5) I think that caption which currently reads "Pass two, depth map transformed onto the scene" should actually read "Visualization of the depth map projected onto the scene." It is merely a visualization of the depth map, and not really part of the rendering, which is why I chose to word it the way I did. As above, I also think we should remove the references to passes from the images as well to match the suggestion above.

- Rainwarrior 21:58, 13 August 2006 (UTC)
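For reference, the scale-bias matrix mentioned in point 2 is the standard construction that remaps each clip-space coordinate from [-1, 1] to [0, 1] (sketched here from general OpenGL convention, not from the article's own equation):

$$B = \begin{pmatrix} \tfrac{1}{2} & 0 & 0 & \tfrac{1}{2} \\ 0 & \tfrac{1}{2} & 0 & \tfrac{1}{2} \\ 0 & 0 & \tfrac{1}{2} & \tfrac{1}{2} \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad B\,(x,\, y,\, z,\, 1)^T = \left(\tfrac{x+1}{2},\ \tfrac{y+1}{2},\ \tfrac{z+1}{2},\ 1\right)^T.$$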


 * Looking back on material, you're right. I must have been mixed up with vertex normals at the time... or something...
 * I'm using OpenGL terminology. That scale-bias matrix is required in the calculations; it isn't automatically done by any implementation, at least any of importance, to my knowledge.
 * I was trying to keep this article simple when I wrote it, as shadow mapping is a very complex idea to get a grasp of (or at least, it was for me). When any computer graphics concept is introduced, it always starts out as a simple implementation, then goes on to say that there are better, more common, or more efficient ways of doing it, and I think that a three-pass approach is the simplest. I would keep it, even if only to emphasize the importance of ambient light in a scene.
 * Even in space, there is still light bouncing off of stuff, even in the shadows. The only place I have been that is completely dark is far into a cave.  I will reword it to something like "all good lighting models should have...".
 * It said "shadow map" which is incorrect, as it is the depth map on the scene.
 * Anyways, it's now much better than (as you said) "some magic happens", as it was before.
 * praetor_alpha 00:21, 14 August 2006 (UTC)


 * Yes, the scale bias would usually be required in OpenGL or DirectX hardware implementations. It's not that I am suggesting that any implementation does it automatically, it's that it's something that is more or less specific to those types of implementations. A software rendering shadow mapper, or a procedurally generated shadow map would not have this need.


 * In space, there are very few things to bounce light off of that take up enough of a solid angle, viewed from the surface of an object, to contribute appreciable "ambient" light. Everything's just too far apart. Take a look at this image of Jupiter taken by Cassini. I think it is wrong to say that the ambient term has anything to do with how "good" a lighting model is (as well, it is usually the least expensive component, computationally, so it's not as if omitting it gets you a cheaper model). Now, if one object is close to another in space, there are cross-reflections, but these will only apply to the surfaces of those objects that face one another. Local lighting models just aren't "good" at this, and a global ambient light is a very poor approximation in this kind of situation.


 * I have some suggestions as to the structure of the page, maybe I'll create it in my userspace first so you can see what I have in mind. I like the material you have added, as it helps with the specific type of implementation that is common, but I feel like there is a mixture of two types of information, one about the practice of shadow mapping, and one about this particular implementation, and I would like to separate them. - Rainwarrior 01:09, 14 August 2006 (UTC)


 * If an implementation had a black spot for the shadow where there logically should be light (in an environment like on Earth), it would definitely be a bad implementation. So a "better" one would have the same area lit up a bit. Besides, I worded it so that it isn't a requirement for a good one; then you can go on about the space/cave exception. I wrote this article based on my experience programming graphics, using bits and pieces of tutorials and PowerPoint presentations on the net. All of the basic ones (which means about 90%) were three-pass solutions, which the tutorials said were the simplest. Then they said that there are several (complicated) ways to combine the last two passes. Then I made my program use the shadow_ambient extension, so I could focus on other things. I, like the tutorial authors, prefer not to dive too much into the various ways (shadow_ambient, nVidia's register combiners, shaders, soft shadows, etc...). And frankly, I don't have the energy, or the time (I'm going to college in two weeks), to learn or describe them here, especially without going into more 'some magic happens' scenarios. I'm glad you're into this! praetor_alpha 02:59, 14 August 2006 (UTC)

I have rewritten the section as I proposed above. All of your information remains, I think, but I have tried to make it clear which parts belong to specific implementations (i.e. through hardware, or OpenGL, etc.), and which are an integral part of the shadow mapping process.

I did make one omission though: I removed your image of the scene rendered with black shadows. It makes more sense to render the shadowed scene first, and then draw lights on top of it (the reverse makes lighting impossible). A discussion of the difference between lighting with a global ambient and without, I think, belongs over at Phong reflection model, which I've linked to from the appropriate place. Also, generally with shadows, a little bit of diffuse light is included as well. If you think the two passes need to be visualized, I would recommend you change the first image (the one I removed) to be of the scene completely in shadow instead (to draw the shadowed scene first, then light over top of it). - Rainwarrior 17:54, 14 August 2006 (UTC)


 * Looks very good. I like it. praetor_alpha 13:05, 15 August 2006 (UTC)

NVIDIA link broken
http://developer.nvidia.com/attach/6769, the second external link, is broken at the time I write this - August 8th, 2011. Anyone have a substitute link? --DangerOnTheRanger (talk) 22:11, 8 August 2011 (UTC)

OT? - Disambig with "shadow buffer" CS term
A maybe-odd-seeming question here. Is there a "mainstream Unix" object known as a "shadow buffer" (or is that another name for a shadow table)? If so, I think a disambig on this page should point to it - although not a full disambig page, as the graphics term 'shadow buffer' is most certainly the more used meaning. I've run into the term in the Apple Newton ROM (CShadowBufferSegment, along with CBufferSegment) - so it is altogether possible that it is NOT a standard UNIX term (the NOS implementation of the stdc libs is strange and certainly not complete). But I have no idea whether there is a UNIX term 'shadow buffer' at all - which is how I ended up here... Jimw338 (talk) 17:49, 3 July 2012 (UTC)