Talk:High-dynamic-range rendering

Cards that support HDR
S3's GT/GTX series doesn't seem to be on there. I'm aware that their cards aren't exactly "powerful", but they do indeed support Shader Model 4.1 and HDR.

Discussion Cleanup?
Would anyone object to a discussion cleanup? I think it would be far better to mirror the contents of the actual article and follow that method. This is getting pretty messy! ~Xytram~ 19:33, 8 August 2007 (UTC)

HDR + AA
"However, due to the stressing nature of HDRR, it's not recommended that full screen anti-aliasing (FSAA) be enabled while HDRR is enabled at the same time on current hardware, as this will cause a severe performance drop. Many games that support HDRR will only allow FSAA or HDRR. However some games which use a simpler HDR rendering model will allow FSAA at the same time."

I removed that paragraph, because it was a mistake: 1) many games will tell you that the two are not compatible; 2) even if you force it with your graphics card driver, it will not do any anti-aliasing.


 * As per the above reply, I recently ran The Elder Scrolls IV: Oblivion on my GeForce 7800GT. I forced 8xAA using the nVidia driver software and enabled HDR. I couldn't really see any significant performance difference, nor did I see any anti-aliasing. Although to include anything on the page would require valid benchmarks. I find it quite funny that, in Oblivion, if you try to enable AA it will warn you and force 'Bloom'. Maybe we could research this to see which cards can successfully do both? But I couldn't find one, yet I deleted this like a moron. ~Xytram~ 14:48, 20 November 2006 (UTC)


 * The GeForce 7800GT cannot support MSAA and HDR at the same time, that's why you didn't notice any performance differences. When you forced 8xMSAA, you essentially did nothing in game, because HDR was still active.


 * The only Nvidia cards that can do both at the same time are the new 8 series cards. I don't remember which exactly for ATI. —The preceding unsigned comment was added by Special:Contributions/ (talk)


 * Unless I'm misunderstanding you, this is not true; it depends on how the HDR is implemented. Valve's implementation can handle both HDR and AA at once on a 7-series.  I know this because I just tried it in Lost Coast on my 7900GS, and it certainly rendered in HDR and it was certainly antialiased.  I don't think Bethesda's can.  81.86.133.45 21:45, 5 April 2007 (UTC)


 * No this is not a misunderstanding... I own a 7800GT and it is not possible to use HDR & AA at the same time - even forced. ~Xytram~ 19:31, 8 August 2007 (UTC)


 * Such an awful lot of blah blah about an easy thing like this. MSAA runs the pixel shader several times with a slightly jittered position and averages the results (with optional weighting factors) before writing to the render buffer. No more no less. GeForce 8 class hardware as well as the newer Radeon cards can do this automatically if the user turns a switch in the control panel, older cards do not. However, unless you hit the maximum instruction count already (quite unlikely), this can be implemented trivially in the shader on older cards to the exact same result. The total number of shader instructions and texture fetches etc stays the same as if it was "hardware supported". The only real difference is that it doesn't happen automagically. —Preceding unsigned comment added by 91.35.150.102 (talk) 17:14, 18 December 2007 (UTC)
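The jitter-and-average idea described above can be sketched outside of shader code. This is a toy Python illustration (the `shade` function and the 4-tap offset pattern are made up for the example, not any vendor's actual MSAA sample pattern):

```python
# Manual supersampling, as it could be done inside a pixel shader on
# older cards: evaluate the shader at slightly jittered sub-pixel
# positions and average the results before writing to the buffer.

def shade(x, y):
    # Toy stand-in for an arbitrary pixel shader: brightness falls
    # off with distance from the origin.
    return 1.0 / (1.0 + x * x + y * y)

def supersample(x, y, offsets, weights=None):
    """Average the shader over jittered sub-pixel positions,
    with optional weighting factors."""
    if weights is None:
        weights = [1.0 / len(offsets)] * len(offsets)
    return sum(w * shade(x + dx, y + dy)
               for (dx, dy), w in zip(offsets, weights))

# An illustrative 4-tap rotated-grid pattern.
taps = [(-0.25, -0.75), (0.75, -0.25), (0.25, 0.75), (-0.75, 0.25)]
aa = supersample(10.0, 10.0, taps)
```

As the comment above notes, the total shader work is the same as with "hardware supported" MSAA; it just isn't triggered automatically by the driver.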

Allow me to clarify things a bit. What pre-G80 nvidia cards don't support is AntiAliasing a floating point render buffer. Games that use render to texture with an FP texture for HDR (farcry, among others) can't AA HDR content. Valve's implementation performs the tone-mapping step in the pixel shader, so they use a path based on standard integer render targets coupled with a histogram for overall scene luminosity detection. That way AA works OK. Starfox (talk) 20:42, 27 March 2008 (UTC)

Floating point
"Many monitors ... do not support floating point contrast values."

Of course not. Because they are analog devices. (rolleyes) &mdash;the preceding unsigned comment is by 195.210.242.74 (talk &bull; contribs) 13:38, 31 October 2005 (UTC)


 * Except, of course, for the HDMI or DVI ones that have digital inputs. Which support digital integer values, but not digital floating point values. --Nantonos 22:09, 14 October 2006 (UTC)

List of games that support HDR
Shouldn't this section only include games that have already been released? &mdash;the preceding unsigned comment is by Whursey (talk &bull; contribs) 20:12, 28 December 2005 (UTC)


 * I don't see why, as long as the games are indicated as unreleased. &mdash;Simetrical (talk • contribs) 05:56, 29 December 2005 (UTC)

Discerning HDR with "illusions" of HDR
Some older games that do not specifically use DirectX 9.0c features should not really be counted as "HDR" games. I think a common mistake is that some people think advanced blooming techniques (which may make the surface of an object brighter) count as HDR, but blooming is only a part of HDR.

I'll admit, I can't even figure out if a game is using HDR or not.

The simulated HDR effects used in certain console games may share many characteristics with 'true' HDR rendering, beyond mere light bleed ('bloom') effects. For example, see Spartan: Total Warrior and Shadow of the Colossus on PS2.


 * For me, as a photographer, I think I've never seen HDR in games. For me, HDR is when I expose for the inside of a church while light is coming through open windows, and I can still see the blue sky outside (taking around 6 pictures and mapping them so that both the inside AND the outside are exposed correctly).
 * Not the over-exposed, bright-white windows we normally get without applying HDR.
 * And that is what I see in games: big bloom around windows, which to me looks exactly like the normal/low-dynamic-range pictures we get when the camera exposes for the interior.
 * I don't think it matters if you manipulate the picture in 32 bits if you're then still letting the over-exposure get the best of you... 221.186.144.238 05:37, 12 November 2007 (UTC)


 * Blooming really has nothing to do with HDR; it is a non-correct, but good-enough-looking approximation of the convolution effect of real cameras (and of the eye, which is an aperture device, too). Bloom should only be visible on very strongly lit areas, hence the connection with HDR. Unfortunately, a lot of game designers have such a hype about bloom that they put in a much too strong effect, causing the scene to look like a cheap 1970s soft porn movie.

What HDR is about (as the previous poster already said) is having a much larger difference between bright and dark image areas than 8 bits can offer. Because hardly any output device can reproduce the range that our eye can distinguish (or even the range that appears in nature), we have to use tone mapping to compress this range, effectively resulting in a LDR image. Black can't be more black and white can't be more white. However, it still matters that we process the information in high dynamic range, since we can dynamically modify the exposure depending on what we look at (and the overall brightness), much like the human eye does, too. This results in a much more natural look and feel: if you look at a bright window in a moderately lit room, the previously dark interior of the room apparently fades to black, and if you turn your head, the previously black room reveals detail which apparently was not there before, as the eye adapts to the situation. —Preceding unsigned comment added by 91.35.150.102 (talk) 17:37, 18 December 2007 (UTC)
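The adaptation behaviour described above (exposure shifting with what you look at, then tone mapping back down to a displayable range) can be sketched numerically. This is a minimal illustration using a simple Reinhard-style curve; the constants and function names are assumptions for the example, not any engine's actual code:

```python
# Dynamic exposure + tone mapping: compress HDR luminance to [0, 1)
# while the exposure slowly tracks the scene's average brightness,
# mimicking the eye's adaptation.

def tone_map(lum, exposure):
    """Reinhard-style compression of an HDR luminance into [0, 1)."""
    v = lum * exposure
    return v / (1.0 + v)

def adapt(exposure, avg_scene_lum, key=0.18, rate=0.1):
    """Move exposure toward the value that maps the scene average
    to a mid-grey 'key'; rate controls how gradual adaptation is."""
    target = key / max(avg_scene_lum, 1e-6)
    return exposure + (target - exposure) * rate

# Looking at a bright window: exposure drops, the dim room fades to black.
exposure = 1.0
for _ in range(100):
    exposure = adapt(exposure, avg_scene_lum=10.0)
dark_detail = tone_map(0.05, exposure)        # nearly black
# Turn away to the dim room: exposure rises, detail reappears.
for _ in range(100):
    exposure = adapt(exposure, avg_scene_lum=0.05)
dark_detail_after = tone_map(0.05, exposure)  # visibly brighter
```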


 * Blooming is very important to HDR; without it, the only other perceptible effect is to lose colour banding from lack of precision. Since it is how the higher luminance values (e.g. 2.0) are represented differently from the "old" range (e.g. 1.0), it's a key part of HDRR, not just an unrelated post-processing effect (although when done badly it often is). Unbloomed HDR images look just like LDR to the uninitiated, i.e. computer game players... this might have something to do with all the (overly) heavy blooms that get used. --Jheriko (talk) 07:54, 19 June 2009 (UTC)

SM2.0b
It seems unlikely to me that SM2.0b increased precision from 8 bit integers to 16 bit integers as even as far back as ATi's 9x00 line of cards they had 24 bit floating point support according to this link. Does anybody have a better reference? Qutezuce 19:46, 9 January 2006 (UTC)

I think I had Shader Model confused with the Pixel Shader version; I apologize for keeping those two in the same line when they are clearly not (it should be Pixel Shader 2.0b, which is still a component of Shader Model 2.0). I'll put up the list of hardware again, and make this correction. But it's uncertain whether or not Pixel Shader 2.0b can be supported by any hardware supporting Shader Model 2.0, unless developers deliberately made SM 2.0 not take full advantage of the video card. XenoL-Type 14:22, 11 January 2006 (UTC)


 * My comment was in regards to this sentence "Shader Model 2.0b allowed for 16-bit integer based lighting percision" in the History section, not the section about video card support. Qutezuce 22:26, 11 January 2006 (UTC)


 * The article was referring to videocards in their approach to SM2.0. I will fix it though, and ignore that whole pixel shader thing because that'll get confusing.

After reading your latest edit I think you have misunderstood the point I'm trying to make. The paragraph (as it stands now) implies that SM2.0 had only 8 bit integer precision, and SM2.0b upgraded that to allow 16 bit integers. However I doubt this claim because this link states that ATi's 9700 hardware already had 24 bit floating point support. So if their hardware had 24 bit floating point support why would they only allow 16 bit integers? So I am asking you to provide a source or a reference for your claim that SM2.0b upgraded from 8 bit integers to 16 bit integers. Qutezuce 23:20, 11 January 2006 (UTC)


 * Implications, and misunderstandings. But before I begin: hardware capabilities are different from software capabilities. For example, just because a CPU has Intel Hyper-Threading Technology doesn't mean you'll get better performance; the software has to take advantage of it. The Radeon 9000 series may have 24-bit FP support, but that doesn't mean it uses it if the software associated with the hardware prevents it from doing so. It's almost like saying that just because the 9000 series supports 24-bit FP, it could support SM3.0. Anyway, I'm tempted to believe that most games today only use a 16-bit lighting precision model based on this  and that's where "16-bit integer based" came from: 16 bits because only 16-bit light precision has been used thus far and the 9000 and Xx00 series were towards the end of their life, integer based because SM2.0 does not support FP-based lighting precision. But the Radeon 9000 only supports software that is Shader Model 2.0 or lower as far as I'm concerned (just as hardware that supports up to DX8 can't do DX9 effects), and unless SM2.0 gets a 2.0c with FP support, the Radeon 9000 and Xx00 series are confined to an integer-based lighting precision model.


 * But the article's changed to disregard that "only 16-bit integer" thing.

XenoL-Type 00:51, 12 January 2006 (UTC)
 * Actually, that's incorrect from what I've seen. By no means does SM2.0 lack support for FP lighting precision -- SM2.0 actually mandates it. If you check the article Qutezuce mentioned, you'll see they mention that cards must support at least 24-bit floating point. This was actually a big thing a while back, since the 9x00's provided only 24-bit and people were complaining about its lack of precision compared to the FX5x00's (slow as mud) 32-bit floating point. PS2.0b primarily added the ability for longer shader programs, and 32-bit floating point.
 * Note that since SM2.0 supports the necessary floating-point precision (24-bit is quite sufficient), it does indeed have the capability to support HDRR, and the article should be revised to indicate this. It should also be mentioned that few pieces of software actually make use of it, though (since you are absolutely correct that hardware which goes unused is essentially useless). JigPu 06:08, 14 January 2006 (UTC)


 * According to MS's site (which again is according to nVidia), in the section Interpolated color format, Shader Model 2.0b has an 8-bit integer minimum. Shader Model 3.0 has a 32-bit floating point minimum. And the upgrade description states that "Higher range and precision color allows high-dynamic range lighting at the vertex level". The hardware may support floating point operations, but again, if the software, as the site says, does not do floating point calculations, then the hardware, regardless, doesn't do floating point calculations. Let me put it this way: the 9000 series (which should really consist only of the 9500 and beyond, since the 9250 and below are DX8) can do high precision coloring, but since it's limited to Shader Model 2.0 effects, it can only do Shader Model 2.0 effects. It can't do 2.0b, which is supported only by the Xx00 series.


 * XenoL-Type 00:51, 12 January 2006 (UTC)


 * Looking at the site you mention, it says "8-bit integer minimum". I think the word minimum is key here. I think the reason it says minimum is simply that that is the minimum you need in order to have SM2.0 support, but you can go above and beyond that. DirectX 9.0 supports the use of cards with floating point support; it only requires 8-bit integer for SM2.0, but that doesn't mean that if the hardware has floating point support, Direct3D blocks the use of FP simply because the card doesn't support all the features of SM3.0. If that weren't the case, then why would they put the word "minimum" in their table? If 8-bit were both the minimum and the maximum, then the word minimum is misleading.
 * Now I will fully admit I'm not an expert at this subject matter, and it appears you aren't either. We should get someone who actually knows what they are talking about to clean up the article. Qutezuce 20:30, 18 January 2006 (UTC)


 * I know what you mean, and I know I'm not an expert, but I do know one thing. The 9000 series was designed specifically for Shader Model 2.0 and below. Since the Shader Model 2.0 API only does calculations in integers, again according to that site, that's all the hardware calculates. If the software is only sending integers, the hardware isn't going to calculate in FP. Or it will, but it'll be something like 20.000000. XenoL-Type 17:20, 18 January 2006 (UTC)


 * The minimum requirement for SM2.0 may only require integers, but that doesn't mean a graphics card which supports SM2.0 can't do FP. SM2.0 doesn't limit what you can do to only integers, it is a minimum set of requirements that hardware has to either meet or exceed to be qualified as SM2.0. I'm saying that cards like ATI's X800 exceed the SM2.0 minimum requirements by allowing 24 bit FP, but don't quite reach the requirements of SM3.0 (because it lacks things like dynamic branching). Think of SM2.0 as a set of minimum requirements to run a game, if your computer setup exceeds those requirements then it's not going to ignore the extra 500MHz or 256MB RAM that is available to it. Qutezuce 01:37, 19 January 2006 (UTC)


 * Actually, we're not concerned about the hardware capabilities of the video card. We're concerned about the software portion that is SM2.0 that allows HDRR; all it says now is that the Radeon 9000 supports HDRR, but in its SM2.0 form. The article has been fixed to clear up any hardware misunderstandings. XenoL-Type 15:40, 19 January 2006 (UTC)


 * What do you mean when you say "the software portion that is SM2.0 that allows HDRR"? Are you refering to Direct3D allowing HDRR? Or are you talking about games that use 24 bit FP? If you mean the former then that is what my last reply was about, Direct3D allowing the use of 24 bit FP above and beyond the minimum requirements of SM2.0. Qutezuce 23:52, 19 January 2006 (UTC)


 * When talking about HDRR, we should only be talking about Shader Model 2.0, since after all HDRR is a shader effect, not an effect caused by the hardware. Leave the 24-bit FP calculation for the R300 topic. Also, I'm still waiting on my theory that Shader Model 1.0 is really DX8.0, and SM2.0 is DX9.0. The thing is, though, nVidia is the forerunner of HDR technology (so to speak). Therefore there could be bias against ATi, but since other publications are saying that SM2.0 in any form is only integer based, then that's the way it will be until someone actually tests it.


 * And most games seem to incorporate 16-bit lighting precision, or rather 64-bit coloring. Using 32-bit right now would eat up a lot of space. XenoL-Type 21:11, 19 January 2006 (UTC)


 * Gaming support and what real world games actually use is a valid issue, but it is not the issue at hand here. The issue of FP support is not just an R300 issue, as the GeForce FX series supported 16 bit FP and 32 bit FP. Where are these other publications that say SM2.0 is integer only? A minimum requirement of 8 bit integer sure, but integer only, you have not shown anything that says that. Finally, your first sentence ("When talking about HDRR, it should only be talking about Shader Model 2.0 since after all HDRR is a shader effect, not an effect caused by the hardware.") does not make any sense to me. Qutezuce 05:30, 20 January 2006 (UTC)

I asked about this. DX9.0 is left out in the blue, but DX9.0a is 16-bit FP, 9.0b is 24-bit FP, and 9.0c is 32-bit FP.

All HDR games, though, use 16-bit to increase efficiency. XenoL-Type 22:33, 20 January 2006 (UTC)


 * You asked who? You seem to be completely missing my point time and time again. I'm saying that although certain minimums may be required to be called SM2.0 or SM3.0 or whatever, that does not limit the use of 16-bit FP, or 24-bit FP, or 32-bit FP in Direct3D if the underlying hardware supports it, regardless of the SM version. Qutezuce 06:55, 21 January 2006 (UTC)

the ATI FireGL v5250?
The v5000 and v5100 cards are included on the table, but the v5250 is not. Is this an oversight, or does the card not support HDRR? I've done a bit of research, but I can't seem to find out. Thatcrazycommie 15:20, 5 June 2007 (UTC)

HDR Support
Radeon X300 isn't the oldest card that supports HDR. I just played Lost Coast with a Radeon 9600 and it showed up (beautifully, I might add). &mdash; Ilyanep (Talk) 17:57, 10 January 2006 (UTC)
 * I'm pretty sure it's not just advanced blooming either. &mdash; Ilyanep (Talk) 18:00, 10 January 2006 (UTC)

I'll put up the list of all the cards supporting Shader Model 2.0, but can you make a quick confirmation that in the "Advanced" settings of the Video options it says "Full HDR" somewhere? Thanks. - XenoL-Type 14:24, 11 January 2006 (UTC)

Wasn't the Matrox Parhelia 512 the first to support HDR?

I think I can make sense of this: the Parhelia was the first (it was pretty much its only selling point, which they also branded as 40-bit and "Gigacolor"). I believe it was available in April, while several months later, in August, ATI announced what was later called the Radeon 9000 series, released a few months after that; the above-mentioned Radeon 9600 came over a year later, in October 2003. These cards are based on ATI's "Radeon R300 architecture". Speculation: somewhere the names "Radeon X300" and "Radeon R300" got mixed up, though this seems like a moot point since the Parhelia was first, and many original articles still online highlight this feature prominently.

Cards for HDR
Maybe it should be noted that RD3xx and below does not support half-float blending?

On Paul Debevec's site, you can see HDR was possible even on the NV15. It's not that HDR wasn't possible before: it's just much easier with this machinery ready to go (I don't say it: he writes it on his page). 83.176.29.246 11:13, 10 June 2006 (UTC)

Are you sure it wasn't the "Quadro" version? But then again, I believe professional graphics developers at the time used the NV15 as a "poor man's" professional card. I'll buy that it could render scenes in HDRR, though, but not in real time. And find a note about the ATI card; unfortunately we don't need more unsourced information than is needed. XenoL_Type, 18:34, 15 June 2006 (UTC)

GPU Database
I found this website http://www.techpowerup.com/gpudb which includes ATI, Matrox, nVidia and XGI GPU specs. As stated on this website, "all cards with SM2.0 support HDR". The GPUDB pages show each chipset's DirectX, Pixel and Vertex Shader specifications. I'm not sure which needs to be supported (i.e. the Matrox Parhelia supports PS1.3 and VS2.0). Might be useful if someone could have a look so we can get the cards list updated. ~Xytram~ 14:57, 20 November 2006 (UTC)

(Edit: Oops! My mistake, I've just noticed it's included in the article External Links. Seems I've been beaten to it.) ~Xytram~ 15:15, 20 November 2006 (UTC)

DX10 HDR Cards
The new ATI DX10/SM4 cards are about to be released, so I've added them to the list of SM 4.0 cards with HDR. Those are: the HD 2900 series (XTX and XT), HD 2600 series (XT and Pro) and HD 2400 series (no info). —The preceding unsigned comment was added by 83.116.90.70 (talk) 17:57, 30 April 2007 (UTC).

Shader Model history
I did some thinking and this crossed my mind, and it probably should've been obvious.

I believe that DirectX 8 and beyond is another name for Shader Model. If OpenGL correlates to the Shader Model we hear of, then that breaks the argument. Anyway, probably when DX8.1 came out, we got Shader Model 1.1; for example, Deus Ex: Invisible War requires DX8.1 but doesn't need 9.0 (even though it comes with it). DirectX 9.0 saw the release of Shader Model 2.0. I've heard many reports that the Radeon 9700/9800 series (which includes the 9500 and 9600) literally tore through the GeForce 5 series on all Shader Model 2.0 benchmarks in 3DMark03, but I wouldn't know, and I'm willing to bet that Source treating the GeForce 5 series as a DX8.1 card is a behind-the-scenes deal between ATi and Valve.

Anyway, when DirectX 9.0b came out, that was probably when Shader Model 2.0b came out, seeing how there's a relation to the versions. (If DirectX 8 is Shader Model 1.0, DirectX 8.1 is Shader Model 1.1, and DirectX 9 is Shader Model 2.0).

Of course, the pattern is broken when DirectX 9.0c is Shader Model 3.0. But I hear DirectX 10 will only add a geometry shader, not a new Shader Model. XenoL-Type 22:11, 17 January 2006 (UTC)

Hold on here... this article gives the impression that the ATI Radeon Xx00 cards (the ones which support SM2.0b) do not support HDRR. Why is that, when some of the older ATI models do? I've got an X700... will it work???

If I create 256 MB of virtual memory on my hard disk, along with my 256 MB of RAM, will it improve my gaming experience???


Age of Empires 3
Does Age of Empires 3 use the HDRR effect?

Just games?
My god, this whole article reads like it was written by a bunch of teenage gamers showing off their flashy new graphics cards. You do know that HDRI was pioneered in 1985 by Greg Ward? Probably before most of the authors of this page were born. I've tried to clear up some of the explanations in this article, but there's a lot more to be done. Although there's already an HDRI article, so perhaps this article could be renamed to make it clear that it's just about real-time rendering done with fragment shaders on modern GPUs. Imroy 12:02, 31 January 2006 (UTC)


 * I agree, this article seriously needs to be rewritten. Qutezuce 20:11, 31 January 2006 (UTC)


 * I don't see the need to rename the article; the use of "rendering" in the title should be sufficient in describing that HDRR doesn't involve photography or other non-electronic image processing. May I also add that HDRR should cover any form of computer graphics application besides those from computer and video games, and little is covered about it here at the moment. In addition, I see way too many game screenshots; some of them have to go. ╫ ２５ ◀RingADing▶  17:29, 2 February 2006 (UTC) ╫

Apparently HDRR, as people mention, refers to the real-time rendering of 3D scenes in a high dynamic range. As far as anything else goes for real-time renders, games are the only viable application. Game engines provide the best way to show off something that's done in real time, because settings in games can be turned on or off on the fly, making it easy to grab comparison screenshots. Even if, say, Pixar made Finding Nemo or The Incredibles in HDRR, you probably couldn't obtain an LDR version for comparison.

The only other program that's not related to gaming is that HDRR demo in the links.

XenoL-Type 18:42, 1 May 2006 (UTC -9)

Cleanup February 2006
I tagged the graphics cards section for cleanup because I think there's a preference in WP:MOS for not using color codes. --Christopherlin 07:24, 9 February 2006 (UTC)

Separating the article further
The article looks a bit jumbled, with a lot of main points. To organize it better, I thought we could separate it into HDRR in general (such as history, etc.), technical details (what it does), and applications (so far HDRR is primarily in gaming, but if you can find some other application like CAD or computer-generated movies, that'd be great).

got it wrong?
"This means the brightest white ( a value of 255 ) is only 256 times darker than the darkest black ( a value of 0 )."

I'm tired so I don't dare editing, but, shouldn't it rather be : This means the darkest black ( a value of 0 ) is only 256 times darker than the brightest white ( a value of 255 )?


 * Yes, I fixed it. Thanks. Qutezuce 01:44, 26 April 2006 (UTC)


 * So you resolved the divide by zero problem, eh? Last time I checked black is infinitely darker than any other colour value... --Jheriko (talk) 07:55, 19 June 2009 (UTC)

Other programs that use HDR.
This is a placeholder for a comment that I withdrew (redundant information). Remove if required. --CCFreak2K 10:53, 9 July 2006 (UTC)

Contrast ratio confusion
In the section High_dynamic_range_rendering the article currently seems to confuse monitor contrast ratio with dynamic range ratio (with only the latter being associated with what is discussed in this article). A display/monitor/beamer can have a contrast ratio as high as 10000:1 while still having a poor dynamic range ratio (think of a display that can show a very dark black and a very bright white but only comparatively few grey steps in between, as an example of a display with a good monitor contrast ratio and a bad dynamic range ratio).

I would favor a clear separation of those two terms so there is no confusion. --Abdull 18:01, 11 August 2006 (UTC)

Precursor to HDR?
In January 2002 jitspoe released an "automatic brightness" Quake2 engine which employs an early method of HDR: the screen's brightness adjusts according to the average brightness of the pixels on the screen. Could this possibly be included? download (on FilePlanet, unfortunately) CheapAlert 21:54, 6 September 2006 (UTC)

Accurate reflection of light
I've noticed that this paragraph is being changed. I think it could be better worded overall, since it's quite confusing. The first paragraph starts "Without HDRR..." and partway through it changes to HDRR 'enabled'. Then the second paragraph starts "...rendered with HDR".

I'll reword it here, so people can make their own changes/comments before we change the articles version. Anything in bold below I've modified from the original wording.

--- Without HDRR, the sun and most lights are clipped to 100% (1.0 in the framebuffer). When this light is reflected the result must then be less than or equal to 1, since the reflected value is calculated by multiplying the original value by the surface reflectiveness, usually in the range 0 to 1. This gives the impression that the scene is dull or bland.

However, using HDRR, the light produced by the sun and other lights can be represented with appropriately high values, exceeding the 1.0 clamping limit in the frame buffer, with the sun possibly being stored as high as 60000. When the light from them is reflected it will remain relatively high (even for very poor reflectors), which will be clipped to white or properly tonemapped when rendered. Also, the detail on both the monitor and the poster would be preserved, without placing bias on brightening or darkening the scene.

An example of the differences between HDR & LDR Rendering can be seen in the above example, specifically the sand and water reflections, on Valve's Half-Life 2: Lost Coast which uses their latest game engine "Valve Source engine". ~Xytram~ 13:13, 24 November 2006 (UTC) ---
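The clamping arithmetic in the proposed wording can be sketched in a few lines. This is only a numerical illustration of the reasoning (the function names are made up; the sun value of 60000 follows the example in the text):

```python
# Without HDRR, the framebuffer clips incoming light at 1.0 BEFORE
# reflection, so a reflected value can never exceed the surface's
# reflectivity (0 to 1). With HDRR, the unclamped value survives
# until clipping/tone mapping at display time.

def reflect_ldr(light, reflectivity):
    return min(light, 1.0) * reflectivity   # clip first, then reflect

def reflect_hdr(light, reflectivity):
    return light * reflectivity             # keep the full range

sun = 60000.0
poor_mirror = 0.001                          # a very poor reflector
ldr = reflect_ldr(sun, poor_mirror)          # 0.001: nearly black, dull scene
hdr = reflect_hdr(sun, poor_mirror)          # 60.0: still bright, clips to
                                             # white or is tonemapped later
```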

Since no one has made any changes or comments, I'm guessing everyone is happy with my rewording. I'll update the article. Comments are still welcome! ~Xytram~ 12:54, 5 December 2006 (UTC)

HDR game list
I understand why it would have been removed, the list would be getting too large now, though, would it not be worthwhile listing the games that use SM2.0 HDR?

I'm sure that list would be rather small 193.60.167.75 15:29, 23 January 2007 (UTC)

Some of this article sounds like advertising
Specifically, this section:
 * One of the few monitors that can display in true HDR is the BrightSide Technologies HDR monitor, which has a simultaneous contrast ratio of around 200,000:1 at a brightness of 3000 cd/m2, measured on a checkerboard image. In fact this higher contrast is equivalent to an ANSI9 contrast of 60,000:1, or about 60 times higher than that of a TFT screen (about 1000:1). The brightness is 10 times higher than that of most CRTs or TFTs.

It looks like it's aimed at advertising the BrightSide Technologies monitor, without mentioning any other manufacturers that make the monitors. Could someone more knowledgeable about this please fix this? Thanks. Mike Peel 23:09, 2 February 2007 (UTC)

Are there any other monitors that exceed TFT dynamic range? For those that can't be bothered to click the link, BrightSide uses an array of backlights to increase contrast.

I am tempted to delete "3.2 Graphics cards which support HDRR" because that's just advertising, too!

... and all mention of games / game engines that support it, too!

Instead, I've added some useful information back in again! If you can add to or improve what's there, feel free. Deleting it is not constructive.

--195.137.93.171 22:26, 9 September 2007 (UTC)

Specific types of HDR rendering
Anyone know a good place for a good explanation of the various types of HDR (FP16, FP10, Nao32, etc)? Derekloffin 20:57, 20 February 2007 (UTC)

Please Verify
Quote from the article:

"The development of HDRR into real time rendering mostly came from Microsoft's DirectX API."

I reckon that I remember ID Software publications (smaller articles on the net) stating that they require graphics card vendors to include 32bit resolution colour handling in order to actually proceed with further development of hdr on a hardware based scheme. Could this please be verified in order to get it straight? And also I believe that it was well before DirectX actually began supporting HDR.

AFAIK they do it purely in software these days (Quake 3 engine) without supporting hardware or software APIs (i.e. DirectX 9.0?) being available.

Besides that, ID Software, or some of its main representatives, is also authoritative, among others, in that they actually drive the development of existing graphics card technologies in both hardware and software.

Besides that also, HDR and other similar effects are all due to appropriate shader programming and by that post-processing of the currently rendered scene. Therefore, DirectX should be seen as only [the first] API incorporating the possibility to actually do HDR rendering.


 * Actually HDR has almost always been possible on hardware (e.g. Voodoo cards etc.) for small scenes and tests through clever hacks; it's just not been practical for production. Just like most other shader effects... shaders just make it a ton easier to do. --Jheriko (talk) 07:58, 19 June 2009 (UTC)

DirectX bias
This article seems quite biased towards DirectX. It is obvious that most HDR rendering engines are written for DirectX at the moment, but this does not mean that it's not possible in, for example, OpenGL.

Maybe it would be worth adding a new section Development of HDRR through OpenGL or words to that effect to detail the history of HDRR and non-DirectX paths? ~Xytram~ 19:22, 8 August 2007 (UTC)

I'll second this thought. OpenGL is fully capable of HDRR, and can deliver equivalent or superior quality to DirectX. This is best exemplified by id Software's work with idTech 5. Other engines/libraries to consider include Cube 2, OGRE 3D and Irrlicht. 156.34.153.134 (talk) 01:37, 16 January 2012 (UTC)

Disambiguation needed on topics and techniques.
This article seems to jumble together different things:

- Rendering using Image-Based Lighting, as shown in Debevec's work and widely used as a lighting technique in 3D rendering. Image-Based Lighting is most realistically done using HDR images. In this use, the HDR image data is an input file to the 3D rendering process, one of the source texture maps.

- High Dynamic Range output from a 3D rendering process, which can involve a renderer outputting floating-point data into a file format such as (most popularly) OpenEXR. This allows realistic blurs, blooms, and optical effects to be simulated during compositing, allows more latitude for lighting adjustments during compositing, and makes tone mapping possible when the image is finally converted into a viewable dynamic range. The article focuses on a feature of some game engines that compute or store HDRI illumination data so that camera exposure changes and optical effects can be simulated in games. This should really just be a sub-section of the overall discussion of High Dynamic Range rendering.

(If these things aren't going to be fixed, then maybe the title of the page should be changed to "High Dynamic Range Rendering (in Video Games)" and people interested in high-end graphics should just add to the main entries on High Dynamic Range Imagery and to the existing stub on Image Based Lighting.)

Jeremybirn 14:42, 13 September 2007 (UTC)

Bad example of tone mapping
In the current example given for tone mapping (from DoD: Source), the affected area is very small, and the effect isn’t very pronounced. —Frungi 04:32, 12 October 2007 (UTC)

I agree, it's a bad example. The Source engine is very poor compared to more impressive engines such as Unreal Engine 3.--TyGuy92 (talk) 21:30, 26 January 2008 (UTC)

Fair use rationale for Image:Dodtmio.jpg
Image:Dodtmio.jpg is being used on this article. I notice the image page specifies that the image is being used under fair use but there is no explanation or rationale as to why its use in this Wikipedia article constitutes fair use. In addition to the boilerplate fair use template, you must also write out on the image description page a specific explanation or rationale for why using this image in each article is consistent with fair use.

Please go to the image description page and edit it to include a fair use rationale. Using one of the templates at Fair use rationale guideline is an easy way to ensure that your image is in compliance with Wikipedia policy, but remember that you must complete the template. Do not simply insert a blank template on an image page.

If there is other fair use media, consider checking that you have specified the fair use rationale on the other images used on this page. Note that any fair use images uploaded after 4 May, 2006, and lacking such an explanation will be deleted one week after they have been uploaded, as described on criteria for speedy deletion. If you have any questions please ask them at the Media copyright questions page. Thank you.

BetacommandBot 13:47, 26 October 2007 (UTC)

HDR saving power
HDR was actually designed to extract more computing power from the CPU (GPU) for games, because, as you see, "HDR" merges higher-intensity lighting and then makes up the difference with some anti-aliasing... thus saving computation power and giving it to better textures, more polygons etc. That's why newer CPUs/GPUs are capable of showing new games well. Thus HDR is 4-6 bits per colour instead of 8 bits; alpha is also reduced to 4-6 bits instead of 8... and on top of all this is put anti-aliasing of 24-bit, 16M+ colours. Colour (and lighting-interaction) calculations are harder to do in more bits — say 8 bits is 256 levels of precision and 4 bits only 16, a difference of sixteen times — while anti-aliasing is no hard task at all, making things only 2-4 times harder, so, say, a 4-times win. Actual computer speed and game looks depend on HDD speed, which depends on HDD size, varying from 10 to 25 MB/s; flash SD cards in particular have about 2-3 MB/s write speed and 4-5 MB/s read speed, so it is not very likely that an SSD exceeds 10 MB/s. And the USB data bus is 4 bits wide, BTW. The blue connector has 14 signal pins and one ground, but probably 12 bits of analogue "anti-aliasing" "compensating" 24 bits and making 4 bits per colour, while the white monitor connector has 24 signal pins (8 pins per colour), one ground (chassis) and 4 for USB. CD-ROM 1x means 150 KB/s, so all CD-ROMs have about the same read speed of 2 MB/s; 10x-12x, 40x etc. is just disc rotation speed... RAM may be needed just for writing to the HDD in a better order... but RAM speed is almost the same as flash... Newer CD-ROMs read at 5 MB/s, and 2 pins in the blue VGA connector are for stereo speakers.
And actually HDR doesn't even exist; for example, in Tomb Raider: Legend/Anniversary the full-screen effect without depth of field is just raising all points by some amount of intensity (each colour moved by, say, 10 intensity levels out of 255, so 120:150:250 becomes 130:160:255), similar to how gamma should be done cleverly instead of non-linearly (say, 50 raised by 20 to 70 while 200 is raised by 30 to 230 — that is non-linear, and gamma is therefore very harmful to image quality). Depth of field is the same as on CRT monitors: the more brightness you give, the greyer the picture becomes, because if there is, say, colour 40:100:230 it becomes, say, 50:105:230 — a greyer blue... Games only look better because of the rising frequency of the 386 — or, more precisely, HDD speed — better architecture and programming, and more memory to store it all. The colour signal, calculated in 24 bits, is then converted to the 12-bit blue monitor connector with two stereo pins and one GND; travelling through the monitor's circuits/transistors/tube, the signal gets mixed, making the difference between 16.5 million colours and 4096 colours indistinguishable, because even LCD crystals and transistors scramble and mix the signal before it comes out — a kind of anti-aliasing in itself... Maybe games never even use 32-bit colour but always 16 (4 bits for smoke, dark glass, brilliance etc.). You see, if some errors are written to a CD or HDD and those errors are not part of general code but encode some sound or picture, they are not fatal system errors; and actually more kHz for sound is only needed for higher-amplitude sound, because the signal recorded on an HDD is very weak and hard to amplify many times without errors, so it is recorded longer — thus you can tell the difference between 8 and 16 kHz, but it does not necessarily mean that frequency of sound.
So on a DVD there are 10-20 thousand bits per cm; it is not likely that the same number of transistors fits in a 1 cm chip, because one wrong transistor breaks the whole computation, and if, say, a laser makes 1-micrometre or 1-nm holes in silicon which are then filled with hot liquid gold or aluminium, it is not likely to work. So Microsoft keeps silent about this, because, for example, a pi calculation is wrong after many digits, like 10 million or a trillion: the first thing to do would be to check and match all the numbers calculated on different (super)computers, and it seems nobody has done this, so many big calculations may be wrong — unlike the non-problem of a small point slightly changing its colour in a JPG texture... A difficult JPG texture or picture (not just sky) takes about 10 times fewer bytes than a good-quality BMP. So, say, a 1000*1000 BMP texture costs 24 Mbits or 3 MB, the JPG about 0.3 MB; with about 1000 textures of that resolution, 512-1024 MB of GPU RAM is already not enough, nor a 1-billion-triangle Lara boob, and 1 GB of RAM is also not enough if you think RAM is 10-100 times faster than the HDD...

Missing images?
"In the HDR rendering on the near right" - where are the images? —Preceding unsigned comment added by 199.203.230.140 (talk) 10:53, 22 April 2010 (UTC)

correct HDR algorithm
In current computer graphics the HDR algorithm is not exactly correct. It calculates the brightness of all pixels and, depending on the average brightness, increases the brightness of the display. So if a colour was RGB(230:200:170), after HDR it will be RGB(255:250:220): a nearly white colour under HDR ends up at 255. Real HDR, as the human eye sees it, greys out dark areas while bright areas hardly change. So for correct HDR (my algorithm, as it were) you just increase brightness and pull contrast down, so that values which before HDR were only a little below 255 (like 180 or 230) never exceed 255. If you want to understand better what the brightness and contrast values represent, go to the ATI/AMD "Catalyst Control Center" and select "Desktop color"; then you will know how much to pull contrast down after raising brightness (and this you can then insert into game code). For example, default brightness is 0 and default contrast is 100. For HDR, if brightness is 30, contrast must be 88, and all bright in-game textures (those with values about 200 and higher) will not become 255. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 08:11, 30 August 2011 (UTC) If the line ends exactly in the top-right corner then the HDR will be mostly correct. It shouldn't be hard to make a simple formula for how much to decrease contrast when brightness increases. Yes, after a little experimenting I already have one: if brightness increases by 25, contrast decreases by 10. So brightness increases 2.5 times more than contrast decreases.
 * For truly correct HDR the same needs to be done for the white glow (from the sun, though it can be any colour) around bright objects. Such a glow effect is done, I think, with some blurring and sprite textures, so that texture's contrast needs decreasing [by, say, 10] and its brightness increasing [by 25]. But a texture does not have contrast and brightness settings, so it can be more difficult, or even impossible. So something similar to gamma is needed (by the way, gamma is wrong, because it does not raise dark colours enough, so only the mid colours are raised too much; gamma correction is a curve, while a brightness-and-contrast combination is a straight line). To do this, add more to each colour the darker it is, and less the brighter it is. Maybe textures aren't even used, but it must still be done like this for the scene in general. Don't underestimate the shine around bright objects in sunlight: around my hand in the sun a white halo appears [brightening all objects around the hand so they look grey-white, and the halo fades with distance from the sunlit hand]. It must appear around any object, like a cloud, or around a bright spot in an object's texture (a cloud or any bright spot counts as bright if it is illuminated by the sun).
 * The difference between a hand and a cloud may be that the hand does not strongly light the viewer's face, so light from the face does not put everything under a white mask. Even a small amount of light from a sunlit cloud falling on the face or into the eye spreads so that everything that was dark[er] is seen through a grey mask (a grey mask means the colours fade and everything looks dark, but it is grey — you perceive it as dark, or perhaps as grey if you are very sensitive). A sunlit hand cannot illuminate the viewer's face so strongly, so bright light is visible only fading around the hand; and if the hand is about 1/4 of the field of view, it brightens almost the whole scene (though objects farther from the hand are almost not under the white mask; in computer graphics one needs a grey mask texture with alpha, plus the object's colour, depending on the object's brightness). — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 08:57, 30 August 2011 (UTC)
 * The main reason I don't use gamma instead of brightness plus contrast in the correct HDR algorithm is that with gamma the colour RGB(0:0:0) stays the same no matter how high the gamma is, so it will not grey out darker colours after HDR. Mixing a not-very-dark colour with grey is roughly the same as mixing it with black — in other words, the same as decreasing its intensity (like from RGB(130:150:80) to RGB(80:100:20)). So with gamma, a red RGB(100:0:0) before HDR becomes RGB(150:0:0) after HDR; with the brightness-and-contrast combination, RGB(100:0:0) before HDR becomes RGB(150:50:50) after. The same goes for the original official algorithm, except that the official algorithm turns small bright parts of the scene white (adding 50 to each RGB value regardless of how bright it already is, while my algorithm adds 50 to dark values, say 20 to brighter ones, 10 to very bright, and 5 or almost 0 to very very bright RGB values). — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 11:04, 30 August 2011 (UTC)
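The proposed adjustment (raise brightness by 30, lower contrast to 88, using the Catalyst-style brightness and contrast formulas discussed later on this page) could be sketched as follows; `apply_hdr` and its defaults are illustrative, not code from any actual engine or driver:

```python
def apply_hdr(rgb, brightness=30, contrast=88):
    """Raise brightness while pulling contrast down (defaults from the
    comment above: brightness 30, contrast 88; neutral is 0 and 100)
    so that channels a little below 255 do not clip to white."""
    out = []
    for c in rgb:
        f = (contrast - 100) * c / 200.0  # Catalyst-style contrast term
        b = brightness / 2.55             # Catalyst-style brightness term
        out.append(min(255.0, max(0.0, c + f + b)))
    return out
```

With these settings a bright texel of 200 maps to about 199.8 rather than clipping, matching the claim that values around 200 and higher stay below 255.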


 * In the game Crysis with DirectX 10 there is not much HDR; it seems only to brighten the whole scene depending on the angle between the rays from the eyes and the ground. The game "Call of Juarez" may work the same way, but the ground is illuminated very strongly when looking at it, and many white parts appear on it (when the ground normal and the view vector coincide). Perhaps all games make brightness depend on the angle between the view vector and the ground normal (just not at night). In that case you still need to do exactly as described before: if the angle between the ground and the view vector is large (or the angle between the ground normal and the view vector is small), increase brightness by 50 and decrease contrast by 20. All this assumes sun or thin clouds, and it can be fairly correct, because even a small bright sun spot in the field of view puts a white or grey mask over everything in the scene that is not in sunlight. An alternative (if you don't like grey for dark colours and don't want to lose colour range for bright objects) is simply to decrease brightness to -50 and increase contrast to 120, so all RGB values under 50 become black. A middle way that keeps colours is brightness -25 and contrast 120 (the average of my algorithm and the official one); then all colours below 25 become black and all above 229 become white.

Notice that even a white surface lit by a light bulb must never exceed 150-200 except at very close range; only sunlit objects can exceed 200. Don't assume humans see such a wide dynamic range: even a sunlit monitor screen does not make everything on the monitor invisible (only colours under 150-200), and a monitor shines more strongly than paper lit by a lamp. White paper in shadow is not so dark compared with white paper in direct sunlight, and everything in sun shadow is no brighter than under a sunless sky. Under a sunless sky a light bulb or any lamp (not too far away) adds a visible amount of light to the ground's brightness, like 50 onto 150. So perhaps the best way is still to kill all colours below 50, or even below 100 (decrease brightness by 50 or 100), and increase contrast by 20 or 40 respectively. Then the dynamic range matches sunlight and all colours below, say, 100 (max 255) are black. This may come a bit close to 16-bit (high colour), but since the monitor's lighting range is only divided by about two, the difference should be invisible: no matter that some talk about 10 bits per RGB channel (30 bits total instead of the standard 24 + 8 alpha), 8 bits per channel is more than enough (it would act like about 7 bits per channel). If the player looks straight at the horizon, contrast decreases only by half and brightness by half its maximum (brightness -50, contrast 120). Looking at the player's feet, brightness is the default 0 and contrast the default 100. Looking straight up (at the sky/sun), brightness is -100 and contrast 140.
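The view-angle rule at the end of the paragraph (feet: brightness 0 / contrast 100, horizon: -50 / 120, straight up: -100 / 140) could be interpolated linearly; this is only a sketch of that suggestion, and `angle_based_settings` is a hypothetical helper name:

```python
def angle_based_settings(elevation_deg):
    """Interpolate the suggested brightness/contrast settings by view
    elevation: -90° (feet) -> (0, 100), 0° (horizon) -> (-50, 120),
    +90° (zenith) -> (-100, 140)."""
    t = (elevation_deg + 90.0) / 180.0   # 0 at feet, 1 at zenith
    brightness = 0.0 + t * (-100.0 - 0.0)
    contrast = 100.0 + t * (140.0 - 100.0)
    return brightness, contrast
```

A linear ramp is the simplest reading of the three anchor points given; a real implementation might ease between them instead.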


 * It appears Crysis is really not the angle-based HDR type after all. It really calculates all pixels, sums the frame's pixels, divides by the number of pixels, and if the total colour is dark it increases brightness; it may also count sunlight as a value like 1000 and room light like 200 (or maybe not — the sun is just brighter, since it makes almost no difference to the final result whether sunlight is 240 or 900; I think it just increases the brightness of a normal frame when the frame is too dark). If you want to be sure, go under a rock or a jeep and let a small patch of sky into the screen: the sky patch turns completely white. If you then step out and look at the sky at the same angle, the sky is normal blue (brightness is increased by some amount per frame when only a small part of the sky is visible).


 * How to simulate colour contrast in a game? In the ATI/AMD Catalyst Control Center >> Desktop Management >> Desktop color there is a default contrast value of 100 and a default brightness value of 0. The contrast regulation range is 0 to 200; the brightness regulation range is -100 to 100. As I said, brightness simply adds some number to a colour or subtracts some number from it. To simulate brightness there is the formula b=y/2.55, where -100<y<100. To simulate contrast (x>100): f=(x-100)*c/200, where 100<x<200 and 0<=c<=255. Here x regulates contrast from 100 to 200. If x is 100 and the colour is 255, then according to the formula f=(x-100)*c/200=0 and nothing is added. If x is 120 and the colour is 150, then f is added to the colour 150 and the colour will be
 * 150+f=150+((x-100)*c/200)=150+((120-100)*150/200)=150+(20*150/200)=150+(150/10)=150+15=165.
 * So from 150 color will become 165.
 * If color is 255 (maximum value), then color, after using contrast with number x=120, will become
 * 255+f=255+((x-100)*c/200)=255+((120-100)*255/200)=255+(20*255/200)=255+(255/10)=255+25.5=280.5.
 * As you see, the bigger the colour intensity (range 0 to 255), the more is added to it (at x=200 a maximum of 127.5 can be added to colour 255 — the display would still show 255; that is just where the line would go). In the first example 165-150=15 is added, and in the second 280.5-255=25.5.
 * Now if x<100, then it will be subtracted from color. So contrast formula for x<100 is the same f=(x-100)*c/200.
 * If x is 80 and color is 150, then to color 150 will be added f and color will be
 * 150+f=150+((x-100)*c/200)=150+((80-100)*150/200)=150+(-20*150/200)=150+(-150/10)=150-15=135.
 * Maximum can be subtracted 127.5 from color 255. For example if x=0 and color c=255, then color 255 after applying contrast simulation will be:
 * 255+f=255+((x-100)*c/200)=255+((0-100)*255/200)=255+(-100*255/200)=255+(-255/2)=255-127.5=127.5.
 * So this is really a simulation of contrast (exactly as on the desktop), and with this formula it is possible, I guess, to calculate with good precision ranges of colours with intensities from 0 to 1, 1 to 2, 2 to 3, 3 to 4, and 4 to 5. Not quantised, of course — but if you see everything lit by the sun (without looking at the sun directly, just at sunlit surfaces) and there is a building without windows, with open doors, standing in shadow, you will probably not see what is inside, or only weakly, because of the darkness. So this algorithm will make everything inside the building black or dark. I guess the human eye sees in about 5 ranges of light intensity, with the [average] intensity of the first range about 5 times weaker than the 5th — like comparing sunlight with lamp light; sunlight is about 5 times stronger. This should be about right, because the pupil of the eye can change its radius about 2-3 times from smallest to largest: if the smallest radius is 1 and the largest is 2.5, the admitted light is 2.5*2.5=6.25 times stronger. So here is how [my] algorithm works: all colours that were between 0 and 51 will, after the algorithm's procedures, span 0 to 255 if the average colour of all pixels is 25.
 * And if the average pixel colour is 76 (most colours in the range 51 to 102), then after my algorithm's calculations that range will be expanded to the monitor's 0 to 255. Colours that don't fit into the 51-102 range will be eliminated (cut). Each range must hold about 51 colour strengths. The number of ranges can be large, but ranges like 0-10 or 230-255 are not allowed: if the average pixel colour is below 25 or above 229, it must still map to the 0-51 or 204-255 range respectively, because the sun will never be dark and the night never as bright as day. At least that is correct for night; under very bright artificial light maybe everything really is white, but nobody has seen such light, so it is better to tune it for the sun only (204-255 as the maximum range). Even unadapted it is perhaps not so bad — the brightest white would still be white, and at night merely bright (but not grey). Such dark scenes are hard to imagine but still possible, so better not to let the brightness-and-contrast algorithm expand a 0-10 range into 0-255 or 200-255. The principle of the algorithm is to take a range and expand it to 0-255: for example, if the average pixel intensity is 125 (out of a possible 0 to 255), we are dealing with the range 100 to 151, which is expanded to the monitor's 0-255 intensity.
 * What range of colours will be chosen from the original 0-255 range if brightness is set to 100 and contrast to 0? The range (39 to 167) is calculated with this formula for the lower limit:
 * 0+f+b=((x-100)*c/200)+(y/2.55)=((0-100)*0/200)+(100/2.55)=39.2157
 * and this for upper range limit:
255+f+b=255+((x-100)*c/200)+(y/2.55)=255+((0-100)*255/200)+(100/2.55)=255-255/2+100/2.55=255-127.5+39.2157=166.7156863,
 * where f=(x-100)*c/200; b=y/2.55; 0<x<200; -100<y<100; x controls contrast; y controls brightness; c is color intensity. If you want to calculate what color after applying brightness and contrast then use this formula:
 * $$c_a=c_b+f+b=c_b+(x-100)c_b/200+y/2.55=c_b+\frac{xc_b}{200}-\frac{c_b}{2}+\frac{y}{2.55},$$ where $$c_b$$ is the colour before applying range selection and $$ c_a$$ is the colour after applying range selection, with x and y chosen for contrast and brightness regulation.
 * So we get the range 39 to 167, and it's obviously too wide. We want to expand a smaller range of colours to 0-255 — the opposite effect — so we select brightness -100 and contrast 200. The range (from -39 to 422) is calculated with this formula for the lower limit:
 * 0+f+b=((x-100)*c/200)+(y/2.55)=((200-100)*0/200)+(-100/2.55)=-100/2.55=-39.2157.
 * [just showing with colour 1 an almost identical result: 1+f+b=1+((x-100)*c/200)+(y/2.55)=1+((200-100)*1/200)+(-100/2.55)=1+1/2-100/2.55=-37.7157].
 * and this for upper range limit:
 * 255+f+b=255+((x-100)*c/200)+(y/2.55)=255+((200-100)*255/200)+(100/2.55)=255+255/2+100/2.55=255+127.5+39.2157=421.7157.
 * This time all colour values under 39 and above 167 (421.7-255=166.7) will be black or white. To eliminate more of the upper and lower colours, one needs to change some constants in the contrast and brightness formulas.
 * After the average colour value of all pixels in the frame (which can be 0 to 255 in a game without HDR) is calculated, this average value p must be inserted into the final colour formula and multiplied by some coefficient k. Then k*p must be multiplied with the brightness in the formula:

$$c_a=c_b+f+b\cdot k\cdot p=c_b+(x-100)c_b/200+ykp/2.55.$$
 * Then if the frame's average pixel colour value is big, a range of upper colours will be selected (like 180 to 221), and if it is small, a bottom dark range (like 30 to 81), expanded to 0-255. Colour values just can't be less than 0 or more than 255; if the computer doesn't handle this itself, it must be done with an if statement (like if $$c_a> 255$$ then $$c_a= 255$$). Best of all, weak light should mean weak: a lamp at a metre's distance gives white paper a colour value of about 50, the sun gives it about 250, and a flashlight (projector) at a metre also gives about 50. Then, in a very bright place (under sunlight), light added by a projector or lamp barely changes surface brightness. Although without radiosity (rays reflecting many times off not-necessarily-specular surfaces) this really only affects lamps and specular reflections, and everything else uses ambient colour or lighting (uniform lighting regardless of source, e.g. sky, or a room without lights where only window light spreads everywhere). As I said, even a small sunlit part must not turn white on a non-white surface, so the turn to white should be minimal. In reality dark may even appear around very bright areas and nowhere else slightly farther from the sunlit part, so it may be possible to draw the glare around a very bright object as grey, black or white; current algorithms seem to draw such glares around all objects (even weakly lit ones), though with weak light they become almost invisible thanks to alpha greyness-blackness, or the glare effect is simply multiplied by light raised exponentially. If the glare around an object faded to black instead of white, that might be most realistic.
But glares around objects possibly shouldn't extend far from the object, and the distance may need increasing, or varying with intensity. Humans really don't seem to see a difference whether the glare turns the scene around an object white, grey or blackish-dark; it depends on how a person interprets it. Only one thing is certain — objects near an object illuminated by the sun nearly disappear, and it is impossible to recognise their colour (if it is dark). But that may still be an adaptation effect rather than glare; it is just hard to explain how a human sees a very dark and a very shiny sunlit object at the same time (maybe because a human can only attend to one object at a time, or maybe humans have a wide range for weak and strong light; in the second case HDR is almost unneeded, except that two combined lights must not make very bright light — almost nothing should change when many lights are added).
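The desktop brightness/contrast simulation derived above (f=(x-100)*c/200, b=y/2.55) can be checked numerically; `adjust` is an illustrative helper, not code from any driver:

```python
def contrast_term(x, c):
    # f = (x - 100) * c / 200; x is the 0..200 contrast slider, c the channel value
    return (x - 100) * c / 200.0

def brightness_term(y):
    # b = y / 2.55; y is the -100..100 brightness slider
    return y / 2.55

def adjust(c, x=100, y=0, clamp=True):
    """Apply the contrast and brightness terms to one channel value
    (c_a = c_b + f + b); optionally clamp to the displayable 0..255."""
    v = c + contrast_term(x, c) + brightness_term(y)
    return min(255.0, max(0.0, v)) if clamp else v
```

This reproduces the worked examples: adjust(150, x=120) gives 165, adjust(255, x=120) unclamped gives 280.5, adjust(150, x=80) gives 135, adjust(255, x=0) gives 127.5, and the brightness-100/contrast-0 range limits come out at about 39.2 and 166.7.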


 * Here is formula for selecting range 51 color units from 0-255 color units and expanding to 0-255 color value:
 * $$c_a=5 c_b-(5 p-127.5)=5 (c_b- p) +127.5,$$
 * where $$c_b$$ is the colour of the pixel before applying HDR; $$c_a$$ is the colour of the pixel after HDR; p is the average colour of all pixels before HDR (all pixel colours summed and divided by the number of pixels [and by 3, because each pixel has 3 colours]); $$0\le c_b\le 255$$. If $$c_a>255$$, then $$c_a=255$$; if $$c_a<0$$ then $$c_a=0$$. And if $$p<25.5$$ then p=25.5 must be used, and if $$p>255-25.5=229.5$$ then p=229.5.
 * For example, if $$c_b=200$$, $$p=70$$, then:
 * $$c_a=5 c_b-(5 p-127.5)=5 \cdot 200-(5\cdot 70-127.5)=1000-(350-127.5)=1000-222.5=777.5,$$
 * so this color of pixel will be white (if over two colors also become more than 255) because $$c_a=777.5>255.$$
 * If $$c_b=100$$, $$p=70$$, then:
 * $$c_a=5 c_b-(5 p-127.5)=5 \cdot 100-(5\cdot 70-127.5)=500-(350-127.5)=500-222.5=277.5,$$
 * this color will be maximum red or green or blue, because 277.5>255.
 * If $$c_b=85$$, $$p=70$$, then:
 * $$c_a=5 c_b-(5 p-127.5)=5 \cdot 85-(5\cdot 70-127.5)=425-(350-127.5)=425-222.5=202.5.$$
 * This color will be 203 from 0 to 255 possible on screen.
 * It will not be good for textures — it is about the same as having 6 bits per RGB channel (because $$51\approx 64=2^6$$) — but textures are noisy themselves, so it fixes itself. And who says 6 bits per channel is not enough? 16-bit RGBA can be not enough, as in 16-bit RGBA(4:4:4:4) or RGBA(5:5:5:1), and it's still more than $$2^4=16$$ or $$2^5=32$$.
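The range-expansion formula above, with its clamps on p and on the result, is a minimal sketch in Python (`expand_range` is an illustrative name):

```python
def expand_range(c_b, p):
    """c_a = 5*(c_b - p) + 127.5, expanding a 51-unit window centred on
    the frame average p to the full 0..255 range; p is clamped to
    25.5..229.5 and the result to 0..255, as specified above."""
    p = min(229.5, max(25.5, p))
    c_a = 5 * (c_b - p) + 127.5
    return min(255.0, max(0.0, c_a))
```

This reproduces the worked examples: with p=70, c_b=200 clips to white (the raw value 777.5 exceeds 255), and c_b=85 gives 202.5.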


 * What is the best way to compare the sun's light intensity with a monitor's capabilities? The best way is a monitor whose case is white. Set the screen to show white at full brightness and let the sun shine on the white case. I made such an experiment and found that, compared with sunlight, the monitor's white looks grey. It is even possible that humans do not perceive some part of the sun's intensity on a sunlit white case (or white paper), so it is better to compare not white with white but a bright-grey painted paper with the monitor showing the same bright grey (for example RGB(180:180:180) in a paint program). Then it will be visible how many times sunlit objects are more intense than objects displayed on the monitor screen. It makes no sense to compare the sun's intensity (looking into the sun) with the monitor; it only makes sense to compare white paper illuminated by the sun with the monitor screen [showing white], because when you look at the sun, everything illuminated by it does not go black or dark — the sun is simply white, and the pupil shrinks to the same [minimum] size whether you look at sunlit white or directly at the sun.

So the monitor's white is about 2-5 times weaker than white paper illuminated by the sun.


 * Here is normalized (in range 0-1, which in programing more common) formula for selecting range 51 color units from 0-255 color units and expanding to 0-255 color value:
 * $$c_a=5 (c_b- p) +0.5,$$
 * where $$c_b$$ is the colour of the pixel before applying HDR; $$c_a$$ is the colour of the pixel after HDR; p is the average colour of all pixels before HDR (all pixel colours summed and divided by the number of pixels [and by 3, because each pixel has 3 colours]); $$0\le c_b\le 1$$. If $$c_a>1$$, then $$c_a=1$$; if $$c_a<0$$ then $$c_a=0$$. And if $$p<25.5/255=0.1$$ then p=0.1 must be used, and if $$p>(255-25.5)/255=229.5/255=0.9$$ then p=0.9.


 * There is an HDR example in the DirectX SDK (if you install the DirectX SDK and Visual C++ 2010) in the directory "C:\Program Files\Microsoft DirectX SDK (June 2010)\Samples\C++\Direct3D\HDRLighting" (launch HDRLighting_2010.vcxproj). Even the game "Crysis" uses the same code, which is DirectX 9 HDR code. To change the original code to my proposal, these lines in the "HDRLighting.fx" file

if( g_bEnableToneMap ) // the tone map checkbox enables HDR
{
    vSample.rgb *= g_fMiddleGray/(fAdaptedLum + 0.001f);
    vSample.rgb /= (1.0f+vSample);
}
 * to replace with those lines:

if( g_bEnableToneMap )
{
    if(fAdaptedLum>0.9) { fAdaptedLum=0.9; }
    if(fAdaptedLum<0.1) { fAdaptedLum=0.1; }
    vSample.rgb = 5*(vSample.rgb-fAdaptedLum)+0.5;
    vSample = max(vSample, (half4)0.0); // clamp final colour to >= 0; this line is not necessary
    vSample = min(vSample, (half4)1.0); // clamp final colour to <= 1; this line is not necessary
}


 * If we want to expand a color interval of width 85 (instead of 51) to the 0-255 range, then we must use this formula:
 * $$c_a=3 (c_b- p) +0.5,$$
 * where $$c_b$$, $$c_a$$ and p are as above; $$0\le c_b\le 1$$. If $$c_a>1$$, then $$c_a=1$$; if $$c_a<0$$, then $$c_a=0$$. And p is clamped: if $$p<((255/3)/2)/255=42.5/255=0.1667$$ then p=0.1667, and if $$p>(255-42.5)/255=212.5/255=0.8333$$ then p=0.8333.


 * If we want to expand a color interval of width 127.5 to the 0-255 range, then we must use this formula:
 * $$c_a=2 (c_b- p) +0.5,$$
 * where $$c_b$$, $$c_a$$ and p are as above; $$0\le c_b\le 1$$. If $$c_a>1$$, then $$c_a=1$$; if $$c_a<0$$, then $$c_a=0$$. And p is clamped: if $$p<((255/2)/2)/255=63.75/255=0.25$$ then p=0.25, and if $$p>(255-63.75)/255=191.25/255=0.75$$ then p=0.75.
 * Maybe a human can see weak and strong colors at the same time (and the monitor can give such a wide brightness range), or maybe the sun illuminates a white surface only 1.5 or 2 times more strongly than the monitor's white; then perhaps we don't need to clip colors above and below the average at 25.5 or 42.5 units, and 63.75 may even be better.
 * Also, a bright light leaves a bright afterimage spot that stays for about half a minute, and it can be mistaken for HDR (or eye adaptation); but this spot (or the many spots blinking after looking at the sun, a bright enough light, or a specular surface) merely blocks part of the image and has nothing to do with eye adaptation. So too much looking at the direct sun, at specular highlights from cars, or at bright clouds suggests some long adaptation, when there are really only blinking spots, which have almost no effect on the fast adaptation (under 1 second; that is the real iris and vision adaptation, apart from a few annoying spots). The eyes scan the whole view and adapt to each object's color faster than once per second, so objects look almost as if there were no HDR, except for pain in the eyes before adaptation and glare around bright objects. Still, HDR is better than no HDR, because the sum of two lights is not realistic without it (this is the most important point). A scene without HDR can be almost perfect (each light must be adjusted where lights intersect) as long as there is no projector and no day-to-night transition. If one light is strong at night but invisible by day, that is exactly the purpose of HDR, because such an effect is impossible without it. Without radiosity, the ambient color of objects and of all surfaces of the earth or a house is an obstacle to truly correct HDR with narrow-range selection and expansion to the 0-255 range, because shadows under lights of different strength are not balanced and come out either too dark or too bright; so the ambient color must change with time of day and weather, but that does not work in enclosed places like a house whose windows and/or doors can be opened and closed. If the house is fairly static, with its windows always open, then its ambient color can also change with time of day and weather, as outside.

How to play Crysis with better HDR

 * You need the demo or the full game "Crysis". In the directory "C:\Program Files\Electronic Arts\Crytek\Crysis SP Demo\Game", rename the file "Shaders.pak" to "Shaders.zip" and extract it with WinRAR. Then go to the directory "Shaders\HWScripts\CryFX" and rename the file "PostProcess.cfx" to "PostProcess.txt" (or open it with Notepad without renaming). This file controls all HDR and bloom effects. Then change these lines of code:

vSample.xyz += cBloom * BloomScale;
vSample.xyz = 1 - exp( -fAdaptedLumDest * vSample.xyz );
 * to these lines:

vSample.xyz += cBloom * BloomScale /2.5;  // bloomscale=2.5
vSample /= 5.5;
fAdaptedLum /= 5.5;
if(fAdaptedLum>0.75) { fAdaptedLum=0.75; }
if(fAdaptedLum<0.25) { fAdaptedLum=0.25; }
vSample = max(vSample, (half4)0.0);
vSample = min(vSample, (half4)1.0);
vSample.rgb = 2*(vSample.rgb-fAdaptedLum)+0.5;
vSample = max(vSample, (half4)0.0);
vSample = min(vSample, (half4)1.0);


 * It is also recommended to change a line in "skyHDR.cfx" from "Color.xyz = min(Color.xyz, (float3) 16384.0);" to "Color.xyz = min(Color.xyz, (float3) 16384.0)/2.5;", because the sky near the horizon, combined with the sun, looks too bright-white.
 * It is also recommended to change the line "half fNewAdaptation = fAdaptedLum + (fCurrentLum - fAdaptedLum) * ( 1 - pow( 0.98f, 100 * ElapsedTime));" to "half fNewAdaptation = fAdaptedLum + (fCurrentLum - fAdaptedLum) * ( 1 - pow( 0.98f, 300 * ElapsedTime));", because the human iris adapts to new light in 0.1-0.5 s, not in 1-2 s. But if adaptation is made too fast, like 0.01 s, the brightness flickers as soon as you move the mouse even 1 mm.
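Why changing 100 to 300 speeds adaptation by exactly 3×: the per-frame blend factor (1 - 0.98^(rate·dt)) composes across frames, so the remaining adaptation error decays as 0.98^(rate·t) after t seconds. A small illustrative Python check (the function name settle_time is made up here, not part of the Crysis source):

```python
import math

def settle_time(rate, fraction=0.01):
    """Seconds until the remaining adaptation error falls to `fraction`,
    for the per-frame update
        adapted += (current - adapted) * (1 - 0.98**(rate*dt)),
    whose error shrinks by the factor 0.98**(rate*t) after t seconds."""
    return math.log(fraction) / (rate * math.log(0.98))
```

With rate=100 the 1% settle time is about 2.3 s; with rate=300 it is about 0.76 s, which matches the "1-2 s versus 0.1-0.5 s" reasoning above.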
 * Other things that would have to be changed to improve quality even more are the ambient lighting and the relative strengths of the flashlight, the other artificial lights, and the sun. The lights may even be generated with some exponent or logarithm function, which makes it worse, or without square-distance attenuation of light (though it looks like square-distance). The flashlight is too strong compared with the sunlight.
 * Here is how it looks after changing the HDR code: http://imageshack.us/g/585/crysishdrwithlerpandmy.jpg/.

Eye iris adaptation (size changing) is a rudiment

 * According to my experiments, the change of iris size under different lighting strengths does not change perceived color brightness at all. So iris adaptation can only be explained like Son Goku's tail in "Dragon Ball", or like nails: perhaps it played an important role earlier in evolution, when no superior vision systems existed, and it remains in human genes. It is hard to believe that the bigger iris in the dark exists only so that a larger average scene value can be gathered (because the wider field of view decides adaptation), and the smaller iris in bright light only to avoid glare and various bloom effects from the sun. It is hard to believe the iris adapts only for such unimportant things, so it is more logical that it is a rudiment, or that a changing iris size is attractive for mating. By my estimate the iris adaptation time is half a second (0.5 s), and the iris radius can be 2-3 times (closer to 3, about 2.7) bigger in the dark than in a bright place.
 * So eye adaptation isn't real, yet a human can still see, at the same time, the darkest color a monitor can show in absolute darkness and a bright scene. In reality there are not many situations where sunlit objects and very, very dark colors are visible at the same time, so maybe some eye adaptation exists; but there are still situations at night where, even with very strong car lights (which cannot illuminate very dark objects through radiosity), the darkest colors remain visible simultaneously with the car illuminated by its [two] lamps and with the lamps themselves, and it looks the same by day under equally strong lamps. So a human really does see a wider range simultaneously than a monitor can give at night (with the lights turned off); the human range is about 2-3 times (maybe even 5 times) larger than the monitor's. Also, by day the monitor's dark colors are killed by the room light, which makes HDR even more desirable. So there are only two ways to make an image in games: either make the game without balanced lighting, with all lights (weak and strong) of very similar strength, or use the HDR way, which turns too-bright colors white and too-dark colors black (in "Crysis" the HDR range is rather weak, and it tends to make dark colors gray instead of letting them become black when the average scene lighting is strong; bright colors in "Crysis" become white, as in all HDR algorithms including mine). — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 12:59, 18 October 2011 (UTC)


 * There actually is a way to make a weak light look strong enough in the dark, while if that light is added to a much stronger one, the strong light barely changes in a well-lit scene. Here is the algorithm:
 * 1) $$k=\ln(sample.r+sample.g+sample.b)$$; each channel is offset so that $$e\le sample.r\le 255+e$$, $$e\le sample.g\le 255+e$$, $$e\le sample.b\le 255+e$$, where e=2.71828.
 * 2) $$c=2^k.$$
 * 3) (sample.r / c)*46.9114132; (sample.g / c)*46.9114132; (sample.b / c)*46.9114132; the maximum will be 255 and the minimum 0.
 * 4) ln(255+e)=ln(257.71828)=5.551867; ln(257.71828*3)=ln(773.1548455)=6.650479346; $$2^{5.551867}=46.9114132$$; t=255/46.9114132=5.435777407; multiply each of the 3 color channels by 5.4357774 if you want to work with values 0-255.
 * 5) (sample.r / c)*46.9/255; (sample.g / c)*46.9/255; (sample.b / c)*46.9/255; the maximum will be 1 and the minimum 0.
 * 6) Moonlight should be 5-20 before the algorithm; white paper lit by a room lamp should be 30-100 before the algorithm; white paper lit by sunlight should be 200-255 before the algorithm. After the algorithm moonlight will be 30-70; lamp-lit white paper will be 100-180; sunlit white paper will be 230-255. And no colors turn gray (taking the logarithm of each color channel separately would make "color.rgb" gray, which can be a mistake in other HDR algorithms: RGB(30:20:10) after such a wrong algorithm becomes RGB(100:90:80), while with this correct algorithm RGB(30:20:10) becomes RGB(100:66:33))!
 * The algorithm first sums the red, green and blue channels of the pixel; the natural logarithm then leaves only a small difference between pixels lit by strong light and by weak light. That difference is too small, so we raise 2 to the power k. But the maximum color 255 must become 1 and the minimum 0, so we do step 5). With this algorithm weak colors do not turn gray, and a weak light added to a strong light barely affects the pixel's brightness, while a dark pixel alone is raised to about half the brightness of the brightest pixel. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 17:50, 18 October 2011 (UTC)
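A note on why the steps above preserve hue: $$2^{\ln s}=s^{\ln 2}\approx s^{0.693}$$, so the algorithm is really a power-law (gamma-like) curve on the channel sum, and every channel is divided by the same factor c. A minimal Python sketch (illustrative; the function name is made up and the normalization constants 46.91/5.4357 are omitted, since they only rescale the result):

```python
import math

def compress(rgb):
    """Per-pixel light compression from the steps above: c = 2**ln(r+g+b),
    then each channel is divided by the same c.  Because 2**ln(s) equals
    s**ln(2), this is a power-law curve on the channel sum, and dividing all
    three channels by one factor keeps channel ratios (hue) exact."""
    s = sum(rgb)
    c = 2.0 ** math.log(s)          # identical to s ** math.log(2)
    return tuple(ch / c for ch in rgb)
```

For RGB(30:20:10) the output keeps the exact 3:2:1 channel ratios, unlike a per-channel logarithm, which would pull the channels toward gray.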
 * A weakness of this algorithm is that, for example, it turns RGB(255:255:255) into RGB(121:121:121), while RGB(255:0:0) stays RGB(255:0:0). Another example: RGB(128:0:0) becomes RGB(205:0:0), while RGB(128:128:128) becomes RGB(97:97:97). One more example: RGB(128:128:0) becomes RGB(129:129:0), and RGB(255:255:0) becomes RGB(159:159:0). Also RGB(64:64:64) becomes RGB(78:78:78), RGB(64:0:0) becomes RGB(168:0:0), and RGB(64:64:0) becomes RGB(104:104:0). The good news is that we can multiply by about 1.5, so a single active channel stays the same (after clamping), and for two active channels it is very positive: 159*1.5=238.5. So there is one more step:
 * 7) 1.5*(sample.r / c)*46.9/255; 1.5*(sample.g / c)*46.9/255; 1.5*(sample.b / c)*46.9/255; if a color channel is >1, then it must be set to 1; the maximum will be 1 and the minimum 0.


 * Here is a "shaders.pak" file, http://www.megaupload.com/?d=2URCLOQY, which needs to be put (replacing the original) in the "C:\Program Files\Electronic Arts\Crytek\Crysis SP Demo\Game" directory, or "\Crysis\Game" for the full version. Besides the main HDR code, the original Crysis code has many additional pieces of HDR code layered on top of the main code, such as gamma, color matrices and light shafts. I think only bloom, glare, light shafts and the main HDR are necessary (and maybe the bright-pass filter, which is in the tutorial demo and is similar to the glare or glow of bright objects). So for now this pak has many of the non-essential original lines removed, the main HDR changed to "vSample.xyz =3*(vSample.rgb-fAdaptedLum)+0.5;", and "SkyHDR.cfx" corrected with "Color.xyz = pow(2, log(min(Color.xyz, (float3) 16384.0)));", where log means the natural logarithm (ln). This replaces the division by 2.5 and repairs very dark colors, though the dark colors of the blue sky are now a little more gray; but since the main HDR in "PostProcess.cfx" runs on top, the gray appears only in dark places and with a dark horizon (early in the morning, for example). If the code I describe for the sky HDR were also applied to the lights, it would make perfect HDR without white and black areas when a small range is selected from a big one. But this HDR (if applied only to the added lights)

Color.xyz = min(Color.xyz, (float3) 16384.0);  // original
// Color.xyz = Color.xyz /2.5;                 // my
Color.xyz = log(Color.xyz);                    // my: ln(255)=5.54126
Color.xyz = pow(2, Color.xyz);                 // my: 2^5.54=46.56788792
// Color.xyz = Color.xyz *5.47;                // my: 46.56788792*5.475876434=255
// the second and fifth lines must either be deleted together or kept together; it is almost the same either way
 * still has a small weakness: there can be no very colorful lights, because they become a little gray, especially if the lights are not strong. But for simulating yellow sunlight, blue sky light, or blue sky light at night with moonlight it is more than enough; only for, say, dancing games with many colorful lights is this type of HDR not good enough. Single-channel lights (pure red, green or blue), however, it handles without adding any gray. First all lights in the scene are added together, for example p.rgb=RGB(50:70:0)+RGB(10:30:40)+RGB(200:200:200)+RGB(230:230:230)=RGB(490:530:470); then k.rgb=ln(RGB(490:530:470))=RGB(6.19:6.27:6.15) (log in High Level Shader Language is the natural logarithm ln); then $$c.rgb=2^k=RGB(73.2:77.3:71.15)$$; then the final light for the pixel is f=c*3=RGB(220:232:213), or, if you want comparable magnitudes, F=c*7.5=RGB(549:580:534).
 * Assume moonlight is RGB(5:5:8) (out of 255 max), room lamp light [at 2 meters from the lamp, on white paper] is RGB(55:50:40), and sunlight on white paper is RGB(230:225:210). After the algorithm, moonlight becomes RGB($$2^{\ln(5)} : 2^{\ln(5)}: 2^{\ln(8)}$$)=RGB(3.05:3.05:4.23); multiplied by 5.47 this gives RGB(17:17:23) (moonlight [if you are not playing videogames at night] is an exception, and night lights should be simulated by changing the ambient lighting; otherwise, if you change $$\ln$$ to $$\log_{10}$$, you get too-bright shadows from the flashlight; or you can pick a stronger moonlight than real, like RGB(15:15:15), which gives RGB(36:36:36), and I guarantee it will not affect the flashlight shadows). Room lamp light on white paper from 2 meters becomes RGB($$2^{\ln(55)} : 2^{\ln(50)}: 2^{\ln(40)}$$)=RGB(16.08:15.05:12.9); multiplied by 5.4759 this gives RGB(88:82:71) (room light should perhaps be chosen a little stronger, like RGB(100:100:100), which after the algorithm becomes RGB(133:133:133)). Sunlight on white paper without specularity becomes RGB($$2^{\ln(230)} : 2^{\ln(225)}: 2^{\ln(210)}$$)=RGB(43.35:42.7:40.7); multiplying by 5.475876 we get RGB(237:234:223). For stronger HDR, instead of raising $$2^{\ln}$$ we can choose $$1.5^{\ln}$$ and decrease the ambient light (the light under shadow, i.e. how bright a shadow is: how bright an object is in the shadow of sunlight, flashlight or lamp light). So with this algorithm a weak light alone is made strong, while a weak light added to a strong light leaves the overall lighting without noticeable difference. If you plan to use no lights in the videogame except sunlight, you don't need this algorithm.
Roughly speaking, in this algorithm the sum of all lights, passed through the algorithm, must be multiplied with the diffuse [lighting] and with the texture colors; but the texture must first be multiplied with the ambient lighting, which should make the texture's brightest colors about 10-50, with the diffuse lighting from 0 to 1 applied after the texture's brightest values of 255 are scaled down to 10-50; so at the end of the algorithm everything (the final color result) must be divided by 10-50. Actually, ambient lighting is just another light without falloff, so it is better first to multiply each light with the diffuse term (N*L), which ranges from 0 to 1 depending on the angle between the surface and the light, and then add all the lights. Ambient lighting usually need not be multiplied with the diffuse term, because the sky shines from all sides. Ambient lighting must be about 10 to 100, depending on how strong an HDR you want ($$1.5^{\ln}$$ or $$2^{\ln}$$; ambient 10-20 if 1.5). So all lights including ambient are added, then passed through the algorithm, $$2^{\ln(ambient+diffuse*light1+diffuse*light2+diffuse*light3)}$$; then the result is multiplied with the texture colors, which range from 0 to 1. And if the texture with lighting needs to be clamped to 0-1 values, it must be divided by 255.
 * A kind-of official, faster way to do a similar thing is $$(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*2/(1+(ambient+diffuse*light1+diffuse*light2+diffuse*light3))$$, but all lights must be from 0 to 1, and preferably no light should exceed 0.8 (especially not the sunlight). For stronger HDR the formula becomes $$texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*4/(1+3*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))$$, which increases a very weak light almost 4 times while barely changing the intensity of strong lights. But the official formula multiplies by the texture first, and I suggest not doing that, because dark and mid colors become less colorful and more gray. So the texture should be multiplied after the algorithm, not together with the light sum as in $$(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*4/(1+3*texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))$$.
 * So why is the general formula $$texture*(5.4759*2^{\ln(255*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))})/255$$ better than $$texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*2/(1+(ambient+diffuse*light1+diffuse*light2+diffuse*light3))$$? The answer is that there is almost no difference. In the first formula a weak light loses color, e.g. from RGB(192:128:64) to RGB(209:158:98); in the second formula it also loses color, slightly differently, e.g. from RGB(192:128:64) to RGB(219:170:102). For weak colours the difference is bigger: the first algorithm converts RGB(20:10:5) to RGB(43.7:27: $$5.4759\cdot 2^{\ln(5)}$$)=RGB(43.7:27: $$5.4759\cdot 3.05133$$)=RGB(43.7:27:16.7)≈RGB(44:27:17); the second algorithm converts RGB(20:10:5) to RGB(37:19.2: $$255\cdot 2\cdot (5/255)/(1+(5/255))$$)=RGB(37:19.2: $$255\cdot 0.0392/1.0196$$)=RGB(37:19.2:9.8)≈RGB(37:19:10). — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 17:03, 27 October 2011 (UTC)
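The two operators compared above can be checked per channel in Python (an illustrative sketch; the function names are made up, and both operators are applied channel-wise exactly as in the worked RGB examples):

```python
import math

def power_law(ch):
    """First operator per channel (0-255): 5.4759 * 2**ln(ch), which equals
    5.4759 * ch**ln(2); the 5.4759 factor normalises 255 back to ~255."""
    return 5.4759 * ch ** math.log(2)

def reinhard_like(ch):
    """Second ('official') operator per channel: x*2/(1+x) with x = ch/255,
    rescaled back to the 0-255 range."""
    x = ch / 255.0
    return 255.0 * 2.0 * x / (1.0 + x)
```

Both leave 255 near 255, but the power-law curve lifts very weak channels more strongly, matching the RGB(20:10:5) comparison above.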


 * According to my experiments, the adaptation time from a lamp-lit room to very, very weak light is 20-25 seconds, and the adaptation time between average and strong lights is about 0.4 second. So the adaptation time is long only for very, very weak light; it is really not 20 minutes, and not even 1 minute. Eye adaptation from very weak light to stronger, average, and even very strong lighting is also about 0.4 s. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 02:35, 28 October 2011 (UTC)  It appears that the 20-25 second adaptation to very weak light is caused by the blinking bloom-glow from the strong light; according to my experiments, if only part of the view has bright light in the eye, adaptation in the other part is instant. Thus I come to the only logical explanation: there is an adaptation similar to color adaptation, based not on iris size but on some induction from the previous light. It is obvious that if one part of the view is adapted while another part still needs adaptation time, and after turning the head or eyes you can tell that you either see or you don't, then it cannot be caused by iris size changes when everything around is black. So the eye iris really is a rudiment and can play a role only as a pain-causing factor before adaptation to stronger light, for judging the difference in scene luminance. At best, the iris could matter for adapting to weakly lit objects, if there are errors in my experiments due to very strong radiosity (endless raytracing), which removes the sense of transition from strong light to weak and vice versa, or due to a perhaps wider human dynamic range or some brain color-filtering mystery. But a human sees as if he had a very wide dynamic range, and iris size plays hardly any role in human vision, with only a small chance that the iris matters for adaptation to weak colors.

Look how I kill HDR

 * Assume sunlight illuminates paper 3 times more strongly than an average lamp at 2 meters distance. A human sees very weak and very strong colors at the same time (or whatever you believe, such as weak colors turning black and strong ones white, but I never see such things as in filmed videos). So HDR is needed only for video cameras, because the monitor shows white about 1.5-3 times weaker than the sun illuminating white paper. If we do not use HDR for video recorders, the average colors will be too dark.
 * If we use this algorithm:
 * final.rgb=3*(color.rgb-average)+0.5; 0<color.rgb<1, 0.1667<average<0.8333, 0<final.rgb<1;
 * or the official algorithm:
 * final.rgb=(color.rgb/average)-0.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
 * THEN for, say, the orange color RGB(255:100:0)~RGB(1:0.392:0) (check in a paint program, it is orange) at average=0.5, we get after HDR roughly RGB(255:245:0), so no more orange. But the whole point is that if the orange was RGB(204:80:0) and average=0.75, then after the HDR algorithm we get about RGB((204/255)/0.75-1/3:(80/255)/0.75-1/3:0)=RGB(1.0666-0.3333:0.4183-0.3333:0)=RGB(0.7333:0.085:0) ~ RGB(0.7333*255:0.085*255:0)=RGB(187:22:0). That is the whole point: if we use HDR too strongly, orange turns into red, as from RGB(204:80:0) to RGB(187:22:0). So the whole HDR image would be in only 6 colors: red, green, blue and pure yellow, cyan and pink/violet. There is no cure for this except to use HDR very weakly [in computer graphics, in videogames].
 * Maybe the official HDR algorithm is more like this:
 * final.rgb=(color.rgb/average)/1.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
 * but that really doesn't change anything. One could use HDR textures (a combination of textures photographed with different exposure times), but that is silly and requires too much work. The monitor range should be enough to show what a human sees, so better not to play with HDR and instead use a light-compression algorithm (increasing a weak light alone while making the sum of a strong and a weak light almost the same as the strong light alone), like this:
 * $$texture * (5.4759 * 2^{ln(255 * (ambient + diffuse * light1 + diffuse * light2 + diffuse * light3))}) / 255,$$
 * then it would be most correct, as a human sees. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 15:00, 9 November 2011 (UTC)
 * Correction: this algorithm:
 * final.rgb=(color.rgb/average)/1.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
 * changes everything. Using only division, you cannot change a natural color into another. Its disadvantage compared with mine (and with the others that subtract 0.3333) is that it does not adapt to bright light; but if the bright light is strong (the average is big), the image is unchanged, and this can even be better. If dark colors dominate, brighter colors turn white, as in the previous algorithms. At the minimum average=0.25 all colors become 4/1.3333=3 times stronger. At average=0.5 all colors become 2/1.3333=1.5 times stronger. At average 0.75 and above we have a normal image, as without the algorithm. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 11:20, 10 November 2011 (UTC)
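The hue-shift argument from "Look how I kill HDR" can be checked numerically: the subtractive operator changes channel ratios, while pure division cannot. A small Python sketch (function names made up here, channels in 0-1):

```python
def subtractive(ch, avg):
    """(channel/avg) - 1/3, the operator discussed above, clamped to 0-1.
    Subtracting a constant changes the ratio between channels (hue shifts)."""
    return max(0.0, min(1.0, ch / avg - 1.0 / 3.0))

def divisive(ch, avg):
    """(channel/avg)/1.3333: pure scaling, so channel ratios never change."""
    return max(0.0, min(1.0, ch / avg / 1.3333))
```

At average=0.75 the divisive operator is nearly the identity, while the subtractive one turns the orange RGB(204:80:0) into roughly RGB(187:22:0), as computed above.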


 * There is an even better way than light compression: luminance compression, like this:
 * final.rgb=color.rgb * 2/(1+max(color.r, max(color.g, color.b))); // the "max" function chooses the bigger number; 0<color.rgb<1;
 * Now the lights do not lose color at all: a weak colour (consisting of 3 RGB channels) is increased and a strong colour is barely increased. In video games this algorithm can be combined with the HDR algorithm
 * final.rgb=(color.rgb/average)/1.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
 * and here it would help very much if the average were calculated by choosing the biggest of each pixel's 3 RGB channels and summing all pixels' strongest channels, without dividing by 3. That way there is no wrong adaptation to bright grass when only the green channel dominates (a color like RGB(0:200:0) should not be treated as RGB(0:200/3:0)=RGB(0:67:0), which would increase the overall luminance dramatically, so that green would end up far beyond 255, about 300-400 after adaptation). — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 08:17, 17 November 2011 (UTC)
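The max-channel luminance compression above applies one common scale per pixel, so it provably cannot change hue; the scaled maximum 2m/(1+m) also never exceeds 1 for m in 0-1. A minimal Python sketch (illustrative, name made up):

```python
def compress_pixel(rgb):
    """final = color * 2/(1 + max(r, g, b)): one common scale per pixel,
    so channel ratios (hue) are untouched; channels in 0-1.  The brightest
    channel maps to 2m/(1+m) <= 1, so the clamp is only a safety net."""
    m = max(rgb)
    k = 2.0 / (1.0 + m)
    return tuple(min(1.0, ch * k) for ch in rgb)
```

A dark pixel is nearly doubled, while a pixel whose brightest channel is already 1.0 passes through unchanged.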

Reviving my real HDR algorithm

 * It appears that, with "max" and several "if" functions, my real HDR algorithm can be made functional and can expand the range even of the bright parts (e.g. take the levels from 170 to 255 and turn everything below 170 to black). So this is a real HDR algorithm: it selects part of the 0-255 range, say a part of 85 levels, and expands those 85 levels to 255 levels. First, here is the unmodified algorithm, which in most cases produces only 6 basic colours (red, green, blue, yellow, cyan, pink, plus of course black and white):
 * final.rgb=3*(color.rgb-average)+0.5; 0<color.rgb<1, 0.1667<average<0.8333, 0<final.rgb<1.
 * And here is the modified algorithm:
color.rgb = color.rgb + 0.001;                  // to avoid dividing by 0 later
colormax = max(color.r, max(color.g, color.b));
if (colormax == color.r) {
    kcolorg = color.r / color.g;
    kcolorb = color.r / color.b;
}
if (colormax == color.g) {
    kcolorr = color.g / color.r;
    kcolorb = color.g / color.b;
}
if (colormax == color.b) {
    kcolorr = color.b / color.r;
    kcolorg = color.b / color.g;
}
color.rgb = min(color.rgb, 1);                  // clamp each of the 3 channels to at most 1
color.rgb = max(color.rgb, 0);                  // clamp each of the 3 channels to at least 0
average = min(average, 0.8333);
average = max(average, 0.1667);
finalmax = 3*(colormax - average) + 0.5;        // tone-map only the dominant channel
finalmax = min(finalmax, 1);
finalmax = max(finalmax, 0);
final.rgb = 3*(color.rgb - average) + 0.5;
final.rgb = min(final.rgb, 1);
final.rgb = max(final.rgb, 0);
if (colormax == color.r) {                      // rebuild the other channels from the original ratios
    final.r = finalmax;
    final.g = finalmax / kcolorg;
    final.b = finalmax / kcolorb;
}
if (colormax == color.g) {
    final.g = finalmax;
    final.r = finalmax / kcolorr;
    final.b = finalmax / kcolorb;
}
if (colormax == color.b) {
    final.b = finalmax;
    final.r = finalmax / kcolorr;
    final.g = finalmax / kcolorg;
}
 * I know it is a little tricky, and it does not treat colors of equal hue equally: if the maximum channel before any algorithm was 255 in one case and 170 in another, then for example RGB(255:170:0) stays RGB(255:170:0) after the algorithm, while RGB(170:113:0) becomes (almost) RGB(0:0:0), if the average in both cases is above 0.8333. You can say it is not fair, but it really is the best way to do it, almost or entirely without all the wrong-color distortion and imbalance consequences. With this modified algorithm no colours change their hue at all, and none collapse into the 6 basic colours as in the unmodified algorithm "final.rgb=3*(color.rgb-average)+0.5;". In this algorithm (and in the unmodified one too) too-weak and too-strong colours are still lost and turn black or white (or almost white, or yellow/cyan/pink if one of the RGB channels is 0). In the official algorithm, "final.rgb=(color.rgb/average)", only too-strong colours are lost when "average" is small, but there is no HDR (or only very weak HDR) in a bright scene (when "average" is big).
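For reference, the modified algorithm above can be condensed: tone-map only the dominant channel, then rebuild the other two from the original channel ratios. A compact Python translation (illustrative, with made-up names; a sketch of the idea, not a verified port of the shader code):

```python
def hdr_keep_hue(color, average, scale=3.0):
    """Hue-preserving variant of final = scale*(color - average) + 0.5:
    apply the clamp-and-expand curve to the dominant channel only, then
    scale the other two channels by their original ratio to it."""
    eps = 0.001
    r, g, b = (c + eps for c in color)          # avoid division by zero
    cmax = max(r, g, b)
    average = min(max(average, 0.1667), 0.8333)
    fmax = min(max(scale * (cmax - average) + 0.5, 0.0), 1.0)
    # each output channel keeps its original ratio to the dominant channel
    return tuple(fmax * (c / cmax) for c in (r, g, b))
```

A bright pixel saturates with its hue intact, and a pixel well below the average goes to black rather than to a distorted primary color.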
 * The biggest problem, I am afraid, is that it is not possible to calculate the average based on each pixel's maximum colour (choosing the maximal channel of each RGB pixel and summing all pixels' maximal values); only the average of all channels of all pixels is available. This is very bad for my real HDR algorithm, because if all pixels were filled with only one RGB channel, the algorithm would underestimate the real pixel brightness by 3 times (so about 2 times on average in a normal scene). The algorithm would shift toward the dark color range and most colours would turn white (very bright). This is also a problem in the official algorithm, and that is why the average (0<average<1) needs to be multiplied by 2 or 3, so the official algorithm must be:
 * final.rgb=0.75*(color.rgb/(average*2))=0.375*color.rgb/average; 0<color.rgb<1, 0.25<average<0.75, 0<final<1; // 1 or 3 channels may be active, so on average 2 channels are active
 * So if we multiply the all-channels average (0<average<1) by 2 or 3 in my real HDR algorithm, the algorithm may work only if one or two active RGB channels dominate in most pixels. And if we do not multiply the average by 2 or 3, the algorithm will never adapt to bright colours, and the brighter parts of the scene will stay white (too bright).
 * For the official algorithm, multiplying the average by 3 is also tragic, because if all pixels are RGB(255:0:0), then after the official HDR algorithm "final.rgb=(color.rgb/(average*3))" they become RGB(85:0:0). And if we do not multiply by 3 and leave the average of all pixels' channels at 0<average<1, then RGB(255:100:0) turns into RGB(255/((255+100)/2):100/((255+100)/2):0)=RGB(255/177.5:100/177.5:0) ~ RGB(1/0.696:0.392/0.696:0)=RGB(1.4366:0.56337:0). Say we want to prevent such things, so we multiply by 0.75 and do not let the average exceed 0.75 (if it is more than 0.75, then average=0.75). Then 0.75*RGB(1.4366:0.56337:0)=(1.07745:0.4225:0): almost everything is OK, especially if we do not let the average exceed, say, 0.6667:
 * final.rgb=0.6667*color.rgb/average; 0<color.rgb<1, 0.25<average<0.6667, 0<final<1; // the weakest colours can be increased at most 0.6667/0.25=2.6667 times
 * But then we have almost no HDR for bright colours and get a somewhat compressed bright scene. Still, I think the most important thing in HDR is that, for example, lamp-lit white paper does not look gray but white, due to adaptation; without the official HDR algorithm it would look gray, like RGB(70:70:70), instead of something like RGB(200:200:200). If the average is not allowed to get big enough, then RGB(85:85:85) is only adapted to RGB(170:170:170) (because 0.6667/0.3333=2), but even this can be enough to keep lamp-lit white objects from looking ridiculously dark gray instead of white.
 * A very good solution for the official algorithm is this:
 * final.rgb=0.5*color.rgb/average; 0<color.rgb<1, 0.25<average<0.5, 0<final<1.
 * This is because if only one channel is active and it is 0.5, we get average 0.5/3=0.1667, which is clamped up to 0.25, so we get 0.5*0.5/0.25=1. If two RGB channels are active and both equal 0.5, we get average=2*0.5/3=0.3333 and 0.5*0.5/0.3333=0.75. And if all 3 RGB channels are active and each is 0.5, the average is (0.5+0.5+0.5)/3=0.5 and the final value is 0.5*0.5/0.5=0.5. So in this case, whatever the colours are, they never exceed 1 (unlike 0.6667*0.6667/0.25=1.7778). For example, for RGB(0.5:0.3333:0) the average is 0.8333/3=0.2778, so final.r=0.5*0.5/0.2778=0.9 and final.g=0.5*0.3333/0.2778=0.6. Another example: RGB(0.5:0.7:0) turns into RGB(0.625:0.875:0). So the condition "0.25<average<0.5" is very important (it sets the minimum bright/white light in the scene, because I think you don't want to play a game where half the scene is pure white). — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 21:15, 23 November 2011 (UTC)  Correction: this guarantee fails after all; 0.5*0.6667/0.25=1.3333 still exceeds 1 if only one RGB channel is active in each pixel (average=0.6667/3=0.2222, clamped up to 0.25).
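The worked numbers above can be checked directly. A small Python sketch of final = 0.5*color/average with the 0.25-0.5 clamp (the function name is made up; channels in 0-1):

```python
def official_scaled(color):
    """final = 0.5*color/average, where average is the mean of the three
    channels clamped to 0.25-0.5 as proposed above; channels in 0-1."""
    avg = min(max(sum(color) / 3.0, 0.25), 0.5)
    return tuple(min(1.0, 0.5 * c / avg) for c in color)
```

This reproduces the examples above: RGB(0.5:0.3333:0) maps to about (0.9, 0.6, 0), RGB(0.5:0.7:0) to (0.625, 0.875, 0), and a lone 0.5 channel hits the clamp and maps to 1.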
 * If the human eye is capable of adaptation, then it most likely works like the official algorithm "final.rgb=color.rgb/average": even a small patch of bright light makes the eye adapt to that light rather than to the rest of the dark scene. The point is this: if in very strong lighting a human sees brightness values from 5 to 255 (out of a possible 0-255), then in very weak light the visible range is 0 to 51 (anything above 51 would be overbright; assume 51 is the maximum present in weak light). So in strong light, weak colours such as 1, 2, 3, 5, 10, 20 are seen 5 times less sensitively, as 0.2, 0.4, 0.6, 1, 2, 4; values below 1 are invisible, so 5, 10, 20 are seen in strong light as 1, 2, 4. You can therefore subtract 5/255=0.0196 in the algorithm; it makes almost no difference, but if you insist, the algorithm would look like this:
 * final.rgb=color.rgb/averageMSP-0.0196*averageMSP; \\ 0.2<averageMSP<1; averageMSP is the maximum single-pixel luminance (the largest channel of the brightest pixel) in the visible frame.
 * And if we want to adapt not to the maximum single-pixel brightness (the maximum channel of that pixel), but instead to the average over all pixel channels (or over each pixel's maximum channel), then:
 * final.rgb=color.rgb/average-0.0196*2*average; 0.1<average<1;
 * but then some pixels get overbright. Whether you subtract 5 from 255 at maximum average or 1 from 255 at minimum average (when all pixel luminances are 5 times larger) makes no real difference. So if we want to simulate human eye adaptation, we must give much more weight to bright pixels than to weak ones. This can be done by computing the average from the square root of each pixel's luminance, with all values between 0 and 1 (dividing by the number of samples only after the sum is taken). Better still, sum only each pixel's maximum RGB channel under the square root. This gives a larger average: instead of (0.2+0.9)/2=0.65 we get $$(0.2^{1/2}+0.9^{1/2})/2=(0.4472+0.94868)/2=0.6979$$. A root of any order can be used, e.g. power 1/3 or 1/4, to adapt strongly only when the colours really are weak (say most values in 0.05-0.2), and to adapt little or not at all when even 1/5 of the pixels are bright (and 4/5 weak). Another way is to work with values from 0 to 255 and average the channels (or each pixel's maximum channel) in logarithms, like this: $$46.018\cdot(\ln(255)+\ln(3))/2=46\cdot(5.54+1.0986)/2=46\cdot 3.3199=153$$ (where $$255/\ln(255)=46.018$$), compared with the plain average (255+3)/2=129. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 08:38, 26 November 2011 (UTC)
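The two bright-biased averages described above can be sketched in Python (function names are mine; this is an illustration, not any engine's implementation):

```python
import math

# Averaging square roots (values in 0..1) or logarithms (values in 1..255)
# weights bright pixels more heavily than a plain arithmetic mean does.

def sqrt_average(values):
    """values in [0, 1]; mean of square roots."""
    return sum(math.sqrt(v) for v in values) / len(values)

def log_average(values):
    """values in [1, 255]; mean of logs, rescaled so 255 maps to 255."""
    k = 255.0 / math.log(255.0)          # = 46.018
    return k * sum(math.log(v) for v in values) / len(values)

print(sqrt_average([0.2, 0.9]))          # ~0.698, vs plain mean 0.65
print(log_average([255, 3]))             # ~153,   vs plain mean 129
```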
 * Update to the above: the natural logarithm function is expensive and not very practical, but a fifth root is roughly equivalent. For example $$46.018\cdot\ln(128)=223$$ and $$255\cdot (128/255)^{1/5}=255\cdot 0.87123=222.$$ Another example: $$46.018\cdot \ln(50)=180$$ and $$255\cdot (50/255)^{1/5}=255\cdot 0.7219=184.$$ One more: $$46.018\cdot \ln(5)=74$$ but $$255\cdot (5/255)^{1/5}=255\cdot 0.4555=116.$$ Here 74 does not match 116; to match exactly, the exponent needs to be about 0.31 instead of 1/5=0.2. Then we get $$255\cdot (5/255)^{0.31}=255\cdot 0.295565=75$$ and $$255\cdot (128/255)^{0.31}=255\cdot 0.807621=206.$$ So it does not replace the natural logarithm exactly, but gives a very similar result.
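The comparison above between the scaled logarithm and a power curve can be checked directly (a sketch; the exponent 0.31 is the value proposed in the comment):

```python
import math

# Compare the scaled natural log 46.018*ln(x) with the cheaper power
# curve 255*(x/255)**0.31 over a few 8-bit values.

def log_curve(x):
    return (255.0 / math.log(255.0)) * math.log(x)   # 46.018 * ln(x)

def pow_curve(x, p=0.31):
    return 255.0 * (x / 255.0) ** p

for x in (5, 50, 128, 255):
    print(x, round(log_curve(x)), round(pow_curve(x)))
```

The two curves agree at the endpoints and track each other loosely in between, as the comment concludes.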
 * I say "if the human eye is capable of adaptation" because the change in pupil size may be vestigial: in strong light it is hard to tell the difference between 1 and 5 (out of 0-255, if 5 appears in weak light and 1 in strong light). More importantly, strong light, especially sunlight, entering the eye reflects from the iris and the white of the eye, and since light reflects at the boundary between two media (eye lens to air), part of it reflects at the lens-air interface back onto the iris (you can see how a laser pointer reflects from a window in the same way). This internal reflection probably produces most, if not all, of the blooms, glows and glares, so quite weak colours (say 0 to 20-50 out of 0-255) are washed out by the reflected strong light. The iris itself, not being perfectly flat, also scatters light from a strongly lit point onto neighbouring receptors, so weak light near a strong light is mixed into the strong light's halo and glare. Furthermore, the change in pupil size need not give 5-7 times more sensitivity at maximum aperture than at minimum; it might give only 2, 1.5 or 1.3 times (which would mean that a monitor's maximum white is only 1.3-2 times weaker than sunlit white paper, and that a lamp at 1-3 metres is not so weak compared with sunlight; but then two such lamps would have to outshine direct sunlight).
So if, say, weak colours look 2 times stronger at maximum pupil size than at minimum, then at maximum pupil size a human sees the range 1-128 (out of 0-255, 0 being black) and at minimum pupil size 2-255. The eye probably does not switch only between these two ranges but also uses intermediate ones, such as 1.5-191, and it is hard to tell whether darker objects near a strong light are hard to see because of pupil adaptation or because of the masking effect of blooms and glows from internal lens reflections. Comparing colours is hard in general: even on a monitor, with RGB(255:0:0) and RGB(191:0:0) separated by black space, it is hard to say which is which when they are not adjacent. Maybe pupil size only matters between average and large aperture, with nothing changing from average to small, etc.
 * BTW, I ran every test I could to see whether pure red, green or blue turns grey when very weak (you need a monitor with a high contrast ratio; some CRT monitors can even have too much contrast, so that values below 50 are invisible, and you need to calibrate contrast and brightness in the display driver software to keep using them). At first glance very weak blue and green are harder to tell apart, while red is much easier to distinguish from the others; but no matter how weak the colours are, it is still possible to name the colour with 90-99% accuracy, especially for red, and especially when the weak red, green and blue samples are displayed together. Specular highlights of all three colours, and thresholds like RGB(1:0.4:0) reading as red rather than orange, mean the number of distinguishable colours decreases in the dark; if an object mixes two RGB channels, only the stronger channel remains visible in very weak light and the weaker falls below the visibility threshold. The colours are so weak that they require concentration, which may be why colours are hard to recognise in the dark. So on a monitor at night you either see a very weak colour of a single red, green or blue channel, or nothing at all; there is no basis for the claim that something in the eye makes everything look monochrome grey at night. Dark colours just look dark, and that is all. If you want night scenes in a game, specular highlights must dominate the materials, but that mostly happens naturally, especially on LCD monitors with low contrast around 300:1, where even level 0 glows like 30-50 does on a monitor with 1000:1 contrast or more.
Such low-contrast monitors are better suited to daytime use; their LED backlight overwhelms values like 3, 5 or 10, so you either do not see these weak colours at all, or they are no longer pure red, green or blue but shifted towards washed-out analogues like RGB(255:200:200) for red, RGB(200:255:200) for green, RGB(200:200:255) for blue. So there is no need to simulate grey for dark scenes in a game: the LCD backlight and room light grey out weak colours quite enough already. I have to admit, though, that monitors with too much contrast push the whole colour spectrum slightly towards the 6 basic colours (red, green, blue, cyan, yellow, magenta), like my unmodified algorithm does, because 128 is then no longer 2 times weaker than 255 but about 2.2 times, and 64 is not 2 times weaker than 128 but about 2.5 times. http://imageshack.us/g/827/rgbcolorsdark2.png/


 * So contrast is a multiplication of each pixel's colour by some number (or a division), and brightness is an addition of some number to all pixels' colours (or a subtraction). If you want a combination of brightness and contrast such that the response line in the AMD display driver control centre ends exactly in the upper-right corner, with its lower end higher than the bottom-left corner, you need to add 2.55 times more brightness than you subtract contrast, for example brightness=100, contrast=100-100/2.55=61 (defaults: brightness=0, contrast=100).
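The multiply/add definitions above can be sketched in Python (an illustration only; the function name and 0-255 clamping convention are my own):

```python
# Contrast multiplies every channel value; brightness adds a constant.
# Results are clamped to the displayable 0-255 range.

def adjust(c, contrast=1.0, brightness=0.0):
    """c in [0, 255]; returns the adjusted, clamped value."""
    return min(max(c * contrast + brightness, 0.0), 255.0)

print(adjust(128, contrast=1.2))     # contrast scales the value: 153.6
print(adjust(128, brightness=20))    # brightness shifts the value: 148.0
print(adjust(250, brightness=20))    # clamped at the top: 255.0
```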
 * Now about the gamma algorithm, which is widely used alongside brightness and contrast. Gamma is controlled by changing $$k_g$$, with $$0.5<k_g<3.5$$. The gamma algorithm is this:
 * $$final.rgb=(color.rgb)^{1/k_g}.$$
 * The gamma algorithm is almost the same as "final.rgb=color.rgb*2/(1+color.rgb)" when compared with $$k_g=2$$, or "final.rgb=color.rgb*3/(1+2*color.rgb)" when compared with $$k_g=3$$, but in both cases gamma increases the colour values a little more than those two formulas do.
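The comparison above is easy to verify numerically (a sketch; function names are mine):

```python
# Compare gamma c**(1/kg) with the rational curves 2c/(1+c) and 3c/(1+2c).
# The claim: gamma lies slightly above the corresponding rational curve.

def gamma(c, kg):
    return c ** (1.0 / kg)

def rational(c, n):
    """n=2 gives 2c/(1+c); n=3 gives 3c/(1+2c)."""
    return n * c / (1.0 + (n - 1.0) * c)

for c in (0.1, 0.25, 0.5):
    print(c, gamma(c, 2), rational(c, 2))   # gamma is the larger of the two
```

At c=0.25, for instance, gamma with $$k_g=2$$ gives 0.5 while 2c/(1+c) gives 0.4.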
 * And I admit that for monitors with a very large contrast ratio like 10000:1, a little gamma can correct the colour ratios: 255 should be 2 times brighter than 128, 128 two times brighter than 64, 64 two times brighter than 32, and so on. On high-contrast monitors 64 is about 3 times brighter than 32, and 32 about 4 times brighter than 16. You should see the same hue whether it is RGB(255:100:0), RGB(128:50:0) or RGB(64:25:0).
 * For HDR gamma can be used for compressed luminance:
 * $$final.rgb=(color.rgb)^{1/2}.$$
 * But this way you get colour greying, because orange becomes almost yellow, so the algorithm should instead be this:
 * $$final.rgb=color.rgb / sqrt(max(color.r, max(color.g, color.b)));$$ 0<color.rgb<1. Here "sqrt" is the square-root function and "max" picks the larger of two numbers (as in HLSL). Compressed luminance is good for adding weak and strong light without overbrightening, while weak light alone still looks fairly strong. But then why bother with things like light attenuation; perhaps it is better to use normal HDR without compressed luminance. BTW, sky light is blue and lamp light is yellow, together white; perhaps that is how they avoid overbrightening each other. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 14:13, 7 December 2011 (UTC)
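The hue-preserving variant above can be sketched in Python (an illustration; the function name is mine):

```python
import math

# Divide all channels by the square root of the largest channel: luminance
# is compressed, but the channel ratios (the hue) are preserved, unlike
# taking the square root of each channel separately.

def compress_luminance(color):
    m = max(color)
    if m == 0.0:
        return color
    s = math.sqrt(m)
    return tuple(c / s for c in color)

# Orange (0.64, 0.16, 0): max = 0.64, sqrt = 0.8 -> (0.8, 0.2, 0).
# The r:g ratio stays 4:1; per-channel sqrt would give (0.8, 0.4, 0), yellower.
print(compress_luminance((0.64, 0.16, 0.0)))
```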
 * A funny thing: if light falls on a 3x3 grid, each of the 9 squares gets 10 watts of light energy; and if the light at the same distance is 10 times stronger, each square gets 10 times more energy, not 100 times. I do not know the real reason why decibels are sometimes measured with a square root and plotted logarithmically, but I have seen HDR implementations try to use squaring. In practice, on a high-contrast monitor like 1000:1, colour 255 is about 3 times stronger than 128, 128 about 3 times stronger than 64, 64 about 3 times stronger than 32, and so on. On a normal (perhaps cheaper) 300:1 monitor, 255 is about 2 times stronger than 128, and 128 about 2 times stronger than 64. On a really high-contrast monitor like 10000:1, each halving of the value is about 5 times weaker. Of course a contrast like 100000:1 could also mean that 255 is 100000 times stronger than 0, not than 1. With LCDs, if a strong LED backlight sits behind the panel, both 0 and 1 end up with some brightness, at least usually; but who knows, maybe 1 really is 10-1000 times stronger than 0, and that is the whole point and quality of high-contrast monitors. It also depends on the contrast ratio the video camera records in, i.e. how many times weaker 1 is than 255 (about 300, 1000 or 10000 times), because otherwise colours and textures will not match. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 22:11, 8 December 2011 (UTC)
 * If you have a monitor (with a huge contrast like 4^8=65536:1) where 255 is 4 times stronger than 128, 128 is 4 times stronger than 64, 64 is 4 times stronger than 32 and so on, then raising gamma to $$k_g=2$$ applies the algorithm "$$final.rgb=(color.rgb)^{1/2};$$ 0<color.rgb<1" and you get that 255 is 2 times stronger than 128, 128 is 2 times stronger than 64, and 64 is 2 times stronger than 32. Because $$0.4^{1/2}/0.1^{1/2}=0.632/0.316=2.$$
 * If you have a monitor (with a huge contrast like 8^8=16777216:1) where each halving of the value is 8 times weaker, then raising gamma to $$k_g=3$$ applies the algorithm "$$final.rgb=(color.rgb)^{1/3};$$ 0<color.rgb<1" and again each halving becomes 2 times weaker. Because $$0.8^{1/3}/0.1^{1/3}=0.9283/0.46416=2.$$
 * For monitors with contrast 2^8=256~300:1 there is no point in gamma correction, because 1 (and even 0) already shines quite strongly. So unless monitor makers put their own calibration into the monitor (such that 0 is 1000 times weaker than 1, and 1 about 300 times weaker than 255), gamma should ideally let you choose the desired contrast ratio (from, say, 50:1 to 100000:1) by changing the coefficient $$0.5<k_g<3.5$$. The good thing about gamma is that it does not raise 0 at all. This is the main advantage of high-contrast monitors over low-contrast ones (which have a strong 0 and a contrast between 1 and 0 of only about 2:1, at most 10:1): if 0 is truly black, weak colours like 3, 5, 10 become more visible when gamma is above 1 (default gamma=1). For some reason, at least on some old CRT monitors, the combined contrast-and-brightness correction "contrast=100-brightness/2.55" raises weak colours better and with more correct contrast. (You can judge whether contrast is correct by comparing 10 with 20, 255 with 128, and 10 with 5: in every case the half value should look half as bright. With gamma correction the difference between 255 and 128 seems to disappear, the difference between 5 and 10 is very large, and between 10 and 20 very small; but that may be because a CRT screen becomes too negatively charged, which matters a lot for weak colours and hardly at all for strong ones, and after about 20 minutes the screen charges up and weak colours become weaker still; on LCD monitors gamma should behave correctly.)
This combined correction "contrast=100-brightness/2.55" makes the differences between weak colours almost invisible: if colour 10 was 2 times stronger than 5 before correction, afterwards it is only about 1.1-1.3 times stronger; for strong colours almost nothing changes, e.g. if 128 was 2 times stronger than 64, after correction it is about 1.9 times stronger. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 19:25, 12 December 2011 (UTC)
 * If you have a monitor with contrast ratio 2^8=256:1, where 255 is 2 times stronger than 128, 128 is 2 times stronger than 64, and 64 is 2 times stronger than 32, then changing gamma from 1 to 2 gives contrast $$(\sqrt{2})^8=1.4142^8=16:1$$: 255 becomes 1.4142 times stronger than 128, 128 becomes 1.4142 times stronger than 64, and so on, because $$\sqrt{0.2}/\sqrt{0.1}=\sqrt{2}=1.4142.$$ So if for HDR $$k_g$$ changes from 1 to 2, the weakest colour 1/255=0.003921568 in a very dark scene becomes $$0.003921568^{1/2}=0.062622429$$, i.e. 0.0626*255=15.97=16. Another example: at $$k_g=1.5$$, $$0.003921568^{1/1.5}=0.00392^{2/3}=0.024867943$$, i.e. 0.02487*255=6.34=6. At $$k_g=2$$ the range 1-16 is expanded to 16-64, because $$(16/255)^{1/2}=0.062745^{1/2}=0.250489716$$, i.e. 0.2504897*255=63.87=64. So at $$k_g=2$$ we want to subtract 16, i.e. 16/255=0.0627; at $$k_g=1.5$$ we want to subtract 6, i.e. 6/255=0.0235; for intermediate $$k_g$$ ($$1<k_g<2$$) we subtract the corresponding values from 1/255 to 16/255. So the algorithm is this:
 * $$final.rgb=(color.rgb)^{1/k_g}-(1/255)^{1/k_g}=(color.rgb)^{1/(2-average)}-(1/255)^{1/(2-average)};$$ 0<color.rgb<1; 0<average<1.
 * We may also want a scene in weak lighting to show 1/255=0.0039 as if it were 16/255=0.0627, in which case we do not subtract anything:
 * $$final.rgb=(color.rgb)^{1/k_g}=(color.rgb)^{1/(2-average)};$$ 0<color.rgb<1; 0<average<1.
 * But if we do not subtract, the contrast ratio shrinks from 1:16 to 16:64=1:4, which is very small. And if we subtract, the contrast ratio increases 3 times: if it was 1:16 before the algorithm, afterwards it is (17-16):(64-16)=1:48. Unfortunately, subtraction distorts the normal colour balance. So it is better to use the normal algorithm "$$final.rgb=color.rgb/average$$", or a correction that does not distort the natural colour balance:
 * $$final.rgb=((color.rgb)^{1/(2-average)})/average;$$ 0<color.rgb<1; 0<average<1.
 * This way the weakest colour is raised 16 times or more while keeping the natural colour balance. For example, if average=16/255=0.062745, then color=1/255=0.00392 is raised to:
 * 1) $$final.rgb=(1/255)^{1/(2-16/255)}/(16/255)=0.00392^{1/1.937254902}/0.062745=0.057247641/0.062745=0.912384286;$$ or 232.66=233; so the average must be kept within limits like 0.5<average<1;
 * 2) $$final.rgb=(1/255)/(16/255)=0.00392/0.062745=0.0625;$$ or 15.9375=16.
 * Another example: average=128/255=0.5, color=100/255=0.392, and for the first case (0.5<average<1):
 * 1) $$final.rgb=(100/255)^{1/(2-0.5)}/0.5=0.392^{1/1.5}/0.5=0.5358/0.5=1.0715;$$ or 273=>255;
 * 2) $$final.rgb=(100/255)/0.5=0.392/0.5=0.7843;$$ or 200.
 * 2.1) $$final.rgb=(100/255)/0.2=0.392/0.2=1.960784314;$$ or 500=>255.
 * And if average=128/255=0.5, color=1/255=0.00392 and for first case 0.5<average<1, then:
 * 1) $$final.rgb=(1/255)^{1/(2-0.5)}/0.5=0.00392^{1/1.5}/0.5=0.02487/0.5=0.049735887;$$ or 12.68=13;
 * 2) $$final.rgb=(1/255)/0.5=0.00392/0.5=0.007843137;$$ or 2.
 * 2.1) $$final.rgb=(1/255)/0.2=0.00392/0.2=0.019607843;$$ or 5. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 21:57, 12 December 2011 (UTC)
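The two variants in the worked examples above can be sketched in Python (variant names are mine; "average" is assumed to be the clamped frame average from the surrounding discussion):

```python
# Variant 1: adaptive gamma combined with division by the average.
# Variant 2: plain division by the average.

def variant1(c, average):
    return c ** (1.0 / (2.0 - average)) / average

def variant2(c, average):
    return c / average

# average = 16/255, color = 1/255: variant 1 ~0.912 (i.e. 233), variant 2 = 0.0625 (16)
print(variant1(1/255, 16/255), variant2(1/255, 16/255))

# average = 0.5, color = 1/255: variant 1 ~0.0497 (13), variant 2 ~0.0078 (2)
print(variant1(1/255, 0.5), variant2(1/255, 0.5))
```

The printed values reproduce cases 1) and 2) of the examples above.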
 * Note that the gamma-based algorithm, whether or not it is combined with "final.rgb=color.rgb/average", still changes the contrast between, say, 128 and 64 from the normal 2:1 to $$0.2^{1/1.5}/0.1^{1/1.5}=1.5874:1$$, or more depending on "average", e.g. $$0.2^{1/1.25}/0.1^{1/1.25}=1.7411:1$$. So this algorithm greys the image, combined or not with "final.rgb=color.rgb/average"; but it greys all colours equally, strong or weak, with the contrast between colours depending only on "average".
 * The compressed-luminance algorithm "final.rgb=(2*color.rgb)/(1+color.rgb)" greys the image in the same way whether it is used before or after "final.rgb=color.rgb/average" or alone, but it greys stronger colours more than weaker ones: after it, the contrast between, say, 128 and 64 is smaller than between 20 and 10. For example [2*0.2/(1+0.2)]/[2*0.1/(1+0.1)]=[0.4/1.2]/[0.2/1.1]=[0.3333]/[0.1818]=1.8333, so the contrast becomes 1.8333:1 instead of the normal 2:1 (here the colours were 0.1*255=26 and 0.2*255=51). For colours 128/255=0.5 and 64/255=0.25, [2*0.5/(1+0.5)]/[2*0.25/(1+0.25)]=[1/1.5]/[0.5/1.25]=[0.6667]/[0.4]=1.6667, so the contrast between 128 and 64 becomes 1.6667:1 instead of 2:1. Between 255 and 128 the contrast shrinks to 1.5:1, because [2*1/(1+1)]/[2*0.5/(1+0.5)]=[1]/[0.6667]=1.5.
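The contrast figures above can be verified with a short sketch (function names are mine):

```python
# The compression 2c/(1+c) shrinks the ratio between a colour and its half,
# and shrinks it more for bright pairs than for dark ones.

def reinhard_like(c):
    return 2.0 * c / (1.0 + c)

def ratio_after(c):
    """Contrast between c and c/2 after compression (2:1 before)."""
    return reinhard_like(c) / reinhard_like(c / 2.0)

print(ratio_after(0.2))   # ~1.833 for the dark pair 51 vs 26
print(ratio_after(0.5))   # ~1.667 for 128 vs 64
print(ratio_after(1.0))   # 1.5    for 255 vs 128
```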
 * But I will tell you a secret: the average is calculated using only 16 texture centres (pixels), or, less likely, every sixteenth pixel on screen (width*height/16). So it is not a very true average, and the slower the adaptation, the better. So it is best of all to use the maximum over all 16 pixels, and the maximum of each pixel's RGB channels, instead of the average; then it works well in all the algorithms. If color=230/255=0.9 and colormax=230/255=0.9, then:
 * 1) $$final.rgb=0.9^{1/(2-0.9)}/0.9=0.9^{1/1.1}/0.9=0.90866/0.9=1.009624247;$$ or 257.45=>255; 0.5<colormax<1;
 * 2) $$final.rgb=0.9/0.9=1;$$ or 255. — Preceding unsigned comment added by Versatranitsonlywaytofly (talk • contribs) 22:59, 12 December 2011 (UTC)
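The max-of-samples idea above can be sketched in Python (an illustration; the function name, sample data and 0.5 floor are my own, not from any specific engine):

```python
# Estimate scene brightness from the maximum channel of a small set of
# sample pixels, rather than from a full-frame average, and clamp it from
# below so dark scenes are not over-amplified.

def adaptation_level(samples, floor=0.5):
    """samples: list of (r, g, b) tuples in [0, 1]; returns the clamped max channel."""
    m = max(max(px) for px in samples)
    return max(m, floor)

samples = [(0.9, 0.2, 0.1), (0.3, 0.3, 0.3), (0.1, 0.0, 0.6)]
level = adaptation_level(samples)
print(level)                                        # 0.9
print(tuple(c / level for c in (0.9, 0.2, 0.1)))    # brightest pixel maps to 1.0
```

With a maximum instead of an average, dividing by the level guarantees no sampled pixel exceeds 1 in any algorithm that uses it as the divisor.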

External links modified
Hello fellow Wikipedians,

I have just modified 4 external links on High-dynamic-range rendering. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20071011004622/http://ati.amd.com/products/Radeonhd2400/specs.html to http://ati.amd.com/products/radeonhd2400/specs.html
 * Added archive https://web.archive.org/web/20080701085758/http://ati.amd.com/products/Radeonhd4800/index.html to http://ati.amd.com/products/radeonhd4800/index.html
 * Added archive https://web.archive.org/web/20110307074455/http://www.unrealengine.com/features/rendering to http://www.unrealengine.com/features/rendering/
 * Added archive https://web.archive.org/web/20070624212106/http://www.gsulinux.org/~plq/ to http://www.gsulinux.org/~plq

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 03:21, 2 April 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on High-dynamic-range rendering. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20110323182005/http://source.valvesoftware.com/rendering.php to http://source.valvesoftware.com/rendering.php

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 14:01, 3 November 2017 (UTC)