Wikipedia:Reference desk/Archives/Science/2019 August 8

= August 8 =

Colour specification
What do the variables x10, y10, and Y10 stand for in specifications such as at Flag of Belarus? Two of them are under the heading “Colour coordinate”, for which I found the closest match in the article CIE 1931 color space, but that doesn't use those variables. ◄ Sebastian 07:39, 8 August 2019 (UTC)


 * The little x and little y are indeed coordinates in the CIE_1931_color_space; see specifically #CIE_xy_chromaticity_diagram_and_the_CIE_xyY_color_space. The little 10 indicates that the colors are "as seen" by a CIE 1964 10° Standard Observer, see #CIE standard observer. Big Y is luminance, defined at #Meaning of X, Y and Z. Essentially, all of this is meant to define not simply what the color is, but exactly what it is supposed to look like to the human eye. Someguy1221 (talk) 07:58, 8 August 2019 (UTC)
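The arithmetic relating those quantities is simple enough to sketch. Assuming the (x, y) chromaticity plus luminance Y described above, converting back to XYZ tristimulus values could look like the following; the numeric values in the example are made up for illustration and are not taken from any actual flag specification:

```python
def xyY_to_XYZ(x, y, Y):
    """Convert CIE (x, y) chromaticity + luminance Y to XYZ tristimulus values."""
    if y == 0:
        # No luminance can be carried; chromaticity is undefined at y = 0.
        return 0.0, 0.0, 0.0
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return X, Y, Z

# Hypothetical xyY triple, loosely in the red region (illustrative only):
X, Y, Z = xyY_to_XYZ(0.553, 0.318, 17.1)
# Recovering x = X / (X + Y + Z) round-trips back to the input chromaticity.
print(X, Y, Z)
```

The useful property is that (x, y) captures "which color" independent of intensity, while Y carries the brightness, which is exactly why the flag specification lists them separately.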


 * 👍 Great answer, thanks! ◄ Sebastian 08:10, 8 August 2019 (UTC)

Why don't we see a 4-D color space?
Reading the section cited by Someguy1221 above, I wonder why we don't see a four-dimensional color space, given our 4 types of receptors. With the rod cells already providing brightness information, the cone cells should be free to give us three dimensions of chromaticity. (Conceivably, our photoreceptors might be linearly dependent, but their spectral sensitivities are so different that this is out of the question.) Or are the rod cells so saturated in normal light that they don't provide any information? ◄ Sebastian 09:06, 8 August 2019 (UTC)
 * 4-D? Seeing into the future? ←Baseball Bugs What's up, Doc? carrots→ 12:28, 8 August 2019 (UTC)


 * Possibly because humans are adaptive to brightness and we're (as a result) terrible at judging it. We're not good at brightness comparisons across the field of view and we just can't do brightness over time. So that fourth axis would be real, and we can now quantify it with our instruments, but as an issue of perception it would be of very little value to us. Andy Dingley (talk) 12:59, 8 August 2019 (UTC)
 * I'm not sure about the cones and color perception, but humans' brightness perception occurs on a logarithmic (magnitude) scale and not on a linear scale. See Weber–Fechner law, the section on vision.  -- Jayron 32 16:36, 8 August 2019 (UTC)
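As a concrete instance of that logarithmic scale, the astronomical magnitude system mentioned above encodes brightness *ratios* as magnitude *differences*: a factor of 100 in flux corresponds to exactly 5 magnitudes (Pogson's relation). A minimal sketch:

```python
import math

def magnitude_difference(flux_a, flux_b):
    """Apparent-magnitude difference between two sources, per Pogson's relation.

    Brighter source -> more negative magnitude, hence the minus sign.
    """
    return -2.5 * math.log10(flux_a / flux_b)

# A source 100x brighter differs by only 5 magnitudes:
print(magnitude_difference(100.0, 1.0))  # -> -5.0
```

This is the sense in which perception compresses intensity: equal perceptual steps correspond to equal multiplicative steps in physical flux.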


 * @BB: At best, we can see into the past.  If you look at the stars at night, the light from the closest star you can see is 4.2 years old.  The tip of your nose is a bit closer, but neither Proxima Centauri nor your proboscis may still exist at the time of perception.  Sniff!  --Cookatoo.ergo.ZooM (talk) 18:34, 8 August 2019 (UTC)


 * In the case of the nose, the time it takes to convert photons on the retina into the concept "nose" would be far longer than the time for those photons to travel from the nose to the retina. SinisterLefty (talk) 19:45, 8 August 2019 (UTC)
 * The light is from how the star looked 4.2 years ago, but the star is a lot older than that. What I'm wondering is what the 4-D is about. I didn't see it in the linked article. ←Baseball Bugs What's up, Doc? carrots→ 20:37, 8 August 2019 (UTC)


 * They are asking why a color can be fully defined by RGB (or cyan/magenta/yellow) values, that is, 3 color "dimensions", as opposed to RGB and brightness, for 4 "dimensions". SinisterLefty (talk) 23:10, 8 August 2019 (UTC)
 * OK, thanks for the explanation. ←Baseball Bugs What's up, Doc? carrots→ 23:51, 8 August 2019 (UTC)
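That three-vs-four split shows up in practical color spaces: formats like Y'CbCr deliberately factor brightness out into its own channel, leaving two chroma channels, so the total is still three numbers. A small sketch using the standard ITU-R BT.601 luma coefficients (full-range variant, chosen here for illustration):

```python
def rgb_to_ycbcr(r, g, b):
    """RGB components in [0, 1] -> (Y', Cb, Cr), BT.601 coefficients, full range."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: the "brightness" channel
    cb = (b - y) / 1.772 + 0.5              # blue-difference chroma
    cr = (r - y) / 1.402 + 0.5              # red-difference chroma
    return y, cb, cr

# Pure white carries all its information in luma; chroma sits at the
# neutral midpoint 0.5 for any gray, including white.
print(rgb_to_ycbcr(1.0, 1.0, 1.0))
```

Video codecs exploit exactly this separation: the eye tolerates coarser chroma than luma, so the two chroma channels are typically subsampled.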


 * The reason that most humans see "three" colors - or, more accurately, that their eyes are photopically stimulated in three different bands of light wavelengths - is called trichromacy. There is a biochemical explanation for this: specific chemicals called photopsins exist in the eye and are responsible for the color sensitivity of the human eye.  Most humans with nominally "standard" color vision have exactly three (not two, and not four) different chemicals that are sensitive to specific wavelengths of light: OPN1SW, OPN1MW, and OPN1LW.  These are the chemicals that mediate light sensitivity in the "cones," the color-sensitive retina cells.  A different chemical, rhodopsin, resides in the "rods," the brightness-sensitive cells.  Rhodopsin produces a single response channel: an absorbed photon triggers the same signal regardless of its wavelength within the visible spectrum, so the rods by themselves cannot distinguish colors.
 * Human vision is very complicated, and there is a great diversity in the human population: some humans have physiological differences that cause things like color blindness; and any discussion about human color vision would be incomplete if we didn't raise the philosophical complications about the subjectivity of perception; and we need to be careful about word-choice when we are distinguishing between "normal" and "non-normal" humans, simply based on their physiological differences. Ultimately, the reason that most humans have color-vision that is very well modeled by exactly three color parameters is because most humans have exactly three color-sensitive chemicals in their eyes.
 * When we choose to use "X/Y/Z" or "R/G/B" or some other scheme to numerically model the human's perceptual response, we're simply doing a little bit of math to project a sub-sampling of wavelength sensitivity onto a three-element basis. The degree to which the basis accurately, cleanly, and efficiently models human perception ultimately depends on the application: for example, if we're mixing paints, we might want to use the first three of a "CMYK"-like color-space, defining three different paints that mix to form a specific color, which we can then darken by adding black paint.  Or, if we're glowing photons out of a phosphor screen, we might want to use a "YUV"-like color space where the brightness (or amount of glowing) is intrinsically linked to the color that's getting glowed.  Or if we care about compressing videos so that we can stream more colorful cat-video over more internet-wire while using the same number of bits, we might project the exact same color primaries onto a different set of numbers.
 * What you will very rarely find is a system that uses more than three primaries. Such a system could produce a full color spectrum with precise control of the power at all wavelengths: this is the domain of very specialized and expensive equipment like the monochromator, or these very nice Gamma Scientific 35-wavelength tunable LED emitters that let you "almost" dial a spectrum with per-wavelength accuracy.  The thing is, a human eyeball can't see the difference between, say, red light at one wavelength and equiphotopic red light at another wavelength, and neither can the brain to which it is attached; but a camera can see that difference - the photofilter chemicals in the camera aren't identical to the human eye's biological photopsins! - so this is a real problem if we want the human eye to "see" the same color when comparing the camera's output to the real scene!
 * Nimur (talk) 21:42, 8 August 2019 (UTC)
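The metamerism point above (physically different spectra, identical perceived color) is just linear algebra: with more wavelength bands than receptor channels, the sensitivity matrix has a null space, and any two spectra that differ by a null-space vector are indistinguishable. The 3×4 matrix below is invented purely for illustration; real cone sensitivities are smooth curves sampled at many wavelengths, but the principle is the same:

```python
# Rows: toy "S/M/L cone" sensitivities at four sample wavelengths.
# These numbers are fabricated so that (1, -1, 1, -1) lies in the null space.
SENSITIVITY = [
    [0.9, 0.6, 0.1, 0.4],
    [0.2, 0.7, 0.8, 0.3],
    [0.1, 0.4, 0.9, 0.6],
]

def response(spectrum):
    """Three-channel receptor response to a four-band power spectrum."""
    return [sum(s * p for s, p in zip(row, spectrum)) for row in SENSITIVITY]

spectrum_a = [0.2, 0.4, 0.4, 0.1]
spectrum_b = [0.3, 0.3, 0.5, 0.0]   # differs from spectrum_a in every band

# Both spectra excite the three channels identically: they are metamers.
print(response(spectrum_a))
print(response(spectrum_b))
```

A camera with different filter curves corresponds to a different matrix, whose null space generally does not coincide with the eye's, which is exactly why a camera can distinguish two lights that look identical to us (and vice versa).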
 * Or are the rod cells so saturated in normal light that they don't provide any information? Yes, basically, as I understand it. In typical daylight your vision (photopic vision) is more or less entirely the product of the cone cells alone. (From rod cell: …rods have little role in color vision…) As the light gets dimmer (mesopic vision), the cone cells respond less and the rods begin to contribute more, progressing to scotopic vision and then full night vision, where only the rods contribute. Typical color spaces model photopic vision, I believe. There might be some interesting models for dim-light vision, though I don't have much knowledge of that. Looking into the scientific literature on dim-light vision is probably the thing to do if you're interested in the subject. --47.146.63.87 (talk) 21:55, 8 August 2019 (UTC)
 * The Purkinje effect article may be of interest in this context. {The poster formerly known as 87.81.230.195} 2.123.24.56 (talk) 00:48, 9 August 2019 (UTC)
 * Self-correction/clarification: scotopic vision is vision under little-to-no light; there's no "stage" after that. I wasn't quite sure when writing the above message, but then I re-read the articles in detail. --47.146.63.87 (talk) 05:18, 9 August 2019 (UTC)

Partially redundant to the above, but yes, rods become saturated somewhere between 1 and 10 lux, which is really not that bright. Cones also have a logarithmic response to luminance, but the response is generally even flatter than it is for rods. Basically, color vision is actually not that great at distinguishing changes in total brightness. So yeah, I'd imagine sensory processing with these differences would be quite challenging. There is also the issue that rods actually have a much longer integration time than cones, so it's not even really the same type of information. Someguy1221 (talk) 06:05, 9 August 2019 (UTC)
 * We often perceive brightness changes simply as the passing shadows that cause them. In terms of information, with most objects it's uncharacteristic noise. --2600:1700:90E0:E040:D133:FAC5:3366:749E (talk) 03:11, 14 August 2019 (UTC)