Talk:Cardinal point (optics)

Diagrams etc.
A few things: --Bob Mellish 17:08, 9 March 2006 (UTC)
 * 1) I believe the principal planes are actually principal surfaces (curved) unless you're using the paraxial approximation. Worth mentioning, or leave that to another article?
 * 2) The nodal points are coincident with the principal points if the lens has the same medium on both sides. Probably worth adding.
 * 3) What diagrams do we want? I was thinking of basically redrawing Hecht (2nd ed.) figure 6.1. Would fig 6.2 & 6.3 be useful also, or too much?
 * 4) Any preferred labelling scheme? Hecht uses F1, F2, V1, V2, H1, H2, etc., whereas Jenkins & White use F, F', A, A', H, H'.

--Srleffler 17:53, 9 March 2006 (UTC)
 * Yes, the principal surfaces are actually curved, if you're not using the paraxial approximation. Perhaps that could be mentioned. The article (and most of the rest of geometric optics on Wikipedia) assumes the paraxial approximation though. I'm not sure how useful it is to consider the cardinal points at all, once one moves beyond paraxial optics.
 * 2) It's in there already.
 * 3) Hecht's 6.1 and 6.2 look good. I'm not so sure about 6.3.
 * 4) No preference on numbers vs. primes. I prefer V to A for the vertices, for obvious reasons. Greivenkamp uses F, F', P, P', V, V', and N, N', which seems nice and simple. Any idea why the more conventional symbol for the principal planes is H? One of those things inherited from German, perhaps?

--Bob Mellish 01:35, 10 March 2006 (UTC)
 * OK, I've left it out of the diagrams for now.
 * 2) Oops, yes, so it is. Sorry.
 * 3) Diagrams of the first two have been added. Let me know if you want anything tweaked in them.
 * 4) I've followed that convention. No idea about H; probably German, as you say.

Excellent diagrams, Bob! The dashed "virtual" extensions on the rays are good, and they pointed out to me that the definition of P and P' that I had written was unclear if not outright incorrect. --Srleffler 02:18, 10 March 2006 (UTC)

Thanks Srleffler for integrating the section I wrote about the back focal plane into this article - and improving it. --Richard Giuly 08:51, 28 July 2006 (UTC)

H for 'Haupt' would fit as German for 'principal' or main, as in main station = Hauptbahnhof --195.137.93.171 (talk) 06:24, 7 March 2008 (UTC)

I've seen HH' called the 'Hiatus' - that could well be used in German, too - a lot of German is Latin-based. (I don't know if 'Hiatus' is really Latin - just sounds like it!) If the planes are ever transposed so that the space is used twice (negative gap), then Hauptplan would be a better word. (I don't know if that is possible - diverging lenses? Concave lens made of air underwater?) --195.137.93.171 (talk) 02:42, 8 March 2008 (UTC)


 * Hiatus (optics): "By neglecting the distance between the lens' principal points, known as the hiatus, s + s' becomes the object-to-image distance. This simplification, called the thin-lens approximation, can speed up calculation when dealing with simple optical systems." Using H for Hiatus instead of P would be like a little reminder or clue that the thin-lens approximation has been used. Which page should it #REDIRECT to? --Redbobblehat (talk) 22:06, 11 August 2009 (UTC)

I noticed what I think is an error in the diagram of "various lens shapes": in 8., the R1 label should, I think, be + rather than -. I don't know how to go about correcting an image; sorry if this is the wrong place to mention it. akay (talk) 13:55, 26 January 2021 (UTC)
 * Yes, you're right.--Srleffler (talk) 17:40, 30 January 2021 (UTC)

Nodal points ~ entrance pupil ?
"a ray that passes through one of them will also pass through the other"

I know what you mean, but the diagram shows that this is not what you have written. The beam does not pass through either NP, and only an axial ray will pass through both NPs! The ray is aimed at the NP, but is refracted to pass (through the midpoint) between the NPs. (I suspect 'midpoint' is only true for the simple symmetrical case, and is not a general rule.) The exit ray appears to have come from the other NP after being refracted by the last surface.

I think listing & linking to 'misconceptions' is misleading and unnecessary. If I understand it correctly, the distinctions are petty and misleading, in themselves. Isn't the 'entrance pupil' where the iris diaphragm appears to be, when viewed from the front of the lens ? And the nodal point is at the centre of that ?

If you are not convinced, consider the trivial case of the pinhole camera. All nodal points and planes coincide at the hole. If you close the iris to a point, you have a 'virtual pinhole' where the point seems to be, when viewed from in front. I leave the maths to you ...

--195.137.93.171 (talk) 03:01, 7 March 2008 (UTC)

I suggest: replace "The nodal points are widely misunderstood in photography, where it is commonly asserted that the light rays "intersect" at "the nodal point", that the iris diaphragm of the lens is located there, and that this is the correct pivot point for panoramic photography, so as to avoid parallax error. These claims are all false, and generally arise from confusion about the optics of camera lenses, as well as confusion between the nodal points and the other cardinal points of the system. The correct pivot point for panoramic photography can be shown to be the centre of the system's entrance pupil.[1][2][3]" with "The correct pivot point for panoramic photography can be shown to be the first nodal point, at the centre of the system's entrance pupil, where the diaphragm appears to be."

The rest seems redundant and unsupported, if not actually wrong! If we are to expand Wikipedia to refute everything that is false, then it will be infinite. Let's put the 'true facts' in first.

--195.137.93.171 (talk) 03:01, 7 March 2008 (UTC)


 * Your message above illustrates exactly why the article needs to say that there is a misconception here. The nodal point is not at the center of the entrance pupil. Pretty much everything you wrote after the first paragraph is wrong or irrelevant, because you're assuming something that is not true.


 * In general, it's important to explicitly address misconceptions in Wikipedia articles, both for the purpose of educating the reader, and to ensure that editors do not insert incorrect information drawn from sources that are themselves incorrect.


 * Now, about your first paragraph: you are of course correct that the ray doesn't pass through the nodal points. The article's description glosses over this issue. I'll try to rephrase it.--Srleffler (talk) 03:56, 7 March 2008 (UTC)


 * so these are 'authoritative sources' ?


 * "Pumpkin" doug.kerr.home.att.net - an electrical engineer, specialising in telecomms
 * vanwalree.com "I am not professionally involved in photography, optics ... I work as an underwater acoustician. This website is just a hobby "
 * "Rik Littlefield" - 19 pages of PDF looks impressive, but he seems to be a computer guy, not an optics guy! E.g. he talks as though there were just one nodal point, rather than two - front & rear. (I skimmed it - he knows.) Basically he shows that the entrance pupil works. He doesn't show that the front nodal point is distinct from the pupil, or that the front nodal point doesn't work. I think the verbosity just indicates confusion and the truth is simple - they are the same! See below.

No-one out there really studied optics? Love Wikipedia ! I challenge anyone to come up with a definitive solution in a text-book or a peer-reviewed scientific journal. Until then I would suggest that the Wikipedia community should not pretend to have the answer to the controversy ! Delete ? Google Scholar, anyone ?

Personally I think that this article is confusing, and that the front and rear principal planes (=entry & exit pupils) each contain a principal point called the front or rear nodal point. If you rotate a camera to take a panorama you rotate the camera about the front nodal point. If you rotate just the lens, you are better to use the rear nodal point, but some blurring of objects near the camera will inevitably result (unless the two nodes are the same point). You can see where the front plane/pupil is by simply looking at the diaphragm through the front of the lens. Ditto for the back. Due to refraction in the glass elements, the physical diaphragm is (probably) not really where the 'pupil' appears to be visually.

I think my opinion is as good as any I've seen ! It is based on the principle that the light you use to see the diaphragm ought to behave the same as the light you use to make an image with the lens. No ?

Good luck with the research ! If I get really bored I might even dig out my textbooks. --195.137.93.171 (talk) 05:44, 7 March 2008 (UTC)


 * This is why we have a rule against original research on Wikipedia—so we don't have to deal with edits based on an editor's own opinion. You're right that the nodal points normally coincide with the principal planes. This is only true, however, if the surrounding medium is the same on both sides of the lens. See Greivenkamp's book, page 11 (full reference in the article). Where you go really wrong is in assuming that the entrance pupil must coincide with the front principal plane. They do not in general coincide. Page 26 of the book shows a schematic example where the entrance pupil is in front of the lens group, and the principal planes are both inside the lens. If you want a simple example, consider a plano-convex lens used flat side first. The front principal plane is located t/n from the front vertex, where t is the thickness and n is the refractive index. This places the principal plane inside the lens. The rear principal plane is at the rear vertex. The entrance pupil is coincident with the flat front surface of the lens.
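 * For anyone who wants to check the plano-convex example above for themselves: it drops straight out of a paraxial ray-transfer-matrix trace. This is only a sketch with invented dimensions (t = 10 mm, n = 1.5, rear radius 50 mm — none of these numbers are from the book), using the reduced-angle (y, n·u) convention:

```python
import numpy as np

def refraction(n1, n2, R):
    """Refraction at a spherical surface; ray vector is (y, n*u)."""
    P = (n2 - n1) / R if np.isfinite(R) else 0.0
    return np.array([[1.0, 0.0], [-P, 1.0]])

def translation(t, n):
    """Propagation through thickness t in a medium of index n."""
    return np.array([[1.0, t / n], [0.0, 1.0]])

# Plano-convex lens, flat side first: R1 = inf, R2 = -50 mm, t = 10 mm, n = 1.5
n, t, R2 = 1.5, 10.0, -50.0
M = refraction(n, 1.0, R2) @ translation(t, n) @ refraction(1.0, n, np.inf)
A, B, C, D = M.ravel()

efl = -1.0 / C           # effective focal length
H_front = (D - 1.0) / C  # front principal plane, measured from the front vertex
H_rear = (1.0 - A) / C   # rear principal plane, measured from the rear vertex

# EFL = 100 mm; H sits t/n ≈ 6.67 mm inside the glass; H' is at the rear vertex
print(efl, H_front, H_rear)
```

The front principal plane indeed comes out at t/n inside the lens while the aperture stop (the flat front surface) sits at the front vertex, so the entrance pupil and the principal/nodal points do not coincide, as stated above.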


 * Yes, some of us here have studied optics. You're not the only one here with a physics degree. --Srleffler (talk) 17:22, 7 March 2008 (UTC)

Nodal point = entrance pupil ! Proof ?
"The nodal point is not at the center of the entrance pupil." I can disprove that with a fairly simple thought experiment. It's not obvious, but not complicated either ! Look at the image.

1) The object of choosing the point about which to rotate the camera is that the entrance pupil should be stationary.

2) The angle between the ray illustrated and the axis is not special - any ray aimed at the front nodal point N must emerge as if from the rear one, parallel to the input direction. (See definition of nodal point !)

3) If the aperture is positioned to prevent vignetting when stopped down to a small hole, all rays aimed at the front nodal point must pass through the centre of the physical aperture. (Definition of vignetting)

4) All those lines converging on one point must be radii of a circle, centred on the nodal point (just considering the 2D plane of the image, it is the same for a sphere !)

5) The principle of reciprocity states that light going along any path A->B and light going from B->A will follow exactly the same path, but in opposite directions (Look for a source if you want, or just think about it! Hint: light takes the quickest path ...)

6) If all the rays aimed at N pass inwards through the hole at the centre of the aperture, any light emitted through the hole or reflected from its rim must pass outwards along the same set of paths. (follows from 5) )

7) Therefore an eye observing light emitted through the hole or reflected from its rim will see the hole as if it were at N, the nodal point.

8) The entrance pupil is where the 'mirage' of the aperture appears to be, when viewed from the front.

9) The nodal point N is seen at the center of the entrance pupil.

10) Rotating the camera about the nodal point does not move the centre of the entrance pupil.

I wish I had time to do a 3D visual simulation of it, but if you take it one step at a time, you should understand.

Nothing there says (or feels like) 'paraxial approximation'.

Nothing relies on symmetry of the lens.

There may be second-order effects in extreme wide-angle lenses, which distort the entrance pupil so that the centre appears to be off-centre, but I think that means that the pupil will move when you rotate the camera, no matter which point is the centre of rotation. You would have to move it as well as rotate it in order to keep the aperture still ! Not a good lens for panoramas.

Of course, the diaphragm may be in the 'wrong' place - ie not in the place that gives zero vignetting ( point 3) above ). This is usually the case when the diaphragm is exposed - ie in front of the glass, as in 'convertible lenses' where you unscrew the front element and just use the back half ! In that case, yes, go with the hole, not the node ! —Preceding unsigned comment added by 195.137.93.171 (talk) 12:50, 7 March 2008 (UTC)

Hope this helps explain, and defuses the controversy.


 * "Rotate about the nodal point or the entrance pupil ?"


 * "Both - they are the same !"

It's like arguing over whether a glass is half-full or half-empty - just different ways of describing the same thing !

You will probably find authoritative papers supporting each side, since there is no contradiction !

Please, please remove the section about misunderstandings - no-one is wrong. It just looks silly.

--195.137.93.171 (talk) 11:35, 7 March 2008 (UTC)


 * I see a couple of problems with the "proof" above. The first is that it presumes the camera lens is designed with the aperture at the nodal point to minimize vignetting. I don't know much about camera lens design, but it seems to me that there might be other constraints that affect where the aperture is placed. Indeed, you noted some examples of lenses that don't put the aperture at the nodal point.


 * The second and more critical problem: In 10 you are proposing rotating about an image of the nodal point that coincides with the center of the entrance pupil. This is not the nodal point itself, which is located at the aperture. Rotating about the actual nodal point certainly would move the center of the entrance pupil. Note also that the nodal point is not a physical object and does not form an image. It is meaningless to say that the nodal point is "seen at" the center of the entrance pupil. --Srleffler (talk) 17:32, 8 March 2008 (UTC)

If I remember correctly, putting the pupil at a point other than the principal plane also tends to invite (cautious weasel wording ...) barrel or pincushion distortion, as well as vignetting. Hence my use of the word 'wrong'. OK, my 'proof' only works for normal lenses, but are people likely to try making panoramas with others ?

A really great, but mind-bending, example is your favourite front-telecentric lens, where you put the aperture at the rear focal plane. Then the entrance pupil appears at infinity. So you rotate the lens about a point at infinity ? You move it in a plane perpendicular to the optical axis. Not your normal panorama. However, the absence of parallax is precisely the sort of reason that telecentric lenses are used in optical projectors for measuring 3D objects. (The barrel/pincushion distortion could be tricky - or does it disappear again at that extreme ?) It does begin to make sense eventually.

I'm not sure if I really want to think about the nodal points of a telecentric lens. I suspect one (or both) of them may not exist (or be simultaneously at +/- infinity, at least)?

Considering the absolute extremes of optical design may not help the novice or lay-reader, but will test the generality of statements for those experts seeking absolute truth and precision. (The exception proves the rule ?) How do we satisfy both categories of reader here ? If extreme examples are to be used, I think we should at least warn users that they are extreme. Maybe experts will read this discussion page, so we could keep it really simple up-front in the article ?

--195.137.93.171 (talk) 21:20, 7 March 2008 (UTC)


 * That illustration really isn't about the lens design. It's just a diagram with a lens and a few rays to illustrate a principle that applies generally to all lenses.


 * In general, articles on technical topics like optics need to cover the general information for non-experts, and information suitable for people who need more technical detail. Usually the general information comes first. Even in the general-reader material, though, it's important to ensure that all information presented is accurate. This article is legitimately more slanted to the technical reader than most, because it is covering a topic that is mainly a matter of technical detail. We do need to ensure that the nontechnical reader who comes here looking for info is not excluded, though.--Srleffler (talk) 17:43, 8 March 2008 (UTC)

Filter rays by angle - removed
I deleted this section:
 * 1) It only allows rays to pass that are emitted at an angle (relative to the optical axis) that is sufficiently small. (An infinitely small aperture would only allow rays that are emitted along the optical axis to pass.)
 * 2) No matter where on the object the ray comes from, the ray will pass through the aperture as long as the angle at which it is emitted from the object is small enough.

Note that the aperture must be centered on the optical axis for this to work as indicated.

Angle filtering is important for DSLR cameras having CCD sensors. These collect light in "photon wells"—the floor of these wells is the actual light gathering area for each pixel. Light rays with small angles with the optical axis reach the floor of the photon well, while those with large angles strike the sides of the wells and may not reach the sensitive area. This produces pixel vignetting.
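(For what it's worth, the angle-filtering statement in the quoted text is easy to verify with a one-line paraxial thin-lens trace. All numbers below are invented for illustration: a ray leaving any object point with slope u crosses the rear focal plane at height f·u, regardless of where the point is.)

```python
def height_at_rear_focal_plane(y0, s, u, f):
    """Trace a paraxial ray (height y0, at distance s in front of a thin
    lens of focal length f, slope u) to the plane one focal length behind
    the lens."""
    y_lens = y0 + s * u          # free propagation to the lens
    u_after = u - y_lens / f     # thin-lens refraction
    return y_lens + f * u_after  # propagate distance f to the focal plane

f, u = 100.0, 0.02
for y0, s in [(0.0, 300.0), (5.0, 300.0), (-3.0, 450.0)]:
    print(height_at_rear_focal_plane(y0, s, u, f))  # always f*u = 2.0
```

So an on-axis aperture of radius r in the rear focal plane passes exactly those rays with |u| < r/f, from every object point alike, which is the claim being made.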

1) It seemed out-of-place - not really central to "Cardinal point" - the original theme of this article.

2) The image shows light emitted axially from the object, but the text discusses light hitting a sensor perpendicularly, instead

3) I suspect "photon wells" are not really shaped like water wells, but are Quantum wells - a concept in quantum physics. A bucket you can collect photons in, if you like, but don't picture a real bucket with conical sides and a handle - it's just a metaphor !

4) All materials reflect more if light hits them at a shallow angle than if the light hits at right angles - that could explain the drive for axial light.

5) maybe that image and explanation belong in an article on telecentric lenses - it's very specialised ?

--195.137.93.171 (talk) 03:54, 7 March 2008 (UTC)


 * You seem to be deleting material based on personal speculation rather than knowledge or external sources. This is not a good editing practice. Your points 3 and 4 are speculative. The issue with detectors is that CCD detectors are sensitive to the angle of incidence of the light, so cameras based on them may require a narrower cone of rays at the detector than a film camera would. See Vignetting for more discussion of this. I'm not sure offhand if this angle sensitivity is due to a physical well as the article implies (which would be in addition to the quantum well), or if the description of this angle sensitivity as a "well" is metaphorical.


 * Your point 2 seems to be based on misunderstanding. The article discusses a general geometric optics effect, and the image illustrates that. The article then goes on to give a specific example of where that effect is used, which happens to involve keeping the angle of incidence of light on a detector close to perpendicular. The image doesn't illustrate that application. --Srleffler (talk) 04:28, 7 March 2008 (UTC)


 * I've just read telecentric - does my head in - convinces me these are poor examples.
 * Is adding speculative material (shape of photon well) not also poor editing practice ?
 * Maybe I should have just flagged it 'citation needed' or something !
 * 4 is not speculative - I think it's called Snell's law or the Fresnel effect - had a quick look for a source - will continue.
 * I'm about to add to my comment on the other image below ....


 * --195.137.93.171 (talk) 04:42, 7 March 2008 (UTC)


 * Raising your concern on the talk page was a good idea. Deleting a big chunk of text without discussing it here first was not. You are deleting material based on your own misunderstanding of it.--Srleffler (talk) 04:47, 7 March 2008 (UTC)


 * I don't have time for edit wars - I challenge you, or anyone to find reliable sources for/against 4) --195.137.93.171 (talk) 05:00, 7 March 2008 (UTC)


 * Don't bother - 4) is based on the Fresnel equations. OK - I ignored polarised light!
 * Or you can avoid the maths and graphs by just looking at the reflection of a lamp (not a window - blue sky is polarised!) in something shiny, while you tilt it to vary the angle! The reflection is dimmest at 90 deg. Even something black & polished will do. It's more common experience than speculation. --195.137.93.171 (talk) 05:58, 7 March 2008 (UTC)
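 * If anyone wants numbers rather than a shiny lamp: the unpolarized (s/p-averaged) Fresnel reflectance is a few lines to tabulate. This sketch assumes a plain air-to-glass interface with n = 1.5; reflectance is 4% at normal incidence and climbs steeply toward 100% at grazing incidence:

```python
import math

def fresnel_reflectance(theta_i, n1=1.0, n2=1.5):
    """Unpolarized Fresnel reflectance at a dielectric interface,
    averaged over s and p polarizations. theta_i in radians."""
    theta_t = math.asin(n1 / n2 * math.sin(theta_i))  # Snell's law
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    rs = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)    # s amplitude
    rp = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)    # p amplitude
    return 0.5 * (rs ** 2 + rp ** 2)

for deg in (0, 30, 60, 80):
    print(deg, fresnel_reflectance(math.radians(deg)))  # 0 deg -> 0.04
```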


 * The issue is not whether Fresnel reflection increases with angle of incidence. Of course it does. The issue is that you were speculating that this is the reason why CCD-based cameras are designed to limit the angle of incidence at the sensor. There could be an additional angle-sensitivity in this type of sensor, which is what the article implies. I don't personally know one way or another, but I know that your speculation is not sufficient grounds to change the article.


 * I am thinking now that the paragraph on DSLR cameras should probably be deleted for another reason, though: the angle filtering described in the diagram is object-space angle filtering: the aperture in the rear focal plane limits the range of angles in object space which are accepted by the lens. DSLR cameras clearly need image-space angle filtering. This is related to your point about the rays not being perpendicular at the detector. --Srleffler (talk) 17:34, 7 March 2008 (UTC)


 * Since the sensors generally have RGB colour filter arrays (except Foveon), it's a fair bet that there's a pretty complex stack of many thin layers of materials of different refractive index acting as an anti-reflection coating as well. I was considering a simple case. The coating is probably not just optimised for normal perpendicular incidence, so that its reflectance-as-a-function-of-angle helps reduce pixel vignetting, working in tandem with the telecentric lens design.


 * The telecentric article explains DSLR pixel vignetting purely in terms of the RGB sensor filter - no mention of the underlying CCD/CMOS photon wells.


 * Yes, if you swap the words 'object' and 'image' and reverse the arrows, your front-telecentric diagram becomes rear-telecentric. My main point: "Does it really relate to 'cardinal points' at all ?" They are on-axis: telecentricity is off-axis.--195.137.93.171 (talk) 21:41, 7 March 2008 (UTC)

back focal plane - image removed
I deleted this image



1) Too small to see

2) It is a weird telecentric lens - a very special case that doesn't behave like a normal lens.

3) the only parallel rays come from different points on the object !

4) It's just a coincidence that the parallel rays depicted cross at the BFP

4a) Parallel rays from points that are closer together will cross behind the BFP

4b) Parallel rays from points that are further apart will cross in front of the BFP

You want an object at infinity, with the image formed in the BFP. This isn't !

Sorry - again, the diagram may be of some use on the telecentric lens page ?

--195.137.93.171 (talk) 04:09, 7 March 2008 (UTC)


 * I reverted your deletion. The image correctly illustrates the point being made in the text. You just didn't understand the optics here. Your point 3 is exactly the point: parallel rays from different parts of the object cross at the back focal plane. This is not a coincidence, contrary to your point 4. Your points 4a and 4b are simply wrong. --Srleffler (talk) 04:14, 7 March 2008 (UTC)
 * I improved the caption to better reflect what is going on. As to the size of the image: click on the image to see it full-scale. The image on the page is a thumbnail, which provides a link to the full picture. Logged-in users can set the default size for thumbnails so they can have them appear bigger if they wish.--Srleffler (talk) 04:17, 7 March 2008 (UTC)
 * Regarding your point 2: the image is a diagram illustrating the specific point being made in the text. The property described is true of all lenses, telecentric or not. The lens depicted is, in fact, not telecentric.--Srleffler (talk) 04:19, 7 March 2008 (UTC)


 * The 'coincidence' is that you happen to have illustrated a telecentric lens! I therefore withdraw 4a) & 4b) above!
 * I give up ! I only have an honours degree in physics, albeit 23 years ago and 13 years in optics & electro-optics research. No more time to waste - what does the community think ? --195.137.93.171 (talk) 04:47, 7 March 2008 (UTC)
 * The image really is no coincidence. The phenomenon depicted would be true of any lens (in the limit where geometric optics is a good approximation, of course). Ray diagrams are tricky to interpret correctly. --Srleffler (talk) 04:56, 7 March 2008 (UTC)
 * Let me see if I can jog your memory: would it sound more familiar to say that a lens maps angle in the far field to position at the focal plane? Parallel rays entering the lens always come to a point in the appropriate focal plane, in geometric optics. It doesn't matter what kind of lens it is, or where the object and image planes are located. This is related to Fourier optics, and is based on the fact that each position in the focal plane corresponds to a plane wave component in the incident light, with a particular direction of propagation. --Srleffler (talk) 05:07, 7 March 2008 (UTC)
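 * The angle-to-position claim holds for any paraxial system, not just an ideal thin lens, and the matrix form makes it obvious: at the back focal distance the position-to-position entry of the transfer matrix is exactly zero, so where a ray lands depends only on its input angle. A quick sketch with an arbitrary, made-up two-lens system (nothing telecentric about it):

```python
import numpy as np

def prop(d):
    """Free-space translation by distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Ideal thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# An arbitrary two-lens system (illustrative numbers only):
M = thin_lens(80.0) @ prop(30.0) @ thin_lens(120.0)
A, B, C, D = M.ravel()
bfd = -A / C  # back focal distance, measured from the last element

# Rays sharing the same slope u but entering at different heights y:
u = 0.01
focal_plane = prop(bfd) @ M
hits = [(focal_plane @ np.array([y, u]))[0] for y in (-8.0, 0.0, 5.0, 12.0)]
print(hits)  # all four heights are equal: the rays cross in the focal plane
```

The zero entry is A + bfd·C = A + (−A/C)·C = 0, so the input height drops out entirely.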


 * Ah, now you're testing me. Lol! Even my favourite pinhole 'lens' will translate angle in far field to position in the 'focal plane' - you don't even need to invoke 'simple lens', let alone 'geometric optics' for that - 'Fourier optics' are overkill, if not barely relevant! Fourier optics is more to do with the 14-point starburst when you stop down a 7-blade iris diaphragm, although I suspect that the iris blades also contribute, due to reflection off rounded edges, which are an imperfect knife-edge ! Fourier optics is more to do with angle in the far field mapping to spatial frequency in the near field - ie the aperture plane, not the focal plane. And phase in the near field determines amplitude in the far field, and vice-versa! And small features in the aperture affect the large or wide parts of the far-field, while the small features in the far-field are determined by the large features of the aperture.


 * Great. Now we're on the same wavelength. Every lens translates angle in the far field to position in the focal plane. That is all that is involved here. The lens doesn't know whether a ray comes from infinitely far away or from a nearby object. Rays arriving at the lens with a given angle of incidence map to a particular point in the focal plane, therefore rays that leave the object parallel to one another cross at the rear focal plane, as the article says. The image illustrates this by tracing a few selected rays, showing that rays that leave different object points with the same angle pass through identical points (cross) in the focal plane.


 * You complained above that this is a weird lens. It's actually not. The lens illustrated is just an ideal thin converging lens. Such a lens cannot be telecentric, since the entrance pupil coincides with the plane of the lens. What is throwing you off is that the rays shown are not the typical ones one would use to analyze a camera lens. None of the rays shown is a chief ray or marginal ray. Rather, the rays shown are chosen to illustrate the effect being discussed. The chief rays would go through the center of the lens, and would not be parallel to the axis.--Srleffler (talk) 17:54, 7 March 2008 (UTC)

Did I pass? Your turn. Why did the f:64 school of photography not use 35mm cameras ? And why can a very small hawk not see better than a human ? And how small should a pinhole be ? --195.137.93.171 (talk) 06:46, 7 March 2008 (UTC)


 * Lol. I'll have to think about those. Offhand, the second and third appear to involve the diffraction limit, but perhaps there is more going on. No time for this now.--Srleffler (talk) 17:54, 7 March 2008 (UTC)


 * That's fine - "diffraction limit" as a two-word answer will score 100% for the first two and > 80% for the third. The pinhole diameter causes a circle of confusion in ray optics, which works with diffraction to produce an Airy disk. It's surprisingly insensitive to hole size, so a bigger hole than determined by maximum sharpness is probably worthwhile for the sake of speed. Answers depend on object/film distances.
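 * For the record, the "how small should the pinhole be" question has an often-quoted closed-form rule of thumb attributed to Lord Rayleigh, d ≈ 1.9·√(f·λ), balancing the geometric shadow of the hole against diffraction spread. (The constant is a judgment call; other sources use values between about 1.5 and 2, which is exactly the insensitivity mentioned above.)

```python
import math

def pinhole_diameter(focal_length_mm, wavelength_nm=550.0):
    """Rayleigh's rule of thumb for the optimum pinhole diameter,
    d ~ 1.9 * sqrt(f * lambda), everything converted to millimetres."""
    wavelength_mm = wavelength_nm * 1e-6
    return 1.9 * math.sqrt(focal_length_mm * wavelength_mm)

print(pinhole_diameter(100.0))  # ~0.45 mm for a 100 mm 'focal length'
```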


 * I resisted the question about the speed of an unladen swallow - you might not know the answer is 'African or European swallow?' - it's a Monty Python thing.


 * I knew you were just testing me by mentioning Fourier optics as an explanation of the ray diagram. Ray diagrams are fundamentally incompatible with Fourier optics - that uses Gaussian beams instead. There is no such thing as a 'ray' in Fourier optics! Lol!--195.137.93.171 (talk), 7 March 2008


 * I didn't intend it as a test, but perhaps did mean for it to grab your attention. What I had in mind is that Fourier optics is based on the fact that one can decompose any optical field on a basis set of plane waves. (Pardon me if I'm going over stuff you know—others may read this too.) See Fourier optics. When a plane wave is incident on a lens, the lens focuses that plane wave to a single point in the focal plane. Any light distribution incident on the lens can be represented as a sum of plane waves with different angles, each of which maps to a different point at the focal plane. The lens thus maps angle of incidence to position in the focal plane just as is represented in geometric optics. Fourier optics is not "incompatible" with geometric optics; it is a superset of it. Any physics that is correctly described by ray optics can also be described by Fourier optics or by a diffraction integral model. The reverse is not true, because ray optics is a simplification of the underlying physics.--Srleffler (talk) 19:53, 8 March 2008 (UTC)
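 * The plane-wave picture above can even be demonstrated numerically: sample a tilted plane wave, FFT it, and the spectral peak sits at spatial frequency sin(θ)/λ, which the lens maps to the focal-plane position x = f·λ·ν ≈ f·θ. All numbers here are invented for illustration:

```python
import numpy as np

wavelength = 0.5e-3  # mm (illustrative)
f = 100.0            # mm, focal length
theta = 0.01         # rad, tilt of the incident plane wave
N, dx = 4096, 0.002  # number of samples and sample spacing, mm

# Sampled field of a plane wave tilted by theta:
x = (np.arange(N) - N // 2) * dx
field = np.exp(2j * np.pi * np.sin(theta) / wavelength * x)

# Its spectrum has a single peak at spatial frequency sin(theta)/lambda:
spectrum = np.fft.fftshift(np.fft.fft(field))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dx))  # cycles/mm
nu_peak = freqs[np.argmax(np.abs(spectrum))]

x_focal = wavelength * f * nu_peak  # focal-plane position of the spot
print(x_focal, f * theta)           # both ~1.0 mm, as geometric optics predicts
```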

I note that you seem to be equating 'focal plane' and 'aperture' in your Fourier 'explanation' - another symptom of a 'telecentric lens' !
 * No, see just above. Fourier optics predicts the mapping of angle to position in the focal plane, and this is fundamentally the basis of Fourier optics.--Srleffler (talk) 19:53, 8 March 2008 (UTC)

Maybe if I put the two diagrams side-by-side you may appreciate why I thought the first one was just as telecentric as the second?



The rays are identical !

You have chosen to plot 'telecentric rays'.

I agree that it could be a simple lens.

A simple lens can be a telecentric lens.

It becomes telecentric when you put the aperture in the focal plane !

Any lens can be a telecentric lens.(citation ?)

I think the term 'telecentric lens' is very misleading. Telecentricity is more a function of the aperture than the glass. Perhaps we should rather speak of a 'telecentric aperture'? 'Telecentric optical system' would be best.

Maybe it's not a weird lens, but the focal plane seems a weird place to put the aperture.

Does this clarify ?

I still think the diagrams are misleading and confusing, due to the telecentricity.

By the way, please don't delete the images - the telecentric page has a 'diagram request' on it, and I've linked these in the talk page.

Oh - just re-read that page - why do you deny that either lens is telecentric ?

I thought the whole point of the second image was demonstrating telecentricity ?

That page uses the definition : Telecentric: The chief rays, that is the rays through the center of the entrance or exit pupil, are all parallel to the optical axis, on one or both sides of the lens, no matter what part of the image space or object space they go through. Isn't that true in the diagrams - the middle one of the three rays from each of the two object points ? Am I missing something ? I don't think I'm being dense - this really isn't clear.

Each different point on the object 'sees' the entrance pupil in a different place, immediately below it, but 'located' infinitely far away.

Not meaning to be personal, but would I be wrong in deducing that you are primarily a microscopist ? That would explain the predilection for telecentricity. WP:NPOV ? WP:UNDUE ? Am I doing it right ?

For my side, I declare working in planar gradient-index optics, optical waveguide, fibre-optics, integrated optics modulators, laser diodes. Then I was sidelined into QA/QC - microscopy, photography, optical metrology etc. Then I got into IT - 7 years as a web-developer, now unemployed. I'm now tickling some dormant grey cells and practicing typing English rather than script languages. (Feel free to delete the last 2 paras - some of this should probably be moved to our personal talk pages when resolved !)

It's been a good meeting of minds. Sorry if I overdid the 'Be bold' Wikipedia philosophy. I'm not really a vandal. Still - it has to beat months or years of inactivity.

--195.137.93.171 (talk) 23:38, 7 March 2008 (UTC)


 * You're right that the lens with the aperture is telecentric; I missed that. For that lens the middle ray in the cone from the edge of the object is the chief ray. The same ray in the other drawing is not the chief ray, because the aperture stop is in a different place. The actual chief ray in that case is not shown, but would go diagonally from the edge of the object through the center of the lens (since the aperture stop of a simple lens is the boundary of the lens itself).


 * I stand by my claim that simple lenses cannot be telecentric, but with the qualification that adding an aperture displaced from the lens makes the optical system no longer just a simple lens. Telecentric lenses are pretty weird. I agree with most of your comments on telecentricity above.


 * Wrong guess. I'm a laser physicist. I didn't draw the images used in this article, although I'm responsible for a lot of other text here. --Srleffler (talk) 19:53, 8 March 2008 (UTC)

refractive index of 1 (e.g., air)
Would it appear petulant to suggest that vacuum would be a better example ? --195.137.93.171 (talk) 06:31, 7 March 2008 (UTC)
 * Not at all. Air (treated with the approximation n≈1) is a pretty important case in classical geometric optics, though, and needs to be addressed. I tried rephrasing the article a bit. See if you like the new text better.--Srleffler (talk) 18:00, 7 March 2008 (UTC)
 * It might be worth making it clear that n_air is never exactly 1. The differences are non-trivial even to the naked eye - twinkling stars, mirages, heat-haze, squashed setting sun. I recently came across a Nikon FAQ about why a lens focus-ring will turn beyond the infinity mark - mountaineers work in a lower refractive index due to altitude. Cynically I had thought it was due to sloppy manufacture, or a marketing ploy ("... infinity and beyond ...") akin to the guitar amps that 'go to 11'. Plus we always noted the daily barometer reading in our optics lab notebook - I can't remember the experiment for context, but it wasn't just a token gesture. --195.137.93.171 (talk) 01:15, 8 March 2008 (UTC)
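The size of the effect mentioned above is easy to estimate from the lensmaker's equation. A hedged toy calculation (the biconvex lens parameters are invented; n = 1.000293 is the usual sea-level value for air at visible wavelengths):

```python
# Toy estimate (assumed numbers): focal length of a thin lens from the
# lensmaker's equation, comparing vacuum to sea-level air.  The shift of
# tens of micrometres is why focus rings can turn "beyond infinity".

def lensmaker_f(n_lens, n_medium, R1, R2):
    """Focal length of a thin lens of index n_lens immersed in n_medium."""
    return n_medium / ((n_lens - n_medium) * (1.0 / R1 - 1.0 / R2))

f_vac = lensmaker_f(1.5, 1.0, 100.0, -100.0)       # biconvex, f = 100 mm in vacuum
f_air = lensmaker_f(1.5, 1.000293, 100.0, -100.0)  # same glass in sea-level air
shift_um = (f_air - f_vac) * 1000.0                # focus shift in micrometres
```

At altitude the air index drops toward 1, so the focal length moves back toward the vacuum value, consistent with the Nikon FAQ's explanation.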

Focal plane images
I updated the formerly-blue focal-plane images with SVGs. Now that I can see them clearly, I still don't really like them. As was discussed above, they are telecentric lenses, which is an unusual case. It was only in the last few days that I started to realize that an optical microscope is essentially telecentric, which is why microscopists talk of the back focal plane as the location for the aperture, whereas photographers think of the back focal plane as almost equivalent to the image plane. I'm not sure if this page or photographic lens or optical microscope should make this distinction; perhaps they all should. Just to clarify, am I technically correct about the back focal plane? Unless someone objects, I may start making these clarifications. —Ben FrantzDale (talk) 01:19, 20 May 2008 (UTC)


 * The lenses are not really the point. These are just simple ray diagrams that illustrate several principles that are discussed in the text. They are not camera lenses. They are not microscope lenses. The first diagram illustrates that the image plane is distinct from the focal plane when the object is a finite distance away, and shows that rays that leave the object parallel to one another cross at the back focal plane. These two principles are true of any optical system, but it would be hard to illustrate them in a small raytrace diagram of a conventional lens. The second diagram illustrates object-space angle filtering (i.e. telecentricity). Of course we use a diagram of a telecentric system to illustrate the principle that is responsible for telecentricity!


 * One correction: only the system shown in the second diagram is telecentric. The first is not. The lenses and rays illustrated are identical, but telecentricity (angle-filtering) is determined by the placement of the aperture. This might seem strange, but it really shouldn't. The rays illustrated happen to be the same, but not every ray through each system is shown. Diagrams showing the chief and marginal rays of each lens would make it clear that only one of the two systems is telecentric, but that wouldn't serve the purpose for which these diagrams were made.
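The point that telecentricity is set by stop placement alone, with the glass unchanged, can be checked with a one-line paraxial calculation. A minimal sketch under assumed numbers (thin lens; the chief ray is the ray from an off-axis object point through the center of the stop):

```python
# Hedged paraxial sketch (invented dimensions): solve for the object-space
# slope u of the chief ray, i.e. the ray from object point (h, -s_o) that
# crosses the axis at the stop center, a distance d behind a thin lens of
# focal length f.  Imposing height 0 at the stop plane gives
#   u = -h (1 - d/f) / (s_o (1 - d/f) + d),
# which vanishes exactly when d = f: stop at the rear focal plane means the
# chief ray is parallel to the axis, i.e. object-space telecentric.

def chief_ray_slope(h, s_o, f, d):
    """Object-space slope of the chief ray for a stop d behind a thin lens."""
    return -h * (1.0 - d / f) / (s_o * (1.0 - d / f) + d)

u_stop_at_lens = chief_ray_slope(h=10.0, s_o=200.0, f=50.0, d=0.0)   # simple lens
u_stop_at_bfp  = chief_ray_slope(h=10.0, s_o=200.0, f=50.0, d=50.0)  # telecentric
```

With the stop at the lens the chief ray slopes through the lens center (-h/s_o); moving only the stop to the rear focal plane drives the slope to zero without touching any refracting element.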


 * If photographers think of the back focal plane as equivalent to the image plane they are mistaken in general, and need to be corrected. The two planes only coincide when the lens is focused at infinity.--Srleffler (talk) 03:55, 20 May 2008 (UTC)


 * I made a tweak to the wording. The article used the word "aperture" when it probably should have said "stop". The bfp is not in general the location for the aperture stop. If you want to filter rays by object space angle, however, you want a stop at the rear focal plane. It could be a field stop in a camera, for example. If that angle-filtering aperture is the aperture stop, then the system is object-space telecentric. --Srleffler (talk) 04:29, 20 May 2008 (UTC)


 * Thanks. At very least you clarified some things for me. —Ben FrantzDale (talk) 11:18, 20 May 2008 (UTC)

Reqdiagram
We could do with a diagram for surface vertex, obvious as it may be. —Ben FrantzDale (talk) 20:55, 26 May 2008 (UTC)

Rotating Lens for panorama
I'll try again. Sorry for the delay - I let life get in the way of wikipeding for a bit !

The more I think about it, the more I am convinced this section is just plain wrong. The nodal points are widely misunderstood in photography, where it is commonly asserted that the light rays "intersect" at "the nodal point", that the iris diaphragm of the lens is located there, and that this is the correct pivot point for panoramic photography, so as to avoid parallax error. These claims are all false, and generally arise from confusion about the optics of camera lenses, as well as confusion between the nodal points and the other cardinal points of the system. The correct pivot point for panoramic photography can be shown to be the centre of the system's entrance pupil.

For practical lenses, designed to minimise vignetting and distortion, it is not necessary to make the distinction - the front nodal point is the centre of the system's entrance pupil. That is why experiments can show either point to be correct. People who show that 'entrance pupil' is correct don't show that 'nodal point' is wrong.

A simple thought experiment will help clear up the confusion. We need to separate the nodal point and entrance pupil, and think about what happens.

Consider a lens which is specially constructed so that the aperture stop has two degrees-of-freedom. Not only can you vary its radius, but also its physical position by moving it parallel to the optical axis. This lets you investigate the more general case of unusual lenses, where the front nodal point is not at the centre of the system's entrance pupil.

Before you move the aperture away from its normal position, you rotate the camera about the agreed common point (front nodal point = centre of the entrance pupil). Why ? Because that is the point at which the lens focuses all light from the object point to the same stationary image point on the film even when the camera is rotated. There is no 'parallax'. Near objects do not move relative to distant objects. The camera's 'point-of-view' does not move. The image on film is not blurred by motion.

Note that it is not the aperture stop that is focussing the light, it is the refractive (glass) part of the lens. The front nodal point is a property of the glass, not of the aperture. The aperture only determines whether a ray passes through the lens to the film or not - not where it lands on the film, nor where it intersects other rays.

Now move the aperture along the optical axis. The entrance pupil is where the aperture appears to be, so it has to move along the optical axis, too. The glass hasn't moved, so the front nodal point has not moved, so it is no longer at the centre of the entrance pupil.

OK Now what happens when the camera is rotated about the same point as before (the front nodal point, no longer at the centre of the entrance pupil)? Nothing that contributes to the mapping of points in image space to points on film has changed. The glass hasn't been moved so it must focus the light exactly as it did before - to the same point on the film.

Therefore moving the aperture stop axially does not affect the motion, or lack of motion, of the image on the film. The aperture doesn't affect the focussing (bending of rays). It plays no part in the mapping of points in object space to points in image space.

If you follow the Wikipedia article quote above and change the panoramic-pivot-point to follow the motion of the entrance pupil instead, what happens? If you had rotated about that new point before moving the aperture, then the image would have moved across the film. The movement is determined by the focussing properties of the lens - by the refraction - by the glass - not by the aperture. Therefore the image will be blurred now.

QED ?

I do not find responses of the form "That is meaningless" or "You are confused" to be useful. They may lead me to question whether you understand what I say, or what is happening physically. I believe that the above is perfectly clear and meaningful.

--195.137.93.171 (talk) 01:23, 3 July 2008 (UTC)


 * Welcome back. Before I get to your main argument, a couple of points:
 * I'm still not sure about your assertion that the nodal point coincides with the entrance pupil in a well-designed lens. Do you have a reference to support this claim? Your attempted proof above fails for the reasons I already pointed out. The nodal points coincide with the principal planes for a lens in air. It's not at all obvious that the entrance pupil must coincide with the first principal plane, and I suspect you are mistaken on this point.
 * Showing mathematically or geometrically that the entrance pupil is the correct choice is sufficient, unless the argument implicitly or explicitly assumes that the nodal point coincides with it. You have argued that the nodal point normally coincides with the entrance pupil in a good camera lens. That might or might not be true, but it is clearly not true for a completely general optical system. A correct and general optical analysis will therefore either show that the correct center of rotation is the nodal point, or that it is the entrance pupil, regardless of whether they happen to coincide in certain selected optical systems. There are only three possibilities here:
 * The analyses in the references contain errors.
 * The analyses assume (perhaps implicitly) that the nodal point coincides with the entrance pupil.
 * Your analysis contains an error.
 * Not having dug through any of the analyses in detail yet, I find #3 at least as likely as #1. I'll think about your argument in more detail and expand this reply.--Srleffler (talk) 03:33, 3 July 2008 (UTC)


 * Still thinking about the arguments, but a couple lines from the Littlefield essay jump out at me:
 * "in simple terms [...] the aperture determines the perspective of an image by selecting the light rays that form it. Therefore the center of perspective and the no-parallax point are located at the apparent position of the aperture, called the “entrance pupil”. Contrary to intuition, this point can be moved by modifying just the aperture, while leaving all refracting lens elements and the sensor in the same place."
 * The aperture stop's effect on an optical system is subtle, and easily overlooked. I think you have erred in assuming that the location of the no-parallax point had to be determined by the refractive elements alone. This is starting to remind me of the discussion above about telecentric lenses—telecentricity is related to parallax, and is determined solely by the placement of the aperture. --Srleffler (talk) 03:47, 3 July 2008 (UTC)
 * The Littlefield essay is quite instructive. The key point is that when you talk about parallax you are dealing with objects that are not exactly in focus (because only one distance can be exactly in focus at a time). Objects that are not at the exact optimum distance are slightly blurred. When rays are clipped by an aperture, the blurring of the out-of-focus image is altered, and its center can be shifted. This shouldn't really be surprising—it is only the action of the aperture stop that allows the not-exactly-in-focus parts of the image to be reasonably sharp in the first place. A very large aperture causes objects away from the correct object distance to form very blurred images. Selecting a more limited ray fan with an aperture sharpens the out-of-focus image, but affects its position in the image plane. Littlefield summarizes this: "In general, when we impose a small aperture, the in-focus image stays the same, except for getting dimmer as we make the aperture smaller. The out-of-focus image also gets dimmer, by the same amount on average, but this is accomplished by leaving intact the portion of the blur that corresponds to having the center of perspective at the aperture, while eliminating all other portions of the blur."
 * Littlefield also directly contradicts your claim that the nodal point coincides with the entrance pupil, and shows a detailed raytrace of a camera lens illustrating this. He writes, "In real lenses, there is no particular relationship between locations of the entrance pupil and front nodal point."--Srleffler (talk) 04:54, 3 July 2008 (UTC)
 * My favorite part about that essay is the pictures. That is the essay that convinced me that the pupil position is really what matters. One realization that helped me understand it is this: by putting the entrance pupil in a strange place, as Littlefield does, the small aperture he adds doesn't make different rays hit the film, it just cuts off some rays that would have gotten to the film. That is, the f/# has gone up and so the depth of field has gone up so he's removing some of the rays of the defocus blur. You can get a similar odd effect with your eye. If I hold a pencil tip a few inches in front of my eye, backlit by the screen, it is out of focus. If I then move my finger between the pencil and my eye, it too is out of focus, but as the blurry image of my finger gets near the blurry image of pencil tip, the pencil appears to bend toward my finger. It isn't that the light rays are bending, it's that the blur is being selectively removed geometrically. —Ben FrantzDale (talk) 10:51, 3 July 2008 (UTC)
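The "selectively removing the blur" effect described above can be reproduced numerically. A hedged toy ray trace under invented numbers (thin lens; the film is focused for one distance while the object point sits at another): clipping the ray fan with a small stop shifts the centroid of the blur patch, even though no glass moves.

```python
# Toy demonstration (all numbers invented): for an OUT-OF-FOCUS object
# point, a displaced stop does not just dim the blur patch on the film,
# it moves the patch's centroid, shifting the apparent perspective.

f, s_obj, s_focus = 50.0, 100.0, 200.0        # lens focused for 200; object at 100
L = 1.0 / (1.0 / f - 1.0 / s_focus)           # film distance (image of s_focus)
h = 10.0                                      # object height

def film_height(y):
    """Film-plane height of the ray leaving (h, -s_obj) through lens height y."""
    u_out = (y - h) / s_obj - y / f           # paraxial refraction at the thin lens
    return y + L * u_out

def stop_height(y, d):
    """Height of the same ray at a stop placed a distance d behind the lens."""
    u_out = (y - h) / s_obj - y / f
    return y + d * u_out

fan = [-5.0 + 0.01 * i for i in range(1001)]            # full lens aperture
open_blur = [film_height(y) for y in fan]
clipped   = [film_height(y) for y in fan
             if abs(stop_height(y, d=25.0)) <= 1.0]     # small stop 25 behind lens

centroid_open    = sum(open_blur) / len(open_blur)      # wide-open blur center
centroid_clipped = sum(clipped) / len(clipped)          # shifted by the stop
```

The surviving rays are the ones near the ray through the stop center, so the clipped blur is centered on the perspective of the stop, not of the lens, which is the numerical content of Littlefield's argument.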


 * Welcome to the debate, Ben! OK, maybe rotating about the entrance pupil may reduce the increased-blurring-due-to-rotation for object points that form blurry discs in the image because they are out-of-focus. I wasn't really considering elongation of a circle-of-confusion - that is certainly a possible second-order effect. I was just concerned with the mapping of object-space points to image-space points-of-focus for simplicity - if it's blurred anyway, parallax is probably a very small contribution to blur. The lateral movement of the point of view will be small compared to the aperture diameter. Considering image-space rather than image-plane is more general, but avoids the confusion of "when you talk about parallax you are dealing with objects that are not exactly in focus".


 * If we consider image space rather than an image plane, then we can certainly talk about parallax while considering only points of focus, not the cones or beams comprising the rays that pass through them. The first effect of parallax is to re-arrange the points-of-focus, due to changing point-of-view. If a lens is badly-designed, the aperture may obscure some rays, causing a darkening and probably a shift of the centre of the patch of light, by cutting off one side. But that is a secondary effect. In any case, wouldn't that necessarily improve the sharpness, by 'stopping-down' one side ? We are trying to improve sharpness, rather than preventing a shift of the centre of the unsharpness, aren't we ?


 * To a large extent, this is the evidence for the benefits of keeping the entrance pupil at the front principal plane - to avoid vignetting.


 * --195.137.93.171 (talk) 17:31, 6 July 2008 (UTC)
 * You should really go read the essay we referred to first, and then come back to this. The images on the film need not be visibly blurry. The aperture makes things that are not exactly in focus appear reasonably sharp—it increases the depth of field of the camera. This alteration of the rays passing through the camera both sharpens images of objects outside the ideal object plane, and shifts their apparent positions such that the perspective captured by the film is that of the entrance pupil.


 * Considering image space rather than the image plane is not going to help you, because the film is a plane. All that matters is what the film (or CCD) sees. Anyway, go read the essay and then we can discuss it. There is no point discussing this further until you've read it.--Srleffler (talk) 02:51, 7 July 2008 (UTC)

object and image points, planes, distances, etc
Is there a good reason why the object point, image point, object plane, image plane, object distance and image distance are not defined in this article ? If not here, where do they belong ? Redbobblehat (talk) 23:51, 8 August 2009 (UTC)


 * Certainly more could be contributed, especially since there is little consensus on global standards of definition. In regard to lens theory and the study of ray tracing, lens imperfections spread these points out into "circles of least confusion": if any toric distortion exists in the actual complex lens system, it creates focal lines roughly perpendicular to each other, with a circle of least confusion between them. A study of geometrical optics texts may come across this, and in mathematics it also appears in the study of cross-sections of cones, etc. You're welcome to write on the subject, with consideration for encyclopedic content. StationNT5Bmedia (talk) 02:22, 9 August 2009 (UTC)


 * Also worth mentioning are differences in waveform, wavelength, and wavefront language that transcend simple ray-tracing techniques. Instead of describing a point, a fold in the waveform more accurately describes the physics in optical systems. StationNT5Bmedia (talk) 03:43, 9 August 2009 (UTC)


 * You're of course right about circles of confusion, etc., but note that when one describes an optical system in terms of its cardinal points, one is choosing to ignore all that. The Gaussian optics model (which models an optical system in terms of its cardinal points) is a paraxial model—aberrations are ignored.--Srleffler (talk) 05:19, 9 August 2009 (UTC)


 * Redbobblehat, I wrote much of this article based on Greivenkamp's book, which I don't believe identified the points and planes you mention as "cardinal". Before including them, we should locate a source that considers them to be "cardinal" points and planes. Note that the cardinal points discussed in the article remain fixed when the object moves relative to the lens, while all of the points and planes you mention change. The cardinal points are features of an optical system which characterize its first-order optical properties. The object and image points, etc., are not.--Srleffler (talk) 05:19, 9 August 2009 (UTC)
 * Thanks guys - I'm not a trained physicist but I hope you would agree that including (linkable) clear definitions of these fundamental optics concepts somewhere in wikipedia would be helpful. My question is "where?" Perhaps the fact that Gaussian optics redirects to Cardinal point (optics) is the problem ?


 * The only "definitions" of "cardinal points" I have managed to find refer to points of the compass. I was unable to determine whether Gauss coined its use in optics or whether he inherited it. Certainly most (but not all) diagrams of the cardinal points (optics) show the object and image planes, etc. but don't explicitly define them as cardinal points. In fact, object and image planes etc. are so taken for granted that few authors even think of reiterating their definition! (a couple which do and ). I'm guessing that the "cardinal points" of an optical system are so-called because they are the key reference points for spatial orientation within that system. I find it difficult to imagine a (functional) optical system which does not include an object and an image ...? Would it be fair to say that the official cardinal points (optics) are relatively meaningless when removed from the context of image and object points ? Redbobblehat (talk) 10:15, 9 August 2009 (UTC)


 * Conceptually speaking, the same language could describe events in microwave radiation, AM/FM radio signals, VHF/UHF television transception, rainbow spectral emissions, sun spots, magnetism, electric fields, inductance, capacitance, potential charge distribution, and other electromagnetic phenomena like photoreception, fuzzy logic, and neural pathways. The leap forward is to use this language as a conventional standard in those applications being developed. There are few references to the use of telescopes before the last 400 years, and even fewer to the use of lenses within the last 3000 years. So, on the cosmic perspective, these concepts are what I would consider to be theoretical exploration. StationNT5Bmedia (talk) 18:04, 9 August 2009 (UTC)


 * Redbobblehat, actually optical systems that do not involve an object or image are quite common. Not all optical systems are for imaging. I'm a laser physicist, so I commonly use optical systems that expand, collimate, or focus a laser beam rather than forming an image in the classical sense. Another example is optical systems used in illumination (e.g. a car headlamp, or the light source in a microscope), where the object is to obtain a particular distribution of light rather than an image of some object.


 * The main issue, though, is that the object and image planes are not properties of the optical system. A single lens can form images of objects at a variety of distances; the position of the image plane depends on where the object is relative to the lens. The cardinal points, on the other hand, are properties of the optical system. Unless the geometry of the lens changes (as in a zoom lens), the cardinal points remain fixed.


 * You're right that the main problem is that Wikipedia currently lacks an article on Gaussian optics. That would be the right place to explain object and image planes, and to explain how one determines their location for a given optical system. I've often thought about writing that article, but there are always other things to do... --Srleffler (talk) 19:05, 9 August 2009 (UTC)


 * OK, I'm out on a limb here ... From a very cursory glance, many "Gaussian-~" concepts appear to be applications of Gauss's maths to Maxwell's electromagnetic physics, and given that Gauss died in 1855 and Maxwell's ideas got going from 1864 onwards these "Gaussian-~" attributions are surely posthumous! Could this mean that a "Gaussian optics" article might legitimately ignore them, and so require only a little effort to collate Cardinal point (optics) (and maybe Gaussian beam?) with a brief section (see below) about object and image planes thrown in? In other words, change this article title to Gaussian optics and make Cardinal point (optics) a sub section thereof ?


 * Sketch for a brief section on object and image planes, etc. : this may be simplistic and poorly worded -
 * The object point is the point of origin for a cone of rays which enter an "optical system", and the image point is where that same cone of rays is brought back into focus by that "optical system".
 * The object plane is a (virtual?) surface which includes the object point and is perpendicular to the optical axis, and the image plane is the same deal with the image point.
 * When the object plane is at an infinite distance from the lens, the incident rays are parallel (collimated) and, by definition, the image plane is at the rear focal plane of the lens. Similarly, when the object plane is at the front focal plane the refracted rays are parallel and the image plane at infinity.
 * Although the terms "object" and "image" clearly originate from imaging applications, they may be used to denote the source (object) and product (image) of any system which follows the principles of Gaussian optics. (Will that do?)
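A minimal numeric sketch of the conjugate relation behind these definitions, for the simplest assumed case of a thin lens in air (the 50 mm focal length and distances are invented):

```python
# Hedged sketch (assumed numbers): the Gaussian conjugate equation
# 1/s + 1/s' = 1/f locates the image plane for each object plane, with s
# measured in front of the lens and s' behind it.

def image_distance(s, f):
    """Image distance s' conjugate to object distance s for focal length f."""
    return 1.0 / (1.0 / f - 1.0 / s)

f = 50.0
s_near = image_distance(200.0, f)   # finite object distance: image beyond f
s_far  = image_distance(1e12, f)    # object "at infinity": image -> focal plane
```

The second case is the limiting behaviour stated in the sketch above: as the object plane recedes to infinity, the image plane converges to the rear focal plane.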


 * Pinning down these simple "Gaussian" definitions should make it much easier to define/explain a whole bunch of concepts such as: circle of confusion, coma, depth of field (near and far points), field curvature (?), image circle, amongst others.


 * Newtonian and Gaussian conjugate equations use different definitions of object distance and image distance. Gaussian object distance is measured between the first principal plane and the object plane, and Gaussian image distance is measured between the second principal plane and the image plane. Newtonian object distance is measured between the object plane and the first focal plane, and Newtonian image distance is measured between the second focal plane and the image plane. In practical photography, this and other complexities are circumvented by calibrating the focusing distance as that between the image plane (aka film plane) and the object plane (aka 'the subject'). Redbobblehat (talk) 12:12, 10 August 2009 (UTC)
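The equivalence of the two conjugate forms is easy to check numerically for the simplest case, a thin lens in air, where both principal planes coincide with the lens so the Gaussian and Newtonian reference points differ only by f (numbers invented):

```python
# Hedged numeric check (assumed numbers): the Gaussian form 1/s + 1/s' = 1/f
# and the Newtonian form x * x' = f**2 describe the same conjugates, with
# x = s - f and x' = s' - f measured from the front and rear focal points.

f, s = 50.0, 200.0
s_prime = 1.0 / (1.0 / f - 1.0 / s)     # Gaussian image distance
x, x_prime = s - f, s_prime - f         # Newtonian object and image distances

newton_check  = x * x_prime             # should equal f**2
magnification = -x_prime / f            # one Newtonian form: m = -x'/f = -f/x
```

For this example x = 150 and x' = 50/3, so x·x' = 2500 = f² and m = -1/3, the same answers either route gives.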


 * Gaussian optics doesn't really depend on Maxwell. Gaussian optics is a very simple geometric model of imaging. I don't know the history, but I presume it actually does go back to Gauss. Don't confuse it with Gaussian beams, though. They are something else altogether, not related to Gaussian optics. Making a new article on Gaussian optics, with the cardinal points as a subsection sounds good.--Srleffler (talk) 16:48, 10 August 2009 (UTC)
 * That's what I wanted to hear. Converting Cardinal point (optics) to Gaussian optics would not be such a mammoth can of worms. --Redbobblehat (talk) 14:07, 11 August 2009 (UTC)


 * Your sample text on object and image planes seems good, but I would drop the reference to "collimated", which may be misleading. Light from an object plane at infinity is not "collimated". All rays from a given point on the object plane are parallel, but rays from different points on the plane are incident on the lens at different angles. One thing to be aware of: some authors refer to "the object point" as the point where the object plane crosses the optical axis; others refer to "an object point", which can be any point in the object plane. I would drop the point about "...they may be used to denote the source (object) and product (image) of any system which follows the principles of Gaussian optics." Gaussian optics is about imaging. I don't think it is an appropriate model for non-imaging systems.--Srleffler (talk) 16:48, 10 August 2009 (UTC)
 * Thanks Srleffler, I've struck out those technical dubiousities ;) On the issue of "object point" usage as a specific point on the optical axis vs any point on the object plane - does the same problem arise for other point/planes as well - ie focal point/plane, image point/plane, principal point/plane, etc ? Would this part of the paraxial approximation issue ? ... or have I misunderstood ? --Redbobblehat (talk) 14:07, 11 August 2009 (UTC)
 * No, the cardinal points are explicitly points on the optical axis. --Srleffler (talk) 02:31, 12 August 2009 (UTC)

Procedures for locating the cardinal points for a thick lens
1. By ray-tracing : (I quote Malacara pp.25-26 verbatim because my grasp is not sufficient to paraphrase) "By tracing two paraxial rays, the six cardinal points for a lens may be found... Figure 19 shows the six cardinal points of a lens when the object index of refraction is no and the image side index of refraction is nk. The points F1, F2, P1, P2 are located by tracing a paraxial ray parallel to the optical axis at a height of unity. A second ray is traced parallel to the optical axis but from right to left. The incoming rays focus at F1 and F2 and appear to refract at the planes P1 and P2. This is why they are called principal planes. These planes are also planes of unit magnification. The nodal points are located by tracing a ray back from C parallel to the ray direction F1A. If this later ray is extended back towards the F2 plane, it intersects the optical axis at N1. This construction to locate N1 and N2 shows that a ray which enters the lens headed towards N1 emerges on the image side from N2 making the same angle with the optical axis."
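The graphical construction Malacara describes can also be carried out algebraically with 2×2 paraxial transfer matrices. A hedged sketch (the symmetric biconvex example is invented; with air on both sides, as here, the nodal points coincide with the principal points, as discussed earlier on this page):

```python
# Hedged sketch (assumed numbers): cardinal points of a thick lens from its
# paraxial system matrix [[A, B], [C, D]] acting on (y, n*u), built from two
# refractions and a reduced translation.  Example: n = 1.5, R1 = +100,
# R2 = -100, center thickness t = 10, air on both sides.

def thick_lens_cardinal_points(n, R1, R2, t):
    P1 = (n - 1.0) / R1                  # power of the front surface
    P2 = (1.0 - n) / R2                  # power of the rear surface
    tau = t / n                          # reduced thickness
    A = 1.0 - tau * P1                   # system-matrix elements
    D = 1.0 - tau * P2
    phi = P1 + P2 - tau * P1 * P2        # total power (Gullstrand's equation)
    f = 1.0 / phi                        # effective focal length
    bfd = A / phi                        # rear vertex -> rear focal point F2
    ffd = D / phi                        # front focal point F1 -> front vertex
    H  = (1.0 - D) / phi                 # front vertex -> front principal plane
    Hp = (A - 1.0) / phi                 # rear vertex -> rear principal plane
    return f, ffd, bfd, H, Hp

f, ffd, bfd, H, Hp = thick_lens_cardinal_points(1.5, 100.0, -100.0, 10.0)
```

For this symmetric lens the two principal planes sit symmetrically inside the glass (H = -Hp), and the back focal distance equals f measured from the rear principal plane, which is exactly the relationship the traced parallel rays establish in the quoted procedure.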

2. Experimentally : "Figure 14-8 illustrates a procedure by which the focal points and principal points for a thick lens may be experimentally determined." (Concepts of Classical Optics, John Strong, 2004, pp.311-312.) Redbobblehat (talk) 12:12, 10 August 2009 (UTC)


 * The authors simplify the examples by tracing one ray from an object located virtually at infinity, so that the dioptric power of divergence from the source is nearly zero - an essentially collimated ray, traced parallel to the optic axis. A second ray is traced that passes through the optical center of the thick-lens system without deviating in direction; the simplest diagram shows a straight line at some angle passing through the center, treated as for a thin lens. The first ray comes to intersect the optic axis at the focal point, and continues until it intersects the second ray at the image plane, giving some idea of the magnification. These two ray traces simplify the concepts of dioptric power, so that mathematically, divergence of zero power and convergence of zero power allow the lens front and back focal lengths to be used for measuring distances along the visual axis. StationNT5Bmedia (talk) 14:27, 10 August 2009 (UTC)


 * In simplified examples, if a ray from the object plane passes through a thick lens's primary focal point, then the system can quickly be treated as a thin lens, if the path inside the lens is ignored. Otherwise, separate equations must be used for determining the paths inside a thick-lens system, including entrance & exit pupil apertures, image & object space focal lengths, wavelength (nanometers), angles of incidence, luminance, chrominance, magnification, and any internal refractive aberrations or reflections. In other words, the math gets much more complicated for a complex lens system. It becomes difficult in calculations to distinguish refractive power from focal distances, and to determine where an image plane will occur, since as distance from the lens changes, so does the power of convergence in the object or image space, by virtue of the light having travelled some distance along the visual axis of your ray-tracing model. StationNT5Bmedia (talk) 14:41, 10 August 2009 (UTC)


 * Nice graphic, StNT5B - especially the lonely pine tree on a gently sloping hillside: very Zen ;). If you put this back-to-back with a conventional ray diagram it would illustrate how the "point spread" factor (?) is conventionally ignored - maybe it belongs here? (BTW, that reminds me: I suspect the "object distance doesn't affect a pinhole camera's image sharpness" 'approximation' needs a little caveat ... I came up with c = a + (ai/o) where c is circle of confusion, a is aperture diameter, i is image-aperture distance and o is object-aperture distance - but I'm not confident on my maths!)
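
Redbobblehat's pinhole formula can at least be sanity-checked by similar triangles. The numbers below are assumed, and the formula itself is his conjecture rather than an established reference result:

```python
# Checking the suggested pinhole blur formula c = a + a*i/o by similar
# triangles: a point source at distance o in front of an aperture of
# diameter a casts a cone (apex at the source) onto a screen at distance i.

def blur_diameter(a, i, o):
    """Spot diameter on the screen for a point source, by similar triangles."""
    return a * (o + i) / o

a, i, o = 0.5, 100.0, 2000.0          # mm; assumed example values
c_formula = a + a * i / o             # Redbobblehat's expression
print(blur_diameter(a, i, o), c_formula)   # the two agree
```

As o grows large the spot diameter tends to the aperture diameter a, which supports the usual "object distance doesn't matter much" approximation for distant subjects.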


 * I think I just about follow your commentary! For myself, I am glad that there are a number of different approaches/formulae for modelling different aspects or functions of an optical system; which method (or model) you use depends on what specifically you want to know and what input data you have available. An uber-formula that accepts all parameters and predicts all characteristics would IMHO be impractical (unless you have a "point and click" lens-design software package and a bunch of eager PhD students to collect the data for you). Here are a couple of examples I'm only dimly aware of - so please correct me if I'm askew:
 * If you only want to know the Magnification (or Effective focal length) of a system, you can use Newton's lens equation, which only requires 3 handy data (effective focal length (or magnification) and the distances from the respective focal points to the object and image) - you don't need Gaussian optics to approximate what happens to the rays in between the focal points.
 * Likewise, exit and entrance pupils can be located by relatively simple perspective geometry, which (AFAIK) does not use any of the Cardinal points.
 * Perhaps this article might benefit by expanding a little on what Gaussian optics / Cardinal Points are useful for - their primary applications and limitations (paraxial approximation is already mentioned), and maybe even a couple of disambiguation pointers to topics which are frequently confused with Gaussian optics? For example, would it be fair to say that "Gauss's Cardinal points are pretty good approximations/short-cuts which save you having to calculate the angles of refraction at two curved surfaces for each ray refracted by each lens" ? --Redbobblehat (talk) 22:28, 10 August 2009 (UTC)
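
As a concrete footnote to the Newton's-equation point above: a minimal sketch using the common textbook sign convention (x measured from the front focal point F toward the object, x' from the rear focal point F' toward the image). The focal length and object distance are assumed values:

```python
# Newton's lens equation, x * x' = f**2, for a system in air.
# x  = object distance from the front focal point F
# x' = image distance from the rear focal point F'

f = 50.0      # effective focal length, mm (assumed)
x = 100.0     # object sits 100 mm in front of F (assumed)

x_prime = f**2 / x          # image distance beyond F'
m = -f / x                  # magnification; equivalently m = -x_prime / f
print(x_prime, m)
```

Note that at unit magnification, |m| = 1, this gives x = x' = f: the image stands exactly one focal length beyond F'.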


 * It does two things for you. Once you calculate the cardinal points, you can quickly determine how any object will be imaged. You can also quickly trace any paraxial ray through the system. (A paraxial ray is a paraxial approximation to a true ray.)--Srleffler (talk) 01:25, 11 August 2009 (UTC)


 * "Once you calculate the cardinal points, you can quickly determine how any object will be imaged." Doesn't Newton's formula allow you to do this even more quickly? Are we talking about the same limited set of "imaging" properties (namely "distances" along the optical axis and "heights" perpendicular to the axis)? I'm thinking that Gauss's cardinal point system extends Newton's similar triangles method in order to (efficiently) raytrace what happens in between the first and last focal points of the system ... I'm tempted to call this the "Gaussian space" (F1 to F2) as distinct from "object space" (O to F1) and "image space" (F2 to I). This conception is similar to Gauss's "nodal space" (between P1 and P2) which might as well be labeled "noodle space" or "here be dragons" ;) Is that too simplistic ? --Redbobblehat (talk) 15:17, 11 August 2009 (UTC)


 * If I'm reading Greivenkamp right, Newton's formula and Gauss's formula are both "Gaussian optics". They just differ in what information one needs. The Gaussian form is more generally useful, because you don't necessarily know where the focal points of an optical system are.
 * That's only the Gaussian form - which puts the cart before the horse! To locate the principal point, you first need to know the focal length and locate the focal point. Newton's lens user's formula, xx' = f², assumes unit magnification (h=h'), so I'm semi-guessing that his experimental method to determine (define) the effective focal length of a lens used the distance from the focal point to the image of a parallel light source at unit magnification. In Opticks (1704), Newton used a 1/3 inch hole in his window shutter as a parallel light source in several experiments, so with the image of this projected (by his favorite 4" lens) onto a piece of paper, he could easily find the focal point (smallest spot). Then, by moving the paper further away until the image diameter is exactly 1/3", he can measure 'the unit magnification distance from the focal point', which is the EFL. This method is more mathematically correct than measuring from the focus point to a lens surface, and works equally well for "thick" or compound lenses (no thin lens approximation required - that only applies to the Gaussian definition in Dioptrische Untersuchungen, 1840). --Redbobblehat (talk) 21:46, 15 August 2009 (UTC)


 * If I'm on the right track, Gauss's (effective) principal point, therefore, initially defines the geometrical center of magnification of the conjugates. The triangle between principal plane and focal point is simply Newton's unit-magnification-image plane to focal point triangle flipped horizontally about the focal point - a kind of virtual image if you like. Ray also emphasizes that the principal planes are 'planes of unit magnification' - meaning the image at P is identical to the image at P'. But Greivenkamp's observation (p.11) that the "nodal points ... define the location of unit angular magnification for a system" begs the question - why should nodal points be different from the principal planes? ... unless the nodal points are used for non-paraxial rays or something ... ? --Redbobblehat (talk) 21:46, 15 August 2009 (UTC)


 * Distances and heights yes, with the latter giving the magnification, but additionally Gaussian optics lets you trace any ray (paraxially) through the system. This lets you solve all kinds of problems. The space between the principal planes is not terra incognita. One can raytrace there just as easily as in object and image space. Of course, the importance of this is much diminished by the fact that one can now use a computer to trace huge numbers of rays without making the paraxial approximation.


 * If you want to think about spaces, you can divide an optical system up into n+1 optical spaces, where n is the number of surfaces in the system. Gaussian optics provides a mapping (a conformal mapping, maybe?) between any two of these spaces. Each space has an image of the object, and a pupil (an image of the system aperture). These images don't necessarily fall between the surfaces that define that space. The first optical space in the system is the object space. The pupil in object space is the entrance pupil. The image of the object is the object itself. The last optical space in the system is the image space. The image there is the final image of the optical system, and the pupil there is the exit pupil.
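
The "n+1 optical spaces" idea can be made concrete with paraxial transfer matrices. This is a hedged sketch (a single thick lens with assumed radii, thickness, and index, rays written as height plus reduced angle), not a general-purpose raytracer:

```python
import numpy as np

# Paraxial (ABCD) sketch of the optical-spaces idea: a system with n
# surfaces has n+1 spaces, and products of refraction and transfer
# matrices map a ray (height y, reduced angle n*u) between any two spaces.
# Single thick lens in air: 2 surfaces, hence 3 spaces.  Assumed numbers.

n_glass = 1.5
R1, R2 = 100.0, -100.0              # surface radii, mm
t = 10.0                            # center thickness, mm

phi1 = (n_glass - 1.0) / R1         # power of surface 1
phi2 = (1.0 - n_glass) / R2         # power of surface 2

refract1 = np.array([[1.0, 0.0], [-phi1, 1.0]])
transfer = np.array([[1.0, t / n_glass], [0.0, 1.0]])
refract2 = np.array([[1.0, 0.0], [-phi2, 1.0]])

system = refract2 @ transfer @ refract1   # object space -> image space
A, B = system[0]
C, D = system[1]

efl = -1.0 / C                      # effective focal length of the lens
print(efl)

# A ray entering parallel to the axis at height 1 mm:
ray = np.array([1.0, 0.0])          # (y, n*u) in object space
print(refract1 @ ray)               # same ray, expressed inside the glass
print(system @ ray)                 # ... and in image space
```

The full matrix product maps a ray from object space to image space; partial products give the same ray expressed in an intermediate space, which is exactly the mapping between spaces described above.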


 * I'm not fond of terminology that confuses the principal planes/points with the nodal planes/points. In most optical systems, they coincide. They don't always coincide in more general cases, however.--Srleffler (talk) 02:51, 12 August 2009 (UTC)


 * NB: Choosing to use an undeviated ray straight through the optical centre of the lens requires treating the latter as a thin lens. One doesn't have to do this in Gaussian optics; one can model thick lenses as well, in which case one chooses other rays. (Real lenses do not pass rays through their centres without deviation.)--Srleffler (talk) 01:19, 11 August 2009 (UTC)


 * I'm unfamiliar with the term "circle of confusion". I have studied systems that have a "circle of least confusion" involving toric surfaces.  Unless you're just joking about it, the expression used in finding the "circle of least confusion" refers to the circular spot between two perpendicular cylindrical line foci in the image space that would look sort of like it came from a slit-lamp. StationNT5Bmedia (talk) 02:42, 11 August 2009 (UTC)


 * Photographic usage : When an image "point" is in (best or sharpest) focus, its size is the "circle of least confusion" for the system. When an image point is not focused at the "film plane", the truncated cone makes a larger, dimmer spot on the film - its circle of confusion. Does that make sense ? --Redbobblehat (talk) 03:13, 11 August 2009 (UTC)


 * I considered creating the topic "circle of least confusion" and redirecting it to circle of confusion, but there are two other topics that already use the expression, Astigmatism and Chromatic Aberration. These are also suggested in search results for "circle of least confusion" in the search field box. It would be best either not to create or redirect that expression at all, or else to create the topic, redirect it, and link it inside the other two articles that mention it. StationNT5Bmedia (talk) 14:16, 11 August 2009 (UTC)

If (...), nodal points coincide with principal points (not planes)
In the subsection Cardinal point (optics) it said that "If the medium on both sides of the optical system is the same (e.g., air), then the front and rear nodal points coincide with the front and rear principal planes, respectively." I changed "planes" into "points" here, as having points coincide with planes sounds a bit like sorcery...

I did not change another statement in the same sentence, as it is not wrong but only incomplete. For the nodal points to coincide with the corresponding principal points, it is not necessary that the media on either side be the same, only that their refractive indices be equal. If the media are different but happen to have the same refractive index, the nodal points will still coincide with the corresponding principal points.

--HHahn (Talk)  12:57, 15 January 2010 (UTC)
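
HHahn's point can be illustrated numerically. In Gaussian optics the nodal points are displaced from the corresponding principal points by (n' - n)/Phi, where n and n' are the object- and image-space refractive indices and Phi is the system power; this is the standard result, but treat the example values below as assumptions:

```python
# Nodal-point displacement from the principal points: (n' - n) / Phi.
# The shift depends only on the two refractive indices and the power,
# so it vanishes whenever the indices are equal, whatever the media are.

def nodal_shift(n_object, n_image, power):
    """Distance from principal point to nodal point (same for both pairs)."""
    return (n_image - n_object) / power

# Same medium (air) on both sides: N coincides with P.
print(nodal_shift(1.0, 1.0, 0.02))            # -> 0.0

# Eye-like system: air in front, n' ~ 1.336 behind, power ~ 60 diopters.
print(nodal_shift(1.0, 1.336, 60.0) * 1000)   # shift in mm, about 5.6

# Different media with equal indices also give zero shift, as HHahn notes.
print(nodal_shift(1.45, 1.45, 0.02))          # -> 0.0
```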

Today's (August 19, 2010) changes
I restored some material that was deleted by another editor today, trying to merge his new material into the old. His edits added some good explanation, but deleted too much valuable information and left the article with a poor lead section. A good lead always starts by explaining what the subject of the article is. You can't start an encyclopedia article with a qualification like "Strictly speaking the concept of cardinal points applies only to..."

There may be differences in perspective at work here. For nearly all real optical systems, the cardinal points allow useful approximate modelling of the true performance of the system. Modern optical design is done primarily by ray tracing and other computational techniques, using cardinal points and other paraxial properties only as a rough first approximation to the system's performance. I applaud the effort to introduce some information on the important concept of transformations between optical spaces in an ideal system, however. Hopefully this can be expanded further.

I'm not comfortable with including peripheral definitions such as those for "optical space", "optical axis", and "Rotationally symmetric optical system" as subsections, especially not so high in the article. Generally, material like that should be in other articles that we can link to. I removed optical axis since there is already an article on that. For the others I'm not sure yet whether the definitions should be moved into new articles, or whether the material can be dealt with some other way. As it is, it breaks the flow of the article too much and pushes the definitions of the cardinal points themselves down where the reader is less likely to see them.--Srleffler (talk) 05:06, 19 August 2010 (UTC)


 * I am the person who made the changes you undid. I agree the introduction was too brief and had some stylistic problems, but I have some concerns with the introduction as you re-edited it.  First, stylistically I think the article is too concerned with the practical application of the cardinal points to the design of optical systems.  The cardinal points have important theoretical significance, and I feel the theoretical aspects should be emphasized since the cardinal points are first and foremost theoretical constructs.  Their practical application can be mentioned later, after fully developing the theoretical properties.


 * I certainly agree that peripheral definitions should be avoided which is why I think concepts such as entrance pupil, exit pupil, marginal and chief rays should be completely expunged from this article. These are important concepts to be sure but they are outside the scope of a discussion limited to the cardinal points.  Certainly, once the aperture stop is established, the cardinal points can be used to calculate the location of the pupils, but I think stops and pupils should be part of another article.  Discussion of stops and pupils after discussing the cardinal points is the approach taken in most textbooks.  If stops and pupils are ignored then marginal and chief rays should also be excluded since they can only be defined by stops and pupils.


 * Most importantly, I do have some concerns about accuracy. The introduction states that lens vertices are cardinal points.  They are not.  The cardinal points consist only of the three pairs of focal points, nodal points and principal points.  To be sure, there are lots of equations that relate the distance from the vertices to the principal points or focal points, but that doesn't make vertices cardinal points.  Lens vertices are usually the only physical landmarks in an optical system, so measurements must be made relative to lens vertices, which explains why there are equations that relate the position of the principal points to the vertices; but again, that doesn't make the vertices cardinal points, it just means that vertices are reference points.


 * Also, the term 'cardinal planes' is non-standard terminology. The planes perpendicular to the optical axis containing the focal points are called focal planes, and the planes containing the principal points are called principal planes or planes of unit magnification.  Usually there is no need to refer to the focal and principal planes collectively, and in any case the term 'cardinal planes' would not be used, since it may mislead readers into believing there are important planes associated with the nodal points.


 * Also, I think the introduction confuses the relationship between cardinal points and Gaussian optics. The cardinal points apply equally well to both Newton's and Gauss's approaches.  Newton placed the origins of object and image space at the focal points; Gauss placed the origins at the principal points.  The reason that Newton's equations differ so much from Gauss's is that in Gauss's approach the origins of object and image space are conjugate.  In Newton's system they are not.


 * Today, I made only minor changes to address the most important accuracy issues. Before I make any further edits I would like your feedback on these comments.  Thanks

TedEyeMD (talk) 21:48, 19 August 2010 (UTC)


 * I will think about this more carefully and reply tomorrow (no time today). I will check some references regarding some of the concerns you have raised above. I welcome a better explanation of the theoretical importance of cardinal points in ideal systems, but not at the expense of a clear and correct explanation of their practical value in the (approximate) analysis of real systems. Some of your edits to the article today are not completely correct, for example the first sentence: "The cardinal points consist of three pairs of points located on the optical axis of an ideal, rotationally symetrical, focal, optical system (IRSFOS)." This implies that only such systems have cardinal points, which is simply not true. --Srleffler (talk) 05:25, 20 August 2010 (UTC)


 * By the way, could you please take a look at the section Ideal optical imaging system? You wrote that an ideal system must meet three conditions, but you only gave one of them. I'm not sure if some text got deleted by accident, or if you left off without finishing what you were writing.--Srleffler (talk) 05:29, 20 August 2010 (UTC)

I took a look at Greivenkamp's book, which is a reference I have used in the past when writing material for this article. He does refer to "cardinal points and planes", on page 6. He does not refer to vertices as cardinal points, and I am fine with not referring to them as such in the article.

Information on practical application of the cardinal points in the analysis of real optical systems is at least as important as explanation of their theoretical importance to idealized systems. We need both in the article. It may well be best to explain the theoretical/idealized case first.

Recent Changes August 20, 2010
Your point about the incompleteness of the section on ideal optical systems is well taken. I actually fell asleep before finishing it; sorry, I will fix it today. TedEyeMD (talk) 13:19, 20 August 2010 (UTC)

Whether only IRSFOS systems have cardinal points, or all rotationally symmetric focal optical systems (RSFOS) do, is a question of viewpoint. To me, only ideal systems can have cardinal points. Taken as a whole, non-ideal systems do not have cardinal points, but certainly the paraxial regions of such systems do. If you want to say the "system" has cardinal points because its paraxial region has cardinal points, well, I suppose you could say that. However, I have found that many people are unclear on the difference between the entire system and the paraxial region. It is perhaps a fine distinction, but to me some degree of clarity is added when one distinguishes between the system as a whole and the system's paraxial region. I am basically following the approach taken by Welford in Aberrations of Optical Systems. First, Welford establishes the properties of ideal optical systems, even though he states a priori that such systems do not exist in practice (except for a plane mirror). Then he shows that this is useful because real optical systems do behave ideally in the paraxial region, which can have a significant size in some cases.

I agree that the application of cardinal points to optical design is an important feature, but first I think we should be clear on what the cardinal points are before going on to applications. TedEyeMD (talk) 13:19, 20 August 2010 (UTC)


 * Non ideal focal systems certainly have front and rear focal points and principal points. It seems strange to assert that such systems don't have "cardinal points".--Srleffler (talk) 21:12, 22 August 2010 (UTC)

Recent Changes August 21, 2010
I expanded the introduction and renamed it to what I thought was a more appropriate and informative title for the paragraph. I expanded the discussion on optical spaces. I added a section explaining the difference between afocal systems, which lack cardinal points even if ideal and rotationally symmetric, and focal systems, which have cardinal points. This also seemed a good place to introduce the notion of focal points. TedEyeMD (talk) 19:17, 21 August 2010 (UTC)


 * Overall, I like what you have written. I think the article is much better beginning with a discussion of transformation between optical spaces. I made a bunch of "style" edits to make it conform better to Wikipedia's stylistic conventions (sentence case for section headings, very restricted use of boldface, etc.). I made a few more substantial changes that I hope will make a few points clearer.


 * I didn't see a clear definition of "conjugate" in there anywhere. That concept really needs to be explained, at least briefly, before it is used. We can't assume the reader will have any idea what is meant by the term.


 * The paragraph in the introduction that reads "One point of each pair is in object space and the other in image space. The principal points are conjugate to each other, and likewise the nodal points are conjugate to each other. The focal points are not conjugates." is a particular problem. We can't assume this early in the article that the reader knows what object and image space are, or what is meant here by "conjugate". I am thinking about deleting the paragraph or moving it somewhere else in the article. Ideally, the introduction should be comprehensible to a smart high school student with an interest in optics, even if the article gets more technical later.--Srleffler (talk) 22:12, 22 August 2010 (UTC)

Hi, I noticed some of your changes today. You added some links to other Wikipedia pages, which is helpful, but I have a few concerns. For instance, the link to rotational symmetry goes to a page that describes n-fold rotational symmetry. The problem is that the type of rotational symmetry I am referring to here is not n-fold, so I think the link may be more confusing than helpful. Similarly, the link to optical system goes to a general page on optics, and the reader has to go a long way to find optical system. The link to ray defines ray in terms of physical optics, and this is counter to a decades-long trend to make geometrical optics as free of physical optics as possible. The point about usage of conjugate is well taken.

When you talk about writing for a high school audience you raise a very important point. At what level should the article be pitched? Certainly I have found many Wikipedia pages that are well above high school, even undergraduate, level. Could you provide some link that would provide guidance on this point?

Also, I would like to add some illustrations could you provide some guidance on how to add drawings and illustrations? TedEyeMD (talk) 01:19, 24 August 2010 (UTC)


 * I see your point about the articles on rotational symmetry and the link to optical system. I disagree about ray (optics). That article is not defining rays in terms of physical optics, but rather putting the geometrical optics concept of rays into proper context. It is important for readers (at least those who are interested in the details) to understand that rays, as used in geometrical optics, are an abstraction. Light is not really composed of rays. Rays are an abstract model that helps us perform approximate computations of the behaviour of light in real optical systems. They have no fundamental importance beyond that. (This is similar to my view of cardinal points. Transformation between optical spaces using Gaussian optics is a convenient approximation to reality, nothing more.) It is important for articles to link to the articles that explain or define key concepts like rays.


 * I don't have a link handy to a page on appropriate level for articles, but I know I have read an essay or guideline on this in the past. I think there is broad agreement on Wikipedia that the appropriate level for an article varies depending on the subject. Since cardinal points are used in introductory optics, it seems sensible that the article on them should be approachable to someone new to the subject, like a high school student who has started learning about optics in science class and wants to know more. The article can start simple and get more technical; the reader may learn enough to be able to follow along or not. It's OK if readers get some information from a simple introduction and then drop out as the article moves into more technical discussion. The simpler material and explanations have to come first, though, so as not to put off readers who lack the background for the more technical stuff.


 * Other articles may naturally be more technical, because the topics are of narrow interest and are unlikely to be read by someone who doesn't have a higher level of education. It's pretty common, though, for science and especially math articles to be written at much too high a level for the subject. Bad examples are best not followed.--Srleffler (talk) 04:28, 24 August 2010 (UTC)


 * I can't provide much guidance on how to contribute illustrations, but if you browse around the help pages starting with the image upload page you should be able to find some helpful information. It's a good idea to look first before you start drawing. I know that some image formats are preferred over others for technical reasons. I believe there is a vector graphic format (svg, perhaps?) that is preferred for illustrations because they scale well and are easy for other editors to make changes to (keeping with the philosophy of the whole encyclopedia being easy to edit and update). Bitmapped formats like JPG are discouraged for illustrations. I think the preferred format has free editing software available online, but I don't know for sure.--Srleffler (talk) 04:37, 24 August 2010 (UTC)

By the way, thanks so much for helping with the reference to Welford's book. Also, I don't think the Wikipedia page on the paraxial approximation is very good. The paraxial approximations are really three specific approximations, and the wiki page simply talks about the small angle approximation. So I have again removed the link to that page, taken all references to paraxial out, and will discuss paraxial rays later in detail on this page. TedEyeMD (talk) 01:37, 24 August 2010 (UTC)


 * The right solution to that problem is to improve the article on the paraxial approximation, not to remove links to it and certainly not to try to work an extended explanation of it into this article. Part of what makes Wikipedia such a powerful resource is hyperlinking: articles need not and should not explain in depth all the concepts they use, but rather should provide natural links to those explanations. Readers who need more background can follow the links to get it; those who don't can ignore them. Trying to explain everything in one article leads to bloated articles that are hard to read.--Srleffler (talk) 04:28, 24 August 2010 (UTC)

Well, yes and no. Sure, ideally we should link to other articles, and such links do keep articles from becoming bloated. On the other hand, when the linked-to article is not just uninformative but has a high potential to mislead, I think the link should be removed; otherwise the reader may not simply ignore the linked-to article, the reader may be confused by it. While the ideal solution is to edit the linked-to article, that may not be practical because of time constraints. So I think there is something to be said for omitting links in certain cases.

Could you provide some guidance on how to generate and insert diagrams? Thanks TedEyeMD (talk) 18:55, 24 August 2010 (UTC)


 * See above for what little advice I have to offer.--Srleffler (talk) 03:34, 26 August 2010 (UTC)

I noticed that a couple of times srleffler has restored something to the effect that exact calculation requires the application of Snell's law and the law of reflection each time the ray reaches an interface between two media. I have several problems with this sentence. First, it implies that both the laws of reflection and refraction are used at every interface; in fact, at any one interface only one or the other is used, not both. Second, the law of reflection is just a special case of Snell's law, so you technically don't need to mention it. Third, in addition to Snell's law and the law of reflection there is the law of rectilinear propagation, which is just as important in exact ray tracing. Fourth, there is no need to be so detailed here. This section deals with the concept of treating an optical system as a physical device for achieving a mathematical transformation; the details of ray tracing need not be mentioned here. So I deleted it. TedEyeMD (talk) 01:09, 25 August 2010 (UTC)


 * I haven't followed what's going on, but I'd like to point out that you do indeed often need to consider both reflection and refraction, via the Fresnel equations, at each interface, so that you can analyze both the desired main imaging path and the various unwanted flare reflection paths. I'm not sure what you mean by reflection being a special case of Snell's law. Dicklyon (talk) 04:10, 25 August 2010 (UTC)


 * Sure, it is certainly true that at each refracting surface there is both a reflected ray and a refracted ray, as the Fresnel equations describe. However, I do not agree that you "indeed often" have to consider the reflected ray.  In fact, in geometrical optics and lens design the reflected ray is almost always ignored unless, as you say, one is looking at flare or stray light, which is usually done (if at all) after the basic design is finished using only the transmitted rays.  Moreover, don't forget that lens coatings further decrease the importance of the reflected ray.  Also, remember that Fresnel's equations are part of physical optics, not geometrical optics.  In any case, when a lens designer does a design, usually only the refracted ray is of concern.


 * Now, why is the law of reflection a special case of Snell's law? Well, there are strict sign conventions, often not taught in basic courses but very important.  For instance, all distances and angles are directed, and light travels in the positive z direction (left to right) in media with positive refractive indices.  For light to travel from right to left, the refractive index is negative - yes, negative refractive indices.  Now consider n·sin(angle of incidence) = -n·sin(angle of refraction).   The solution to this is angle of incidence = -(angle of refraction), which is the law of reflection when the proper sign conventions using directed angles are applied, since the angles of incidence and reflection are of opposite signs.  TedEyeMD (talk) 01:30, 26 August 2010 (UTC)
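
TedEyeMD's sign-convention argument is easy to check numerically: substituting a negative index for the second medium in Snell's law returns the reflected-ray angle. A minimal sketch (the index value is arbitrary):

```python
import math

# Reflection as a special case of Snell's law under the directed-angle
# convention: setting n_out = -n_in in n_in*sin(i) = n_out*sin(o)
# yields o = -i, i.e. the law of reflection.

def snell(n_in, n_out, theta_in):
    """Outgoing angle from Snell's law, n_in*sin(i) = n_out*sin(o)."""
    return math.asin(n_in * math.sin(theta_in) / n_out)

theta_i = math.radians(30.0)
theta_out = snell(1.5, -1.5, theta_i)     # "refract" into index -1.5
print(math.degrees(theta_out))            # -> -30.0: the reflected ray
```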


 * The reason I insert sentences like that is that I want it to be clear to the reader that the transformation described is an approximation to the true effect of a real optical system. This is an important point that should not be glossed over. Optical theory is a hierarchy of approximations to reality, with the subject of this article being the lowest tier—the simplest and worst approximation. The article needs to put its subject matter into context, and say something about what the next step is for obtaining a better approximation to reality. It's interesting that you describe the subject of the article as being an "optical system as a physical device for achieving a mathematical transformation" rather than the other way around. As a physicist, I tend to view math as a tool for describing the physical world. I think we are approaching this subject matter from different directions.


 * I'll try to take your other concerns into account in the wording, if I reinsert something like this. One certainly can apply both laws at every interface, as Dick notes. The fact that one of the laws can be constructed as a special case of the other is not a barrier to mentioning both. I agree that rectilinear propagation is equally important.--Srleffler (talk) 03:34, 26 August 2010 (UTC)

The fact that one is a special case of the other does not preclude mentioning both; however, the fact that it has no place in an introductory-level article on cardinal points does preclude mentioning it. If you want to trace reflections as well, then for each ray going into the optical system multiple rays come out. None of your illustrations show this, and it would really confuse introductory-level readers (the high school students you referred to earlier). Moreover, it would take things to a higher level of approximation and, as you note, we are dealing with the lowest level of approximation, so multiple reflections do not belong here. I find it very difficult to justify a discussion of Fresnel's laws and multiple reflections in this article. I am aware of no textbook that introduces multiple reflections at this stage, except perhaps to mention that they should be ignored at this point. 75.33.34.83 (talk) 00:08, 27 August 2010 (UTC)


 * Fair enough. I agree that introducing ray splitting at surfaces (via reflection and refraction) is beyond the scope of this article. It gets in the way of discussing the idea of a transformation where one ray exits for each one that enters, which is certainly an important optical approach (both in Gaussian optics and in ray tracing).--Srleffler (talk) 03:27, 28 August 2010 (UTC)

Do not read too much into my wording "optical system as a physical device for achieving a mathematical transformation". I debated over how to word it and considered doing it the other way around, but I chose this wording for pedagogical reasons. By wording it this way my intent was to surprise the reader, since we usually think, as you do, that the physics is primary and the math secondary. Pedagogically, by wording it this unusual way I wanted to catch the reader off guard and make them really think about this. Our differences are due not to a difference in technical background but rather to a difference in pedagogy. I too am a physicist and an optical engineer who has taught optics at the postgraduate and postdoctoral level. As an engineer I am well aware of the approximations made in applying paraxial optics to real-world optical systems. However, as an educator I am also quite aware that there is a great deal of confusion between first-order optics, paraxial optics, and Gaussian optics. The three are clearly related but in fact different. Worse, the literature often confuses the three, using them synonymously, which is inappropriate. It was my intention to do a complete rewrite of this article. I am doing that rewrite bit by bit, for a few reasons. First, when I started editing this article there was some serious misinformation, such as including lens vertices as cardinal points. Also, there was a lot of extraneous information about stops and pupils that was very interesting but not about cardinal points. So I thought some changes were immediately necessary. Second, it will take some time to do a full rewrite, so I was trying to do it piecemeal.

I realized immediately, from all the discussion of stops and pupils and the inclusion of vertices among the cardinal points, that you are definitely interested in the practical applications, and I think that's great. However, there is no need to sacrifice rigor for practicality. My plan was to start with theoretical rigor, which is why I started by using the word ideal. If you begin with the premise that only ideal optical systems have cardinal points, then no approximation is necessary. I planned later to introduce approximations. I would explain each approximation as it was introduced, and why. The reader would finish with a very clear understanding not only that the cardinal points merely approximate the behavior of real-world optical systems, but also precisely what the approximations are and what their implications are. My approach is: first explain exactly what the cardinal points are without approximation, which means with ideal systems, then show how, by using approximations, we can apply the concept to real optical systems. However, I would put the approximations lower in the article. I certainly realize that you want people to know that the cardinal points approximate the behavior of real systems. I have absolutely no intention of "glossing over" that point. Indeed, I want the reader not only to know that the cardinal points approximate the behavior of real-world systems, but also to know why they do. However, you keep redoing the changes I make, so I have to re-edit and don't have time to introduce new material. Please be assured I will bring in approximations quite explicitly. To do so, I would really appreciate it if you could help me find out how to introduce my own illustrations; that would help a lot. Thanks. 75.33.34.83 (talk) 00:08, 27 August 2010 (UTC)


 * I see. Overall, I like what you are trying to do and encourage you to continue. I don't agree with everything, but hopefully we can find compromises that work for both of us (and produce a great article). Like you, I don't like the confusion students often have between different optical approaches and approximations. I want Wikipedia articles to put optical techniques into context, explaining how they relate to other techniques, so readers are not left with the impression that transformation between optical spaces using the cardinal points is an exact model, for example.


 * Piecemeal editing is fine. It's how things work here. Yes, having people changing what you are trying to write is inconvenient, but it can make for better articles in the end. Two editors with differing opinions are forced to think about the material more deeply and to consider alternate points of view. That's good.


 * I am fine with focusing on the rigorous, idealized theory first in the body of the article, and deferring the discussion of practical application until later. The introduction needs to mention both, though. A good introduction provides a brief summary of the most important features of the subject of the article. This is important. Unlike when teaching a class or writing a textbook, you cannot assume that your audience will read the article all the way through. Encyclopedia articles need to work for someone who needs a little information, and reads the introduction to get a general idea of the concept and then stops reading that article. --Srleffler (talk) 03:27, 28 August 2010 (UTC)

A specific comment: In this edit you moved the list of the three cardinal points back down to where it was (and made some other changes in the wording). This is a problem because, as the intro stands, you use the specific cardinal points in the discussion before you have introduced them to the reader. You have to say what the cardinal points are before you say something like "only four points are necessary: the focal points and either the principal or nodal points". It just doesn't flow right the way it is.--Srleffler (talk) 03:31, 28 August 2010 (UTC)

I moved the more mathematical material discussed above lower in the article, because it is less approachable than some of the other material, and because it was left incomplete by the editor who added it. It didn't actually get around to explaining how the cardinal points are related to the mapping between optical spaces.--Srleffler (talk) 23:28, 7 September 2013 (UTC)
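To make that mapping concrete, here is a minimal sketch (an editorial illustration, not taken from the article or any cited source) of how the cardinal points fall out of a paraxial ray-transfer (ABCD) matrix. It assumes the same medium on both sides of the system (so the nodal points coincide with the principal points) and the usual sign convention, with distances positive to the right:

```python
# Locating the cardinal points of a paraxial system from its ray-transfer
# (ABCD) matrix. A ray is (height y, angle u); the system maps
# y' = A*y + B*u, u' = C*y + D*u. Same refractive index on both sides.

def thin_lens(f):
    """Ray-transfer matrix (A, B, C, D) of a thin lens of focal length f."""
    return (1.0, 0.0, -1.0 / f, 1.0)

def gap(d):
    """Ray-transfer matrix of free propagation over a distance d."""
    return (1.0, d, 0.0, 1.0)

def compose(m2, m1):
    """Matrix product m2 @ m1: the ray passes through m1 first, then m2."""
    A2, B2, C2, D2 = m2
    A1, B1, C1, D1 = m1
    return (A2 * A1 + B2 * C1, A2 * B1 + B2 * D1,
            C2 * A1 + D2 * C1, C2 * B1 + D2 * D1)

def cardinal_points(A, B, C, D):
    """Return (efl, bfd, ffd, zH, zHp) for the system matrix [[A, B], [C, D]].

    efl : effective focal length (a ray at height y exits with angle C*y)
    bfd : back focal distance, rear vertex to F'
    ffd : position of F relative to the front vertex (negative = to the left)
    zH  : front principal plane H, relative to the front vertex
    zHp : rear principal plane H', relative to the rear vertex
    """
    if C == 0:
        raise ValueError("afocal system: no finite cardinal points")
    efl = -1.0 / C
    bfd = -A / C          # where an incoming parallel ray crosses the axis
    ffd = D / C           # F, one focal length in front of H
    zH = (D - 1.0) / C    # H = F shifted right by one focal length
    zHp = (1.0 - A) / C   # H' = F' shifted left by one focal length
    return efl, bfd, ffd, zH, zHp
```

For example, two thin 100 mm lenses separated by 50 mm give an EFL of about 66.7 mm and a BFD of about 33.3 mm, matching the standard two-lens combination formula; the principal planes land symmetrically inside the system.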

Drawings
The recommended file format for drawings and diagrams on Wikipedia is SVG. Inkscape is a free editor for these files. For more info, see How to draw a diagram with Inkscape. You upload the image at the Upload page; be sure to read and follow the prompts. When uploading images, you have to specify the license under which they are released. The upload page's prompts try to help you with this. For an image you drew yourself, common choices would include the CC-BY-SA license, or just releasing the image to the public domain.--Srleffler (talk) 06:25, 28 August 2010 (UTC)

Toepler points
There are numerous names for this addition to the Gaussian model; IMO "Toepler points" is the most distinctive:
 * “Negative principal and nodal points. Negative principal (Toepler or symmetric) points are those conjugate points of a system for which the lateral magnification is – 1, i.e. at 2f in both object and images spaces from the (usual) principal points. The negative nodal points are points for which the angular magnification is unity and negative. They lie as far from the focal points as the ordinary nodal points (q.v.) but on opposite sides. Also known as Anti-principal and Anti-nodal points. See Centred optical systems.” [Gray, H.J, 1958, Dictionary of Physics. p.355]

AFAIK (from citation) they originate in Toepler, A. Bemerkungen ueber die Anzahl der Fundamentalpuncte eines beliebigen Systems von centrirten brechenden Kugelflaechen: Pogg. Ann., cxlii. (1871), pp.232-251.

The main use of Toepler points (AFAIK) is to actually measure the focal length of a lens, using the conjugate distances at unit magnification. This method is implicit in Newton's conjugate equation, but for a full description of the method (and his contraption, "the focometer") see Thompson, S.P. 1912, The Trend of Geometrical Optics: "Proceedings of the Optical Convention 1912", pp. 297-307. --Redbobblehat (talk) 01:11, 11 February 2011 (UTC)
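The measurement idea above can be sketched numerically (an editorial illustration, not from the cited sources): at lateral magnification -1 a thin lens has its object and image each at 2f, so the measured object-to-image separation is 4f; for a thick system, the separation between the principal planes (the "hiatus") must be subtracted first.

```python
# Sketch of the unit-magnification (Toepler / symmetric-point) focometry
# idea: place the object so the image is inverted at the same size
# (m = -1), measure the object-to-image separation, and divide by 4.

def focal_from_symmetric_conjugates(separation, hiatus=0.0):
    """Focal length from the object-image distance at magnification -1.

    For a thick system, pass the principal-plane separation as `hiatus`.
    """
    return (separation - hiatus) / 4.0

def image_distance(f, s):
    """Thin-lens check: 1/s + 1/s' = 1/f gives the image distance s'."""
    return 1.0 / (1.0 / f - 1.0 / s)
```

With a thin f = 50 mm lens, the symmetric conjugates sit 100 mm on each side, so the measured separation of 200 mm returns f = 50 mm, consistent with the thin-lens equation.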

(BTW: Gauss, C.F. Dioptrische Untersuchungen, Goettingen, 1841. The "nodal points" were proposed/added by Listing, J.B. Beitrag zur physiologischen Optik, Goettinger Studien, 1845 (in German). Apparently the term "cardinal points" was coined by Felice Casorati (mathematician) and Galileo Ferraris, but I haven't found chapter and verse for that one.) --Redbobblehat (talk) 01:11, 11 February 2011 (UTC)

Principal planes and points
Section: Principal planes and points

This section and the accompanying diagram “Various lens shapes...” do not explain the concept clearly. The diagram shows a random collection of arrows impinging on lenses from all directions, signifying nothing. I respectfully suggest that this section be rewritten with a more meaningful diagram. --Prof. Bleent (talk) 14:41, 24 April 2015 (UTC)


 * The arrows are meant to indicate the direction to each surface from its center of curvature. I agree that it's not a very good diagram.--Srleffler (talk) 04:08, 25 April 2015 (UTC)