Talk:Normal curve equivalent

Copyright
See this for a discussion about the copyright issue for the image on this page. Chris53516 13:52, 2 August 2006 (UTC)

Conversion chart
A user posted a conversion chart (a PDF from the Texas education department) that displays a conversion from percentile ranks to normal curve equivalents. I checked this against some software I use for another test, and it was the same. I thought that the conversion was dependent on the normative sample that took the test, so I thought that each chart would be different. After I checked it with my software, I'm not so sure. Can we get a statistics expert to weigh in on this issue? — Chris53516 (Talk) 18:03, 1 December 2006 (UTC)
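For what it's worth, the percentile-rank-to-NCE conversion can be reproduced from the standard normal distribution alone, with no reference to any particular normative sample, which would explain why the chart matched your software. A minimal sketch (the actual chart's rounding conventions are an assumption):

```python
from statistics import NormalDist

def pr_to_nce(pr: float) -> float:
    """Convert a percentile rank (0 < pr < 100) to a normal curve equivalent.

    The percentile rank is mapped to a z-score via the inverse normal CDF,
    then rescaled to mean 50 and standard deviation 21.06.
    """
    z = NormalDist().inv_cdf(pr / 100)
    return 21.06 * z + 50

# The anchor points line up by construction:
for pr in (1, 50, 99):
    print(pr, round(pr_to_nce(pr), 1))
# → 1 1.0
# → 50 50.0
# → 99 99.0
```

Any chart built this way depends only on the normal curve, so every publisher's table should agree up to rounding.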


 * I am not an expert psychometrician (nor will I be in the near future), so I will not comment factually, but in response to your question, as I understand it, the NCE does not take the normative sample into account. I believe the NCE is meant to provide a (very) rough concordance among tests. It is a transformation of a z-score (probably computed by the test publisher) onto another normal distribution with mean 50 and SD 21.06. I am not sure how those values are chosen, but I believe it is meant to put all tests on the same continuum. If I am correct, then I maintain my stance that the NCE is a somewhat silly measurement *because* it does not take into account the normative sample and the psychometric properties and content of the test. On the other hand, the National Percentile Rank *does* depend on the sample, as does the National Stanine. Most standardized test batteries use the NPR as their primary scoring technique.


 * Educated opinion: The phenomenon that you mention could be because you had a very large sample, as did the TEA, so the percentiles and NCEs may be the same between both tests, as n goes to infinity. -- StatsJunkie 06:05, 9 May 2007 (UTC)


 * The purpose of the normal curve equivalent is to solve the problems associated with using percentile ranks and stanines in statistics. It is inappropriate to use percentile ranks and stanines in statistical calculations because these types of numbers do not have equal intervals between scores. That would be like the inch changing in size between 2 and 3 and between 7 and 8. The distance between percentile ranks 1 and 2 is a different size than the distance between 49 and 50. The normal curve equivalent solves this problem. Therefore, it is incredibly useful when conducting statistical analyses and is not a "silly measurement." No measurement "take[s] into account the... psychometric properties and content of the test." That's not possible. If you are correct in saying that the sample size impacts the NCE distribution, then it actually does take into account the performance of the students; however, my feeling is that it does not, since it relies on the z-score. Either way, NCEs are indispensable when statistics on student achievement are necessary. — Chris53516 (Talk) 14:38, 9 May 2007 (UTC)
 * Correct. After I did a lot of thinking about this, the NCE is not measuring performance on a test. Rather, it is measuring a latent variable (say, "mathematics achievement, 6th grade") via performance, if you will. The missing piece of the puzzle is the concept of validity. A big assumption with the NCE is that the test actually measures what it purports to measure (the latent variable). If this assumption is met, the NCE measures more than achievement, and thus can be used to make comparisons across test batteries. So it is this concept of validity that allows us to use the NCE beyond an indicator of performance, rather than the fact that it uses a z-score. I am investigating the claim about averaging NCEs and equispaced intervals for my own knowledge, because in addition to your statement, the Terra Nova 2 Spring Norms book defines the NCE similarly. Most of my work has been limited to computer adaptive testing. I guess our analog of the NCE is the ability level ("theta" score), which is similar in nature but only computed using the standard normal. When I find some free time I am going to test my claim that the percentiles converge to the NCEs as the sample size goes to infinity. I suspect that it is true due to the asymptotic theory of sample quantiles, and that this is how the strange value of 21.06 is obtained. I could be wrong. StatsJunkie 22:05, 17 May 2007 (UTC)
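As an aside for anyone revisiting this thread: the convergence claim above can be checked numerically. The sketch below (the normal population, its parameters, and the sample size are illustrative assumptions) compares the NCE computed from a score's population z-score with the NCE computed from its empirical percentile rank in a large sample; the two agree closely, and the agreement tightens as the sample grows.

```python
import bisect
import random
from statistics import NormalDist

random.seed(0)
MEAN, SD, N = 500.0, 100.0, 200_000   # hypothetical test-score population

scores = sorted(random.gauss(MEAN, SD) for _ in range(N))

def nce_from_z(x: float) -> float:
    """NCE computed directly from the population z-score."""
    return 21.06 * (x - MEAN) / SD + 50

def nce_from_pr(x: float) -> float:
    """NCE computed from the empirical percentile rank of x in the sample."""
    pr = bisect.bisect_left(scores, x) / N   # fraction of scores below x
    return 21.06 * NormalDist().inv_cdf(pr) + 50

for x in (400.0, 500.0, 600.0):
    print(f"{x}: z-based {nce_from_z(x):.2f}, PR-based {nce_from_pr(x):.2f}")
```

With N = 200,000 the two columns differ by only a few hundredths, consistent with the asymptotic theory of sample quantiles mentioned above.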

Troubling article
In its present form this article is misleading and weird.

As the concept is defined here, the only difference between a z-score and a "normal curve equivalent" is that z-scores make the average 0 and the standard deviation 1, whereas these make the average 50 and the standard deviation 21.06.

Just as with z-scores, this is applicable just as much if scores are not normally distributed ("bell-shaped") as it is if they are. The normal distribution plays no role at all in the definition of the concept, yet the name of the concept suggests otherwise.

The number 21.06 looks arbitrary. The article does not hint at where it came from. Michael Hardy (talk) 15:42, 28 January 2010 (UTC)
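One note for later readers: 21.06 falls out of the convention that NCEs of 1, 50, and 99 coincide with percentile ranks of 1, 50, and 99. Given that anchoring, the scale factor is forced (a sketch of the arithmetic):

```python
from statistics import NormalDist

# z-score at the 99th percentile of a standard normal
z99 = NormalDist().inv_cdf(0.99)   # ≈ 2.3263

# Scale chosen so that NCE 99 lands exactly on percentile rank 99:
#   99 = sd * z99 + 50   =>   sd = 49 / z99
sd = (99 - 50) / z99
print(round(sd, 2))                 # → 21.06
```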

Contradictions
Some excerpts [sic]:

 * is a score received on a test based on the percentile rank.
 * It is a measurement of where a student falls on a normal curve, indicating a student's rank compared to other students on the same test.[3]
 * The equation for converting this scores
 * is as follows:[1]
 * NCE = 21.06z + 50
 * where z is the z-score (standard score)

If the formula given is correct, then this is not based on percentile rank, and it has nothing to do with the normal curve. Michael Hardy (talk) 15:47, 28 January 2010 (UTC)

Sigh
The web page linked to instantly clarifies in a few words what nobody bothered to say in the article with its many words. This really looks like something written by someone in the habit of taking mathematical formulas as dogma without wondering why they make sense. Michael Hardy (talk) 15:53, 28 January 2010 (UTC)

Further edits
In this edit I got rid of some nonsense and explained where the number 21.06 came from. Whoever wrote this seems gullibly credulous about mathematical assertions. I'll be back for more. Michael Hardy (talk) 20:45, 28 January 2010 (UTC)