
Talk:NTSC/Archive 1

Archive 1

Length

This article is way too long. Historical aspects of the evolution of NTSC, including all explanations of why certain aspects were defined that way, should be in a separate "History of NTSC" article, comparisons to other systems in a separate article; framerate conversion info should be dropped and replaced by links to Television standards conversion and Telecine articles. NewRisingSun (talk) 16:24, 21 November 2007 (UTC)

I agree with some of these but I think the history should stay in (and be expanded) - I don't think fragmenting the history of every subject out of the main articles is generally desirable. The article is now about 36 kb, which should no longer be a problem for most browsers. Some copyediting definitely appropriate. Lots of in-line references, so I took out that tag, too. --Wtshymanski (talk) 04:31, 4 February 2008 (UTC)


How to organize encyclopedia material for such a complex topic

I think we need to get into the head of the person reading this. Starting with what a lay person is likely to have experienced in his/her own everyday life should come first. In any case, I think an outline would be a good place to start. I noticed that the article starts with many ideas that are foreign to most people. So the article should start off GENTLY. As with most informative text, simple to complex is the best approach, as long as the simple text is also true and correct. And avoiding tightly interrelated forward references as much as possible greatly serves the learning experience of someone reading the article. Coverage could go something like this? (I know, my wording is clumsy.) I include [comments] between brackets. --208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

The term "NTSC" has several practical meanings depending on the context.--208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

For the public in general, the "NTSC" term is imprecise industry jargon meant to convey a "TV standard" or "video standard". The purpose of a standard is to ensure that TV broadcast signals, prerecorded media such as DVDs and VCR tapes, and equipment that conforms to the same standard are compatible with one another, in order to prevent operational failures such as scrambled pictures, missing color, no sound and the like.--208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

For local analog over-the-air TV signals, a few countries like the U.S., Japan and a few others used the NTSC standard, while most others used other analog standards such as "PAL" and "SECAM". Mixing standards, such as in an attempt to use an older PAL TV to receive an NTSC TV signal, results in partial or complete operational failures such as scrambled pictures, no color, or missing colors. This is why PAL/NTSC combination equipment is often found at airports around the world: travelers often end up with equipment of the wrong standard for the local media and TV signals.--208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

[Comment: The above usually satisfies what most people want to know in a practical sense, and for that reason something like it should appear first in the article. In the practical sense, it is not tightly interrelated with other facts, and so is not really a forward reference.]--208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

[Comment: In order to understand NTSC in a practical sense, some of the political reasons, such as those actually debated in the U.S. Congress, should be reported.]--208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

"NTSC" stands for "National Telvision System Committee", which was, on two separate occasions, an industry group set up by the U.S. government (FCC) to write federal regulations governing the engineeering specifications of local over-the-air analog TV broadcast signals. The purpose of these regulations was to insure interoperabilty between the transmitted TV signal and the TV sets receiving that signal, fulfilling a basic public interest of having any TV set work correctly with any TV broadcast signal. The work of the first NTSC became federal regulation in 1941, authorizing only monochrome (black and white) broadcasting. The work of the second NTSC, which we can call "color NTSC" here, became federal regulation in 1953, authorizing a system of color broadcasting to replace another color TV standard known as the CBS color system. --208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

Nine years after the NTSC monochrome TV standard was approved, the CBS color system was approved by the federal government in 1950. Unfortunately, unlike the case today, the CBS color system was not backward-compatible with the millions of monochrome TV receivers already in the viewing audience. These older monochrome (black and white) sets failed to work with the CBS color telecasts, even in black and white. Interestingly, CBS color receivers themselves were dual-standard, making them backward compatible so as to receive monochrome NTSC telecasts for display in black and white.--208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

[ The above are the simple facts, all verifiable of course. ]--208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

[Comment: Note that while talk about mechanical spinning "color wheels" and electronic versus mechanical reproduction was all in the press, it was actually a red herring and political posturing, because in the lab at least, I'm sure that an all-electronic system could be adapted to receive the CBS telecasts. There is information I have read attesting to this going back 50 years. Perhaps reporting on both the political posturing and the articles refuting it is in order? If so, it's really a side show, and should only be included later in the article after a quick explanation of how color TV works, because, being a red herring, it would serve the same purpose it did 50 years ago in the press, which is to confuse. And the last thing an encyclopedia article should do is to confuse. The real problem, also covered widely in the press, was backward compatibility.]--208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

While the public may have thought that going from black and white to color was as easy as replacing black and white film with color film for photography, the technical hurdles for TV technology were actually huge, requiring many tens of millions of dollars of industry investment to make it practical. [Comment: Is this conjecture?]--208.252.179.21 (talk) 16:23, 16 March 2009 (UTC)--Ohgddfp (talk) 16:29, 16 March 2009 (UTC)

NTSC is just a colour encoding system!

This seems to be lost on a lot of people, and this article doesn't do much to explain it. NTSC is purely related to how colour is encoded over luminance. NTSC-M is the North American broadcast standard, and it's the M rather than the NTSC that designates number of lines, frames per second, etc. The third paragraph of the PAL article makes this clear, but it is only briefly mentioned in this NTSC article under the "Refresh rate" section: "The NTSC format—or more correctly the M format; see broadcast television systems—consists of 29.97 interlaced frames".

Given that it's the M format that determines this, and not NTSC, I propose that the section on refresh rate be moved to Broadcast television system and that only colour-related technical details be left in this article. I also propose that the third paragraph of the PAL article be adapted and included in this NTSC article. The opening paragraph of this article should also be modified, as it doesn't mention colour at all! Balfa 20:09, 21 June 2006 (UTC)

I'm sorry, but you're wrong. The National Television System Committee had (has?) control over all aspects of the format, not just the colour subcarrier.
Atlant 22:06, 21 June 2006 (UTC)
Oh. That's unfortunate, I was liking the idea of separation :) So then the line "The NTSC format—or more correctly the M format" is incorrect? Balfa 13:24, 22 June 2006 (UTC)
I don't think it is. I was always under the impression that NTSC was a color encoding system as well. If it weren't, what's up with this? "Each monochrome system is assigned a letter designation (A-M); in combination with a color system (NTSC, PAL, SECAM), this completely specifies all of the monaural analogue television systems in the world (for example, PAL-B, NTSC-M, etc)." (Broadcast_television_system#ITU_identification_scheme). I think it's important in this article to define SPECIFICALLY what components NTSC covers, and which ones it doesn't, because like PAL, the term is erroneously used quite often.

The NTSC standard does include line and field rates, but to use PAL as synonymous with the 625 line system is wrong - PAL is just a colour coding system, not a line standard. Not all PAL systems are 625, and not all 625 systems PAL. The author should have known this, that they are not interchangeable terms.

Nevertheless, DVD video disks sold in Europe are marked as being encoded in the 'PAL' system, in spite of the fact that they owe absolutely nothing to the PAL colour encoding system. 20.133.0.14 09:25, 5 April 2007 (UTC)
So the entire article should retain these inaccuracies because of popular marketing terms? Wouldn't it be easier to explain that in one line, rather than misinform people even further? —Preceding unsigned comment added by 71.192.23.231 (talk) 01:32, 28 January 2008 (UTC)
I think the term NTSC needs to be clarified in this article, as it has meanings at different levels, if you think in terms of signal protocols. My understanding is that NTSC for a broadcast signal refers to the combination of three factors: line and refresh rate, colour encoding, and the transmission modulation scheme (which is the combination of luma, chroma and audio). When dealing with consumer devices such as DVD players, set-top boxes, PCs or games consoles, the device may have a component, RGB or Y/C (s-video) output. Here NTSC means something different, as there is only the line and refresh rate, and the colour encoding elements. The term NTSC will live on for a very long time after all 'NTSC broadcasts' have finished. John a s (talk) 10:06, 23 February 2008 (UTC)
I think the verifiable facts are the best, although not all of them need to be at the beginning of the article.--208.252.179.24 (talk) 21:31, 13 March 2009 (UTC) By the way this is me. The signature button doesn't work right, so I'll put my four tildes here now and on other parts that are mine. 208.252.179.26 (talk) 01:26, 16 March 2009 (UTC) Me. Ohgddfp (talk) 01:48, 16 March 2009 (UTC)

1) NTSC stands for "National Television System Committee", which was an industry group set up by the U.S. government to create an "engineering standard" that would be incorporated into the federal rules and regulations of local over-the-air analog TV broadcasting stations. (The purpose of an "engineering standard" is to ensure interoperability between the transmitted signal and television receivers in order to prevent operational failures such as scrambled pictures.)--208.252.179.24 (talk) 21:31, 13 March 2009 (UTC) 208.252.179.26 (talk) 01:26, 16 March 2009 (UTC) me. Ohgddfp (talk) 01:48, 16 March 2009 (UTC)

2) The first NTSC established a monochrome-only engineering standard. This is where several of the engineering specifications were set, such as: number of lines (525), number of active lines (approximately 483), video bandwidth (somewhere between approximately 3 MHz and 4.3 MHz), 2:1 interlace, horizontal blanking period (somewhere between approximately 8 and 12 microseconds), FM sound located 4.5 MHz higher than the visual carrier frequency, maximum FM deviation of plus or minus 25 kHz, preemphasis for the FM sound, positive sync with blanking at 75 percent of carrier, some kind of logarithmic transfer function for the video, and amplitude modulation using negative polarity for the composite video. Spectrum spacing of visual carriers was to be 6 MHz apart. This was all the law of the land in 1941. I'm not sure if the "M" designation was even in use at that time.--208.252.179.24 (talk) 21:31, 13 March 2009 (UTC) 208.252.179.26 (talk) 01:26, 16 March 2009 (UTC) me Ohgddfp (talk) 01:48, 16 March 2009 (UTC)

3) A few years later, the second NTSC, which we can call "color NTSC", added color information to the transmitted signal in the form of a transparent watercolor-like overlay signal (called color difference signals) on top of the existing monochrome picture signal. This achieved backward compatibility with the previous monochrome-only service, which was the political reason for the invention of color NTSC. The engineering standards for the monochrome signal remained essentially the same as before. The two kinds of signals, monochrome and color-difference, remained effectively separate while being transmitted on the same channel of a color program. For backward compatibility, an older monochrome receiver would simply use only the monochrome part of the color broadcast signal to display the program in black and white. The color receiver would utilize both the monochrome portion and the color-difference portion together so that the image would appear in full color. The reason the color receiver needs the monochrome portion of the signal is to provide information on the lightness and darkness of the colors. This is the same as what a black and white photo accomplishes. When manually colorizing a black and white photo using transparent watercolor paints, as was popular many years ago, the transparency of the paints allowed the lightness (as with yellow) or darkness (as with violet) of the color to be determined by the black and white photo, while the other qualities of the color (saturation and hue) were determined by the watercolor paints. In the same way, an NTSC color receiver overlays color-difference attributes on top of the monochrome picture. This technique has several benefits, one of which is backward compatibility: the color receiver can receive monochrome broadcasts for display in black and white simply because a monochrome broadcast has no color difference signal to produce a colorizing overlay.--208.252.179.24 (talk) 21:31, 13 March 2009 (UTC) 208.252.179.26 (talk) 01:26, 16 March 2009 (UTC) me Ohgddfp (talk) 01:48, 16 March 2009 (UTC)
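To make the luminance / color-difference split concrete, here is a minimal sketch (my own, not taken from any source cited above) using the well-known NTSC YIQ weighting coefficients; the function name is just for illustration:

```python
# Sketch: deriving luminance (Y) and two color-difference values (I, Q) from
# gamma-corrected R, G, B, using the standard NTSC weighting coefficients.
def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: the "monochrome" part of the signal
    i = 0.596 * r - 0.274 * g - 0.322 * b   # color difference, orange/cyan axis
    q = 0.211 * r - 0.523 * g + 0.312 * b   # color difference, green/purple axis
    return y, i, q

# A neutral gray (r == g == b) gives zero color difference, so it contributes
# nothing to the "watercolor overlay"; only the monochrome part remains.
print(rgb_to_yiq(0.5, 0.5, 0.5))   # (0.5, ~0.0, ~0.0)
```

A monochrome receiver simply displays Y and ignores the color-difference values, which is the backward compatibility described above.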

4) Adding more information, like color-difference attributes, to a monochrome TV signal already occupying the full assigned bandwidth was once thought to force an increase in the spectrum bandwidth in order to avoid self-interference within the channel. But increasing the bandwidth causes a wider spectrum spacing between carriers. This in turn would have forced some stations off the air to make room in the spectrum, reducing viewers' choice of programming. Even worse, the change of carrier frequencies would have caused existing monochrome receivers to malfunction, destroying backward compatibility. So the NTSC had no choice but to maintain the same bandwidth and add the color difference information anyway, causing intentional self-interference. This can be seen as occasional fine dot crawl, especially at the borders of strongly contrasting colors, and false moving rainbow colors running through finely woven clothing patterns and other finely detailed patterns. This cross-color interference is one of the chief criticisms of NTSC picture quality, even though many people do not notice these defects.--208.252.179.24 (talk) 21:31, 13 March 2009 (UTC) 208.252.179.26 (talk) 01:26, 16 March 2009 (UTC) me Ohgddfp (talk) 01:48, 16 March 2009 (UTC)

The reason many people do not often notice these NTSC picture quality defects is because the NTSC borrowed a clever technique called "frequency interleaving", and combined it with some other benefits provided by breaking up the picture information into luminance attributes (the monochrome image as explained above) and color-difference attributes (the watercolor-like overlay). The color-difference information is coded at the TV station into a "chroma" signal which, because it occupies the same general frequency range as finely detailed luminance, is not easy to separate, and consequently appears as a finely detailed monochrome interference dot-crawl pattern on the viewing screen of both color and monochrome receivers. [photo examples here]. Color receivers decode the chroma signal back into the color-difference attributes of the picture, and then overlay these attributes on top of the monochrome image to form a full-color picture. But unfortunately, the dot-crawl interference still appears in the monochrome image. The genius of the dot-crawl interference pattern is that it is finely detailed and seems to move, making it difficult for the untrained human eye to follow or see. Also, the strength of the pattern is strong only in those areas of the scene where the color is strong (highly saturated). A gray object in the scene has no color, therefore no color-difference attributes, therefore no chroma, and therefore no dot-crawl. In fact, careful close viewing of the dot-crawl on a black and white receiver reliably shows which object(s) in a scene are strongly colored. On a monochrome receiver, a trained observer can actually examine dot-crawl, or the lack of it, to determine whether a movie is transmitted in black and white or color. [photo examples here].--208.252.179.24 (talk) 21:31, 13 March 2009 (UTC) 208.252.179.26 (talk) 01:26, 16 March 2009 (UTC) me Ohgddfp (talk) 01:48, 16 March 2009 (UTC)

So the NTSC also had to make sure the dot-crawl interference pattern would be finely detailed to reduce its visibility. This means that the chroma portion of the signal must have a narrow bandwidth, and therefore the color-difference information itself must carry only one-third to one-eighth of the information present in the luminance image. But this makes the colors bleed into each other horizontally. Going back to colorizing a black and white photograph with transparent watercolor paints, it's amazing that a sloppy application of the paints, such that they bleed a little into each other, still results in what appears to the human eye as a perfectly colorized, detailed, full-color image with no color bleeding. That's because the human eye cannot see detail in the color hue and color saturation variations provided by the watercolor, or by the color-difference attributes in the case of TV. For human vision, all the fine detail is only in the monochrome (black and white) image. So the federal rules and regulations written by the NTSC mandate certain colors (orange and cyan on the I axis) to have a bandwidth of at least 1.3 MHz, which is substantially narrower than the 4 MHz bandwidth of the luminance channel. Even though it's still on the books with the force of law today in 2009, TV stations generally ignore this federal regulation by reducing bandwidth too far. Other color difference combinations require at least a 0.6 MHz bandwidth. Amazingly, many TV stations ignore even this federal law as well by reducing bandwidth too far (see broadcast U-matic analog videotape format) (see also NTSC to digital TV signal converters used in digital TV broadcast stations). A lot of NTSC picture quality complaints come from these violations of federal regulations. (Is this an opinion or fact? Need a source.)--208.252.179.24 (talk) 21:31, 13 March 2009 (UTC) 208.252.179.26 (talk) 01:26, 16 March 2009 (UTC) me Ohgddfp (talk) 01:48, 16 March 2009 (UTC)

NTSC Standard

The following was formerly in NTSC standard which, being redundant, I have redirected to NTSC:

NTSC standard: Abbreviation for National Television Standards Committee standard. The North American standard (525-line interlaced raster-scanned video) for the generation, transmission, and reception of television signals.

Well, which is it? Is it "National Television System Committee", as this article says, or "National Television Standards Committee"? 24.6.66.193 06:42, 12 August 2007 (UTC)

All the early books call it "National Television System Committee".Ohgddfp (talk) 01:39, 14 March 2009 (UTC)

I was told, by some "old timers" (the chief engineers that wanted nothing to do with the new Japanese video products I was hawking), that it stood for "National Television Standing Committee".

Of course the "standing" joke was "Never Twice the Same Color". 72.155.80.11 19:14, 17 August 2007 (UTC)

Exact frame rate

I think it should be made more clear that the actual frame rate of NTSC is 30/1.001 Hz. 29.97 Hz is an inexact approximation. That 29.97 is the actual rate is a common misconception which I would prefer not to propagate. Dmaas 16:39, 7 January 2005

OK then, How about 29.9700299700299700299700299700299700299700? More exact but still not accurate (the last 6 digits actually recur). 29.97 is accurate enough for most purposes. 20.133.0.14 09:31, 5 April 2007 (UTC)
Where is your evidence for the exact frame rate being 30/1.001 Hz? Several sources say that the original frequency of 30Hz was reduced by 0.1% which means it became 30 × (1 − 0.001) which equals 29.97 exactly. I'm not saying you're necessarily wrong – I'm simply interested in accuracy. 83.104.249.240 15:42, 20 August 2007 (UTC)

There are two exact frame rates. In 1953 it was exactly 7159090 / ( 455 * 525), which is approximately 29.970026164312 Ohgddfp (talk) 02:17, 14 March 2009 (UTC)

In about 1978 or so, the subcarrier was changed to exactly 5000000 * 63 / 88 -- See Code of Federal Regulations Part 73.628 "TV transmission standard". Ohgddfp (talk) 02:17, 14 March 2009 (UTC) Ohgddfp (talk) 02:43, 14 March 2009 (UTC)

So after 1978 the frame rate is exactly 30000/1001, which is approximately 29.970029970029970029... Ohgddfp (talk) 02:43, 14 March 2009 (UTC)
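For anyone who wants to check that arithmetic, here is a quick sketch (mine, using only the figures quoted above):

```python
from fractions import Fraction

# 1953 rules: frame rate written as 7159090 / (455 * 525), where 7159090 Hz
# is twice the color subcarrier of that era.
rate_1953 = Fraction(7159090, 455 * 525)

# Post-1978 rules: subcarrier fixed at exactly 5 MHz * 63/88; the subcarrier
# is 455/2 times the line rate, and there are 525 lines per frame.
subcarrier = Fraction(5_000_000) * Fraction(63, 88)
line_rate = subcarrier * 2 / 455
rate_1978 = line_rate / 525

print(rate_1953, float(rate_1953))  # 1431818/47775, approx. 29.970026...
print(rate_1978, float(rate_1978))  # 30000/1001,    approx. 29.970030...
```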

Why is it that PAL at 50 fields per second produces 25 frames per second, while NTSC at 60 fields per second gives 29.97 frames?

NTSC is not 60 fields per second. It's about 59.94 fields per second. Ohgddfp (talk) 02:43, 14 March 2009 (UTC)

And if i got this wrong, then what is the reason that NTSC has 29.97 frames, which is not the half of 60? This is not explained in this article or any related ones, like PAL, Interlace, Frame rate. Someone who knows, please clarify. —Preceding unsigned comment added by Tapir666 (talkcontribs) 15:51, 12 September 2007 (UTC)

The exact frame rate is exactly half the exact field rate. The 60 is an approximation. Dicklyon 18:42, 12 September 2007 (UTC)
Erm, the reason has to do with the subcarrier frequency of the colour signal, and how they would have interfered at exactly 30/60. I don't have any references; it's just what I once heard.203.22.17.215 12:55, 30 September 2007 (UTC)

The subcarrier is always 455 times half the horizontal scan frequency. With the subcarrier frequency at approximately 3579545 Hz, and considering 525 lines, this puts the field rate at the weird number of about 59.94 fields per second, and the frame rate, exactly half the field rate, at approximately 29.97 frames per second. The reason for the weird numbers is very surprising -- it's because of the sound. The subcarrier frequency was chosen so that not only would frequency interleaving take place with the video, but surprisingly, so that the minor intermodulation distortion in the receiver between the chroma subcarrier and the 4.5 MHz spacing of the visual and AUDIO carriers would produce a 920 kHz interference beat that is ALSO FREQUENCY INTERLEAVED, WHEN THERE IS NO AUDIO MODULATION. But with talking, this trick breaks down and the interference beat becomes more visible. So to reduce intermodulation distortion, the audio carrier is 40 dB weaker than the video carrier at strategic points within the receiver. So it's the SOUND that is the reason for the weird scan frequencies. The designers could have ignored the sound issue and used a slightly different subcarrier frequency to put the rates at exactly 30 and 60 Hz, but then the sound carriers going through the same amplifiers as the video carriers (to save on receiver costs) would need more careful design and more expensive TV receivers. Ohgddfp (talk) 02:43, 14 March 2009 (UTC)
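A quick numerical check of that claim (my own sketch; it assumes the post-1978 exact subcarrier value and the 4.5 MHz intercarrier spacing quoted above):

```python
from fractions import Fraction

subcarrier = Fraction(5_000_000) * Fraction(63, 88)  # exact color subcarrier, approx. 3.579545 MHz
line_rate = subcarrier / Fraction(455, 2)            # subcarrier is 455 times half the line rate
sound_offset = Fraction(4_500_000)                   # aural carrier sits 4.5 MHz above the visual carrier

beat = sound_offset - subcarrier                     # the intermodulation beat discussed above
print(float(beat))                                   # approx. 920454.5 Hz (the "920 kHz beat")
print(beat / (line_rate / 2))                        # 117 -- an odd multiple of half the line
                                                     # rate, i.e. the frequency-interleaved position
```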

Unsharp

Why is NTSC so unsharp and blurry compared to PAL? --Abdull 21:17, 17 Jan 2005 (UTC)

Because PAL's vertical line count and the bandwidth available for horizontal resolution are both higher (effectively about 403x576 compared to NTSC's 338x483, according to one source). The PAL image has nearly half-again as many "pixels". There is less motion information (lower frame rate) and colour information, but the sharpness you perceive depends mostly on the luminance info in individual fields.
Now that you mention it, there is a general POV cast to the article that seems subtly defensive of NTSC and critical of PAL. Hopefully it can be gradually rewritten to strain that out (a more accurate assessment might be that both systems suck ;). - toh 21:00, 2005 Mar 5 (UTC)
Be a little cautious. The video bandwidth of the various PAL implementations varies from a low of 4.2MHz (the same as NTSC) up to a much-nicer 6.0MHz. It's only the systems with video bandwidths greater than NTSC that have horizontal resolutions better than NTSC. (Admittedly, I think this constitutes the majority of PAL systems in the world, although the less-great 5MHz video bandwidth seems most popular).
There's a pretty nice description (and table) at Broadcast television system.
In the final analysis, I think you got it right: both systems suck ;).
Atlant 22:18, 5 Mar 2005 (UTC)
Actually, both systems are quite clever. PAL enjoys a sharper image, not for the reasons already discussed, as for the use of a comb filter in separating color information. NTSC was based on a cheaper low-pass filter, and suffered for that. These days, however, analog sets for either system implement comb filters in the digital decoders commonly used. One issue in PAL was that where NTSC has a four field sequence that makes editing sometimes a pain, PAL has an eight field color sequence, which must almost always be violated in editing. But the good news has always been that they are both infinitely superior to SECAM! —The preceding unsigned comment was added by 24.126.195.100 (talk) 15:51, 3 May 2007 (UTC).


Yes I've just read this and I definitely get the feeling that NTSC is being painted in a better light than PAL. I'm not sure what it is, something subtle in the way it's written. BTW the PAL entry is very poor compared to this and the SECAM one!
--GeorgeShaw 17:49, 2005 May 11 (UTC)
I concur. It's something subtle, in the subcarrier methinks. [Score: 1, Funny] 203.22.17.215 12:57, 30 September 2007 (UTC)
Well come on, some PAL people! The gauntlet is cast, step up to the plate, and all those other sportsy metaphors! Let's see some activity over there on the PAL article! After all, we'll soon all be swept away by digital TV systems. :-)
Atlant 18:11, 11 May 2005 (UTC)

The question should really be "why are 525 line systems blurry compared to 625 line ones?" 625 line systems use 20 - 50% more horizontal and 20% more vertical resolution than 525 systems. This is a function of the line number and system bandwidth rather than the colour system used. 625 NTSC, which I have seen, should be sharper than 625 PAL, because the luminance and chrominance do not overlap as much, so they can be separated more efficiently.

I concur that 625 NTSC will be sharper than 625 PAL. That's because the better comb filters don't work as well with PAL's 8 field sequence. But all in all, PAL usually looks better because of an even bigger reason. European producers are more careful with their engineering. I've seen broadcast NTSC done right, which is very rare. And at normal viewing distances with a 27 inch screen, it has been mistaken for HDTV. The biggest mistakes are 1) SYSTEM modulation transfer function (a COMPREHENSIVE measure of detail and sharpness), measured from TV camera lens to viewing screen, is not maintained according to any standard, and 2) automatic equalization based on the already mandated ghost canceling signal (in the vertical blanking interval) is not utilized by receiver manufacturers. The result of these two stupid American mistakes is frequently blurry pictures, even on cable TV systems! Ohgddfp (talk) 02:55, 14 March 2009 (UTC)

Map

Note that the map is cropped such that a good half of New Zealand is missing. Does anyone have the original image? Or is a completely new one necessary to fix this? --abhi 17:18, 2005 Apr 15 (UTC)

Consumer-grade equipment

Reading magazine reviews of both HDTV and SDTV sets, it seems to me that many TV manufacturers deliberately deviate from the NTSC specification; apparently it's normal to have some amount of "wide angle color demodulation" and "red boost" inherent in NTSC color decoders; this does not seem to be a common manufacturing practice with PAL devices. That of course contributes to the "Never the same color" stigma as well; maybe someone with a more in-depth technical understanding of this could write about this in the article. NewRisingSun 16:10, 16 Jun 2005 (UTC)

IIRC, this was introduced in something like the 70s where the decoders were "rigged" so that a wide range of colors in the hue range around caucasian flesh tones rendered closer to some appropriate value. This was done, of course, because NTSC sets have a hue (phase) control and folks were always mis-setting that control, making people's faces green or red instead of whatever color caucasians nominally are. I had assumed that this sort of thing was phased out as more consistently accurate decoders became available; certainly most NTSC sets bury the hue control somewhere in some menu these days and I don't think I've touched one for years.
Atlant 16:49, 16 Jun 2005 (UTC)
From what I've read, it actually has gotten worse with all those "improvement" features. One TV reviewer wrote that they use nonstandard color decoders that produce "warmer" colors because they use "colder" phosphors to make the picture look brighter. It also seems that these nonstandard algorithms are different for Japanese and American TVs, probably because Asian fleshtones are different than caucasian ones. Compare this data sheet for some TV chip thing: http://www.sony.co.jp/~semicon/english/img/sonyde01/a6801857.pdf
On page 17, it reads:
AXIS (1) : R-Y, G-Y axis selector switch
0 = Japan axis R-Y: 95° × 0.78, G-Y: 240° × 0.3
1 = US axis R-Y: 112° × 0.83, G-Y: 252° × 0.3 (B-Y: 0° × 1)
I would assume that a TV set using this chip could not correctly reproduce colors at all (no matter what the "Hue" setting is), as R-Y should be at exactly 90° (don't know what that fraction number means, maybe "gain" or something). Again, I wonder if and how to best write this up for the main article. NewRisingSun 19:32, 16 Jun 2005 (UTC)
The "wrong" demodulation angles have the exact same effect as changing the values of the linear matrix, and therefore is part of the color space conversion from the colorimetry of the TV signal to the colorimetry of the TV screen. Because gamma issues make a linear matrix insufficient to perform color space conversion that is effective for both normal pale colors and highly saturated colors, a compromise might be used. It seems likely that different manufactures might strike a different compromise. To avoid compromise, a color space conversion must use the following three steps, at least conceptually - 1) Remove gamma correction to obtain linear R G B values. 2) Use a linear matrix to perform the color space conversion from the TV signal color space to the color space of the receiver's viewing screen. 3) Apply new gamma correction that correctly compensates for the non-linear characteristics (if any) of the receiver's viewing screen. --208.252.179.23 (talk) 17:08, 18 March 2009 (UTC) --Ohgddfp (talk) 17:10, 18 March 2009 (UTC)

I concur that American TV makers fool around with color to give images a high-contrast snap in the TV store. This is a stupid American practice that had the practical effect of putting us behind the Europeans. Here's the history. From 1954 to 1961, manufacturers used the correct NTSC phosphors that produce an extremely wide range of colors, including colors that people had never seen before. But in 1962, better energy efficiency, and therefore smaller and cheaper power supplies, was achieved by using higher efficiency phosphors at the expense of poorer color reproduction. If the receiver manufacturers had only used a proper degamma - linear matrix - regamma chain for color space conversion, all normal colors would have reproduced just as well. But no. Only a linear matrix was used for color space conversion, which worked for many normal, less saturated colors, but screwed up the more saturated colors. Even in the broadcast studio, around 1967 or so, SMPTE-C phosphors became the broadcast monitor standard, contrary to federal regulations. Actually, federal regulations don't regulate monitors or receivers in this regard, but they do insist that video cameras and encoders be designed to work best on a receiver that uses NTSC phosphors. So camera designers did this pretty much, and TVs used color space conversion circuitry for the new phosphors. And even these broadcast picture monitors had a "matrix" switch to turn on the same linear compensation (color-space conversion) for good rendition of normal colors. (Yes, it does make the picture more red than without the color space conversion, but this is correct.) But as in the consumer receivers, strong colors got screwed up, especially all shades of red looking like the same shade of red-orange. And with a hue control (commonly called "tint"), manufacturers didn't even try to make the decoding circuitry stable. Sloppy operations at even the network level also contributed mightily to the green and purple people problem. So instead of fixing lousy decoding circuits and making the engineers at transmitter manufacturers do a good job (differential phase problems), manufacturers hit on the idea of making all colors close to flesh tone actually BE flesh tone. This happened in the 1970s, called automatic color adjustment under various cute names, quite a bit later than the phosphor changes of 1962. In the 1970s a new round of much brighter phosphors came about to even further reduce maximum color saturation. Some TVs could not reproduce a decent red at all, making those colors red-orange. One RCA set had a red phosphor that was just plain orange. Only mild reds could be reproduced on that set. Combined with engaging the automatic color adjustment system, the 1970s represented the dark ages of accurate color rendition, with most of even the most expensive TVs (Magnavox and Capehart) with auto color engaged never showing the color green or the color purple EVER. Everything was brown, orange and flesh, with the grass dark cyan and the blue sky light cyan. I've repaired a lot of sets during this time. Unbelievable. In the 80s things started to improve. The Japanese imports used what the Europeans use now for phosphors, better than the American sets of the 1970s. But even some Japanese sets had problems with color accuracy. Anyway, now the solid state studio equipment and the solid state TV sets have good circuit stability so that a hue control is no longer needed, even though it appears in the menu from sheer inertia.
Today, the best NTSC set ever made for accurate color rendition is the 21 inch round 1955 RCA Victor, because it uses full DC restoration with the proper NTSC phosphors and the proper NTSC decoder; the 1954 model used proper I/Q demodulation as well, but had a very small screen. A very distant second for accurate color are the very expensive Runco home theatre systems. Granted, the old TVs were maintenance hogs, but when fixed correctly, they had the best (most accurate) color, even in a reasonably lit room. Ohgddfp (talk) 03:24, 14 March 2009 (UTC)

Did you never learn about paragraphs at school? Your long contribution would have been much easier to read if it had been broken up into paragraphs.

Current NTSC sets still have and need a hue control (usually called 'tint'). The instability of the colour displayed by NTSC sets is nothing to do with the design of the set or any changes to the circuitry within. The hue changes are a phenomenon of NTSC colour transmission that will never go away because they are caused by differential phase distortion in the transmission path.

Indeed, on the multi-standard TV sets now almost universal in Europe, the controls, which are now in menus rather than knobs on the front panel, feature a 'tint' adjustment. It is usually unavailable ('greyed out') in all TV modes except NTSC. 20.133.0.13 (talk) 08:19, 21 September 2009 (UTC)

About "Current NTSC sets still have and need a hue control (usually called 'tint')." There is an important point to clarify here. Differential phase is NOT an inherent quality of NTSC. What is true is that NTSC is VULNERABLE to EQUIPMENT that has differential phase defects. PAL and SECAM have VERY LOW vulnerability to defective differential phase. But equipment defects are much less than they used to be. I suppose NTSC needs a tint control if we are talking about OUTSIDE the U.S., but I give a different reason for that. It's due to globalized TV receiver design, combined with poorly trained personal and/or poorly maintained equipment in other NTSC countries. Here's why. In the old days there were three sources of tint error. 1) By far the biggest was ironically, manual adjustment of tint, at both the smaller TV stations (burst phase) and in the receivers (tint). 2) Unstable circuits at both the studio and in the receiver which required that adjustment of tint. 3) Differential phase problems in poorly maintained (direct color) video tape equipment and transmission equipment, which was by far the smallest cause of wrong tint in the U.S. Now TODAY, bad engineering practice in small TV studios where personal adjust tint (burst phase) unecessariily remains a problem with small companies. But most channels on analog cable TV are from larger companies. So differential phase problems in the main transmitter (or cable RF amp), which is historically the smallest of the three causes of wrong tint, is the only real problem that remains, and even that is smaller than in the past. So small that in practice, people only rarely fiddle with tint, and then only because someone else fiddled with it before. Since the very act of fiddling with tint (and burst phase) is today MORE LIKELY in the U.S. to cause tint errors than all other problems combined, including differential phase issues in the transmitter, it's better to leave that control out. So why has it not been removed? The inertia of history, where historically, NTSC has a tint control, while PAL and SECAM did not. So it's no surprise that this inertia is carried over today. Tint may actually be REQUIRED in NTSC areas outside the U.S. due to poorly trained operators at TV stations and/or poorly maintained studio and transmitter equipment in those countries. But I would have to research that myself to be sure. Globalized product design, recongnizing the need of other countries besides only U.S. customers, is giving U.S. customers a tint control that they are now better off without.

So what is the point of the foregoing? It's a contrary opinion. What reliable source has given the LOGIC behind the claim that differential phase TODAY is bad enough to require a tint control? First of all, a differential phase problem will indeed respond to a CAREFUL tint adjustment, but only PARTIALLY. Only skin color will be fixed, but that makes other colors worse, although correct skin tones make a net improvement. But what if a study shows that, due to a combination of HUMAN NATURE and how small the differential phase problem really is, a tint control is more likely to make tint errors WORSE than no tint control at all? Without some definitive study that can be cited, it looks as if anything along these lines really cannot be put into this kind of article. Ghidoekjf (talk) 02:48, 3 July 2010 (UTC)

Variants, NTSC-J

Um, isn't the stereo audio system also different on the Japanese system? The North American version uses a similar approach to FM stereo (suppressed-carrier AM difference signal added to audio before medium-width FM modulation), but moves the frequencies down so that the stereo pilot tone is the same as the TV line frequency; the Japanese stereo implementation predates this and IIRC inserts a second FM subcarrier onto the audio before the main FM sound modulation step. --carlb 01:01, 22 August 2005 (UTC)

I really don't see how NTSC-J could be completely compatible with NTSC-M, when some of the studios in Japan that produce video in NTSC-J use 50 Hz AC power, while North America is exclusively 60 Hz. Wouldn't that result in horizontal sync failure when attempting to view such an NTSC-J signal on an NTSC-M receiver without first converting the signal? -- Denelson83 04:33, 6 August 2006 (UTC)
It has been a long, long time since television equipment used the power line for a frequency standard. They use crystals now.
TV stations used the power frequency as the source of the timing of the field rate, but not because of any necessity to use a reference frequency for both the transmitter and receivers. Indeed TV receivers don't take their vertical timing from the power frequency at all. The transmission station could use any frequency standard it liked and the receivers would sync to the signal perfectly.
The power frequency was used for a very different reason. The CRTs in the receivers are susceptible to distortion from external magnetic fields produced by nearby motors or transformers. By using the power frequency, such distortions were not eliminated, but they at least remained static and thus less noticeable. 20.133.0.13 (talk) 08:27, 21 September 2009 (UTC)
Bryan Henderson 21:57, 14 October 2006 (UTC)
Denelson83 is correct, but maybe not for the reason he states. I discovered I could not get the colors on my Wii to be exactly right unless I used NTSC-J. I fiddled with the brightness and contrast and other settings for half an hour trying to get it right with NTSC. So I think the article might be inaccurate on that point. Lotu (talk) 01:42, 12 March 2008 (UTC)

"Pixels" in NTSC

Many readers are familiar with the pixel dimensions of their computer monitors. If the above estimate of 338x483 is accurate, it would be educational to mention it in the article. (I'm insufficiently knowledgeable to confidently do this.)

The graph at the bottom currently claims boldly that VGA and NTSC are the same, 640 x 480. NTSC "pixels" of course are about a billion times blurrier than any 640x480 VGA monitor. Tempshill 00:04, 30 July 2005 (UTC)

A VGA monitor is 640x480, RGB, non-interlaced. That means it displays all 480 lines each 1/60th of a second (instead of showing every second line, then going back for the rest next time, NTSC-style). It also means that the three individual colours are kept separate throughout the system.
NTSC uses a black-and-white signal (nominally at least 640x480 interlaced visible image resolution) plus two colour difference signals (sent at a significantly lower horizontal resolution). Some old TVs did a poor job of interlacing the two fields (240 lines each, vertical resolution) back together to get 480 visible lines and made a mess of separating the colour information from the main picture; those built before the widespread use of comb filters gave significantly below-VGA results virtually always.
Some claim NTSC to be capable of up to about 720x480 for the monochrome portion of the image (these numbers, and variants, were used in the DVD standards) although broadcast sources vary widely. (Dish Network's DVB video is typically running at 544x480, on some DBS systems there's even significant variation between different TV channels/same provider). MPEG2 compression of broadcast signals introduces further loss of detail; analogue broadcast introduces snow and noise which affects the image most adversely at the high frequencies used for fine image detail. As for "a billion times blurrier", let me see, 640 x 480 divided by a billion - no that's not even one pixel; NTSC isn't quite that bad yet although I'm sure they're working on it. ;) --carlb 01:01, 22 August 2005 (UTC)
About "MPEG2 compression of broadcast signals introduces further loss of detail". Only if the operation is botched, as it very frequently is. The lower bit resolution (as opposed to image resolution) for fine details is not what people are seeing when they perceive loss of detail. The loss of detail comes from a third-rate algorithm that does a rotten job of separating fine luminance detail from the chrominance. Since MPEG encoding systems tend to treat chroma the same as random noise, the NTSC to MPEG2 conversion process eats up a lot of encoding bits. So it becomes very important before MPEG encoding to clean all of the chrominace out ot the lunminance channel. One possible solution is to band-limit the luminance channel to 2 MHz, where the 0 to 2 MHz frequency range of broadcast grade signals contains no chroma at all. Another solution is to bandlimite the luminance to 3 MHz, where sub brodcast grade signals contain no chroma. In this case, it's ironic the the lesser grade NTSC signal produces a superiour result! Another way is to use a simple digital comb filter, with several non-linear tricks. Another issue is that the above mentioned band-limiting in the luminance channel creates ringing artifacts unless a gentale in-band rolloff toward 2 or 3 MHz is applied. The gentle rolloff in turn makes the image look even less sharp. Some non-linear processing algorithms can make high contrast detail show up better than low-contrast detail, making everyone look like Mr Bill. (Remember Satuday Night Live?) Either way, resolution is usually ruined. The correct way to separate chrominance from luminance is to use a complex and expensive comb filter with 3D motion compensation. The luminance response to any staionary detail test pattern, diagonal or not, should indicate flatness all the way to the limit of what analog video tape recorders and digital NTSC VCRs were capable of. This is sometimes done, with supurb results. Unfortunately, NTSC quality is usually an after thought, where NTSC inputs on new digital equipment are included so that the manufacturer can claim backward compatibility with legacy equipment. Such "NTSC" inputs are horrifiyingly included even in some of the most modern and expensive digital telvision equipment, with ghastly results. But the attitude in the studio is seems to go like this - 'Oh, yeah, it has picture, color and sound. The bad results? Well, it's NTSC. What do you expect?'. --Ohgddfp (talk) 17:43, 18 March 2009 (UTC)
You might want to also discuss a related concept called TV lines per picture height. -- Denelson83 04:37, 6 August 2006 (UTC)

Remember that digital images seen on the Internet are not often separate RGB. Like with NTSC/PAL/SECAM, they are also high-detail luminance combined with lower-detailed color-difference information. Converting this within a computer for delivery to an RGB monitor does not improve matters. Inside an NTSC color receiver, there is also a conversion from luminance/color-difference information to R G B, where red, green and blue video are applied separately on separate wires to the cathode ray tube. Ohgddfp (talk) 03:48, 14 March 2009 (UTC)

More about separate RGB: Some Internet images, not used very much, are called "bitmapped", with file extension .bmp. Some versions of .bmp are indeed RGB, but are not popular because they take too much bandwidth over the Internet. Very popular is JPEG. This is the one that uses luminance information separated from color-difference information, with less color-difference resolution than luminance resolution. The MPEG-2 type of architecture, used on the Internet and also the basis for digital television, does this as well. Luminance with the color-difference separation feature has been adopted by all analog TV standards around the world, along with most digital Internet video and all broadcast digital television. This is the technique first used by the system that introduced it, NTSC. Ohgddfp (talk) 04:17, 16 March 2009 (UTC)

Here it is: A digital system that can produce all the detail that the over-the-air analog NTSC system can reproduce must have a minimum pixel count of 453 horizontal by 483 vertical. Ohgddfp (talk) 03:48, 14 March 2009 (UTC)
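Here is one way to arrive at figures in that neighborhood (a rough sketch; the bandwidth and blanking numbers are my own assumptions, so treat it as a sanity check rather than a definitive derivation):

```python
# Rough sanity check on the 453 x 483 figure, under assumed values for the
# luminance bandwidth and the horizontal/vertical blanking intervals.
LUMA_BANDWIDTH_HZ = 4.2e6          # over-the-air NTSC luminance bandwidth
LINE_PERIOD_S = 1 / 15734.264      # one scan line at the color NTSC line rate
H_BLANKING_S = 9.6e-6              # assumed horizontal blanking time
ACTIVE_LINE_S = LINE_PERIOD_S - H_BLANKING_S

# Nyquist: two samples per cycle of the highest luminance frequency.
h_samples = 2 * LUMA_BANDWIDTH_HZ * ACTIVE_LINE_S
v_samples = 525 - 42               # total lines minus roughly 42 lines of vertical blanking

print(round(h_samples), v_samples)  # roughly 453 x 483
```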

List of Countries

Diego Garcia isn't in the Pacific. I have no idea which continent it is associated with. --MGS 13:37, 4 August 2005 (UTC)

Diego Garcia is in the Indian Ocean

Explanation of Standard

In the Standard section, voodoo is used to justify the number of lines in an NTSC frame. It should explain why the line frequency is 15750Hz. --66.66.33.245 23:59, 21 August 2005 (UTC)

The number 15750 is easy enough to calculate; take 30 (the number of complete frames sent per second) and multiply by 525 (the total number of lines in one complete frame, including any hidden beyond the screen edge). 30 x 525 = 15750 :) --carlb 01:01, 22 August 2005 (UTC)

The 15.750 kHz frequency was used with the original black and white NTSC standard. Colour NTSC uses a 15.734 kHz line frequency 80.229.222.48 10:35, 24 June 2007 (UTC)

Vertical Interval Reference (VIR)

Would it be worth mentioning the "patches" applied to attempt to fix/salvage NTSC by inserting extra data such as VIR (according to SMPTE Historical Note JUNE 1981, "The vertical interval reference (VIR) system constrains the receiver settings of chroma and luminance to follow the values set at the studio")?

It would seem like, much in the same way that DNA has supplanted dental records to identify accident victims defaced beyond recognition, VIR, comb filters (and a largely-unused GCR [ghost canceling reference] signal intended to replace VIR) could allow viewers to identify images defaced beyond recognition by NTSC? --carlb 01:47, 22 August 2005 (UTC)

In my opinion, yes, you should discuss this. So be bold!

Atlant 12:17, 22 August 2005 (UTC)

I worked in studios inserting VIR into the signal. Conceptually, a very good idea. In practice, ALWAYS DONE WRONG. If all TV sets after 1981 implemented VIR, studios would be forced to make it work properly in order to keep the viewing audience. This is one of those few instances where government regulation (mandating VIR in all receivers with hue controls removed) could have helped. Ohgddfp (talk) 03:58, 14 March 2009 (UTC)

Cleanup

The "Standard" section really needs to be cleaned up. It makes reference to too many magic (arbitrary numbers). The section should either give an overview of how the various important frequencies were choosen, or explain where the magic numbers come from: get rid of "then one multiplies xyz MHz by 2 and then divides by yzx before adding zxy to obtain the frequency" by explaining where xyz, yzx and zxy come from. And explain WHY one multiplies and divides by these numbers; otherwise you are just doing magicmath.

The section also claims that NIST operates a 5MHz reference broadcast for this purpose. Is this really true? If it is, then "not a coincidence" needs to be removed (and a reference cited, perhaps?). My gut says that it isn't true because there would be all sorts of problems with the phase (for various technical reasons) of the NIST broadcast at the NTSC transmitter vs. the NTSC reciever. --66.66.33.245 23:59, 21 August 2005 (UTC)

NIST would be the owners of the US shortwave atomic-clock radio stations WWV (Colorado) and WWVH (Hawaii), which are on exactly 2.5, 5, 10 and 15 MHz on the radio dial IIRC. These weren't created solely for use by television stations; they're available to anyone with a shortwave receiver. Their existence would make it easier for TV stations to adjust various frequencies at the transmitting site to match a known standard; they would not be being used directly within individual TV receivers though. A quartz crystal of 3.579545 MHz (for the colour subcarrier) and a plain ol' variable resistor or two (to set vertical/horizontal hold manually) were the standard equipment in TV sets, with possibly more crystals being added to the mix later to control the chips behind the newer digital tuners. --carlb 01:07, 22 August 2005 (UTC)

By the way, the 1954 model Westinghouse color TV used NO CRYSTALS. In their place, a manual "color synchronization" knob was available for the set owner to turn. If misadjusted, the color would break up into nothing but rainbows running through the picture. So it was easy to find the right setting, which was when a normal color picture appeared. Just like adjusting horizontal sync. Once adjusted, the color sync would become perfect because it locked onto a sample (the transmitted burst) of the subcarrier. In this way, if the studio locked onto NIST, then indirectly, the TV receiver (with or without crystals) would also be frequency-locked onto NIST. Ohgddfp (talk) 04:05, 14 March 2009 (UTC)

Note 1: In the NTSC standard, picture information is transmitted in vestigial-sideband AM and sound information is transmitted in FM.

Note 2: In addition to North America, the NTSC standard is used in Central America, a number of South American countries, and some Asian countries, including Japan. Contrast with PAL, PAL-M, SECAM.

Source: from Federal Standard 1037C

standard (moved here from article)

I removed the section labelled 'standard'. It is too technical for most people, not informative enough, the tone is too 'chatty', and is unnecessary. (Of course, all that is strictly IMHO.) Some of this may want to be moved back so I place it here intact.


525 is the number of lines in the NTSC television standard. This choice was not an accident. The reasons for this are apparent upon examination of the technical properties of analog television, as well as the prime factors of 525: 3, 5, 5, and 7. Using 1940s technology, it was not technically feasible to electronically multiply or divide the frequency of an oscillator by any arbitrary real number.

Of course it was feasible to multiply and divide by more complicated numbers with vacuum tubes. It was just harder and therefore not economically justified, that's all. Ohgddfp (talk) 04:08, 14 March 2009 (UTC)

So if one started with a 60 hertz reference oscillator, (such as the power line frequency in the U.S.) and sought to multiply that frequency to a suitable line rate, which in the case of black and white transmission was set at 15750 hertz, then one would need to have such a means of multiplying or dividing the frequency of an oscillator with a minimum of circuitry. In fact, the field rate for NTSC television has to be multiplied to twice the line rate to obtain a frequency of 31500 hertz, i.e. for black and white transmission synchronized to power line rate.

One means of doing this is of course to use harmonic generators and tuned circuits, i.e. if using the direct frequency multiplication route. With the conversion of U.S. television to color, beginning in the 1950's the frequencies were changed slightly, so that a 5 MHz oscillator could be used as a reference. The National Institute of Standards and Technology (NIST) transmits a 5 MHz signal from its standard time and frequency station WWV which may be useful for this purpose. The 5MHz signal may be multiplied by a rational number to obtain the vertical frequency.

Interestingly enough, when one analyzes how people get the 59.94 vertical field rate, one realizes that it is just 60 hertz multiplied by 1000/1001. Now 1001 in turn has prime factors of 7, 11, and 13, so that when cascading simple flip flop based circuitry it is possible to take a 60 kilohertz reference source and divide it by 1001 exactly to obtain the vertical field rate. It is not a coincidence that NIST operates a radio station, WWVB, that broadcasts a time and frequency standard synchronized to an atomic clock on this frequency, that is, 60 kHz.
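A small check of that factoring argument (my sketch):

```python
# 1001 factors as 7 * 11 * 13, so dividing a 60 kHz reference by 7, then 11,
# then 13 in cascaded counters yields the NTSC vertical field rate.
assert 7 * 11 * 13 == 1001
field_rate_hz = 60_000 / (7 * 11 * 13)
print(field_rate_hz)   # approx. 59.94005994 Hz
```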

Actually it is a coincidence. That WWVB broadcasts on 60 kHz has nothing to do with television. There is worldwide agreement on what frequencies time standard stations should broadcast on. 60 kHz is one of the frequencies agreed on (and is also used in the UK, where broadcast TV has no use for the signal). In any event, time transmitters are a poor choice of frequency standard because the signal is discontinuous. 20.133.0.13 (talk) 08:38, 21 September 2009 (UTC)

If 5 MHz is multiplied by a rational number as stated above, you get a very high number of megahertz, not the low-frequency vertical field rate. There is no rational number which 5 MHz can be divided by to give the vertical rate. It seems that as well as the world's worst television system, the USA has the world's worst mathematicians!

About "There is no rational number which 5MHz can be divided by to give the vertical rate." Actually, 5 MHz can be divided by the rational number, 250250 over 3. The rational number being (250250 / 3) .--Ohgddfp (talk) 00:27, 18 March 2009 (UTC)

59.94005994005994005994005994005994... = 5000000 / (250250 / 3) --Ohgddfp (talk) 00:27, 18 March 2009 (UTC)
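As a sanity check, the same division in a short Python sketch (nothing here beyond the numbers already quoted):

    # 5 MHz divided by the rational number 250250/3 gives the field rate.
    from fractions import Fraction
    field_rate = Fraction(5_000_000) / Fraction(250_250, 3)
    print(field_rate)           # 60000/1001
    print(float(field_rate))    # 59.94005994005994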


525 (3 × 5 × 5 × 7) was chosen as a compromise between the 441-line (3 × 3 × 7 × 7) standard (developed by RCA and widely used before 1941) and the 605-line (5 × 11 × 11) standard proposed by Philco. 80.229.222.48 10:42, 24 June 2007 (UTC)

Australia used NTSC?

The list of (former) NTSC countries now includes Australia. AFAIK, Australia only experimented with NTSC before choosing a 625-line system (one possible factor was that the country uses 50-Hz electricity). If anyone can give more background on this, it would be helpful. ProhibitOnions 21:40, 12 January 2006 (UTC)

Looks like my assumption was right. Thanks for fixing this. ProhibitOnions 23:47, 26 February 2006 (UTC)

The UK also experimented with 625-line NTSC (but never 405-line NTSC, contrary to what the article states). Before the UK went colour, the BBC wanted to use 625-line NTSC and ITV 405-line PAL. I have seen pictures from this experiment in 625-line NTSC. One of the arguments was that for the same line standard NTSC gives sharper pictures because the luminance/chrominance interleaving is more perfect, giving less overlap and so allowing better use of bandwidth.

The UK experimented with 405- and 625-line versions of both NTSC and PAL. By 1962 it had been decided (by the [[Pilkington Committee]]) that all future services would use 625 lines, but it wasn't until 1967 that PAL was chosen (after a decade of experimentation).

Shift of field rate

Corrected explanation of shifted field rate. The reason is to minimize interference between

  • luma <=> chroma

AND

  • chroma <=> audio
about "explanation of shifted field rate". Shifting the field rate from 60 Hz to 59.940059.9400... Hz is not to minimize luminance-chrominance crosstalk interference. It's only to reduce visibilty of the 920 kHz beat between the 4.5 MHz sound carrier and the 3.58 MHz chroma subcarrier that is caused by non-linearities in certain receiver amplifiers. With sound modulation, this trick works only a little. With silence, this trick works quite well. It is indeed another frequency interleaving trick, this time at the lower frequency of about 920 kHz. --Ohgddfp (talk) 00:37, 18 March 2009 (UTC)

Clarification on the "beat" paragraph?

After reading this article, the first explanation for why there is now a 59.940059... field rate is still unclear. The text that I feel needs improvement reads:

"When NTSC is broadcast, a radio frequency carrier is amplitude modulated by the NTSC signal just described, while an audio signal is transmitted by frequency modulating a carrier 4.5 MHz higher. If the signal is affected by non-linear distortion, the 3.58 MHz color carrier may beat with the sound carrier to produce a dot pattern on the screen. The original 60 Hz field rate was adjusted down by the factor of 1000/1001, to 59.94059... fields per second, so that the resulting pattern would be less noticeable."

It's clear that the 4.5 MHz audio carrier and the 3.58 MHz color carrier will beat and produce a low frequency of 920 kHz. But why does this generate interference? Interference with what? How does adjusting the field rate down prevent this interference? I don't know the answers, but I'm sure somebody does.

The 920 kHz beat frequency that may be produced is well within the video frequency range, so if the beat frequency occurs at all, it is displayed on the picture as a pattern of bright and dark dots with roughly 920/15.75 = 58 dots per line. If this dot pattern were not synchronized with the video frame, its constant motion would be very distracting. By choosing the frequencies as they did, the dot pattern, if it occurs, is:
  1. synchronized with the rest of the video so it "holds still", and
  2. reverses phase each line, helping to hide its presence.
Clear now?
Atlant 19:28, 11 May 2006 (UTC)


Well... Thank you for the response, but I'm still confused. Here's what I'm confused by:

(1) Is the intent to hold the dot pattern still, or to make the dot pattern move? An earlier entry in this NTSC article says: "In the color system the refresh frequency was shifted slightly downward to 59.94 Hz to eliminate stationary dot patterns..."

(2) How does shifting the line frequency down slightly cause the dot pattern to change? 920 kHz / 15.75 kHz = 58.412. 920 kHz / [(1000/1001) × 15.75 kHz] = 58.354. So the number of dots doesn't change... why are the dots stationary in one case, and moving in the other?

This is my first foray into Wikipedia editing, and I realize that asking lots of techie questions is perhaps best left for tech forums, rather than wiki discussion pages. On the other hand, one of the reasons I love Wikipedia is because it does delve into the details of complex problems. If this can be clarified so that an amateur can understand it, I think that would be beneficial. Thanks!

The difference frequency is 58.5 times the line frequency. This means that the dot pattern with an unmodulated carrier is stationary, but out of phase between successive lines. This means that the "bright" dots on one line line up with the "dark" dots on the next, so they cancel each other out when viewed from a reasonable distance. If the frequency had not been shifted, that number would have been 58.21, resulting in a moving dot pattern without the cancellation. However, since the sound carrier is FM, its frequency is varying anyway, which makes nonsense of all this: the dot pattern would be moving except during periods of perfect silence, which are very rare.
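A quick numeric check of those two ratios, as a minimal Python sketch (the carrier and line-rate values are the nominal ones already quoted in this thread):

    # Beat between the 4.5 MHz sound carrier and the colour subcarrier,
    # expressed in multiples of the line rate, after and before the shift.
    line_rate_color = 15_750 * 1000 / 1001        # 15734.265... Hz
    subcarrier      = line_rate_color * 455 / 2   # 3579545.45... Hz
    print((4_500_000 - subcarrier) / line_rate_color)   # 58.5 (within floating-point rounding)

    line_rate_bw  = 15_750.0                      # original black-and-white rate
    subcarrier_bw = line_rate_bw * 455 / 2        # 3583125.0 Hz
    print((4_500_000 - subcarrier_bw) / line_rate_bw)   # about 58.21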

Here it is: The purpose of choosing the frequencies is to make the dots MOVE when the audio is silent. This motion makes it hard for the eye to follow. Also, not only do alternating lines alternate the dots, but alternating FRAMES do this as well. In one frame with a light dot, the next frame has the very same dot a darker color, producing a kind of optical canceling effect in the eye between frames. And yes, the scheme starts to break down when sound comes on. Typically, during a silent portion, the beat pattern displayed on screen seems to disappear. When fixing sets during a soap opera with many silent spaces, I had to wait for actors to speak to more easily see the problem my customers were complaining about. Keeping the 4.5 MHz sound 40 dB down in the dual sound/video IF amplifiers was the key to adequately reducing the intermodulation distortion. The frequency scheme also helps. Ohgddfp (talk) 04:15, 14 March 2009 (UTC)

Better explanation of 30/1.001?

I found this information on why the change from 30 to 30/1.001 was needed. It's an excellent explanation, and I think this is likely to be accurate also: (credit: Bob Myers) http://groups.google.com/group/sci.engr.advanced-tv/msg/108815e3089c4d53

I recently received some mail asking where the NTSC 59.94 Hz field rate came from in the first place. Thinking that this might be a topic of general interest, I've decided to post a short discussion of this here - hope no one minds!

Before the NTSC color encoding system was added to the U.S. TV standard, television WAS at 60.00 Hz; it was set at this rate to match the power line frequency, since this would make interference from local AC sources less objectionable (the "hum bars" would be stable in the displayed image, or - if the TV rate wasn't exactly locked to the line - at least would move very slowly). Actually, in some early systems, the TV vertical rate WAS locked to the AC mains!

A problem came up, though, when trying to add the color information. The FCC had already determined that it wanted a color standard which was fully compatible with the earlier black-and-white standard (there were already a lot of TV sets in use, and the FCC didn't want to obsolete these and anger a lot of consumers just to add color!) Several schemes were proposed, but what was finally selected was a modification of a pixel-sequential system proposed by RCA. In this new "NTSC" (National Television Standards Committee) proposal, the existing black-and-white video signal would continue to provide "luminance" information, and two new signals would be added so that the red, green, and blue color signals could be derived from these and the luminance. (Luminance can be considered the weighted sum of R, G, and B, so only two more signals are needed to provide sufficient information to recover full color.) Unfortunately, there was not enough bandwidth in the 6 MHz TV channels (which were already allocated) to add in this new information and keep it completely separate from the existing audio and video signals. The possibility of interference with the audio was the biggest problem; the video signal already took up the lion's share of the channel, and it was clear that the new signal would be placed closer to the upper end of the channel (the luminance signal is a vestigial-sideband AM signal, with the low-frequency information located close to the bottom of the channel; the audio is FM, with the audio carrier 4.5 MHz up).

Due to the way amplitude modulation works, both the luminance and the color ("chrominance") signals tend to appear, in the frequency domain (what you see on a spectrum analyzer), as a sort of "picket fence" pattern. The pickets are located at multiples of the line rate up and down from the carrier for these signals. This meant that, if the carrier frequencies were chosen properly, it would be possible to interleave the pickets so that the luminance and chrominance signals would not interfere with one another (or at least, not much; they could be separated by using a "comb filter", which is simply a filter whose characteristic is also a "picket fence" frequency spectrum). To do this, the color subcarrier needed to be at an odd multiple of one-half the video line rate. So far, none of this required a change in the vertical rate. But it was also clearly desirable to minimize interference between the new chroma signal and the audio (which, as mentioned, is an FM signal with a carrier at 4.5 MHz and 25 kHz deviation). FM signals also have sidebands (which is what made the "picket fence" pattern in the video signals), but the mathematical representation isn't nearly as clean as it is for AM. Suffice it to say that it was determined that to minimize chroma/audio mutual interference, the NTSC line and frame rates could either be dropped by a factor of 1000/1001, or the frequency of the audio carrier could be moved UP a like amount. There's been (and was then) a lot of debate about which was the better choice, but we're stuck with the decision made at the time - to move the line and field/frame rates. This was believed to have less impact on existing receivers than a change in the audio carrier would.

So, now we can do the math.

We want a 525 line interlaced system with a 60 Hz field rate.

525/2 = 262.5 lines/field. 262.5 x 60 Hz = 15,750 Hz line rate.

This is the rate of the original U.S. black-and-white standard.

We want to place the color subcarrier at an odd multiple of 1/2 the line rate. For technical reasons, we also want this multiple to be a number which is fairly easy to generate from some lower multiples. 455 was selected, and

       15,750 x 455/2 = 3.58313 MHz

This would've been the color subcarrier frequency, but now apply the 1000/1001 correction to avoid interference with the audio:

       60 x 1000/1001 = 59.94005994005994.......

The above relationships still apply, though:

       262.5 x 59.94... = 15,734.265+ Hz
       15,734.265+ x 455/2 = 3.579545+ MHz

And so we now have derived all of the rates used in the current standard.
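The same derivation written out as a small Python check (numbers only; nothing here beyond the figures quoted above):

    # Re-derive the NTSC rates from the relationships in the post above.
    lines_per_frame = 525
    field_rate_bw   = 60.0                                  # black-and-white field rate, Hz
    line_rate_bw    = lines_per_frame / 2 * field_rate_bw   # 15750.0 Hz

    field_rate = field_rate_bw * 1000 / 1001                # 59.94005994... Hz
    line_rate  = lines_per_frame / 2 * field_rate           # 15734.2657... Hz
    subcarrier = line_rate * 455 / 2                        # 3579545.45... Hz

    print(line_rate_bw, field_rate, line_rate, subcarrier)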

Removed my old comment 208.252.179.26 (talk) 01:12, 16 March 2009 (UTC)

" ... so only two more signals are needed to provide sufficient

information to recover full color ...". Yes. This "two more signals", which is correct is also called "color-difference" information.--208.252.179.26 (talk) 01:12, 16 March 2009 (UTC) Ohgddfp (talk) 01:53, 16 March 2009 (UTC)

"The possibility of interference with the audio was the biggest problem". Why? Isn't it likely that the possibility of interference is equally problematic for either video, audio, or both? Why is audio the bigger of the 2 problems?--208.252.179.26 (talk) 01:12, 16 March 2009 (UTC) Ohgddfp (talk) 01:53, 16 March 2009 (UTC)
about "The pickets (spectral energy lines) are located at multiples of the line rate up and down from the carrier for these signals." That's wrong. The pickets of a still monochrome image are located at 0 Hz, 30 Hz, and multiples of 30 Hz. A full-color signal for a still picture contains the same monochrome pickets, but also contains pickets at 15 Hz and odd multiples of 15 Hz. So the chroma pickets interleave between the monochrome pickets. In all workable variants of NTSC, the first chroma picket is actually around 2 MHz. In other variants, it's at 3.1 MHz. The FCC however allows chroma energy due to the I color-difference channel to be close to the visual carrier, giving an I channel bandwidth of almost 3.5 MHz ! This resulting "dot-crawl" chroma interference pattern ending up in the luminance channel becomes very visible due to the low frequencies created as as result of such wide bandwidth. But this high visibility of such an interference pattern goes directly against the spirit of making chroma dot-crawl interference appear invisible within the luminance channel. Such a wide bandwidth would also would require a very complicated decoder using hetrodyning in the reciever, since the 3.58 million samples per second inside a reasonable receiver decoder would not be able to recover such a wide bandwidth. So therefore, I bandwidth as a practical measure is limited to no more than about 1.7 MHz, which must be less than one-half the subcarrier frequency. Only a few sets go through the trouble of recovering from the I channel the full 1.7 MHz bandwidth allowed by the FCC. Example - See 1985 RCA ColorTack line of recievers. (The I color-differnce channel is on the orange-cyan axis.) --Ohgddfp (talk) 01:07, 18 March 2009 (UTC)
" ... they could be separated by using a "comb filter ...". We are talking about the reasons for system design decisions. But at the time, comb filters were not considered economically viable, were not even discussed as far as I could find out, and the system was surely designed WITHOUT receiver comb filters in mind. FACT: The first NTSC color TVs were sold in 1954. FACT: The first comb filter equipped NTSC TV was a 1979 Magnavox. Be aware that SIMPLE comb filters like the 1979 version, actually ERASED fine diagonal detail! So what were they thinking (the NTSC) for separating lumna from chroma? A combination of the human eye and a 3.58 MHz notch filter in the lumnance channel is what even some cheap TV sets do today in 2009 !. And yes, the 3.58 MHz notch filter does indeed blurr the image. At the same time, the notch filter fails to take all the chroma out of the luminance channel, so the human eye would have to do the rest of the work to make chroma interference pattern not be seen in the luminance channel. It so happens that the picket fence spectrum with chroma luminance interleaving works like this for still images, and portions of a moving image that are basically still: The luminance of the still portions (or a still picture) becomes a picket fence spectrum. Any luminance motion causes frequency components to appear BETWEEN the pickets. So by engineering the chroma spectrum such that the energy appears BETWEEN the pickets, the resulting pattern interference on the monochrome image is MOVING. (And remember that, because a color reciever overlays color-difference information on top of a monochrome (luminance) image, the chroma-induced interferece pattern unfortuately is physically present on the resulting full-color picture.) By making the interference (chroma) pattern moving, the eye has a more difficult time seeing this interference. Concerning the chroma-induced interference problem, it is this camouflage effect to the human eye that both the RCA color dot sequential system and the later NTSC engineers were working on. So the frequency interleaving separation of luminance and chrominance occurs in large part inside the human visual system. --208.252.179.26 (talk) 01:12, 16 March 2009 (UTC) Ohgddfp (talk) 01:53, 16 March 2009 (UTC)

More about chroma-luminance separation: Also, what happens when fine luminance (monochrome) image detail in the same general part of the frequency spectrum gets into the chroma decoder? Being inside the same general portion of the spectrum, the decoder cannot separate fine-detail luminance from legitimate chrominance. The result is that fine-detail luminance (2 to 4.1 MHz on some high-quality professional monitors (with I/Q demodulation), and 3 to 4.1 MHz on 99.9 percent of all other TVs using R-Y B-Y demodulation) causes false color to be added to the legitimate color. The good news is that the false color on one frame is the color complement of the false color from the previous video frame. Once again, it's the human visual system that comes to the rescue. The eye "integrates" from one video frame to the next, so the two complementary false colors from two consecutive video frames are mixed together inside the human visual system, canceling out the false color, leaving only the legitimate color. This is why, when editing analog (or digital NTSC) video tape, operators must pay attention to the NTSC color frame sequence, which alternates ABABABABABAB.... Now when very strong fine detail generates "illegal" false colors, the colors may no longer be complementary from one video frame to the next, and the system breaks down a little. Another issue is when the detail patterns caused by the weave in clothing make smaller islands of false color. Then they seem to move like moving rainbows. Comb filters can help with this as well. --208.252.179.26 (talk) 01:12, 16 March 2009 (UTC) Ohgddfp (talk) 01:53, 16 March 2009 (UTC)

Something about motion: As I discussed above, the luminance-chrominance separation works inside the human visual system. When chroma-induced interference (dot-crawl) is on the screen, this is motion. That is why some of the better comb filters are 3D, the 3rd dimension being time from one video frame to the next. These comb filters are sensitive to motion, with the theory that any fine detail (such as chroma-induced dot-crawl interference in the luminance channel) is not noticed much anyway, so it is not missed when removed. In this case, moving fine luminance detail is erased along with the chroma-induced dot crawl, because at certain motion speeds the picket fence phenomenon breaks down in those parts of the image where significant motion is involved.--208.252.179.26 (talk) 01:12, 16 March 2009 (UTC) --Ohgddfp (talk) 01:53, 16 March 2009 (UTC)

Spelling

Standardisation is spelt with an S, not a z. Stop abusing our language! —The preceding unsigned comment was added by Zoanthrope (talkcontribs) .

Don't be stupid. In both American English and British English it's spelt with a Z, see [1]. MrTroy 20:11, 3 August 2006 (UTC)
That's funny; your reference suggests that in British English, it can be spelled either way.
Atlant 23:22, 3 August 2006 (UTC)
It CAN be spelled either way, yes. That still means Zoanthrope was wrong in saying it can't be spelt with a Z, because it's completely valid. May I remind you, by the way, that it's against WP policy to remove other people's comments from the talk page, even if you think it's a personal attack. Which it isn't, where I live "don't be stupid" is hardly offensive, it just means "calm down". MrTroy 08:33, 4 August 2006 (UTC)
Fine. Put your comment back, and hope an administrator doesn't cite you for violating WP:NPA.
Atlant 12:56, 4 August 2006 (UTC)

I apoligise. My mistake. S S S Zoanthrope 18:27, 21 August 2006 (UTC)

Talking of spelling, that "apoligise" doesn't look right. Maybe that's cos I don't use it much! Zoanthrope 18:29, 21 August 2006 (UTC)

It's spelt apologize. Incidentally, it's in the Oxford list of commonly misspelled words :-) -- MrTroy 09:27, 22 August 2006 (UTC)
Or apologise. The page you linked has both versions. The OED prefers -ize, but many other British dictionaries prefer -ise. -- anonymous 29 Oct 2006

For the record, the U.S. spelling is -ize, and NTSC has strong national ties to the United States because most of the Commonwealth uses PAL. --Damian Yerrick (talk | stalk) 23:52, 12 July 2010 (UTC)

difference between looking like an infomercial and looking like a feature film

i thought ntsc had something to do with this. can someone please explain this to me?

What you are referring to could be the difference between a feature film recorded at 24 fps which is played using 3:2 pulldown on NTSC, and a television program which might be recorded fully interlaced at 60 different fields per second. Another difference could be the contrast, colour saturation and dynamic range of film versus television cameras. Ozhiker 19:58, 12 October 2006 (UTC)
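For anyone unfamiliar with 3:2 pulldown, here is a minimal sketch (Python; the frame labels are purely illustrative) of how four 24 fps film frames are spread across ten ~59.94 Hz video fields:

    # 3:2 pulldown cadence: film frames A, B, C, D become the field sequence
    # A A | B B B | C C | D D D (alternating 2 and 3 fields per film frame),
    # so 4 film frames fill 10 fields: 24 fps x 10/4 = 60 fields per second.
    def pulldown(frames):
        fields = []
        for i, frame in enumerate(frames):
            fields.extend([frame] * (2 if i % 2 == 0 else 3))
        return fields

    print(pulldown(list("ABCD")))   # ['A','A','B','B','B','C','C','D','D','D']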

I was searching for a copy of EIA RS-170A and from what I can tell it has been superseded by SMPTE 170M-2004, "Composite Analog Video Signal - NTSC for Studio Applications". It is in fact available for US$36 from the SMPTE store as a PDF file.

Warning: Standards from private bodies sometimes are in contradiction to federal regulations. But since the FCC is underfunded, local TV stations get away with these violations. The only version of NTSC that was ever legal for over-the-air local analog broadcasting is the federal regulations, which the NTSC wrote. See Code of Federal Regulations Part 73.682, "TV transmission standards". And oh, by the way, "NTSC" is never mentioned in the federal regulations. An argument could be made that officially, as far as obeying the law is concerned, there is no such thing as an official legal "NTSC standard" by that name. Ohgddfp (talk) 04:28, 14 March 2009 (UTC)

Citations

The first reference in the article is an explanation that the source for the information in the article has to be purchased through ITU. It should instead be a proper citation as per the manual of style for citations. Whether or not the information has to be purchased through ITU doesn't matter, it should be cited regardless. Also, if specific page numbers or sections can be cited, then cite them. This way it's easier for those who want to verify the information through the sources. Ceros 06:01, 9 December 2006 (UTC)

Warning: Sure, the document from the ITU is called "NTSC", but the matrix numbers WERE NEVER WRITTEN BY THE NTSC!! So using the ITU is misleading in an article called "NTSC", unless the article is reporting on the ITU document specifically as evidence of alternate "flavors" of NTSC. The only circulating standards document that the NTSC EVER WROTE can be found in only one place: the U.S. Code of Federal Regulations, Title 47 (FCC), Part 73. Ohgddfp (talk) 04:06, 16 March 2009 (UTC)

Indonesia / Hong Kong / Singapore

Indonesia, Hong Kong, and Singapore use PAL for broadcasting. However they use NTSC-J for gaming. Maybe, it's worth mentioning--w_tanoto 00:41, 24 January 2007 (UTC)

100 Hz Scanning not Common

The article claims that all 50 Hz receivers are scanned at 100 Hz. This is just not true. Only a very tiny minority of sets used this. The motion artefacts caused by scan doubling were worse than the flicker, so the system never really took off. This is a serious piece of misinformation in the article. —The preceding unsigned comment was added by 82.40.211.149 (talk) 21:52, 17 February 2007 (UTC).

I agree with that. It's not common and was not as good as the original 50 Hz; he's also right about the number of TVs which use this. Moooitic 22:18, 2 April 2007 (UTC)

CVBS Error

The article says "One odd thing about NTSC is the Cvbs (Composite vertical blanking signal) is something called "setup." This is a voltage offset between the "black" and "blanking" levels. Cvbs is unique to NTSC."

CVBS stands for Composite Video Burst and Syncs, not "Composite vertical Blanking System". It is also used to refer to a PAL signal, so is not unique to NTSC. It has absolutely no connection with the "set-up" or "pedestal" referred to here (although obviously a CVBS signal can have this). The more accurate "Composite Video" article says "Composite video is the format of an analog television (picture only) signal before it is combined with a sound signal and modulated onto an RF carrier. It is usually in a standard format such as NTSC, PAL, or SECAM. Composite video is often designated by the CVBS acronym, meaning either "Color, Video, Blank and Sync", "Composite Video Baseband Signal", "Composite Video Burst Signal", or "Composite Video with Burst and Sync"." —The preceding unsigned comment was added by 82.40.211.149 (talk) 22:06, 17 February 2007 (UTC).

CVBS correctly stands for Colour, Video, Blanking, (and) Sync, the 4 components that make up the composite CVBS signal carried on a single channel. It has no reference to the transmitted signal which can contain a CVBS signal modulated onto the video carrier. Any other translations are usually borne out of ignorance of what it does stand for. See CVBS (second para) - it's correct there. 20.133.0.13 (talk) 08:50, 21 September 2009 (UTC)

Color encoding and Luminance Derivation

The article says "Luminance (derived mathematically from the composite color signal) takes the place of the original monochrome signal." Luminance of course is NOT derived mathematically from the composite color signal but from a weighted average of the red, green and blue gamma corrected signals. It should also be mentioned that the chrominace signals are supressed subcarrier as well as quadrature modulated - this is an important part of the system. —The preceding unsigned comment was added by 82.40.211.149 (talk) 00:15, 18 February 2007 (UTC).

29.97 was an engineering error

This article by Paul Lehrman of Mix magazine

http://web.archive.org/web/20020108053619/http://www.paul-lehrman.com/insider/2001/08insider.html

asserts that the whole 29.97 thing was more the result of faulty engineering assumptions and poor testing than any genuine need to correct "beating" or "interference". If true, and he makes a good case, then all the confident articles about why the frame rate for NTSC had to be lowered to accommodate color are in error. It really ought to be incorporated in this article, but I've found "frame rate" is a topic video people have very dogmatic notions about, so I'll leave it to someone braver than I.

I am also inclined to believe that using 29.97 instead of 30 was a mistake. While it helped a little to reduce the effects of amplifier non-linearity in the receiver, it was only a little. Better to have made better receiver amplifiers. I'm talking about the amplifiers that amplified both the IF video and FM audio at the same time. Any non-linearity in these amplifiers causes the 920 kHz beat. They should have just made slightly better amplifiers. Ohgddfp (talk) 04:34, 14 March 2009 (UTC)

480i 680x480 and 576i 720x576

Hello. Well I am not a pro in all that stuff here (just came by to read some about pal conv. issue) but the article says that 480i has 680x480 and 576i has 720x576. Now that makes 4:3 on 480i and 3.75:3 on 576i which makes sense (about the pal issue - black on upper and lower screen-ends) BUT the picture here says something different. Namely that NTSC (480i) is 720x480 - now I think the picture is wrong but anyway I just wanted to know / say some about that. If I got something wrong here please tell me :-D

http://en.wikipedia.org/wiki/Image:Common_Video_Resolutions.svg is the image I refer to; it's the picture at the bottom. SECAM, PAL and NTSC share that picture (as well as some other articles).

Moooitic 22:14, 2 April 2007 (UTC)


Digitized NTSC is governed by CCIR 601, which defines a clock frequency such that both 525/60 and 625/50 are digitized with the same number of pixels per line to facilitate scan conversion with minimal artifacts. These are non-square pixels. The image size is actually 720 x 485 for NTSC and 720 x 576 for PAL/non-Cuban SECAM. The image size for those frames is 4:3 aspect ratio.

The picture shown is misleading because the pixels are non-square. An NTSC pixel is 4/720 of the screen width and 3/485 of the screen height. In other words, the NTSC pixel is about 90% as wide as it is tall. By contrast, the PAL/non-Cuban SECAM pixel is 4/720 of the screen width and 3/576 of the screen height, making it about 107% as wide as it is tall. —Preceding unsigned comment added by 132.228.195.206 (talk) 17:11, 4 March 2008 (UTC)
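The arithmetic behind those pixel-aspect figures, as a quick Python check (using the 720 x 485 and 720 x 576 frame sizes quoted above and a 4:3 screen):

    # Pixel aspect ratio = pixel width / pixel height on a 4:3 display.
    def pixel_aspect(h_pixels, v_pixels, screen_w=4.0, screen_h=3.0):
        return (screen_w / h_pixels) / (screen_h / v_pixels)

    print(pixel_aspect(720, 485))   # ~0.90: NTSC pixels slightly narrower than tall
    print(pixel_aspect(720, 576))   # ~1.07: PAL/SECAM pixels slightly wider than tall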

Some clarification of NTSC pixels. NTSC as defined by the NTSC does not have pixels in the modern sense. But NTSC in the studio is frequently digitized, upon which the digitized version has genuine pixels. But before analog transmission over-the-air, the digitized NTSC signal must be converted back to analog, at which time the pixels once again disappear. With good engineering, picture distortion from this analog-to-digital and digital-to-analog process should be difficult to measure and even harder to see. However, if there were not at least 453 horizontal pixels by 483 vertical pixels when the signal was in the digital domain, the eventual on-air analog signal would be forced to have reduced picture resolution. --Ohgddfp (talk) 01:24, 18 March 2009 (UTC)

"In other words, the NTSC pixel is about 90% as wide as it is tall." I disagree. A digital system with the MINIMUM pixel count capable of reproducing all the picture detail of an analog NTSC system is 453 horizontal pixels by 483 vertical pixels. With the 4:3 aspect ratio, this makes the NTSC pixels wider than they are tall. Ohgddfp (talk) 04:38, 14 March 2009 (UTC) --Ohgddfp (talk) 01:24, 18 March 2009 (UTC)

Is this the reason for what I have wondered about all these years?

Is NTSC the reason why American TV programmes usually look really shit when shown on TV over here? American films always look the same as any other movie though. — Chameleon 10:23, 9 May 2007 (UTC)

Yes and no. See Television standards conversion. NTSC signals look a lot better in their native state than converted to PAL. However, PAL (625/25i) signals look worse converted to NTSC, too. Films are transferred to video directly, and arguably look better on PAL systems, owing to the greater image resolution and the lack of 3:2 pulldown frame rate conversion, so that each film frame is broadcast in a full PAL image only once (however, films are speeded up by 4 percent, which some people find annoying). ProhibitOnions (T) 11:03, 9 May 2007 (UTC)

3:2 pull down is not used in Europe. Films are shown at 25fps instead of 24fps. No one notices the slight increase in speed. —Preceding unsigned comment added by 132.228.195.206 (talk) 17:14, 4 March 2008 (UTC)

American programmes on European TV always seem to be brighter. Not sure if this is some NTSC conversion artefact or if the slightly different gamma values in PAL and NTSC are responsible. 80.229.222.48 10:52, 24 June 2007 (UTC)

NTSC N ???

I want to mention that Argentina, Paraguay and Uruguay use PAL-N, but never used NTSC-N in color; the N norm was used only in the B/W era. The N norm conforms to the ITU Region 2 directive for 50 Hz countries. There, the majority of TV sets and VCRs are "binorm", that is, PAL-N / NTSC-M. And most are "trinorm", that is, PAL-N, NTSC-M and PAL-M. --Vmsa 20:45, 29 May 2007 (UTC)

This parenthetical remark does not belong in the body of the article, and I've removed it from the NTSC-N section:

(This might need checking. As far as I can say, Paraguay used PAL-N, while Bolivia and Chile use NTSC-M. While there are similarities in the M and N norm on bandwidth/frequency spectrum usage, the N-norm system used in Argentina and Uruguay both use a PAL-based color scheme. Chile and most of the west Latin American countries use NTSC-M).

After someone more knowledgeable than I has checked it out, the proper information can be put into the article. --Kbh3rdtalk 14:36, 15 July 2007 (UTC)

Hue errors

Living in Europe, I've read a lot about the alleged deficiencies of NTSC regarding hue errors, but it would be interesting to see some examples (screenshots).

Also, most texts make vague reference to "phase errors" as the cause of the problem without explaining:

  • 1) How widespread the problem is
  • 2) What (in the way of modifications/improvements) has been done to address the problem
  • 3) Is it confined to regular (terrestrial) TV or does it affect cable TV, satellite and/or videotape
  • 4) Is it only encountered with inadequate antenna setups (the use of set-top "rabbit ears" antennae rather than proper outdoor antennae seems to be more common in the US)

80.229.222.48 11:03, 24 June 2007 (UTC)


Color correction codes have been transmitted in the vertical interval since the late 1980s. These, used with digital color circuitry and monolithic analog circuitry, have eliminated the hue problem. Prior to that, no two sets had the same color. That is why people often referred to NTSC as 'never twice the same color'. —Preceding unsigned comment added by 132.228.195.206 (talk) 17:18, 4 March 2008 (UTC)

I'm in Europe too, but we had a terrestrial AFN station in my area which transmitted in NTSC. I had two multistandard TV sets, so I could watch it, but they probably didn't employ that fancy autocorrection circuitry the previous poster mentioned, so I saw the effect frequently. I have no screenshots, but you can probably simulate the effect if you have a multistandard TV set and feed it some NTSC material, by tweaking the "hue" setting, which is there for correcting the error but can also be misused to simulate it.
The error is most prominent in fleshtones; you typically get green or purple faces. The problem is that the error can fluctuate, so even if you've corrected it using the hue setting, it can happen that you have to do so again after a while.
1) It's a general problem of NTSC, it appears wherever there is an NTSC transmitter.
2) See previous poster.
3) I think terrestrial only, as the problem is mainly caused by bad reception, esp. reflections. Maybe bad cable installations can also display it. Analogue satellite is probably immune as it is FM modulated. VHS is immune too.
4) Bad reception certainly aggravates the problem.
Anorak2 (talk) 10:05, 5 March 2008 (UTC)


"These used with digital color circuitry and monolithic analog circuitry has eliminated the hue problem." I disagree. VIR is not used much in TV receivers anymore. The newer studio equipment and newer TVs simply have much more stable circuitry, so there is less inclination at the studio or the receiver to adjust (and usually misadjust) the phase control. And with the "hue" control buried in a menu these days, it doesn't get misadjusted anymore. Years ago this was a big problem, and VIR did not solve it because of improper implementation. Not until solid state circuitry came along did things improve gradually. Electrically, hue is controlled by electrical phase. Bad phase stability means bad hue stability. Differential phase in TV transmitters was also a bigger problem once. Today's NTSC transmitters are much better in this regard. Reception ghost problems bad enough to cause hue errors could not be fixed by adjusting the hue control anyway. NTSC receivers should never have had saturation, setup, and hue controls, and should always have had automatic equalizers to deal with ghosting. Go into a store and look at a lot of digital TV sets. The menus still have brightness contrast hue saturation. How to please four people watching the same program? How to set these? And even the display digital models are also never twice the same color. Why? Window shoppers change the settings in the remotes. This proves that misadjusted controls were always the biggest reason for never-twice the same color. Ohgddfp (talk) 04:50, 14 March 2009 (UTC)

Please specify the technical details of NTSC-J signal.

Looking at the article, I'm really confused about the NTSC-J signal characteristics, which it says are slightly different from the normal NTSC signal. What are the technical characteristics, such as sampling rate, video bit rate, audio bit rate, resolution (which I suppose is 720x480, is that right?), frame rate (which I suppose is 29.97 fps, is that right?) and other characteristics of the NTSC-J signal?

"NTSC III"?

Could someone please include an original (i.e. not Wikipedia-based) reference for "NTSC III" that supposedly defines all aspects of NTSC in a rigid mathematical manner, as the article claims. I have reviewed the professional literature on the subject, and could not find *anything* about a third National Television Systems Committee. If there was one, at minimum its final report (that's what committees do, they write reports) should be included in the reference section. For example, the reports of the first two NTSCs are called:

NTSC 1941, Transmission Standards for Commercial Television Broadcasting

and

NTSC 1953, Recommendation for Transmission Standards for Color Television.

as referenced by SMPTE-170M-1999. - NewRisingSun 14:47, 3 August 2007 (UTC)

According to federal regulations, NTSC III is illegal for over-the-air analog broadcasting in the United States. Why? Because NTSC III includes numbers that were NEVER WRITTEN BY THE ACTUAL National Television System Committee that wrote the numbers for the U.S. federal regulations. Remember that the federal regulations were written to insure interoperability. To achieve this, both transmitter (video camera encoder) and receiver must use the same matrix numbers! Going NTSC III causes stations and receivers to use different numbers, reducing interoperability, with poorer color accuracy as a consequence. It doesn't matter that the NTSC III numbers are superior for rendition of borders between colors. What does matter for the much more important broad-area color accuracy is that both the sending and receiving numbers are the SAME, regardless of what they are. Using NTSC III is actually a new standard, therefore causing interoperability problems for color accuracy, which is contrary to the spirit of why the NTSC wrote the federal regulations in the first place. Ohgddfp (talk) 04:59, 14 March 2009 (UTC)

Cleanup

I refactored the history section, but this entire article needs some serious language cleanup, especially with diction and voice. I see lots of passive voice and poor sentence structure. Tag applied. /Blaxthos 17:28, 7 October 2007 (UTC)

This article is annoying to read. I came looking for information and the article seems to argue with itself in the first paragraph. Someone who can put aside their chauvinism should rewrite the facts.20:31, 9 January 2008 (UTC) —Preceding unsigned comment added by 24.6.198.12 (talk)

SLUS/etc.

SLUS/SLES/etc. needs to be added to this article, due to relevance. —Preceding unsigned comment added by 76.201.156.95 (talk) 16:18, 29 October 2007 (UTC)

Brazil

Actually, analog TV in Brazil uses PAL, not NTSC (as shown in PAL's article).

US Analog Broadcast Shutdown Date

The date for the end of analog broadcasts in the U.S. is February 17, 2009. This information is cited from the FCC's DTV Transition website.

Users, please discontinue changing the date to February 17, 2011 unless a credible source can be cited. 134.48.162.253 (talk) 23:55, 11 January 2008 (UTC)

Primary Colors

There is mention that the primary colors were changed to SMPTE C, even though this is against federal rules. There is no mention as to why the FCC did not act against such brazen rule violators.

Because the FCC rules only state that the signal voltages must be "suitable" for a receiver with the 1953 characteristics (see 47 C.F.R. Section 73.682 (a)(20)(iv)). A picture encoded for SMPTE-C will still look acceptable on a 1950's receiver, as evidenced by pictures of vintage RCA CT-100 sets showing today's programs. NewRisingSun (talk) 16:29, 17 June 2008 (UTC)

About "A picture encoded for SMPTE-C will still look acceptable on a 1950's receiver, as evidenced by pictures of vintage RCA CT-100 sets showing today's programs." I disagree. First of all, a picture distorts the colorimetry quite a bit, and so should not be used in Wikipeida article. And SMPTE-C signals on an set using FCC phosphors makes for bad color accuracy. Sure, hue and saturation can always be adjusted on ANY receiver to get correct flesh tones on a person with even lighting. But the NTSC already knew this, yet they recommended to the U.S. government (FCC) to standardized colorimetry anyway! On an FCC phosphor TV (by the way I OWN a CT-100 ! with small 15GP22 CRT. Interstingly, although the tube is very deep, the viewing screen is COMPLETELY FLAT! ) and I used to watch Saturday Night Live on a 1958 21CYP22 CRT with good gray scale (which has the same FCC phosphors - look it up in the RCA tube manual - the print the crhomaticity coordinates). Feeding an SMPTE-C signal to such a set causes cyan to more than DOUBLE the saturation, and green to increase as well. If it was really suitable, the opposite condition would have to be suitable too; that is NTSC feeding reduced-gamut phosphors. Yet, as soon as reduced-gamut phosphors were introduced in 1962, similar to SMPTE-C, manufacturers immediately produced color space conversion. Older FCC sets getting a newer type replacement tube, repairmen actually were instructred to make matrix modifications to the TV. No, mismatched colorimetries is never suitable. Ohgddfp (talk) 03:40, 16 March 2009 (UTC)

Instead, I would think that FCC rules were not violated, as evidenced by the MATRIX switch found on expensive broadcast picture monitors using SMPTE C phosphors. The MATRIX switch changed the decoding within the picture monitor so that video signals designed for FCC NTSC phosphors would reproduce better. About 1964, TV receivers did the same thing, with their decoders behaving as if the MATRIX phosphor compensation switch were always on. The only reason for turning off this switch on broadcast monitors is to set up the monitor using the blue-only display and a color bar test signal.
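To make the "matrix switch" idea concrete, here is a minimal Python sketch of the kind of linear color-space conversion such a switch implements. The chromaticity coordinates are the commonly published 1953 NTSC and SMPTE C values and are meant as illustration only; an actual monitor would apply the correction in the linear-light domain, and real products differ in detail:

    import numpy as np

    def rgb_to_xyz_matrix(prims, white):
        """Build an RGB->XYZ matrix from xy primaries and an xy white point."""
        def xyz(xy):                    # xy chromaticity -> XYZ column with Y = 1
            x, y = xy
            return np.array([x / y, 1.0, (1 - x - y) / y])
        P = np.column_stack([xyz(p) for p in prims])  # unscaled primary columns
        S = np.linalg.solve(P, xyz(white))            # scale so white maps to white
        return P * S

    # Commonly published chromaticities (illustrative values).
    ntsc_1953 = rgb_to_xyz_matrix([(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)], (0.310, 0.316))
    smpte_c   = rgb_to_xyz_matrix([(0.630, 0.340), (0.310, 0.595), (0.155, 0.070)], (0.3127, 0.3290))

    # "Matrix switch": linear RGB meant for 1953 phosphors -> RGB for SMPTE C phosphors.
    matrix_switch = np.linalg.inv(smpte_c) @ ntsc_1953
    print(np.round(matrix_switch, 3))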

That's your conjecture. The scientific journals I cited clearly show what actually happened. NewRisingSun (talk) 16:29, 17 June 2008 (UTC)

Note also that receiver manufacturers did not stick with SMPTE C, but later started to go back toward NTSC. Today, some of the saturation previously lost with SMPTE-C-like primaries has been restored.

I agree that they later started to go back toward NTSC. Ohgddfp (talk) 03:40, 16 March 2009 (UTC)

I'm not sure I know what you're talking about here, but the "red push" seen on receivers is to compensate for the non-standard color temperature used in those sets (9300K as opposed to 6500K).

The point is that SMPTE C was never a SIGNAL standard,

SMPTE-170M, the contemporary broadcast standard, clearly states: "4.1 The green, blue, and red (G B R) input signals shall be suitable for a color display device having primary colors with the following chromaticities [...] The display primaries with the chromaticities specified above are commonly referred to as the SMPTE C set." Please read the actual standards documents; that's why I cited them. NewRisingSun (talk) 16:29, 17 June 2008 (UTC)

and neither was it a receiver standard. It was a de facto standard (read: SMPTE standard) for TV broadcast picture monitors only, and not a standard for the transmitted signal or for TV receivers. Now the phosphors on today's receivers are somewhere between SMPTE C and NTSC.

Proof? I've got references for my claims, have you? :) NewRisingSun (talk) 16:29, 17 June 2008 (UTC)

I have proof. See the U.S. Code of Federal Regulations (FCC) Part 73. I think you would agree that federal regulations trump private standards bodies. The regulations mandate a certain formulation of the signal. SMPTE-C is a case in point: it contradicts federal regulations. And remember that the purpose of the National Television System Committee was to WRITE these federal regulations. And yes, the FCC does not regulate receivers or broadcast monitors, allowing the use of SMPTE-C monitors in the studio, but it does require that TV cameras be designed for best operation with NTSC phosphors. Are these federal regulations enforced? I doubt it. But we are supposed to make a distinction between what the NTSC actually said, what common practice is today, and what federal law requires of local TV stations. And for proof of the matrix switch on broadcast picture monitors, look at the operation manual of a Conrac color studio picture monitor of the 1980s. In it, they explain the operation of the matrix switch. (It's for color space conversion from the NTSC signal that anticipates NTSC phosphors to the actual SMPTE-C phosphors in the monitor.) Oh, and by the way, these federal regulations are still on the books in 2009. Ohgddfp (talk) 05:16, 14 March 2009 (UTC)

There's no real contradiction here. The CFR says the signal must be "suitable" for 1953 primaries. SMPTE-C phosphors were used from 1968 on, but assuming that receivers expected a signal encoded for 1953 primaries, studio monitors used a matrix switch to display a NTSC-phosphor-like picture on the SMPTE-C monitors. Industry standard SMPTE-170M in 1979 then made the switch, telling everyone to no longer anticipate NTSC phosphors but SMPTE-C phosphors. Of course, broadcasters and equipment manufacturers didn't immediately change their practices, but at least since the early 1990s or so, no broadcast or receiver equipment expects the incoming signal to be for 1953 primaries, and so the matrix button has become dispensable. And yes, the old primaries are still in the CFR, but they are indeed not enforced (How could they be? The party bringing a charge would have to prove that a broadcaster expecting receivers to have SMPTE-C primaries makes the signal "unsuitable" for the 1953 reference receiver, which clearly is not the case, as stated previously). NewRisingSun (talk) 09:03, 14 March 2009 (UTC)

About "Industry standard SMPTE-170M in 1979 then made the switch, telling everyone to no longer anticipate NTSC phosphors but SMPTE-C phosphors". SMPTE is not a court or an arm of the government. It's a private club. By doing this, SMPTE could have been brought up on federal charges of enticing broadcasters into a deliberate violation of federal regulations. It's one thing to propose this as a standard, but another to push businesses into illegally enhancing profits. And because this was a move to reduce costs (by using phosphors with high energy efficiency), it now becomes a conspiracy for wide-scale lawlessness. Think Arthur Anderson. Ohgddfp (talk) 03:11, 16 March 2009 (UTC)

I disagree with one point. Here's how a party bringing a charge can prove that a broadcaster is not formulating the signal suitable for 1953 phosphors: Look at how the camera is demonstrated at the vendor shows (N.A.B.). If the picture monitor in the demonstration produces screwed-up colorimetry when fed a low-saturation GENUINE NTSC test signal (where a simple linear color space conversion really needs to do a good job, and CAN do a good job with non-FCC phosphors), a jury can be easily convinced that the camera designers did not try to follow federal regulations. The violation occurs when such a camera is used without the necessary in-studio color space conversion to produce a signal that eventually makes its way to the local transmitter. Also, using a non-NTSC picture monitor to judge colorimetry of the broadcast signal is prima facie evidence of a violation. So the local station can protect itself by using black-and-white monitors and getting affidavits from program vendors that they are enabling the local station to follow federal regulations. IF there is a violation, the station pays the fine, and then turns around and sues the program vendor for perjury. It's the local TV station company that is in violation, regardless of where the video source comes from. What about the "America's Funniest Home Videos" program? TV stations can protect themselves from prosecution by using a correct NTSC picture monitor whenever they want to judge the effect of additional color-space conversion, or even the very need for the additional color space conversion. So it's in the picture monitoring where proof of violation occurs. In short, using a non-FCC monitor to judge picture quality is prima facie evidence of a violation. If people were sufficiently annoyed, especially when spending thousands of dollars on a TV these days to watch old NTSC shows, I'm sure a federal case could be made out of this. Ohgddfp (talk) 02:23, 16 March 2009 (UTC)

About "The party bringing a charge would have to prove that a broadcaster expecting receivers to have SMPTE-C primaries makes the signal "unsuitable" for the 1953 reference receiver, which clearly is not the case,... ". I'm referencing the "clearly not the case". A signal formulated for SMPTE-C phosphors is indeed unsuitable for an FCC reference monitor because the color reproduction accuracy gets all screwed up. It's the deliberate color distortion that makes such a signal unsuitable. The reason is that the FCC is regulating a TELEVISION SYSTEM, which BY DEFINITION implies the CAPABILTY of high fidelity reproduction (to a practical degree), even if the program producer, for artistic reasons, elects not to strive for it. It could be argued that changing the standard constitues a practical approach, but practicalites is not a reason for violating federal law. In this day and age where integrity is in the news, industry should exercise more care and not expect local TV companies to violate federal law just because a private standards body wants them to. --Ohgddfp (talk) 02:46, 16 March 2009 (UTC)

Oh, and you don't think such prosecution could ever happen? Consider something that was INVISIBLE to the home viewer, yet many production companies had to make new versions of videotaped programs because of excessive horizontal blanking, from 1978 to about 1983 or so. Program producers squealed like stuck pigs. And this problem wasn't even visible on TV sets! Granted, it's easier to prove than proving wrong colorimetry. Ohgddfp (talk) 02:46, 16 March 2009 (UTC)

I see we have no disagreement about the fact that contemporary NTSC signals should be assumed to be for SMPTE "C" and not for 1953-NTSC. As for prosecutions, apart from being out of scope of the article, the FCC seems to have quietly accepted the change, given that the ATSC specification specifically lists SMPTE "C" primaries for standard television digital broadcasts. NewRisingSun (talk) 16:25, 16 March 2009 (UTC)

We do disagree. I'm saying that to be legal, all over-the-air analog broadcast signals should be assumed to be for FCC phosphors (1953 to 2009).

Yes, to be legal, you should do exactly that. But to be practical, all over-the air analog NTSC broadcast signals should be assumed to be for SMPTE-C phosphors, just as digital SDTV and DVD 525/59.94 signals. When you're tasked, say, with producing a DVD and you monitor with 1953 primaries, and your customers/reviewers monitor the result using uncorrected SMPTE-C, as is the industry practice, and wonder about the ugly result, using the defense that you're just following a 50-year old law won't help you. :) Since we're debating about what the article should say: the article should state the original 1953 specification, then explain how industry practice has since deviated from that, with sources (which I can provide). Right now, that's exactly what the article does, with sources (which I have provided). I can provide more sources though if you find the existing ones unconvincing, though at some point it's going to become tedious for the reader. I think I'll at least dig up the proper citation for the N.W. Parker article from 196x or so that provides the proper formula for both the linear and non-linear correction used by "matrix" buttons and receiver correction circuits. NewRisingSun (talk) 16:20, 17 March 2009 (UTC)
I concur that the various versions of NTSC, including FCC, de facto, and those by standards bodies, should be put into a clever table, with sources like you suggest. For a given standard, data should include parameters, official time span in effect, actual time span in use, and popular use (like "DVD players", "composite wiring in studio" or "composite wiring in home"). Of course, the only parameters the NTSC ever wrote, which are the FCC regulations, would be just one of the sources. This should include digitized non-compressed NTSC (on some professional VCRs) as well, and even broadcast-grade U-Matic. I guarantee the data will be a real eye-opener (pardon the pun for digital fans). This puts all the flavors of NTSC in nice perspective for the reader. --Ohgddfp (talk) 22:34, 17 March 2009 (UTC)

I am also saying that putting a genuine FCC signal into an SMPTE-C monitor is reasonable IF (emphasis on IF) the monitor itself is using color space conversion to correctly fix at least most of the normal colors. TV set designers can do a better job of design, as well as increase cost-effectiveness, if they know ahead of time the color space for the on-air signal. This speaks directly to why the FCC mandated these standards in the first place, which is for interoperability between the TV signal and the TV that receives that signal. And I mean all kinds of interoperability, not just "it has picture, sound and color, that's good enough". When readers learn about the SMPTE standards, FCC regulations, and so forth, eventually confusion will set in, which is not good for an encyclopedia article. And of course, you (NewRisingSun) are correct that potential prosecutions are not legitimate subject matter for an article. But when readers see such contradictions in the article (NTSC as practiced versus NTSC as written versus NTSC as enforced), they might want to know why, and we should tell them in terms of what has actually, factually taken place. I assign reasons behind the facts on this talk page to alert people working on the article that inconsistencies really do exist, and to get them thinking about how to report these inconsistencies in the article without causing even more confusion. And by the way, I think the industry is very confused today because of all these complicated issues of which NTSC is the real NTSC, and I have read that color space conversions involving NTSC are actually in chaos. Also, others on this talk page (or was it NewRisingSun?) have mentioned that phosphors have been moving back closer toward NTSC, leaving SMPTE-C behind. This I agree with. But if contemporary signals expect SMPTE-C phosphors, then this opens up another set of issues, because a color saturated beyond SMPTE-C but within the newer, more saturated phosphors would contain one or two of the separate R G B video signals WITH NEGATIVE VALUES as decoded by a waveform monitor employing a strict FCC decoder. Such a reference monitor advertised by Tektronix (specs still available) advertises the detection of "illegal" colors within the transmitted signal, which according to Tektronix means one or two of the R G B primary values going NEGATIVE when decoded by a strict FCC decoder. Therefore a signal expecting SMPTE-C color space must cause an occurrence of Tektronix's idea of an "illegal" color in order to render a super-SMPTE saturated color. So something has to give. But what is it? Although I may disagree with the FCC, I think the FCC and Tektronix are together on this issue of "illegal" colors. So what's left is either that the additional saturation capabilities of the later phosphors are going to waste in order to avoid "illegal" colors in the transmitted signal, or that the target color space of contemporary on-air analog signals is NOT SMPTE-C. I know it has to be one or the other. I just don't know which. Maybe in all the chaos it really is both, causing NTSC to continue being "never twice the same color" as we go into converting old NTSC videotapes into digital television files. --Ohgddfp (talk) 21:52, 16 March 2009 (UTC) --Ohgddfp (talk) 22:44, 16 March 2009 (UTC)


I'm confused here. What does the Tektronix monitor warning about negative RGB values tell us about which color space to assume for broadcast signals? You can get negative RGB values for any saturation-maladjusted broadcast signal, and you will get negative RGB values with any of the older home computers/game consoles, as well as a few older character generators because they generate the NTSC signal using simple flip-flops/counters/oscillators. NewRisingSun (talk) 16:20, 17 March 2009 (UTC)


About "What does the Tektronix monitor warning about negative RGB values tell us about which color space to assume for broadcast signals?". The warning tells us nothing about which color space to assume. From Tektronix's point of view if the voltage in a certain format (NTSC/PAL and others) goes negative, it's "illegal", not from the FCC point of view, but in terms of violating the particular standard the test instrument is set to. As it turns out, "NTSC" in any version becomes "illegal" when one or more of the R G B values goes negative. Also, FCC defines "picture elements" (yes! in 1953, but actually just the blobs in the camera tube) that when gamma corrected become R' G' B', which, for purposes of defining the encoding formulation, range from zero for zero energy to maximum for reference white. With gamma, there is no clear way to have meaning for negative values for the purpose of the formulation, so one could interpret negative values of any of R' G' B' to be illegale in the FCC sense as well. Tektronix, says excessive positive values as well can cause a signal to be "illegal". Yes, older home computers / games, character generators will indeed light up the alarm on these Tektronix instruments. Tektronix, in their application note Preventing Illegal Colors, gives the "diamond" display on their The WFM700 analyzer. The analyzer does not deal directly with the color space of any one particular set of phosphors; instead it deals only on the voltage ranges, which for stanard NTSC composite video (probably SMPTE ?), stays the same with any set of assumed target phosophors. --Ohgddfp (talk) 22:44, 17 March 2009 (UTC)

About "given that the ATSC specification specifically lists SMPTE "C" primaries for standard television digital broadcasts": I just read the ATSC specs from their website. They talk about syntax fields for explicitely and NUMERICALLTY specifiying chromaticity coordinates, gamma values and matrix values, with special variable names for each of these. If these are not supplied, then yes, they RECOMMEND a default, but they don't actually specify a default. They also mention that certain values are likely, but that doesn't rise to the level of specification. My take on the approach by ATSC is that any really good receiver will do a color-space conversion based on the values and matrix values supplied inside the digital TV signal. This goes to showing that ATSC is not legitimizing any particular version of NTSC, which is evidence for us people modifiying this article that we need to find what the real facts are concerning the various versions of NTSC, and how to report it with easy readability. --Ohgddfp (talk) 22:55, 16 March 2009 (UTC)

By the way, this change in the receiver decoder matrix works great for most normal and pale (low saturation) colors, which is what is seen most of the time. But gamma compensation causes the decoder-matrix phosphor-compensation scheme to fall apart for strongly saturated colors. 199.232.228.77 (talk) 04:09, 17 June 2008 (UTC)
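For anyone curious what such a decoder-matrix phosphor compensation looks like numerically, here is a rough sketch (my own illustration, not any particular manufacturer's circuit). It derives the 3x3 matrix that maps linear-light RGB defined against the 1953 FCC/NTSC primaries into linear-light RGB for SMPTE-C primaries, using the published chromaticity coordinates; white-point adaptation is deliberately ignored to keep the example short. Because the matrix is exact only in linear light, applying it to gamma-corrected R'G'B' is the approximation that, as noted above, holds up for pale colors and breaks down for strongly saturated ones.

```python
# Sketch: derive the 3x3 matrix that converts linear-light RGB expressed in
# 1953 FCC/NTSC primaries to linear-light RGB expressed in SMPTE-C primaries.
# Chromaticities are the published values; white-point adaptation is omitted.
import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_w):
    """Build the RGB->XYZ matrix for given primary and white chromaticities."""
    def xyz(xy):
        x, y = xy
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    prim = np.column_stack([xyz(xy_r), xyz(xy_g), xyz(xy_b)])
    white = xyz(xy_w)
    scale = np.linalg.solve(prim, white)   # scale each primary so R=G=B=1 gives white
    return prim * scale

# 1953 FCC/NTSC primaries, Illuminant C white point
M_ntsc = rgb_to_xyz_matrix((0.67, 0.33), (0.21, 0.71), (0.14, 0.08), (0.310, 0.316))
# SMPTE-C primaries, D65 white point
M_smptec = rgb_to_xyz_matrix((0.630, 0.340), (0.310, 0.595), (0.155, 0.070), (0.3127, 0.3290))

# Linear-light NTSC-1953 RGB -> linear-light SMPTE-C RGB
M_compensate = np.linalg.inv(M_smptec) @ M_ntsc
print(np.round(M_compensate, 3))

# A receiver that applies M_compensate to gamma-corrected R'G'B' instead of
# linear RGB is only approximating this; the error is small for pale colors
# and grows for strongly saturated ones, which is the point made above.
```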

PAL DVDs on NTSC television

Why is there no Wikipedia article on PAL to NTSC conversion, and so forth? There is information in Wikipedia on region hacks, so I'm wondering why none on the more basic PAL/NTSC situation.

Anyway, I'm wanting to know, how do you know if one's American TV (32") will play a PAL DVD, with a region-free DVD-player? If it will not play PAL DVDs, how do you convert the TV in order to view PAL DVDs? Thanks in advance. Softlavender (talk) 11:58, 25 August 2008 (UTC)

The reasons I've always seen given are that, apart from PAL to NTSC home conversion being more complicated than the reverse (see Television standards conversion#Overview), manufacturers have never perceived a market for it. In PAL countries VCRs started incorporating the facility as standard in about the mid 1990s (although you also need a television that can handle the modified signal, usually through the SCART cable), as the main market demand was for being able to watch imported tapes from the US. This was carried forward to DVD players (although there's the issue of region coding, but again this is something that can be bypassed), and indeed some DVDs released in PAL countries have actually used the NTSC masters. By contrast, the US doesn't have such a following for material from the PAL markets, and so there has never been sufficient market demand for PAL playback to be included. Timrollpickering (talk) 11:06, 6 January 2009 (UTC)
Modifying a single-standard NTSC television to become multistandard is not possible with all models, is in any case expensive, and is usually not worth the hassle. A better option is to feed the television set a true NTSC signal by converting it from PAL. This is doable either with an outboard converter attached to a multistandard player, or with a multistandard player that has an internal converter. The latter are often incorporated in cheap Chinese no-name players. Here's a list. Anorak2 (talk) 14:05, 6 January 2009 (UTC)
Converters that preserve high-quality images must perform a "de-interlacing" operation to convert from interlaced to progressive, similar to what is required for internal processing inside an LCD TV. De-interlacing that is very high quality is very expensive to do; in fact, very high quality de-interlacing is more expensive to implement than a dual-standard CRT-based TV. The cheap converters available generally do a lousy job of de-interlacing, like you see on LCD TVs. Of course, for the target standard, the converter must re-interlace the image, which is a simple task. In conclusion, if high quality is the goal, then dual-standard CRT-based TVs give the highest quality for the dollar. But if the performance of a 12 inch NTSC LCD TV looks good enough to you, then a cheap converter will be good enough as well, besides being the cheapest solution. --Ohgddfp (talk) 02:37, 17 March 2009 (UTC)
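As a rough illustration of why the cheap converters look soft, here is a minimal sketch (purely illustrative; the array shapes and function names are my own assumptions) of the two simplest de-interlacing strategies: "weave", which just stacks the two fields and combs on motion, and "bob", which line-doubles one field and throws away half the vertical resolution. Good converters use motion-adaptive or motion-compensated methods instead, which is where the cost goes.

```python
# Sketch of the two cheapest de-interlacing strategies, for illustration only.
# field_odd and field_even are the two fields of one interlaced frame,
# each of shape (height // 2, width).
import numpy as np

def weave(field_odd, field_even):
    """Interleave the two fields into one frame; combs badly on motion."""
    h, w = field_odd.shape
    frame = np.empty((2 * h, w), dtype=field_odd.dtype)
    frame[0::2] = field_odd
    frame[1::2] = field_even
    return frame

def bob(field):
    """Line-double a single field; no combing, but half the vertical detail."""
    return np.repeat(field, 2, axis=0)

# Example with dummy 240-line fields (one 480i frame's worth):
f1 = np.random.randint(0, 256, (240, 720), dtype=np.uint8)
f2 = np.random.randint(0, 256, (240, 720), dtype=np.uint8)
print(weave(f1, f2).shape, bob(f1).shape)   # (480, 720) (480, 720)
```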
Most modern PAL TVs (here in the UK) are quite happy to play NTSC; only earlier TVs used the 50 Hz mains as a reference, so the TV will switch automatically from 50 Hz to 60 Hz as required. Most DVD players are bought "multi-region" out of the box (i.e. made that way by the manufacturer; my Panasonic is like that); only the very cheap DVD players will be region 2.  Ronhjones  (Talk) 21:52, 13 March 2009 (UTC)
Television receivers have never used the 50 Hz mains for any form of reference; if this were not the case, battery operated TV sets could never exist. Broadcasting stations do use the mains power as a frame frequency reference, but they don't have to (and indeed ceased to do so in the US once colour was introduced). 20.133.0.13 (talk) 08:57, 21 September 2009 (UTC)

"250kHz Guard Band" ... Where it from?

In `Transmission modulation scheme`, the article states:

>A guard band (...) occupies the lowest 250 kHz of the channel

But I couldn't find the term "guard band" in ITU-R BT.1701, nor in the FCC's CFR Title 47 Part 73 Subpart E.


Q1: Can anyone show us the source of "guard band"?


In this graphic, you can see that the 250 kHz guard band is actually at the high end of the channel. But the upper FM audio sidebands use up some of the guard band. --Ohgddfp (talk) 01:53, 17 March 2009 (UTC)

Copied from U.S. Government Document - Public Domain (U.S. Code of Federal Regulations Title 47 Part 73.687): "... Also, The field strength or voltage of the lower sideband, as radiated or dissipated and measured as described in paragraph (a)(2) of this section, shall not be greater than -20 dB for a modulating frequency of 1.25 MHz or greater and in addition, for color, shall not be greater than -42 dB for a modulating frequency of 3.579545 MHz (the color subcarrier frequency). For both monochrome and color, the field strength or voltage of the upper sideband as radiated or dissipated and measured as described in paragraph (a)(2) of this section shall not be greater than -20 dB for a modulating frequency of 4.75 MHz or greater. ..." --Ohgddfp (talk) 02:21, 17 March 2009 (UTC)
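For readers not fluent in dB figures, the limits quoted above convert to amplitude ratios as follows; this is just the standard 20*log10 voltage-ratio definition, and the arithmetic below is mine, not part of the regulation.

```python
# Convert the attenuation limits quoted above from dB to voltage ratios,
# using the standard definition dB = 20 * log10(V / Vref).

def db_to_ratio(db):
    return 10 ** (db / 20.0)

for label, db in [("lower sideband at 1.25 MHz", -20),
                  ("lower sideband at 3.579545 MHz (color)", -42),
                  ("upper sideband at 4.75 MHz", -20)]:
    print(f"{label}: {db} dB -> {db_to_ratio(db):.4f} of reference amplitude")
# -20 dB is 1/10 of the reference amplitude; -42 dB is roughly 1/126.
```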

It is also stated that

>which does not carry any signals

Well, the FM audio sidebands occupy some of the guard band. They would have to. --Ohgddfp (talk) 01:53, 17 March 2009 (UTC)

But I guess it is impossible to implement/realize such a filter (with 1940s technology).

It's doable in 1940. --Ohgddfp (talk) 01:53, 17 March 2009 (UTC)

Q2: How many decibels of attenuation are required (at under 1 MHz from the visual carrier)? And where is that numerical value cited from?

That would be at least 20 dB down at 1.25 MHz or more below the visual carrier frequency. The citation is from U.S. Code of Federal Regulations Title 47 Part 73.687; see above. --Ohgddfp (talk) 02:21, 17 March 2009 (UTC)

I asked about "lowest 250 kHz of the channel". NOT higher end.
And , I already readed FCC's documents before I post these questions. So , I know "20 dB down at 1.25 MHz below the visual carrier".
But I want to know "How many decibels are reqired to attenuate at under 1MHz". NOT 1.25MHz. —Preceding unsigned comment added by 121.105.168.173 (talk) 20:11, 17 March 2009 (UTC)
About the lowest 250 kHz of the channel: it doesn't matter. The graphic could have been drawn two ways, one with the guard band at the lower end and the other with the guard band at the high end. Being a guard band, the meaning ends up the same. The meaning of where a channel begins would obviously change with the alternate graphic, but that actually doesn't matter. --Ohgddfp (talk) 23:03, 17 March 2009 (UTC)
Oh, I think I know what you are saying. At 1 MHz below the visual carrier, I believe that what we found is all there is; in other words, I don't believe such a specification exists, though I think there ought to be one. I've always felt that the philosophy behind not specifying everything goes to practical filter design. The graphic says the FCC wants flatness to a certain point, then wants a rolloff such that at a certain frequency the signal is so many dB down. The characteristic of the rolloff itself seems to be undefined, which I consider to be a problem. --Ohgddfp (talk) 22:57, 17 March 2009 (UTC)


"Obviously"?
On the higher side of this graphic, the visual signal extends up to 5.75 MHz. But on the lower side, the visual signal begins from 0 MHz, NOT 0.25 MHz. It is "obviously" asymmetric.
Yes indeed, it is asymmetric. --Ohgddfp (talk) 15:00, 18 March 2009 (UTC)
In addition, the higher-end (5.75 to 6 MHz) region contains the sound signal (25 kHz deviation) and its sidebands. It is unreasonable to call this (5.75 to 6 MHz) region a "guard band which does not carry any signals".
I agree with you that calling this (5.75 to 6 MHz) region a "guard band which does not carry any signals" does not make any sense. Therefore I would be somewhat surprised if this phrase actually appeared in the federal regulations that were written by the NTSC. --Ohgddfp (talk) 15:00, 18 March 2009 (UTC)

Likewise for the lower end (from 0 Hz), because it contains the visual signal.

And can't you see the sentences "IDEALIZED PICTURE" and "Not drawn to scale" in the picture?
Yes. That means the graph can only make sense when viewed IN CONTEXT with other FCC texts. --Ohgddfp (talk) 15:00, 18 March 2009 (UTC)
If you have graduated from junior high school, your math teacher may have told you, "Do not read or speculate intermediate values from a graphic".
True, if the graphic is considered by itself. Remember that the graphic can only make sense IN CONTEXT with the other FCC text I included above. This text mentions flatness for a certain region, and being down so many dB at a certain point. The part in between is not specified, because I believe the NTSC considered that any reasonable roll-off performance consistent with the FCC-specified phase delays would be good enough. I myself maintain it was a mistake for the NTSC not to fully specify this. (By the way, I never graduated from junior high school. I also flunked kindergarten.) --Ohgddfp (talk) 15:00, 18 March 2009 (UTC)
We can read only "20 dB attenuation at 1.25 MHz below the visual carrier" from this graphic (and the FCC's documents). —Preceding unsigned comment added by 121.105.168.173 (talk) 11:31, 18 March 2009 (UTC)

There is some kind of assumption that, because some private standards bodies, manufacturers or authors cook up a guard band, this is somehow necessary. This article is entitled "NTSC", but I don't think the NTSC wrote anything about a guard band. If someone can find something written or adopted by the FCC (see Code of Federal Regulations Title 47), or by the NTSC (which wrote only things to be adopted as broadcast regulations), then I would like to know. A reference would be helpful here. --Ohgddfp (talk) 16:08, 20 March 2009 (UTC)

The reason no guard band is necessary is that it's OK for FM sidebands located further away from the FM carrier to be only slightly contaminated by the visual rolloff region of an adjacent channel, to the tune of -20 dB. Sure, standards bodies and manufacturers may want to do better than this, but that is not the same as saying the NTSC standard itself requires doing better. So we have to be careful, because this article is entitled "NTSC". And consider a second reason that neither the NTSC nor the FCC requires a guard band: TV channel allocations in a given locale are not adjacent. So the issue becomes one of not creating spurious emissions outside the 6 MHz allocation. --Ohgddfp (talk) 16:08, 20 March 2009 (UTC)

I am willing to bet that the colored graphic presently in the article, with the explicit guard band, is something that became popularized when monochrome variants got letter labels, like 'M' and 'N'. I would think that cable TV companies would be very interested in following this, because channels are adjacent to each other on a cable TV system, where adjacent-channel interference becomes a much more severe issue. But for the purpose of defining what is "NTSC", it should be made clear in the article that only some, but not all, of the features in this 'M' system were authored by the NTSC. --Ohgddfp (talk) 15:42, 21 March 2009 (UTC)