Talk:Error detection and correction/Archive 1

Possible corrections
There is a link to Scientific Atlanta in the text, which seems like an advertisement. I don't think that Alamouti and STBC codes pertain to this topic: they are analog precoding techniques rather than digital correction ones. Consider merging this article with forward error correction. Cantalamessa 13:07, 9 December 2005 (UTC)


 * I agree to the merge, unless separate articles are made to discuss error detection and error correction. -- 130.94.162.64 15:46, 17 February 2006 (UTC)

Some questions
From the Voyager section: Color image transmission required 3 times the amount of data ... 3 times compared to what? Black-and-white photos?

In one of the first sentences there is: Error correction in some applications, such as a sender-receiver system, can be achieved with only a detection system in tandem with an automatic repeat request scheme to notify the sender that a portion of the data sent was received incorrectly and will need to be retransmitted.

I would say this is not correct, as error correction can happen without the "repeat request", as long as the error rate is within the correctable range.

--Abdull 07:01, 16 October 2005 (UTC)


 * The sentence is correct in that it is possible to achieve error correction with just detection and ARQ. It is of course not the only means of error correction. The purpose of the sentence is to highlight that advanced error correction schemes are not always necessary to achieve error correction capabilities. Dysprosia 08:19, 16 October 2005 (UTC)
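The point made above — that detection plus retransmission adds up to error correction without any error-correcting code — can be sketched in a few lines. This is a hypothetical stop-and-wait ARQ in Python, using CRC-32 purely for detection; the function names and the toy channel model are illustrative, not from the article.

```python
import random
import zlib


def send_with_arq(payload: bytes, channel, max_retries: int = 10) -> bytes:
    """Stop-and-wait ARQ: a detection code (CRC-32) plus retransmission
    achieves error correction with no error-correcting code at all."""
    for _ in range(max_retries):
        # Sender appends a 4-byte CRC-32 checksum to the payload.
        frame = payload + zlib.crc32(payload).to_bytes(4, "big")
        received = channel(frame)  # frame may be corrupted in transit
        data, check = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == check:
            return data  # checksum matches: receiver accepts the frame
        # Checksum mismatch: receiver requests retransmission (loop again).
    raise RuntimeError("retry limit exceeded")


def noisy(frame: bytes) -> bytes:
    """Toy lossy channel: flips one byte of the frame 30% of the time."""
    if random.random() < 0.3:
        corrupted = bytearray(frame)
        corrupted[0] ^= 0xFF
        return bytes(corrupted)
    return frame
```

Corrupted frames fail the CRC check and are simply re-sent, so the payload eventually arrives intact with overwhelming probability.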

I re-wrote that section to make it less confusing. Would you mind checking it to make sure I didn't introduce errors? --68.0.120.35 04:35, 16 February 2007 (UTC)

What about simple checksums used by barcode, credit card and other identification numbers?

Yes, this article could use a paragraph mentioning that checksums are used in those situations as part of an ARQ system.
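As an illustration of the kind of simple checksum used on credit card and other identification numbers, here is a minimal sketch of the Luhn check-digit algorithm (the standard scheme for payment card numbers). The function name is my own; the test number below is the well-known Luhn example value, not a real card number.

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum.

    Every second digit from the right is doubled; if doubling yields a
    two-digit value, 9 is subtracted (equivalent to summing its digits).
    The number is valid when the total is a multiple of 10.
    """
    digits = [int(c) for c in number]
    for i in range(len(digits) - 2, -1, -2):
        doubled = digits[i] * 2
        digits[i] = doubled - 9 if doubled > 9 else doubled
    return sum(digits) % 10 == 0
```

A scheme like this detects any single-digit typo and most adjacent-digit transpositions, which is exactly the detection half of a detect-and-retransmit (ARQ-style) workflow: the clerk or website rejects the number and asks the user to re-enter it.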

Merge discussion (done)
Should we merge this article with Forward error correction ? --DavidCary 00:42, 21 September 2005 (UTC)


 * Personally I think a merger may be inappropriate in this case. In my opinion, forward error correction (FEC) is a method of encoding data prior to transmission across a lossy channel such that upon reception of corrupted data, there is a (presumably) improved probability of recovering the original data.


 * EDAC by contrast includes not only correction, but also detection of errors. It does not necessarily limit itself to forward error correction, but is a superset of such techniques which could be applied simultaneously. Parity is one example which is not normally considered a type of FEC!


 * Probably more significant is the fact that EDAC generally applies to data storage and handling (at least in space-borne applications), which include tracking errors through a calculation step as well as through transmission across a bus and into a memory device. Storage of data in memory might use triple redundancy, or other majority voting methods. It may involve a type of RAID (redundant array of inexpensive discs) technique. EDAC is thus more like a toolkit of alleviative techniques and methods which include, but are not limited to, FEC. --I.v.mcloughlin 22:51, 17 April 2006 (UTC)


 * I think it would be useful to merge these articles anyway, because codes used for error detection are in fact (at least mathematically) special cases of basic error correction codes (e.g. a parity code is a linear block code). The methods used in analyzing codes' performance are also similar and include investigation of minimum distance and weight distribution. Another argument to merge the articles is that any error correction code (at least, a block one) can also be used for detection. For example, if a particular block code has minimum distance d=3, it can either correct 1 error or detect 2 errors, depending on how the decoding side interprets the code.--Ring0 15:27, 29 April 2006 (UTC)
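Ring0's point about minimum distance can be checked numerically: a block code with minimum distance d can correct up to floor((d-1)/2) errors or detect up to d-1 errors. A minimal sketch (function names are my own), using the 3-bit repetition code as the d=3 example:

```python
from itertools import combinations


def hamming_distance(a: str, b: str) -> int:
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))


def minimum_distance(codewords) -> int:
    """Smallest pairwise Hamming distance over all codeword pairs."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))


# The 3-bit repetition code: a single data bit sent three times.
code = ["000", "111"]
d = minimum_distance(code)        # d = 3
correctable = (d - 1) // 2        # can correct 1 error, or...
detectable = d - 1                # ...can detect 2 errors
```

The same code thus serves both roles: a majority-vote decoder corrects any single bit flip, while a decoder that merely rejects non-codewords detects any double flip.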


 * Someone suggested that this article be merged with that for forward error correction. I'm against this merge. FEC is a type of ECC, but it is only one of many. For example, Viterbi coding is another type of ECC, which isn't forward-correcting, and yet no one wants to merge that. The merge would be inappropriate. linas 17:30, 26 May 2006 (UTC)


 * Why isn't Viterbi (convolutional) coding forward-correcting? What definition of FEC do you use?--Ring0 17:41, 26 May 2006 (UTC)


 * Since FEC is one of many kinds of ECC, and it is plenty long enough for an article of its own, let me suggest: Move all the FEC-specific sections ("Error-correcting code", "Deep Space Telecommunications", "Satellite Broadcasting (DVB)", and "Information theory and error detection and correction") from the EDAC article to the FEC article. Leave behind in the EDAC article a "forward error correction" section with a brief summary and a link to the FEC article. --68.0.120.35 04:35, 16 February 2007 (UTC)


 * I support the proposal.--Boson 00:11, 28 October 2006 (UTC)


 * I'm against the merger. Error-correcting codes are much broader than Forward-Error Correcting Codes or Single/Double Error Correcting Codes.  They include codes for correcting over a wide variety of channels, with and without interaction, e.g. FEC. -- Trachten 15:50, 16 May 2007


 * Error-correcting code should be redirected to forward error correction rather than this article. Just a suggestion. Mange01 13:27, 26 June 2007 (UTC)


 * Since no one objected, I have now redirected according to my suggestion above. Mange01 12:40, 17 July 2007 (UTC)

Error correction as a method of dithering
The term "error correction" also refers to a method of dithering, where the difference between desired output and the actual output is stored in an error accumulator and added to the next desired output.

For example, imagine you are controlling an LED with a piece of software. You only have the hardware to turn it on and off, but you want to produce an intensity of 50%.

By rounding the target of 50% down, the first output will have an actual value of off. The difference between the desired output and the actual output is 50%; this is stored in the error accumulator.

At a target of 50%, plus the 50% "error" that was missing from the first output, our next desired output value is 100%. This makes our actual output on, and we're back to an error of 0%.

This process continues, and the overall effect is a light that is flickering on and off so quickly that it is perceived as being at 50% intensity, despite its binary properties.

This same process can be done for any target value.
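The LED walkthrough above can be sketched as a short simulation. This is an illustrative first-order error-diffusion loop (the names are my own); following the text, a desired value of exactly 50% is rounded down, so the sequence starts with off.

```python
def dither(target: float, steps: int) -> list:
    """Error-accumulator dithering of a constant target level onto a
    binary (on = 1.0 / off = 0.0) output, as described above."""
    outputs = []
    error = 0.0  # running difference between desired and actual output
    for _ in range(steps):
        desired = target + error          # carry forward the missing amount
        actual = 1.0 if desired > 0.5 else 0.0  # exact 0.5 rounds down
        error = desired - actual          # store the new shortfall/overshoot
        outputs.append(actual)
    return outputs
```

For a 50% target this alternates off, on, off, on, so the long-run average is exactly 0.5; a 25% target produces one "on" in every four steps, and so on for any target value.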

Sounds like Bresenham's line algorithm.


 * Are you talking about Floyd-Steinberg dithering? --75.37.227.177 04:06, 1 August 2007 (UTC)

SEC-DED
Anyone for doing the merge? -- Taral 17:13, 26 September 2007 (UTC)
 * It should be merged into Hamming code. The section "Hamming codes with additional parity" describes SECDED precisely. 71.41.210.146 19:46, 13 October 2007 (UTC)
 * Done. 71.41.210.146 20:24, 13 October 2007 (UTC)

CRC vs. checksum
Isn't CRC a subset of checksum? If so, just ditch the CRC part.--Adoniscik 02:29, 16 October 2007 (UTC)

no FRC is listed (Functional redundancy checking)
FRC being a capability of early Pentiums, requiring a separate Pentium chip, running in tandem with the primary Pentium, and comparing the outputs for error. —Preceding unsigned comment added by 76.122.167.32 (talk) 20:47, 30 April 2008 (UTC)

Computer ECC memory
Some computer memory is ECC memory: most desktop PCs do not use it, but most servers do. Someone should describe ECC memory. The one question I am not clear about is whether non-ECC memory has parity bits (a memory expert would understand that question). —Preceding unsigned comment added by 76.87.181.194 (talk) 00:53, 13 February 2008 (UTC)


 * Once you find out, update the article so it clarifies this point, OK?
 * Historically, there were a few computers -- such as the Apollo Guidance Computer, the original IBM Personal Computer, etc. -- that had parity bits in their non-ECC memory. They could detect errors (typically giving "parity error"), but they could not correct them -- they were not "error correcting".
 * I've also heard rumors of "fake parity" memory modules that have a "parity generator chip" (not to be confused with the "Serial Presence Detect EEPROM") replacing an actual RAM chip.
 * But all the DRAM modules I've seen in the last decade have all either had no parity at all, or they were labeled as full ECC memory modules. As far as I can tell, both kinds of DRAM modules use exactly the same RAM chips. The no-parity modules typically include 8 or 16 identical RAM chips. The ECC memory modules typically include 9 or 18 identical RAM chips.
 * Does that answer your question?
 * --68.0.124.33 (talk) 06:02, 8 March 2009 (UTC)

more than communications related
This article, especially in the beginning, suggests that EDAC is all about communication protocols. What about disks! Unless I am mistaken most disk reads rely heavily on EDAC to correct "soft" errors. —Preceding unsigned comment added by DGerman (talk • contribs) 21:43, 20 May 2010 (UTC)


 * It doesn't at all. It is talking about communication channels, which is an abstract concept and not restricted to protocols. In disk communication, the disk is the channel, and the disk controller acts as both transmitter and receiver. If you're unaware of this, you should follow communication channel (though I must admit that in its current state that article does not address this). Nageh (talk) 10:48, 21 May 2010 (UTC)

Berger code
This article currently claims that the Berger code is a kind of error correcting code. I see how the Berger code can detect errors, so it definitely deserves a mention in this article. How does the Berger code correct errors? --DavidCary (talk) 01:51, 9 January 2012 (UTC)


 * There is a whole class of unidirectional error-correcting codes. Berger codes are just one instance of these codes, and while they are error-detecting only, such codes are actually a degenerate case/subset of error-correcting codes. I was going to write an article on unidirectional error-correcting codes, turning the red link blue, but somehow never came to do it. Some day, maybe. Nageh (talk) 10:09, 9 January 2012 (UTC)

Loopy Link
If you look at Dynamic random access memory, it points to ECC_memory as the main article, which itself points back to Dynamic random access memory as the main article. =P (This has been crossposted to Talk:Error detection and correction and Talk:Dynamic random access memory) Copysan (talk) 04:05, 2 September 2008 (UTC)


 * Now both this EDAC article and the DRAM article have a brief WP:SUMMARY of ECC memory, both pointing to ECC memory as the "main article" on that topic. --DavidCary (talk) 05:20, 25 April 2012 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 2 external links on Error detection and correction. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20111002152735/http://www.apmcsta.org/File/doc/Conferences/6th%20meeting/Chen%20Zhenyu.doc to http://www.apmcsta.org/File/doc/Conferences/6th%20meeting/Chen%20Zhenyu.doc
 * Added archive https://web.archive.org/web/20090905174616/http://www.kernel.org/doc/Documentation/edac.txt to https://www.kernel.org/doc/Documentation/edac.txt

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 08:04, 23 September 2017 (UTC)

History of Error Detection and Jewish Scribes Real?
The history paragraph appears to be speculative original research. Yes, the scribes were meticulous in copying over texts, and yes, it is known that they had the numerical Masorah from counting the letters. But is there any evidence that the Masorah was used to check whether the text was accurately copied? The Jewish Encyclopedia's discussion of the Numerical Masorah does not mention this use of it at all: http://www.jewishencyclopedia.com/articles/10465-masorah Jb12345678910111213 (talk) 18:15, 8 August 2016 (UTC)

History
The history section says Hamming developed codes in 1947 and they were published by Shannon in 1948. Hamming code says the code was developed in 1950. Are we talking about the same code here? ~Kvng (talk) 14:24, 18 May 2018 (UTC)
 * As I understand it, Hamming started thinking about error correcting codes in 1947, came up with the H(7,4) code in 1948, and published it in 1950. There is a parametric series of Hamming codes, but sometimes the most well-known of them, the H(7,4) code, is thought of as the Hamming code. Unfortunately, that is from personal recollection and a little Googling. I don't have a solid secondary source available that explains this. --Mark viking (talk) 18:12, 18 May 2018 (UTC)
 * So Shannon published a preview of Hamming's work? We should change developed to published in Hamming code. ~Kvng (talk) 13:00, 21 May 2018 (UTC)

Information theory and error detection and correction
The following paragraph is slightly misleading:

"Information theory tells us that whatever be the probability of error in transmission or storage, it is possible to construct error correction codes in which the likelihood of failure is arbitrarily low. It gives a bound on the efficiency that such schemes can achieve."

The problem with this is that in case the errors are "perfectly random" — e.g., for a bit channel, if the error probability per bit is >exactly< 1/2 and bit errors are independent of each other — then there is no code that can preserve >any< information in the channel, i.e., whatever the sender puts into the channel, the receiver will only get perfectly random "noise".

Higher bit error probabilities are fine again, because inverting the arriving signal turns an error probability p > 1/2 into an effective error probability 1-p, which is less than 1/2.
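The observation above is the capacity of the binary symmetric channel: C = 1 - H(p), where H is the binary entropy function. C is exactly zero at p = 1/2 and symmetric about it, which captures both the "no information at p = 1/2" point and the "invert the signal for p > 1/2" point. A minimal sketch (function name is my own):

```python
import math


def bsc_capacity(p: float) -> float:
    """Capacity C = 1 - H(p) of a binary symmetric channel with
    crossover probability p, in bits per channel use."""
    if p in (0.0, 1.0):
        return 1.0  # a deterministic flip loses nothing: just invert
    # Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p)
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h
```

At p = 1/2 the entropy H(p) reaches its maximum of 1 bit, so the capacity vanishes and no code can get any information through, while bsc_capacity(p) and bsc_capacity(1-p) are always equal.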


 * No error correction scheme can work if the output of the channel is independent of its input - all error, no data. As the error rate increases, the amount of additional ECC information necessary increases until we get to the extreme for a totally unreliable channel where all bandwidth must be devoted to ECC and no data gets through. I believe the current version of the text correctly implies this. ~Kvng (talk) 23:47, 16 July 2022 (UTC)