Talk:Shannon–Hartley theorem

Dead external link
The external link called "The relationship between information, bandwidth and noise" is dead; the data may have been moved or even removed. —Preceding unsigned comment added by 134.117.251.2 (talk) 02:40, 25 February 2011 (UTC)

Shannon's theorem?
I don't think this is Shannon's theorem; it is simply his definition of informational entropy (= expected amount of information).

Shannon's theorem is a formula for the maximal rate at which you can send information down a pipe if you know the bandwidth and the signal-to-noise ratio. AxelBoldt


 * Fair enough. I was just replacing garbage, and took planetmath.org's word for it.  One undergraduate course aside, I know no real communication theory -- User:GWO

I would like to see a reference on this page to the means used to obtain the maximum rate. Shannon's reasoning assumed an equivalence between signal space and N-dimensional Euclidean space: the noise power determined the size of the spheres, the number of dimensions was the number of possible signal tuples, and the result followed from packing the maximum number of spheres into that space, a limit which was well known. Bukowski

Digital vs analog bandwidth?
Wikipedia just needs more general information about the relationship between digital and analog bandwidth. What other articles/subjects are related to this? - Omegatron 16:30, Apr 24, 2005 (UTC)

Modem comparison needs peer review
Original: The last paragraph (especially "V.90 claims a rate of 56 kbit/s, apparently in excess of the Shannon capacity") is wrong. The V.90 data rate does not account for compression.

Edit by Mark Rejhon: This modem information has now been clarified by Mark Rejhon's edit about a month ago. This needs some peer review. Also, 56 kbit/s is achieved without compression (or rather, 53 kbit/s, due to FCC regulations on signal level). But this is done without compression! Rather, it is done by avoiding a digital-to-analog conversion step at the telco end. This information is widely available on the Internet, but needs to be confirmed with accurate sources.


 * From my study of the Shannon channel capacity theorem, compression doesn't matter. Information is information, compressed or not. To the best of my knowledge, Shannon made no assumptions about the coding of information. This is his real genius. In general, his analysis is based on "nats", not bits (base e as opposed to base 2). Binary is a convenient coding, so most people talk about bits. As far as I know, telcos typically sample at 8 kHz with 8-bit samples. This means a Nyquist limit of 4 kHz and an ideal quantization S/N of about 48 dB (maybe even 54 dB if you use half of 256). If you lived 10 feet away from your central office and there was no other interference, you might get that kind of S/N. The reality is that you are probably thousands of feet from the CO and there are hundreds of other lines next to yours adding noise. I am told that 30 dB S/N is possible with a bandwidth of 3.5 kHz. This works out to about 35 kb/s. If you assume 48 dB, the channel capacity is 55.8 kb/s. Close to that mystical figure of 56 kb/s, I would think. "Principles of Communication Systems" by Taub and Schilling is a great book. Madhu
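Madhu's figures can be checked directly against the Shannon–Hartley formula. A quick sketch (the bandwidth and S/N values are taken from the comment above; everything else is illustrative):

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + S/N), with S/N given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# 3.5 kHz of bandwidth at 30 dB S/N: roughly 35 kb/s
print(shannon_capacity(3500, 30))

# The same bandwidth at 48 dB S/N: about 55.8 kb/s, close to the
# "mystical" 56 kb/s figure
print(shannon_capacity(3500, 48))
```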


 * If you have, like, encyclopedia.txt, a 100 MB file, and you compress it down to 1 MB, send it in one second, and then decompress it later, you could say that you had sent 100 MB/sec. I think that's all they meant.  They are talking about the actual transmission speed (in this case 1 MB/sec).  Apparently modems can also do on-the-fly compression, which would give effective rates greater than the actual channel capacity for easily compressed data. - Omegatron 16:38, Feb 17, 2005 (UTC)


 * In my understanding, anyway, the theorem only applies to the actual data being sent, regardless of whether it is compressed data, binary, or all zeroes. In other words, compression of the data does not count towards getting closer to the theoretical max. - Omegatron 17:02, Feb 17, 2005 (UTC)


 * I think we're saying the same thing. The important detail is the terminology. Shannon talks about information rate as opposed to bit rate (he doesn't really talk about bits). In your above example, the information rate is 1 MB/sec. If I have a 100 MB file with truly random data that cannot be compressed (try compressing a compressed file sometime ;-), then the effective rate drops to 1 MB/sec instead of 100 MB/sec. Another way to say this is that a highly compressible file contains less information than an incompressible file. Madhu
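A quick way to see that a highly compressible file carries less information: feed a general-purpose compressor redundant versus (pseudo-)random data. A sketch using Python's zlib (the file contents and sizes are arbitrary):

```python
import random
import zlib

random.seed(0)  # deterministic "random" data for reproducibility

# Highly redundant data: little information, compresses very well.
redundant = b"encyclopedia " * 10_000               # 130,000 bytes
# Pseudo-random data: nearly incompressible.
noisy = bytes(random.randrange(256) for _ in range(130_000))

print(len(zlib.compress(redundant)))  # a few hundred bytes
print(len(zlib.compress(noisy)))      # roughly 130,000 bytes (no gain)
```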


 * I agree with Msiddalingaiah; said another way, for a given coding method, data containing less information (making it more redundant) will compress more, requiring less B for a given C. Coding (non-lossy data compression, if you will) effectively increases the S/N. BTW, for the example pulled out of the air by Omegatron, if there were such a 100:1 code, the effective increase in S/N would be huge: from, say, 0 dB at 1 Mb/s to roughly 301 dB at 100 Mb/s! Shannon's channel capacity theorem is agnostic to how the data is coded, and merely places an upper limit on the channel's capacity to carry data, for a given coding method. It does not say that there is a lossless 100:1 code. Somehow I doubt that such a code exists anywhere today, but that's not to say that it might not in the future. The CCT is *not* a proof of existence or non-existence of coding methods. One thing is certain: if the encoding is in any way "tuned" to the data at hand, the decoder must also contain that tuning information, which itself must have been transmitted in the channel beforehand.
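For the record, the S/N needed to carry 100 times the rate in a fixed bandwidth follows directly from the capacity formula. A sketch (the 0 dB starting point is the one assumed above):

```python
import math

# At 0 dB S/N (S/N = 1), C = B * log2(1 + 1) = B, so 1 Mb/s implies B = 1 MHz.
# To fit 100 Mb/s into the same bandwidth, log2(1 + S/N) must reach 100,
# i.e. S/N = 2**100 - 1.
snr_linear = 2**100 - 1
snr_db = 10 * math.log10(snr_linear)
print(snr_db)  # about 301 dB -- enormous, hence no lossless 100:1 code
```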


 * Brief study of Shannon's CCT reveals that as the S/N increases without bound, so does the channel capacity. Also, for 0 dB S/N (i.e. S/N = 1), C = B. Even though Shannon does not discuss binary data, the log2 in the formula suggests that binary representation is very fundamental, and it relates C to binary data word length (we need twice the word length to double C, which is the reality seen by anyone who has messed around with digital filter algorithms and numerical methods). C diverges if the noise is exactly zero, but that is not a problem in the real world. These behaviors seem reasonable. I hope these few points may help others to better understand this topic. —Preceding unsigned comment added by 99.40.33.249 (talk) 00:11, 16 February 2011 (UTC)
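The limiting behaviours mentioned above are easy to verify numerically. A small sketch (the bandwidth value is arbitrary):

```python
import math

def capacity(b_hz, snr):
    # Shannon-Hartley: C = B * log2(1 + S/N), with S/N as a linear power ratio
    return b_hz * math.log2(1 + snr)

B = 1000.0  # Hz, arbitrary

# At 0 dB S/N (S/N = 1), C equals B exactly.
print(capacity(B, 1.0))  # 1000.0

# Doubling C requires squaring (1 + S/N), i.e. doubling the word length:
snr = 255.0  # 8 "bits worth" of S/N, since log2(256) = 8
print(capacity(B, (1 + snr) ** 2 - 1) / capacity(B, snr))  # 2.0
```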

Several versions
There used to be five different versions of this article. Now there are three. Over the last while, several people have merged these articles (myself included, for one of these merges, but it appears I am not the only one). However, there are still two more articles that need to be merged into this one.

These are now the same article:

 * Shannon-Hartley theorem (main article)
 * Shannon limit (now merged into above)
 * Shannon's theorem (now merged into above)
 * Shannon-Hartley law (now merged; seems to be a different but closely related concept)
 * Shannon's law (now a disambig page)

We need to clarify the differences or merge the similarities between these articles. From what I understand, they are all the same thing, and should be merged and redirected to Shannon-Hartley theorem. Also clarify the relationship to Nyquist-Shannon sampling theorem. (One deals with digital data and one deals with sampled analog data, correct?). Then clarify in bandwidth the relationship between regular analog bandwidth, sampled data bandwidth, maximum digital bandwidth, etc. - Omegatron 20:21, Aug 9, 2004 (UTC)


 * There are two different equations, and two different concepts, but I feel they are related closely enough and have similar enough names that they should all be combined into one article, and the differences and similarities explained within. This way people searching for one will not get confused, not realizing that they are looking at the wrong article. - Omegatron 20:02, Sep 18, 2004 (UTC)

Difference between laws
According to this site http://www.cs.nmsu.edu/~jcook/Classes/DE-CS484/Physical-1.html the "law" section is actually part of the Nyquist theorem.

According to this site http://www.fas.org/man/dod-101/navy/docs/es310/DigiComs/digicoms.htm the laws are:

R = W Log2 M

and

C = R Log (1 + S/N)

where:

R = the rate at which data can be transferred, given in bits per second (also known as the baud rate)

W = the minimum bandwidth required to create this pulse

C = capacity in bits per second (bps)

S/N = signal-to-noise ratio (depends of modulation type and noise)

I'm confused. - Omegatron
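Putting the two limits in their usual textbook form may help untangle this (my rendering, not necessarily what either site wrote): Nyquist's noiseless limit is R = 2B log2(M), and the Shannon–Hartley capacity is C = B log2(1 + S/N). A numeric sketch with illustrative voice-band values:

```python
import math

def nyquist_rate(b_hz, levels):
    # Noiseless limit (Nyquist/Hartley): R = 2B * log2(M) for M signal levels
    return 2 * b_hz * math.log2(levels)

def shannon_capacity(b_hz, snr):
    # Noisy limit (Shannon-Hartley): C = B * log2(1 + S/N), linear S/N
    return b_hz * math.log2(1 + snr)

B = 3100.0  # Hz, a typical voice-band bandwidth

print(nyquist_rate(B, 4))           # 12400.0 bit/s with 4 levels, no noise
print(shannon_capacity(B, 1000.0))  # ~30900 bit/s at 30 dB S/N
```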


 * This "cheat sheet" describes it better:


 * http://tcode.auckland.ac.nz/314.5.pdf


 * The capacity related to signal levels is only for noiseless signals. The other formula is a better one and takes noise into account.  It also says that the noiseless one was derived by nyquist, so i think the one labeled "shannon-hartley law" is actually the shannon-nyquist formula. - Omegatron

I think you've combined the topics correctly, except there isn't much connection (at least in my mind) between Shannon's Law and the Nyquist sampling rule. On an unrelated note, I found the D = 2B log2M noiseless formula to be confusing, as it's not part of the theorem, just part of the thinking leading up to the theorem. So I've replaced it with a written version of the same concept which uses a "thought experiment" model instead of a formula to describe how to transmit infinite information over a fixed-bandwidth link (and how noise makes that impossible in practice). technopilgrim 20:36, 19 Sep 2004 (UTC)


 * Alright. the first equation was labelled as shannon's law in the article of the same name.  i was just merging.  they seem to be seen together a lot in articles online though...  i'm skeptical that it should be removed.  maybe added to the nyquist-shannon article instead? - Omegatron

My problem with R = 2*B*log_2(M) is it seems to be neither fish nor fowl. If we were giving the full derivation of the S-H theorem, this is an important and non-trivial milestone on the way to the full proof and we would definitely want to include it (as professors do when giving extended talks on the topic). But we are trying to write a concise encyclopedia article and we can't assume we are addressing an audience of engineering students. Where does the 2*B factor come from? Shouldn't it be simply B? These are advanced questions in information theory (which is why radio communication was decades old before anyone understood this). Not to say that a formula can't be a great way of showing things in some situations, but here it is perplexing (part of the problem is that the explanation accompanying the 2*B*log(M) formula is not quite correct). technopilgrim 19:27, 20 Sep 2004 (UTC)
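On the question of where the 2B comes from: it is Nyquist's signalling-rate result, not information theory proper. A sketch of the standard reasoning (my summary, not a quote from any of the sources above):

```latex
% A signal strictly band-limited to B Hz is determined by 2B samples per
% second, so at most f_p = 2B independent pulses per second can be resolved:
f_p \le 2B .
% If each pulse carries one of M distinguishable levels, i.e. \log_2 M bits,
% the noiseless line rate is bounded by
R = f_p \log_2 M \le 2B \log_2 M .
```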


 * I like your description. It is clearer than the formula.  Thanks. - Omegatron 18:22, Sep 21, 2004 (UTC)

"on each cycle" should be changed to each clock or each transmission or each pulse or something - Omegatron 18:30, Sep 21, 2004 (UTC)

Steganography
Example two concludes: "This shows that it is possible to transmit using signals which are actually much weaker than the background noise level, as in spread-spectrum communications."

Would it be fair to say that that is the principle behind steganography? --Elijah 23:48, 2005 Jan 3 (UTC)


 * No, at least, not for cryptographic steganography. I should say first that steganography is a much slipperier subject than cryptography - it's harder to specify and analyze.  But a spread-spectrum signal need not be any harder to detect than a narrow-spectrum.  A signal that is transmitted at a very low bitrate may be difficult to detect, but even this is not necessarily true.  In steganography, the message is normally hidden inside another message.  The study of analog, technical ways to transmit signals secretly is not really within the purview of steganography - it includes tricks like line-of-sight infrared lasers, transmissions as short, high data rate bursts, and other esoteric tricks.  --Andrew 05:04, Jan 4, 2005 (UTC)

Comparison
In the "R < C" section, is C (the capacity) of the unit "bits per second", as in "my modem has 56 kbit/s"? --Abdull 06:51, 16 October 2005 (UTC)


 * In strict terminology, C should be measured in "bits per channel use", which is the ratio between "bits per second" and the Hertzian bandwidth available for transmission. In this case, C is not normalized and is in bit/s, but it does not refer to the modem speed (that is rather R): think of it as the capacity, in liters/s, of the sink's drain pipe (the channel) which empties the pasta pot's boiling water (your modem). Cantalamessa 12:30, 8 December 2005 (UTC)
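The two units can be related in a couple of lines. A sketch (the numbers are arbitrary):

```python
import math

B = 4000.0    # Hz of bandwidth available for transmission
snr = 1000.0  # linear S/N, i.e. 30 dB

# Normalized capacity: bits per channel use (equivalently, bit/s per Hz)
c_per_use = math.log2(1 + snr)  # just under 10 bits per channel use

# Un-normalized capacity: the bit/s figure comparable to a modem's R
c_bps = B * c_per_use
print(c_per_use, c_bps)
```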

Error in reference list?
Isn't there an error in the reference list? Shannon's seminal book from the 40s is "The Mathematical Theory of Communication", whereas "The Mathematical Theory of Information" (which may prove to be seminal as well) was written by Jan Kåhre in 2002.

timonju


 * Fixed. Well spotted! -- Jheald 11:42, 12 February 2006 (UTC).

Error in noisy-channel coding theorem
Before the change I just made, the statement of the theorem implied that when rate equaled capacity, error could never be made arbitrarily small. This is clearly wrong; a lossless channel can achieve its capacity rate and, in spite of its being somewhat degenerate, does fall within this framework. The 1 December 2005 "fix" was wrong (though what it attempted to correct was also wrong). I've fixed this so that it's clear that the R=C case is not addressed in the noisy-channel coding theorem, but someone might want to double-check my wording on this (which is also in Noisy channel coding theorem). Calbaer 23:24, 25 April 2006 (UTC)


 * There's no such thing as a "lossless" additive gaussian noise channel, so whatever you did is suspect. Dicklyon 06:00, 1 September 2006 (UTC)

New edits
Since the article is named after Hartley and had almost nothing about what he showed or how it related to Shannon's capacity, I did some major edits. I took out the irrelevant thing about the data modem, and edited the other bits. And I tossed the alternate forms where instead of 1+S/N it used (S+N)/N, as just too obvious and redundant. I kept the one that commonly appears in books.

If any of this is not cool, please say so or fix it back, rather than reverting the whole lot. Dicklyon 06:00, 1 September 2006 (UTC)

What is Hartley's law?
User:First Harmonic has rewritten the Hartley's law section in terms of general pulse rates and channel capacity. The way I read the literature, it was better before. Hartley knew nothing of channel capacity, and his law was based on bandwidth. At least, that's what I find in the refs, in combination with John Pierce's books that I have.

First Harmonic, do you have any references for your changes? Dicklyon 04:04, 4 September 2006 (UTC)

Here's the Pierce ref:


 * I think it is somewhat a matter of semantics. Hartley's Law as it was described here prior to my recent edits consisted of an equation relating what was called "rate" to bandwidth B and the number of distinguishable levels M.  I do not know whether Hartley called this quantity the rate or the capacity -- I will take your word that he used the term rate.  First Harmonic 10:44, 4 September 2006 (UTC)


 * Nevertheless, it is clear that the quantity described is the channel capacity, and not the actual transmission rate. The bandwidth and the number of distinguishable levels set upper limits on the transmission rate, but clearly it is possible to transmit data at a pulse rate lower than the Nyquist rate 2B, and to use fewer levels than the number that are clearly distinguishable.  That suggests that Hartley's Law provides the upper limit on the transmission rate, which from a purely semantic point of view should be called the channel capacity as opposed to the actual transmission rate.  I think it is important to make this distinction clear to the reader, even if we decide not to call it "Hartley's Law".  First Harmonic 10:44, 4 September 2006 (UTC)


 * I think there is a way that we can reach a compromise that would satisfy both your concerns about the historical accuracy and my issue of semantics and clarity. Perhaps we need to make it clear that although Hartley did not see things in these terms, that the current understanding in light of Shannon's work suggests that although Hartley called it the rate, he was actually talking about what we now would call capacity. First Harmonic 10:44, 4 September 2006 (UTC)


 * I think the connection between Hartley's information rate and Shannon's capacity is more subtle than you're making it out to be, and I already tried to describe that connection after introducing the capacity formula. The concept of capacity as defined by Shannon is a wonderful thing, which Hartley was completely unaware of.  The rates achievable at a reasonable error rate from Hartley's law will generally be much less than Capacity, because M has to be limited to keep the error rate down.  To get close to Shannon capacity, you need a much larger M and an error correcting code.


 * I'll make a cut at a revision and we'll see where we stand. Dicklyon 16:54, 4 September 2006 (UTC)


 * One more thing: data transfer rate is a practical thing with tenuous relationship to Hartley's more theoretical concept of information rate. Dicklyon 16:55, 4 September 2006 (UTC)


 * OK, I redid a bunch of stuff, trying to make the Hartley and Shannon contributions and relationships very clear. Let me know if you object to any of that, or if you have sources that contradict it. Dicklyon 17:53, 4 September 2006 (UTC)


 * ps. Let me re-emphasize that Hartley's rate is NOT what we understand as a capacity except in the case that the channel is an errorless M-ary channel of 2B symbols per second. This is NOT the channel we want to analyze, which is the additive white Gaussian noise channel of bandwidth B.  Now that I think of it that way, I guess I should add that to say why Hartley's rate is sometimes a capacity.


 * Done. And fixed up a bit what it says about capacity, and linked it, etc. Dicklyon 18:18, 4 September 2006 (UTC)


 * I don't agree with the latest changes that you (Dicklyon) have made. Even if you are correct from a historical perspective, I don't think it is correct from a mathematical or engineering perspective.  I concede that it is important to provide the historical context for how information theory developed throughout the early 20th century, but not at the expense of clearly explaining the latest and most up-to-date understanding of the theory and practice.  The historical development might deserve its own section to give readers a sense of how these ideas arose and who contributed the key breakthroughs, but that should come after a clear statement of what the theory says, what it means, and how it applies to the real world.  As I said in my comments above, I think it is possible to find a middle ground that will satisfy both of us.  Unfortunately, you reverted just about everything that I was trying to accomplish with my edits of last night.  As I also said above, I don't think you and I really disagree about anything substantive, but only about semantics and emphasis.  First Harmonic 20:34, 4 September 2006 (UTC)


 * On further review, I think that most of the changes you (Dicklyon) made today are really quite good and helpful. In particular, the paragraph that you added discussing how some authors call the Hartley rate a "capacity" for an idealized M-ary channel captures a lot of what I was trying to accomplish.  You also cleaned up the wording quite nicely.  I do, however, think that it needs a bit more explanation of the difference between an actual bit transmission rate versus a channel capacity.  I will make an attempt to add some stuff, and then you can take a look and tell me what you think.  First Harmonic 21:01, 4 September 2006 (UTC)


 * Thanks for your reconsideration. I look forward to your next edits. Dicklyon 22:07, 4 September 2006 (UTC)


 * Hi Dicklyon: In the sub-section entitled "Hartley's law," you use the term "achievable information rate" in the second and third paragraphs, and you represent this concept with the symbol R in the statement of Hartley's law.  I am confused by a couple of things:  (1) From a purely semantic point of view, what is the difference between "achievable information rate" and "channel capacity"?  Is it not true that an "achievable rate" is the same thing as a "capacity"?  (2)  Later in the article, when you begin discussing the Shannon-Hartley theorem, you again use the symbol R, but now it represents the actual information rate, rather than the achievable information rate.  To me, this use of a single symbol to represent two different concepts is ambiguous and confusing. I think it would be far less confusing to use the symbol C to represent achievable rate (or capacity) in both cases.  First Harmonic 01:13, 5 September 2006 (UTC)


 * I'm not sure what you mean by semantic, but channel capacity is a statistical concept that Shannon came up with, which is not usually achievable as an information rate (except for error-free discrete channels, which are non-statistical), and achievable rate is a concept that Hartley had, based on a bandwidth and what M could be achieved without error. It's a subtle but important difference, since attributing a capacity law to Hartley would be anachronistic, and since the rate equation uses an M that is not really a property of the channel.


 * Perhaps a subscript on Hartley's achievable R would be good to distinguish it from the more general R in the capacity discussion. Quite often such distinctions are not made, and the symbol R is used for rate in more than one context. To me,  however, the bigger confusion is to put a C in Hartley's law and represent it as a capacity. But, some authors do so. I think a subscript M to indicate the dependence on M, rather than just on the channel, would be appropriate. Dicklyon 05:21, 5 September 2006 (UTC)

Test calculation for noisy transmission in Hartley's model: In Hartley's transmission model, if one assumes M voltage levels with an equidistant distribution from -V to V that are uniformly randomly used, one gets an average signal power (here power is the square of voltage; the usual power differs by a constant factor) of $$\textstyle S=\frac13V^2\cdot\left(1+\frac2{M-1}\right)$$. Now, Gaussian noise of average power N perturbs the transmission with a standard deviation of $$\textstyle \sqrt{N}$$. Demanding an error rate of less than 1% requires that the half-distance V/(M-1) between the voltage levels be greater than three standard deviations (the actual error rate is then 0.3%), or $$\textstyle N\le \frac19 \frac{V^2}{(M-1)^2}$$. A channel with those properties has a capacity greater than $$\textstyle C=B\cdot\log_2(1+S/N)=B\cdot\log_2\left(1+3(M-1)^2\left(1+\frac2{M-1}\right)\right) \approx 2B\cdot\log_2(\sqrt3\,M)$$. For n standard deviations with error rate $$\textstyle \mathrm{erfc}(\frac n{\sqrt2})$$ (see normal distribution), the factor in the last equation is $$\textstyle \frac{n}{\sqrt3}$$. This factor is (for large M) the ratio between the number of "virtual" voltage levels in Shannon's noisy transmission theory and the number of actual voltage levels in Hartley's transmission model.--LutzL 09:47, 5 September 2006 (UTC)
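The closed-form average power and the capacity approximation above can be checked numerically. A sketch that enumerates the M levels directly (symbols and normalization as in the comment; the value of M is arbitrary):

```python
import math

def avg_power(M, V=1.0):
    # Mean square of M equidistant, uniformly used levels from -V to V
    levels = [-V + 2 * V * k / (M - 1) for k in range(M)]
    return sum(x * x for x in levels) / M

M, V = 16, 1.0

# Closed form from the comment: S = (1/3) V^2 (1 + 2/(M-1))
S = (V ** 2 / 3) * (1 + 2 / (M - 1))
print(abs(avg_power(M, V) - S) < 1e-12)  # True: the formula matches

# Noise allowed by 3-sigma spacing: N = V^2 / (9 (M-1)^2)
N = V ** 2 / (9 * (M - 1) ** 2)

# Exact capacity per unit bandwidth vs. the large-M approximation
exact = math.log2(1 + S / N)              # C/B
approx = 2 * math.log2(math.sqrt(3) * M)  # 2 log2(sqrt(3) M)
print(exact, approx)  # agree to within a few hundredths of a bit
```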

Shannon's noisy transmission theory: Shannon uses a sphere-packing argument in K-dimensional space to derive the Shannon capacity as an upper limit for the data rate. This argument simply divides the volume of a ball of radius $$\textstyle \sqrt{S+N}$$ by the volume of a ball of radius $$\textstyle \sqrt{N}$$ to get a bound for the number of distinguishable codes of length K. This is to be compared with the number $$\textstyle M^K$$ of different codes using independent symbols with M voltage levels. Hence there are effectively $$\textstyle \sqrt{1+\frac{S}{N}}$$ "virtual" voltage levels per symbol. The case K=1 corresponds to Hartley's model.

To get a lower bound, Shannon computes in dimension K the estimate $$\textstyle p\le \sqrt{\frac{N}{S+N}}^K$$ for the probability p of finding a random point on the sphere of radius $$\textstyle \sqrt{S}$$ inside the ball of radius $$\textstyle \sqrt{N}$$ centered on a fixed point of radius $$\textstyle \sqrt{S+N}$$. The probability that all but one of C random points of radius $$\textstyle \sqrt{S}$$ lie outside the small sphere is $$\textstyle (1-p)^{C-1}\ge 1-(C-1)p \ge 1-Cp$$. If the C code points are chosen randomly, then the average error probability of finding more than one codeword close to the received message is $$\textstyle e=1-(1-p)^{C-1}\le Cp$$. The average number of bits per symbol therefore has the estimate $$\textstyle \frac{\log_2 C}{K}\ge\frac{\log_2 e-\log_2 p}{K}=\frac12\log_2\left(1+\frac SN\right)-\frac{|\log_2e|}{K}$$. Using this estimate on the previous example results in an average increase of $$0.79-8.4/K$$ bits/symbol using Shannon's transmission method. This is only positive for K>10 and yields a gain of one bit every two symbols for K>28.

Shannon's argument uses several times the effect of high dimensions on the statistics of the radius of a multidimensional normal distribution (see chi-square distribution). Thus it is only valid for rather large values of K. --LutzL 13:54, 5 September 2006 (UTC)
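The numbers quoted in the two comments above (0.79 bits/symbol, 8.4, K>10, K>28) can be reproduced in a few lines. A sketch (0.3% is the 3-sigma per-symbol error rate assumed earlier):

```python
import math

p_err = 0.003                    # per-symbol error rate at 3-sigma spacing
gain = math.log2(math.sqrt(3))   # extra bits/symbol from the sqrt(3) factor
penalty = abs(math.log2(p_err))  # numerator of the |log2 e| / K penalty term

print(round(gain, 2), round(penalty, 1))  # 0.79 8.4

# Net gain per symbol for block length K: gain - penalty / K
net = lambda K: gain - penalty / K
print(min(K for K in range(2, 200) if net(K) > 0))     # 11: positive for K > 10
print(min(K for K in range(2, 200) if net(K) >= 0.5))  # 29: half a bit for K > 28
```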


 * I think the reason some authors use the word "capacity" instead of "achievable rate" is because in common, everyday usage, the two terms mean more or less the same thing. I am not saying that the quantity expressed in Hartley's law is equivalent to the channel capacity as formulated by Shannon, and obviously Hartley could have no way of knowing what Shannon would do 20 or 30 years down the road.  What I am suggesting is that Hartley's law is more of a precursor to Shannon's work than the article suggests, because Hartley's quantity is in fact a capacity, not Shannon's channel capacity, but Hartley's notion of capacity.  The two are different because Hartley's is based, as Dicklyon pointed out, on assuming an idealized M-ary pulse-rate channel rather than an AWGN channel.  But what I am arguing, and some authors agree, is that Hartley's law provides an expression for a channel capacity, and not a data transmission rate.  First Harmonic 12:01, 5 September 2006 (UTC)


 * Furthermore, later on in the Wiki article, the authors have set Hartley's expression equal to Shannon's channel capacity to find a relationship between M and the S/N ratio. If Hartley's is a rate, and Shannon's is a capacity, then it wouldn't make a lot of sense to set the two equal to each other.  First Harmonic 12:04, 5 September 2006 (UTC)


 * Finally, suppose I have a channel of bandwidth B with a signal-to-noise ratio that can support up to M distinguishable voltage levels. Further, suppose that I choose to use a coding scheme where the actual pulse rate f is less than the Nyquist rate 2B.  Likewise, suppose the actual number of levels L is also less than the number of distinguishable levels M.  Then I would argue that the information transmission rate, the actual rate R, is given by
 * $$R = f \log_2(L) \, $$
 * and not by
 * $$R = 2B \log_2(M) \, $$


 * And so in fact, since the first expression is smaller than the second expression, and the second expression really represents an upper limit on the data transmission rate, I should be able to transmit information reliably over this channel at my chosen transmission rate. And again, based on common everyday usage, that suggests the first expression is the actual transmission rate, while the second expression is the capacity, or upper limit on the transmission rate.  First Harmonic 12:11, 5 September 2006 (UTC)
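The distinction argued for above can be made concrete with numbers. A sketch (the bandwidth, level counts, and pulse rate are illustrative):

```python
import math

B = 3000.0  # Hz of channel bandwidth
M = 16      # distinguishable levels the channel could support
f = 4000.0  # chosen pulse rate, below the Nyquist rate 2B = 6000
L = 4       # chosen number of levels, below M

actual_rate = f * math.log2(L)      # what this coding scheme transmits
upper_limit = 2 * B * math.log2(M)  # the Hartley-style upper limit

print(actual_rate)  # 8000.0 bit/s
print(upper_limit)  # 24000.0 bit/s
print(actual_rate < upper_limit)  # True: the chosen rate fits under the limit
```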


 * One other thing: I am not saying that Hartley came up with the "right" answer for channel capacity, or that Hartley's capacity is equivalent to Shannon's capacity.  All I am saying is that Hartley came up with an expression for capacity, not the expression for capacity, and that although his expression is not absolutely correct, he was certainly on the right track.  First Harmonic 12:26, 5 September 2006 (UTC)


 * What you don't get is that Hartley's law is simply a direct conversion of voltage levels into bits per symbol. It doesn't say anything about noise or errors. Those require a separate analysis, which suggests that for very low error rates one can gain some bits per symbol by using a higher number of voltage levels and forming code blocks of several symbols. The error rate per symbol will increase, but with lowest-distance matching in a randomly selected codebook the overall error rate will be equal to or lower than the one assumed in Hartley's transmission method. The underlying transmission method does not matter up to this point. Only if one wants to give a rate of bits per time does one have to take the nature of the transmission channel into account.--LutzL 13:54, 5 September 2006 (UTC)


 * Reading the article again in the light of these arguments I find them well represented in the present version.--LutzL 14:04, 5 September 2006 (UTC)


 * Hartley was certainly on the right track. But there was a long stagnant period and huge conceptual leap needed before Shannon got to the concept of capacity. Dicklyon 15:49, 5 September 2006 (UTC)


 * To DickLyon and LutzL: I respectfully request that you please re-read the arguments that I made earlier today (above), and respond to each of the points individually.  Essentially you have said that you disagree with me, and the reason you offer for disagreeing is actually an echo of some of the very arguments that I have been making.  You also have not addressed one of the key arguments that I have made, which is related to transmission rates below the Nyquist rate with fewer quantization levels than the channel can distinguish.  As I have stated (more than once), I believe that it is possible to reach a compromise that will satisfy both of us.  It is starting to appear, however, that you do not share my optimism.  Are you willing to meet me partway, or are you simply planning to dig in?  First Harmonic 22:02, 5 September 2006 (UTC)


 * The following is a direct quote from the first paragraph of Hartley's paper entitled "Transmission of Information":
 * "What I hope to accomplish (...) is to set up a quantitative measure whereby the capacities of various systems to transmit information may be compared."
 * I find it interesting that Hartley himself used the term "capacities...to transmit information" and not the term "achievable rate to transmit information." First Harmonic 22:55, 5 September 2006 (UTC)


 * I agree. Very Interesting.  I still think it would be unwise to confuse his "capacity to transmit information" with Shannon's more formalized concept of "channel capacity." Dicklyon 23:09, 5 September 2006 (UTC)


 * Then we are at an impasse. In my opinion, as it stands now, I believe that the article is factually incorrect.  I have made more than one fairly good argument (though not ironclad) to support my opinion.  You have not done much to refute anything that I have argued, and your current position is only that what I want to do is "unwise," which is a fairly subjective argument.  I have made numerous attempts to offer you an opportunity to reach a compromise that would make us both happy, and you have not responded even once that you are willing to consider anything that I have to say or to back off even slightly from your position.  I have even gone to the original source (Hartley's 1928 paper) and found that Hartley himself proclaimed that the very purpose of his paper was to establish the capacities of various transmission systems, in direct contradiction to your claim that Hartley had no notions related to capacity.  And yet, it appears that you are still unwilling to consider anything other than what you already know to be true.  I really am confused.  First Harmonic 00:24, 6 September 2006 (UTC)


 * I don't think it's quite at that point yet. Let's talk.  If you want to put the word capacity in there for Hartley (which I did already), just be careful not to confuse it with Shannon's meaning.  And as to the multi-level below-the-Nyquist rate information rate, that's a possibly interesting bit to add, but is not Hartley's law.  See if you can work it in as background.  Please remind me if there are other points you see a response to, as it takes forever to try to scan the discussion and see what you think I dropped.  And I can't speak for LutzL. Dicklyon 01:17, 6 September 2006 (UTC)


 * Thank you for having an open mind and giving me a fair chance. I need to think about how best to approach it.  I will probably take some time before I can present a straw-man for consideration.  Keep an eye on this page for a proposal.  Thanks again.  First Harmonic 03:39, 6 September 2006 (UTC)


 * Okay, I made a bunch of changes to the article last night and this morning. There is still more that I want to do, but it's a start.  Take a look, see what you think.  If you don't like something, go ahead and change it or improve it.  Thanks.  First Harmonic 13:23, 6 September 2006 (UTC)


 * I'll look more carefully later, but so far I like what you've done. Thanks. Dicklyon 19:40, 6 September 2006 (UTC)


 * I think that Hartley's "capacity to transmit" is meant as a quality, the same as "ability to transmit". To compare different systems with respect to that quality one needs to put them on a scale, that is, assign a quantitative measure to that quality. It turned out that Hartley's proposal was right on track, even if it mixes a theoretical with a practical quantity.


 * Indeed, and in Hartley's approach the scale of bits per second was parameterized by B and M. In Shannon's it was parameterized by B and S/N.  These cannot really be put into alignment other than by comparing what M corresponds to what S/N when the bits-per-second numbers are equated, even though their meanings are not quite the same.  Dicklyon 16:04, 7 September 2006 (UTC)


 * Transmitting via a real world system at the Nyquist rate 2B while at the same time ensuring the frequency bound B will lead to noise from ISI (inter-symbol interference). Since this is a systematic disturbance it may be reduced by a digital post-filter adapted or calibrated to the channel characteristics. IMO, one would then, before changing the physical setup of the system, go for error correcting codes that reduce the transmission rate as well. But then I don't know more of Hartley's paper than cited here.--LutzL 06:55, 7 September 2006 (UTC)


 * Actually, Nyquist showed that 2B is the highest pulse rate such that zero ISI can be ensured, so ISI is removed from the issue in these cases. The picture can always be made more complicated by invoking phrases like "real world", but within the mathematical model that their proofs apply to, this is the truth.  And of course you are right that if there is any Gaussian noise, you do need to go to error correcting codes to get arbitrarily close to zero error rate; but Hartley didn't know that yet (or did, but didn't have a mathematical model for it). Dicklyon 16:04, 7 September 2006 (UTC)
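The zero-ISI property mentioned here can be checked numerically: an ideal band-limited pulse p(t) = sinc(2Bt), sent every T = 1/(2B) seconds, is zero at every other pulse's sampling instant. A small sketch (the value of B is arbitrary and illustrative, not from the discussion):

```python
import numpy as np

B = 4000.0          # channel bandwidth in Hz (illustrative value)
T = 1.0 / (2 * B)   # Nyquist signaling interval: 2B pulses per second

# Ideal band-limited pulse p(t) = sinc(2*B*t): it equals 1 at t = 0 and
# crosses zero at every nonzero multiple of T, so pulses transmitted
# every T seconds do not interfere at the sampling instants.
k = np.arange(-5, 6)              # neighboring symbol indices
samples = np.sinc(2 * B * k * T)  # pulse evaluated at the neighbors' instants

print(np.round(samples, 12))      # 1 at k = 0, 0 everywhere else: zero ISI
```

Any real filter only approximates the ideal sinc, which is where the "real world" ISI in the comment above comes from.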

Hartley's law again
First Harmonic, your edits look good. I'm assuming that since you added the Wozencraft and Jacobs citation, the equation you used is probably from there. I'll check mine tomorrow at work to verify. But if so, then a reference link from that sentence to the book would be in order, yes? Dicklyon 01:58, 13 October 2006 (UTC)


 * Yes it is in Wozencraft & Jacobs, as I indicated in the Edit Summary (see the History page).  First Harmonic 02:55, 13 October 2006 (UTC)


 * Thanks. I should have checked your diffs and history. Dicklyon 03:30, 13 October 2006 (UTC)


 * Yes, a reference link would be fine, although I don't think it is necessary since I added W&J to the list of references at the end of the article. Also, I have no idea how to create such a link, nor is it clear that WP has a consistent or well-established policy on how to deal with footnotes and references.  There are several different approaches in many different articles, and I am not really interested in sorting it out.  But feel free to take a stab.  First Harmonic 02:55, 13 October 2006 (UTC)


 * I know what you mean. I do feel free, but I may not take it on either. Dicklyon 03:30, 13 October 2006 (UTC)


 * Personally, I like the way that the IEEE does references in its articles, where the references are listed at the end of the article either in alphabetical order by author or in the order in which the article cites the references. Each reference is numbered from 1 to N, and the citations within the text simply mention the number of the reference, usually enclosed within square brackets, as in [3].  But that's just me.  First Harmonic 13:57, 13 October 2006 (UTC)


 * BTW, the official WP policy is here if anyone is interested. First Harmonic 14:00, 13 October 2006 (UTC)

Shannon Limit
This page redirects from "Shannon Limit" but never once uses the phrase "Shannon Limit". That makes it not very useful for someone trying to find what "Shannon Limit" means.

Google says "Shannon Limit" appears 115,000 times but "Hartley theorem" only 534 times, including many of those being reprints of Wikipedia. (For Google Scholar -- i.e. recent real research -- it's 5700 to 70.) The IEEE journals list 15 articles entitled "Shannon limit," one entitled "Shannon-Hartley," and none entitled "Hartley theorem."

Absent evidence to the contrary, the standard terminology appears overwhelmingly to be "Shannon limit". (Perhaps because whatever Hartley did, it was Shannon's 1948 papers that launched the research that has brought communications close to that limit).

So we need a separate article about Shannon Limit -- that's what people will be looking up and wanting to understand. If someone wants to say "Also called the Shannon-Hartley Theorem," or "builds on the work of Ralph Hartley," that's fine -- but the field calls it the Shannon Limit.


 * The term "Shannon limit" appears so frequently because it means so many different things. In general, if someone says their system performs "close to the Shannon limit", they mean close to the bounds imposed by Shannon's information theory.  It is used for the capacity of a bandlimited gaussian noise channel, like the Shannon–Hartley theorem, but also for binary symmetric channels and other channels, and for source coding (lossless data compression).  One very common use is for the required energy per bit or SNR to achieve reliable transmission in the presence of noise; sort of the dual problem of finding a rate limit given an SNR.  So, I don't see how you can have an article on the "Shannon limit" since it's not a defined concept.  You have articles on information theory and this one a particular theorem of information theory.  If there's some other article that appears to be missing, by all means start it. Dicklyon 04:43, 29 October 2006 (UTC)
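The "required energy per bit" sense of the Shannon limit mentioned above can be made concrete. From C = B log2(1 + S/N) with S = Eb·C and N = N0·B, the spectral efficiency η = C/B satisfies η = log2(1 + η·Eb/N0), so the minimum Eb/N0 is (2^η − 1)/η, which tends to ln 2 ≈ −1.59 dB as η → 0. A sketch (the sample η values are arbitrary):

```python
import math

# Minimum Eb/N0 for reliable transmission at spectral efficiency
# eta = C/B (bits/s/Hz), from eta = log2(1 + eta * Eb/N0):
# Eb/N0 >= (2**eta - 1) / eta
def ebn0_min_db(eta):
    return 10 * math.log10((2**eta - 1) / eta)

for eta in (2.0, 1.0, 0.1, 0.001):
    print(f"eta = {eta:>6}:  Eb/N0 >= {ebn0_min_db(eta):6.2f} dB")

# As eta -> 0 the bound approaches ln(2), about -1.59 dB: the most
# common single number quoted as "the Shannon limit" on energy per bit
print(round(10 * math.log10(math.log(2)), 2))
```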

Merge with "Noisy channel coding theorem" article?
Speaking of the Shannon limit, the Noisy channel coding theorem article also claims to be the Shannon limit article. A sentence in the Shannon-Hartley theorem article says the Shannon-Hartley theorem is an "application of the Noisy channel coding theorem". When I read the noisy channel theorem page, however, I find it is essentially the Shannon-Hartley theorem, replete with many of the same formulas. Shouldn't these two articles be merged? Or are there really two concepts here and we should keep them distinct? What do other folks think about a merge? -- technopilgrim 20:23, 30 October 2006 (UTC)


 * That's definitely a better place for shannon limit to redirect to; so I changed it. The coding theorem is much more general.  The Shannon–Hartley theorem is the application of it to the case of a bandlimited continuous-time channel with additive white Gaussian noise, which is quite specific. I oppose a merge. Dicklyon 21:20, 30 October 2006 (UTC)


 * The articles make quite clear the relationship between the two. The Noisy channel coding theorem establishes that, in principle, data can be sent without error at a rate up to the Shannon channel capacity C.   This is a fundamental result about the meaning of channel capacity in information theory, regardless of the channel or the noise process.  On the other hand, the Shannon-Hartley theorem is about the calculation of C specifically for the case of a continuous channel with a Gaussian noise process, and then specifically applying the result of the noisy channel coding theorem.


 * The two are distinct, and well factored into their two separate articles. Merger would be distinctly unhelpful. Jheald 21:23, 30 October 2006 (UTC)


 * I support a merge. Shannon–Hartley is a specific application of the Noisy channel coding theorem and could be handled nicely as a subsection in that article. The fact that the two articles are well differentiated is not a good reason for keeping such closely-related material in separate articles. I arrive at this merge discussion because I personally find it confusing that Shannon limit, a well-known term, is associated with Noisy channel coding theorem and the theorem that bears Shannon's name lives in a separate article. That problem could be cleared up with a merge. --Kvng (talk) 18:32, 18 August 2010 (UTC)


 * Also note that "channel capacity" links sometimes point here and sometimes to Noisy channel coding theorem through redirect or what have you. A merge would clean this up. --Kvng (talk) 18:39, 18 August 2010 (UTC)


 * If you want to support a merge, jumping into a 4-year-old discussion is probably not the way. Start a new proposal. Dicklyon (talk) 02:41, 19 August 2010 (UTC)


 * Ouch! How would starting a new proposal help us reach a consensus on this? --Kvng (talk) 04:02, 19 August 2010 (UTC)

Can Shannon-Hartley explain different uplink and downlink speed in V.92 modems?
A V.92 modem can handle 56 kbit/s in the downlink but only 48 kbit/s in the uplink. Can this difference be explained by different information capacity according to Shannon–Hartley in the uplink and in the downlink? In the downlink, the modulator has a digital interface, and utilizes the PCM system for sending one symbol per sample. Perhaps the inter-symbol interference may be lower in this case, since the modem symbols are synchronized with the PCM sample instants? Perhaps the maximum signal strength is lower, since it may be hard to identify the maximum possible amplitude?

What is the S/N in a PCM system, if there is no noise? Is there a difference between Europe and America because of different PCM systems?

I think it is more appropriate to use the Nyquist capacity limit instead of the Shannon limit when discussing the PCM case, since we know the number of possible levels. However, note that Shannon gives the net bit rate with an ideal error correction code, while Nyquist gives the gross bit rate.

We may assume that we could use all N=256 levels, resulting in a gross bit rate capacity of 2*B*log2(256) = 16*B bit/s, where B is the analog bandwidth in hertz. If we assume that the bandwidth is B=4,000 Hz (i.e. ideal filtering), then according to Nyquist we could get 16*4,000 = 64,000 bit/s. If we instead assume only 3,400-300 = 3,100 hertz bandwidth, we would get 49,600 bit/s. In practice, frequencies outside the passband may also be utilized. The V.92 maximum downlink speed corresponds to 56,000/16 = 3,500 hertz bandwidth.

N = 256 levels, without any noise, gives the same bit rate as the Shannon limit would give if the SNR were 20 log10(256) ≈ 48 decibels.

Mange01 23:29, 30 November 2006 (UTC)
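The arithmetic in the question above can be replayed in a few lines (this just restates the idealized noiseless-PCM model described, not a claim about real V.92 behavior):

```python
import math

M = 256                         # PCM quantization levels
bits_per_symbol = math.log2(M)  # 8 bits per sample

# Nyquist gross rate R = 2*B*log2(M) for the candidate bandwidths above
for B in (4000, 3100, 3500):
    print(B, "Hz ->", int(2 * B * bits_per_symbol), "bit/s")

# SNR at which Shannon's capacity B*log2(1 + S/N) equals the noiseless
# 256-level Nyquist rate 2*B*log2(M): requires S/N = M**2 - 1
snr_db = 10 * math.log10(M**2 - 1)
print(round(snr_db, 1), "dB")   # about 48.2 dB, matching the ~48 dB above
```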

Channel capacity I = maximum net bit rate, exclusive of forward error correction. R = gross bit rate. Comparison of Shannon's capacity to Hartley's law not possible!
The Shannon-Hartley theorem presumes an ideal forward error correction channel code; otherwise error-free reception can never be achieved, unless the S/N is infinite. The channel capacity stated by Shannon-Hartley is the net bit rate, while the data rate stated by Nyquist (or the Hartley theorem) is the maximum gross bit rate (raw bit rate). Therefore these two theorems cannot really be compared, as is done in the 3.1 Comparison of Shannon's capacity to Hartley's law section. I suggest that this section be removed. Mange01 09:45, 18 June 2007 (UTC)


 * This book by John Robinson Pierce has exactly that comparison, on page 176 (sorry, it's not part of the online preview). John was a personal friend of Nyquist, Hartley, and Shannon (and myself), and I trust his judgement on this. I'll add the ref. Let me know if you'd like me to email you a scan of the page. Dicklyon 05:23, 19 June 2007 (UTC)


 * Thnx for the prompt reply. This article is really professional, and you are an authority in your area. However, I find it strange that forward error correction and code rate are so rarely mentioned. Is that the case in the historical sources as well?
 * I have discussed the relation between net bit rate and gross bit rate further in the article channel capacity. If you claim that I have misunderstood something fundamental, please clarify this in the articles instead of just reverting.
 * Regarding the comparison between Shannon and Hartley - what about the following suggestion for reformulation? Mange01 01:20, 20 June 2007 (UTC)

Comparison of Shannon's capacity to Hartley's law
Comparing the channel capacity to the information rate from Hartley's law, we can find the effective number of distinguishable levels M:


 * $$k \cdot 2B \log_2(M) = B \log_2 \left( 1+\frac{S}{N} \right) $$

where k≤1 is the forward error correction code rate.

If we assume that k is nearly 1, which corresponds to very little redundant error coding, the required number of levels is


 * $$M = \sqrt{1+\frac{S}{N}}$$

The square root effectively converts the power ratio back to a voltage ratio, so the number of levels is approximately proportional to the ratio of rms signal amplitude to noise standard deviation.

This similarity in form between Shannon's capacity and Hartley's law should not be interpreted to mean that M pulse levels can be literally sent without any confusion; more levels are needed, to allow for redundant coding and error correction, but the net data rate that can be approached with coding is equivalent to using that M in Hartley's law.
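For what it's worth, the proposed relation with k ≈ 1 is easy to tabulate; M comes out close to the amplitude ratio, as the text says (illustrative SNR values, assuming code rate k = 1):

```python
import math

# Effective number of distinguishable levels from equating Hartley's rate
# 2*B*log2(M) with Shannon's capacity B*log2(1 + S/N), with code rate k = 1:
# M = sqrt(1 + S/N)
def effective_levels(snr_db):
    snr = 10 ** (snr_db / 10)
    return math.sqrt(1 + snr)

for snr_db in (10, 20, 30, 40):
    print(f"S/N = {snr_db} dB  ->  M = {effective_levels(snr_db):.2f}")
# at 20 dB the amplitude ratio is 10 and M comes out just above 10
```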

"The Nyquist formulation"
Mange01 published the following in the article:
 * Hartley's law, which in computer networking literature (References: William Stallings, Data and Computer Communications, 8th ed, 2007, p. 92; Fred Halsall, Multimedia Applications, 2001, p. 307; Behrouz A. Forouzan, Data Communications and Networking, 2007, p. 86) is often ascribed to Nyquist, is sometimes quoted in this more quantitative form, as an achievable data signalling rate of R bits per second:


 * $$ R \le 2B \log_2(M) $$

Mange01, I checked your first ref (I had to add a reflist template myself first to see them right), and it talks about a "formulation of this limitation, due to Nyquist" about the signaling rate; that's correct and in agreement with this article. But then it says "the Nyquist formulation becomes" and presents Hartley's result, without attribution. I don't think we should take this to say that many authors ascribe the result to Nyquist; many authors are sloppy about attribution, true. Your second and third (duplicated) refs I couldn't find online. The last one talks about a "Nyquist Bit Rate" of a noiseless channel; this is not Hartley's law, which talks about limiting the number of levels based on the noise, so it can't be ascribing Hartley's law to Nyquist; just more sloppiness that we don't need to propagate.


 * Yes, the computer networking tradition is sloppy when it comes to physical layer issues! But this is what a large number of students are studying today. All books that I have found in the area of computer networking refer to Nyquist and "Nyquist bit rate" when they mention the above formula. Books in electrical engineering (data transmission/digital communication/wireless communication) do not talk about the Nyquist bit rate, but seldom give Hartley credit for this work. This means that most students would search in the Nyquist article when they want to find this law, without success.

Some of your other edits also don't make me too happy; like "The channel capacity (the maximum net bit rate excluding forward error correction code that is possible for reliable transmission) is necessarily less than the Hartley data rate" mischaracterizes both capacity and Hartley's law. I think I'll just revert it all for now. Try again more carefully after studying exactly what channel capacity and Hartley's law say. Dicklyon 01:08, 20 June 2007 (UTC)
 * I don't understand what is wrong in this. This is how I teach my students. Have I fooled them? What should I say instead? Mange01 01:41, 20 June 2007 (UTC)
 * Well, first you said comparison of Hartley's law to Shannon capacity is not possible, and then you did a comparison that didn't do a good job of clarifying what it was trying to say. You can look at the capacity of the analog channel, or of the M-ary channel you get by quantizing to M levels.  These will bound R(M) above and below, respectively.  But to call the capacity "the maximum net bit rate excluding forward error correction code that is possible for reliable transmission" is not really right, since reliable transmission at that rate by using a forward error correction code is not actually possible, in general.  You are right that Hartley's rate and Shannon's capacity are different beasts; the text already tries to clarify that; if there's something you think is missing, or not right, try again or ask here and I'll help. Or get Pierce's book; he was a pretty careful expositor of these concepts. Dicklyon 01:58, 20 June 2007 (UTC)
 * Here's a statement that I think would be correct: "The M-ary discrete channel formed by quantizing 2B pulses per second through the gaussian-noise channel of bandwidth B will in general not be error-free, in which case the channel capacity of that channel will be less than the Hartley's-law rate, and reliable communication over this M-ary channel will only be possible at rates below that capacity".


 * First I must say that you, Dicklyon, should get credit for the high quality of this article.


 * What do you think of my formulation in the introduction of the Channel capacity article?
 * I don't care for it, as it takes a statement that was pretty precise and makes it no longer precisely correct. Dicklyon 21:53, 20 June 2007 (UTC)
 * The sentence that you suggested would further improve the article, but it would not address any of my concerns, which are the following:
 * 1. Many students, authors and teachers in computer networking seem to believe that the noisy channel capacity is the maximum gross bit rate. However, the channel capacity is a theoretical upper bound for the net bit rate, exclusive of redundant forward error correction. The maximum gross bit rate follows from Hartley's law. Therefore I suggest that gross and net bit rate should be mentioned in this article.
 * 2. Computer networking literature mostly talks about error detection and ARQ, but seldom about forward error correction and code rate. It is not clear to today's students that Shannon-Hartley presumes an ideal forward error correction code, and that the code rate affects the relation between the gross bit rate and the net bit rate. Therefore I suggest that code rate should be mentioned in this article.
 * 3. It is not clear from this article, and seldom from the literature, whether Hartley's law is applicable only to lowpass channels, or also to passband channels.
 * 4. Today, most students associate Hartley's law with Nyquist, but if they search for the law in the Nyquist article they won't find it. The solution would be to extend the Nyquist article with a discussion on digital communications.
 * 5. Students sometimes ask about how 56 kbit/s modems work, and if they really obey Shannon-Hartley, and I am not sure how to answer. See my question in Talk:Shannon–Hartley theorem above.
 * Once again, thnx for your time.

Mange01 21:29, 20 June 2007 (UTC)
 * These can all be addressed within the scope of existing articles, being careful to be precisely correct with what you say. Dicklyon 21:53, 20 June 2007 (UTC)

Someone has asked again (in edit summary) about Hartley refs and Nyquist, so I added a few more refs. If there are good refs that attribute Hartley's result to Nyquist, we could say that, too; but so far they have not been pointed out here. Dicklyon 18:31, 28 June 2007 (UTC)


 * It was me. I wrote: "Citation needed supporting that these equations were Hartley's, since the vast majority of today's computer networking literature, probably incorrectly, gives Nyquist the credit". Thank you Dicklyon for all your effort finding these references. I have learned a lot from this discussion.


 * Actually I did not understand the problem with the references I gave you in the beginning of this section. I don't have time to suggest a new text right now. Must go for vacation. So I want to wish you a nice summer! Mange01 19:05, 28 June 2007 (UTC)


 * Of the two refs you mentioned, I commented only on the one I could see online, and you had misinterpreted what it said. Give us a quote from the other if you think it attributes Hartley's law to Nyquist; maybe it does. Dicklyon 20:11, 28 June 2007 (UTC)

Bandwidth of passband signal vs baseband signal?
Is the channel capacity the same for a passband channel as for an equivalent baseband signal, although the bandwidths are defined differently? In that case, how can it be explained? Otherwise, the article should reflect this. Mange01 11:21, 26 June 2007 (UTC)


 * I try to answer my own question. I hope I have got it right. Is it okay if I add the following to the article:
 * Definition of B; "B is the baseband bandwidth (upper cut-off frequency) of the channel in hertz;"...


 * "Note that a passband signal (a modulated radio frequency signal) with bandwidth W can be converted to an equivalent baseband signal (by means of coherent detection), with baseband bandwidth (upper cut-off frequency) B=W/2."


 * This image should be improved, but might perhaps be added after that:
 * [[Image:Baseband to RF.png|center|frame|Comparison of the baseband version of a signal with spectrum between -B and + B, and its passband version with spectrum between fc-B and fc+B, i.e. with bandwidth W=2B.]]
 * Mange01 (talk) 22:09, 22 April 2008 (UTC)


 * Your text and picture are only applicable to the double-sideband-modulated passband, as the modulation puts the same signal into two half bands; in the general case, a passband and baseband band work alike. Dicklyon (talk) 22:14, 27 April 2008 (UTC)


 * Thnx for answering, but I don't get it. What bandwidth should I use in the Shannon-Hartley formula in this case? W or B?
 * In digital modulation I have never heard about single-sideband modulation. SSB would distort the phase information, so you cannot achieve lower bandwidth than what you call "double side band" in the digital case. I don't know what you mean by "in the general case", but the most generic modulation method is QAM, since almost all modulation methods can be achieved using the same modulation and demodulation method, and can be described using QAM constellation diagrams. FSK and similar forms have even wider bandwidth requirements than QAM. Mange01 (talk) 15:12, 28 April 2008 (UTC)


 * Sorry for the 9 months of delay; I just noticed that I missed this question. Dicklyon (talk) 21:14, 13 January 2009 (UTC)


 * It depends on what you mean by "in this case". In general, use the full width W of the passband.  When this is true: "Note that a passband signal (a modulated radio frequency signal) with bandwidth W can be converted to an equivalent baseband signal (by means of coherent detection), with baseband bandwidth (upper cut-off frequency) B=W/2," and when the baseband signal you get this way is a real (as opposed to complex-valued) signal, then you're in a context that is only using half of the band, effectively, and you'd use B in that case.  But keep in mind that two such channels will fit into the same band, in quadrature, or by using SSB.  QAM is the general case for which W is the required bandwidth; it's two symmetric signals in quadrature; to convert it to baseband, you need two parallel channels (I and Q, or real and imag) of baseband width B.  Dicklyon (talk) 21:14, 13 January 2009 (UTC)
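The bookkeeping described here can be sketched in a couple of lines: using the full passband width W in the formula gives the same total as two real baseband channels (the I and Q components) of width B = W/2 each, assuming the same S/N applies in both views (the numbers below are arbitrary):

```python
import math

B = 4000.0        # baseband width in Hz (illustrative)
W = 2 * B         # passband width
snr = 1000.0      # signal-to-noise ratio, assumed equal in both views

# capacity computed over the full passband width W
c_passband = W * math.log2(1 + snr)

# the same signal viewed as two parallel real baseband channels
# (I and Q), each of width B
c_baseband_pair = 2 * (B * math.log2(1 + snr))

print(c_passband, c_baseband_pair)   # the two accountings agree
```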


 * A key thing to remember is that this theorem is a theoretical limit, based on certain assumptions that are clearly impossible in the real world (such as arbitrarily long error correction codes). Real modulation schemes can approach the limit but they can't reach or pass it.
 * As for why no one uses SSB with digital modulation, the answer is that there is no point. DSB+QAM gives equivalent efficiency to SSB+AM (you can't do SSB+QAM because you lose the needed phase information) and afaict it is simpler to implement in practice. 86.22.248.209 (talk) 18:23, 18 April 2012 (UTC)

Non-clickable link
The title and the text contain an em dash. When a link containing that (instead of the Unicode-to-ASCII conversion) is cut and pasted to reference the article, that link appears as broken. This is especially bad because the em dash is not the proper punctuation mark to use to separate the guys' names anyway. —Preceding unsigned comment added by 198.147.225.86 (talk) 18:53, 4 March 2009 (UTC)

OnLive section
I removed the following piece of text:


 * OnLive challenge of Shannon - Hartley "law"


 * In a guest lecture presentation delivered in December 2009 at Columbia University in New York City, OnLive(tm) executive Steven Perlman described his efforts to break the current limitation. In his words: "People talk about Shannon's Law, give me a break, it's not a law - it is a limit". He described how his company's current compression technology delivers 100X Shannon in one wireless channel. He argued that Shannon was not wrong, but that "he was asking the wrong questions".


 * A video clip of the presentation was featured on CrunchGear and is located at OnLive Presentation (the relevant section is discussed around time frame 39:05)

The reason is that it doesn't actually seem to challenge anything about the theorem. Furthermore, the presentation doesn't go into details about how this "100X Shannon" was supposedly achieved. More likely than not, the fact that compression was used (which has to be lossy or impose constraints on the input signal's entropy) wasn't taken into account when comparing the channel capacity to its theoretical limit. It's hard to say more than that without more details, but this is certainly enough reason to remove the aforementioned highly anecdotal section. 217.149.210.16 (talk) 17:45, 17 January 2010 (UTC)

Your removal makes sense, yet perhaps a section briefly explaining multi-channel schemes such as MIMO could be added for clarification, noting different equations apply? Perlman's DIDO (Distributed-Input-Distributed Output) appears to be a flavor of MIMO and so could be clarified or at least mentioned on the MIMO page along with OnLive? It seems likely that many hearing Perlman's pitch will reach this page as I did, so some clarification would be helpful, even if on another page. Although I'm an EE this is not my area of expertise and would like an expert to bring clarity to the likely questions of those hearing Perlman. Those who think DIDO is breaking laws of physics should be assured it is not doing so. MIMO (and DIDO) do not exceed the Shannon Limit: Using multiple data channels requires a factor to allow for the added complexity, which increases ~geometrically with the number of channels. In Perlman's Columbia presentation he mentioned "Shannon's Law" as applying to a single data channel and I understood him to mean DIDO data is encoded over multiple channels and thus Shannon does not apply. — Preceding unsigned comment added by 68.6.87.251 (talk) 16:31, 2 July 2011 (UTC)

Hartley vs Shannon
"Hartley's rate result can be viewed as the capacity of an errorless M-ary channel of 2B symbols per second. Some authors refer to it as a capacity. But such an errorless channel is an idealization, and the result is necessarily less than the Shannon capacity of the noisy channel of bandwidth B".

This doesn't sound right. Since we're talking about capacity, shouldn't Hartley's capacity calculation actually be *more* (as in more optimistic, yielding more capacity) than Shannon's? I can't give it more thought right now and don't dare to edit the article directly, sorry... just wanted to point it out. —Preceding unsigned comment added by 80.35.240.178 (talk) 05:04, 11 October 2010 (UTC)

Yes, that needs to be clarified. It seems to be saying that sending data over a (optimistic) noise-free channel gives fewer bits per second than over a noisy channel, which is obviously wrong.

Actually, both results involve sending symbols at the same frequency -- 2B symbols per second -- over some channel with some given amount of noise.

The Hartley result assumes that we select each symbol from a relatively small set of relatively widely-spaced symbols, such that the noise in the channel is small relative to the spacing between symbols, so we (usually) don't get an error caused by confusing one symbol for another -- even though there is noise in the channel, the data is received (usually) "errorless". This would be the best (usually) error-free bit/s data rate we could get, *if* the receiver were not allowed to perform any error-correction.

In modern practice, we approach the Shannon result when we select each symbol from a much larger set of relatively finely-spaced symbols, such that the noise in the channel is often larger than the spacing between symbols, so we often get errors caused by confusing one symbol for another -- but the receiver then applies some error-correcting code, resulting in the data being received (usually) "errorless".

It turns out that the "extra" bits we receive -- from using a larger set of symbols -- more than compensate for the "redundant" bits we spend -- on the error-correcting code. So the Shannon limit has a higher bit/s result than the Hartley result.

How can we explain this in the article? --DavidCary (talk) 02:57, 4 September 2011 (UTC)


 * The Hartley result is not a capacity, and doesn't correspond to an error-free rate. It can be low error rate only if M is low enough, in which case the rate will be well below the capacity.  If M is chosen higher, the rate can be above capacity, but the error rate will be high.  Dicklyon (talk) 03:20, 4 September 2011 (UTC)
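This point is easy to see with numbers. At an assumed B and S/N (arbitrary illustrative values), the Hartley rate 2B log2(M) sits below the Shannon capacity for small M and above it once M exceeds roughly sqrt(1 + S/N):

```python
import math

B = 3000.0              # bandwidth in Hz (illustrative)
snr = 10 ** (30 / 10)   # S/N = 30 dB (illustrative)

capacity = B * math.log2(1 + snr)   # Shannon capacity, ~29.9 kbit/s
print(f"capacity = {capacity/1000:.1f} kbit/s")

# Hartley's rate at 2B pulses/s for various alphabet sizes M; the
# crossover is near sqrt(1 + S/N), about 32 levels here
for M in (2, 8, 32, 128):
    rate = 2 * B * math.log2(M)
    side = "below" if rate < capacity else "above"
    print(f"M = {M:3}: R = {rate/1000:5.1f} kbit/s ({side} capacity)")
```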

Infinite bandwidth analog channels and infinite power?
"(Note: An infinite-bandwidth analog channel can't transmit unlimited amounts of error-free data, without infinite signal power)"

I'm not sure that that's true. Given an infinite-bandwidth analog channel, surely it is possible to transmit a countably infinite amount of information as an infinite sequence of harmonics, each signal scaled so the series of powers converges to a finite limit?

Another way to see this is that noise on an infinite-bandwidth, finite power channel has to contain an infinite amount of information because if you keep the power of a noise signal fixed and increase its bandwidth without limit, the information also increases without limit.

So I'm not sure the quoted statement is true, and if it is true, some sort of citation or argument might be needed.
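For what it's worth, the standard argument behind the quoted note is that with fixed signal power P and white noise of one-sided spectral density N0, the Shannon capacity B log2(1 + P/(N0·B)) stays finite as B grows, approaching P/(N0 ln 2). A sketch with assumed values of P and N0:

```python
import math

P = 1.0      # signal power in watts (assumed)
N0 = 1e-6    # one-sided noise power spectral density in W/Hz (assumed)

def capacity(B):
    # Shannon capacity of a bandwidth-B channel with white noise N0
    return B * math.log2(1 + P / (N0 * B))

for B in (1e4, 1e6, 1e8, 1e10):
    print(f"B = {B:.0e} Hz:  C = {capacity(B):.4g} bit/s")

# As B -> infinity, C approaches the finite limit P/(N0*ln 2):
print(P / (N0 * math.log(2)))
```

This covers the white-noise model the theorem uses; the question above about noise whose total power stays fixed as bandwidth grows (so the density goes to zero) is a different model, and the note in the article arguably should say which one it means.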

(Also the comma should not be there - sorry for the nitpick.)

TomRitchford (talk) —Preceding undated comment added 10:40, 25 November 2018 (UTC)