Talk:ALOHAnet

Not the original network?
Surely the protocol in which a sender listens to the "network" IS NOT the original ALOHA protocol? I believe that such listening was not a feature of the original protocol, and this is why it had such a low bandwidth utilization - around 18.4% at best.
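(The ~18% figure above is the classical result for pure ALOHA under a Poisson traffic model: throughput S = G·e^(−2G), which peaks at 1/(2e) ≈ 18.4% at offered load G = 0.5. A minimal sketch of that textbook formula, nothing from the original system:)

```python
import math

def pure_aloha_throughput(G):
    """Throughput of pure ALOHA at offered load G (frames per frame time).

    A frame succeeds only if no other frame starts within its 2-frame-time
    vulnerable window, which under a Poisson model has probability e^(-2G).
    """
    return G * math.exp(-2 * G)

# Peak is at G = 0.5: S = 1/(2e), roughly 18.4% channel utilization.
print(f"peak pure ALOHA throughput: {pure_aloha_throughput(0.5):.3f}")
```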

Later modifications included using clock pulses - (Slotted ALOHA), and very probably listening before transmitting - as suggested here. I have seen it written that Metcalfe's modifications brought the efficiency of the system up to around 90% channel utilisation - though how many tweaks were needed to get this I'm not sure.

It's difficult to get all the details - there is much which is not easily accessible, or interpretable, at the current time. David Martland 14:56, 22 Oct 2003 (UTC)

There are other details which it would be good to know about for the original implementation. For example, was the transmitter on Oahu (and indeed those at the other islands) "on" all the time? Obviously the receivers would have been on permanently. It might have been possible to somehow idle the transmitter, and then only apply power when there was a packet to send. This might have meant that there was not always a carrier to detect. With that sort of technology, the presence or absence of an FM carrier could have been used to determine whether the "channel" was active or not. However, if the carrier was always on, then what distinguished packet data from idling? Was there some form of idling bit sequence - alternating 0s and 1s perhaps, so that detecting "silence" would have been done by detecting a sufficient number of bits from this bit pattern? Where most data was character data, it would probably only have been necessary to detect a couple of bytes worth to be reasonably sure that it wasn't data. In the case where this pattern actually did occur within a packet, it would simply register as a collision if another station tried to transmit over it.

The situation with inbound (towards Menehune/Oahu) signals would also have been different, as there would have been several transmitters all capable of transmitting on the same frequency. For that situation it would seem necessary to reduce power, or switch off each transmitter when not sending a packet, in order to not mask out the other stations. Perhaps it was this realisation which led the developers eventually to suggest that their decision to use two frequencies - one for outbound and one for inbound data, was in fact the wrong decision - and that a single frequency network should have been developed.

What was the effect of capture ratio on the signals? Since FM was used, and since the stations were quite widely separated, it would actually have been possible for two stations to try to communicate simultaneously, and for only one to fail, due to one having a stronger signal at the receiver. A few dB difference in signal might have rendered this quite feasible. If indeed the acknowledgements were done by echoing the message, then the stronger message could have been echoed back, and then the other station could retry. Was this significant at all? It would clearly have improved the overall capacity, though it could also have meant that some stations tended to mask out others - perhaps consistently, and hence unfairly.

Does anyone know? David Martland 18:43, 22 Oct 2003 (UTC)

Further question: It would also be good to know something about how ALOHA was "really" developed - if anyone knows, and/or is willing to spill the beans. Was the system really carefully worked out, or was it put together by a "go down to Radio Shack, buy it, and try it" approach, discovering the problems as they arose? I suspect that the real development was a combination of "discovery" and predictive design - nothing wrong with that really - lots of systems get developed this way. Most people would probably be concerned to get wireless communication between remote locations working first, and then worry about problems with collisions etc. later. Is that what happened? David Martland 18:50, 22 Oct 2003 (UTC)

Two reasons?
First section said it was important for two reasons, but only listed one. Was the other removed? Removed "two reasons". Tualha 16:30, 30 Nov 2003 (UTC)

I know the answers to lots of the questions here, after doing a lot of digging. I am writing a paper/talk about this, and will write up a summary for this place, but please be a bit patient. [Ignatios Souvatzis]

Menehune
I figure the menehune needs a mention (and some explanation). The page at http://research.microsoft.com/~gbell/Computer_Structures_Principles_and_Examples/csp0432.htm explains it a bit (with a uselessly small diagram). And I think we should say something like "...this network concentrator was named the MENEHUNE, after a mischievous type of Polynesian fairy (see Menehune)". I'd add it in myself, but I can't really figure out where it belongs in the article. -- Finlay McWalter | Talk 21:37, 16 Feb 2004 (UTC)


 * Discussed in ALOHAnet ~Kvng (talk) 17:40, 25 June 2024 (UTC)

The ALOHAnet did not have CS!
It seems clear to me that the article is incorrect in stating that the ALOHAnet network was using CSMA.

With one frequency being used for "multiple access" and the other for the acknowledgements "broadcast" it is clear that this cannot be the case.

Stations would not be able to "listen" (detect carrier) since they just transmitted on MA channel and listened only to the "broadcast" channel for the acknowledgements of their messages.

The system would therefore be best described as MA/CD, but even the "CD" is with a twist. The stations did not really detect collisions; rather, they "knew" there had been a collision (or some other problem) when they did not get their acknowledgement on the broadcast channel.
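(The ack-or-retransmit behaviour described above can be sketched as a simple loop. This is an illustrative sketch only; `transmit` and `ack_received` are hypothetical callbacks, not anything from the actual ALOHAnet hardware:)

```python
import random

def send_with_ack(transmit, ack_received, max_tries=8):
    """Sketch of the sender behaviour described above: transmit on the
    shared inbound channel, then listen on the outbound broadcast channel
    for an acknowledgement. No carrier is sensed; a missing ack is the
    only evidence of a collision (or any other loss).
    """
    for attempt in range(max_tries):
        transmit()                # send the packet, no listen-before-talk
        if ack_received():        # wait for the ack on the broadcast channel
            return True           # acknowledged: done
        # No ack: assume a collision and retry after a random backoff.
        backoff_slots = random.randint(0, 2 ** min(attempt, 4))
        # (A real station would now wait backoff_slots * slot_time.)
    return False                  # give up after max_tries attempts
```

The random backoff is the essential ingredient: without it, two colliding stations would retransmit in lockstep and collide forever.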

CS was only "invented" by Metcalfe around 1976 and he also made CD a feature of every station ...

Agreed
I'm going to rewrite this later... Notanotheridiot 18:26, 24 April 2007 (UTC)


 * ALOHAnet indicates that ALOHAnet led to development of CSMA but there is no longer an assertion the CSMA was used in ALOHAnet. ~Kvng (talk) 17:43, 25 June 2024 (UTC)

The ALOHA protocol
I removed "(like a grade school classroom at recess)" from the end of: "This means that 81.6% of the total available bandwidth is basically being wasted due to stations trying to talk at the same time."

possible mistake
I have never contributed to Wikipedia before, so I won't change the article itself, since I don't really know how. But, as I am currently doing a project concerning ALOHANET, while searching for the actual bitrate of ALOHANET I found this document: http://ethernethistory.typepad.com/papers/ALOHAnet.pdf On page 12 it says that ALOHANET was run on two 24,000 baud channels, not 9600 as it is written in this article. Someone visiting this place - please verify my information.

Dariusz Wawer [scyth*at*tenbit.pl], 02.12.2007 18:26 —Preceding unsigned comment added by 82.210.137.42 (talk) 17:28, 2 December 2007 (UTC)


 * 24,000 is mentioned on the page labeled 7, the 12th page in the above-linked PDF. This paper was published in 1970 before the network was operational so is probably not a great source for the capabilities of the built network. Another source from 1981 cited in this WP article indicates 9600. ~Kvng (talk) 17:53, 25 June 2024 (UTC)

Miscellaneous Mistakes
Reading the original ALOHANET paper is helpful. Errors:
 * The data rate on the radio channels was 24,000 baud, not 2400 or 9600 baud. That's consistent with the allocation of 100 kilohertz of spectrum for each channel. By modern standards, that's terrible bandwidth utilization, but modems were very primitive then.
 * The remote terminals were not "teletypes". The article uses that word, but the paper does not. The paper describes a typical load as "each user sending one message every 30 seconds". So this was a store-and-transmit terminal, like some predecessor of an IBM 3270. That makes sense: the central machine was an IBM 360/65, which was designed to work with such terminals, not with teletypewriters. It also meant that response delays on the order of seconds weren't a problem.
 * Note that there's no contention on the outbound traffic from the Menehune, and that most of the traffic is outbound (computer to user).

Incidentally, when I first saw an Ethernet on a tour of Xerox PARC in 1975, it was described to me by Alan Kay as "an ALOHAnet with a captive ether". --John Nagle (talk) 06:33, 24 October 2008 (UTC)


 * See above for speed discussion.
 * ALOHAnet mentions teletype as a possible interface but that connects to the TCU, which presumably does the store-and-transmit you describe. ~Kvng (talk) 17:57, 25 June 2024 (UTC)

1) There are N nodes attempting to send data at time T. 2) The probability of successful transmission of one node is p_s
The equation put like this out of context is difficult to understand. Where does this come from? What are the symbols (i, for example, is not defined anywhere)? If someone can provide the reference this comes from, I can fix it. — Preceding unsigned comment added by Ingframin (talk • contribs) 15:02, 24 July 2019 (UTC)


 * i no longer appears in the equations. They look adequately documented and self-consistent at this point. Their derivation is not explained.
 * The section has a reference to Tanenbaum but no page number is given, so it is not immediately clear where the content in ALOHAnet comes from. There is a discussion of probabilities, with similar equations, starting on page 10 of the paper linked above. Perhaps this would be a better source. ~Kvng (talk) 18:09, 25 June 2024 (UTC)
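(For what it's worth, equations of the shape discussed above usually come from the standard slotted-ALOHA derivation: with N nodes each transmitting in a slot independently with probability p, the per-slot success probability is p_s = N·p·(1−p)^(N−1), maximized at p = 1/N and tending to 1/e as N grows. A small sketch of that textbook formula, with N and p assumed as the symbols in question:)

```python
def slotted_success_prob(N, p):
    """P(exactly one of N nodes transmits in a given slot), each node
    transmitting independently with probability p. This is the usual
    textbook derivation; the article's exact notation may differ.
    """
    return N * p * (1 - p) ** (N - 1)

# Maximized at p = 1/N; approaches 1/e (about 0.368) for large N.
print(slotted_success_prob(100, 1 / 100))
```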