Wikipedia:Reference desk/Archives/Computing/2013 February 11

= February 11 =

Downloading Maven
I want to download Apache Maven, but their website has 18 download links, just for the latest version! Which one do I click? -- Ypnypn (talk) 01:35, 11 February 2013 (UTC)


 * At http://maven.apache.org/download.cgi#Mirror? You want 3.0.4, because it’s the latest version. You want a binary, because if you wanted to build it from source you wouldn’t be asking such questions. You probably want a ZIP rather than a gzip’d TAR, as ZIP is more likely supported by the OS you’re using. So you want the binary ZIP. The accompanying checksum and signature files are for helping you to feel assured that the mirror (server) you’re downloading from has given you the authentic file and not something else, but if you’re on Microsoft Windows, it’s (comparatively) absurdly involved to utilize either of those, so you may as well just ask how in another question if you are interested (after all, if you’re on Microsoft Windows, you’re clearly already content downloading random executables with little idea of their authenticity and running them :/ :p). ¦ Reisio (talk) 02:56, 11 February 2013 (UTC)


 * Thanks! -- Ypnypn (talk) 03:11, 11 February 2013 (UTC)
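Reisio's point about the checksum files above can be made concrete. A minimal sketch of verifying a download against its published MD5 digest — the filenames in the comments are hypothetical, not the actual Maven artifact names:

```python
import hashlib

def file_md5(path):
    """Compute the MD5 hex digest of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published on apache.org itself (not the mirror),
# e.g. (hypothetical names):
#   expected = open("apache-maven-bin.zip.md5").read().split()[0]
#   assert file_md5("apache-maven-bin.zip") == expected
```

The point of fetching the digest from the project's own site rather than the mirror is that a compromised mirror could serve both a tampered archive and a matching digest.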

Why do ADSL2+ modems often use AC to AC power supplies?
I was reminded of this while looking for my spare modem recently. A few years back, I noticed all 3 modems I had used transformers with an AC output, 2 at 9V and one at 18V. It's easy to find ones using similar AC PSUs from a quick search, e.g. 15 VAC, 24 VAC, another 9 VAC, unclear VAC.

While many of these are old, and I'm not suggesting all or even most ADSL modems use AC (I'm not sure what modems with built-in wireless routers use, and obviously ones supplied solely by USB, as well as PCI, PCI Express and ones built in to laptops, must use DC), it does seem quite a lot to me. Beyond those used by ADSL modems, all the other devices I've used, e.g. network switches, (standalone) wireless routers, USB DVB-S receivers (which generally used an external PSU), DECT bases with digital answering machines, laptops (also external HDDs, battery chargers, mobile phones, but these are even less surprising), always seem to use a DC output PSU. Offhand, the only other devices I know of which use an AC adapter are 12V halogens, although the reason there is more that the halogens don't care (and the supplies are usually internal to the device); I'm sure there are a lot of other devices that do use AC adapters, though (probably some music equipment?).

When I first noticed this, after a long time searching I think I found someone suggesting that some component works best with AC, but I can't find it again and it may have just been a forum post. Most of the chips must use DC, as with every other electronic component.

So does anyone know a reason why ADSL modems seem to be among the few devices using an AC-AC power supply? Is it just the preference of some reference design? To discourage the use of crappy switching power supplies with low switching frequencies (application-notes.digchip.com/003/3-4102.pdf)? Or was what I read before right, and there's some component that uses or works better with AC (and if so, what)?

Nil Einne (talk) 02:44, 11 February 2013 (UTC)

Accessing Android phone's internal/memory card storage
How do I use a PC/laptop to read, write to, and delete data stored in an Android phone's internal (and memory card) storage, just like a USB flash drive? Czech is Cyrillized (talk) 03:45, 11 February 2013 (UTC)


 * Does this model have a USB port? If so, you can likely connect it to your PC and access files on the memory card.  The internal memory may not be directly accessible, forcing you to move the data to the memory card first. StuRat (talk) 06:18, 11 February 2013 (UTC)


 * Note that newer versions of Android will not mount their internal memory or SD card partitions as USB mass storage, but will instead use MTP. This means that you may have to jump through additional hoops to mount the storage on something like Linux, but Windows should work natively.  — daranz [ t ] 06:52, 11 February 2013 (UTC)


 * Those hoops are described here. -- Finlay McWalter | Talk 19:42, 11 February 2013 (UTC)

Operating System
'''which is the first computer operating system and developed by whom? ''' — Preceding unsigned comment added by Anil Golakiya (talk • contribs) 07:06, 11 February 2013 (UTC)


 * See History of operating systems. Though there isn't really a clear answer...  Dismas |(talk) 07:10, 11 February 2013 (UTC)


 * Timeline of operating systems shows you the road map. OsmanRF34 (talk) 14:44, 12 February 2013 (UTC)

Size of WM data
How large is all the data held by Wikimedia? Are we talking terabytes, petabytes, or exabytes? -- Ypnypn (talk) 18:32, 11 February 2013 (UTC)
 * Not sure but you can see the specs of their dump servers at Dumps/Dump servers. The actual dumps are described in Data dumps. Dmcq (talk) 18:55, 11 February 2013 (UTC)
 * If you're just talking about the current text of the articles on the English Wikipedia, excluding the pictures or talk pages but including the markup, my estimate is that it would come to on the order of a hundred gigabytes, but that's just a guess. Dmcq (talk) 19:19, 11 February 2013 (UTC)
 * The uncompressed size of current revisions (without images) of all English Wikipedia articles is about 10 Gb. With thumbnail images ≈ 100 Gb. You can read about the full size in 2010 here. It is 5.6 Tb uncompressed. Ruslik_Zero 19:22, 11 February 2013 (UTC)
 * I looked there and it has various statistics, but I can't see where the 10GB is mentioned; I think you probably meant compressed instead. At Database download it says "pages-articles.xml.bz2 – Current revisions only, no talk or user pages. (This is probably the one you want. The size of the 3 January 2013 dump is approximately 9.0 GB compressed)." Dmcq (talk) 20:39, 11 February 2013 (UTC)
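The kind of back-of-envelope arithmetic behind an "order of a hundred gigabytes" guess like Dmcq's can be sketched like this — the article count and average markup size below are illustrative assumptions, not measured figures:

```python
def estimated_dump_size_gb(articles, avg_kb_per_article):
    """Rough uncompressed size of article wikitext, in gigabytes."""
    return articles * avg_kb_per_article / 1024 / 1024

# e.g. roughly 4 million articles at roughly 25 KB of markup each
size = estimated_dump_size_gb(4_000_000, 25)  # ~95, i.e. on the order of 100 GB
```

Compression (bzip2 on highly redundant wikitext) easily accounts for the gap between an estimate like this and the ~9 GB compressed dump quoted above.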

Overclock
Is it possible, with the right cooling setup, to overclock my Intel Xeon processor to 10 GHz? I currently have it at close to 6 GHz, but is 10 possible? Andrew Wiggin (talk) 21:47, 11 February 2013 (UTC)


 * A question very closely related to this one received many answers on the Science desk, until you deleted the entire section. To directly answer the question: no, this is not feasible.  Even if you have a large budget, a large team of engineers, unlimited time, unlimited coolant, and unlimited funds, you will not be able to make a current-generation off-the-shelf Intel microprocessor run at 10 gigahertz.  You might find Intel's documentation interesting; here is a white-paper, Using enhanced Intel SpeedStep features in High Performance Computing clusters.  As you will read, managing the existing clock-frequency configurations became too large a task for a "small support staff" of high-performance computing experts, so (read between the lines: after shelling out a lot of time and money), Intel released documentation and some software tools to help make the job a little easier.  You are asking to extend the performance beyond the design specification, so it will require a little more work and time and talent and money.  If you are very interested and want to know why this will not work, you can read about digital timing closure.  Large circuits like CPUs are not designed to work at any possible frequency.  Even if you solve all the thermal issues, and if you manage to speed up all your peripherals, and you simply disregard any actual performance gains while you are questing for higher core frequencies, you will eventually reach some frequency at which the individual clock skews between circuit elements are so random that the logic elements no longer function as coherent units.  Above this frequency, the digital error-rate will render the processor useless.  This frequency tends to be close to the maximum commercially-available frequency.  Nimur (talk) 05:35, 12 February 2013 (UTC)


 * Also, if your goal is to render computer graphics, and you have a large budget, then multiple processors set up to do parallel processing is the way to go, not overclocking (at least not to the extent you want to push it). You're spending resources on your coolant system which would be better spent on additional processors. StuRat (talk) 06:15, 12 February 2013 (UTC)


 * Simple answer - no. See the frequency records for standard chips. A simpler chip may be better, since it wouldn't build up heat so fast. They haven't got to 10 GHz yet, but are closing in on it slowly - give it another few years. Dmcq (talk) 11:04, 12 February 2013 (UTC)
 * What would be nice is to get an asynchronous CPU that went fast, see Asynchronous circuit. Then all you'd need to do to get it faster would be to pour on the liquid nitrogen and adjust the voltage to get the maximum speed, without having to mess with a clock. Dmcq (talk) 16:08, 12 February 2013 (UTC)
 * It is not difficult to get very high frequency electronics - digital or otherwise. X band and higher-frequency microwave electronics for communications satellites (and occasionally for ground-based communication) regularly use digital electronics that switch at 10, 20, or 80 gigahertz.  These circuits are not large-scale modern-generation CPUs.  They clock very fast, but they do very little "general purpose computing" at those speeds.  Again, let me re-iterate: the throughput of instructions is not usually CPU-bound, so a faster CPU core is not able to execute programs proportionally faster.  That's called the megahertz myth.  This fact becomes more and more painfully true as you make the CPU faster and faster - because you're spending less total time using the CPU.  This fact is called Amdahl's law.  So: what's the objective?  To get a very fast clock?  Then use a high-frequency clock circuit.  You can make something tick at hundreds of gigahertz!  If the objective is to maximize computer performance, then a faster clock will get you nowhere fast.  Nimur (talk) 19:14, 12 February 2013 (UTC)


 * Straight sequential processing speed still counts for a lot in commercial applications. If you really want that, the IBM zEC12 (microprocessor) is what you want. Otherwise one can often use an array of processors to do the work. Dmcq (talk) 09:45, 13 February 2013 (UTC)
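Amdahl's law, mentioned by Nimur above, is easy to compute with: if only part of the runtime benefits from a faster clock, the overall speedup is capped by the serial remainder. The 0.8 CPU-bound fraction below is an illustrative assumption:

```python
def amdahl_speedup(parallel_fraction, factor):
    """Overall speedup when only `parallel_fraction` of the
    workload is accelerated by `factor` (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / factor)

# If 80% of runtime is CPU-bound and the clock somehow doubled:
print(amdahl_speedup(0.8, 2))     # ~1.67x overall, not 2x
# Even with an infinitely fast core, the ceiling is 1/(1 - 0.8) = 5x:
print(amdahl_speedup(0.8, 1e12))
```

This is why pushing a 6 GHz part toward 10 GHz buys far less than the raw frequency ratio suggests, even before thermal and timing-closure problems enter the picture.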

Object oriented design
Hi all, I'm working on a simple app (or at least it started out that way) and I am having trouble working out the "right" relationship among classes. File management will be simple enough (at least for the user) but I now have different classes for: "User Account", "Textbook", "File". The File class manages the filesystem, while the others have an obvious meaning. The current setup (not too late to change) has File doing the real work, since the plan is to use it to store/retrieve. The others are basically data classes, but no file will exist outside of a user account, and there will basically be no sharing between different users. How should these classes interact? Also, can anyone point me to good websites that discuss OO design by example, showing dos and don'ts? I don't have time to read textbooks on this sort of thing, since I'm only solving a relatively small-scale problem, and no one else is likely to use these classes, but I do want to read some guidance. Something like this would be ideal, but with actual examples that walk you through the things people have done wrong. IBE (talk) 22:24, 11 February 2013 (UTC)


 * Please tell us exactly what this app is supposed to do. Also, isn't a(n online) textbook a type of file? StuRat (talk) 22:33, 11 February 2013 (UTC)


 * That sounds like very good advice. It says there 'To design good objects you must think like an object, not a programmer', whereas you sound like you are thinking about the program. However, for a simple personal project I wouldn't worry overmuch about that; really, classes and overloading are probably pushed a bit too much. Just get something that does the basic work, and then try redesigning it if you want to learn object programming better. For object programming you go from the entities and the data, then the methods, to the programming, rather than starting from the programming. Dmcq (talk) 10:46, 12 February 2013 (UTC)

Thanks to you both. Briefly, the app involves a few trendy extras, but is mostly a souped-up practice journal. You don't load any textbooks (this could change later); you enter the summary details of a textbook, and mark the problems you have done etc. The most complex class (in terms of features) is the "problem/subproblem" class, but that goes in an obvious enough place, and I'm not exactly confused about it. Yes, a textbook counts as a file, but my point is that a file object seems to make sense for handling loading and saving functions. The reason for having a separate class is to manage Apple's Model–view–controller architecture.

Explanation of MVC
The Controller, called e.g. IBE_ViewController, has member data IBE_View * thisIsTheDisplay; // and IBE_Data * thisIsAllMyData (the last object is more like several similar objects, but the View is one object). For the Data to talk to the View, you define a protocol, which confused me for the longest time. The View declares a "protocol" called IBE_ViewDataSource, and some particular class of IBE_Data will name itself as the data source of the View. The protocol is just a list of methods, nothing more, and they are named (declared) but not defined by the View. The View also declares a member variable which is of type "id <IBE_ViewDataSource>", i.e. essentially a made-up pseudo-type of variable. The declaration goes like this: @property (some compiler instructions) id <IBE_ViewDataSource> theDataSourceForTheDisplay; Anything you assign to this must be of a class that implements the protocol, or in other words, that defines the methods declared by the protocol. Example methods would be "returnTheWholeDataArray" or "returnJustTheNumberOfObjectsInTheDataArray". Now the last thing is that the View Controller will assign a Data object to the variable "theDataSourceForTheDisplay", i.e. something like self->thisIsTheDisplay->theDataSourceForTheDisplay = thisIsAllMyData; This is typically done at startup, since the View Controller is close enough (in spirit) to the "main" function. The trick is that the View, through "theDataSourceForTheDisplay", knows about "thisIsAllMyData" but doesn't know much about it - it can only access it via the protocol (I think - I'm new to this stuff).
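For what it's worth, the data-source arrangement described above is essentially the delegate pattern, and the same shape can be sketched in Python. The names echo the hypothetical IBE_* ones in the text; this is an analogy to illustrate the structure, not Apple's actual API:

```python
from typing import Optional, Protocol

class ViewDataSource(Protocol):
    """The 'protocol': methods the view declares but does not define."""
    def number_of_objects(self) -> int: ...
    def object_at(self, index: int) -> str: ...

class View:
    """Knows its data source only through the protocol's methods."""
    def __init__(self) -> None:
        # The analogue of the id<...> property:
        self.the_data_source_for_the_display: Optional[ViewDataSource] = None

    def render(self) -> list:
        ds = self.the_data_source_for_the_display
        return [ds.object_at(i) for i in range(ds.number_of_objects())]

class Data:
    """A model class that implements the protocol, so it can serve as the data source."""
    def __init__(self, items) -> None:
        self.items = list(items)
    def number_of_objects(self) -> int:
        return len(self.items)
    def object_at(self, index: int) -> str:
        return self.items[index]

# The view controller wires them together at startup:
view = View()
view.the_data_source_for_the_display = Data(["textbook", "account"])
print(view.render())
```

The key property is the one described in the text: the view never sees Data's internals, only the two protocol methods, so any class implementing them could be swapped in as the data source.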

Back to the question
But I don't want a "thisIsAllMyData" - that's too much in one hit. I just want "thisIsMyMainDataObjectForDisplayingStuff". That must include textbook and account information. So I could go defining different classes as all being data sources (there is no limit, I think) but that seems complex, and I think these things are usually kept pretty tight - very few protocols and data classes. Hence the belief that I need a separate file class - I'm trying to think like an object in that the file class is part of a filing system, which handles the records and stuff - like a kind of bureau, rather than the stuff it contains. Dmcq's choice for best advice was in fact what I was thinking of when I posted the question - it is definitely my main concern that I have no real object orientation. Any thoughts? Also, again, if anyone comes across it, if you have a link that goes into the same advice but that gives examples of actual problems and stuffups, that would be fantastic. IBE (talk) 18:35, 12 February 2013 (UTC)
 * Actually I've found a good start, Circle-ellipse problem, and also Object-oriented programming. IBE (talk) 19:04, 12 February 2013 (UTC)