Wikipedia:Reference desk/Archives/Computing/2009 December 6

= December 6 =

Why ALU division is slow
I'm making slides about why integer division in an ALU is slow. To be thorough, I've been Googling to see if there are any reasons I haven't included, and I see many websites/papers claiming that one reason for the slowness is that you have to check in each iteration to see if you are done. With addition and multiplication, the number of iterations is known from the start. I find that a bit strange: how can it take more iterations than there are bits in the dividend? -- k a i n a w &trade; 01:42, 6 December 2009 (UTC)
 * Could it be because of the added complexity of calculating fractions or remainders? I don't know that much about how the logic systems keep track of things, but if you were dividing 12 (1100) by 8 (1000), the answer would be 1.5 (0001.1).  Any other odd divisions would end up with more bits. Maybe that's not how it works; I don't know for sure, since my digital logic education didn't get that far (I'm more of an RF guy). &mdash;Akrabbimtalk 04:16, 6 December 2009 (UTC)


 * For integer division, 12/8=1. Each step of the process, be it restoring or non-restoring, produces at least 1 bit of the answer.  Since the maximum answer for 4-bit division is 4 bits, it will iterate a maximum of 4 times.  By your answer, I wonder if a lot of sites are confusing integer with float division. --  k a i n a w &trade; 04:24, 6 December 2009 (UTC)
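The fixed iteration count is easy to see in code. Below is a toy restoring divider (illustrative only, not any specific hardware's implementation): it produces exactly one quotient bit per pass, so an n-bit dividend takes exactly n iterations, and the count is known before the loop starts.

```python
def restoring_divide(dividend, divisor, n_bits):
    """Restoring binary division: exactly n_bits iterations,
    one quotient bit per step -- the iteration count is fixed."""
    remainder = 0
    quotient = 0
    for i in range(n_bits - 1, -1, -1):      # always n_bits passes
        # Shift in the next dividend bit, MSB first.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:
            remainder -= divisor             # trial subtraction succeeds
            quotient |= (1 << i)
        # else: "restore" -- we simply never commit the subtraction
    return quotient, remainder

print(restoring_divide(12, 8, 4))   # (1, 4): 12 = 8*1 + 4
```

Note the serial dependency: each pass needs the remainder left by the previous one, which is the real obstacle to parallelising division.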


 * After I shut down my computer last night, I realized that even floating-point division has a definite number of iterations. For example, SRT division begins with k padded zeros.  The number of zeros (k) is how many shifts will be needed to complete the division.  So, even if there is confusion between integer and float division, the number of iterations is still known before the iterations begin.  Therefore, I think it is just nonsense being copied and pasted throughout the Internet that one reason for division to be slow is that you don't know the number of iterations before you begin. --  k a i n a w &trade; 16:24, 6 December 2009 (UTC)


 * Division isn't important, so why bother spending a lot on it? The division algorithms in use are fast enough that division isn't a bottleneck, and that's good enough. A little extra speed on addition or multiplication is more worthwhile. I don't see any great problem with making it almost as fast as a multiplication if it really were important. Dmcq (talk) 18:56, 6 December 2009 (UTC)
 * That is a horrifying over-simplification. Some computer graphics algorithms such as texture-mapping have to compute at least a couple of divide operations for every single pixel that's drawn. The speed of division can be a critical bottleneck and computer graphics people spend a lot of effort to minimize the number of them. SteveBaker (talk) 03:55, 7 December 2009 (UTC)


 * (It's been a long time since I thought about this!) Isn't it because multiplication can be done in parallel?  For a 16x16 bit multiplier:
 * Take 16 differently wired multiplexers - each one selects between zero and the first operand shifted left by 0 through 15 bits (that's just wiring up the inputs of the muxes - no actual math is required).
 * Use the 16 bits of the second operand to switch the 16 multiplexers (a 0 selects the zero input of the mux - a 1 selects the shifted version of the first operand).
 * Add the 16 outputs using a cascaded 16-way adder.
 * I don't think there is a similar algorithm for division - you have to know the result of each bit before you can calculate the next bit - so a 16-bit division requires anything up to 16 sequential steps. SteveBaker (talk) 00:46, 7 December 2009 (UTC)
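The mux-and-adder-tree structure above can be modelled in a few lines (a software sketch of the hardware idea, not an actual circuit). The key point is that every partial product depends only on the two operands, never on another partial product, so all 16 could be formed simultaneously and summed by an adder tree:

```python
def mux_multiply(a, b, n_bits=16):
    """Shift-and-add multiplication, structured like the hardware:
    each 'mux output' below is independent of all the others, so in
    hardware all n_bits partial products can be formed at once and
    then combined by a cascaded adder tree."""
    partials = [(a << i) if (b >> i) & 1 else 0   # the n_bits mux outputs
                for i in range(n_bits)]
    return sum(partials)                          # the adder tree

print(mux_multiply(1234, 5678))   # 7006652
```

Division offers no such decomposition, which is exactly the serial-dependency problem described above.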


 * Floating point is a lot like integer math - remember, you can simply add or subtract the exponents then multiply the mantissae using integer math. There is a bit more messing around for the corner cases (denormalised numbers, etc) - but it's not that hard. SteveBaker (talk) 00:49, 7 December 2009 (UTC)
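The exponent/mantissa decomposition can be demonstrated with Python's `math.frexp`/`math.ldexp` (a software illustration of the idea only: the mantissa product below is done in floating point for brevity, where real hardware would use an integer multiplier plus normalisation, rounding, and the corner-case handling Steve mentions):

```python
import math

def float_mul(x, y):
    """Multiply two floats 'by hand': add the exponents and
    multiply the mantissae (each frexp mantissa m satisfies
    0.5 <= abs(m) < 1, with x == m * 2**e)."""
    mx, ex = math.frexp(x)
    my, ey = math.frexp(y)
    return math.ldexp(mx * my, ex + ey)

print(float_mul(3.5, 2.25))    # 7.875
```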


 * First, no-one seems to have provided the seemingly obvious wikilink to division (digital) yet, so let me do that. Anyway, I think SteveBaker's answer hits the core of the matter: binary multiplication can be effectively parallelized in hardware using Wallace trees or similar methods, while no such simple parallelization method is known for division.  In particular, one of the "fast division" methods described in the article, Newton–Raphson division, basically works by reducing the division to multiplication by the reciprocal of the divisor, which must first be computed iteratively.  Therefore it will always be slower than the corresponding multiplication operation.  —Ilmari Karonen (talk) 12:45, 7 December 2009 (UTC)
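A rough software sketch of the Newton–Raphson idea (illustrative only: real dividers work on fixed-point mantissae and seed the iteration from a lookup table). The recurrence x ← x(2 − dx) converges quadratically to 1/d, so the iteration count is small and fixed, but each step costs two multiplications that cannot overlap with each other:

```python
def nr_divide(n, d, iterations=5):
    """Approximate n/d (d > 0) via Newton-Raphson reciprocal:
    refine x ~ 1/d with x = x*(2 - d*x), then multiply by n."""
    # Scale so 0.5 <= d < 1 (n scales identically, keeping n/d fixed).
    while d >= 1.0:
        d /= 2.0; n /= 2.0
    while d < 0.5:
        d *= 2.0; n *= 2.0
    x = 48/17 - (32/17) * d         # classic linear seed for [0.5, 1)
    for _ in range(iterations):     # fixed, known iteration count
        x = x * (2.0 - d * x)       # two dependent multiplies per step
    return n * x

print(round(nr_divide(355.0, 113.0), 9))   # ~3.14159292
```

Since the final multiply by n comes on top of the reciprocal refinement, the whole operation is necessarily slower than one multiplication, as Ilmari notes.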


 * I didn't link to the article because it is pretty much worthless. It completely fails on the "how" because it assumes the user knows how everything works and makes very vague references to restoring and non-restoring techniques.  I considered rewriting it, but it looks to me like it used to be a good article and then a group of hardheaded editors came along and took ownership of it, forcing it to be useless. --  k a i n a w &trade; 13:40, 7 December 2009 (UTC)
 * I was searching for answers to a question below, and came across this: New Radix-16 Divider in Intel Technology Journal. There's a sort of implementation overview, but no diagrams.  Maybe this can point you to some other leads to follow.  Nimur (talk) 16:48, 7 December 2009 (UTC)

Non-linear stretching for HDTVs
Some widescreen televisions offer a non-linear stretch mode for displaying 4:3 images on the entirety of the 16:9 display without losing any (or at most very few) lines at the top and bottom edges and without the unpleasant effect of uniform stretching (especially on people). Unfortunately, there seems to be no standard name for this technique. Widescreen display modes calls it "wide zoom", but I suspect that name may instead refer to a stretch that differs in x and y (say, 16/12=133% in x and 120% in y, losing 1/12=8.3% of the lines in the image and distorting an on-screen square to an aspect ratio of only 110.8%). Other names seem to be Just, Horizon, Smart Stretch (which seems to be Sharp's name for it), Panorama, or TheaterWide &mdash; but I've read elsewhere that TheaterWide is another non-uniform x-y stretch. So:
 * 1) Do all of those names really refer to non-linear stretches?
 * 2) What is the name of the non-linear stretch mode (if any) for Samsung, Sony, and LG televisions?
 * 3) What other brands support an equivalent option (and under what names)?
 * 4) Do Blu-ray players exist that can apply such a stretch if the TV can't?  Can they apply it to upconverted DVDs as well as to BDs?
 * 5) Are there any significant quality differences between implementations?
 * 6) How does the notional aspect ratio of a DVD play into this?  Some movies, like Willy Wonka & the Chocolate Factory, were actually shot in 4:3, but their DVDs were released early enough that (I believe) they are expanded (linearly, surely) on the disc to 16:9, with the expectation that a DVD player connected to a 4:3 TV will undo the stretch.  Having that movie stretched non-linearly and then recompressed linearly to 4:3 would obviously lose.
Thanks in advance for responses even to a few of these many questions! --Tardis (talk) 04:17, 6 December 2009 (UTC)
 * Having owned a Sharp TV, I know that their "smart stretch" is a non-linear zoom where the left and right edges are stretched more heavily than the middle, converting a 4:3 image fully to 16:9. Most DVD players I have seen are more concerned with fitting a 16:9 image onto a 4:3 display than the other way around, since (in the US, anyway) there are more 4:3 TVs than there are DVDs at *only* 4:3 (many of which are dual-sided, offering 16:9 as well).  Hope this helps! --66.195.232.121 (talk) 17:14, 7 December 2009 (UTC)
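To make "stretched more heavily at the edges" concrete, here is one possible mapping (purely illustrative -- manufacturers don't publish their curves) from normalised output position x to normalised source position s for a 4:3-to-16:9 stretch. Matching s'(0) = 4/3 keeps centre pixels at their true width (the 16:9 panel is 4/3 wider than the displayed 4:3 frame), while s(1) = 1 ensures the whole source image is used:

```python
def source_pos(x):
    """Map normalised output position x in [-1, 1] to normalised
    source position s in [-1, 1] for a 4:3 -> 16:9 stretch.
    The cubic s = (4/3)x - (1/3)x**3 has s'(0) = 4/3 (centre pixels
    physically undistorted) and s(1) = 1 (full source used), and is
    strictly increasing on [-1, 1] since s'(x) = 4/3 - x**2 > 0."""
    return (4.0/3.0) * x - (1.0/3.0) * x**3

# Centre barely distorts; the output edges pull from a compressed
# sliver of the source, i.e. edge pixels get stretched the most:
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"output {x:+.2f} <- source {source_pos(x):+.3f}")
```

Under this particular curve the physical magnification rises from 1x at the centre to 4x at the edges; resampling each scan line through such a function is essentially what the TV's scaler does.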


 * Re Willy Wonka, according to the article it was shot open matte and released with both 4:3 and widescreen crops on the DVD. The widescreen crop was the intended one (the one the cinematographer shot for). If you play the 16:9 version on a 16:9 TV, you'll see the movie as intended. If you play the 4:3 version with nonlinear stretching to 16:9, marvelous things will happen. -- BenRG (talk) 22:11, 7 December 2009 (UTC)

Make apps running outside a VM visible within it?
In Windows, is it possible to set up a virtual machine so that selected apps running outside the VM have their memory mapped into the VM and are visible to apps running within it? (The only practical use for doing so that comes readily to mind is a future-proof way to defeat Warden -- run Warden in a VM, and let it see only Blizzard games and Windows system processes. Running the games themselves inside the VM would probably slow them down.) Neon  Merlin  08:35, 6 December 2009 (UTC)
 * Surely not. I don't think it violates the rules of thermodynamics, but trying to get the guest OS to imagine it has a process running when it really doesn't sounds far too complicated to be feasible.  --Tardis (talk) 16:01, 6 December 2009 (UTC)
 * Your best bet is to forget that one machine is "virtual" and make the applications visible to each other via a network service. Both the host and the virtual PC can operate as either server or client, and can communicate with each other via an IP or TCP/IP-based protocol.  Depending on your needs, you might find OpenMPI very helpful for dealing with such tasks, especially if you are writing the software.  In general, though, this categorically excludes shared-memory programming, which is what you are seeking (mapping the memory space of one computer across a different computer).  You could switch to a platform like SGI's Altix, which makes network-based shared memory "transparent", but this is not possible on Windows or most Linuxes.  Further, it has huge performance obstacles - and since you're already virtualizing the system... well, why would you virtualize and then attempt to share memory?  Anyway, re-reading your post, it sounds like your goals are very specific, and I think this approach is not going to work.  You stand a better chance of running a proxy server and intercepting the network traffic, modifying it, and re-transmitting "clean" traffic - but this has its own set of challenges.  Nimur (talk) 20:19, 6 December 2009 (UTC)

Yes it's possible, see Windows_Virtual_PC —Preceding unsigned comment added by 82.44.55.75 (talk) 20:43, 6 December 2009 (UTC)


 * Warden examines other processes, and mmaps their memory (our article says it only mmaps the WoW process itself, now at least). You can subvert this without a VM (indeed, a VM seems like an unnecessary burden). It's generally possible to intercept kernel calls, and to give Warden fake (i.e. okay) copies of the pages you've patched when it tries to mmap. Or you could just patch Warden yourself.  Of course Warden will have some protections against that, but those can only go so far. With a bit of work you'll succeed in being able to change the WoW process without Warden noticing. But when Blizzard figure it out (which they will if you post the program onto some public forum, or when they change Warden to be more devious, which I guess they do very often) they'll break your hack (they'll look for your hack, and they'll run extra checks). So starts a war of randomly-named polymorphic code (much like the war between virus checkers and the polymorphic viruses that seek to disable them). Eventually they gain an advantage because they can persuade (i.e. pay) Microsoft to sign a Vista/Win7 kernel module, which you can't, and which you can't patch. You can still subvert that by rootkitting the entire machine to overcome Windows' signed-kernel-code restrictions. That works unless a given machine will not boot a patched kernel, verified using the Trusted Computing infrastructure. -- Finlay McWalter • Talk 20:47, 6 December 2009 (UTC)


 * Of course Blizzard have another trick up their sleeve. Warden doesn't (or need not) disable WoW even if it sees a given cheat. They can just keep a log of all the accounts that use one, and then every six months or so close all those accounts for a TOS breach. That will piss off the regular users of the cheat no end (because even with the cheat they've spent/wasted countless hours on their characters); it won't inconvenience the goldfarmers (who you'd expect would be the biggest users of cheatware), as they're smart and will roll accounts over quickly regardless (in part because of Blizzard's existing anti-goldfarming measures). -- Finlay McWalter • Talk 20:53, 6 December 2009 (UTC)

Web
What was the very first Web site? jc iindyysgvxc  (my contributions) 10:26, 6 December 2009 (UTC)


 * According to this page by Tim Berners-Lee it was http://nxoc01.cern.ch/hypertext/WWW/TheProject.html. That page is no longer active but there is an archived version. -=# Amos E Wolfe talk #=- 10:34, 6 December 2009 (UTC)


 * Keep in mind that the World Wide Web is only a small part of the overall Internet. Tim Berners-Lee started serving HTML pages in around 1989, and so earns the claim as the first "Web Site", but there are other internet and non-internet events which predate that.  History of the Internet is a good overview.  Nimur (talk) 15:33, 6 December 2009 (UTC)


 * Also note... "Web page" implies that you are looking for an HTML document that is accessible through the Internet via a webserver. Long before there were web pages, there were HTML documents.  You just couldn't access them via a webserver.  The first ones I saw were only available via FTP - just to show what HTML syntax looked like.  Then, by 1993, webservers were popular enough that you could access a lot of HTML through them.  If memory serves, it wasn't until around 1995 that the web succeeded in completely supplanting gopher. --  k a i n a w &trade; 13:45, 7 December 2009 (UTC)

OOP/philosophy terminology concordance?
Do any dictionaries exist for translating between object-oriented programming terms and their equivalents in philosophical ontology? Neon Merlin  13:19, 6 December 2009 (UTC)


 * Probably not. And if there was, would it be very long, or very interesting? I doubt it. Which is probably why there isn't one. --Mr.98 (talk) 22:29, 6 December 2009 (UTC)


 * Does The Jargon File help? (I'm not sure I'd describe it as 'philosophical ontology' - but it's a dictionary full of the terms that open-source programmers use, explained in more-or-less plain English.) SteveBaker (talk) 04:41, 8 December 2009 (UTC)

ip address
A while ago I asked what the technical risks of someone knowing your ip address were, like hacking etc. Now I'm wondering what other issues might come from someone malicious knowing and wanting to cause trouble, for example someone randomly harvesting ips from Wikipedia recent changes and reporting them to the respective ISPs as having violated their TOS, hosting / posting illegal material, violating copyrights etc. All of the accusations are baseless, but would an ISP terminate a customer if someone were to go around reporting random ips? —Preceding unsigned comment added by 82.44.55.75 (talk) 18:11, 6 December 2009 (UTC)


 * Probably not. ISPs themselves have very limited liability for things like that. As a consequence, they don't get much out of enforcing requests other than hassles. They will comply with legitimate legal requests (e.g. a subpoena for info on the real human behind an IP address) but I doubt most would do anything just because some company asked them to—the ISPs themselves have very little to gain in doing so. --Mr.98 (talk) 20:06, 6 December 2009 (UTC)
 * Simply complaining to an ISP is not sufficient for them to take action against anyone. Usually, a subpoena or court order is required before the ISP will do anything; the barrier to obtaining one of these is sufficiently high that it is nearly impossible to get one frivolously, let alone to get them in bulk for many IPs without any actual cause.  Nimur (talk) 20:16, 6 December 2009 (UTC)
 * Ok, thank you. @ Mr.98, I wasn't even thinking of something as large as a company making reports, I was thinking of just a random person with nothing better to do collecting ip addresses and reporting them to a page like this, out of spite or boredom or something. Anyway, thank you for the helpful answers :) —Preceding unsigned comment added by 82.44.55.75 (talk) 20:26, 6 December 2009 (UTC)


 * Again, the ISPs have basically no liability for user activity (at least in the U.S.A.), and get their money by keeping users, not kicking them off. They would have to be extraordinarily stupid to actually pay attention to random letters alleging bad behavior and actually act on them in a way that hurts their own economic interests. It doesn't mean it's not possible, but I doubt it could be a serious problem in any way. It would be an extremely unproductive way of trying to make trouble. --Mr.98 (talk) 22:28, 6 December 2009 (UTC)


 * While this doesn't really answer the question, we do have Abuse response, which handles legitimate long-term abuse provided evidence is given in the form of 'diffs'. It isn't particularly active, responses aren't that great, and it primarily chases cases involving educational institutions, who are more likely to respond, but they do sometimes deal with normal ISPs. For privacy reasons, many ISPs are not going to report whether they did anything, so if the behaviour stops, it's possible it's because the ISP did something, and it's possible the vandal just gave up. Many ISPs do of course have TOS which would allow them to cut off users, although I would expect a warning is more likely (and probably enough to scare off most vandals). Abuse reports coming from random people may work if strong evidence is provided, and Wikipedia diffs may be accepted as good enough evidence. Of course, abuse reports coming from Wikimedia system admins would likely be even more effective at getting a response, but the Foundation doesn't have the resources and I'm not sure about the interest. (I see some discussion of getting some official sanction and an appropriate @wikipedia or @wikimedia address; it would be interesting to see how much this helps if it does happen. I expect a lot.) Abuse reports without any real evidence are almost certainly going to be completely ignored. Nil Einne (talk) 13:54, 7 December 2009 (UTC)
 * I added the citation-needed tags myself, since I'm not entirely sure how correct those statements are. It was my impression, but some things I've read, particularly at Abuse response/2009 Revamp, make me question it. Volunteers there will be able to give you a better idea. Nil Einne (talk) 14:08, 7 December 2009 (UTC)

Synching two computers
I have a desktop with Windows Vista and a laptop with Windows 7 and I would like to set it up so that if I change/add/delete a file on one computer, that the change will be made on the other computer as well so I don't have to use a flash drive to move files back and forth all the time. Is there an easy way to do this? I would be willing to spend a small amount of money if I need some sort of hardware to do this. In case this info is helpful/necessary, my network is an unsecured wireless network as my apartment provides this for free. And, I have a Seagate external hard drive which automatically backs up my desktop computer, but I believe it can only be hooked up to one computer at a time. Thanks. StatisticsMan (talk) 20:18, 6 December 2009 (UTC)


 * A one way sync (just changes on one machine to the other) is quite easy - rsync is easy and very good. Personally I'd tunnel that over ssh (particularly because you're on an insecure connection); you can get easy to setup ssh client and server from OpenSSH. A two way sync (where a change on either machine is propagated to its counterpart) is more challenging - rsync's counterpart Unison (file synchronizer) will do that. For any two-way scheme (including those from Microsoft) you'll run into "conflicts", where the same file has been changed on two different machines - it's just the same problem as a MediaWiki edit conflict, and like those you're left to resolve the conflict manually (usually you end up with two versions of the conflicted file). -- Finlay McWalter • Talk 20:23, 6 December 2009 (UTC)
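As a toy model of the one-way case (a bare-bones sketch, not a substitute for rsync's delta transfer or Unison's conflict handling):

```python
import filecmp
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path):
    """One-way sync: make dst an exact copy of src's files.
    Copies new/changed files and deletes files absent from src.
    (Empty directories are left behind -- good enough for a sketch.)"""
    dst.mkdir(parents=True, exist_ok=True)
    src_files = {p.relative_to(src) for p in src.rglob("*") if p.is_file()}
    dst_files = {p.relative_to(dst) for p in dst.rglob("*") if p.is_file()}
    for rel in src_files:
        target = dst / rel
        if rel not in dst_files or not filecmp.cmp(src / rel, target, shallow=False):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src / rel, target)       # copy2 preserves mtime
    for rel in dst_files - src_files:
        (dst / rel).unlink()                      # deleted upstream
```

A two-way sync is essentially this run in both directions plus a record of the last synced state; that record is what lets a tool distinguish "changed here" from "changed there" from "conflict".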


 * You can try something like Jungle Disk. For US$4 a month, you get 10 GB (you can purchase additional space at a rate of US$0.16 per GB). You can install the Jungle Disk clients on both your machines, and it will keep them in sync in real time. This may not be ideal if you have a lot of data because it will sync it via the Internet. It has the added benefit of letting you access the data from anywhere that you have internet access, and keeps an offsite backup. - Akamad (talk) 02:41, 7 December 2009 (UTC)
 * A similar service is Dropbox, where you get 2 GB for free. You can find other services and information in Comparison of online backup services. Lukipuk (talk) 15:46, 7 December 2009 (UTC)