Wikipedia:Reference desk/Archives/Computing/2012 August 29

= August 29 =

Determining PDF page size programmatically
I plan to batch-process a large number of PDF files (annotating them using a combination of ghostscript's ps2pdf utility and pdftk stamp). It would be helpful if I could determine, from my script (or C++ program), the page size (and orientation) of the first page of each PDF file. Any ideas about how that could be done? Thanks, --NorwegianBluetalk 09:59, 29 August 2012 (UTC)


 * ImageMagick's identify program will display something like

foobar.pdf[0] PDF 612x792 612x792+0+0 16-bit Bilevel DirectClass 61KB 0.140u 0:00.160
 * with a [1] etc, for subsequent pages. -- Finlay McWalterჷTalk 10:38, 29 August 2012 (UTC)


 * Where 612x792 appears to be the measurements in points -- Finlay McWalterჷTalk 10:44, 29 August 2012 (UTC)


 * Excellent, problem solved! Thanks for the quick reply! --NorwegianBluetalk 11:18, 29 August 2012 (UTC)


 * It turned out that ImageMagick's identify relies on ghostscript, which crashed for every PDF I tried it on, resulting in pagefuls of PostScript error messages on my system. (I have the most recent version of both packages.) I then remembered the xpdf package, and sure enough, there is a pdfinfo utility that, among other things, returns the page size in points. Worked beautifully. I haven't tried it on documents containing mixed formats yet, but I could always handle those manually if they turn out to be problematic. --NorwegianBluetalk 21:09, 29 August 2012 (UTC)
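The relevant pdfinfo line can be pulled out with awk. The sketch below parses a captured sample line rather than invoking pdfinfo itself, and the exact spacing of real output may vary:

```shell
# pdfinfo (from xpdf/poppler) reports the first page's size on a line like
# the sample below; here we parse a captured sample instead of a real file.
sample="Page size:      612 x 792 pts (letter)"
size=$(echo "$sample" | awk '/^Page size:/ { print $3 "x" $5 }')
echo "first page: $size points"
```

Against a real file this would be `pdfinfo input.pdf | awk '/^Page size:/ { print $3 "x" $5 }'`, where input.pdf is a placeholder name.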

Integration of Intel Fortran 11.1 compiler with the Visual Studio 2008
I am trying to integrate the Intel Visual Fortran compiler 11.1 with Visual Studio 2008. I have successfully installed both, and I think I have managed to integrate them to a certain extent, because now I get the Visual Fortran tab, where I can make a new project and so on. I have a 64-bit system and an Intel Xeon processor. When I write a Fortran program and build it, it gives me the error "Intel Visual Fortran Compiler for 'Win32' not installed". Please help me with this issue. I am not a programmer, so I have problems understanding the nuances. Thanks - DSachan (talk) 12:38, 29 August 2012 (UTC)


 * Never mind, I think I managed it. DSachan (talk) 13:59, 29 August 2012 (UTC)

Idea to transfer entire Fedora 17 Linux system to new 2 terabyte hard disk
I think I've come up with a "least troublesome" way of transferring my entire Fedora 17 Linux system to my new 2 terabyte hard disk:
 * 1) Remove my old disks and put the new disk in.
 * 2) Boot up with a Fedora 17 installation DVD and install the most minimum system.
 * 3) Put my old disks back, along with the new disk.
 * 4) Boot from either the old disks or the new disk, it doesn't really matter, as long as I end up in a working Linux system.
 * 5) Transfer all the files from the old disks to the new disk, but keep  intact.
 * 6) Remove my old disks once again.

If I have a bootable partition on both disks, do I get to somehow choose which disk I'm booting from? Or how will my computer decide it? Is such a thing even possible?

Will I automatically get my old user account when I transfer the entire root partition over to the new disk? Exactly how are user accounts stored in Linux? Is there a way to preserve user IDs on the destination when copying files? (But if I mess up the user IDs on the files, it won't really matter, I can always use .) J I P | Talk 17:43, 29 August 2012 (UTC)


 * "If I have…"
 * Your computer will likely first attempt to boot from the first disk it sees, which is determined by the cable connection order and the BIOS (etc.)
 * "Will I automatically…"
 * Yes, but you could just as well just transfer it over without doing a fresh install and fix  (as explained already in one of your other items).
 * "Exactly how…"
 * If you copy everything, everything will be preserved.
 * ¦ Reisio (talk) 17:53, 29 August 2012 (UTC)
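The "copy everything" step above can be sketched as follows, assuming GNU cp and (for real ownership to survive) running as root. The directory names here are placeholders, demonstrated on scratch data:

```shell
# cp -a preserves owners, permissions, timestamps and symlinks.
mkdir -p old/sub && echo hi > old/sub/file.txt   # stand-in for the old disk
cp -a old new                                    # stand-in for the new disk
content=$(cat new/sub/file.txt)
echo "copied content: $content"
rm -r old new
```

`rsync -aH` is a common alternative that additionally preserves hard links.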

Further question: If I no longer will have any use for the old disks, I intend to sell them or give them away to my siblings. Despite 99.999999% of Finland's population using only Microsoft Windows and thus being unable (or at least not easily able) to read Linux disks, I would much rather not give their new owner access to all my personal files. Will removing all the files and/or reformatting the partitions be enough? What if I modify the partition table, deleting every partition on the disks? J I P | Talk 19:02, 29 August 2012 (UTC)


 * Personally I find old disks handy for keeping an old backup (labelled "my system date ddmmyy", wrapped in an ESD bag, and stuffed in a cupboard somewhere). The aftermarket value of old disks is fairly minimal. If you want to blank a disk properly, dd /dev/zero over its entirety. -- Finlay McWalterჷTalk 19:10, 29 August 2012 (UTC)
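The zero-fill Finlay describes would be `dd if=/dev/zero of=/dev/sdX` against the real device (irreversible, so double-check the device name); the mechanics are sketched here on a scratch file instead:

```shell
truncate -s 1M scratch.img                       # stand-in for the disk
dd if=/dev/zero of=scratch.img bs=64K count=16 status=none
nonzero=$(tr -d '\0' < scratch.img | wc -c)      # count any surviving bytes
echo "non-zero bytes left: $nonzero"
rm scratch.img
```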


 * Will removing all the files and/or reformatting the partitions be enough?
 * Not quite (see Finlay's suggestion). And yes, HDDs do not age that much; it's more that new HDDs are much faster than, say, 5-year-old drives were when they were new. The wear and tear is minimal -- if it had not been that way, they would have failed already.
 * If I have a bootable partition on both disks, do I get to somehow choose which disk I'm booting from?
 * On most bioses, I can press F11 and it lists all the block devices it found so far - including opticals but no USB HDDs on my machine. However, that F-key will probably depend on the manufacturer, and so will the range of detected drives. If F11 doesn't work, try to press pause early during booting - my TFT is too slow to pick up signal before it's over, so I cannot read the "Press F11 for Boot Devices menu" unless I pause the action.
 * I'm clueless, though, about how the drives will be mapped if you have Linux on the secondary HDD. I did use Linux for some time (on the primary), but its main use was to repair my Windows install when it was too screwed up to repair itself. Which happened about quarterly back then.
 * I'm using the boot device selection for Windows mainly but it can be useful for switching between Linux and Windoze in a cleaner way than relying on M$'s boot damager.
 * I mean 'manager.' - ¡Ouch! (hurt me / more pain) 08:14, 31 August 2012 (UTC)

I did all of the above, except that at the end I didn't copy any of the files on the root partition across. Instead, I copied all the files on my two personal partitions - one for my main home directory and one for all my photographs - and then just reinstalled every package I had installed since I upgraded from Fedora 14 to Fedora 17. (I hadn't really made any changes to the root partition except to install packages.) Then I shut down the computer, removed my old disks, and booted up. Everything worked OK. By copying my entire home directory, file for file, I got the exact same personal settings back even though they are now on a completely new disk. Now I only have to decide what to do with my old disks. I could either keep them, sell/give them to my company, sell/give them to my siblings, or try to sell them on the public market. If I'm going to sell them to strangers I intend to get some compensation, but I'll make an exception for my company or my siblings. My company is 100% Windows only, and both my brother-in-law and my brother use Windows, while my sister uses a Mac. None of them should be easily able to read Linux disks, but to be on the safe side, I'll probably dd /dev/zero on the whole disks anyway. J I P | Talk 18:04, 1 September 2012 (UTC)
 * One small comment though. When I plugged in my new disk and one of my old disks, the one that contained all my photographs, my computer refused to boot up. It appears it was trying to boot from the first disk it found on the SATA bus, but as the photograph disk didn't contain a bootable partition, it just froze. A reboot and a change in the BIOS settings fixed that problem. J I P | Talk 18:06, 1 September 2012 (UTC)

whois
In whois for google I see

Server COM.whois-servers.net returned the following for GOOGLE.COM

GOOGLE.COM.ZZZZZZZZZZZZZZZZZZZZZZZZZZZ.LOVE.AND.TOLERANCE.THE-WONDERBOLTS.COM GOOGLE.COM.ZZZZZZZZZZZZZZZZZZZZZZZZZZ.HAVENDATA.COM GOOGLE.COM.ZZZZZZZZZZZZZ.GET.ONE.MILLION.DOLLARS.AT.WWW.UNIMUNDI.COM GOOGLE.COM.ZZZZZ.GET.LAID.AT.WWW.SWINGINGCOMMUNITY.COM GOOGLE.COM.ZOMBIED.AND.HACKED.BY.WWW.WEB-HACK.COM GOOGLE.COM.ZNAET.PRODOMEN.COM GOOGLE.COM.Z.LOVE.AND.TOLERANCE.THE-WONDERBOLTS.COM GOOGLE.COM.YUCEKIRBAC.COM GOOGLE.COM.YUCEHOCA.COM GOOGLE.COM.WORDT.DOOR.VEEL.WHTERS.GEBRUIKT.SERVERTJE.NET GOOGLE.COM.VN GOOGLE.COM.VABDAYOFF.COM GOOGLE.COM.UY GOOGLE.COM.UA GOOGLE.COM.TW GOOGLE.COM.TR GOOGLE.COM.SUCKS.FIND.CRACKZ.WITH.SEARCH.GULLI.COM GOOGLE.COM.SPROSIUYANDEKSA.RU GOOGLE.COM.SPAMMING.IS.UNETHICAL.PLEASE.STOP.THEM.HUAXUEERBAN.COM GOOGLE.COM.SOUTHBEACHNEEDLEARTISTRY.COM GOOGLE.COM.SHQIPERIA.COM GOOGLE.COM.SA GOOGLE.COM.PEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEENIS.COM GOOGLE.COM.PE GOOGLE.COM.MY GOOGLE.COM.MX GOOGLE.COM.LOLOLOLOLOL.SHTHEAD.COM GOOGLE.COM.LASERPIPE.COM GOOGLE.COM.IS.NOT.HOSTED.BY.ACTIVEDOMAINDNS.NET GOOGLE.COM.IS.HOSTED.ON.PROFITHOSTING.NET GOOGLE.COM.IS.APPROVED.BY.NUMEA.COM GOOGLE.COM.HK GOOGLE.COM.HICHINA.COM GOOGLE.COM.HAS.LESS.FREE.PORN.IN.ITS.SEARCH.ENGINE.THAN.SECZY.COM GOOGLE.COM.ESJUEGOS.NET GOOGLE.COM.DO GOOGLE.COM.CO GOOGLE.COM.CN GOOGLE.COM.BR GOOGLE.COM.BITERMANSOLUTIONS.COM GOOGLE.COM.BEYONDWHOIS.COM GOOGLE.COM.AU GOOGLE.COM.AR GOOGLE.COM.ALL.THE.PEOPLE.WHO.SPAM.THE.WHOIS.ARE.SERIOUSLY.ANNOYING.SOMEPONY.COM GOOGLE.COM.AFRICANBATS.ORG GOOGLE.COM.9.THE-WONDERBOLTS.COM GOOGLE.COM.1.THE-WONDERBOLTS.COM GOOGLE.COM

To single out one record, look it up with "xxx", where xxx is one of the records displayed above. If the records are the same, look them up with "=xxx" to receive a full display for each record.

>>> Last update of whois database: Wed, 29 Aug 2012 17:52:34 UTC <<<

What has happened here? Why is this random, sometimes offensive spam in Google's whois record?


 * Heh, that is an unusual form of spam. It looks like people have subdomains of their sites that start with google.com, so a whois search for google.com finds them. For some reason I don't understand, whois sites are blocked at my office, so I can't see if there is a simple way to exclude those from the search. It doesn't look like we have an article on whois spam. 209.131.76.183 (talk) 18:35, 29 August 2012 (UTC)
 * Actually, the whois response lets you know how to single one out. Just put quotes around the domain name. I'm still not sure what the plan is to drive people to their sites this way... 209.131.76.183 (talk) 18:41, 29 August 2012 (UTC)


 * I don't believe the whois record is the target of them. They just have google.com subdomains to appear to be Google, for whatever reason that might be. OsmanRF34 (talk) 11:35, 30 August 2012 (UTC)


 * Notice the ZZZ domains at the top - it seems like they were picked so they would sort to the top of the list. They are registered as name servers, which is how they get subdomains into the list in the first place. I don't think there is any reason to have a nameserver with such an unusual domain name, and if there is, it is almost certainly for something shady. I'm still not sure what use this would be for spam or non-spam purposes. 209.131.76.183 (talk) 13:57, 30 August 2012 (UTC)
 * I'm curious about how you got this answer. I mean, what's this utility called? (Or what command did you use?) 190.60.93.218 (talk) 12:58, 31 August 2012 (UTC)
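For reference, output like the above is what a plain `whois google.com` returns. The exact-match syntax the server suggests (quoting, or `=google.com`) works server-side; the same idea can be mimicked client-side by filtering for the exact line, sketched here on entries copied from the output above:

```shell
# Entries copied (truncated) from the whois output shown earlier.
cat > entries.txt <<'EOF'
GOOGLE.COM.ZZZZZZZZZZZZZ.GET.ONE.MILLION.DOLLARS.AT.WWW.UNIMUNDI.COM
GOOGLE.COM.VN
GOOGLE.COM.BEYONDWHOIS.COM
GOOGLE.COM
EOF
exact=$(grep -x 'GOOGLE\.COM' entries.txt)   # -x: match the whole line only
echo "$exact"
rm entries.txt
```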

Hard disk storage in the future
I just calculated that at my present rate of taking an average of 90 photographs a day (that figure is skewed by the times I visit events like the World Bodypainting Festival - on a normal working day, I take 20 photographs at the most), 20 terabytes of hard disk storage would be enough for the rest of my life. What my niece (she's now 1 year old) or any possible children I might ever have do with my old photographs won't be my problem any more. So what is the easiest and cheapest way of acquiring 20 terabytes of storage? It doesn't necessarily have to be on a magnetic hard disk, but it should be a local storage medium nevertheless. It should be taken into account that this won't be a problem until about a decade from now. Is there any estimate of how storage technology will have developed by then? J I P | Talk 19:23, 29 August 2012 (UTC)
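The arithmetic behind that 20-terabyte estimate can be sketched as follows; the per-photo size and remaining-lifespan figures below are assumptions, not taken from the post:

```shell
photos_per_day=90
mb_per_photo=8      # assumed average file size; not stated above
years=60            # assumed remaining lifespan; not stated above
total_tb=$(awk -v p="$photos_per_day" -v m="$mb_per_photo" -v y="$years" \
    'BEGIN { printf "%.1f", p * 365 * y * m / 1000000 }')
echo "estimated lifetime need: $total_tb TB"
```

With those assumptions the total comes out a little under 16 TB, so 20 TB is in the right ballpark.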
 * 3TB hard drives are available quite readily these days, so 8 of those would get you 20GB (factoring in loss of space due to the file allocation table). But since hard drive capacities are increasing regularly and prices are (usually) stable or falling, it makes more sense to acquire storage space as needed. Also, unless you are planning to keep the same camera, camera resolutions are increasing too, so future photographs will use more space and your calculations may be off. As for estimates, Moore's law and Kryder's Law are interesting. AvrillirvA (talk) 19:47, 29 August 2012 (UTC)
 * I agree with this. The rest of your life is a very long time to extrapolate things, and by the time you've filled one 3TB disc there will probably be much larger storage available. If you do want 20TB now, I recommend some redundancy. With 8 discs, you'll probably see at least one failure. I would keep at least two copies of everything, and every 5-10 years it might not be a bad idea to get new discs and transfer the data to prevent losing data to discs dying of old age. You could also get a NAS device - it would let you put an array of discs in it allowing you to have access to the 20TB on your network, and would have built in support for keeping redundant data and warning about disc failures. 209.131.76.183 (talk) 19:53, 29 August 2012 (UTC)
 * I would suggest backups rather than just redundancy. In particular, relying on a NAS device or any other similar setup as the sole source of redundancy is fairly risky. For starters there is no geographical redundancy, so if your house or wherever the NAS device is stored catches fire and takes the NAS device with it, having 100 copies doesn't help. A single device also runs the risk of something like a power surge taking out multiple hard disks. Further, even a well-made NAS device runs some risk of malware or file system problems losing data, no matter how many copies are supposed to exist. The risk can be kept to a minimum if you know what you're doing, but in all likelihood, if you're asking questions about it on the RD, you don't. In other words, similar to a bog-standard RAID setup, redundancy even in a NAS device is more important for avoiding downtime than for protecting data, although it also helps with data lost between backups. If you can afford multiple layers, sure, go for a NAS or RAID setup and remote backups and whatever else. But if you can only afford one, it makes far more sense to have at least one backup stored in a remote location, updated reasonably often. Nil Einne (talk) 16:09, 30 August 2012 (UTC)


 * Did your calculations take into account that the filesizes of your pictures are all but guaranteed to grow? You don't plan on shooting at 16MP for the rest of your life, do you? The Masked Booby (talk) 01:57, 30 August 2012 (UTC)


 * 3TB hard drives are available quite readily these days, so 8 of those would get you 20GB
 * 20TB obviously. And since bitmaps (high quality photos at least) are large files, FAT granularity doesn't hit you nearly as hard. If you have a RAID-5, you'd have 21TB altogether, and I'd say that would still amount to enough space to store 20TB of bitmap files.
 * Still I'd say, backup regularly (shouldn't be hard on media, as most of the photos won't change, so you don't have to backup the same files again and again).
 * And be sure to validate the data. That is, keep checksums for the files, and compare regularly if they still apply. Silent data corruption is a killer.
 * And don't be TOO worried about >16MP resolutions. The human eye has only so much resolution, and going past that point (which might be more than 16MP though, I'm nowhere near expert level at that) is quite pointless. Camera manufacturers will keep saying that it ain't, but in the same vein I'll keep saying that they're a bunch of dazed PR guys. ;) - ¡Ouch! (hurt me / more pain) 08:15, 31 August 2012 (UTC)
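¡Ouch!'s checksum-and-verify suggestion can be sketched with standard coreutils, shown here on scratch data (the photo directory is a placeholder):

```shell
mkdir -p photos && echo "test data" > photos/img001.jpg   # stand-in files
find photos -type f -print0 | xargs -0 sha256sum > manifest.sha256
# Later (or after a restore), verify nothing has silently changed:
if sha256sum --check --quiet manifest.sha256; then ok=1; else ok=0; fi
echo "verification ok: $ok"
rm -r photos manifest.sha256
```

Re-running the `--check` step on a schedule, against both the originals and the backups, is what catches silent corruption before it propagates.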

So it's obvious I need geographical redundancy. Having 200 petabytes of storage at home won't be any good if my apartment is destroyed in a fire. I think what I should first do is get a lot of 1 TB or 2 TB USB hard disks (I currently have two, but I would need at least one or two more), and develop a scheme where I constantly keep at least one in a safe remote site, say a bank deposit box, and regularly swap the disk at the remote site with one at home, so the remote site gets an up-to-date backup. But there's one question that keeps bothering me: Data corruption can just as well strike my original hard drive as any of the backup drives. Depending on which has failed, I need to do different things to fix it. How can I check for such occasions most easily? J I P | Talk 08:04, 1 September 2012 (UTC)

IE8 new version opening PDFs in Acrobat
Running Windows 7, which came with both IE8 (unknown bit number) and IE8 64-bit installed. For reasons that I can't understand, the unknown-bit-number browser recently started to take a long time to open new tabs (nearly a minute sometimes), and because I often run with lots of tabs at the same time (the main reason I refuse to use IE9), this wastes a ton of time, so I've just started to use the 64-bit version. For reasons that I can't understand, when I hold down ctrl and click on a link to a PDF, it insists on opening the PDF in Acrobat instead of opening it in a new tab, which is what would happen if I clicked this way on a link to an HTML page in this browser or on a link to a PDF page in the old browser. Any ideas on how to stop this? I've gone to Tools/Internet Options and played around with the General/Tabs settings, but this still happens even though I've selected the "A new tab in the current window" option at "Open links from other programs in". Browsers are versions 8.0.7601.17514 (unknown bit) and 8.0.7601.17514CO (64-bit edition). I've looked around for webpages that discuss this problem, but I can't find anything. Nyttend (talk) 21:09, 29 August 2012 (UTC)
 * The "unknown bit" version will be 32-bit. The reason the behavior is different between the two versions is that 32-bit plugins (such as the Adobe Reader one which renders a pdf in the IE window) only work on the 32-bit version. I do not know if Adobe offer a 64-bit version of their plugin, but if they do installing that would solve your issue. AvrillirvA (talk) 21:29, 29 August 2012 (UTC)

Operating systems
Hi. How many Source lines of code do the following operating systems consist of:
 * Windows Vista?
 * Windows 7?
 * OS X Mountain Lion?
 * Ubuntu? --41.129.34.34 (talk) 23:07, 29 August 2012 (UTC)


 * The first three don't release (all) their source code, so it's hard to answer. As to Ubuntu, what do you mean by "operating system"? Just the kernel, or the standard tools as well, or the desktop environment (which?) as well, or the apps (which?) as well? It's hard to draw the line. Marnanel (talk) 23:11, 29 August 2012 (UTC)


 * Any rough estimates would be welcome. By Ubuntu I mean the operating system including all the packages and tools on the standard CD that can be downloaded from the Ubuntu website. --41.178.232.21 (talk) 08:26, 30 August 2012 (UTC)


 * The Source lines of code article lists some figures (somehow) for NT through to Windows Server 2003, and for various versions of Debian and for OS X 10.4, so it can't be that hard to answer (if you're not a stickler for the answer being meaningful). Card Zero  (talk) 10:04, 30 August 2012 (UTC)


 * Since you'll accept rough estimates, here's a blog post from a Windows developer which says that Vista is said to have over 50 million lines of code.  Card Zero  (talk) 09:44, 30 August 2012 (UTC)
 * FWIW, http://windows.microsoft.com/en-US/windows/history says "Windows XP is compiled from 45 million lines of code." Vista and 7 are probably similar but a bit larger. It's hard to know what this includes, though. MS Paint and Minesweeper? I suspect so. Internet Explorer? I suspect not. Those gigantic video card and printer drivers? They're not written by Microsoft, but Microsoft probably has the source code, so I'm not sure. -- BenRG (talk) 18:29, 30 August 2012 (UTC)

Do you intend to count the lines-of-source required to write a compiler that can compile an operating system? Next question: do you intend to count the lines-of-source required to write a compiler that can compile a compiler that can compile an operating system? This is called the bootstrapping problem. Of course, this isn't an infinite recursion; but it's a lot of recursion. At some point, you have to acknowledge that "lines of source" is about the most useless way to measure software complexity. Even man-hours are a much more meaningful approximation, though even those are a well-known statistical trap among software engineers. Nimur (talk) 17:20, 30 August 2012 (UTC)
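For what it's worth, the crudest form of such a count is just concatenating source files and counting lines, blank lines and comments included, which is part of why the metric is so rough. A sketch on a scratch tree:

```shell
# Naive SLOC count: every physical line in every .c file is counted.
mkdir -p src && printf 'int main(void) {\n    return 0;\n}\n' > src/main.c
lines=$(find src -name '*.c' -print0 | xargs -0 cat | wc -l)
echo "total lines: $lines"
rm -r src
```

Tools like cloc and sloccount exist precisely to separate code from comments and blank lines, which is why published figures for the same codebase can differ wildly.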