Wikipedia:Reference desk/Archives/Computing/2008 August 24

= August 24 =

VNC into chroot?
I was just wondering if it is possible to connect to a chrooted environment with VNC... forgive me if this is nonsense, I don't know much about this stuff... it's just that I'm having lots and lots of problems trying to use graphical applications inside a chrooted environment... (yeah, I know how to use the terminal, but it is not enough...) SF007 (talk) 03:40, 24 August 2008 (UTC)
 * X11 over TCP can be used in this situation. Setting DISPLAY (for example DISPLAY=localhost:0, pointing at an X server listening on TCP) in the environment should probably be enough. By default, X uses a Unix domain socket, which requires a socket file that is probably not included in the chroot. MTM (talk) 08:27, 24 August 2008 (UTC)

autorun
when an autorun doesn't function and it gives the message 'the application failed to initialize properly (0xc0000006) click on OK to terminate the application', how can it be resolved? Some file is missing? How can I know which file? Thanks. --Omidinist (talk) 07:58, 24 August 2008 (UTC)

Brute forcing regular expressions
Is there _any_ way to input a regular expression and find all possible strings that it would match? Processing time, memory, and storage are of no concern. I don't care how wacky or out of the way the solution is. If I have to install a virtual machine running Plan 9 on a 64-bit edition of Windows 7 I'm for it. This is only for simple regular expressions, like (T|t)itles? --mboverload @ 08:06, 24 August 2008 (UTC)


 * In most cases, there is no upper bound on the length of a string that can be matched, so there is an infinite collection of matching strings. How do you suppose they can all be listed? --tcsetattr (talk / contribs) 08:36, 24 August 2008 (UTC)


 * Sorry, I should have clarified this is only for simple regular expressions, like (T|t)itles? --mboverload @ 08:49, 24 August 2008 (UTC)


 * But /.*/ is also simple and there is simply an infinite number of matches. You have to refine your definition of "simple" a bit before this can go anywhere. --antilivedT 09:07, 24 August 2008 (UTC)


 * It is possible with regular expressions like /x|y/, /x?/ or /x{1,3}/ and their combinations. All the possibilities just have to be enumerated and collected in a set. I haven't heard of any program which does it, but it looks simple. MTM (talk) 09:29, 24 August 2008 (UTC)
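MTM's suggestion can be sketched in a few lines of Python (a hypothetical illustration, not a program from the thread): model a fully anchored, star-free pattern as a list of per-position alternative sets and take their cross product, with an optional element represented by an empty-string alternative.

```python
from itertools import product

# (T|t)itles?  modelled as a sequence of alternative sets:
# [{'T','t'}, {'i'}, {'t'}, {'l'}, {'e'}, {'s',''}]
# The trailing optional 's' becomes the empty-string alternative.
def enumerate_matches(parts):
    """Yield every string the pattern can match, as the cross
    product of the per-position alternatives."""
    for combo in product(*parts):
        yield ''.join(combo)

pattern = [{'T', 't'}, {'i'}, {'t'}, {'l'}, {'e'}, {'s', ''}]
print(sorted(enumerate_matches(pattern)))
# ['Title', 'Titles', 'title', 'titles']
```

This only works because every position offers a finite set of choices; any unbounded quantifier breaks the approach, as the rest of the thread points out.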


 * Here's a subset of the list matching the regular expression "(T|t)itles":

Titles
titles
abcTitlesxyz
!"#$Titles%&'
 * See where this is going? Without anchors ("^" at the beginning and "$" at the end) you haven't limited the length of the string that can be matched. A regular expression that matches only a finite set of strings would have anchors at the beginning and end, and no "*" or "{N,}" quantifiers. It's pretty easy for the finite lists to be very large too. "^[a-z]{5}$", matching any string made of 5 lower-case letters, has 26^5=11,881,376 possible matches.
 * I thought only about the matched part, so /a/ would result only in "a". From the original post: "Processing time, memory, and storage are of no concern.". MTM (talk) 09:37, 24 August 2008 (UTC)
 * If you're thinking that this regexp-to-list converter would be a good tool for learning regular expressions, I hope you've changed your mind by now. (There are tools for that already; search for "regular expression editor" or "regular expression debugger".) --tcsetattr (talk / contribs) 09:33, 24 August 2008 (UTC)
 * I don't know as much about regular expressions as you, but I hope you look at the spirit of my question. (T|t)itles?, in my mind but not the computer's, has 4 possible matches, Title, Titles, title, titles. I'm not trying to generate a list of every valid Visa number =). I don't know how to format it correctly so it wouldn't match things outside of the word, but that is something I will learn. Thanks for your expertise. --mboverload @  09:40, 24 August 2008 (UTC)
 * The deeper point in my reply was that regular expressions in real usage are quite often not anchored at both ends. We grep for patterns in a file, but we want to see the whole line that contained the match, not just the matched substring. And the "*" repetition operator is used a lot. The proposed listing tool will apply only to a small minority of useful regular expressions. As an educational device it would suck. For generating lists of strings with alternatives like /[Tt]itles?/ -> {"Titles", "titles", "Title", "title"} and /[a-z]{5}/ -> {"aaaaa", "aaaab", ..., "zzzzz"} you don't really need a regular expression. I'm left wondering if there's another motivation. Because if it's a good idea, this tool wouldn't be hard to build. --tcsetattr (talk / contribs) 09:56, 24 August 2008 (UTC)


 * Well, if time is LITERALLY of no concern then (in a Linux/Cygwin setup) you could write a 1-line C program ('while(1)putchar(rand()%127);') that would generate random strings and pipe the result into 'grep'. That exactly fulfills the demands of your question...so consider it answered!
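Steve's joke translates directly into a Python sketch (not from the thread, and under the same tongue-in-cheek assumptions; the alphabet is cut down to the letters of "Titles" purely so the demo finds anything in a bounded number of tries):

```python
import random
import re

def brute_force(pattern, alphabet='Titles', tries=100_000, max_len=8):
    """Generate random strings and keep only the ones the regex
    fully matches -- time and efficiency are explicitly no concern."""
    rx = re.compile(pattern)
    found = set()
    for _ in range(tries):
        length = random.randint(1, max_len)
        s = ''.join(random.choice(alphabet) for _ in range(length))
        if rx.fullmatch(s):
            found.add(s)
    return found

random.seed(0)  # make the demo repeatable
print(brute_force(r'(T|t)itles?'))
```

With the full 127-character ASCII alphabet the expected waiting time for a six-letter hit really is astronomical, which is the joke.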


 * But for a practical answer (which is what everyone else is attempting to provide), because so many regexps can match a literally infinite number of strings you have to sharply limit the regexp syntax to (for example) prohibit '*' or anything that matches a variable number of characters. So this tool you imagine wouldn't be able to operate on "normal" regexp's - only within this rather carefully restricted subset.  But even so, in almost all "real" examples of regexp's the combinatorial explosion will kill you.  So it becomes just really unlikely that anyone will try to write such a tool in a 'normal' manner.  I think you're doomed!


 * SteveBaker (talk) 16:52, 24 August 2008 (UTC)

Enough philosophy, here's an implementation. This is a Haskell program that takes a regular expression on the command line and prints out all strings matching that regular expression, without duplication, sorted by length and lexicographically within each length. It supports only "theorists' regular expressions" and not Unixy extensions like [a-z] or a{3,5} or even . (though it does support ? and +). If the regular expression matches infinitely many strings then the program will run forever but every matching string will be printed at a finite time. I tested it with GHC but it should work with Hugs or any other implementation that supports the Parsec library. If you save the program as regex.hs you can compile it with the command line ghc -package parsec -O -o regex regex.hs. It's quite fast (not that it couldn't be faster) and I find the output from expressions like (aa|bbb)* rather soothing. I hereby release this code into the public domain. Let me know here or on my talk page if you notice any bugs.
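The enumeration BenRG describes can be sketched in Python as well (an illustrative reconstruction of the idea, not his Haskell code: it takes a hand-built AST rather than parsing, and bounds the output length instead of running forever). Strings are produced length by length, so every match of an infinite language would still appear at a finite step.

```python
# AST nodes: ('lit', s), ('alt', a, b), ('cat', a, b), ('star', a)
def strings_of_length(node, n):
    """Return the set of strings of exactly length n in the
    language of the given regular-expression AST."""
    kind = node[0]
    if kind == 'lit':
        return {node[1]} if n == len(node[1]) else set()
    if kind == 'alt':
        return strings_of_length(node[1], n) | strings_of_length(node[2], n)
    if kind == 'cat':
        out = set()
        for k in range(n + 1):
            for left in strings_of_length(node[1], k):
                out.update(left + r for r in strings_of_length(node[2], n - k))
        return out
    if kind == 'star':
        if n == 0:
            return {''}
        out = set()
        for k in range(1, n + 1):  # k >= 1 guarantees termination
            for head in strings_of_length(node[1], k):
                out.update(head + tail for tail in strings_of_length(node, n - k))
        return out

def enumerate_sorted(node, max_len):
    """All matches up to max_len, sorted by length and then
    lexicographically within each length -- the order BenRG describes."""
    result = []
    for n in range(max_len + 1):
        result.extend(sorted(strings_of_length(node, n)))
    return result

# (aa|bbb)*
ast = ('star', ('alt', ('lit', 'aa'), ('lit', 'bbb')))
print(enumerate_sorted(ast, 5))
# ['', 'aa', 'bbb', 'aaaa', 'aabbb', 'bbbaa']
```

Dropping the max_len bound and looping n upward forever gives the same "runs forever but prints every match at a finite time" behaviour.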

-- BenRG (talk) 18:31, 25 August 2008 (UTC)

Clearing Photo metadata on OS X?
Hello,

Is there any way on OS X 10.5 to clear or remove all the metadata from a photograph?

Thanks for any help,

--Grey1618 (talk) 09:20, 24 August 2008 (UTC)

Misleading tags on YouTube
Hi; there's a user uploading episodes of a television program to YouTube, and putting the names of other programmes in the tags. This means that whenever searching for any of the listed programmes, one has to wade through pages of the irrelevant episodes he produces before getting to what one wants. I made two polite comments about this; the first received a polite but negative reply, the second just resulted in the whole conversation being wiped and my commenting privileges being suspended. I've emailed this user, but don't expect this to prevail.

The question is - what next? Is there a "report to staff" option where I can make clear what it is I have a problem with? A "report user" facility? A way to exclude his videos from my search results (failing ways to help the whole community by clearing them, that is!)... any tips? Thanks. ╟─Treasury§Tag►contribs─╢ 15:05, 24 August 2008 (UTC)


 * Alluc --h2g2bob (talk) 16:24, 24 August 2008 (UTC)


 * There's a way to report videos. Aside from the improper tags, they're almost certainly copyright violations. JeremyMcCracken (talk) (contribs) 03:01, 26 August 2008 (UTC)

Installing Stuff
I have a few questions about installing programs on Ubuntu. Thus far I have been installing through the repositories (with the various methods possible: "sudo apt-get install" along with "Add/Remove" in the Main Menu). But what about stuff that's not in the repositories and needs to be downloaded? For that stuff I encounter files that might end in "tar.gz" or something, and when the message pops up it says Bzip archive. It says Archive Manager is the default thing for this sort of thing. So I select it, extract and all that. And then... nothing. Is the program installed? How do I run it? Are there other ways of getting programs? Did I do something wrong? And what's installing through source? Sorry for all the questions. Thanks in advance. --Xp54321 (Hello! • Contribs) 15:25, 24 August 2008 (UTC)


 * There's usually a file called "README" or "INSTALL" to look at, but the normal procedure is to open a shell, cd into the directory, and run:
 * ./configure -- you don't always need to run this, sometimes it is missing
 * make -- this compiles it all and creates the program for you to run. The program is normally placed into the "bin" directory.
 * It is normally best to run it from the bin directory, but some projects can be installed into /usr/bin etc with sudo make install.
 * That's the basics - it might fail if you're missing some libraries. If so, try and work out what they are and install them with synaptic. --h2g2bob (talk) 16:22, 24 August 2008 (UTC)
 * Thanks for the answers, but what does "cd into the directory" mean? How do you open a shell? --Xp54321 (Hello! • Contribs) 18:14, 24 August 2008 (UTC)
 * A "shell" is a program that presents a command line interface to the computer's file system, programs and processes. It's mostly just a way of making system calls, running programs etc, but it will also feature commands that work together to form a quasi-programming language allowing you to write little scripts which use logic, control flow, interaction between programs etc etc.  Anyway, to open a shell, choose Main Menu >> Accessories >> Terminal.  "cd to a directory" means use the "cd" command to change the working directory of the shell to a certain directory.  Here's an example from my shell

me@mycomputer:~$ ls
Desktop  film  incoming  music  oddments  print  sort  tools  zoo
me@mycomputer:~$ cd oddments/
me@mycomputer:~/oddments$ ls
bills  flat  megadrive  sainsburys
me@mycomputer:~/oddments$


 * The "ls" command lists directory contents. Note that "~" is an alias for your home directory (/home/me).  You can type "help" at the prompt to get help about built in commands, or use  man to get help about specific programs.  Also, building from source means getting a compiler and associated tools on your very own computer to create an executable program from the source code supplied.  It's kinda fun, and reassuring to do once in a while to prove you can.  —Preceding unsigned comment added by 78.86.164.115 (talk) 18:59, 24 August 2008 (UTC)

cd means to change directory and shell is the terminal (Applications > Accessories > Terminal). So if I had to cd to my Music directory I'd type cd /home/abhishek/Music, or rather simply cd Music from my home directory. I think you should read some of this. I always had trouble installing software from source, as I'm a n00b myself and I usually install from the repos, but this guide somewhat gave me an understanding of the process. -Abhishek (talk) 18:46, 24 August 2008 (UTC)
 * Eh...my dad helped me here. But now there's error messages...so heh, I give up. For this one program anyways. --Xp54321 (Hello! • Contribs) 18:59, 24 August 2008 (UTC)
 * I don't think Ubuntu even comes with gcc by default - you have to install the build-essential package first. In other words, trying to build stuff from source on Ubuntu without knowing what you are doing is an exercise in futility. That's why package managers exist in the first place. « Aaron Rotenberg « Talk « 21:43, 24 August 2008 (UTC)
 * This is true: install the build-essential package for gcc, make etc. --h2g2bob (talk) 23:15, 24 August 2008 (UTC)


 * What are you trying to install anyway? How to install ANYTHING on Ubuntu is a good tutorial for beginners like you. --antilivedT 09:42, 25 August 2008 (UTC)

Sadly, yet another reason why Linux is still far, far from being considered a Windows replacement for the average user. When will the Linux community wake up and realise that people just want things to work? Click on install.exe (or auto-run from CD), choose your installation and click ok. Things are still far easier on Windows, and it really shouldn't have to be this way. btw I'm motivated to post a question below about the different Linux distros. Please have a look.  Zun aid  ©  ®  16:19, 27 August 2008 (UTC)

installing ubuntu from live cd
I had the following problem installing Ubuntu: I managed to start the live "CD" from my HDD, but I am not able to install it as a non-live version. How can I turn this live CD on my HDD into a normal Ubuntu installation?

NB: I don't have a CD-ROM —Preceding unsigned comment added by Mr.K. (talk • contribs) 16:57, 24 August 2008 (UTC)

The only way I have heard of doing this (and the way I did it) is to burn the .ISO (the 'live CD') onto a CD-ROM and install from there. If you have no CD-ROM, then you obviously can't do it this way. I am not aware of any other way to do it.--ChokinBako (talk) 18:53, 24 August 2008 (UTC)


 * There are several ways - see the "installation without a CD" section of https://help.ubuntu.com/community/Installation -- Finlay McWalter | Talk 18:58, 24 August 2008 (UTC)


 * The person on Ubuntu bug 245794 says they can install 7.04 (feisty): can you install that and upgrade over the network? Otherwise, see if you can boot from a USB stick. Fully installing over a network is possible, but stupidly difficult. --h2g2bob (talk) 23:32, 24 August 2008 (UTC)


 * Just install unetbootin and put the liveCD onto your flash drive and boot from that, or use Wubi (Ubuntu) --antilivedT 09:34, 25 August 2008 (UTC)

File Transfer Protocol
Besides uploading files to your own website, what can FTP be used for? One of my friends recently said she had a friend with an 'FTP site' and they were transferring files to each other using that. She could not give me any more information than that, and couldn't even tell me how she was doing it. Can anyone tell me anything about other uses of FTP and why it's useful?--ChokinBako (talk) 18:50, 24 August 2008 (UTC)
 * Well, I'm not really an expert or anything, but basically, you can have an FTP site as well. It's essentially like, oh, the folders on your computer, where you can see and copy and move the files or folders on your hard drive. Only it's for files on a server. Archives of files (like the Interactive Fiction Archive, for example) often have an "FTP mode", because some people prefer to view files that way instead of in a big list. --Alinnisawest(talk) 18:53, 24 August 2008 (UTC)
 * FTP is faster than HTTP, because there's less overhead. As for a site, I don't know about Apache, but I've set up FTP sites using the IIS feature built into Windows XP and Server 2003. To see what you can do with FTP, open up a command line (Start --> Run... --> cmd) and type FTP, then help. Not many commands there, huh? You can access FTP sites using this command or in your "Network Places." You can also do it inside your browser by changing the http to ftp, assuming you're pointing the browser to an FTP site.--129.82.41.231 (talk) 19:01, 24 August 2008 (UTC)
 * On Unix-like systems proftpd or vsftpd can be used. Since FTP is less secure (does not support encryption of e.g. passwords used for authentication) I use sftp instead. MTM (talk) 19:52, 24 August 2008 (UTC)


 * HTTP is a newcomer among Internet protocols. FTP is one of the oldest Internet protocols still in use. Many of the online file archives that are now offered through HTTP began as FTP archives (and are still offered that way). Accessing archives through FTP has some big advantages over HTTP, like the fact that FTP clients let you transfer whole groups of files using wildcards or drag-and-drop. However, FTP is a pretty bad design; it survives only by virtue of its entrenchment. It certainly isn't faster than HTTP (unless your HTTP implementation is broken) and in many cases it will be slower. Also, it doesn't work well through firewalls or through tunnels (like SSH tunnels). Despite its similar name, SFTP is a totally different protocol. It's a much better design which combines the advantages of FTP and HTTP for file transfer. Unfortunately the software support for it isn't nearly as good as for HTTP or FTP, at either the client or the server end. -- BenRG (talk) 20:21, 24 August 2008 (UTC)
 * Actually from what I've read, FTP is "certainly" much faster than HTTP.--129.82.41.232 (talk) 21:18, 24 August 2008 (UTC)
 * Those huge paperback training tomes that litter the "Computers" section of bookstores are, to put it politely, not written by experts. Only the third book you linked offers any reason for its claim, and surely you can see that the reason ("because FTP does not support the display of graphics or streaming media, it can transfer files much faster than HTTP") doesn't make any sense. (The same book goes right on to claim that FTP can't be used to serve web pages, which must be news to these people.) Both HTTP and FTP just send the raw bytes of the file over a TCP connection. HTTP supports sending more than one file over a single TCP connection, FTP doesn't. If FTP is faster it's because HTTP is being routed through a slow proxy, or the ISP is throttling HTTP connections, or something else that has nothing to do with the protocols as such. -- BenRG (talk) 00:14, 25 August 2008 (UTC)
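BenRG's point that HTTP just sends the raw bytes of the file over a TCP connection is easy to see with a short stdlib-only Python sketch (not from the thread): serve a file locally, fetch it with a bare socket, and look at what comes back - a short text header followed by the file contents verbatim.

```python
import functools
import http.server
import os
import socket
import tempfile
import threading

# Serve one small file with Python's built-in HTTP server.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'hello.txt'), 'wb') as f:
    f.write(b'raw bytes over TCP\n')

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=tmpdir)
server = http.server.HTTPServer(('127.0.0.1', 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch it with a plain TCP socket; an HTTP request is just text.
with socket.create_connection(server.server_address) as sock:
    sock.sendall(b'GET /hello.txt HTTP/1.0\r\nHost: localhost\r\n\r\n')
    response = b''
    while chunk := sock.recv(4096):
        response += chunk
server.shutdown()

headers, _, body = response.partition(b'\r\n\r\n')
print(headers.split(b'\r\n')[0])  # b'HTTP/1.0 200 OK'
print(body)                       # b'raw bytes over TCP\n'
```

An FTP transfer ends up the same way - file bytes on a TCP connection - only with a separate control connection to negotiate it, which is exactly why neither protocol is inherently "faster" at moving the bytes themselves.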


 * So, I basically need to find a decent (free!) server, and use FileZilla to upload (essentially archive or backup) my files?


 * Linux is free and easily works as an SFTP server (as well as HTTP and FTP). Again, Linux is free and works great as a desktop OS with many SFTP clients (KDE embeds it right into the file browser) as well as HTTP and FTP. -- kainaw™ 22:05, 24 August 2008 (UTC)


 * Since you have Windows XP, you can easily install an FTP server from your XP CD. Insert that disk into your CD drive, go to "Add or Remove Programs" --> "Add/Remove Windows Components" --> "Internet Information Services (IIS)" --> "Details" --> "File Transfer Protocol Service (FTP)," then click OK. Now you have a server that you can access directly from the Control Panel.--129.82.41.232 (talk) 23:09, 24 August 2008 (UTC)


 * Sorry, I didn't even say. I have Mac OSX......--ChokinBako (talk) 07:31, 25 August 2008 (UTC)


 * On OS X, you can select Apple -> System Preferences -> Sharing -> File Sharing, then Options and "Share files and folders using FTP". Accessing your home computer over the internet however requires that your ISP allows such access, which isn't always the case.  You may also have to modify the firewall settings of your cable modem or ADSL modem or other network device.  All this is of course assuming that you want to make your home computer an FTP server and leave it on for long periods - it can be easier to use a remote professionally maintained FTP server. 84.239.160.166 (talk) 15:57, 26 August 2008 (UTC)


 * Is your friend referring to topsites by any chance? F (talk) 12:59, 25 August 2008 (UTC)

GRUB
I formatted and reinstalled Windows XP on one partition of my HDD and now my GRUB bootloader menu won't show up to let me boot Linux. What do I need to do to get the menu back? (By the way, I chose only to format the partition that Windows was on by using the OEM's CD). Ζρς ι'β' ¡hábleme! 20:19, 24 August 2008 (UTC)


 * It rather depends what that OEM CD has done. Some are proper Windows installers, in which case the installer has zapped the disk's master boot record. If that's the case, boot with a liveCD and restore GRUB (see the GRUB documentation). If, as is common with many OEM restore disks, it's not a Windows installer but a Norton Ghost-like disk image extractor, then it has zapped your Linux partition and you'll need to start again. You can tell which has happened by examining the partition table for the disk (using either Windows' disk manager plugin or Linux's fdisk). It's incumbent on me at this point to repeat my oft-ignored plea to those trying Linux for the first time - "Repartitioning and bootloaders are hard for those with limited, Windows-only, experience. Cheap removable IDE disks (one for Linux, one for Windows) are the path of least pain for the unwary". Later Linux distributions make this plea less necessary than a few years ago, but people still regularly blast their partitions (generally it's the Windows partition) into digital oblivion. -- Finlay McWalter | Talk 20:29, 24 August 2008 (UTC)
 * What do you mean zap the partition? I still have my linux partition showing up, so it didn't repartition or anything like that.  I don't know if it formatted the linux partition though.  By the way, the CD is a Dell recovery disk.  Ζρς ι'β' ¡hábleme! 20:36, 24 August 2008 (UTC)
 * OEM restore CDs, including those by Dell, frequently wipe the entire disk and restore a factory image. They do this quickly (in say 15 minutes) without running you through the hour or so of the windows installer. -- Finlay McWalter | Talk 20:38, 24 August 2008 (UTC)
 * Hrmm, do they do this even when they ask you to choose the partition? Ζρς ι'β' ¡hábleme! 20:46, 24 August 2008 (UTC)
 * No, they wipe the entire disk, MBR and partition table and partitions and all. It probably says something like "this will wipe the info on your computer - proceed [y/n]".  -- Finlay McWalter | Talk 20:55, 24 August 2008 (UTC)
 * Most restore disks say that anyway because they are designed for typical users who will never repartition their disks or install another OS. If you can still see the Linux partition then it just wiped the MBR. Reinstalling GRUB is the way to go. « Aaron Rotenberg « Talk « 21:27, 24 August 2008 (UTC)


 * Don't listen to the nay-sayers above: it's probably just changed the MBR to point to Windows XP, not GRUB. You can reinstall grub from a live cd. Some help here and here. --h2g2bob (talk) 23:43, 24 August 2008 (UTC)
 * You might also want to try the super grub disk live cd. The user interface is ugly, but it'll boot just about anything. --NorwegianBluetalk 17:20, 25 August 2008 (UTC)

How do I copy a web site to my computer ?
I can, of course, copy individual HTML pages, one at a time. However, when I do this, the links don't point to the pages I copied, but back to the original website. How can I fix all those links to point to the copied pages ? StuRat (talk) 21:17, 24 August 2008 (UTC)


 * Hello. On my browser you can do this by viewing the source and then saving it.  This will leave all links in the document as they really are - normally defined relatively, so if you've copied all the pages and the same directory structure then you'll be okay.  This will mean you have to get every document on the site for it to work properly - including style sheets etc.  —Preceding unsigned comment added by 86.13.226.238 (talk) 21:31, 24 August 2008 (UTC)


 * Wget does what you want, but it won't be easy to use unless you use some kind of Unix. —Keenan Pepper 21:33, 24 August 2008 (UTC)


 * I have always used a program called Teleport Pro which has an option to download a "browsable copy" of a website to your computer. The program itself is shareware and a free trial version can be downloaded here. -=# Amos E Wolfe talk #=- 21:37, 24 August 2008 (UTC)


 * There are tools for doing this (wget can certainly copy pages recursively (see article on wget), but I'm not sure whether it rewrites all the links to point to local copies), but there is a (slight) caveat: websites and their administrators HATE this sort of behaviour. Depending on how much you download, this can put an enormous strain on a server. If you want a good list of these sorts of programs, you can look at what programs Wikipedia bans in its robots.txt (look at the note on wget, for instance). If you do this, please act responsibly, and don't completely crash the server you're downloading from. 90.235.4.253 (talk) 21:56, 24 August 2008 (UTC)
 * If you're only downloading one copy of things it's usually not that bad. Wget only really runs into trouble if you're flooding the server with requests (rather than just one at a time) or if you instruct it to download massive amounts of giant file sizes (e.g. videos and etc.). But just mirroring sites is usually not any more server strain than browsing usually is. --98.217.8.46 (talk) 02:00, 25 August 2008 (UTC)


 * The <tt>-k</tt> or <tt>--convert-links</tt> option to <tt>wget</tt> should alter the links to allow local viewing. -- Coneslayer (talk) 17:03, 25 August 2008 (UTC)


 * Using a Windows machine? Try HTTrack. It's easy to use, and robust. There are also Firefox extensions that can do this; I've used one called ScrapBook to good effect, except that it doesn't maintain relative paths (e.g. all the HTML gets dumped into one big directory with all the images, etc., which is fine if you are just saving the page to view it later—it'll all work just fine in Firefox—but not so good if you're doing web development with it and need it to be a perfect mirroring of the file structure, not just the appearance). --98.217.8.46 (talk) 02:00, 25 August 2008 (UTC)

Thanks for the answers so far. And yes, it is a Windows machine. StuRat (talk) 19:30, 26 August 2008 (UTC)

search for websites
Can you tell me the websites that are about crucifixions?24.165.11.18 (talk) 22:00, 24 August 2008 (UTC)
 * You can search the Internet for this information using a search engine like Google or Yahoo!. Go to google.com, type "crucifixions" into the box, and click "google search". You'll then get a list of the websites related to crucifixions. --h2g2bob (talk) 23:41, 24 August 2008 (UTC)

vanishing phone numbers
As I wrote on August 23, whenever I get a page or email listing a phone number, it simply vanishes from the screen. Like, if someone sends me an email with their telephone number, it will briefly show up and then fade away. Also if I look up a phone number on WhitePages.com the same thing happens. I also will be on a site looking up (say for instance the nearest Walgreens) Walgreens.com will show the store address and phone number and before I can write it down it fades away. This is very puzzling to me. I don't recall this happening in the past... just the past couple of months. I haven't been doing anything different with my computer lately. Does anyone else have this problem? 64.184.124.178 (talk) 23:20, 24 August 2008 (UTC)


 * It's only phone numbers that disappear? Does it just leave a white space where it was? Ζρς ι'β' ¡hábleme! 23:45, 24 August 2008 (UTC)


 * Perhaps it's a screensaver coming on (or backlighting on a backlit screen going off). Moving the mouse will bring it back if this is the case. --h2g2bob (talk) 00:39, 25 August 2008 (UTC)


 * Please don't repost the same question each day. You will continue to get answers where you originally posted. StuRat (talk) 00:57, 25 August 2008 (UTC)