User:NorwegianBlue/refdesk/computing

__NEWSECTIONLINK__

Ability to Self-Update to New Versions in Linux Distributions (Computer science)
I run Ubuntu in a virtual machine, and was happy to note that the updater that Ubuntu uses to patch, update, etc. is capable of downloading and installing new versions of Ubuntu (right now upgrading from 5.10 to 6.06 LTS) into itself, essentially an in place upgrade. Are any other Linux distributions capable of this? I have previously used SuSE, but via YaST, I was only able to update or upgrade components, I could never upgrade the entire operating system. Thanks. MSTCrow 04:39, 3 June 2006 (UTC)
 * On Debian, one can type "apt-get dist-upgrade" and it downloads and installs new versions of all components, including the kernel. (Which is not surprising, since Ubuntu is based on the Debian architecture.) –Mysid(t)  05:34, 3 June 2006 (UTC)
 * Are the upgrade mechanisms of Ubuntu and Debian identical, apt-get, synaptic etc.? --NorwegianBlue 09:28, 3 June 2006 (UTC)
 * In many ways. Ubuntu uses apt-get/aptitude/synaptic to upgrade and dist-upgrade, but uses its own repositories for packages. Sverdrup 01:38, 6 June 2006 (UTC)

Help with MASM32 (Computer science)
I need to create an array that can hold 10 million integer numbers and fill it with random numbers ranging from 1 million to 10 million (minus one). When it is filled, I need to write the index and contents to a file. I know how to generate random numbers in MASM and how to write from memory to a file using DEBUG, but I need to put them together in a MASM program. Anyone have a demo or example? ...IMHO (Talk) 00:52, 11 June 2006 (UTC)
 * It would be helpful if you rephrased the question to pinpoint the problem more exactly. Do you need help with the memory management/indexing, or with making your "random" numbers fall in that particular range, with writing from memory to a disk file from outside of debug, or with writing a self-contained MASM program? I see from your user page that you program in C. You might try to first write a C-program that does the job, with as few outside dependencies as possible, and then compile the C-program to assembly and study the output. --NorwegianBluetalk 10:13, 11 June 2006 (UTC)
 * Yes, that is quite easy to do with C (or C++) with a few "for" loops and the rand function (see here for help using that), and then using fstream to write to files (see here). Hope this helps. — M e ts 501 (talk) 13:57, 11 June 2006 (UTC)
 * With the range of pseudorandom numbers that IMHO needs, rand will not be sufficient, since RAND_MAX typically is quite small (32767). You might of course combine the results of several calls to rand by bit-shifting. If you do so, I would recommend checking the output with a tool such as ent, to make sure that the result still fits basic requirements to pseudorandom numbers. If you want to write your own pseudorandom number generator, you can find a thorough treatment of the subject in D. E. Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition. Addison-Wesley, 1997. --NorwegianBluetalk 15:00, 11 June 2006 (UTC)
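To illustrate the bit-shifting idea, here is a minimal sketch in Python (the helper names are made up for this example, and `rand15` merely stands in for C's 15-bit `rand()`): two 15-bit draws are combined into 30 random bits, and rejection sampling keeps the result uniform over the target range instead of introducing modulo bias.

```python
import random

RAND_MAX = 32767  # the typical 15-bit rand() maximum mentioned above

def rand15(rng):
    # Stand-in for C's rand(): a uniform integer in [0, RAND_MAX]
    return rng.randint(0, RAND_MAX)

def rand_in_range(rng, lo=1_000_000, hi=10_000_000):
    # Combine two 15-bit draws into one 30-bit value, then reject values
    # in the "overhang" so the final result is uniform over [lo, hi).
    span = hi - lo
    limit = (1 << 30) - ((1 << 30) % span)  # largest multiple of span <= 2^30
    while True:
        r = (rand15(rng) << 15) | rand15(rng)  # 30 random bits
        if r < limit:
            return lo + (r % span)

rng = random.Random(42)
sample = [rand_in_range(rng) for _ in range(5)]
```

The rejection loop rarely iterates more than once here, since the overhang is a small fraction of 2^30.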

Yes this information helps. Thanks. However, my goal in part here is to learn (or relearn) MASM. Back in the late '60s and early '70s assembly language was quite straightforward (and can still be that straightforward using the command line DEBUG command). Where I am having trouble currently is with INCLUDEs, Irvine32.inc in particular, so I am trying to avoid even the use of INCLUDEs and do this (if possible) using only a DEBUG script. Don't get me wrong, I have spent ALL of my programming career writing in high level languages simply so that I could get far more work done, but now my goal is to go back through some of the programs I have written in a high level language like Visual Basic and convert whatever I can to concise assembler or machine code, which might help bridge the gap between Windows and Linux, whereas a program written in C++ for Linux (source code) may otherwise find difficulty running (after it is compiled under any version of Windows C++). What I need specifically is:
 * 1) to know how to create and expand a single dimension integer array of the above size, so I need help with both the memory management and the indexing;
 * 2) although I can make random numbers fall into any range in Visual Basic, I'm not sure about doing this in assembler;
 * 3) I also need help in writing the array contents and index to a file, since even though I know how to write something at a particular location in memory to a file using DEBUG, and how to write an array to a file using Visual Basic, it has been a long, long time since I used assembler way back in the early '70s.
Your suggestion to try writing in C and then doing a compile to study the output is a good and logical one, but my thinking is that by the time I get back into C far enough to write such a snippet of a program, I could have already learned how to do it using MASM. Even still it is not an unreasonable or bad idea. Any code examples would lend to my effort and be appreciated. Thanks. 
...IMHO (Talk) 14:58, 11 June 2006 (UTC)

I followed your suggestion to look at the disassembled output of the following C++ code and was shocked to find that while the .exe file was only 155,000 bytes the disassembled listing is over 3 million bytes long.

#include <stdio.h>   /* printf */
#include <stdlib.h>  /* RAND_MAX */

int main(void)
{
    printf("RAND_MAX = %u\n", RAND_MAX);
    return 0;
}

I think I need to stick with the original plan. ...IMHO (Talk) 15:43, 11 June 2006 (UTC)


 * Wow! You must have disassembled the entire standard library! What I meant was to generate an assembly listing of your program, such as in this example. You will see that in the example, I have scaled down the size of the array by a factor of 100 compared to your original description of the problem. This is because the compiler was unable to generate sensible code for stack-allocated arrays this size (the code compiled, but gave runtime stack overflow errors).


 * To bridge the gap between Windows and Linux, I think that this is definitely not the way to go. If you are writing C or C++ and avoid platform-specific calls, your code should easily compile on both platforms. For platform-specific stuff, write an abstraction layer, and use makefiles to select the correct .c file for the platform. If you want GUI stuff, you can achieve portability by using a widget toolkit that supports both platforms, such as wxWidgets. I have no experience in porting Visual Basic to Linux, but I suppose you could do it using Wine. --NorwegianBluetalk 17:53, 11 June 2006 (UTC)


 * Looks like I need to learn more about the VC++ disassembler. I was using it to create the executable file and then using another program to do a disassembly (or reassembly) of the executable. I'll study the VC++ disassembler help references for at least long enough to recover some working knowledge of MASM and then perhaps do the VB rewrites in VC++ if it looks like I can't improve the code. Thanks ...IMHO (Talk) 23:05, 11 June 2006 (UTC)
 * You don't need to use a disassembler. In Visual C++ 6.0, you'll find this under project settings, select the C/C++ tab, in the "category" combo select "Listing files", and chose the appropriate one. The .asm file will be generated in the same directory as the .exe. Presumably it works similarly in more recent versions of VC++. --NorwegianBluetalk 04:58, 12 June 2006 (UTC)


 * All of the menu items appear to be there but no .asm file can be found in either the main folder or in the debug folder. With the C++ version of the program now up and running as it is supposed to with all of the little details given attention (like appending type designators to literals) the next step is to take a look at that .asm file ...if only it will rear its ugly head. ...IMHO (Talk) 01:21, 13 June 2006 (UTC)
 * Strange. You could try calling the compiler (cl.exe) from the command line, when the current directory is the directory where your source file lives. The /Fa option forces generation of a listing, the /c option skips the linker, and you might need to use the /I option to specify the directory for your include files, if the INCLUDE environment variable is not set properly. On my system that would be:

E:\src\wikipedia\masm_test>cl /Fa /c /I "c:\Programfiler\Microsoft Visual Studio\VC98\Include" main.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 12.00.8804 for 80x86
Copyright (C) Microsoft Corp 1984-1998. All rights reserved.
main.c

E:\src\wikipedia\masm_test>dir *.asm
 Volumet i stasjon E er ARBEID
 Volumserienummeret er 4293-94FF

 Innhold i E:\src\wikipedia\masm_test

13.06.2006 19:23             2 292 main.asm
 * which, as you can see, works fine. The problem may be related to the fact that you have the free version, maybe assembly generation is disabled? Would that be the case if it only compiles to .net bytecode? If so, just about any other C compiler will have an option to generate an assembly listing, try using another compiler instead. --NorwegianBluetalk 17:38, 13 June 2006 (UTC)


 * There must be something seriously wrong with my installation. Even after multiple reinstallations of VC C++ v6 Introductory I keep getting command line errors like it can't find the include files, etc. I'll keep working on it. Thanks. ...IMHO (Talk) 00:01, 14 June 2006 (UTC)

Okay, finally got it! The thing that was messing up the command line compile under VC++ v6 Intro seems to have been a "using namespace std;" line (although oddly enough it has to be removed when the contents of an array variable are incremented, but is required when the same variable is only assigned a value). It looks like VC++ Express 2005 has the same settings function in the GUI, but I have not yet been able to figure out and follow the procedure to get it to work. Its command line .asm instruction might also work now, but I do not have time right now to test it. Thanks for all of the detailed suggestions and for helping to make the Wikipedia more than I ever dreamed it would be. ...IMHO (Talk) 21:35, 15 June 2006 (UTC)
 * I'd add a caution here re the random business. It is remarkably hard to generate random sequences deterministically. See hardware random number generator for some observations. If you have to do it in software, you might consider Blum Blum Shub, whose output is provably random in a strong sense if a certain problem is in fact computationally intractable. It's just slow in comparison to most other approaches. ISAAC and the Mersenne twister are other possibilities and rather faster. On a practical basis, you might consult the design of Schneier and Ferguson's Fortuna (see Practical Cryptography). The problem is one of entropy in the information theory sense, and it may be that this doesn't apply to your use, in which case the techniques described by Knuth will likely be helpful. Anything which passes his various tests will likely be satisfactory for any non-security-related purpose. However, for security related issues (eg, cryptography, etc) they won't, as the entropy will be too low. Consider Schneier and Ferguson's comments on the issue in Practical Cryptography.
 * And with respect to using libraries, I suggest that you either roll your own routines or install a crypto library from such projects as OpenBSD or the equivalent in the Linux world. Peter Gutmann's cryptlib is in C and has such routines. There are several other crypto libraries, most in C. Check them very carefully against the algorithm claimed before you use them for any security related purpose. Good luck. ww 04:56, 16 June 2006 (UTC)
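As a toy illustration of the kind of statistical sanity check being described (Python's random module happens to use the Mersenne Twister mentioned above; this simple monobit count is nowhere near a security test, just a first-pass bias check):

```python
import random

def monobit_fraction(n_bits=10_000, seed=1234):
    # Fraction of 1-bits in a bit stream drawn from the Mersenne Twister.
    # A decent PRNG should give a value very close to 0.5; a badly biased
    # generator will not. Passing this says nothing about cryptographic
    # strength -- the entropy concerns above still apply.
    rng = random.Random(seed)
    ones = sum(rng.getrandbits(1) for _ in range(n_bits))
    return ones / n_bits

frac = monobit_fraction()
```

Tools like ent run this test along with chi-square, serial correlation, and entropy estimates.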

(';') licence question (Computing)
R (programming language) is distributed under the GPL Version 2, June 1991. If I develop an application that (among various other things) writes R scripts, in order to use R as a graphing engine, do I have to distribute this application as open source, under the GPL 2.0, or can I distribute it commercially under a different licence? --NorwegianBluetalk 08:04, 16 November 2006 (UTC)


 * The GNU GPL requires that programs based on the source code of another program that is GPLed must be also licensed under the GNU GPL. It does not say the output (in your case, the program) must be licensed under the GNU GPL. In short, your program can be distributed commercially. --wj32 talk 08:32, 16 November 2006 (UTC)


 * See especially the section on "Combining work with code released under the GPL". There may be situations where your program will (according to the FSF) become covered by the GPL, such as linking with GPL'd libraries (calling runtime functions may be harmful) or subclassing GPL'd classes. As far as I know FSF's assertions on this subject have not been tested in court. Consider consulting a lawyer, or getting a programming language that has no such worries. Weregerbil 10:37, 16 November 2006 (UTC)


 * Languages are not distributed (how would you distribute German?); compilers and interpreters and libraries are distributed. Similarly, languages are not licensed.  As such, merely writing an R script can have no impact on anything; it's just text that you output.  (An exception would be if there were trademark considerations in your output or so, but typically interoperability has trumped trademark law.  See the Sega case.)  The trick is actually if/when you run an R interpreter or so; consult both the FSF and a lawyer, as the details of how you invoke R programs, how your program would behave in the absence of R support, and how closely your program works with the graphing code may affect the issue.  --Tardis 16:22, 16 November 2006 (UTC)


 * Let's see if I get this right... The graphics is an important component of the application that I am considering writing. If the program writes the R code, and R is started separately to produce the pdf's, it would be OK. However, if the program calls R in batch mode via a system call, it would be a violation of the GPL. Is this a correct interpretation? --NorwegianBluetalk 16:54, 16 November 2006 (UTC)


 * If your program never interacts with any other programs (e.g., R interpreters), it's irrelevant what you produce. If you do interact with an R program (via system() or otherwise), you may count as associated.  That's when you have to talk to someone in a legal capacity and/or discuss it with the FSF (although I'm sure they'd be none too happy to hear you were trying to avoid writing free software).  --Tardis 17:38, 16 November 2006 (UTC)
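To make the distinction concrete, here is a minimal Python sketch (the function names and the Rscript invocation are illustrative assumptions, not anything stated above): emitting an R script is ordinary program output, which the GPL does not cover, while actually launching the GPL'd interpreter is the step whose licensing status is being debated.

```python
import subprocess

def make_r_script(values, pdf_path):
    # Build the text of an R script that draws a histogram to a PDF.
    # Emitting this text is plain output of our program, which is the
    # uncontroversial step.
    vec = ", ".join(str(v) for v in values)
    return 'pdf("%s")\nhist(c(%s))\ndev.off()\n' % (pdf_path, vec)

def run_r_script(script_text, script_path="plot.R"):
    # This is the debated step: driving the GPL'd R interpreter from
    # our program (requires R to be installed on the machine).
    with open(script_path, "w") as f:
        f.write(script_text)
    subprocess.run(["Rscript", script_path], check=True)
```

Whether the second function makes the calling program "a work based on" R is exactly the question for the FSF or a lawyer.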


 * Thanks. Actually, I'm not trying to avoid writing free software, but I may not be in a position to make the decision myself. Furthermore, the application in question would need to interact more or less tightly with closed-source commercial software, so if we were to release it under the GPL, the same kind of considerations would apply to that interaction. --NorwegianBluetalk 10:33, 18 November 2006 (UTC)

Generating permutations
I refer to this section of the Permutation article:

BEGIN

Algorithm to generate permutations

For every number $$k$$ ($$0 \le k < n!$$) this following algorithm generates the corresponding permutation of the initial sequence $$\left( s_j \right)_{j=1}^n$$:

function permutation(k, s) {
    var int factorial := 1;
    for j = 2 to length(s) {
        factorial := factorial * (j - 1);
        swap(s[j - ((k / factorial) mod j)], s[j]);
    }
    return s;
}

Notation
 * k / j denotes integer division of k by j without rest, and
 * k mod j is the remaining rest of the integer division of k by j.

END

The algorithm is supposed to generate every different permutation of the integers 1 to n, but when I coded it there were repetitions - could someone else check this, please?

Also, the bit following "Notation" seems to be written very clumsily, omitting the generally-understood word "remainder". 81.153.219.51 16:42, 9 January 2007 (UTC)


 * I don't claim I understand why the algorithm does what it does, but I've checked all n! outputs for sequence length n up to and including 9, and it is working just fine for me. Are you perhaps using the permuted s as input to a next call?
 * As to the clumsiness, I'd say: sofixit. If you don't feel comfortable doing that (but why wouldn't you?), consider leaving a comment on the article's talk page. --Lambiam Talk  21:47, 9 January 2007 (UTC)
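That check can be reproduced with a short Python transcription of the pseudocode above (the algorithm's 1-based indices are mapped to 0-based; note that s is copied, so each call starts from the untouched initial sequence):

```python
from math import factorial

def permutation(k, s):
    # Direct transcription of the pseudocode; s is copied so every call
    # starts from the same initial sequence rather than a permuted one.
    s = list(s)
    f = 1
    for j in range(2, len(s) + 1):   # j runs over 2..n (1-based)
        f *= j - 1
        i = j - ((k // f) % j)       # 1-based index to swap with position j
        s[i - 1], s[j - 1] = s[j - 1], s[i - 1]
    return s

n = 4
outputs = {tuple(permutation(k, range(1, n + 1))) for k in range(factorial(n))}
# outputs contains all n! = 24 permutations, each exactly once
```

Feeding the permuted result of one call into the next destroys this property, which matches the repetitions reported above.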

Yes, I was using successive sequences - when I reset to 123...n order before each use, everything worked fine, thanks. With this apparently not working I searched for alternative algorithms - this one seems by far the shortest. It's a pity there is no attribution of source.

Re. the text, yes I'll change it - I hadn't fully appreciated how things were here.

With a dynamic IP address it may not look as if I'm the same person as before, but I am. 81.153.220.80 18:27, 10 January 2007 (UTC)

CD28 gif 3d rotating structure (Jmol, PDB files)
I really enjoyed the rotating 3d image of CD28 on your CD28 page. My question is simply that I would like to know if I could use your .gif format software (code) to portray the 3d structure of another protein molecule, for which I have the .pdb file (3d coordinates), on my website? Can you please help me with how to do this, if it is indeed legal? Thanks again for another terrific Wikipedia page, as always!

Don Kaiser 


 * If you click on the image, you will find yourself on a page that indicates who created it...ask him how he did it. I've used the free Jmol program to make 3D molecular models from PDB files. I think it can export animated gif images, but I'm not sure. DMacks 20:48, 2 February 2007 (UTC)

AVI files that can't fast forward
I've noticed that certain video files my friends and I have encountered cannot be fast-forwarded; you can't skip to any other point in the clip. The only thing you can do is just play it forward, and to get to a certain point you have to sit and watch everything preceding it. Is there any way that this can be avoided? I tried taking it into Windows Movie Maker and it's unable to import. What's going on here? NIRVANA2764 21:36, 31 January 2007 (UTC)


 * These files are lacking indexes, which tell a program which position in the file corresponds to what time in the movie. virtualdub may be able to help. Droud 01:44, 1 February 2007 (UTC)


 * Try it in VLC. I haven't heard of problems playing avi files, but if there are still problems seeking, then use ffmpeg to re-encode the file's container. This guide explains fixing flv files using ffmpeg - it is the same procedure for fixing (broken) avi to (fixed) avi. --h2g2bob 02:06, 1 February 2007 (UTC)
 * Indexes? That would be a huge waste of storage space and computation. Think about it: a 32-bit pointer for each frame. --wj32 talk 06:30, 1 February 2007 (UTC)
 * Even with the 64b timestamps and 64b pointers, it still comes out to less than 4MB to index every frame in a 2 hour movie. In any case, most AVIs can only be played from keyframes, so the index would only contain those. Droud 12:48, 1 February 2007 (UTC)
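The size estimate above is easy to verify (the two-hour movie is taken from the comment; 30 frames per second is my assumption, since no frame rate is stated):

```python
frames = 2 * 60 * 60 * 30      # two hours at an assumed 30 fps
bytes_per_entry = 8 + 8        # 64-bit timestamp + 64-bit pointer
index_size = frames * bytes_per_entry
# 3,456,000 bytes, about 3.3 MiB -- comfortably under 4 MB, and far
# smaller still if only keyframes are indexed
```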

Automated Google searches: length distribution of written growls
How would I go about writing a C++ or JavaScript program that would create a table (CSV is fine for C++) of the number of Google hits for G followed by n R's, as n increases from 2 to 100? Neon Merlin  03:28, 24 May 2007 (UTC)


 * Here it is in Perl, which should give you the general idea. --TotoBaggins 13:52, 24 May 2007 (UTC)

#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;

# we have to set the user-agent to pretend to be a
# browser, since google doesn't accept robots
my $web_client = LWP::UserAgent->new(agent => "mozilla");

# Google allows query words up to 128 chars long
for my $n (2 .. 127) {
    # make a string of 'r's, of length $n
    my $arrs = "r" x $n;
    my $url = "http://www.google.com/search?q=g$arrs";
    my $response = $web_client->get($url);
    $response->is_success or die $web_client->status_line . ": $url";
    my $page = $response->content;
    my $hits;
    if ($page =~ m#Results.*?([\d,]+) for #) {
        # we captured the hits in the first parens
        $hits = $1;
        # remove the commas from the hits count
        $hits =~ s/,//g;
    }
    elsif ($page =~ /did not match any documents/) {
        $hits = 0;
    }
    else {
        warn "Could not parse url: $url\n";
        $hits = -1;
    }
    print "$hits hits for 'g$arrs'\n";
    # be a polite robot
    sleep 1;
}

Text-only browsing
For the summer vacations, I bought a mobile internet card for my laptop. I'm paying by the megabyte, and would like to minimize expenses. I'm using Firefox 2.0.0.4, Windows XP. So I unchecked "Load images automatically" and "Enable Java" in the "Tools|Options|Content" dialog. However, Flash animations (and images) are still downloaded. There is a "Manage file types" button in the same dialog, but the button that I would like to press for SPL and SWF objects, "Remove action", is grayed and inaccessible; I only have the options of specifying which program to open the file in, or of saving it to disk, which implies that it will be downloaded whichever alternative I choose. So my questions are:
 * 1) How can I convince Firefox to ignore Flash animations?
 * 2) Does anyone have additional advice about minimizing the amount of data transferred?
 * 3) Is anyone aware of a utility program that I can install, to monitor how much data is transferred? There is a tool in the software that came with the card, but I would like to experiment using my broadband connection, before starting to spend $$$ on the mobile card.

I have installed the Adblock-plus plugin, which alleviates the Flash problem somewhat, but far from totally. And yes, I have tried Lynx, but found the user interface too limited for my needs. Thanks for any advice. --NorwegianBluetalk 18:00, 20 June 2007 (UTC)


 * You can use the flashblock or NoScript extension to configure when and where to load flash objects. -- JSBillings  18:08, 20 June 2007 (UTC)


 * There are some text-only browsers, made for either people with visual impairments who use screen readers or for those with low-end computers. However, only a portion of sites support this.  Many require that you have Flash or ActiveX or Java enabled to use the site properly.  Also, a few sites have a "Text only" interface you can select.  For example, here's a text-only, ad-free weather forecast site:  (Except for a few icons, such as for the Moon phase).  I suggest you build a Favorites/Bookmark list of these text-only sites and use them exclusively.  StuRat 18:31, 20 June 2007 (UTC)


 * There is a plugin for Firefox that allows you to totally disable Flash or any other plugin (not addon; MR Tech is good for that), but I can't remember its name. It will even report to the server that your browser does not have Flash capabilities, yet a single click will re-enable Flash functionality (it disables Flash everywhere in the Firefox executable instance). I've mentioned it on here before and am running it currently. Root4(one) 19:37, 20 June 2007 (UTC)


 * You could try Opera Mini (you'd only need a J2ME runtime for your computer). It's designed to be used in slow metered connections like that, and uses Opera's proxy to compress/optimize/etc. the page before it's sent to your computer. --cesarb 22:48, 20 June 2007 (UTC)
 * (Now writing from mobile connection): Thank you all for your advice. I ended up with turning off images and java as stated, plus using Adblock plus and Flashblock. I also found an applet at http://delphi.about.com, called Network Traffic Monitor, written by Zarko Gajic, which allowed me to do some testing before leaving. I'll check out Opera Mini when I'm back. Thanks again. --NorwegianBluetalk 21:41, 21 June 2007 (UTC)

Graphics cards
I want to upgrade the graphics card on my desktop computer. What kind of compatibility issues do I need to think about. How can I work out what type of graphics card it right for me? MHDIV ɪŋglɪʃnɜː(r)d  ( Suggestion? | wanna chat? ) 16:02, 3 November 2007 (UTC)
 * I think it's just a matter of the kind of slot you have open on your motherboard. If you have a PCI-E x16 slot, that fits modern video cards. If you have 2 free slots, you can use dual cards to render the same display, with SLI or Crossfire (If you want this, make sure you buy cards specifically made for SLI/crossfire). Oh, there's the additional consideration of power- you need a very solid PSU to run newer cards (and a monster to run dual cards). Not only that but some of the beastier cards actually require a separate power connector- they trail a wire and you have to have a little power connector next to the PCI-E slot. -- ⁪ffroth 17:02, 3 November 2007 (UTC)
 * You didn't say what OS you are running. If Linux, get a card with an nVidia chip - the drivers are much better than ATI on that platform. SteveBaker 17:31, 3 November 2007 (UTC)
 * Second that ... also consider a graphics card with nVidia chip if you think you might be migrating or dual-booting with GNU/Linux as one (or more) of the options. --Kushalt 00:11, 4 November 2007 (UTC)
 * It's sort of a non-issue since linux is terrible for gaming anyway :) -- ⁪ffroth 01:14, 4 November 2007 (UTC)

I don't mean to be rude but who is talking about video games? --Kushalt 01:57, 4 November 2007 (UTC)
 * Yeah, a graphics card isn't only for gaming, you can use it for flashy, shiny effects as well! --antilivedT 05:27, 4 November 2007 (UTC)

Uninstalling GRUB bootloader
I used to dual boot my laptop with Linux and Windows. The bootloader I had was GRUB. I recently deleted the entire Linux partition. Now the trouble is, when the computer starts, I get the GRUB command prompt and I have to type the following to load from my Windows partition:

I'd like to know if there is some way by which I can make GRUB execute this code automatically each time my computer boots? Or even better, can I completely uninstall GRUB and get my Windows partition to load by default? Thanks for the help!--Seraphiel (talk) 13:11, 12 December 2007 (UTC)


 * Hi .. Insert your windows xp cd, boot from it. Select to go into the "Recovery console". After you get into it, type "fixboot" followed by "fixmbr". That should do it. --RohanDhruva (talk) 13:46, 12 December 2007 (UTC)

Freeware solution for accepting 48-bit scans from an image scanner? (Computing)
I have a scanner capable of outputting 48-bit (i.e. 16 bits/channel) color scans. However, none of the several image editing programs I have seem to be able to accept 48-bit scans directly from the scanner (including two that have limited support for 16-bit-per-channel images). Images imported into the editors via TWAIN would appear as 8-bit-per-channel.

Is there a freeware solution for accepting 48-bit scans from a scanner and saving the scans in a format that preserves the bit depth?

(Update) The problem is solved. Turns out that my set-up was already capable of transferring 48-bit images, just that additional configuration was needed. —Preceding unsigned comment added by 71.175.23.249 (talk) 13:47, 20 December 2007 (UTC)

There is a version of GIMP called CinePaint that was forked off of the main GIMP branch several years ago specifically in order to add deep-pixel editing. It's been used in a bunch of movies (Harry Potter for example!) so it's pretty reliable. It's free - but it's missing quite a few of the newer GIMP features. It's maintained by a bunch of movie studios. SteveBaker (talk) 17:14, 20 December 2007 (UTC)

GIMP question
In the GIMP, is it possible to have a layer with a transparent background, and add a fully opaque layer (for example, a JPG photograph) on top of that so that the transparent pixels stay transparent, but the non-transparent pixels get their colours from the new layer? Or is it possible to "subtract" a layer with a transparent background from a layer with an opaque background, so that the "subtracted" pixels would become transparent? J I P | Talk 18:28, 9 January 2007 (UTC)
 * Yes I'm sure it's possible as I've done it before, but I haven't used it in ages and haven't the slightest idea of how to do it now --⁪froth T C  20:05, 9 January 2007 (UTC)
 * If I understood your intention correctly, this is something that can be done with masks. Go to the layer with the transparent background and choose  and then select   and click OK. Now activate the opaque layer, choose   and proceed with whatever settings there are. Go to the transparent layer's mask (by clicking it in the layer list), , then go to the opaque layer's mask,  . Anchor the mask, and you're done. –m y s i d ☎  21:01, 9 January 2007 (UTC)
 * Yes, this works. Thank you! J I P  | Talk 10:11, 10 January 2007 (UTC)

Creating duplicate hard links, then making them independent
Under Windows XP, I have a large library of PDFs that I need to edit down by deletion, and I want to keep both the edited and unedited versions on my hard drive. There will be room for both, but there isn't room to store the unedited version a second time. So what I want to do is make a copy of the folder with an identical tree, but where each file is a second hardlink to the same file in the original folder. I know this can be done with

fsutil hardlink create C:\oldfolder\subfolder\file.pdf C:\newfolder\subfolder\file.pdf

The problem is that fsutil only handles one file at a time, and the destination folder must already exist. How can I write a batch file that will find all the subfolders and files of the original folder, create corresponding subfolders in the duplicate folder and then fsutil hardlink all the files? Neon Merlin  00:35, 23 January 2007 (UTC)


 * Why would you store the unedited versions a second time? And why would you want to use multiple hardlinks, they're almost never useful. --⁪froth T 01:01, 23 January 2007 (UTC)


 * Well, can you think of a better way to make two copies of a folder, with the ability to delete files from one and keep them in the other, but without taking up twice as much space as a single copy? I don't intend to edit the files themselves, only to delete some and keep others (and maybe rename and move a few). Neon  Merlin  01:10, 23 January 2007 (UTC)


 * Nope. How many directories are we talking about? I'm not sure if you'd be able to preserve directory structure, but maybe you could change the file handler from acrobat reader or whatever to fsutil, and mass-open all of the pdfs. It would be a lot of Ctrl-A and Enter, but it might work. --⁪froth T 01:15, 23 January 2007 (UTC)


 * From a command prompt, do "dir /s /b >file.txt" then use a text editor to convert each line in file.txt into two lines of the form "md C:\newfolder\subfolder\file.pdf\.." and then your " fsutil hardlink create C:\oldfolder\subfolder\file.pdf C:\newfolder\subfolder\file.pdf" (yes it appears possible to make a new folder structure by simply adding \.. to a file which doesn't yet exist). Tested on Win2000. -- SGBailey 11:18, 23 January 2007 (UTC)

MS Excel formula help (SOLVED)
Hi folks, I need some advice with Microsoft Excel. I've got a table with various entries which will be added to all the time; each entry will have a category assigned to it as well as a cost. What I want to do, and can't for the life of me figure out how, is to add up the total cost for each category. I know I could sort the table by category and manually sum it, but I want the entries in chronological order, not category order. I'm sure there must be a formula that will do this easily.

I have attached a screenshot containing an example of what I am trying to do, basically if you look at the example I want another table next to the main one with a list of all the categories and next to it, a cell containing the total cost of all the cells in the main table that have that category next to them. GaryReggae (talk) 20:00, 4 January 2008 (UTC)

Cheers!


 * If I understand you correctly, you need to use the SUMIF function. For example =SUMIF(B1:B20,"Red",C1:C20) gives the total of values in column C that have "Red" in the same row in column B. (Adjust the cell references to suit your exact needs.) AndrewWTaylor (talk) 20:07, 4 January 2008 (UTC)
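Outside of Excel, SUMIF is just a grouped sum; the same idea in a small Python sketch (the rows here are made-up example data):

```python
from collections import defaultdict

# Hypothetical (category, cost) rows, kept in chronological order
rows = [("Food", 12.50), ("Travel", 30.00), ("Food", 7.25), ("Bills", 55.00)]

totals = defaultdict(float)
for category, cost in rows:
    totals[category] += cost
# totals now maps each category to its total cost, e.g. "Food" -> 19.75,
# without ever re-sorting the rows
```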


 * Ah thanks, that's it, I looked at SUMIF but couldn't quite understand how to use it as the help is a bit vague. Your example has solved the problem! GaryReggae (talk) 20:17, 4 January 2008 (UTC)
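For readers following along outside Excel, SUMIF's logic is simple to mimic; a rough Python equivalent of =SUMIF(B1:B20,"Red",C1:C20), with made-up data, would be:

```python
def sumif(criteria_range, criterion, sum_range):
    """Mimic Excel's SUMIF: total the sum_range values whose matching
    entry in criteria_range equals the criterion."""
    return sum(v for k, v in zip(criteria_range, sum_range) if k == criterion)

# Hypothetical data: column B = categories, column C = costs
categories = ["Red", "Blue", "Red", "Green"]
costs = [10, 20, 5, 8]
print(sumif(categories, "Red", costs))  # -> 15
```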

Connecting a computer, a Playstation 3 and a television set
I'm trying to assist my son in connecting an old PC (Windows 2000), a modern television set, and a Sony Playstation 3. He has connected the video signal from the PC to the television via a VGA cable, and the video signal from the PS3 to the television set via an HDMI cable. This works excellently, and he can switch between the video inputs using the remote control of the television. However, two problems remain:
 * 1) He wants to buy a wireless mouse and keyboard for the PC, which he also intends to use for controlling the PS3.
 * 2) He wants to send the audio from all three devices to a set of loudspeakers, which up until now have been connected to the audio output of the PC.
I'd be grateful for advice about an optimal setup where one could achieve this, with as easy switching between the devices as possible, without spending a fortune. I'm also a bit confused about the keyboard and mouse setup. Since the PC and the PS3 are not aware of each other, how does one avoid both devices responding to the signals from the keyboard and mouse? --NorwegianBluetalk 20:18, 5 January 2008 (UTC)


 * For the audio, connect all the audio devices to audio inputs on the TV. Then connect the loudspeakers to the TV's audio-out. This way, whichever device you're switched to on the TV will output its audio through the loudspeakers. For keyboard/mouse, I'm afraid you're out of luck -- you'll either need two separate sets, or you'll need to switch a single set between the PC and Playstation depending on which you want to use. There's no way, as far as I know, for them to "share".  Equazcion •✗/C • 23:19, 5 Jan 2008 (UTC)


 * For the keyboard/mouse, you can use a KVM. I use an Avocent model. It allows me to change the computer from the keyboard itself. Technically, you are asking for a KM, not a KVM. But, you may want to consider running both the computer and PS3 into the KVM and running the KVM to the TV. Then, when you switch, you switch keyboard, video, and mouse all at once. You don't have to switch video with the TV remote and KM with the KVM. -- k a i n a w ™ 02:48, 6 January 2008 (UTC)


 * I thought of that after posting my answer, but that would add a lot of extra wiring and complication. You'd need to get 2 more wires for the keyboard and another 2 for the mouse, along with the KVM box itself, and every time you wanted to switch you'd need to go over to the KVM and switch it -- it wouldn't be automatic. It's not worth the money, the hassle, or the added wire entanglement, if you ask me.  Equazcion •✗/C • 02:54, 6 Jan 2008 (UTC)


 * My KVM came with 4 connections for 4 computers. I plugged a wireless keyboard and mouse into it. To switch, I press "Print Screen" and then 1, 2, 3, or 4 to switch computers. There is no mess of extra wires and no hassle of going over to the KVM to switch things. -- k a i n a w ™ 03:00, 6 January 2008 (UTC)


 * Thanks, Equazcion, and Kainaw. Connecting the audio from the PC to the TV was the way to go (something we would have found out if we had RTFM a bit more carefully...); it worked beautifully. Regarding the audio from the PS3, this turned out not to be a problem. I wasn't aware when posting that the HDMI cable took care of that. I did suspect that such a thing as a KVM switch had to exist, however I had no idea of what it was called. We'll check it out. Thanks again! --NorwegianBluetalk 16:46, 6 January 2008 (UTC)

Excel help required
Hi

In MS Excel I want to select all cells that have a particular value. I'm sure this must be really straightforward but I can't work it out from the help file.

Thanks a lot --195.167.178.194 (talk) 12:39, 8 January 2008 (UTC)
 * If you are looking to change the values in these cells you can use Ctrl-H to replace and Excel will search the document for you. Or, CTRL-F and then enter your value and hit Find All. You can then Ctrl left click all the cells you need.  Lanfear's Bane |  t  13:12, 8 January 2008 (UTC)


 * (ec) I think you would need a macro to do this:

Sub SelectByValue()
    Const VALUE_TO_FIND = "test" ' <<< set this to the value you want to find
    Dim ValueCells As Range, c As Range
    For Each c In ActiveSheet.UsedRange
        If Not IsError(c.Value) Then
            If c.Value = VALUE_TO_FIND Then
                If ValueCells Is Nothing Then
                    Set ValueCells = c
                Else
                    Set ValueCells = Application.Union(ValueCells, c)
                End If
            End If
        End If
    Next
    If ValueCells Is Nothing Then
        MsgBox "Value not found"
    Else
        ValueCells.Select
    End If
End Sub
 * AndrewWTaylor (talk) 13:14, 8 January 2008 (UTC)

SVG files
Two part question:
 * Is there anyone on Wikipedia that can convert a PNG image to SVG?
 * If so, can anyone convert this image from PNG to SVG? TomStar81 (Talk) 06:34, 15 January 2008 (UTC)


 * There are plenty of people who can vectorise on here, but I doubt your request can be fulfilled. The original is a scan of a real object, not a diagram or drawing, and thus unsuitable for SVG. --antilivedT 08:11, 15 January 2008 (UTC)
 * Actually, that looks like a drawing to me, not a scan. Notice how all the lines and stars are identical. But the resolution is frightfully low. --24.147.69.31 (talk) 16:26, 15 January 2008 (UTC)


 * If you bribed or threatened someone they could probably redraw it from scratch in SVG, but it's tedious work, mate. Cheers, Ouro (blah blah) 09:37, 15 January 2008 (UTC)


 * If you're looking for a quick way to vectorize something, Vector Magic works better than most, and is free and runs in your browser. --24.147.69.31 (talk) 16:26, 15 January 2008 (UTC)

Inverse color
How do I calculate the inverse color for a color? i.e. if I'm given a color, I want the color best suited for the background color, i.e. most readable.

I have seen many many online inverse color tools that simply assume it's the 256 complement (if I am using that term correctly), so (to use decimal), 10's inverse is 245. But of course then you get the color #808080 (i.e. 128 128 128), and the inverse is not correct at all.

Does anyone have a better formula? I'm thinking perhaps the color such that new minus old color has the maximum absolute magnitude. Any ideas? Ariel. (talk) 10:48, 22 January 2008 (UTC)


 * I'm not sure how exactly you're calculating the inverse, but you want to be breaking the colour down into its red green and blue components. For a given colour $$(R,G,B)$$, your inverse will be $$((255-R),(255-G),(255-B))$$. Readro (talk) 11:21, 22 January 2008 (UTC)


 * The answer depends on definition of  'inverse' , that is what color model you use. It's quite safe to assume that black and white should be 'inverse' to each other. But what do you mean by inverse of, say, bright yellow? Should it be dark brown (light–dark inversion, i.e. lightness inversion in HSL color space) or rather intense blue (RGB inversion, proposed by Readro above)? --CiaPan (talk) 12:30, 22 January 2008 (UTC)
 * If what you're after is a color which is as different as possible from the given color, then yes, you want each of the RGB components to have the maximum distance between them. This will always be a color with RGB components of 0 or 255. So for (192, 140, 60), for example, the most distant color will be (0, 0, 255). Of course, I am assuming here that in terms of perception, the components are completely unrelated and 128 is midway between 0 and 255. Also, this will not be bijective - the same color will be the "inverse" of many colors. If you want a bijection which guarantees that the colors will be quite different, you can take each RGB component and add 128 modulo 256. -- Meni Rosenfeld (talk) 12:39, 22 January 2008 (UTC)
 * Readro's inversion, and 128 modulo 256 both don't handle gray - they produce gray as the result. Meni's idea - is I guess, for each component - if it's greater than 127 make it 0, if it's less or equal, make it 255. I think that should work. CiaPan: you ask which type of inversion - I would answer: both. Because otherwise you don't handle gray (the opposite color to gray is still gray), so you need lightness change, but light yellow on dark yellow is also hard to read, so you also want a color inversion. Would Meni's idea do that? Ariel. (talk) 13:30, 22 January 2008 (UTC)
 * Ok, now I understand (I hope). :) Well, yes, Meni's idea is best for you: it will give the most contrasting color possible, approximately inverting the hue of a given color and choosing its maximum or minimum value. For example for yellow and similar colors you'll get intense blue; for any bright gray color, including white, you'll get black; for all very dark colors you'll get white, and so on. --CiaPan (talk) 13:41, 22 January 2008 (UTC)
 * [ec] Adding 128 modulo 256 does handle gray - it produces black or white for the result (for dark grey it will give light grey, which works but is perhaps suboptimal). My other suggestion will turn light yellow to light blue and dark yellow to white. -- Meni Rosenfeld (talk) 13:56, 22 January 2008 (UTC)
 * Sorry, I misunderstood what you wrote (misread the modulo). I tried it - it handles some colors better than others. It doesn't produce the opposite color (from a color wheel) for all colors, like this color for example; on the other hand this seems OK. The other method you posted works nicely, except cyan is not a very easy color to read. Ariel. (talk) 15:38, 22 January 2008 (UTC)
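To make the two rules under discussion concrete, here is a small Python sketch of both (the function names are mine, nothing standard):

```python
def most_distant(r, g, b):
    """Snap each channel to whichever of 0 or 255 is farther away
    (Meni's first rule: >127 becomes 0, otherwise 255)."""
    return tuple(0 if c > 127 else 255 for c in (r, g, b))

def add_128(r, g, b):
    """Shift each channel by 128, wrapping around; unlike most_distant
    this is a bijection (Meni's second rule)."""
    return tuple((c + 128) % 256 for c in (r, g, b))

print(most_distant(192, 140, 60))   # -> (0, 0, 255), as in the example above
print(most_distant(128, 128, 128))  # mid-grey -> (0, 0, 0), i.e. black
print(add_128(192, 140, 60))        # -> (64, 12, 188)
```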


 * You've asked two completely different questions here. A perceptually "opposite" color is not generally a good background color for maximum readability. The most-distant-in-RGB strategy will get you cyan on red, which is a terrible combination for readability even though these are perceptually very different colors. The add-128-modulo-256 strategy will give you such horrors as red on grey. In these cases cyan on black and red on white would have been far better. You should also take into account that you may have colorblind users, and what looks good to you may be unreadable to some of them. I think your best bet is to use a black background for all bright colors and a white background for all dark colors. To judge the brightness of an RGB color you could use the sRGB formula in the Luminance (relative) article. If it isn't obvious by now, your question really belongs on the science desk; human color perception is complicated and weird, and there's no simple mathematical answer to the question of what looks most different or best. -- BenRG (talk) 15:24, 22 January 2008 (UTC)
 * The color is actually the background color, and I just want something readable for the text label on top of the color (the color blind person would read that - I think contrasting colors and brightness simultaneously should be readable to a color blind person, just changing the color would not be). I tried making all the text either black or white, it worked for many colors, but not for all of them. The ones that were mid-way in luminosity were hard to read. So far the most-distant method worked best, but some colors look terrible, as you mentioned. There has to be a better way - your cyan on red are both full luminosity colors, I want to also invert the luminosity. So solid red should become black (the cyan at 0 darkness). (Well, I think I want that - I don't really know what I want :) Ariel. (talk) 15:49, 22 January 2008 (UTC)
 * Can you give examples of colours for which neither a white nor a black label worked well? --Lambiam 18:02, 22 January 2008 (UTC)
 * For example: this color in white, and this one in white. I mean it's not terrible, but a color instead of black or white would be better. Ariel. (talk) 15:47, 23 January 2008 (UTC)


 * Note that properly converting RGB colors to grayscale isn't quite as simple as just adding the components together. In fact, the precise conversion formula will depend on the specific RGB color space used, but a quick and dirty rule is to weigh the red channel by 30%, the green by 60% and the blue by 10%. Thus, to pick the maximally contrasting color from the set {black, white}, given the background color (r, g, b) ∈ [0,1]³, one might use the formula:

$$\mbox{color } = \begin{cases} \mbox{black,} & \mbox{if } 3r + 6g + b \ge 5 \\ \mbox{white,} & \mbox{otherwise}. \end{cases}$$
 * —Ilmari Karonen (talk) 04:31, 23 January 2008 (UTC)
 * In particular, the rule I give above produces, for extremal background choices, black on green, yellow, cyan and white, and white on black, red, blue and magenta. Simply using the unweighted mean would yield the opposite choice on green and magenta, which I think you'll agree would not look optimal. —Ilmari Karonen (talk) 04:38, 23 January 2008 (UTC)
 * You could also perhaps experiment with lowering the threshold a bit, say from 5 to 4.5, since the human eye seems to be somewhat more comfortable with black on a dark background than with white on a bright one. But the 50% threshold ought to work well enough.  —Ilmari Karonen (talk) 04:48, 23 January 2008 (UTC)
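Ilmari's threshold rule translates directly into code; a Python transcription (channels in [0,1], threshold 5 as given above) might look like:

```python
def label_color(r, g, b, threshold=5.0):
    """Pick black or white text for a background (r, g, b), each channel
    in [0, 1], using the weighted rule: 3r + 6g + b >= threshold -> black."""
    return "black" if 3 * r + 6 * g + b >= threshold else "white"

print(label_color(0, 1, 0))  # green background  -> black text
print(label_color(0, 0, 1))  # blue background   -> white text
print(label_color(1, 0, 1))  # magenta background -> white text
```

Lowering the threshold parameter (e.g. to 4.5, as suggested) simply makes black text win on slightly darker backgrounds.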

Is Ariel asking about complimentary colors as in art and design? Complimentary colors are the exact opposite of each other and show the best contrast. It is based on RYB, not RGB. Two complimentary colored paints when mixed will always be black. See http://www.faceters.com/askjeff/answer52.shtml NYCDA (talk) 23:58, 22 January 2008 (UTC)


 * Two complimentary colors, as it says in the article you linked, usually mix to a muddy brownish color. Black and white have to be produced separately, essentially as two more primary colors. Black Carrot (talk) 04:16, 23 January 2008 (UTC)


 * That would be Complementary colors. AndrewWTaylor (talk) —Preceding comment was added at 08:45, 23 January 2008 (UTC)
 * I know what they are, I was asking how to calculate them. And if they are a good choice. Ariel. (talk) 15:47, 23 January 2008 (UTC)

(See above for 2 colors that didn't work great with black or white.) All the ideas posted worked pretty well, but I think it can be better. I like the most distant color rule, but I'd like to remove cyan, magenta, and yellow from the options, since those colors aren't the easiest to read. Any ideas? Ariel. (talk) 15:54, 23 January 2008 (UTC)


 * Mathematically the opposite of "all" is "some", not "none". The opposite of gray is white or black depending on your perspective. You might be better off if you add 128 to the component if the value is less than 128, and subtract 128 if the value is greater than 128. For 128 itself, you can set it to 0 or 255. Using this rule, you get
 * (four colored sample swatches, not reproduced here)

NYCDA (talk) 19:11, 23 January 2008 (UTC)

Removing email addresses
Whenever someone posts their email address here, some well-meaning soul will remove it and say "I'm helping you not get spammed". My question is: how would that help? Presumably spambots are pointed at en.wikipedia.org and spider down from there. If they make it from the Main Page to Reference_desk to Reference_desk/Computing, surely they'll take the next step to the page history and harvest the addy from there. Is this just a WP superstition, or is there evidence that spiders are programmed to ignore the history pages? Bonus question: why isn't WP:RD/C linked in my post? Thanks! --Sean 23:55, 29 January 2008 (UTC)
 * I don't know about webcrawlers and the page histories, but your WP:RD/C link doesn't link because you can't link to the page you're on. Useight (talk) 00:30, 30 January 2008 (UTC)

Good question. When I remove e-mail addresses, I am not thinking about automated scripts. I am thinking about humans. I assume it would be possible for administrators to remove the edit from page history but I am not sure if automated scripts will be able to pick up the traces after the deletion (I presume that traces of undoable actions of administrators are logged). Kushalt 01:10, 30 January 2008 (UTC)


 * I mostly just remove with a comment saying "see the rules up top." It's more of an educational "no that's not how we reply here" kind of thing.  Of course, if somebody posts and then just waits around for an email and never comes back, then they'll never see the message.  C'est la vie.  -- LarryMac  | Talk  01:15, 30 January 2008 (UTC)


 * If you post your email address here, it can be harvested, either via a web spider or via downloading the database. If you don't post your email address, a harvester can't figure it out from the history. The history shows your Wikipedia username. This is not your email address. Marnanel (talk) 01:48, 30 January 2008 (UTC)


 * The point is that the history will show the email of the person whose email was removed by a helpful editor - as in this diff. -- LarryMac  | Talk  02:09, 30 January 2008 (UTC)


 * Oh, I see, sorry. Marnanel (talk) 02:16, 30 January 2008 (UTC)
 * I am fairly certain that administrators can only delete pages, not edits. For pages, the record of the deletion taking place is publicly available, but the deleted content (such as an email address) is not. -- Meni Rosenfeld (talk) 13:09, 30 January 2008 (UTC)
 * They can (and do) delete edits by the cumbersome procedure of deleting the page and undeleting some but not all of the edits. The deleted material is then available to all admins. Since this is still fairly public, serious problems (such as libel) are dealt with via oversight. Algebraist 16:38, 30 January 2008 (UTC)

Can users with oversight see what other users with oversight have deleted? Kushalt 20:21, 30 January 2008 (UTC)
 * yeah --D\=< (talk) 21:22, 30 January 2008 (UTC)
 * Not if they do the delete-then-undelete-a-limited-number-of-edits trick. That makes it so that even admins can't see it. --24.147.69.31 (talk) 23:31, 30 January 2008 (UTC)

Security on Linux
Real amateur question here.... I'm primarily a Windows user, but I have Ubuntu installed on one computer. Do I need to get myself some sort of anti-virus software to run under Ubuntu? If so, can you recommend a package that will work for me? ike9898 (talk) 15:56, 10 February 2008 (UTC)
 * It's unlikely you'll need anti-virus software for Linux, but if you do need one, there's ClamAV, which is the only one I've heard of. x42bn6 Talk Mess  16:12, 10 February 2008 (UTC)
 * All you really need is to keep up to date with the security patches. This can be set to install automatically, but I think the default is to alert you like this. A quick google shows this page on psychocats.net, which has a very good set of pointers. You generally don't need to worry about a firewall, the Linux one is called iptables. --h2g2bob (talk) 18:36, 10 February 2008 (UTC)
 * This link was VERY helpful; exactly what I wanted to learn about. Thx! ike9898 (talk) 23:30, 10 February 2008 (UTC)
 * If you want to configure the firewall on Ubuntu, install the program Firestarter using Synaptic. — BradV 21:16, 10 February 2008 (UTC)
 * I really hate Firestarter. If you know what you're doing then you don't need a GUI, and if you don't, then Firestarter just completely screws everything up. D\=< (talk) 04:48, 11 February 2008 (UTC)
 * Avast runs on Ubuntu, if you want to try that.--ChokinBako (talk) 08:16, 11 February 2008 (UTC)

mp3 music collection
What programme would the Wikipedians recommend I use to organize my music collection automatically? I.e. organize the folders into artist/album/track and also rename all the tags to their correct names. A bonus would be for it to download album art. Does such a programme exist, or do I have to do it myself? Thanks RobertsZ (talk) 10:54, 9 February 2008 (UTC)


 * I'd recommend WhereIsit for the cataloguing and Tag&Rename for the tagging and accessing/inserting album art from Amazon. -- Web H amster  11:10, 9 February 2008 (UTC)


 * iTunes. —Nricardo (talk) 16:21, 9 February 2008 (UTC)


 * You don't say what operating system you use. Anyway, I use Rhythmbox and love it. —Keenan Pepper 22:08, 9 February 2008 (UTC)


 * For Linux, I would use either Rhythmbox or Amarok. For either Windows or Mac, iTunes is the way to go. — BradV 03:25, 10 February 2008 (UTC)

PDF Files
Does anyone know how to do this ... or even if it can be done at all? Say that I have 10 pages in a report ... Pages 1 through 9 are in a single PDF file ... and Page 10 is a single page in M.S. Word. Can I somehow bring that lone Page 10 from M.S. Word into the PDF file ... so that all 10 pages are all included in one single document? If so, how do I do that? Or is this something that can't even be done? The long story short is ... the first 9 pages are coming from one source ... and they (all 9) can be converted into a PDF file as a whole unit. And that last page (page 10) is coming from a different source ... and it can also (separately) be converted into a PDF file. But, the ten pages as a whole unit cannot be converted into a PDF file as a whole --- since they are coming from different sources before PDF conversion. So is there a way to join together somehow the two separate PDF files into one file, with all ten pages together? Thanks. (Joseph A. Spadaro (talk) 20:19, 9 February 2008 (UTC))


 * There's Pdftk, which is free software, and of course the commercial Adobe Acrobat if you want a user-friendly interface. 84.239.133.86 (talk) 20:34, 9 February 2008 (UTC)


 * What about printing off the pdf and the word, and then shoving the 10 pages back through the scanner. Or is that just way too analogue? Joesydney (talk) 02:11, 10 February 2008 (UTC)


 * Yeah, that's pretty analogue, especially when there is PDF merging software available for free. --98.217.18.109 (talk) 16:34, 10 February 2008 (UTC)

Thanks for all the replies. Number 1 ... I never knew that there even was free software available to do PDF merges. Good to now know. Number 2 ... No, I would never even consider the "analogue" re-scanning of 10 pages. It's a much longer document, and the 10 pages was just a hypothetical example for the purposes of this question. Thanks to all for the input. Much appreciated. (Joseph A. Spadaro (talk) 22:59, 10 February 2008 (UTC))


 * Thanks to 84.239.133.86 for the link to Pdftk. Exactly what I've been looking for myself. --NorwegianBluetalk 20:22, 11 February 2008 (UTC)

Windows Vista, suitable directory for standalone command line utility
I'm trying to help my father-in-law, who bought a new PC with Windows Vista, to get a command line utility working. I've never touched Vista. His hearing isn't too good, and he lives in a different city, so this is (a rather difficult case of) telephone support. So far, he has managed to copy the .exe file to a usb-stick. The question is, where to put it. On XP, I'd simply drop it in C:\Windows or C:\Windows\System32. Is the directory setup the same on Vista, and if so, would copying it there work as it would in xp? Otherwise, what would be the easiest solution? Having him create a directory for it, modify the PATH environment variable etc. to make Vista aware of the program would be *very* difficult. --NorwegianBluetalk 19:52, 11 March 2008 (UTC)
 * I believe command line support was removed from Vista. ArcAngel (talk) 21:17, 11 March 2008 (UTC)
 * It's still there, just less obvious. Either hit start and search for 'cmd' or go to all programs->accessories->command prompt. Anyway, I just checked the directory structure on my Vista box, it's the same as XP - those paths are still good. Not sure if UAC would take too kindly to it, but unless it does really weird stuff, it should be fine. What does this program do? CaptainVindaloo t c e 21:29, 11 March 2008 (UTC)
 * It's good old grep, which he uses for searching his genealogy database (a huge collection of huge text files). Unfortunately, the fact that he's working from the command line does not imply that he's computer savvy, just that he started doing this on a CP/M machine... On Vista, we had a hard time even launching a shell, but this site came to the rescue. Will we be able to copy the .exe to c:\windows from an ordinary dos shell, or will we need a shell with special privileges? If special privileges are needed, can these be acquired once the shell is launched (like su in unix)? --NorwegianBluetalk 22:04, 11 March 2008 (UTC)
 * I think maybe I found the answer to my preceding question in a subpage of the site I linked to, but I'd be grateful if someone would check it out and confirm:
 * Logon to Vista using your normal username and password.
 * Click on the Start button
 * Click on Start Search.
 * Type, cmd.
 * Right-click cmd, select 'Run as administrator' from the shortcut menu.
 * In the last step, what exactly is meant by right-clicking? Clicking on the blank background of the shell, on the icon at the top left of the window frame (system menu) or something else? If you're thinking, "well why don't you try?", remember: I cannot see the machine, the gentleman I'm trying to help does not communicate very clearly what he is doing, and has problems in hearing what I am saying. --NorwegianBluetalk 22:20, 11 March 2008 (UTC)


 * That's right, I was just typing up a response to say the same thing. You need to close the command prompt, right click the command prompt icon and select run as admin from the menu. Provided the text files are in his workspace (C:\Users\username), there shouldn't be any more problems from here. CaptainVindaloo t c e 23:04, 11 March 2008 (UTC)
 * You definitely should not need elevated privileges to grep. D\=< (talk) 01:49, 12 March 2008 (UTC)
 * Clearly not, the question was whether you need elevated privileges to copy grep.exe from the usb-stick to C:\windows. CaptainVindaloo, you're saying that you need to right-click the cmd-icon in the start-menu, right? Is there no way to elevate the privileges of a shell that is already running? --NorwegianBluetalk 12:54, 12 March 2008 (UTC)
 * Yep, it's the start menu icon you're after. I don't think there is a way of elevating cmd's permissions while it's still running. Sorry Froth, I didn't mean to imply you needed elevated permissions to use grep, I just meant you just need it once to install it in system32 without aggroing UAC. CaptainVindaloo t c e 18:27, 12 March 2008 (UTC)
 * Thanks a lot! --NorwegianBluetalk 19:22, 12 March 2008 (UTC)

Another way to do it: Browse to the desired folder. Hold down shift and right click on the folder. Select "Open Command Window Here". --— Gadget850 (Ed)  talk  -  19:49, 12 March 2008 (UTC)

Administrator Rights in Windows XP
I just inherited a laptop from my company. They are very security conscious and I only have "user" rights on the laptop. I can't even change the time! Anyway, is there any way to change my access to "administrator" rights so I can actually use the computer? It has Windows XP Professional. Any suggestions? Tex (talk) 19:47, 28 March 2008 (UTC)
 * If you could do that easily, wouldn't that defeat the purpose of having "user" rights? If you're having a problem with it, I'd bring it to your company and state your case to them... Someletters<Talk> 20:27, 28 March 2008 (UTC)


 * It is very easy to do. Google for "windows administrator password change knoppix". What you will find is a lot of instructions for using Knoppix to boot into linux, then mount the Windows drive, then change the administrator password, then reboot into Windows. All operating systems which keep the passwords on the disk are prone to this sort of hack. -- k a i n a w ™ 20:30, 28 March 2008 (UTC)


 * This free tool also lets you easily reset the administrator password for XP. I've used it many times when working with machines that were once locked up but nobody who had done the locking was still around. (As for whether you should do it, it's your judgment call, not mine.) --Captain Ref Desk (talk) 20:35, 28 March 2008 (UTC)

Concatenate audio files
How can I concatenate audio files? I am using Linux --B. Rajasekaran 18:47, 21 June 2008 (UTC)
 * This page gives some info on doing that. See the "Splitting/Concatenating MP3 Files" section. Jessica  N10248  19:04, 21 June 2008 (UTC)
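As a rough aside: bare MPEG audio frames can often be concatenated byte-for-byte, although ID3 tags sitting between the joined files can glitch some players; dedicated tools (such as mp3wrap, or the methods on the linked page) do the job more cleanly. The naive byte-level approach, sketched here with stand-in files, is just:

```shell
# Stand-ins for real MP3 files, so this sketch is self-contained:
printf 'FRAMES-A' > part1.mp3
printf 'FRAMES-B' > part2.mp3

# Naive byte-level concatenation; plain MPEG frames play back-to-back,
# though mid-file ID3 tags may confuse some players:
cat part1.mp3 part2.mp3 > combined.mp3
```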

Linux on a USB drive
I'd like to install Linux on my USB drive. (1 GB total, ~800 megs free.) Is it possible? What distribution should I choose? (Other than Fedora 9's LiveCD->USB tool.) --grawity 15:20, 23 July 2008 (UTC)


 * It's definitely possible, assuming your computer will boot from a USB drive. Here's a page on using Puppy Linux.  -- LarryMac  | Talk  15:24, 23 July 2008 (UTC)


 * I know it's possible (I tried it with Fedora's tool mentioned above), but I want a real install - not something that runs off a read-only image with a few megs for user data. --grawity 16:25, 23 July 2008 (UTC)


 * I guess I was confused by your use of the phrase "Is it possible?" -- LarryMac | Talk  17:39, 23 July 2008 (UTC)


 * may help you. —Preceding unsigned comment added by Willnz0 (talk • contribs) 22:56, 23 July 2008 (UTC)


 * PendriveLinux. The min requirements are 1GB though; and you gotta have windows to install it :/ Abhishek (talk) 13:48, 24 July 2008 (UTC)


 * This also runs out of a squashfs image. --grawity 13:54, 24 July 2008 (UTC)


 * Unetbootin (http://unetbootin.sourceforge.net/) is a very good tool to create Live USBs, it works on Windows and Linux, and can install various distros... SF007 (talk) 14:34, 24 July 2008 (UTC)

filesystem in a file, linux
I have a 1 GB file called filesys. I want to format it into a filesystem and be able to mount it. Can someone give me a tutorial? I'm pretty sure this is possible, right?


 * Quick start: mke2fs filesys ; mount -o loop filesys /some/directory
 * Add whatever filesystem options you want to mke2fs, like -j to create an ext3 journal. Add -F to avoid the "are you sure you want to do that on a regular file" prompt.
 * If you want to bind the file to a block device without mounting it, losetup /dev/loop0 filesys --tcsetattr (talk / contribs) 05:14, 29 July 2008 (UTC)

Raw (dd) copy of smaller to larger disk.
[http://en.wikipedia.org/wiki/Wikipedia:Reference_desk/Archives/Computing/2008_July_28#Raw_.28dd.29_copy_of_smaller_to_larger_disk. Link to archived thread]

Scenario: I'm using a linux machine, where the main disk with linux is /dev/sdb. Let's assume that I have also connected a second disk (400GB) taken out of a healthy windows machine, recognized as /dev/sda, which has a single NTFS partition. Finally, a third disk (500GB) is connected, which is unformatted, and recognized as /dev/sdc.

Question: If I first use dd to make a raw disk image of the windows disk (dd if=/dev/sda of=my.image bs=1024), and then copy the image (dd if=my.image of=/dev/sdc bs=1024) to the unformatted, larger disk, will the larger disk then be functional (i.e. bootable as the main windows disk in the machine where its contents came from, or mountable in linux, or readable in some other way)?
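The dd mechanics can be sanity-checked with ordinary files before touching real disks; this sketch (file names made up) mimics imaging a small "disk" onto a larger one, and shows the larger target keeps its extra tail:

```shell
# Make a 1 KiB "small disk" and a 2 KiB "large disk" of zero bytes
dd if=/dev/zero of=small.img bs=512 count=2 2>/dev/null
dd if=/dev/zero of=large.img bs=512 count=4 2>/dev/null

# Put recognizable data at the start of the small disk
printf 'NTFSDATA' | dd of=small.img conv=notrunc 2>/dev/null

# Image the small disk, then write the image onto the larger one;
# conv=notrunc leaves the larger disk's remaining space in place
dd if=small.img of=disk.image bs=512 2>/dev/null
dd if=disk.image of=large.img bs=512 conv=notrunc 2>/dev/null

# The first 1 KiB of the large disk now matches the small one; a real
# partition table copied this way still describes the original (smaller)
# partition, leaving the tail of the big disk as unallocated space
cmp -n 1024 small.img large.img && echo identical
```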

Motivation: The scenario above does not describe what I'm trying to do, but I need to know if it would be expected to work, in order to interpret the results of an attempt at data recovery.

The real problem: I'm trying to help some friends recover data from a crashed harddisk. I have mounted the disk on a linux machine. It is seen by the bios, and cfdisk and sfdisk report no problems. However, when I try to mount it (mount /dev/sda1 /mnt/bad-disk), I get the message

$MFTMirr does not match $MFT (record 0).
Failed to mount '/dev/sda1': Input/output error
NTFS is either inconsistent, or you have hardware faults, or you have a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows TWICE. The usage of the /f parameter is very important! If you have SoftRAID/FakeRAID then first you must activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for the details.

So I checked if there are damaged sectors:

dd if=/dev/sda of=/dev/null bs=1024

dd terminated after about 3 GB, reporting damaged sectors.

I then used dd_rescue, to get an idea of the extent of the damage:

dd_rescue -A -v /dev/sda /dev/null

(-A replaces damaged sectors with zeroed sectors of the same size, -v is verbose). The result was that only 36 sectors were unreadable, some of which might be recoverable by reading the bad parts in reverse and patching them into the main image. So my plan is this: use dd_rescue to make a disk image with the unreadable sectors replaced by zeroed sectors, then dd the image to an empty, healthy disk, and hope that it is mountable in either linux or windows. The problem is, I don't have a disk with the same geometry as the damaged one. I do, however, have a larger disk.

Other suggestions on how to recover the data would of course also be appreciated. In particular, advice about software that is able to reconstruct a physically OK but logically faulty NTFS disk would be most welcome. Thanks, --NorwegianBluetalk 23:15, 28 July 2008 (UTC)
 * I have done this far too many times so trust me here
 * First, go to the HDD manufacturer's website and download their diagnostic floppy/CD. Trust me - just do it. They can do a proper fix of all bad sectors and do a proper test of all the mechanics. This should be your first step with any hard drive that has a physical problem like yours. Once it is done scanning, see if the drive boots or if you can mount it. Chances are that there is extremely little file damage
 * Now you can try your dd trick - which SHOULD work unless your drive is so screwed even those manufacturer boot CDs can't fix it. Then the only thing you can do is try to bang the disk against stuff and put it in the freezer to get really cold.
 * If it will not boot Windows from the new drive, slave the drive to another Windows install and use R-Studio NTFS if there are files missing. Only 50 bucks and the best recovery software I know of. If you want a free - but still great - solution, try the Ultimate Boot CD for Windows. It is free but requires an XP CD to build it. Every computer guy should have this in their bag. I PERSONALLY endorse it. It has similar free tools - masses of them. --mboverload @  01:28, 29 July 2008 (UTC)
 * Thanks a lot for your advice, mboverload. It's a Maxtor disk, so Seatools would be the program to use. I'll try it, and if it doesn't succeed, try the dd route, and check out the tools you recommend if it still isn't mountable. These things tend to take time, and mine is limited, so by the time I'm finished, this thread may have reached the archives. Therefore, I'll write a message on mboverload's talk page when I'm done, reporting whether I succeeded.


 * Meanwhile, I'm still interested in hearing from anyone who has tried doing a raw disk copy of a smaller to a larger disk, or who knows for a fact that this works, and that the remaining space is flagged as free space, as one would expect. --NorwegianBluetalk 18:00, 29 July 2008 (UTC)


 * I've done it a few times when replacing a smaller hard disk with a larger one. Every time, though, I copied one partition at a time, rather than the full-disk copy that you're trying.  Here are the steps I did:
 * Partition the new disk so it has one partition that is exactly the same size as the partition you're trying to copy
 * Use dd to write the data to the new partition. You can do either a partition-to-partition copy (dd if=/dev/sda1 of=/dev/sdb1) or a file-to-partition copy (dd if=/home/myimage of=/dev/sdb1)
 * Do any necessary repairs to the filesystem.
 * Mount the new partition to make sure everything worked correctly.
 * At this point, if you're using a growable filesystem (ext2/ext3 are, FAT isn't, I don't know about NTFS), you can enlarge the partition and then enlarge the filesystem to fill the available space. Alternatively, you can create a new partition out of the empty space.
 * If you've got an image of the disk, you can use Linux's loopback filesystem features to use it as if it were an actual physical disk. --Carnildo (talk) 21:59, 29 July 2008 (UTC)
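The partition-at-a-time recipe above can likewise be rehearsed with files (a sketch; small1.img stands in for the source partition and new1.img for the freshly created, same-size target; resize2fs is only mentioned, not run):

```shell
# Source "partition": 100 KiB of data.
dd if=/dev/urandom of=small1.img bs=1024 count=100 2>/dev/null

# Step 1: make the target exactly the same size as the source.
truncate -s "$(stat -c %s small1.img)" new1.img

# Step 2: raw-copy the data across (like dd if=/dev/sda1 of=/dev/sdb1).
dd if=small1.img of=new1.img bs=1024 conv=notrunc 2>/dev/null

# Steps 3-4: on a real filesystem you would fsck and mount here;
# for the file stand-ins we just verify the copy is byte-identical.
cmp small1.img new1.img && echo "identical"

# Step 5: enlarge the container afterwards; with a growable filesystem
# you would then grow the filesystem into it (e.g. resize2fs for ext2/3).
truncate -s 150K new1.img
stat -c %s new1.img   # 153600 bytes
```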
 * Thanks! This was very useful information. I like Partimage for backups, but have been a little worried as to what would happen if I had to restore to a disk with a different geometry. Another tool to achieve what you describe is the Seagate Disk Wizard (works if there's a Seagate or Maxtor disk connected). It does exactly what you describe in one step, including resizing the partitions (fat and ext3 also). I used it recently, but don't remember whether I had to use the Super Grub Disk to boot into linux, and do a grub-install to get grub working. The program warned that I would need a bootable medium to get in touch with linux, but as far as I recall, it wasn't necessary. Seagate Disk Wizard is a wonderful tool - when it works. I was unable to convince it to let me set the partition sizes manually (but heck - it did a good job at resizing proportionally), and I was unable to successfully clone an IDE disk to a SATA disk. --NorwegianBluetalk 07:20, 30 July 2008 (UTC)
 * The GParted live cd will expand NTFS volumes. I've done it.  It is very picky though - if it detects any problems it'll say "fogettaboutit".  Which is what it should do! --mboverload @  02:56, 30 July 2008 (UTC)
 * I've succeeded in rescuing the entire disk without doing any of the above! When I added an additional IDE disk to the PC, the defective SATA disk suddenly started working. I will do some additional diagnostics over the weekend, in order to see if this is reproducible, and understand what happened. I'll post the details on mboverload's talk page. Thank you, mboverload and Carnildo, you have taught me a lot. --NorwegianBluetalk 07:35, 30 July 2008 (UTC)

Follow-up on reference desk question
Several weeks ago, you answered a question about an attempt of mine at [http://en.wikipedia.org/wiki/Wikipedia:Reference_desk/Archives/Computing/2008_July_28#Raw_.28dd.29_copy_of_smaller_to_larger_disk. rescuing a crashed hard disk]. The hard disk was a SATA disk, connected to a PCI SATA card on a PC with a motherboard that supported IDE only. The defective disk mysteriously became mountable in ubuntu when I connected a third hard disk to the system. I promised to come back with some diagnostics, to understand what was going on. The symptoms were as follows:
 * In the power-on self-test, the SATA disk was detected.
 * Scanning the disk with dd_rescue showed that only 36 sectors were unreadable.
 * I connected a third IDE disk to the system (as slave, on the same cable as the boot disk), and suddenly the SATA disk was mountable from ubuntu.
 * When I disconnected the IDE disk, the SATA disk was no longer mountable from ubuntu. The error message was the one shown in the original post, with "$MFTMirr does not match $MFT" probably being the cause of the problem.
 * From windows xp, the disk was shown with an icon and a drive letter in the "My computer" folder, but it was unreadable.
 * I repeated this several times, and it was reproducible: the SATA disk was mountable when the second IDE disk was connected, and not mountable without it.
 * I then booted windows xp with all three disks connected. Now, windows xp was able to access the SATA disk with no problems.
 * After I had successfully read the SATA disk from windows xp, the previously defective disk became mountable and readable from both ubuntu and xp, both with and without the IDE disk connected.
I don't really understand this. The error message indicates a discrepancy between the Master File Table and its mirror copy. In the presence of a discrepancy, I would have thought that windows xp would be smart enough to recover if one of the two copies were correct, but it appears that this was not the case (??). Could the presence of the IDE disk somehow make both xp and linux use a different strategy for reading a defective disk? It doesn't sound likely to me; maybe something weird was going on at the hardware level.

Anyway, thanks again for your help. --NorwegianBluetalk 11:18, 7 September 2008 (UTC)

Transfer speeds / online storage facilities
Hello Wikipedia,

I would really like your thoughts on online storage companies. Essentially, rather than spend my money on a big clunky laptop with lots of storage space, i'd love to take advantage of having a good internet connection and buy a light, portable laptop and store all my work (Adobe illustrations etc) online. (i do have a normal external hard drive but really hate using it - it's just too 'fiddly' plugging it in and out... i'm quite badly organised.)

My two main concerns are: 1) Is there a reliable 'brand' that won't lose my data or make it difficult to get at?

2) do they generally offer some guarantee of privacy? i don't mind a computer looking at my stuff (and then targeting adverts accordingly -e.g. gmail) but i do mind a human doing it.

Is my web 2.0 dream a possibility? if anyone knows any good companies then that would be great.

Thanks,

82.22.4.63 (talk) 21:13, 25 August 2008 (UTC) p.s. i'm in the UK if that makes any difference at all


 * 1) Go for bigger brands. With the size of their infrastructure, they are more likely to be reliable.
 * I suggest you avoid Web2. It is not yet widespread, and I have found that the tried and tested stuff just works, simply.
 * Have you considered the size of Adobe Illustrator files? They might be large.
 * What platform have you got? If it's Mac OSX, avoid MobileMe for the meantime. They have had lots of problems.
 * Are you going to use a backup utility, like Apple Inc.'s Time Machine (software)? Possibly rsync? My name is anetta (talk) 22:02, 25 August 2008 (UTC)


 * Amazon S3? My name is anetta (talk) 22:12, 25 August 2008 (UTC)


 * How fast is a "good internet connection"? For comparison, a USB hard drive is equivalent to about a 480 megabit per second connection, while an internal hard drive is about 1500 megabits per second. --Carnildo (talk) 23:56, 25 August 2008 (UTC)

Debian networking diagnostics
After installing Debian, my computer can't connect to the Internet. "Ping" gives "connect: network is unreachable". This is strange because the system was installed by downloading packages from the Internet while installing. I can, however, access the Internet from the rescue shell that came with the installation CD. Any ideas? I've tried restarting the computer, unplugging the router, and even reinstalling the entire OS. Same result. --99.237.101.48 (talk) 08:50, 26 August 2008 (UTC)


 * I wonder why ping is trying to connect to something. It's ICMP; there are no connections.
 * (Time passes...)
 * Oh great, ping in Debian has been replaced by something stupid that uses UDP. How could they mess up something so fundamental? Well that's not your main problem. Tell us how far you can get in this general network diagnostic sequence:
 * 1. Driver check: Does the kernel recognize the network interface? Run "ifconfig -a". On the left there are interface names like "eth0" and "lo", and each one has a paragraph of information to the right. In this step all we care about is whether eth0 is in the list or not. If it's there, go to step 2. Also make a note of any extra "eth" entries (eth1, eth2, etc.). If it's not, tell us what you do have in the left-hand column.
 * 2. Link layer: Does your ethernet card report a link? Run "mii-tool". It should report "link ok" and the link speed. If not, try running "ethtool eth0" instead. That should say "Link detected: yes" at the bottom. If you've got an eth1, eth2, etc., run "ethtool eth1" and "ethtool eth2" also. Once you've found the active link, go to step 3. If you don't have a link anywhere, this is a problem with the cable or with the switch/router/modem on the other end of the cable... or a wireless negotiation problem if that's the type of connection you've got.
 * 3. IP layer: Run "ifconfig eth0" (and/or "ifconfig eth1", "ifconfig eth2", etc.). Is there an "inet addr" listed? (It'll probably be on the second line.) Is the keyword "UP" present? (It'll probably be on the third or fourth line.) If yes to both, go to step 4. Otherwise stop here and show us the contents of your /etc/network/interfaces.
 * 4. Routing table: Run "netstat -rn". Is there a line with "0.0.0.0" in the Destination column? If yes, go to step 5. Otherwise stop here and show us the contents of your /etc/network/interfaces.
 * 5. Gateway test: In the "netstat -rn" list, find the line with "0.0.0.0" in the Destination column and then move over to the Gateway column. The address there is your default gateway; it should be the address of your home router if you have one, otherwise it'll be the address of the ISP router that serves your area. Whatever address you found there, ping it. If the ping succeeds, go to step 6. If it says something about being unreachable, stop here. If it just sits there doing nothing, it could just be a stubborn router refusing to reply to pings, so go ahead to step 6 anyway.
 * 6. Getting out to the world: Run "traceroute -n 4.2.2.1". It may be slow, so be patient. If successful, it will end with a line showing 4.2.2.1 and a list of times in milliseconds. You can move on to step 7. If unsuccessful, show us the last 2 lines. If you got several lines of stars at the end, just show one of the lines of stars, and the line before it.
 * 7. DNS test: Run "dig www.wikipedia.org". This is the last step, so if you get this far, tell us what it says.
 * --tcsetattr (talk / contribs) 20:00, 26 August 2008 (UTC)
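Steps 4 and 5 of the sequence above can be scripted: pull the default gateway out of the routing table, then ping it. A sketch, using captured sample output in place of a live "netstat -rn" (real output will differ):

```shell
# Sample "netstat -rn" output; on a live system use: routes=$(netstat -rn)
routes='Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0'

# The default gateway is the Gateway field of the line whose Destination is 0.0.0.0.
gw=$(printf '%s\n' "$routes" | awk '$1 == "0.0.0.0" { print $2 }')
echo "default gateway: $gw"

# Step 5 on a live system would then be:
#   ping -c 3 "$gw"
```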


 * The system recognizes eth0, the only Ethernet port I have, but "mii-tool" yields "no MII interfaces found". I've never had any previous connection problems with this computer, so I'm puzzled as to what can possibly be preventing Internet access. --99.237.101.48 (talk) 03:57, 27 August 2008 (UTC)
 * You know, I spent like an hour writing a network diagnostic run-through covering lots of different possible endings; and this is a stupid place to do it because in a few days it'll be archived where nobody else will ever see it so it was basically done just for you. And you didn't even spend 10 seconds to read where it says if mii-tool doesn't give the right answer, run "ethtool eth0". It's quite insulting really. --tcsetattr (talk / contribs) 21:43, 27 August 2008 (UTC)
 * I tried "ethtool" but the command does not exist on Debian. Because of this, I assumed that you meant both "mii-tool" and "ethtool" can be used, whichever exists. I apologize for misunderstanding the "if not" part, but rest assured that I read your entire post and appreciate/appreciated your help.  --99.237.101.48 (talk) 03:50, 28 August 2008 (UTC)

Splitting commands in Python (example of using source lang="xyz" tag)
In Python, is there an easy way to split input like commandline shells do? For example: --grawity 17:55, 12 September 2008 (UTC)
 * Input: command arg1 arg2 "a long arg3" arg4 "arg5 \"still arg5" arg6
 * Output:
 * command
 * arg1
 * arg2
 * a long arg3
 * arg4
 * arg5 "still arg5
 * arg6


 * Here is probably the worst python program ever written -


 * but it (kind of) works. Boy, I shoulda used a regexp :) 87.114.18.90 (talk)  —Preceding undated comment was added at 19:18, 12 September 2008 (UTC).
 * Nah, it's wrong, it doesn't handle escaped escapes properly \\"  87.114.18.90 (talk) 19:22, 12 September 2008 (UTC)


 * try:

import sys, shlex
print shlex.split(sys.stdin.readline())
 * -- Finlay McWalter | Talk 19:55, 12 September 2008 (UTC)
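The shlex approach does handle the escaped quote in the example. A quick check from the shell (assuming a Python 3 interpreter is installed as python3; shlex.split behaves the same way there):

```shell
# Feed the example command line through shlex.split, one token per line.
out=$(python3 - <<'EOF'
import shlex
line = r'command arg1 arg2 "a long arg3" arg4 "arg5 \"still arg5" arg6'
for token in shlex.split(line):
    print(token)
EOF
)
echo "$out"
# Prints the tokens: command, arg1, arg2, a long arg3, arg4,
# arg5 "still arg5, arg6 - matching the expected output above.
```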

Looking for a right-click upload to ftp server
Under XP, I'm looking for a program that will let me right-click to upload to a ftp server. The catch-- I want the file's local folder to determine what remote folder to upload to. So, if I have a folder called c:\htdocs\foobar\, I want all the files in that folder to upload to /public/foobar/ on the remote server.

In essence, I want to do one-way sync on a file-by-file basis, ideally accessible from within Windows Explorer.

Does anyone know of a good tool for such a thing? --Alecmconroy (talk) 10:34, 18 September 2008 (UTC)
 * That sounds handy. You should be able to do that with a batch file and stick a shortcut in "SendTo".  Drawback is that your username and passwd would have to be in the batch file.  Saintrain (talk) 17:40, 18 September 2008 (UTC)


 * Use [SendTo FTP] --—— Gadget850 (Ed)  talk  -  18:49, 18 Sept

Google Pages - Google Sites transition
Hello, I have a question on Google Sites. What are the rules regarding file storage on Google Sites? It seems that Google Sites does not allow certain filetypes. Is there a page somewhere that Google explains the rules of the game? Thanks. Kushal (talk) 11:14, 20 September 2008 (UTC)


 * Please answer if you know the answer. Kushal (talk) 10:51, 21 September 2008 (UTC)


 * Can no hero save me? :'( Kushal (talk) 00:11, 22 September 2008 (UTC)


 * Please? Pretty please? Kushal (talk) 19:15, 22 September 2008 (UTC)


 * Oh, I really hate it when sensible questions that are urgent to the questioner go unanswered, especially when the questioner is someone who has been around here for quite a while. The only rules that I'm aware of are the obvious ones: Google terms of service, see especially "8. Content in the Services". You've probably checked out this page already, but I'm posting it anyway, since you haven't stated explicitly that you've done so. Out of curiosity: why the urgency, and which file types have you had problems with on Google Sites? Does renaming the files help, or are they rejected based on their contents? --NorwegianBluetalk 20:16, 23 September 2008 (UTC)


 * Thank you, NorwegianBlue. Well, the question is not very urgent. However, Google is preparing for a transition from Google Page Creator to Google Sites. As a Google Pages user, I am not sure what to make of the transition. I would want to be ready when the transition takes place later this year(?). Kushal (talk)

Subversion
For the past few years, the only programming I did was quick systems administration tasks (and a few other things) in Perl from the Windows command line. Everything was a one-liner. My only form of versioning was the up and down arrows (i.e. the command history). I would run it; if it worked, I added the next part. If it didn't work anymore, I just pressed up and tried to get it working, repeating this until it worked (this was great: I could just guess at what "might" be the syntax, and if I was wrong I'd have it right soon enough). But if I really messed up with the change I tried to introduce and couldn't get it working no-how, I would just press up twice to get to the version before that. Worked great.

Well, recently I decided to start using Python, and it isn't as conducive to the one-liner.

So I started doing this. Instead of up and down arrows, for every single change I made I copied the py file, like this: 1something.py, 2something.py, etc. If I really broke, for example, 23something.py, then I would delete it and go back to 22something.py.

It works okay, but copying the file, incrementing the number, closing the project, and opening the new copy after each good change is a lot more work than just pressing the up arrow and continuing to edit!

So I'd like a gentle introduction to a real versioning system. I want something with which I can keep my old, test-heavy approach, where every single change is immediately tested (and thereby committed as a version in the command history), and where doing so takes no more than a press of the up key and the return key.

Can anyone give me an easy introductory tip (windows or an online service) that would be just as simple? Thanks! —Preceding unsigned comment added by 82.124.209.97 (talk) 13:21, 4 November 2008 (UTC)


 * If I were you - I'd install 'Subversion' (often abbreviated to the name of the command-line version: 'svn'). It's a full-blown, industrial-strength version control system with all the bells and whistles - but it has a fairly low entry point on the learning curve, and it's OpenSourced and fully portable across operating systems.  Once you've created your repository, you can boil down the (huge) command set to just four commands:  add, remove, update and commit.  Use 'add' and 'remove' to add and remove files from the directory you're working in - use 'commit' to put your changes into the repository - and 'update' to get an updated set of files from the repository.   This will get you started - then you can learn about reversion, making branches, tagging versions in various ways, resolving changes made by two or more people at the same time, etc, etc.  You can use Subversion remotely over the net - but you can also keep your repository on your local machine where you can easily wipe it out and start over if you make a mistake.  There are many great SVN tutorials on the net.  If you have a strong aversion to command-line tools, you can use one of the many graphical front-ends to subversion such as 'SmartSVN' - I believe some of those are OpenSourced, but I don't use any of them because I'm a command-line junkie.  I've worked on million-line-of-code projects that used subversion - and I keep all that I do at home using it too (even my firefox preferences file and letters to the bank are version controlled and kept in my repository!) SteveBaker (talk) 13:39, 4 November 2008 (UTC)
 * For a windows explorer-integrated graphical interface, you might want to try TortoiseSVN. After reading Steve's recommendation, I decided to try out subversion, and use my linux box as the server. I downloaded TortoiseSVN on my windows box as the client. First impression: very promising. To get it working with this setup requires some work though, and reading the docs carefully, as well as googling for error messages. Probably, getting TortoiseSVN working on a single machine is easier, but I haven't tried that. --NorwegianBluetalk 17:54, 5 November 2008 (UTC)
 * Getting TortoiseSVN to work on a single Windows machine is easy. Installing is just point-and-click.  TortoiseSVN adds its own menu to the Explorer, and you use the option "Create repository here" to initialize a repository (where the versions are stored).  Then you check out the repository with the option SVN Checkout and now you have a versioned working directory.  Put your Python script there, and select "SVN commit" after each good version, or "SVN revert" if you want to go back to the previous version.
 * As noted, a command line subversion client is also useful, and may be easier to add into your workflow - the key commands there are  and  .  One approach would be to replace your command   with   in order to really save a version before each and every test run, but this has drawbacks: first, it stores a version even if nothing changed, and second, after a test run it's now a bit more complicated to get the old version back.  84.239.160.166 (talk) 18:30, 5 November 2008 (UTC)
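A minimal local-only session with the command-line client might look like this (a sketch, under the assumption that the svn and svnadmin programs are installed; all file and path names are made up). It shows the commit-then-revert cycle that replaces the "press up twice" trick:

```shell
# Runs only if an svn client is installed; otherwise prints the expected result.
if command -v svn >/dev/null 2>&1 && command -v svnadmin >/dev/null 2>&1; then
    workdir=$(mktemp -d)
    svnadmin create "$workdir/repo"                       # create a local repository
    svn checkout -q "file://$workdir/repo" "$workdir/wc"  # check out a working copy
    cd "$workdir/wc"
    echo 'print "v1"' > something.py
    svn add -q something.py                    # put the file under version control
    svn commit -q -m "first working version"   # commit = save a version
    echo 'print "v2, broken"' > something.py   # a bad edit...
    svn revert -q something.py                 # ...rolled back to the last commit
    result=$(cat something.py)
else
    result='print "v1"'   # svn not installed; showing the expected file content
fi
echo "$result"
```

After the revert, something.py is back to the last committed version, exactly like pressing up twice used to be.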

From SteveBaker's talk page Hi Steve,

Your recommendation of Subversion on the refdesk caught my interest. I've used cvs in the past but don't anymore, and need a versioning system for managing a lot of bits of code that I start writing, and often don't finish, but which still may contain elements that I might want to reuse later. As I commented on the refdesk, I downloaded subversion on my linux box, and TortoiseSVN on my XP box. It was easy to use, and the repository browser of the windows explorer plug-in appeared to be able to manage the projects in my repository in a hierarchical way (something I never got working properly with cvs). When I looked at the actual files in the repository from the linux box, however, I got confused, because they had no resemblance to the files on the PC. With cvs, I used to be able to figure out what was going on. Here's what it looks like on the linux box: Link to dump, didn't want to fill all of your talkpage :-)

Where did my files go? There are two projects here, each consisting of several files, and there are two large files in directory db/revs:

One of the projects is called TuneProg. OK, let's try to find it:

Hmmm?

The questions that I would very much appreciate your answer to, are:
 * Where is my data hidden? In the (binary) files "1" and "6"? Or has TortoiseSVN created a second repository on the windows box as well, that I'm using unknowingly?
 * Do you create a repository for each project (as some of the links that google came up with recommended), or do you keep everything in one big repository?
 * Is it possible/sensible to manipulate the files in the repository directly?
 * How do you backup and restore the repository?

Thanks, and thanks for your numerous contributions on the refdesks. I read just about every one of them, and am deeply impressed by your knowledge, and ability to explain things in an accessible way. --NorwegianBluetalk 20:08, 5 November 2008 (UTC)


 * Answers:
 * As a mere user of SVN, I have no clue how things are stored in the repository - you shouldn't care either. Don't touch the repository - don't even bother looking at it.  It's basically none of your business!   In CVS it was occasionally necessary to look at the repository and dink with it (eg, that was the only way you could delete a directory) - but with almost every other version control system, that's an utter "no-no".
 * I create separate repositories for most projects, yes. But my entire home directory (under Linux) is in one horrifyingly large repository.  (I start other projects outside of my home tree... e.g. /home/steve is my home directory - but my "tuxkart" project is under /home/projects/tuxkart).  So when I start some new piece of work (which, like you - I'm statistically unlikely to ever finish!) - I start it under my home directory and don't bother creating a repository.  When something reaches the point where it's likely to be pushed a long way down the development path (like if I maybe spend more than a week on it) - then I copy the files out of /home/steve/myNewThing and into /home/projects/myNewThing and start a new repository for the new work there.  This (sadly) means that the early development history of myNewThing is not present in its /home/projects/myNewThing repository, but since that's typically only a week of my early floundering around - it's no great loss... and if I really DO need it - the (now deleted) directory in /home/steve/myNewThing is still (inconveniently) in that repository.  But I like to have my ENTIRE home directory in a repository so that I can check out all of my current state of activity when (for example) I move from my deskside computer at home to my laptop for going on a trip or something.  The ability to merge changes from two trees into the repository is invaluable when you have two PCs and no way to conveniently share their hard drives.
 * It's certainly not sensible to manipulate the files in the repository directly. Whether it's possible is hard to say and would require deep guru knowledge of subversion that no mere mortal could be entrusted with!
 * You backup the repository just like any other set of files. But for chrissakes don't consider restoring it piecemeal.  It's an all-or-nothing kind of thing!
 * Treat the repository as a big black box - don't open the box!
 * SteveBaker (talk) 13:51, 6 November 2008 (UTC)


 * Thank you! I have a couple of more questions, just to make sure I understand:
 * From your description of sharing files between your laptop and desktop, I conclude that you have a server (where the repository is located), a desktop (possibly the same machine as the server), and a laptop, and that before going on a trip you check the files in from the desktop, and then check them out on the laptop, i.e. no moving of the repository itself, right?
 * If I want to migrate the repository from one machine to another, is all that is needed to copy everything in the directory to the new location, on a machine where subversion is installed, and start using it? Or do I have to do something administrative to make subversion on the new machine aware of the directory I just copied?
 * Conversely, if I decide to abandon a project and see no need for keeping the files, is it sufficient to delete the repository, or should I do something administrative to tell subversion that the directory is gone?
 * And finally, I take your point about not messing with the contents of the box, and I'm not seeking subversion guru-level knowledge. Still, I am a little curious about what's going on. After some experimenting, I concluded that subversion creates a new file in the repository each time I make a transaction, and that the names of the files are simply the sequence numbers of each transaction. When I commit a bunch of files, I get one big new numbered file, and when I do something trivial with one file, I get a small one. Thus, I would predict that the db/revs directory of the repository corresponding to your home directory contains a zillion numbered files, of varying sizes, and no subdirectories. Could you please take a look and see if this is the case, just to satisfy my curiosity? --NorwegianBluetalk 00:14, 7 November 2008 (UTC)


 * Yeah - you don't want to be moving the repository if you can POSSIBLY avoid it! Because it contains not just all the files you currently have - but also all the files you've EVER had - and all of the old versions of all of the files you've ever had - it can become quite big - many gigabytes for a large, active project with lots of binary files that need to be maintained.  Moving it across the Internet is painful.  I have a web-hosting service where I leave my repository (and where my web site and Wiki is hosted) - they have terabytes of disk space 'assigned' to me (although I'm not using much of that).  So when I do a 'checkout' or 'commit' - all that's happening is that the files that have changed or been created are transmitted over the net - so everything goes fast.  It doesn't matter that I'm in outer-mongolia and my laptop blew up so I had to buy a new one...I just run 'svn' and do a check-out from my web host - and pretty soon my computer is "back to normal".  It's very cool.   For a while I had a static IP address on my home computer network and I ran an SVN server on an ancient computer with a big disk drive.  That gave me much faster access between computers at home (because I was connecting via 1Gbit/sec Ethernet) - but dog-slow access to the repository from the outside world because DSL is very slow from home to network.   Hence, when my family was temporarily split over two cities - and my son went to college - I shifted the repository over to my web hosting service so we get somewhat slower access from home - but vastly faster access from anywhere else on the planet.


 * Migrating the repository is just a matter of copying the entire repository directory.  There might be a problem in moving the repository between (say) a Linux box and a Windows machine - I've never tried that - but from like-to-like, it works just fine... although (as I said) it can be a HUGE set of files.
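For what it's worth, when a like-to-like copy is in doubt (e.g. moving between a Linux and a Windows box), Subversion also ships svnadmin dump/load, which serializes a repository into a platform-independent stream. A sketch, runnable only where svnadmin is installed (paths are made up; the demo repository here is empty):

```shell
# Portable repository migration via dump/load.
if command -v svnadmin >/dev/null 2>&1; then
    tmp=$(mktemp -d)
    svnadmin create "$tmp/oldrepo"
    svnadmin dump -q "$tmp/oldrepo" > "$tmp/repo.dump"   # platform-independent stream
    svnadmin create "$tmp/newrepo"
    svnadmin load -q "$tmp/newrepo" < "$tmp/repo.dump"   # rebuild on the new machine
    status=migrated
else
    status=migrated   # svnadmin not installed; skipping the demonstration
fi
echo "$status"
```

The dump file preserves the full revision history, so the rebuilt repository is equivalent to the original regardless of the operating system or filesystem on either end.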


 * If you don't need the repository anymore - then yes, you can just delete it. That kinda defeats the object of having it for me - so I never do that.  Disk space is ridiculously cheap.


 * I have no interest in how Subversion does its thing - but it's an OpenSource project - you can go to their website and read whatever documentation they have. If you're REALLY curious - you can sign up to their mailing list (or forum or Wiki or whatever they use)... they may even have an IRC channel.   Then you're talking to the guys who actually wrote the thing - and so long as you're nice to them and don't get annoying - they'll be able to satisfy your every question.


 * SteveBaker (talk) 00:35, 7 November 2008 (UTC)


 * Thanks a lot for your answers. The idea of using a version control system for backing up ordinary user files never occurred to me, and is really elegant. I'll do some more experimenting, and will probably follow your example. --NorwegianBluetalk 13:31, 7 November 2008 (UTC)


 * Yes - I dislike having two systems that do the same thing. Backing up files (incrementally) and restoring them again comes with its own entire set of tools and commands - but version control does everything that incremental backups do for you - PLUS it gives you the ability to insert tags, make branches, make changes to the files on two copies of the original data and then merge them back together again, check the differences between two different backups...all of those kinds of things.   The only real distinction is that backups are typically run at the same time every night - whereas (usually) I only do a 'check in' when I've changed something significant.   That means that my backup interval is under my control - I don't have to back up for a month if I don't happen to be doing any work - or I can back up ("check in") just before I do something major, so that if I screw up, I can get back to the earlier setup easily.  I'm altogether sold on the idea.   Of course, you still have the problem of doing a backup of the repository itself - but that can be done in a fairly mindless way (or - as in my case, I pay a web hosting service $9.99 a month and let them worry about that!).


 * I've toyed with the idea of putting some system directories under subversion as well - the '/etc/' directory on a Linux machine and the registry file on a Windows machine are obvious candidates. The problem is that I probably wouldn't remember to check it in at appropriate times - and the same /etc directory won't work on my laptop and my desktop machines (or any of a dozen other computers I occasionally use) - so I'd have to use 'branches' for each machine - and that really makes life complicated and doesn't help me a whole lot...so that's not happening and I still have to back up the system areas of my hard drive periodically...but I only do that a couple of times a year...it's not fatal if it gets trashed.


 * SteveBaker (talk) 14:14, 7 November 2008 (UTC)

Compiling a Python program into an executable (not posted)
I downloaded the source code of a Python program (regdiff) which is also provided as an executable. The site definitely looks legit, but I'm hesitant about installing an executable from a small website which has few in-going links from external pages. I don't program in Python (in spite of my username), but would like to compile the three .py files into an .exe. I downloaded Python 2.5 from here, and py2exe from here, and installed both. However, I'm having problems creating an .exe that works. I loaded the three files in the IDLE gui, and tried to run each. When running regdiff.py, I get the error message "ImportError: No module named psyco"; when running tools.py I get the error message "ImportError: No module named gtools.dicttools"; and when running setup.py, a lot of output is produced in the Python shell window, ending with:

As seen in the dump above, there is a warning about missing modules, with the same names as before. However, this time two subdirectories are created, and one of them contains regdiff.exe. When I run it, it aborts with an error message. I googled "python psyco" and installed it from here. When running setup.py from IDLE, the same thing happens, except that psyco is no longer reported missing, and the error message when running the .exe refers to gtools.raw_registry. Googling "python gtools" gives many hits. The most likely candidate is on the website where regdiff was located, and appears to be available only as an .exe.

So it appears that all that is needed is to get the gtools library in source form, or alternatively to implement the functions 'gtools.dicttools' and 'gtools.raw_registry' myself. No need to post this, then, since I've answered my own question. --NorwegianBluetalk 11:25, 9 November 2008 (UTC)
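For the record, a py2exe build is driven by running the setup script with the py2exe command. A minimal setup.py for a console program looks something like this (a sketch - the script name is illustrative, not taken from the regdiff sources):

```python
# Minimal py2exe build script (sketch); run with:  python setup.py py2exe
from distutils.core import setup
import py2exe  # importing py2exe registers the 'py2exe' setup command

setup(console=['regdiff.py'])  # build a console .exe from regdiff.py
```

The resulting .exe ends up in a dist\ subdirectory, which matches the two directories created above.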

Displaying the administrator account
From the web
 * 1) The first option is to press [Ctrl][Alt][Delete] at the Welcome Screen twice. This changes the Welcome Screen to the Windows 2000-style login, where you can type Administrator and the password for the Admin account.
 * 2) The second option is to boot in safe mode, under which the Welcome Screen will display only accounts with Administrator privileges, including the original Administrator account.
 * 3) Adding administrator account on Welcome Screen.
 * Open Registry Editor.
 * In Registry Editor, navigate to the following registry key:
 * HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList
 * Create the following entry
 * Administrator: REG_DWORD
 * Assign a value of 1.
 * Close Registry Editor

 * Reboot
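The registry steps above can also be written as a .reg file and merged with regedit (same key and value as in the listing):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList]
"Administrator"=dword:00000001
```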


 * NOTE: In Windows XP Home Edition, Administrator logon is (by design) only possible in safe mode.

Display Classic or Welcome Screen

 * open key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
 * modify key: LogonType
 * a value of 0 displays classic logon, a value of 1 displays the welcome screen
 * Requires reboot

SAM
Location: WINDOWS\System32\Config\SAM (256 kb)

Windows registry snapshot and diff utility
I'm looking for a program that can take a snapshot of the Windows XP registry, at a later time compare the current contents of the registry with the snapshot, and export the changes in .REG format. Can anyone recommend such a utility? Thanks. --NorwegianBluetalk 17:26, 7 November 2008 (UTC)


 * It wouldn't be very efficient in terms of time or disk space, but you could just export the whole registry before and after and then run a standard tool like diff on the result (which should work fine even for the large output files). It wouldn't quite be a REG format file, but it wouldn't be very hard to turn it into one (include the appropriate header lines, strike the annotations that diff added).  --Tardis (talk) 18:14, 7 November 2008 (UTC)
 * Thanks. I had tried that, but Windows' fc was unable to synchronize the registry dumps. Tried again now with GNU diffutils' diff. The registry dump contains binary data, so I had to use the --text switch. The resulting output appears to be in Unicode, and removing the annotations will be very tedious as the output file from diff is large. --NorwegianBluetalk 16:30, 8 November 2008 (UTC)
 * Hm, that's true. It's probably easy enough to write this from scratch, though, since the regedit format is so simple and we can assume the two files to visit the keys in the same order.  Give me a bit and I'll have you a Python script.  (In the event that you're an Emacs user, it might even be easier in Elisp, so let me know.)  --Tardis (talk) 07:11, 9 November 2008 (UTC)
 * Wow, thanks!! I'm not an emacs user (did use it in a previous life, but the keyboard bindings are now long forgotten) . --NorwegianBluetalk 09:46, 9 November 2008 (UTC)
 * See User:Tardis/regdiff.py. I vouch for its lack of malice, but not its correctness.  (And anyone can edit that page!)  --Tardis (talk) 10:56, 9 November 2008 (UTC)
 * Reply on Tardis' talk page. --NorwegianBluetalk 19:59, 9 November 2008 (UTC)
 * I googled "registry diff", "reg diff" and "regdiff" and looked at the first half-dozen results for each - the only free tool I found among them was regdiff.  Hope this helps,  davidprior (talk) 21:38, 7 November 2008 (UTC)
 * Thanks. The site definitely looks legit, but I'm hesitant about installing an executable from a small website which has few in-going links from external pages. I've tried compiling the program from the source provided, but it appears to depend on a library (gtools) which is available from the same website, but only as an .exe. --NorwegianBluetalk 16:30, 8 November 2008 (UTC)
 * I've sent an email to Gerson Kurz, the author of regdiff, about this thread and the lack of complete sources to regdiff. --NorwegianBluetalk 20:43, 8 November 2008 (UTC)
 * No reply so far, but googling the author's name gives many more links than googling ingoing links to his website, and indicates that there is little reason for concern. --NorwegianBluetalk 23:57, 10 November 2008 (UTC)

From Tardis' talk page
Thanks a million for taking the time! I'm trying it out now. The input files are huge (~80 MB), and it's been running for about an hour, but stopped producing output quite a while ago. Are you sure the termination criterion is correct? In spite of my username, I don't program in Python, so it's difficult for me to determine. I'll be experimenting more during the week, and will be back with more information, and maybe a question or two. :-) Thanks again. --NorwegianBluetalk 19:55, 9 November 2008 (UTC)
 * It produces output when it finds differences, so it's quite possible for it to produce nothing during almost all of its run if the changes are minor and/or localized. I don't immediately see a way for it not to terminate; you can certainly check manually for the last change in the files and see if it has already generated it.  It should also finish in good time, since (with the sorted input) it can be linear in that input.  If it doesn't ever exit, or you find other bugs, please do let me know.  --Tardis (talk) 21:29, 9 November 2008 (UTC)


 * I've updated it to detect more kinds of errors (some of which would have produced damaging output were they to occur) and to verify that it's never stuck. You can also now disable deletions or request progress reports via (really) trivial changes to the code.  --Tardis (talk) 23:06, 9 November 2008 (UTC)


 * I'm afraid there are still problems.

"b.current_user.reg" and "a.current_user.reg" are dumps of the HKEY_CURRENT_USER hive immediately before and immediately after the installation of a large software package.

Relevant part of "b.current_user.reg":

Relevant part of "a.current_user.reg":

Identical, as far as I can see. Is there a problem related to alphabetic order and capitalization? --NorwegianBluetalk 23:40, 10 November 2008 (UTC)
 * (First, a belated "you're welcome".) Now I'm glad(der) I added those error checks!  Please realize that I have no Windows machine to test this on; I've been testing with .REG files I found online.  First, any output the original version generated should be discarded, as it would happily generate incorrect output in (some) cases like this.  Second, the "error" it reported depends only on the contents of each file separately, so that they're identical here is irrelevant.
 * You're quite right (I hope) that the issue is that the registry (like Microsoft's file systems) is case-insensitive, so that "Ab" comes before "AC". I've modified the program to follow that rule (as well as to recognize that "A\B", being a subkey of "A", comes before "AD"); if it still doesn't work, it will likely require sorting the input (which will greatly increase its runtime and memory use, unfortunately).  --Tardis (talk) 05:37, 11 November 2008 (UTC)
 * I'm sorry to say, I didn't get it working. I tried some modifications, but still got various error messages. So I followed the path of least resistance - installed the regdiff program by Gerson Kurz on a non-critical PC (one of my children's heavily malware-infested laptops), and moved the .reg files back and forth using a memory stick. It turned out that Gerson Kurz's program did the job. However, I was amazed to realize how massive the registry changes made by the installation of just about any Windows program are. Therefore, I was unable to achieve what I had intended (easy deployment and uninstalling of a suite of programs). I've kept the registry dumps before and after each installation for later reference, in case I feel like tackling this problem at a later time. It looks like it would need a much larger programming effort than I had envisaged to sort out the important changes from the noise. Making the tool would then take a lot more time than doing the job manually a dozen or so times, so it'll have to wait until I have a lot of time on my hands. Thanks again for your time and willingness to help. --NorwegianBluetalk 13:13, 17 November 2008 (UTC)
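For the record, the core idea of the thread - parse two .REG exports, compare keys and value names case-insensitively, and emit only the added or changed values as a new .REG file - can be sketched in a few lines of Python. This is my own simplified reconstruction (it loads everything into memory and ignores deletions), not Tardis's script:

```python
def parse_reg(lines):
    """Parse .REG export lines into {key: {value_name: data}}, in file order."""
    result = {}
    current = None
    for line in lines:
        line = line.strip()
        if line.startswith('[') and line.endswith(']'):
            current = line[1:-1]          # key name between the brackets
            result.setdefault(current, {})
        elif '=' in line and current is not None:
            name, _, data = line.partition('=')
            result[current][name] = data
    return result


def reg_diff(before, after):
    """Return a .REG-format string of keys/values added or changed in 'after'.

    Key and value-name comparison is case-insensitive, like the registry
    itself.  Deleted keys/values are ignored in this sketch.
    """
    before_ci = dict((k.lower(), v) for k, v in before.items())
    out = ['Windows Registry Editor Version 5.00', '']
    for key, values in after.items():
        old = before_ci.get(key.lower(), {})
        old_ci = dict((n.lower(), v) for n, v in old.items())
        changed = [(n, v) for n, v in values.items()
                   if old_ci.get(n.lower()) != v]
        if changed:
            out.append('[%s]' % key)
            out.extend('%s=%s' % (n, v) for n, v in changed)
            out.append('')
    return '\n'.join(out)
```

A streaming version that walks both sorted dumps in step, as Tardis's script does, avoids holding 80 MB files in memory, but has to get the registry's case-insensitive sort order exactly right - which is where the trouble above came from.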

X11 in minimal Debian installation
I'm experimenting with a really minimal Debian installation. I installed from a net-installation CD, and unchecked absolutely everything. Then I installed the openssh client and server, sudo and xorg. When running startx, I got the default background, and an xterm window in the upper left corner, no border, no possibility to move windows around. So far, everything as expected. Next, I installed metacity, and voila, the xterm window gets a border, and I can move it around and spawn other xterm windows. However, when I now exit and re-run startx, I only get the background and the mouse pointer, with no possibility to start a program (without switching virtual screens). So obviously the behaviour of startx has changed: it now starts metacity instead of xterm. So my question is: what configuration file has been modified? I've looked unsuccessfully in various /etc subdirectories. The behaviour is the same whether I run as root or as a normal user.

I need two virtual displays, so what I would like to do is something like this:


 * (Correction after posting: corrected "setxroot" to "xsetroot") NorwegianBluetalk 22:11, 26 November 2008 (UTC)

Except that I would like the X session to terminate when I exit the last active application. The code above works (when I switch between a console screen and the X11 virtual screens and type ctrl-Z and bg as needed), but is rather awkward, and I suspect there is a "right" place to put such code. And I would very much like each X session to terminate when its last active application terminates. I definitely do not want to install a desktop environment like gnome. Any suggestions? --NorwegianBluetalk 22:31, 25 November 2008 (UTC)


 * You should look into using a ~/.xinitrc with whatever applications running that you'll like. -- JSBillings  02:56, 26 November 2008 (UTC)


 * Thanks. I read the man page of xinit after posting this. There were no .xinitrc files in the /root or /home/myname directories, but I could of course create them. I also tried to locate the system-wide xinitrc file, but didn't find it. I was looking in /etc and /usr/lib/X11. Googling now took me to this page (Slackware's implementation), which says that the global xinitrc is under /var. Since the behaviour changed after installing metacity, there's got to be a global xinitrc file somewhere, so I'll have a look there (or else search the whole filesystem). The page I linked to also says that exec is the way to go to make the X-session terminate when the xterm exits. --NorwegianBluetalk 09:19, 26 November 2008 (UTC)
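 * A minimal ~/.xinitrc along the lines that page suggests might look like this (a sketch - metacity as the window manager, and exec on the last client so the X session ends when that xterm exits):

```sh
#!/bin/sh
# ~/.xinitrc (sketch): run by startx/xinit instead of the global xinitrc
metacity &     # window manager in the background
xterm &        # an extra terminal
exec xterm     # the session ends when this xterm exits
```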

Solved
Just for the record, here's what I ended up with. I called xinit directly instead of startx, to bypass the xauth security mechanism, which I don't need (I'm on a small home network behind a firewall), and which makes remote X a lot more difficult. Wrote these scripts, the last one is run as root:

start1:

start2:

 * Yes, I'll change that xhost + to an xhost +(ip-address).

gui:

 * Switching between the two virtual screens by ctrl-alt-F7 and ctrl-alt-F8 works like a charm, and remote X works beautifully; the only manual step is to set the DISPLAY variable: --NorwegianBluetalk 12:39, 30 November 2008 (UTC)

Setting up a lan with two Suse machines

 * For a quick answer, skip to the bottom, after the underlined "I got NFS to work as well!"

I want to set up a local network, but can't find much info on that. First the basics: I have a pc and a laptop (netbook), both running Suse (11.0 and enterprise desktop 10, respectively), which are connected with ethernet to a Linksys (wireless) router, which in turn is connected to a modem. Both can access the internet, so that's ok. This being Linux, I sort of expected it to work instantly. I looked up the address of the netbook in the router and tried to surf there, but Firefox says "Failed to Connect. The connection was refused when attempting to contact 192.168.1.113. Though the site seems valid, the browser was unable to establish a connection." Then it suggested firewall settings might be to blame, so I disabled that on both machines and the router (of course not permanently, just to start with as many obstacles as possible out of the way). That didn't help. So I searched for more info and found that I had to set certain things in yast > network settings / network card (differs on the two machines). On both machines, I set ifup instead of networkmanager, two separate host names, the same domain name, use dhcp, write hostname to /etc/hosts, activate device at boot time and firewall: external zone. Most of that was already set and most of the rest remained empty. I assumed it was the different domain names that were to blame, so I tried again. However, the result is the same. Note that when changing from networkmanager to ifup on the netbook, it complained that it could not access installation media, so I skipped two files (from ///usr/share/lang, so I don't think that was important). Another thing is that the netbook has a rather odd device name: eth-id-00:21:85:4f:32:1c, where I expected a simple eth0. When I execute ifconfig, I get a normal eth0, with 00:21:85:4f:32:1c as the HWaddr. So just a different presentation, I presume. So what am I doing wrong? DirkvdM (talk) 18:42, 29 November 2008 (UTC)
 * I'm no expert at all, but could the error be that you haven't set up a web server? You say you try to "surf to" the computer, but how does the computer know what to show to the web browser? Have you tried to ping it? (then you'd have to set up file sharing if ping works, I have no idea how to do that on Linux but I'd be surprised if it was difficult) Jørgen (talk) 18:52, 29 November 2008 (UTC)
 * Good question. A ping from the console gives a repeated line with "64 bytes from 192.168.1.113: icmp_seq= ttl=64 time=0.167 ms", with the time varying around 0.2 ms. A ping from the router gives the same, except with a much slower time of 0.9 ms (surprisingly). In both cases 0 packets lost. I can also do a traceroute in the router, which apart from two administrative lines (30 hops max, 40 byte packets) gives only one line, with three timings around 1 ms. As for 'surfing to the computer', I can also do that with my own computer by surfing to / (root). So I imagined the same would work with another computer if I have its address. DirkvdM (talk) 19:17, 29 November 2008 (UTC)
 * Dirk, I love linux, and don't particularly enjoy linux-bashing, but your statement "this being Linux, I sort of expected it to work instantly", in the context of wireless networking, gave me quite a belly-muscle exercise. My knee-jerk answer would be to try a wired connection first, but if both computers can access the internet, wired vs wireless shouldn't be the issue. You write ..."and tried to surf there, but Firefox says"..., but as Jørgen points out, you can surf only to websites that have a web server running on the target computer, and you haven't written anything about installing one. I suggest that you install ssh (client and server) on both computers, and try to log into one of your computers from the other using ssh.
 * Regarding the installation media problem, I have no experience with Yast, but if it complains about not being able to access installation media, I suppose there is a configuration file somewhere that says where Yast should fetch new packages. If that file refers to both CDs and web-based repositories, I would try commenting out the references to the CDs, to force it to go straight to the web-based repository. --NorwegianBluetalk 21:04, 29 November 2008 (UTC)


 * I am trying a wired connection. It's a wireless router, but note that I said 'connected with ethernet' and I'm talking about eth0. Maybe I should have been a little clearer about that. And I sort of expected it to work instantly because networking is the area Linux is best in, I understand. For example, it sets up an internet connection while installing itself, and I understand that with Ubuntu you can even start surfing to find help during the installation process - very handy! Anyway....
 * Both machines have openssh installed. I understand it's text-based (and I'm no hero in that area). So I type that in and then in the list I see ssh needs a hostname, but when I fill in the hostname I gave the netbook ('Netbook', very originally) I get "name or service not known". All the rest are options. Do I need any of those? Such as [-p port]? I haven't a clue.
 * But is there no other way? Reading the ssh article - it appears to be developed for secure connections. But I don't need that; I basically want the two computers to act like one. So open a program on the one and then with that open a file on the other? Just like I access a cd or a memory stick (or hd or memory, for that matter)? Why would it have to be different when it's over ethernet? DirkvdM (talk) 09:08, 30 November 2008 (UTC)


 * ssh is invoked like this:

dirk@Desktop:~$ssh Netbook


 * I assume your username is the same on both machines, if it isn't ssh needs a username argument:

dirk@Desktop:~$ssh -l DvdM Netbook


 * If the following conditions are satisfied:
 * both machines can access the internet
 * /etc/hosts is set up correctly,
 * the ssh daemon is running,
 * the router is not doing something weird,


 * Then this really should work. You have already confirmed condition 1. To check if there is a problem in /etc/hosts, check if this works:

dirk@Desktop:~$ssh 192.168.1.113


 * I'm assuming 192.168.1.113 is the ip-address of your second computer. (To make sure, type 'ifconfig' on the command line; you'll see some lines of output, one of which shows the ip address of the computer). To check if the ssh daemon is running, type

ps -e | grep sshd


 * You should get at least one line of output, that ends in "sshd". If you don't, make sure that you have installed both openssh-server and openssh-client.


 * The error message you quote from firefox is a bit suspicious. It says that the connection was refused. When I browse to an address that doesn't have a web server running, I get the error message "Unable to connect. Iceweasel can't establish a connection to the server at 127.0.0.1." (Iceweasel is firefox under another name). This could indicate that it is the router that somehow is blocking the connection. I have a Linksys router too, and I see that it has a setting (in the "security" tab) called "Filter Internet NAT Redirection", which is explained as follows: "This feature uses Port Forwarding to prevent access to local servers from your local networked computers." It is unchecked in my router, and I have never tried to see what happens if it is checked, but judging by the description, it sounds like something which could cause problems like those you experience.


 * ssh is usually quite hassle-free, and I wouldn't recommend changing to some other login mechanism. However, ssh won't make the two computers behave as though they were one. It sounds like what you want to do is to mount directories of the remote computer on the filesystem of your local computer. I believe this can be done, but I haven't tried it, and hope someone else comes along, to explain how that is done. But, for the sake of diagnostics and making sure your setup is ok, you really should get ssh working before proceeding. --NorwegianBluetalk 13:58, 30 November 2008 (UTC)


 * I also thought about somehow mounting the partitions of one computer on the other, but didn't know how until I stumbled upon NFS, which appears to do that. And indeed it seems to work! Well, it might, if the mount weren't owned by root and I could find a way to open a root browser to change the permissions. Aaaargh! A problem I encountered several times before: how do I open a program as root? Anyway, as you say, I should be able to get ssh working too.
 * "ps -e | grep sshd" gives a line on the pc (3120 ? 00:00:00 sshd), but not on Netbook. But on Netbook, in software management, I see that openssh is installed, and the summary says "secure shell client and server (remote login program)", so that seems to be ok. When I look for sshd it gives no results. Is that a problem? When I try it the other way around, from Netbook to the pc, after a minute or so I get "ssh: connect to host 192.168.1.103 port 22: Connection timed out".
 * Btw, what do I get when I get ssh to work? Do I get to control the other machine from the command line? Like I said, I don't know my way around that. And the things I do know take forever, even though I can type blind fairly fast. By the time I have gone down 10 levels in the dir hierarchy I've forgotten what I was looking for. :)
 * Still, for completeness: The user names are not entirely the same on both machines; on Netbook it's not capitalised. But neither 'ssh dirk@Netbook' nor 'ssh -l dirk Netbook' works. In both instances I get "ssh: Could not resolve hostname Netbook: Name or service not known". When I try the ip address, I get "ssh: connect to host 192.168.1.113 port 22: connection refused". And in the router, the 'filter internet NAT redirection' is not checked. The other three on that page are. DirkvdM (talk) 18:57, 30 November 2008 (UTC)

(outdent)

I'll answer your questions as systematically as I can. Quotes from your previous posts are in italics:

Regarding the ssh daemon on the target machine: ''When I look for sshd it gives no results. Is that a problem?''
 * If "ps -e | grep sshd" on the netbook gives no output, that is indeed a major problem. If sshd is not running on the host (Netbook), ssh from the desktop won't be able to make a connection. I'm puzzled that it isn't running when it is installed. To force it to start, you would type "/etc/init.d/ssh start" on a Debian-based distro. I don't know if it's the same in Suse.
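 * For reference, the commands would be something like this (the Suse lines are an assumption on my part - I believe the init script is named sshd there, but check what's actually in /etc/init.d):

```sh
/etc/init.d/ssh start     # Debian and derivatives
/etc/init.d/sshd start    # Suse (assumption: script is named sshd)
chkconfig sshd on         # Suse (assumption): start the daemon at every boot
```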

How do I open a program as root?
 * From the command line:

su    # su will prompt you for root's password, and you'll be logged in as root

 * Then type the name of the program, with a complete path if necessary.
 * Alternatively:

sudo programname_with_path_if_necessary    # provided sudo is installed.


 * I don't know what the xauth setup in suse is like, but you might get problems with gui programs using the first method, and not with the second. If you run into problems with the display not being accessible, the easiest solution is to install sudo if it isn't installed, add your username using visudo (copy the permissions of root), and use the second method.
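 * For reference, the line added via visudo (copying root's permissions, as suggested above) would look something like this in /etc/sudoers - 'dirk' is illustrative, use your own username:

```
# /etc/sudoers - edit only with visudo; gives 'dirk' the same rights as root
root    ALL=(ALL) ALL
dirk    ALL=(ALL) ALL
```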

When I try it the other way around, from Netbook to the pc, after a minute or so I get "ssh: connect to host 192.168.1.103 port 22: Connection timed out".
 * This one has me stumped. It indicates that the address is valid (otherwise you would have gotten the error message "no route to host"). It also indicates that sshd is running (otherwise you would have gotten "connection refused"). You have also confirmed on the target computer that sshd is running. What you should be seeing, is a prompt for the password of the username that sshd on the target machine thinks you are intending to use. Since there is a difference in usernames, remember the -l option. But even if you try to log into an account that doesn't exist, you should be prompted for a password.

''Btw, what do I get when I get ssh to work? Do I get to control the other machine from the command line?''
 * Yes, exactly.

Regarding the differences in user names: ''The user names are not entirely the same on both machines; on Netbook it's not capitalised. But neither 'ssh dirk@Netbook' nor 'ssh -l dirk Netbook' works.''
 * To keep the number of things that could go wrong to a minimum, I would stick with ssh -l correctly_capitalized_username_on_target_machine ip-address.

You write (when connecting from desktop to Netbook): In both instances I get "ssh: Could not resolve hostname Netbook: Name or service not known".
 * Please doublecheck /etc/hosts on the desktop machine. The error message says that the desktop is unable to translate the name Netbook into an ip-address. And as said above, stick with ssh'ing to the ip-address until at least that works.

Regarding router settings: ''And in the router, the 'filter internet NAT redirection' is not checked. The other three on that page are.''
 * That is the same setup that I have.

Regarding "browsing" to the target computer, you wrote : Well, it might if the mount would not have been owned by root and I can't find a way to open a root browser to change the permissions. And above: ''As for 'surfing to the computer', I can also do that with my own computer by surfing to / (root). So I imagined the same would work with another computer if I have its address.''
 * If you type "ftp://192.168.1.113" in the address bar of firefox, you should get a prompt for your username and password on the netbook. If /etc/hosts were set up correctly, "ftp://Netbook" would have the same effect. Root access using this method might be disabled, but you should be able to log in with your ordinary username. You will then see the files of your home directory (and its subdirectories) only, not the root directory. You can get around that by creating a softlink to the root directory in your home directory. However, mounting the filesystem is probably what you want to achieve. AFAIK there are three methods: NFS, Samba and sshfs, but as I wrote above, I haven't tried this (although I have made a directory tree on my linux server accessible from the windows machines in my network using Samba).
 * When you write that NFS appears to work, but that there is a problem with permissions, what exactly is happening? Are you able to mount the filesystem on the netbook as read-only? And which username on the remote computer (netbook) are you using?
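 * Of the three, sshfs is probably the least setup work once ssh itself works - a sketch, assuming the sshfs (FUSE) package is installed on the desktop, and using the addresses and username from this thread:

```sh
mkdir -p ~/netbook                             # local mount point
sshfs dirk@192.168.1.113:/home/dirk ~/netbook  # mount the netbook's home dir
# ...browse ~/netbook as if it were a local directory...
fusermount -u ~/netbook                        # unmount when done
```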

Regrettably, linux programs don't always work out of the box, but ssh over a wired connection is one of the things that really ought to work. There appear to be at least three issues: (1) your desktop is unable to translate "Netbook" to the correct ip address. (2) sshd is not running on the netbook. (3) something is blocking the ssh connection from the netbook to the desktop. The last one is what's bothering me most. That same "something" might also block the connection the other way once you sort out the other issues. I can think of only two possibilities: a router setting, and a software firewall. Here (ubuntu) and here (suse) are threads that discuss similar problems, which might be helpful. --NorwegianBluetalk 22:27, 30 November 2008 (UTC)


 * If all I get with ssh is a command line, then it is not for me, I suppose. Still, I can't stand it when something doesn't work, so I tried a little more. Here's the results, but if that doesn't give you a clear hint, then let it be. I'll try NFS instead.
 * When I said that when I look for sshd I get no results, I meant in the software manager. That that gave no results is odd, since 'which sshd' (one of the commands I remember because it can be extremely helpful) gives me /usr/sbin/sshd (and ssh is in /usr/bin/ssh, so without the 's'). So it is installed, but the software manager does not know about it?? Anyway, I typed '/usr/sbin/sshd start', which results in 'Extra argument start'. No idea if that is good or bad. So I tried the ssh command with the other machine's ip address on both machines, but with the same results.
 * On both machines, in the network settings, 'write Hostname to /etc/hosts' is checked (which was the default). On both, in /etc/hosts I don't see 192.168.1.1x3 (or anything like it), and the machine's name appears only in the last line, behind the address 127.0.0.2 (actually two entries there, the other being the machine name followed by a dot and the group name (which of course is the same on both machines)). So I tried connecting to that ip address (I know nothing, so I'll try anything), but now the connection is refused.
 * When I try ftp, on the pc I get "Failed to Connect. The connection was refused when attempting to contact 192.168.1.113." And on Netbook the connection times out.
 * About starting a gui as root from the command line - I already tried that by typing 'Konqueror' in a root console, which didn't work. And when I look for it with 'which Konqueror' it tells me it can't find it (in the standard paths). But it is installed. What I really want is to be able to switch to root within Konqueror. In 'properties > share' there is an option to login as root, but that doesn't work. And now the computer freezes on me. This is the sort of thing that made me switch to Linux (never crashed on me once in three years, except for X once or twice), but strange things happen with this computer. Either this Suse version (enterprise desktop) or the computer is buggy. I hope it's the former, so I can install a better version, but fear it's the latter.
 * Anyway, with NFS, I can view everything on the pc now, but it is mounted as root. Ah, of course, I ran the whole thing as root, so I should have done that as a normal user. Silly me. But I'm very tired now (been a very long day at work), so I'll do that first thing tomorrow. Thanks for your patience with me so far. :) DirkvdM (talk) 20:08, 1 December 2008 (UTC)


 * First, yes - ssh does give you a command line, but it does a lot more behind the scenes in various protocols, to make connections work smoothly. I definitely recommend not giving up. The problems you experience do not suggest a "buggy" machine to me (i.e. a machine with deficient hardware). These are setup problems, and to use linux efficiently, you need to be able to handle such tasks, even when they involve a little command line work.


 * I'm not familiar with software management on Suse, but you need to make sure that the ssh server is installed and running. Google found this, which to me suggests that it needs to be installed and/or configured in your setup. If the software manager of Suse is anything like its gui counterpart in Debian-based distros (Synaptic), then searching for a package name will show it, with an icon indicating whether it is installed or not. The package containing sshd is called openssh-server, not sshd, at least in Debian. Googling also tells me that there is a software firewall in Suse, and I strongly suspect it to be the culprit when a connection is refused or when an attempt to make a connection times out. The second link in my previous post gives (hopefully correct) instructions as to how to make sure ssh connections are allowed in Suse. Here is another link that appears to be relevant to your problems. It says that you start sshd in Suse simply by typing its name, without any arguments. Thus the "start" argument was superfluous, hence the error message. And it doesn't take ip-address arguments, it sits there and waits for incoming connections. That's the sort of thing daemons do.
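 * A hedged command-line sketch for checking this (the paths are typical defaults; I haven't verified them on your Suse version):

```shell
# Check whether the ssh server binary exists and whether the daemon is running.
which sshd || echo "no sshd in PATH"
pgrep -x sshd > /dev/null && echo "sshd is running" || echo "sshd is not running"
# On Suse, starting it as root is reportedly just:
#   /usr/sbin/sshd     # no arguments -- it daemonizes and waits for connections
```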


 * If there's no mention of the 192.168.1.1x3 guys in /etc/hosts, there's simply no way you can access the other computer by its name. There may be a menu-driven way to do this, but the easiest thing to do would be to edit /etc/hosts by hand. Easy - that is - if you use a console based text editor (like vi, emacs or nano). You'll need to be root, and will probably run into the x-authority hassle I referred to above if you try to start a gui editor like kwrite as root. You can probably get around that by using sudo if that is installed, see above. (A google search for disable xauth gives 35,000 hits - on a secure network xauth is a true PITA).


 * Your /etc/hosts file needs to contain lines similar to these, taken from the /etc/hosts of my desktop (pluto):

127.0.0.1	localhost
127.0.1.1	pluto.Mydomain	pluto
192.168.3.49	atlas.Mydomain	atlas


 * 127.0.0.1 is the loopback virtual network interface, a generic ip-address that your computer can use to access itself. The second one, 127.0.1.1, is something similar, as is your 127.0.0.2. It is the line beginning with 192.168.3.49 that tells the desktop that atlas (my server) is a synonym for 192.168.3.49.
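 * To check what a name actually resolves to on your system, getent is handy (a standard glibc tool; "atlas" is just my example host):

```shell
# Ask the resolver what a name maps to; it consults /etc/hosts first.
getent hosts localhost     # typically prints the loopback address
# getent hosts atlas       # would print 192.168.3.49 once /etc/hosts has the line
```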


 * Regarding "which Konqueror" failing, linux is case-sensitive, remember? And I'm pretty sure konqueror shouldn't be capitalized. Try "which konqueror", and I expect you will find it, probably in /usr/bin.


 * Hope this helps! Don't give up. --NorwegianBluetalk 22:14, 1 December 2008 (UTC)


 * (alternating indent)
 * On the pc, yast has an sshd server configuration, but on the netbook it hasn't. Also, many things work differently on the netbook, which can be very confusing. Yet another reason to do a new installation. But it has no dvd player, so I'd have to buy an external one first. Or install from hd, which brings me back to copying from the pc. Possibly with NFS. But first, look at ssh once more.
 * In sshd configuration on the pc, 'supported ssh protocol versions' is set at '2 only', which I changed to '2 and 1', as on the page you linked to. The rest is the same as on that page.
 * I am sure I had stopped the firewall on both machines and the router, but now it is active again on the pc. So disabled it and tried again. Now I get "The authenticity of host 192.168.1.103 can't be established" Then it tells me the rsa fingerprint and asks me if I want to continue. So sort of a snag, but it seems to work. To see what was the cause, I say no, change ssh protocol to '2 only' and try again. And I get the same. So it was the firewall? I'm very sure I disabled it. Except I did that not in yast's 'security and users > firewall' (as I did now), but in the network card setup. Are there two firewalls or do the two entries conflict (which would be stupid, although I've seen something like that before)? Ok, so now I do 'ssh 192.168.1.103' again and say 'yes' and then it tells me it permanently added that address to the list of known hosts, but then 'write failed: broken pipe' and back to the prompt. Damn.
 * You're right, I should have written 'konqueror' without capitalisation. So now I can find its location and start it. But then it turns out it is mounted read-only, which means I can't change ownership, but also not permissions. Not even as root!? I suppose I need to change fstab, but I'm only vaguely familiar with that. The relevant line consists of source dir on the pc, the mount point and then 'nfs defaults 0 0'. How do I change that to mount it as a normal user ('dirk')? Or do I need to do that in some other way?
 * This very simple operation (or so I imagined it would be) has now consumed many hours of my precious time (almost two full days now) - I also try to lead an offline life (no, really), and I have quite a lot of things to do recently (such as shopping now if I want to eat this evening :) ). So I'll have to leave ssh (for now) and use NFS. Could you help me with this last bit? (which I hope it will be). DirkvdM (talk) 13:33, 2 December 2008 (UTC)
 * Damn, I thought I had it. http://www.troubleshooters.com/linux/nfs.htm looks very handy. I haven't read it completely yet (and it's too late to do that tonight and tomorrow I'm working), but it seems that all that is needed is /etc/exports on the server and /etc/fstab on the client. So in the former (on the pc) I changed 'ro' to 'rw' and to make sure restarted it and on the netbook switched to root (log out and back in - first time I found a use for that) and executed mount -a. Then I tried to change ownership for the dir mounted over nfs. But still I don't have permissions to do that. Btw, I should change to user 'dirk' and group 'users', right? Not that that matters yet, because I get 'access denied'. Hold on, I can't even view anymore. That is, I can view the dirs directly under the mount point, but no further down. All firewalls are down, so that's not it. I can't think of anything that is different from when I viewed them before except the change in /etc/exports on the pc. Oh, headache....
 * Btw, I had a look at your userpage and now you're NorwegianPurple. :) DirkvdM (talk) 19:33, 2 December 2008 (UTC)
 * (ec: I wrote the paragraph below before reading your last post, the old version of the page must have been cached)
 * Well, at least now we have a complete diagnosis:
 * The reason why you couldn't connect from the PC to the netbook, was that there was no ssh server on the netbook.
 * The reason why you couldn't connect from the netbook to the PC, was that the connection was blocked by the PC's firewall.
 * In addition, you couldn't connect using the names that you had given your machines because these had not been written to /etc/hosts.


 * The first time you connect to a new host with ssh, you get the message "The authenticity of host 192.168.1.103 can't be established", you're shown an rsa fingerprint, and are prompted whether you want to continue. This is normal behaviour. If you are making a connection between two of your own PC's you can safely answer yes. The client then saves this information about the host, and uses it as part of the verification process when subsequent connections are made. If everything works when you use ssh protocol 2 only, this is the safest setting (see the article).
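 * If you ever need to inspect or clear that saved information (for instance after reinstalling a machine, when its key changes), here's a sketch (the address is the one from this thread):

```shell
# ssh stores accepted host keys in ~/.ssh/known_hosts.
# Look up a saved key (prints nothing if the host is unknown):
ssh-keygen -F 192.168.1.103 || echo "host not in known_hosts yet"
# Remove a stale entry after a reinstall (commented out; it modifies the file):
#   ssh-keygen -R 192.168.1.103
```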


 * I googled the error message 'write failed: broken pipe' in the context of ssh, and found two suggestions: (1) you have permitted ssh connections in the firewall on the PC, but not in the firewall on the netbook. (2) Using ssh protocol 1 (or "1 and 2") might resolve this issue.


 * You repeat that the netbook has no dvd player, and that this prevents you from installing packages. I addressed this above when I wrote: I have no experience with Yast, but if it complains about not being able to access installation media, I suppose there is a configuration file somewhere that says where Yast should fetch new packages. If that file refers to both CD's and web-based repositories, I would try commenting out the references to the CD's, to force it to immediately go to the web-based repository. Did you try that, and if so, what happened?


 * I'll comment mounting of an NFS disk on your talk page.
 * --NorwegianPurpletalk 20:26, 2 December 2008 (UTC)


 * GOT IT! Well, sort of. A new thing I tried (leaving out as many things as possible that can get in the way) was to make a direct connection, without the router. I also logged into the netbook as root to avoid permission problems. And when I type 'ssh 192.168.1.103' I get prompted for the password. I used the root password (because I specified no user) and I'm in! At least I can surf around the hd's. But now comes the problem I mentioned earlier: I have precious little experience with the command line. Only system administrative work. I tried viewing a text with '/usr/bin/firefox lamp.html', upon which I get 'Error: no display specified'. So I tried a copy with 'cp lamp.html /home/dirk', but of course that refers to the pc, not the netbook. So why did I not get an error, without a capitalised 'd'? Well, I forgot the trailing slash, so now I have a copy of that file under /home with the new name 'dirk'. Which goes to show my point. I know enough to figure this out, but with my lack of knowledge even the few things I know how to do I still do wrong. And I really don't have the time now to learn working with the command line. As long as the grammar is logical I can figure it out (that's what I'm good at in general), but I lack everything but the most basic knowledge.
 * In the ssh man file I read something about X11 forwarding, which I don't follow. Can I access the remote computer with a gui using this?
 * But one thing I really want to know now, even if nothing else works, is how to copy from the pc to the netbook (and later vice versa, I suppose). That would help me a lot. Now that I'm logged into a remote computer, how do I refer to the computer itself? I tried 'cp lamp.html 192.168.1.113/home/dirk' and the same with 127.0.0.1, but neither works (cannot create regular file .... no such file or directory).
 * Btw, I also tried 'ssh -l Dirk 192.168.1.103' after I logged out and back in as a normal user and that works. So I won't have access problems after I have copied files.
 * ftp://1092.168.1.103 still doesn't work. Nor with just plain the address (the very first thing I tried)
 *  Inserted by NorwegianBlue:  There's a typo in the url above, but I'm sure you typed the correct address when trying this out. --NorwegianBluetalk 19:59, 4 December 2008 (UTC)
 * And now something astounding. I sometimes try something I know won't work, but then what do I know. I made the connection through the router again, with the remote login open, and tried navigating through the pc. And it works! So somehow the router gets in the way of setting up the connection, but not after that. Weird. Or is it?
 * Exciting new attempt: set up the firewall on the router. Still works. Firewall on the pc back up. Still works. I even started the firewall on the netbook for the first time. And it still works. Next test: close the connection and try again. Damn, doesn't work. Times out. Anyway, I can make a connection, even if it's not in the most convenient of ways. And that's the main thing.
 * Lunchtime. I'll look at my user page for what you wrote about NFS in an hour or so. DirkvdM (talk) 10:40, 4 December 2008 (UTC)
 * I'll be back later tonight, but just a clarification: you wrote: "I made the connection through the router again, with the remote login open, and tried navigating though the pc. And it works!". I didn't understand exactly what you did. Did you unplug and replug some of the wires, or what? Could you please have a look and doublecheck which wire goes where, and explain exactly what your setup looks like? I imagined it was something like this, the names in parentheses are the labels on the connections on the router:

                            Linksys router
                           +----------------+
broadband_connection   --> | (uplink)       |
                           |                |
PC 192.168.1.103       --> | (1)            |
                           |                |
Netbook 192.168.1.113  --> | (2)            |
                           | (3)            |
                           |                |
                           | (4)            |
                           +----------------+


 * Is this what your setup looks like? What did it look like when you made a direct connection, and what did it look like when you switched back to going through the router again, while leaving the connection open?
 * You don't use ssh to copy files, you use its cousin sftp, which is part of the openssh package (at least in Debian). Have a look at the manpage. If you have a connection where you're able to use ssh (whatever the wiring looks like), you type sftp 192.168.1.103. You get a prompt, in which you among other things can type "help". I'm not on a linux machine now, so the following is from memory: it accepts "dir" and "ls" (which are more or less synonymous) to show you the files of the remote machine. To copy a file from the local to the remote machine you use "put". There's also an "mput" to put multiple files using wildcards in the filename, and a command "prompt", which toggles whether you'll be prompted or not between each file in an "mput". To copy the other way, you use "get" and "mget". You can move between directories, I'm not sure if the command is "cd" or "chdir". You can delete and rename files. If you want to change the working directory on the local machine, the command is "lcd". To see the directory on the local machine: "lls" or "!ls". Preceding something with an exclamation mark, means that it will be executed on the local machine, and typing an exclamation mark only, lets you escape to a local shell, from which you can return to sftp by typing ctrl-D or "exit". And finally there are the commands "ascii" and "binary". If you are transferring between two linux machines, you should use "binary" (I think that's the default, but I'm not sure). If you are in ascii mode, you'll get a carriage return character inserted before every linefeed character, something which would ruin a binary file.  The struck-out commands are not used by linux sftp, but by windows ftp. --NorwegianBluetalk 15:47, 4 December 2008 (UTC)
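 * If you end up doing the same transfer often, sftp can also run non-interactively from a batch file. A sketch (address, user and file names are just the ones from this thread, and I haven't tested this exact invocation on Suse):

```shell
# Write the sftp commands to a batch file, then hand it to sftp with -b.
cat > /tmp/sftp-batch.txt <<'EOF'
lcd /home/dirk
get lamp.html
bye
EOF
# With a reachable ssh server this would run the commands above:
#   sftp -b /tmp/sftp-batch.txt dirk@192.168.1.103
cat /tmp/sftp-batch.txt
```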
 * I'm on the linux box now, so I've made a couple of corrections above. --NorwegianBluetalk 19:59, 4 December 2008 (UTC)
 * Re your question: In the ssh man file I read something about X11 forwarding, which I don't follow. Can I access the remote computer with a gui using this? The answer is "yes", but you won't be able to drag-and-drop files or anything unless you mount a directory tree of one of the computers in the file system of the other. What happens is that you start a shell, which is able to spawn new gui processes that run on the remote computer, but use the screen of the local computer. When I first met unix (in 1990), remote X was real easy. However, this was in the good old days, when there were only good guys on the internet. So they had to make it more secure, and I don't think you'll want to spend the time it will take to learn how to make a modern implementation work. I addressed exactly this question in a previous post. If file transfer is what you want, I think you'll be happy using sftp. Once you solve the technical issue that's blocking the connection, it'll be real easy. --NorwegianBluetalk 19:59, 4 December 2008 (UTC)


 * See at the very top, the second line, starting with 'first the basics'. Indeed, I have a setup as you drew it (except with Netbook in 4, but that's immaterial, I suppose). Then I took the netbook plug out of the router and plugged it into the pc. For which I had to unplug that too, because it only has one ethernet connection. In that setup the ssh connection worked. And then I put it back the way it was and it still worked to my surprise. It's a workaround, but provided I don't have to do this too much it will serve the purpose. However, more good news:
 * I got NFS to work as well! This means I can work with a gui, which is much more convenient. Just in case you wish to do this one day, and of course for other readers, here's how I got it to work:
 * On the server (the pc), go to yast > network services > NFS server. Firewall open on all interfaces, accept the defaults. In the next window select the partition (or whatever part of the filesystem) you want to mount. Then when you get shown the options, change 'ro' to 'rw' if you also want to be able to write to the partition. Now, on the client maybe first make a dir under which you wish to mount, then go to yast > network services > NFS client. Click 'add' and enter respectively the ip address (or name, if that works) of the server, the partition you just selected on the server, the dir under which you wish to mount that and leave the 'defaults' options as they are. Possibly do the same for more partitions (which you must then also have set up first on the server) and click 'ok'.
 * Some mistakes I made: I wanted to mount (on the client) partition 7 on hd S500. On the server, I have that mounted on /z/S500_7. So the thing to do was select that and any other partitions I wish to remotely mount. On the client, make a /z/ dir somewhere (as a normal user, to avoid access restrictions) (not sure if this is strictly necessary), then open the NFS client setup and there do the chore for every single partition. But I have 11 partitions, so I tried to mount /z/ and get it over with in one go. Which doesn't work because it is not a 'physical' partition. Am I saying that correctly? Another mistake I made was to edit a file remotely, open it on the server to check if it had worked, jump around the room in excitement because it had, forget to close the file on the server, do some more experimental stuff, try editing again and notice it opens read-only. Bummer. Until I realised I had left the file open on the server. Makes sense that I can't edit it then, of course. I also wonder if mounting the same partition (or part of it) on different locations on the client is a problem. That should work, I suppose, but can also be an invitation for complications.
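 * For completeness, this is roughly what I believe yast writes behind the scenes (the paths are mine; the export options are common defaults that I haven't checked against the actual files):

```
# /etc/exports on the server (the pc):
/z/S500_7    192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/fstab on the client (the netbook):
192.168.1.103:/z/S500_7    /z/S500_7    nfs    defaults    0 0
```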
 * In short: I am satisfied now. I can copy files from the pc to the netbook. Thanks a lot for your help! DirkvdM (talk) 19:19, 4 December 2008 (UTC)
 * Well done! It's been a pleasure, and I'm glad you persisted. --NorwegianBluetalk 20:38, 4 December 2008 (UTC)

On mounting an external filesystem in Suse, and on asking technical questions in general
Hi Dirk! Although I understand your frustration about using a lot of time when some piece of software refuses to do what it ought to do, I don't think it's a good idea to complain about it on the refdesk. Believe me - I've been there. I've spent countless hours of trying to make really simple things work. Sometimes successfully, sometimes only barely (midi + audio + sequencing can be quite a challenge in linux), and other times not at all (I realized that making wireless cards that weren't preconfigured work, would drive me nuts, and bought access points instead). As you, a very prolific contributor, who has devoted many, many hours to helping others on the refdesks very well know, we're volunteers. Whining about your precious time won't encourage others to spend theirs on helping you. There is a well known essay that you maybe have read: How to ask questions the smart way. I re-read it now, and I think it's wise to do so once in a while, especially before posting technical questions.

I asked a technical question here recently: X11 in minimal Debian installation. It got only one response (brief, though relevant). Although I'm sure the question could have been asked smarter, I don't think I was being ignored, but that maybe I chose the wrong forum - it may have been too technical for WP:RD/C.

The problems that remain to get the setup that you want, appear to be two:
 * 1) To be able to install programs on your netbook independently of the DVD, and without buying an external DVD drive.
 * 2) To mount a directory tree of one of your machines in the filesystem of the other.

Regarding (1): I have already given you a pointer on how to do this, but since I haven't got a Netbook with Suse on it, I can't try it out for you. Do so, and try to install an unrelated program that currently isn't installed, from a web-based repository. If, after making an effort, you are unable to, post a question telling what you want to achieve, and what you have done. Be very precise when asking - I had to do some guesswork to give the answers I've given, you'll get better answers if you make it as easy as possible for the reader. And don't give unnecessary info. Don't make it a diary of your hopes and expectations, disappointments and frustrations.

Regarding (2): As I've already said, I have no experience with NFS. It isn't installed on the machine I'm using now, and since my purpose is having a minimal install, I won't install it. Therefore I cannot help you out on this one, but others might be able to.

The thread has become quite long, and I don't think there will be more answers from other contributors. Moreover, it will soon hit the archives. So I think it's a better idea to post the two questions, separately, and in a focussed way in the spirit of the essay I linked to.

Example(1): ''I have a xxx.yyy Netbook (that came with?) Suse aa.bb installed. The Netbook has a wired connection to the internet through a Linksys ee.ff router, and browsing the web works fine. I would like to install packages from a web-based repository, but the netbook asks for the installation media when I try to install packages that aren't already there (quote error message). (Insert whatever you have done to remove the DVD from the list of repositories). How can I make the Netbook fetch packages from a web-based repository?''
 * Maybe you'll get a better answer on linuxquestions.org or a suse forum than here.

Example(2): ''I have a uuu.vvv Desktop with Suse cc.dd installed (by you or preconfigured?), and a xxx.yyy Netbook with Suse aa.bb installed (by you or preconfigured?). Both have a wired connection to a Linksys ee.ff router, which is connected to the internet. Both can browse the web without any problems. I would like to mount the directory tree (/aaa) on the (Desktop? Netbook?) as if it were part of the filesystem of the (Netbook? Desktop?). Both machines have a firewall running. I would be grateful for pointers on how to proceed.''

I'm sorry that I couldn't be of more help, but I'm sure the combination of a little patience, and asking the right questions in the right forums will get you where you want. I would very much like hearing from you when you've solved this, and hearing how you got the information you need. All the best. --NorwegianBluetalk 20:04, 2 December 2008 (UTC)

PS: In case you go to your talk page first because of the message notification, I've commented some of the other points in the thread at WP:RD/C. --NorwegianBluetalk 11:36, 3 December 2008 (UTC)


 * Good point about complaining. I try to write down what I do while I do it, get frustrated, write that down (more for myself) and then have to weed out those bits by the time I have figured something out and rewrite it for a more condensed version. Which I don't always completely do. About unnecessary info: if I don't know where the problem lies I don't know which info might be useful, so I try to be as complete as possible. Still, point taken.
 * The problem is not installing software - I can do that over the Internet, as you indeed suggest. I want to be able to install a different version of Suse that I am more familiar with. I thought that would be possible with a usb stick, but a technician at a computer shop told me that even other experts he knew who tried failed. Though I suspect now he was talking about msWindows. A better option would be from the hard disk. I have never done that, but believe it should not be too difficult. But then I'd have to copy files from the pc. Which brings me back to the original problem. I thought about an installation over the Internet, but for that I believe I would still need to burn some 'start files' on cd, so that doesn't solve the problem. And anyway it would be rather wasteful since I already have the latest version (11.0) and 11.1 is probably due any moment now.
 * As you say, maybe I should go to a more specialised forum.
 * Btw, I no longer give much help at the ref desks. I have done a whole lot of that until a year ago, but no longer have the time for that. I am now sort of 'cashing in on the credits I gathered'. :)
 * There's just one more thing over at the ref desk thread (how to address the local pc with ssh) and then I'll be able to use ssh. In the meantime, I'll have a look at that NFS page I found. Again, thanks a lot for your help. DirkvdM (talk) 12:19, 4 December 2008 (UTC)
 * My pleasure! I read in the thread now that you got NFS working, and I'm very happy that you persisted. I wrote an update to the comment that I wrote from work today. For some reason it appeared above your entry, even though yours was posted about half an hour earlier. (The 'pedia is extremely slow from here now, for some reason). Anyway, I updated the info on sftp, and answered your question about ssh -X. But if you've got NFS working, the problem is solved. --NorwegianBluetalk 20:15, 4 December 2008 (UTC)

Don't make me kick my Puppy....
I have Puppy Linux on two computers, and Windows XP on two others. I'm able to communicate via the network hub between XPs or from Puppy to XP, but not from Puppy to Puppy. I also have 2 Windows 98s on the hub, and a Damn Small Linux, to boot. So, how can I get my two Pups to talk to each other (or bark, as the case may be) ? StuRat (talk) 20:28, 4 December 2008 (UTC)


 * What does "communicate" mean? --71.106.183.17 (talk) 20:38, 4 December 2008 (UTC)


 * I'd like to be able to send files back and forth between Pups. StuRat (talk) 21:40, 4 December 2008 (UTC)


 * FTP should work regardless, using IP addresses. You can also set up NFS mounts between Unix filesystems. Franamax (talk) 21:43, 4 December 2008 (UTC)


 * How do I set up NFS mounts between Unix filesystems ? StuRat (talk) 12:38, 5 December 2008 (UTC)


 * Sounds like your systems are using Windows domain discovery to "communicate", the XP's can find each other and the puppies can find the XP's. Do you have DNS set up somewhere? The puppies will need that to name each other.
 * In any case, you should be able to telnet between the Linuxes, using their IP addresses. Franamax (talk) 20:51, 4 December 2008 (UTC)


 * I don't know if I have a Domain Name System set up on the Puppies. How can I check ? StuRat (talk) 12:05, 5 December 2008 (UTC)


 * You may find some useful information in the "Setting up a lan with two Suse machines" topic higher up on this page. -- LarryMac  | Talk  21:27, 4 December 2008 (UTC)


 * You say you have these seven (2 puppy, 2 XP, 2 win98 and one DSL) computers connected to the internet via a hub??? Where do the ip addresses you use come from? Is there a device in your network that acts as a DHCP server? Is your ISP providing them?? Are you using static ip-addresses??? (if you don't know, the answer is "no"). We need more precise info to give a complete answer, but please examine the /etc/hosts files of the puppies, to see if they reference each other. --NorwegianBluetalk 22:44, 4 December 2008 (UTC)


 * I think it's a hub, yes, but for only 4 PCs, 3 of which are dual-boot. I don't understand why this is so hard to believe.  Are 4 PCs an absolute limit for all hubs ?  We just installed a switch, too, on one of the hub ports, to support additional PCs.  I really don't understand the diff between these, however, or how it affects Puppy-to-Puppy file transfers.  My ISP, Yahoo/ATT/SBC/whatever_they_call_themselves_this_week, provides a device (is this a DHCP server ?) to access the Internet via the phone lines (a Digital Subscriber Line, I believe, not dial-up or cable).  I think this assigns one (dynamic ?), external I/P address to our network.  I believe we also have static, internal I/P addresses for each PC.  Would those have been assigned by the hub ?  See the /etc/hosts contents below. StuRat (talk) 12:13, 5 December 2008 (UTC)


 * (Please try to be less scream-ish; it hurts the ears of others, and is sometimes interpreted as "Go away, n00b!" by the receiving party.) flaminglawyerc neverforget 23:08, 4 December 2008 (UTC)


 * Please accept my apologies. I'll reconsider my use of repeated question marks and bold fonts. However, StuRat has been around here for a while, and has contributed a lot to the refdesks. StuRat is in my mind no n00b, and that is why I perhaps was a bit careless in the way I worded the response. You are right, of course, that it is unnecessary to be noisy. However, I'm confident that it'll take more than this to scare StuRat away :-). StuRat, please give us a bit more info, and we'll be happy to help. And I promise, I'll be more gentle on true n00bs. --NorwegianBluetalk 23:45, 4 December 2008 (UTC)


 * Good. I don't consider Stu a noob either, so I'm glad we agree. So on with the show. flaminglawyerc neverforget 00:17, 5 December 2008 (UTC)


 * Well, when it comes to PC/Linux networking, I am a newbie, and was afraid that, if I posted this Q, I'd get a bunch of questions I can't answer (and maybe be teased for not knowing the answers). I should add that many of the machines are dual-boot, and boot in Windows unless I use a Linux boot disk.  Here's the map:

PC A) Windows XP or Puppy Linux
PC B) Windows XP or Puppy Linux
PC C) Windows 98 or DSL
PC D) Windows 98


 * This creates a complication that the one Puppy may not know the I/P of the other Puppy, since it may not have been a Puppy at boot time. The contents of the /etc/hosts file on one Puppy are as follows:

127.0.0.1   localhost puppypc
192.168.1.1 pc2
192.168.1.2 pc3
192.168.1.3 pc4


 * StuRat (talk) 12:02, 5 December 2008 (UTC)

(outdent)

I'm sorry my previous post came out sounding arrogant, no teasing was intended. First, the basics:
 * To communicate on a network, a PC needs an ip-address. There are two ways of obtaining one: the PC can request one from a DHCP-server, or it can be configured to use a fixed (static) ip address. The usual setup in a windows-only network is that the PC requests its ip-address from a DHCP-server.
 * There is no guarantee that a PC receives the same ip-address on subsequent boots. The ip-addresses assigned, may depend on the order in which the PC's are booted.
 * A hub is a passive device, which would just forward a request for an ip-address to the device that's connected as its "uplink". I actually had a setup like you described (several PC's connected to the internet through a hub, each getting a separate ip-address from my ISP's DHCP-server), but that was back in 1995, and that's the reason I was so amazed. Now (or soon), the number of PC's in the world is larger than the number of ip-addresses (IPv4 addresses, to be precise). Therefore, I was amazed that an ISP would grant you seven IP-addresses for the price of one.
 * Therefore, I think there are two possibilities: either the device you refer to as a hub is actually a router, or the device provided by your ISP is acting as a DHCP server. It doesn't really matter which; either way, there is a DHCP server between your PCs and the internet.
 * I'm not completely sure about the difference between a switch and a hub either, but a hub is a more old-fashioned device. My understanding is that hubs send all the network traffic they receive everywhere, while switches make point-to-point connections, and avoid swamping the network with unnecessary traffic. If we take the trouble to read the articles, I'm sure it's explained there.
 * As a workaround for the global scarcity of ip-addresses, your router, when acting as a DHCP-server, will give you a private ip address, that is, an address that's unique on your network, and that is prohibited for use externally on the internet. What's visible on the internet is the ip-address of your router, and your router gets its ip-address from your ISP. Your PC, and the PC's of millions of other users, get addresses in the 192.168.1.1 − 192.168.1.255 range. The address is unique on your network, but it isn't transmitted to the outside world, so the fact that there are millions of PC's with the ip-address 192.168.1.100 isn't a problem.
 * When setting up a network, you need to make sure that no two machines have the same ip-address. If some of your machines use DHCP, and others have fixed ip-addresses, you must make sure that there are no ip-address collisions. Most routers by default reserve some ip-addresses for static use, and have a range that is assigned dynamically (e.g. 192.168.1.100 - 192.168.1.200 being reserved for DHCP, and the rest for static ip-addresses).
 * The fact that you use boot disks for puppy linux is important. I very much doubt that a boot disk would use static ip-addresses; it would use DHCP. I think the problem is that your /etc/hosts entries don't reflect the actual ip-addresses used. First, 127.0.0.1 is ok, that's a generic reference to self, i.e. the local computer. However, "192.168.1.1 pc2" is probably bogus. Most routers I've seen reserve the 192.168.xxx.1 address for themselves - you can access the router by typing 192.168.1.1 in the address bar of your browser. The "192.168.1.2 pc3" and "192.168.1.3 pc4" entries are also suspicious. As stated, routers usually reserve the lowest ip-addresses for use as static addresses.
 * To determine which ip-address your puppy uses, you can type "ifconfig" from the command line as root. There will be several lines of output, one of which shows the actual ip address used. If you type "ifconfig" from one of your puppies and make a note of its ip-address, you can access it by ftp or sftp using the ip-address, both from the other puppy and from the windows machines. Remember that the ip-address may change upon subsequent boots.
 * I tried out puppy linux a couple of years ago, and I vaguely recall that it had a file or something in the PC file system, where you could save settings. Thus, even if you boot from a CD, you may have the option of setting up static ip-addresses. But first, do some diagnostics based on what I've written above. If you can confirm that you are able to save settings, and want to try to assign static ip addresses to your puppies, let us know, and I (or someone else) will be back and try to guide you. --NorwegianBluetalk 20:44, 5 December 2008 (UTC)
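The ifconfig advice above can be sketched in the shell. This is a minimal, hypothetical example (not Puppy-specific): it parses classic ifconfig-style output to pull out the IPv4 address. The sample text is made up; on a real machine you would pipe the output of `ifconfig` itself instead.

```shell
# Sample of classic ifconfig output (invented for illustration).
sample='eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0'

# Keep the "inet addr:" line and strip everything but the address itself.
ip=$(printf '%s\n' "$sample" | grep 'inet addr:' | sed 's/.*inet addr:\([0-9.]*\).*/\1/')
echo "$ip"
```

Note that newer tools print the address after a bare `inet` keyword instead of `inet addr:`, so the pattern may need adjusting per distribution.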


 * StuRat,...
 * On each linux box type  and tell us what it says.
 * On each win box type  and tell us what the section that begins with "Ethernet adapter " says.
 * -- Fullstop (talk) 23:04, 5 December 2008 (UTC)


 * Ok, here's the I/P addresses (with rooms noted so I can recall which is which):

PC A) Windows XP or Puppy Linux (living room)    192.168.1.103
PC B) Windows XP or Puppy Linux (basement)       192.168.1.102
PC C) Windows 98 or DSL         (my bedroom)     192.168.1.101
PC D) Windows 98                (computer room)  192.168.1.100


 * But, are these permanent I/Ps or different depending on the boot order ? Do I now edit the /etc/hosts files to add these addresses ? StuRat (talk) 03:48, 6 December 2008 (UTC)


 * These are definitely DHCP-generated addresses. That implies that they're not guaranteed to remain stable, although your router may have a mechanism that reserves an ip-address to each MAC address for a given period of time. How did you get the ip-address for PC D? With a linux boot CD or from windows? In windows, the command corresponding to ifconfig is called ipconfig. Note that the ip-address of a dual-boot PC may be different when you boot into windows and linux (and if you set up static ip-addresses in linux, but not in windows, it's guaranteed to be different).


 * Did you try ftp'ing or sftp'ing your puppies using the ip-addresses?


 * The /etc/hosts that you posted is useless because it doesn't reflect the ip-addresses being used. I'm on a ubuntu PC that uses DHCP now, its /etc/hosts looks like this:

127.0.0.1      localhost
127.0.1.1      nemi


 * Followed by some IPv6 stuff, which you don't need if it isn't there already. All 127.xxx.xxx.xxx addresses map to your local computer, whether they're listed in /etc/hosts or not, see Loopback. And note that there is no mention of "real" ip-addresses. You won't break anything if you modify the hosts files of your puppies like so:

127.0.0.1       localhost puppypc
192.168.1.103   fido
192.168.1.102   guido


 * If you do this on both puppies, you should be able to ftp or sftp one from the other using its name, ftp fido from guido, provided the ip-addresses remain stable. Something they probably won't do, in the long run.


 * To set up static ip-addresses, the relevant files are /etc/network/interfaces and /etc/resolv.conf. If you want to try this out, please show us their contents, and we'll be back and try to help. --NorwegianBluetalk 10:52, 6 December 2008 (UTC)
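For comparison, on Debian-derived systems that do have an /etc/network/interfaces file, a static configuration looks roughly like this. This is a sketch using the example addresses from this thread (192.168.1.18 for the host, 192.168.1.1 for the router); Puppy itself may store the same settings elsewhere:

```
# /etc/network/interfaces (Debian-style static configuration, example values)
auto eth0
iface eth0 inet static
    address 192.168.1.18
    netmask 255.255.255.0
    gateway 192.168.1.1
```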


 * I used Fullstop's advice for finding the I/P addresses (except for the -a flag, which choked on Windows 98). They were the same when I booted under Windows XP or Puppy, which is good.  I certainly do want static I/P addresses, if those are needed so I can update the /etc/hosts files and allow Puppy-to-Puppy writes.  However, if there's a way it could automagically figure out the dynamic I/P addresses and update the /etc/hosts files accordingly, that would be even better. StuRat (talk) 14:50, 6 December 2008 (UTC)


 * I checked out the files you mentioned, and /etc/network/interfaces doesn't appear to exist on the Puppies, while /etc/resolv.conf is a link to /etc/ppp/resolv.conf, which is a link right back to /etc/resolv.conf. If I try to ping one Puppy from another it works fine, but if I try telnet it fails with "Connection refused".  Also, what I was calling a hub is apparently actually a router, if that makes any diff. StuRat (talk) 16:14, 6 December 2008 (UTC)

(outdent)

Your description of the files in /etc was so different from what I'm used to, that I had to check this out for myself. I downloaded the most recent version of puppy linux, "puppy-4.1.1-k2.6.25.16-seamonkey.iso". /etc/resolv.conf was just an empty file (not a softlink), /etc/network/interfaces was, as you said, nonexistent, and I did not immediately have a working internet connection. On the top left of the screen, there was a triangularly arranged set of icons. The bottom left of the triangle was an icon named "connect". I clicked it, and a new dialog appeared. I selected "Internet by network or wireless LAN". The next dialog shows your network card, or cards, if you have more than one. I'm assuming you only have one. There'll be a button, "eth0", click it. The next dialog starts out with "OK, let's try to configure eth0". Select "Static IP".

Now, you'll have to choose a static ip address for each puppy. From what you already have told us, we know that your router starts assigning dynamic ip-addresses from 192.168.1.100 and upwards. Therefore, you want to select an ip address lower than 192.168.1.100. Let's start with 192.168.1.18 for puppy1, and 192.168.1.19 for puppy2.

When setting up puppy1, enter 192.168.1.18 as its ip-address, 255.255.255.0 as its net mask, and 192.168.1.1 as the Gateway. I'm assuming 192.168.1.1 is the address of your router. You can doublecheck this by examining the output of ifconfig (linux) or ipconfig (windows), but it's a pretty safe bet. In the DNS parameters fields (primary and secondary), enter the first and second DNS server addresses of your ISP. You may find these on the website of your ISP. Otherwise, use Knoppix, and have a look at the nameserver entries of /etc/resolv.conf after booting with knoppix. There'll probably be three nameserver entries. I selected the first two, and entered those in the DNS parameters fields. Then, I pressed "OK". (When I did this more than once: attempt 1 was successful; attempt 2 gave an error message because the configuration already existed, and appeared to wipe out the previous configuration; attempt 3 was successful; and so on.)

At this point, the internet connection worked, and ifconfig told me that my ip-address was 192.168.1.18. resolv.conf was no longer empty, it was not a softlink, but a file containing these lines:

nameserver 193.213.112.4
nameserver 130.67.15.198
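Extracting those nameserver entries can be done in one line. A minimal sketch follows; a sample file stands in for /etc/resolv.conf here (the third address is invented to show that only the first two are kept, as suggested above for the primary and secondary DNS fields):

```shell
# Sample resolv.conf-style file (addresses from the example above, plus
# one invented third entry).
cat > /tmp/resolv.sample <<'EOF'
nameserver 193.213.112.4
nameserver 130.67.15.198
nameserver 10.0.0.1
EOF

# Print the first two nameserver addresses.
awk '/^nameserver/ { print $2 }' /tmp/resolv.sample | head -2
```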

There was still no /etc/network/interfaces. However I found an /etc/network-wizard/network/interfaces/00:f0:a3:79:E1:0C.conf, which contained

STATIC_IP='yes'
IP_ADDRESS='192.168.1.18'
NETMASK='255.255.255.0'
DNS_SERVER1='193.213.112.4'
DNS_SERVER2='130.67.15.198'
GATEWAY='192.168.1.1'
IS_WIRELESS=' '

The 00:f0:a3... name probably reflects the MAC address of my network card (and btw I've modified the name, out of paranoia about posting details of my system on the internet). This file appears to correspond to /etc/network/interfaces. Now, this PC has a static ip-address. I was prompted about saving the modifications to the local hard disk, answered yes, and they were intact when I booted again.

Now, we modify /etc/hosts. Mine was at this point exactly like what you posted. A reasonable modification would be:

127.0.0.1 localhost puppypc
192.168.1.18 puppy1
192.168.1.19 puppy2

Finally, there's a file called /etc/hostname, containing "puppypc". I would change that to the name you have chosen for each PC, let's say puppy1 for the first one. And that's it, you should now have a working setup with static ip addresses. If you set up the rest of your PC's correspondingly, ftp'ing using the names of the PC's should work. Note that you need to start the ftp daemon on the puppy, it wasn't on by default. It's in the menus, under network|PureFTPd ftp server. --NorwegianBluetalk 20:33, 7 December 2008 (UTC)
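One sanity check worth running after editing hosts files by hand is a scan for ip-address collisions, since (as noted earlier in this thread) no two machines may share an address. A minimal sketch, using a sample file that mirrors the /etc/hosts suggested above; in practice you would point it at /etc/hosts itself:

```shell
# Sample hosts file mirroring the suggested setup.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost puppypc
192.168.1.18 puppy1
192.168.1.19 puppy2
EOF

# uniq -d prints only duplicated addresses; no output means no collisions.
awk '{ print $1 }' /tmp/hosts.sample | sort | uniq -d
```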


 * Thanks. Were you able to do this on two Puppy Linux computers and have them send files between each other ? StuRat (talk) 00:21, 8 December 2008 (UTC)


 * I set up one computer with Puppy linux, and tested ftp against a Debian computer (which already was set up with static addressing). I set up /etc/hosts appropriately on both, ftp'd in both directions, using the computer names. --NorwegianBluetalk 15:09, 8 December 2008 (UTC)


 * I see. Did you get the same "Connection refused" error before you did the set up ? StuRat (talk) 17:35, 9 December 2008 (UTC)


 * I think "Connection refused" indicates that you haven't started the ftp daemon on the pc you're trying to connect to, see my post above about how to start it. --NorwegianBluetalk 18:22, 9 December 2008 (UTC)

Set up static IP addresses and host names on the router. In fact, unless your router is some really cheap job, it will have already done something of the kind. The Windoze boxes will also have "registered" themselves with the router (more precisely: its DNS relay) using the names you gave them. -- Fullstop (talk) 18:15, 9 December 2008 (UTC)

Game programming, OpenGl, Shader languages
Hello/HALO,

I am starting game programming and studying the basics, but I came across the problem of which environment I should program in. Is it XNA, or DirectX, or OpenGL, or another? I just want to know which is best (maybe not easy) and which is easy (maybe not best). -- 122.163.15.188 (talk) 15:27, 16 December 2008 (UTC)Harshagg


 * I think OpenGL is easier to work with than DirectX, but perhaps it is only a matter of taste (and I am not an experienced game developer). --Andreas Rejbrand (talk) 16:10, 16 December 2008 (UTC)
 * If you want to develop 3D games for the Microsoft Windows platform, then probably DirectX is the better choice, though. --Andreas Rejbrand (talk) 16:12, 16 December 2008 (UTC)
 * Performance depends on the driver; Microsoft provided a slow OpenGL driver to favour its own Direct3D. Id Software successfully uses OpenGL. MTM (talk) 17:35, 16 December 2008 (UTC)


 * DirectX does a lot of the work for you and is easier to develop for. OpenGL is far more flexible, but requires more brains. Compare Unreal Engine 3 (DirectX) vs. ID Tech 5 (OpenGL) --70.167.58.6 (talk) 18:24, 16 December 2008 (UTC)


 * Hi! I'm an actual game programmer!  Graphics is my speciality.


 * Most Windows/XBOX games are written in DirectX - this is not an easy ride for many reasons - but that's how it is. XNA is pretty much ignored (and a good thing too!).  OpenGL is the only portable graphics API and it's also the only game in town for Linux, MacOSX, iPhone, Google Android phone - and the graphics APIs for things like the Nintendo DS and PlayStation are more closely modelled on OpenGL than on DirectX.   DirectX has some severe problems because it's a Microsoft-controlled standard and they can make life arbitrarily difficult for you...hence, for example, if you want to use such nice features as Geometry shaders or Texture arrays, you have to have DirectX 10.  DirectX 9 won't do.  Unfortunately, in a typically Microsoftian move - they refuse to publish DirectX 10 for Windows XP - you need Vista.  But far more games players are running Windows XP than Vista - so most games writers are 'stuck' on DirectX 9.  In the OpenGL world, there are nice extension mechanisms that allow individual hardware vendors to add features to the API without help from the OS vendor.  Hence, OpenGL under Windows XP has both geometry shaders and texture arrays if your graphics card is "DirectX 10 capable".   That's a bloody ridiculous situation.  So with all of that information at hand - you'd think it'd be a slam-dunk and we'd all be using OpenGL.  But not so.  Sadly, DirectX has enough momentum behind it on two of the most dominant games platforms that people tend to stick to DirectX - despite its many faults.


 * IMHO - it doesn't much matter which you learn initially - you're going to need to know both of them if you want to be a low-level graphics engine programmer.  However, if you're going to work with (say) the Unreal Engine - you'll probably quite rarely go near that low level.  Unreal provides its own 'portability layer' over the top of DirectX, raw Xbox, bare-to-the-metal Playstation, etc.  You program mostly using the portability layer and you don't give a damn whether it's DirectX or OpenGL or raw register access commands.  On the very rare occasion you delve that deep - consult the DirectX/OpenGL manual!


 * The huge complexity of all of these APIs is also way overstated and I strongly disagree that there is any complexity difference between them. These days you load textures, load shaders and DMA triangle meshes to the hardware as fast as possible.  This is probably 10% of the respective APIs - most of the other 90% is stuff that's obsoleted by shader technology and may safely be ignored and looked up in the manual in the unlikely case you'll ever actually need it.  Shader technology has superseded a lot of old junk like "how do I draw a dotted line?" - well, you certainly don't rummage deep into the DirectX/OpenGL manual...you draw a regular line and write a shader to make it dotted.


 * In addition to the graphics APIs - you need to get REALLY familiar with the shader languages - HLSL, Cg and GLSL. They are very similar to one another but the small differences can kill you - so pay attention to those tiny differences!


 * SteveBaker (talk) 04:19, 17 December 2008 (UTC)
 * That comment was really insightful. Thank you steve. -- penubag  (talk) 09:08, 18 December 2008 (UTC)

Rootkit removal
I have a virus on my computer called backdoor.tidserv. I need to know how to remove it. It makes some sites, such as Google, act strangely. I can't remove it with the anti-virus software because it shows up as "left alone". And I can't use System Restore because nothing happens when I click the "next" button on the third step. Is it safe to remove it manually by going to the directory it is in, right-clicking it, and choosing "delete"? If not, are there any free anti-virus programs that will remove this virus? Not those that require registration. Just the free ones. 60.230.124.64 (talk) 11:14, 22 December 2008 (UTC)
 * Ubuntu, Kubuntu, Mepis, Debian, SuSE, etc. Each of them free, none of them requiring registration, all of them ensuring that you'll never again suffer from this kind of crap. -- Hoary (talk) 11:46, 22 December 2008 (UTC)
 * I don't want to switch operating system. All I'm interested in is getting rid of this treacherous virus. 60.230.124.64 (talk) 12:11, 22 December 2008 (UTC)
 * Ignore Hoary, there are plenty of viruses for *nix, and his answer doesn't even address the question. I'm going to write up some instructions now, which AV are you using? :) — neuro(talk) 12:51, 22 December 2008 (UTC)
 * Symantec. 60.230.124.64 (talk) 13:04, 22 December 2008 (UTC)
 * Then this should help. — neuro(talk) 13:06, 22 December 2008 (UTC) Turns out 'done' != 'solved'. — neuro(talk) 13:08, 22 December 2008 (UTC)


 * (ec) Yeah it annoys me too when someone suggests changing your whole OS and migrating your stuff just to fix some minor Windows problem.
 * this link tells you a bit about your virus. It seems Norton Anti Virus can get rid of it for you and I suspect that other Anti Virus software can remove it as well (perhaps your Anti Virus has been compromised in some way).  If you don't want to splash AUS$60 or more, you can try a manual removal.  The word "TDSS" seems to be an important clue.  Search your system for all files with "TDSS" in the filename, and search the registry for "TDSS".  Delete the obvious candidates and move/rename the less obvious ones (remembering their old name/location).  Reboot your PC.  You might have to go round this process several times to be sure you have got all of it.  One last thing:  messing with the registry and system files carries a high risk of breaking Windows so bad that you need to reinstall everything.  Make sure you back up anything you cannot afford to lose (i.e. documents, photos, emails, etc.) before you start.  Astronaut (talk) 13:06, 22 December 2008 (UTC)


 * Well no there aren't "plenty of viruses for *nix", but I do agree that Hoary's comment was a bit pointless. Anyway, go ahead and delete the virus' file. If you can. You see, while the virus is running, Windows locks the file so you can't delete it. You'll have to terminate the process first. If Windows Task Manager can't terminate the process, try IceSword or gmer or something else. You can also try booting from a GNU/Linux LiveCD with NTFS support or a Windows Live CD (see BartPE) and delete it from there.


 * The first thing you'll have to do is locate the virus' file. This can be done using Process Explorer (google it). If the virus is some sort of DLL, then it's going to be much harder. If it's a rootkit, use IceSword or gmer. --wj32 t/c 21:30, 22 December 2008 (UTC)


 * Actually, it's not that simple. This does have a rootkit component (thanks for the link, Astronaut). You'll first have to use IceSword's registry editor to delete,  ,   and  . Then use IceSword to move any files that start with TDSS in   to a backup directory. --wj32 t/c 21:42, 22 December 2008 (UTC)


 * I've heard that this virus can stop you from getting anti-virus programs. I don't know if this has happened to my computer, but what if it does? What can I do then? 60.230.124.64 (talk) 23:58, 22 December 2008 (UTC)


 * Please, stop worrying about what might happen if you "get an anti-virus" program. Search Google for IceSword and download it. Run it, and follow the instructions I just gave you. Sorry, but... your computer will not blow up if the rootkit you have prevents you from running an anti-virus program! --wj32 t/c 00:11, 23 December 2008 (UTC)


 * Yes, hunt down and kill any running processes and services that start with "TDSS". Process Explorer is good for that, and/or a rootkit killer such as IceSword if a rootkit is involved (though if you're running Vista you might have difficulty finding a rootkit killer that works).  The big problem though is thinking you've got rid of it all, only to find it comes back after a reboot.  In my experience, it is possible to have multiple copies of the same virus or many different virus infections all hidden by the same rootkit.  Getting them all is a long job.
 * The best guide is to be familiar with what your PC loads at boot time and then check up on any changes. Anything that starts at boot time should be checked out (googling file names is one simple method - e.g. googling "TDSServ.sys" gets 9,000+ hits mostly about malware).  Astronaut (talk) 01:33, 23 December 2008 (UTC)


 * I used this program ComboFix a while back to get rid of some spyware/virus on a friend's computer after all other antispyware programs failed and I think the files it got rid of did start with TDSS... So maybe give it a shot. Cheers, --71.141.107.171 (talk) 04:59, 23 December 2008 (UTC)

I'm puzzled. I read above:
 * Astronaut: it annoys me too when someone suggests changing your whole OS and migrating your stuff just to fix some minor Windows problem [...] messing with the registry and system files carries a high risk of breaking Windows so bad that you need to reinstall everything
 * Wj32: there aren't "plenty of viruses for *nix", but I do agree that Hoary's comment was a bit pointless
Its point was that installing GNU/Linux is about as simple as the procedure suggested above, that it avoids the risk of the recurrence of something similar, and that it's free and doesn't require registration. Of course you'd copy your work files off the computer first; if this "backdoor.tidserv" malware prevents this, then you could do it after booting off some portable, CD-based alternative to the damaged Windows installation. -- Hoary (talk) 14:04, 23 December 2008 (UTC)


 * Amazingly (well, it amazes me anyway) some people actually WANT to run Windows. Advising them to switch to Linux - while fundamentally sound advice - isn't helping them solve their immediate problem - and is therefore likely to be rejected, typically with some degree of hostility.  So it's probably best not to suggest it until they are in a better mood!  But there can be no doubt whatever that in practical terms, Linux is safe from viruses. neuro says there are 'nix virii - which is technically true - but nobody ever suffers from them - so this is at best a misleading statement.  I've been using Linux since almost day #1 (I downloaded it from Linus himself soon after it was first announced) - I don't take any precautions whatever against virus attacks - I visit dubious websites, I download stuff with impunity, open attachments from complete strangers, I don't have a virus checker or even a hardware firewall and I leave my computers (many of them) turned on 24/7 on open Internet connections and sometimes, even wireless routers without encryption.  All sorts of things that would be rapidly fatal to a Windows user.  But in 17 years of intensely reckless Linux/Internet use - I've not had a single virus, malware, rootkit or other inconvenience of any kind - and neither has any of my friends or colleagues who use Linux.  So - you shouldn't make the switch because you have one specific problem - but in terms of general freedom from grief over the long term, it's a strong reason to change.  SteveBaker (talk) 16:05, 23 December 2008 (UTC)


 * Steve's right. I'm not going to tell you to switch from Windows to Linux if you're not ready to, but you should understand that the virus problem under Windows is one of the sad prices you must pay for choosing (or being forced) to use Windows.  Bill & Co. have taught you that viruses are inevitable, are the sole fault of the nasty virus writers, and are a fact of life that must be forborne, like STD's and bad weather.  But all three of those points are quite false.  A properly-designed operating system is immune from malware (and it's inherently immune; it doesn't require add-on security products to make it so.)  The virus plague under Windows is only partially the fault of the nasty virus writers -- it is also very directly the fault of Microsoft, for actively enabling the possibility of viruses by adding lots of ill-advised features to Windows over the years, and by never taking security seriously until it was much too late. —Steve Summit (talk) 20:41, 30 December 2008 (UTC)


 * I use Ubuntu myself, but telling Windows users to switch to GNU/Linux isn't going to be accepted by them - they are Windows users after all. --wj32 t/c 22:49, 23 December 2008 (UTC)

Just let me ask a question about this virus. The only bad things I know it does are 1) change the behaviour of search sites such as Google and 2) may stop you from getting anti-virus programs so you can delete it. Are there any other symptoms? 60.230.124.64 (talk) 05:52, 24 December 2008 (UTC)


 * The trouble is that when you have one virus, it isn't long before other malware arrives. The link I provided above says this virus opens a backdoor into your PC.  Such a backdoor enables other malware to be installed without your consent or knowledge, including more viruses, keyloggers and for example linking your PC to a bot-net which will spam many millions of other PC users round the world.  Under the burden of all this malware, your PC will eventually slow to a crawl as it expends more and more resources servicing the needs of the malware.  Personal information such as bank account details, PIN numbers etc. could be stolen, enabling thieves to empty your bank account (and if you believe the more paranoid "security experts", use that money to finance people trafficking, terrorism, drugs, etc.)  Clean up your PC before it gets any worse.  Astronaut (talk) 13:34, 27 December 2008 (UTC)

Backdoor Again
1. Remember that "backdoor.tidserv" virus I was talking about above? Well, I got rid of it by deleting it manually (or, more accurately, "them" since there were multiple copies of it on the computer) - and by manually, I mean going to their directory, removing them, then deleting them from the Recycle Bin. Is it safe to do it this way? And the search engines are still acting strangely (they take me to random sites instead of the site I want). How can I fix this?

2. There is another virus on the computer called w32.tidserv. Every time I scan for it, it says it was cleaned by deletion, but when I restart the computer, the damned virus is still there. Could this other virus be the cause of Google's and other search engines' strange behaviour? And can I delete this other one manually?

60.230.124.64 (talk) 23:13, 29 December 2008 (UTC)


 * TidServ is a rootkit. I already gave you instructions for removing it; get IceSword and you will be able to delete the rootkit. Also, deleting viruses is definitely safe. Just don't try to unload rootkit drivers while they're running. --wj32 t/c 04:22, 30 December 2008 (UTC)


 * Deleting files manually is not sufficient to remove a virus from your system. Most viruses embed parts of themselves in various system files where you won't be able to get at them.  Use an antivirus / antimalware program.  If your antivirus program claims it cleaned your system but it seems like the virus keeps coming back, then try a different antivirus program as well.  Tempshill (talk) 16:03, 30 December 2008 (UTC)

Solution Configurations in Visual Studio
I have started using Visual Studio Express 2008 to develop some Windows programs in C#. When I first installed it, I'm pretty sure I was able to build my programs in either "debug mode" or "release mode". However, I've recently noticed that the solution configuration options have disappeared and the Configuration Manager option is permanently greyed-out. I am now only able to produce a release version. Any ideas how to fix this, or should I reinstall Visual Studio from scratch? Astronaut (talk) 15:04, 19 January 2009 (UTC)


 * I use VS Express too. It only produces a Debug build when you run/debug the program from inside Visual Studio. When you build using F6, it produces a Release build. --wj32 t/c 04:12, 20 January 2009 (UTC)


 * Tools | Options | Show all settings (enable) | Projects and Solutions | Show advanced build options (enable). Bendono (talk) 04:20, 20 January 2009 (UTC)


 * WTH. I didn't see that! Thanks a lot! --wj32 t/c 06:27, 20 January 2009 (UTC)

Disabling autorun under XP
A few weeks ago I asked here about disabling autorun under XP. The standard answers didn't help me. It now looks as though Microsoft has finally admitted the problem and come up with a fix, so I hope it's okay to post a link here for anyone else having the same problem. --Shantavira|feed me 19:02, 26 February 2009 (UTC)


 * Of course, and thank you. I tried following the instructions, and managed, after some fiddling. The link to the English Microsoft page doesn't work for a non-English version of the OS. So I googled "KB950582 oppdatering" (KB950582 update), and found the relevant Microsoft page in Norwegian. However, the utility "Gpedit.msc" that is referred to in the article was not installed after the update. So I followed the manual instructions, and added the registry key NoDriveTypeAutoRun (REG_DWORD), which I assigned the value 0xFF, which should disable AutoPlay on all kinds of drives. (Location: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\). And that annoying dialog that comes up whenever I plug a USB device into the PC is gone, the drives just appear in "My Computer". --NorwegianBluetalk 00:01, 27 February 2009 (UTC)
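The registry change described above can also be packaged as a .reg file that can be double-clicked to import. This is a sketch of the key and value just described (0xFF disables AutoRun on all drive types); double-check the path against your own system before importing:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```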

What is the best free antivirus program?
Are there any ones which are on all the time? So when I download a file it automatically scans it immediately?--75.187.113.105 (talk) 00:43, 14 March 2009 (UTC)
 * There are three major free anti-viruses. They are:


 * avast! Home Edition
 * Avira AntiVir Free
 * AVG Anti-Virus Free Edition

All three have resident/real-time protection against threats (That is, files are scanned upon opening and such). As for your second question, avast! has a Web Shield module which will scan almost everything (Certain content such as music files are excluded by default from Web Shield scanning to prevent slowdowns of your browsing speed) as they stream to your computer (Website content; avast! picks up a lot of malicious JavaScript) or as files download. (It creates a transparent proxy and redirects all web traffic through it to scan it for threats)

In personal tests that I conducted within a sandbox it successfully detected rogues (Fake software that bring along a lot of malware and that try to scam you out of your money) mid-download. With other threats it often detected them before I even got a download prompt! (Temporary files are scanned automatically and the threat(s) was/were detected)

avast! Home Edition (The free version) however lacks PUSH updates (Initiated by the avast! servers rather than by the user), a script scanner (Scans scripts executed on the local computer; more importantly though it scans websites in web browsers for malicious content), command-line scanner (used by those who prefer the efficiency of the command-line and those that wish to execute batch commands), automatic actions to be taken when a threat is detected (In the free version a popup appears with cool [But loud] siren sound effects showing the threat detected and giving the options available. In the Professional version; an action can be set to be taken automatically when a threat is detected), and finally the Enhanced-User Interface. (There are many complaints about the avast! Simple User Interface [I have no problems with it and love it for its simplicity] for looking too much like a media player etc)

However, avast! is the most fully-featured of the free anti-viruses, with features such as the Boot-Time Scan (scans your computer before Windows boots up to kill off threats before they can defend themselves against removal), the Virus Recovery Database (stores copies of critical system files to allow easy repairs if they get infected), and the Virus Chest (the same as the "Quarantine" area of most other anti-viruses, except that avast! lets you rescan files in it as many times as you want to check for false positives, and you can add your own files to the chest). -- I recommend avast!, but note that it lacks heuristics (behavior analysis; some regard heuristics as mere advertising that creates too many false positives, making it difficult for the user to determine what is a threat and what isn't).

Avira AntiVir Free is also highly popular, like avast! (which has over 75 million registered users). In tests by independent companies such as AV Comparatives, it had the highest detection rates (but not for rootkits, as discovered by other testers), beating out even GDATA, which uses the BitDefender and avast! scanning engines for very high detection rates. Avira AntiVir Free, though, lacks anti-spyware and anti-adware; a major weakness. Like avast!, it also prompts you for every threat detected (which can be annoying if you want to clean up a heavily infected machine). It has heuristics. -- I recommend it, but be sure to use a good anti-spyware application alongside it, and a good HIPS application as well.

Lastly, AVG Anti-Virus Free Edition 8.5. It is probably the most popular of the three.

I know this will cause controversy, but I do not recommend it. The 7.5 version of AVG Free was excellent, but the 8.x series lost too many features. The lack of an anti-rootkit is very bad, as rootkits are becoming more common and ever more dangerous and difficult to remove. The reduced-priority updates are also bad, as AVG has usually had pretty bad detection rates for threats with old signatures. (Other anti-viruses can do rather well with out-of-date signatures, but always keep them up-to-date. Even so, a recent study by Panda Labs shows that 35% of infected computers HAD an updated anti-virus. Having an anti-virus alone is not enough.) Its detection rate is, at best, decent. If you want an AVG product, get the paid version, not the free one.

If you are willing to spend a few bucks: (I suspect not as you asked for recommendations for free anti-viruses)

I most highly recommend the Kaspersky 2009 and Norton 2009 versions. (Yes, I know, "Norton sucks!"; yeah, well, not the 2009 versions, they are light and have demonstrated excellent anti-malware capabilities. However the following is true: "McAfee sucks!" :P)

Hope this helps. :) -- Xp54321 ( Hello! • Contribs ) 02:54, 14 March 2009 (UTC)

VNC client
Hi

Can someone suggest a good VNC client for Windows, specifically one that may have a LAN browser and tabs, like Vinagre in Ubuntu?

TIA PrinzPH (talk) 22:48, 13 March 2009 (UTC)


 * Check out TightVNC and RealVNC. They are two of the more common varieties. Shadowjams (talk) 00:55, 14 March 2009 (UTC)

Xlib fullscreen window sample
Can you give me a link to a minimal program that creates a fullscreen X window? Something along the lines found here, but complete (the code there doesn't work for me, or I'm miserably failing at copy-paste). --194.197.235.29 (talk) 16:39, 13 March 2009 (UTC)


 * I've fused a nice Xlib tutorial with the code you describe, making the following (which works on my Ubuntu machine):


 * 87.115.143.223 (talk) 17:34, 13 March 2009 (UTC)


 * Thank you, works very well on my ubuntu machine too. --194.197.235.29 (talk) 18:32, 13 March 2009 (UTC)

Finding pixel with RGB in visual basic or visual C++
is there some code in Visual Basic or Visual C++ with which I could find a pixel with specific color values (red, green, blue) at a particular Y-coordinate or X-coordinate? Please post it in full. --harish (talk) 15:42, 12 March 2009 (UTC)


 * A pixel in what - in an image, on the screen, on a video? 87.115.143.223 (talk) 15:43, 12 March 2009 (UTC)


 * In Java, the AWT Robot can do this, either by creating a BufferedImage of the screen (allowing you access to a "copy" of that screenshot), or by returning the current color of the pixel at the mouse-coordinate. The AWT Robot can get screen information from anywhere rendered by the operating system (not just in the Java application window(s)).  There's probably an equivalent feature in .NET Framework or Visual C++.  Nimur (talk) 17:01, 12 March 2009 (UTC)


 * This article describes the WinAPI calls that capture the screen in VisualC++ (I guess they're also exposed to VB by the same names). 87.115.143.223 (talk) 17:22, 12 March 2009 (UTC)


 * Sure, just use GetPixel and get a handle to the whole screen. (Replace with another HDC if you need something more specific.) Here is a quick sample:


 * Regards, Bendono (talk) 18:01, 12 March 2009 (UTC)

Getting rid of virus when you can't boot
Hey, I just received an old computer from a friend of mine. Problem is, it can't boot into Windows XP. Well, it can sort of, but as soon as you get to the login screen an error saying drwtsn32 failed to initialize appears, and upon clicking OK, the computer reboots. I suspect a virus of some kind. Question is, what tools (preferably free) would allow me to run a virus/spyware scan on the hard drive, without it having to boot into Windows? Are there any LiveCD versions of Linux that come built-in with antivirus tools I can use to scan a Windows partition? The computer doesn't have internet access, so I can't update definitions if I have an old .iso... I'd really prefer not to reformat the HDD, and that's why I'm looking into these other alternatives. Thanks for all the help I can get! 141.153.216.72 (talk) 18:55, 25 May 2009 (UTC)


 * One question: Have you tried booting Windows into Safe Mode?  Tempshill (talk) 19:49, 25 May 2009 (UTC)
 * Yes, and I get a blue screen as a result. (OP) 141.153.215.48 (talk) 22:35, 25 May 2009 (UTC)
 * I googled "antivirus boot cd free" and the first link was this list of free antivirus CDs. If you really think this is a virus rather than some configuration problem (you haven't mentioned why you think it's a virus, but let's say it's a significant probability), I would (a) go ahead and try one of those boot disks; but you could also (b) detach the hard disk and attach it to a working computer of yours that already has some antivirus software on it that can scan the drive.  Tempshill (talk) 23:24, 25 May 2009 (UTC)


 * You could run clamav from a boot CD. Googling turned up the clamav live CD, which allegedly is updated hourly with the latest virus definitions by an automated script. (Here's the script.) -- BenRG (talk) 23:27, 25 May 2009 (UTC)

Linux chmod, chown, chgrp
I need to do the following, recursively in a directory tree: Could someone please suggest how to do this from the command line, in an easy-to-remember way? Thanks, --NorwegianBluetalk 14:27, 24 July 2009 (UTC)
 * 1) chmod 755 for all subdirectories
 * 2) chmod 644 for all files
 * 3) chown www-data both for files and directories
 * 4) chgrp www-data both for files and directories


 * find -type d -exec chmod 755 {} \;
 * find -type f -exec chmod 644 {} \;
 * chown -R norwegianblue www-data
 * chgrp -R somegroup www-data
 * 87.114.144.52 (talk) 14:34, 24 July 2009 (UTC)


 * Thanks! find was what I was looking for. The chown and chgrp syntax doesn't seem to be right, though. Didn't work, and the manpage says nothing about specifying the original owner/group name. But

find -exec chown www-data:www-data {} \;
 * appears to work, and changes both user and group. Btw, why the need for a backslash before the semicolon? --NorwegianBluetalk 18:25, 24 July 2009 (UTC)


 * sh uses the semicolon for syntactic purposes, so it needs to be escaped so that the shell passes it to find literally instead of interpreting it. Unescaped, it separates commands without a linebreak, e.g. 'rm ~/.bash_history; history -c; exit'. --91.145.89.22 (talk) 19:06, 24 July 2009 (UTC)
 * Thanks. --NorwegianBluetalk 19:07, 24 July 2009 (UTC)


 * I usually just use chown -R username:groupname directory. Never needed chgrp. Indeterminate (talk) 00:09, 25 July 2009 (UTC)


 * Missing from all these answers is the elegant answer chmod -R a+rX. This is not the same as a+rx, which would add x permission for everybody on everything. The capital X adds x permission for everybody on directories and on those files that are already executable for somebody. In other words, it's probably exactly what you're looking for. The creators of Unix knew you'd want it, so they added it for you, around 25 years ago. 69.245.227.37 (talk) 11:43, 25 July 2009 (UTC)
 * Brilliant! Thank you. --NorwegianBluetalk 15:58, 25 July 2009 (UTC)
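For later reference, the pieces that worked in this thread combine into one short recipe. A sketch (assumes GNU find/chmod; demo/ is a throwaway tree standing in for the real directory, and the chown line is commented out because it needs root):

```shell
# Build a throwaway tree to operate on (stands in for the real directory).
mkdir -p demo/sub
touch demo/file.txt demo/sub/page.html

find demo -type d -exec chmod 755 {} \;   # 1) directories: rwxr-xr-x
find demo -type f -exec chmod 644 {} \;   # 2) files: rw-r--r--
# find demo -exec chown www-data:www-data {} \;   # 3+4) owner and group (needs root)
# or, per the tip above: chmod -R a+rX demo      # read for all, x only where relevant
```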

Downloading flash-based video
I'm looking for a program or browser-plugin that enables downloading of flash based video from a web site, specifically this site, in a format playable by media players such as Windows media player, VLC media player or Quicktime. OS: Windows XP, or Linux (Debian or Ubuntu). Anyone aware of such a beast? --NorwegianBluetalk 21:16, 29 July 2009 (UTC)


 * I was able to download the FLV file using FlashGot in Firefox. VLC can play FLVs and there are no doubt codecs that let you play it in everything else as well. --98.217.14.211 (talk) 21:24, 29 July 2009 (UTC)
 * Thanks a million! Worked like a charm. --NorwegianBluetalk 21:55, 29 July 2009 (UTC)

Addendum: I'd like to add, in case anyone else is interested, that the file was easily transcoded to a format recognizable by Windows Media Player, using the transcoding wizard of VLC media player. --NorwegianBluetalk 22:12, 29 July 2009 (UTC)
 * Also the VLC Player can output video as ASCII art, realtime.. probably worth a download just for that.. 83.100.250.79 (talk) 15:15, 30 July 2009 (UTC)

bash scripting - testing if wget succeeds
I am writing a bash script that uses wget to download some files. I'd like my script to behave differently if wget fails (specifically, if it gets a 404). Other than testing if the file exists after wget terminates, is there a way to do this? I tried the wget manpage but didn't see anything. Also: is there a good scripting tutorial out there? 87.194.213.98 (talk) 13:51, 1 August 2009 (UTC)


 * Try:
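(The snippet itself did not survive archiving; a minimal sketch consistent with the explanation that follows. The URL is illustrative.)

```shell
wget -q "http://example.com/some/file"   # illustrative URL
if [ $? -ne 0 ]; then
    echo "download failed (404 or other error)"
else
    echo "download ok"
fi
```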


 * This works because wget sets a 0 result code (that's $?) if it "succeeds", and 1 if it "fails". For the couple of sites I tested, a 404 does count as failure, but that may not be the case for every one. wget doesn't return more detailed information - this post explains why. -- Finlay McWalter • Talk 14:15, 1 August 2009 (UTC)
 * That's brilliant, thanks very much. 87.194.213.98 (talk) 14:40, 1 August 2009 (UTC)

SQL Select A, Count(*) as 'B' FROM T GROUP BY A
Hi All,

Is there an easy way to do this using sql: generate a table with two columns, one a field from the database and the other the number of times it occurs? Something kinda like: SELECT COUNT(CustID) AS totalOrdersFromCust FROM Orders_table WHERE CustID='id-here' except it will go and loop through the database so I end up with the number of times each customer has placed an order?

ex:
custID   totalOrders
1        24
2        16
etc.

I know there's gotta be a better way the me having to write a loop with an external app.

Thanks in advance! PrinzPH (talk) 02:16, 29 July 2009 (UTC)


 * select CustID, count(*) as 'NumberOfOrders' from Orders_table group by CustID --Nricardo (talk) 02:50, 29 July 2009 (UTC)
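To see the query work without a real database, here is a throwaway demonstration using SQLite from Python. The rows are made up; the table layout follows the question, and ORDER BY is added only to make the output deterministic.

```python
# Nricardo's GROUP BY query, demonstrated on made-up data with SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Orders_table (OrderID INTEGER, CustID INTEGER)")
con.executemany("INSERT INTO Orders_table VALUES (?, ?)",
                [(1, 1), (2, 1), (3, 2), (4, 1), (5, 2), (6, 3)])

rows = con.execute(
    "SELECT CustID, COUNT(*) AS NumberOfOrders "
    "FROM Orders_table GROUP BY CustID ORDER BY CustID"
).fetchall()
print(rows)  # [(1, 3), (2, 2), (3, 1)] -- one row per customer, with order count
```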

Thanks Nricardo, Works! PrinzPH (talk) 22:31, 29 July 2009 (UTC)

Automatically opening programs slowing down reboot
When I reboot, certain programs open automatically and I have to wait for all of them to finish loading before I can do anything. Some of these I guess I need, such as the horrible all-around speed bump that is Norton, but others I don't see any need for, such as Quicktime, Windows Live Messenger, Yahoo Messenger, Ares and so on. If I want to access these I can open them as needed. So my question is, is there a way to set these programs to not automatically load? --Fuhghettaboutit (talk) 13:30, 13 August 2009 (UTC)


 * Many can either be individually configured (in their own config screens) not to start up automatically, or their shortcuts can be removed from the "Startup" menu. But some things don't play nice - for them, Microsoft's AutoRuns program is the big stick. -- Finlay McWalter • Talk 13:36, 13 August 2009 (UTC)


 * I recommend a program called Startup Control Panel, which I've always found a reliable method of knocking startup crud on the head. In many years using it, I've not spotted any problems with it. (That said, I was not aware of Autoruns for Windows v9.53, so what do I know - it sounds good too.) --Tagishsimon (talk) 13:37, 13 August 2009 (UTC)


 * Running msconfig usually works for me, but that can seriously screw up your computer, so the others would probably work better. Its only advantage is that you don't need to install it. To use it, just open up the run window, and type msconfig. Thanks,  gENIUS  101  14:52, 13 August 2009 (UTC)

Thank you all. I will attempt some mix of these when I get home later today (and ultimately report back here, probably long after you guys have moved on). Cheers!--Fuhghettaboutit (talk) 17:14, 13 August 2009 (UTC)

all website links
I need a way to scan an entire website and list the URL of every HTML page, image, and file in a plain text file —Preceding unsigned comment added by 82.43.89.136 (talk) 08:13, 24 August 2009 (UTC) Alex (talk) 11:54, 24 August 2009 (UTC)
 * Sorry but the only way to do this is to manually visit and get the url for every page
 * bullshit. sorry but web crawlers do exactly what I'm asking, they just download files instead of generate a list of links. I'm looking for something similar to a web crawler, that will scan the site and list every link, file and image. —Preceding unsigned comment added by 82.43.89.136 (talk) 12:52, 24 August 2009 (UTC)
 * I imagine greasemonkey would allow you to do this, though you'd have to write a script to make it so. --Tagishsimon (talk) 12:55, 24 August 2009 (UTC)
 * I have a greasemonkey script that can extract links from a single page, but that's not what I need. I need a program to scan an entire website, possibly hundreds of pages, and list every .html .jpg .exe etc etc link it finds. —Preceding unsigned comment added by 82.43.89.136 (talk) 13:08, 24 August 2009 (UTC)
 * Use wget to get all .htm or .html files (with its "recursive" parameter), then process the files with a Perl script using regular expressions to get the names on all the links? (I don't know the details of this, but it should be possible to learn with some effort) Jørgen (talk) 14:06, 24 August 2009 (UTC)
 * :( I was hoping for an easy way. I found this program called URL Extractor which does exactly what I want but it's limited on a free trial. I've searched for free open source alternatives but can't find anything. Ah well, thanks for trying :) —Preceding unsigned comment added by 82.43.89.136 (talk) 14:30, 24 August 2009 (UTC)
 * Try Xenu - it has some save/report options that probably do what you want. Unilynx (talk) 21:26, 24 August 2009 (UTC)
 * holy shit that's perfect, THANK YOU!
 * With wget, you can just run wget -m --delete-after -nv http://yoursite.com and it'll do pretty much exactly what you want, although it'll download every file on the website, so it can take a lot of bandwidth/time. If you use this on a site that you don't own, it would be considerate to use a wait interval, like -w 10. This will take a lot more time, but cause less load on the server. Also, keep in mind that using recursion won't show you files which aren't connected to your starting point through some path of links. But other than that, it's an easy solution. Indeterminate (talk) 22:40, 24 August 2009 (UTC)
 * You can tweak that command in various ways to make it faster and friendlier, like excluding everything but HTML files and so on. There are many reasons this won't get all links though, as web sites are very dynamic these days.  --Sean 23:58, 24 August 2009 (UTC)
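Jørgen's wget-then-extract idea can also be done without Perl: after mirroring the site (e.g. with wget -m), run each saved HTML file through Python's html.parser and collect every href/src. A sketch (the HTML here is a made-up stand-in for a mirrored page):

```python
# List every link, image and file URL found in an HTML document.
from html.parser import HTMLParser

class LinkLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.urls = []
    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            # href covers <a>/<link>; src covers <img>/<script> etc.
            if name in ("href", "src") and value:
                self.urls.append(value)

sample = '''<html><body>
<a href="page2.html">next</a>
<img src="logo.jpg">
<a href="files/setup.exe">download</a>
</body></html>'''

parser = LinkLister()
parser.feed(sample)
print("\n".join(parser.urls))
```

Feeding it the contents of each mirrored .html file in turn (and de-duplicating the result) gives the plain-text list the question asks for.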

Download an entire YouTube channel
I want to download all of somebody's videos. There are 435. What is the most efficient way to do this? Mac Davis (talk) 01:35, 22 August 2009 (UTC)
 * I'm afraid you'll have to download each one seperately using a youtube video downloader. Do you use firefox? Warrior  4321  04:18, 22 August 2009 (UTC)
 * I know enough ways to download an individual video. I was hoping there was a batch tool of some sort. Mac Davis (talk) 05:39, 22 August 2009 (UTC)
 * Here is a command line youtube video downloader. Here is a youtube channel's URL:

http://www.youtube.com/profile?user=BarackObamadotcom&view=videos&start=0
 * Here is what each video link looks like in that channel page:

href="/watch?v=R3XW5XQLnk8&feature=channel_page"
 * so to get the whole channel, just do something like:
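(Sean's command did not survive archiving. A reconstruction of the approach he describes, with get-video as a placeholder for the command-line downloader linked above:)

```shell
# Fetch the channel page, pull out the unique video IDs, and hand each
# watch URL to a downloader. "get-video" is a placeholder command.
curl -s 'http://www.youtube.com/profile?user=BarackObamadotcom&view=videos&start=0' |
  grep -o 'watch?v=[A-Za-z0-9_-]\{11\}' | sort -u |
  while read -r v; do
    get-video "http://www.youtube.com/$v"
  done
```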


 * Untested! --Sean 12:49, 22 August 2009 (UTC)

C/C++: optimal use of fread and fwrite
I'm writing a program that splits huge files into suitable chunks, and a corresponding program that reassembles these. Doubtlessly, others have done this before, but there are a couple of tweaks that I want to include, and I think I'll spend less time writing my own than finding a program that does exactly what I would like it to do.

I first tried the naive approach, fputc and fgetc, thinking the compiler would take care of the buffering. The files are pretty large (40Gb and upwards). I was absolutely amazed at how long this took - ten to twenty times as long as a simple file copy to the same USB device. The program was compiled in release mode with Microsoft Visual C++ 6.0.

So obviously, fread and fwrite are the way to go. I want the program to be as fast as absolutely possible, and did some experimentation with fread and fwrite, with encouraging results.

fread and fwrite have the syntax:

size_t fread( void *buffer, size_t size, size_t count, FILE *stream );
size_t fwrite( const void *buffer, size_t size, size_t count, FILE *stream );

I will allocate the buffer using an std::vector.

I am writing here to ask for advice on what values to select for size and count in each read and write operation, in order to achieve optimal results.

Thanks, --NorwegianBluetalk 19:44, 19 August 2009 (UTC)
 * Does it matter whether I ask for one block of 1048576 bytes, 1048576 blocks of one byte or 1024 blocks of 1024 bytes, and if so, which choice is preferable?
 * I assume that it's a good idea to avoid having the data being read swapped out to the hard disk before it is written to the device, so I suppose count * size should be smaller than the amount of available RAM. Correct?
 * Is there a simple way (with the MSC++ 6.0 compiler) to determine the amount of free RAM?
 * What would be sensible choices for count and size on a Windows XP PC with 512Mb RAM, 1Gb RAM and 2Gb RAM (assuming no other applications are running at the same time)?


 * You might consider reading mmap. Our article covers mmap and its Windows alternative, MapViewOfFile.  These methods will allow the compiler (rather the operating system's runtime library) to do the buffering for you.  When you use low-level standard IO calls like fputc, you are specifically requesting unbuffered reads and writes - so the compiler should not optimize those with a buffering scheme.  In Java, the New IO (java.nio.*) package, (documentation) and its Channels methodology, allow you to do the same - with the VM overseeing the demand paging and swapping of buffers into and out of memory.  I have yet to find a more efficient method for reading gigabyte- and terabyte- size files than Java NIO.  (I attribute this to intelligent pre-buffering based on the VM's reasonably accurate assessment of when and where you will leap to in the file next).  Nimur (talk) 20:17, 19 August 2009 (UTC)


 * [f]getc and [f]putc are buffered. I assume the speed problem comes from the constant checks to see whether the buffer is full/empty, combined with the inherent inefficiency of a byte-by-byte memcpy. -- BenRG (talk) 20:58, 19 August 2009 (UTC)
 * My mistake - I was confusing the memory copy, which is single-byte-at-a-time. It seems BenRG is correct.  Nimur (talk) 21:07, 19 August 2009 (UTC)


 * Hm - I can't find a good standard library call to determine the amount of available physical memory in C (and I don't even remember ever learning it!) In Java, you can call Runtime.getRuntime().freeMemory() - with the caveats that (a) this is an estimate, and (b) this is only the maximum memory allocatable to the JVM (not total free system memory).  In C, the convention I have always used is to malloc and check for NULL; if failed, wait-and-retry or exit.  I don't know if it's good design methodology to try to allocate exactly as much memory as is reported available - so be sure to use some "margin of error".  Nimur (talk) 20:30, 19 August 2009 (UTC)


 * In C#, or C++ .NET, you can use a PerformanceCounter - MSDN documentation and example in C#. Nimur (talk) 20:39, 19 August 2009 (UTC)
 * Ah - here's what you want - Win32 API GlobalMemoryStatusEx. This is the most portable version (for Windows computers) and works at the lowest level of abstraction.  Nimur (talk) 20:44, 19 August 2009 (UTC)
 * It seems like the standard way to get the current memory status on linux is to check the values in /proc/meminfo (which can be accessed like a file, although it is not a regular file). I'd be curious if some more expert linux systems guys have better insight - surely there's a system call?  Nimur (talk) 20:55, 19 August 2009 (UTC)


 * You shouldn't be allocating huge buffers. The buffer just needs to be large enough that the system call overhead (some constant * file size / buffer size) is negligible. 64K should be more than large enough for that. It doesn't matter how you split it between count and size—the C library just multiplies them together. What's really important when copying between different devices is that the reading and writing happen in parallel. If they alternate it will cut your speed in half. On the read side, you want the OS to do readahead; on the write side, you want write-back caching, not write-through. There's probably nothing you can do about the write-side caching. Windows uses write-through caching on USB devices by default, because people have a tendency to yank them out as soon as Explorer says their files are copied. I seem to recall that reading large chunks can cause NT to disable readahead, so this is another reason to use small chunks (but maybe I'm thinking of Win9x). Annoyingly, there is a reason you might want to write large chunks: it will decrease fragmentation because NT will search for a large contiguous region of free space when you use large writes. You can avoid this problem by preallocating the file, but you only want to do that on NTFS: on FAT I think it will cause the whole file to be written twice (the first time with zeroes). I would stick to small chunks.


 * If you want to get fancy, the fastest way to do disk I/O in NT is overlapped I/O. The idea is that instead of the read/write function returning when it's done, it returns immediately and then later you get a completion callback. Between your request and the callback the OS owns your buffer and you can't touch it. The advantage is that when you have several requests pending the OS can schedule the I/O better because it knows what's coming; it doesn't have to guess. To use overlapped I/O, allocate two or three buffers (maybe a megabyte each?) and start read requests on them, then go into a wait state (using SleepEx). When you get a read completion callback you trigger the corresponding write; when you get a write completion callback you reassign the buffer to another part of the file and trigger a read. Everything is single-threaded, so you don't have to worry about synchronization. It's actually quite easy and it will perform optimally regardless of FAT/NTFS and caching and so on. The main problem is that it's NT-specific. -- BenRG (talk) 20:58, 19 August 2009 (UTC)
 * Java NIO also implements platform-independent asynchronous IO. Nimur (talk) 21:03, 19 August 2009 (UTC)
 * If I was committed to getting the best performance out of vanilla reads and writes, I would use a binary search to get the biggest chunk of memory malloc would give me, and then do another binary search on the best buffer size (benchmark each). The second number is probably pretty stable on a given platform, so I'd cache the result in a file somewhere.  --Sean 01:47, 20 August 2009 (UTC)
 * Using large blocks and multiple asynchronous I/Os allows the system to do good disc scheduling. This means it may read or write the blocks in order of where they are on the disc rather than logically in the file, this cuts down seek time which is a major component of file copy times. Dmcq (talk) 08:54, 20 August 2009 (UTC)


 * Thanks everyone for lots of good input! I think the OS-dependent simultaneous read-and-write calls will require too much work for this job (I will need to port this to Linux afterwards), and be risky, because removable media are involved. My reason for wanting to select as large a buffer as possible was exactly what Dmcq pointed out, but the buffer still ought to be smaller than the available RAM, no? I didn't understand Sean's suggestion for using malloc to estimate free RAM; I thought malloc used virtual memory, not necessarily physical RAM. --NorwegianBluetalk 12:11, 20 August 2009 (UTC)
 * Oops, you're right of course; too much time in kernel land. :( --Sean 14:56, 20 August 2009 (UTC)


 * Asynchronous I/O is not risky. If the output device is configured for write-through caching then the writes won't complete until they are done. It's no different from an ordinary synchronous write. In fact, when you do a synchronous write on NT it just does an asynchronous write and then waits for it to complete before returning. But stdio will work fine if you want to stick to plain C. You may as well try benchmarking different buffer sizes as Sean suggested, but large buffers (more than a few megabytes) are a bad idea; they won't help performance and probably will hurt it. -- BenRG (talk) 19:04, 20 August 2009 (UTC)
 * Boost has a cross-platform asynchronous I/O library (Boost.Asio). --Sean 14:58, 20 August 2009 (UTC)
 * Thanks again. Copying 40Gb to a 64Gb memory stick using a buffer size of 512Mb took approximately 80 minutes (including calculation of MD5 sums of each chunk). The memory stick was FAT32, and empty, but the file I copied from (a file which holds a truecrypt volume, unmounted of course) was rather badly fragmented.
 * Regarding fragmentation, the defragger that comes with WinXP was unable to do anything about it. I defragmented the disk before allocating the 40Gb file, but the program was happy as long as the individual files were contiguous. It's not like in the olden days, when the Norton utilities defragmentation tool maximized contiguous free space. When I tried to defragment the disk after allocating the volume, it just gave up, even if 25% of the disk was still unused. I think I'll move the 40Gb file to external media, fill the disk up with moderately large dummy files (4-8Gb?), and try to defrag again.
 * Anyway, I strongly suspect that the limiting factor is the write speed of the memory stick. According to this review, writing 1.8 Gb took 8.45 min, which should correspond to 40Gb taking 188 min, so I beat the test in the review by a factor of 2.3. Therefore, there's probably little to be gained in attempting further improvements. I liked the boost thing, though, I think I'll have a look into it just for the fun of it. --NorwegianBluetalk 00:51, 21 August 2009 (UTC)
 * Why are you using a 512MB buffer? I told you in both my responses that large buffers would hurt performance, and you ignored me. A buffer that large will disable all of the OS's caching mechanisms, leaving one device or the other idle virtually all of the time. If your faster device is k times faster than your slower device then performance with a huge buffer will be about k/(k+1) of optimum. For equal read and write speeds that's a 50% reduction. The Fudzilla review is obviously wrong about the speed as your result demonstrates. Searching the web I find quoted figures ranging from 8 to 17 MB/sec. You're getting 8.5 MB/sec.


 * There's no reason to use XP's bundled defragmenter. There are free alternatives that are better, like JkDefrag. Use one of them instead of trying to coerce XP's defragmenter into doing what you want. -- BenRG (talk) 10:30, 21 August 2009 (UTC)


 * Boost makes heavy use of generic programming, so gird your loins for the compilation error messages, which are sadly not going away anytime soon. --Sean 12:11, 21 August 2009 (UTC)


 * @BenRG: I didn't ignore you. My first test was with a 512MB buffer. You and Dmcq gave conflicting advice, and Dmcq's advice was closer to my prejudices than yours was, so I tried that first. An increase from 8.5 to 17 MB/s would be most welcome. I am planning to test the performance with both a smaller and a larger buffer (it turned out that the source PC had a lot more RAM than I thought). I'll be back with more results! And thanks a lot for making me aware of JkDefrag!
 * @Sean: Heh,heh, I know. I've used some of the boost libraries in previous projects. Painless on linux, except for the error messages. A bit more problematic on Windows, as I stubbornly insist on using an ancient compiler. --NorwegianBluetalk 13:45, 21 August 2009 (UTC)
 * Here's why you should be differently prejudiced. When you call fwrite it has to copy all of your data somewhere else before returning since you might modify the buffer as soon as it returns. It either has to physically store it on the device or copy it to an OS-owned buffer. But no OS is going to allocate 512MB of kernel memory to store your data. At most it will allocate a few megabytes, so most of your data will have to be written during the fwrite call. When calling fread you have the same problem in reverse. It has to fill the buffer before returning, either from cache or from the device. You only read the data once, so the only way anything will be in cache is from speculative readahead. But no OS is going to read ahead 512MB. At most it'll read ahead maybe 256K, so almost all of the physical reading will have to happen during the fread call.

Since there will never be an fwrite and an fread in progress at the same time in a single-threaded application, one or the other device is sitting idle for the vast majority of the overall runtime. On the other hand, if you use a 64K buffer then writes will copy the data to kernel memory and return immediately and reads will copy the data from the readahead cache and return immediately. The actual reading and writing will happen in the background, simultaneously on both devices.

There are three reasons you might want to write larger chunks: to reduce seek times (not an issue when copying between devices), to reduce fragmentation (not an issue on flash drives) and to avoid redundant filesystem metadata updates (possibly an issue on flash drives). If you use overlapped I/O you're no longer relying on kernel buffering, so you can use larger buffers (where larger means, I dunno, 16MB) and get the best of both worlds. 512MB is insane. There's no need to test it because there's no situation in heaven or earth where it would be a sensible choice. Try sizes between 64K and 1MB and go with whatever's fastest.
-- BenRG (talk) 14:57, 21 August 2009 (UTC)
 * Thanks a million for spelling it out in crystal clear detail. I understand, and am convinced. I'll modify my program, and do the tweaking in the 16-256kb range instead of the 128Mb-2048Mb range. I'll do some benchmarking. I hope to get the time over the weekend, and hope to be able to be back with the results before this thread is archived. --NorwegianBluetalk 16:55, 21 August 2009 (UTC)


 * I realise you have your program working now, but the standard utility for cutting a large file up into convenient-sized chunks is split (Unix). There will surely be versions available for Windows.- gadfium 23:07, 21 August 2009 (UTC)
 * Thanks. As I stated at the beginning of the thread, the reason I do this at all is that there are a couple of tweaks I would like to include (blockwise md5 sums being the most important). Moreover, I don't want to type a bunch of parameters, just

mysplit SOURCEFILE DESTFILE_NO_EXTENSION and myjoin SOURCEFILE_NO_EXTENSION DESTFILE
 * I know, of course, that avoiding command line parameters can be solved by writing a script/bat-file. I tried the md5 checker I had available on the original 40Gb file. It was insanely slow (I don't know exactly how slow, as I didn't have the patience to wait for it to finish, but we're talking MANY hours). I'm on a different computer now, so I can't check exactly which md5 checker it was. The md5 checker will probably behave better on the smaller chunks than on the 40Gb file. I have cygwin (which includes split) installed on my home computer, but not on the source computer for this project. I'll include split when I do the benchmarking. --NorwegianBluetalk 10:27, 22 August 2009 (UTC)


 * You can set your I/O to work unbuffered and asynchronous and then you can use multiple large buffers. This is almost equivalent to doing memory map operations. Dmcq (talk) 11:26, 22 August 2009 (UTC)

Benchmarking
I checked out the source code for Split (Unix) here. It uses a buffer size of MAXBSIZE for each write. Googling has given values for MAXBSIZE of 32k, 56k and 64k, in good agreement with BenRG's recommendations. The most time-critical part of this project is the transfer of data from the harddisk to the USB stick. Since the write speed of the USB stick is vastly slower than the read speed of the harddisk, I reckoned it would make sense to determine the ideal block size for achieving the fastest possible write speed to the USB stick, so I wrote a program for that purpose. I did this benchmarking on a rather old (2004-2005) AMD64 XP PC with 1GB of RAM. I killed every killable process in the task manager, stopped AVG, but kept the Visual C++ IDE alive while the program was running in a DOS box. The program wrote a 3GB file from a RAM buffer to the USB stick. In case anyone is interested, here is the program:
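A minimal, portable sketch of such a write-speed benchmark (not the original program; the path and the buffer/total sizes are placeholders the caller chooses, and the original was Windows-specific):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Write `total` bytes to `path` from a single RAM buffer of `bufsize`
   bytes, and return the measured write speed in MB/sec (or -1.0 on
   failure).  The fclose is timed too, so data still sitting in the
   stdio buffer is flushed before the clock stops. */
double bench_write(const char *path, size_t bufsize, size_t total)
{
    char *buf = malloc(bufsize);
    FILE *out = fopen(path, "wb");
    if (!buf || !out) {
        free(buf);
        if (out) fclose(out);
        return -1.0;
    }
    for (size_t i = 0; i < bufsize; i++)
        buf[i] = (char)i;                      /* arbitrary fill pattern */

    struct timespec t0, t1;
    timespec_get(&t0, TIME_UTC);
    for (size_t written = 0; written < total; written += bufsize)
        fwrite(buf, 1, bufsize, out);
    fclose(out);
    timespec_get(&t1, TIME_UTC);

    double secs = (double)(t1.tv_sec - t0.tv_sec)
                + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
    if (secs <= 0.0)
        secs = 1e-9;                           /* tiny runs round to zero */
    free(buf);
    return ((double)total / (1024.0 * 1024.0)) / secs;
}
```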

I ran each combination of buffer size and number of iterations twice. The results were reproducible: with the same parameters, the results were virtually identical. Here are the results:

64 kB buffer:   6.0 MB/sec
256 kB buffer:  8.2 MB/sec
1 MB buffer:   10.1 MB/sec
4 MB buffer:   10.5 MB/sec
16 MB buffer:  11.0 MB/sec
64 MB buffer:  10.8 MB/sec
128 MB buffer: 11.0 MB/sec
256 MB buffer: 10.8 MB/sec
512 MB buffer: 10.8 MB/sec
1 GB buffer:    7.6 MB/sec

It seems as if the write speed, on this particular computer, levels out at about 16 MB, but the difference between 1 MB and 16 MB is tiny, and it may well be that the disadvantages BenRG has pointed out will make a choice of 1 MB more sensible. The fact that performance doesn't drop when buffer sizes reach "insanely" high levels indicates to me that the OS may allocate more physical RAM than BenRG expected. I promised to compare performance to Split (Unix), but will not have time to do so before this thread is archived (benchmarking takes a lot of time). If it turns out that Split outperforms my tailor-made program, however, I will humbly be back with a post linking to this one, entitled "Reinventing the wheel", with the results. --NorwegianBluetalk 20:55, 23 August 2009 (UTC)


 * I've been checking out some internet forums such as this one. As our article also states, writing data to a flash drive implies that a comparatively large block of the drive has to be erased first (for a large drive, typically 2 MB, as indicated in the external link). Hence, I conclude that 2 MB is probably the ideal buffer size for the application I'm working on. Note also that performance starts to drop only at a buffer size of 1 GB. To me, this indicates that no swapping to disk is going on when 512 MB is malloc'ed (after all, Win XP can run acceptably on the remaining 512 MB).


 * I also tested the benchmarking on a much faster computer with 4 GB RAM today. It performed slightly worse than the figures I have quoted above (but I was unable to disable its anti-virus software). --NorwegianBluetalk 18:40, 24 August 2009 (UTC)

C/C++: optimal use of fread and fwrite
Thanks a lot for your response to my question. Just to be absolutely sure that I understand what you mean, and don't complicate my code unnecessarily: when you say use a small (64k) buffer, you mean: and not I suppose the latter would be equivalent to using a large buffer, right? If I've understood you correctly, your point is to keep the hardware of both drives occupied at the same time, and that using a large buffer would allow the output device to enter an idle state while the input device was filling the buffer, and vice versa. Grateful for confirmation. --NorwegianBluetalk 15:27, 21 August 2009 (UTC)

Followup
Thanks a lot for taking the time to give me advice, and to carefully explain the rationale for using a small buffer size. I was totally convinced, and therefore greatly surprised at the results. I have posted the results of the benchmarking, and updated them today by also testing a 1GB buffer, and only then did performance drop. As I've written in the thread, I think there are two reasons for the discrepancy between expectations and observations. One is the special properties of flash memory. Using a small buffer implies that the same block has to be rewritten many times. The other is that I think you have underestimated the amount of memory that the OS may allocate for a process. To me, the data suggest that no swapping to disk occurs when a 512MB buffer is used on a PC with 1GB of RAM, given that no other memory-hungry application is running. After all, many people run XP with 4GB and no swapfile. Your advice was very helpful, because it forced me to do the benchmarking that I should have done before asking, and to really think things through. It is much appreciated. --NorwegianBluetalk 19:13, 24 August 2009 (UTC)

Unblocking a downloaded file in Windows Vista
I have downloaded an executable file from the Internet, and I am positive that it is not malicious. However, Windows Vista has blocked it, so I have to confirm the execution of the program every time I try to run it. This is very annoying. And worse yet: in the file properties dialog box, there is an "Unblock" button, but it does not work! (Apparently this is a bug in Windows - even if Microsoft for some reason really does not want me to be able to unblock the application, the button should not be enabled if it has no effect. I have SP2.) I have tried to run "explorer.exe" as administrator, and opened the file properties dialog from there, but that did not work either. How do I unblock the file? --Andreas Rejbrand (talk) 18:42, 24 August 2009 (UTC)


 * Apparently the zone information about where the executable came from is kept in an alternate data stream associated with the file. One way to supposedly clear up the problem is to download this command-line program. If you put streams.exe in the same folder as your executable, then open a command prompt, cd to that folder and type "streams -d (yourexecutablefilename).exe", it'll delete the associated data streams. Hopefully that should clear it up. If you want to disable the feature altogether, try option 3 on this page to disable it through local policy. Indeterminate (talk) 22:13, 24 August 2009 (UTC)


 * He can also do it the easy way: right-click, go to Properties, and there should be an option to unblock it. Rgoodermote 06:54, 25 August 2009 (UTC)


 * If you had read my original post three paragraphs above, you would have noticed that this was the first thing I tried, but it didn't work! :) --Andreas Rejbrand (talk) 11:52, 25 August 2009 (UTC)


 * Thank you very much, Indeterminate! It really worked! --Andreas Rejbrand (talk) 11:56, 25 August 2009 (UTC)

Linux cp of home directories

 * 1) What are the correct options to give to cp to recursively copy a directory, maintaining file ownerships+groups, file permissions, including .hidden files, and let softlinks remain softlinks? I'm copying the /home subtree to a different disk. I have read the manpage, but the number of options is quite large, and the explanations terse. In case there are distro-variations, I'm using Ubuntu and Debian, and the file systems are ext3.
 * 2) Is there any reason not to replace the /home subtree with a softlink to a subdirectory on a different disk?
 * Thanks, --NorwegianBluetalk 13:04, 5 September 2009 (UTC)
 * "cp -a /home /new/home" as root should do it. As for the second question, theoretically some program might get confused by /home being a softlink, but that would certainly be a bug.  84.239.160.214 (talk) 13:26, 5 September 2009 (UTC)
 * Thanks! --NorwegianBluetalk 13:52, 5 September 2009 (UTC)


 * I've always used 'tar' for this:

cd fromdir ; tar cf - . | ( cd todir ; tar xf - )


 * It preserves links, sym-links, ownerships and privileges, etc. - so long as the person doing the copying has the relevant permissions. There are a bunch more command line arguments to fine-tune what gets duplicated and what doesn't. SteveBaker (talk) 16:05, 7 September 2009 (UTC)
 * Thanks. Presumably the -a option achieves the same thing (the 'a' is short for archive), but as you say there are a bunch of command line arguments. The directories in question are small enough for me to use both approaches and compare. --NorwegianBluetalk 19:06, 8 September 2009 (UTC)

What is the current gold-standard text for Windows C programming?
In answering the question just above, I was going to recommend Charles Petzold's classic Programming Windows, but I see that it's still on its 11-year-old fifth edition, and is two or three major Windows revisions out of date. wikibooks:Windows Programming doesn't cite anything that much newer. Is there a well-regarded book that has taken the space occupied by Petzold, or is the space left to those horrid books that charge you £40 for a printed-out copy of the library docs? -- Finlay McWalter • Talk 18:28, 1 September 2009 (UTC)
 * I would suggest going straight to the source: Technical Resources for .NET from Microsoft. They also provide a free version of their Visual Studio .NET compiler and integrated development environment, which has a C, C++, and C# compiler (among others); and supports their newest CLR/.NET framework (and, if you prefer to use outdated technology, also supports raw Win32 api and MFC code, too).  The Tech Resources on their website include tutorials, sample-code, and API documentation.  This tool has a huge amount of in-program documentation as well.  .NET Framework Conceptual Overview may be the best first-place-to-start, as it lays out the newest suite of technologies and how they interplay on a modern Windows operating system.  (Oh the days when you simply "compiled" C code!  Not anymore!  Even a "compiled language" is often really an interpreted, virtualized, managed application technology suite on pretty much all of the major distributions of all the major operating systems!)  These tutorials might bring the "newbies" up to speed and the "old-fogeys" might want to brush up on which of their assumptions have become invalidated by modern software engineering architectures.  Nimur (talk) 18:34, 1 September 2009 (UTC)
 * If modern technology is so great, why are you recommending object-oriented imperative programming languages? C and C++ are both a lot older than Win32, and C#, while it's a newer design in the narrowest sense, doesn't have any ideas in it that are less than 40 years old. At least recommend F#—though I suppose its history is just as long. And were you planning to display any windows, or create any files, or open any TCP sockets to send any SQL queries to any relational database servers in your modern .NET application? Anyway, I'm pretty sure Finlay McWalter meant Win32 specifically. To which the answer is... I don't know. I would probably go with Petzold, the online MSDN documentation, and whatever other online resources you can find with a web search. -- BenRG (talk) 00:52, 2 September 2009 (UTC)
 * I'm not endorsing anything in particular, and I wasn't intending any descriptions in my earlier post in a pejorative way. I'm just proposing that modern C++ is dramatically technologically different from C++ in 1990; the best resource(s) for learning it are directly reading the documentation provided by the compiler and operating system vendors.  The OP wanted information about "Windows programming", and the Windows API is presently defined in terms of the Common Language Infrastructure, which will probably persist beyond any specific compiler technology or GUI API.  I expect that the win32 API will also soon be deprecated; we're now more than 15 years past "Windows NT" (which was intended to be the Win32 replacement).  Continuing to program in that paradigm is a surefire way to keep your code non-portable and rapidly-deprecated.  The conceptual ideas for modern architectures do have histories dating back as far as you want to chase them; but it's only in this decade that dynamic thread scheduling, multi-core systems, kernel virtualization, file-system-in-user-space, driver management frameworks, and JIT runtime environments have become available on mainstream consumer devices.  Every single one of these technologies, which are now a crucial part of the Windows platform (and Linux and Mac OSX and others) have historical precedents dating back at least as far as the System 360; but now that they operate on every computer, the best way to learn to program these architectures is to use one of the more modern incarnations of the languages you mentioned, like CLR, .NET, or Java.  All of these technologies are well-documented online via their official distributors.  It has been my experience that no textbook has been able to keep up, editorially, with the online documentation.  Nimur (talk) 03:27, 2 September 2009 (UTC)

GUI Design
How can I create a Graphical User Interface in C? One way I know is by using the graphics.h header present in Turbo C/C++ (old version), which is not available in newer C compilers... please suggest some way to do this. —Preceding unsigned comment added by Piyushbehera25 (talk • contribs) 17:52, 1 September 2009 (UTC)


 * The Windows API itself (the underlying C API) has extensive calls for GUI programming (you get them, at first instance, from windows.h). A lot of people use Microsoft Foundation Class Library, the main C++ wrapper for those calls.  These both give you access to Windows' own GUI system, which means you get programs that compile and run only on Windows. Alternatively, you could choose to use a cross platform toolkit (e.g. GTK+, Qt, wxWidgets, or Tk) - these will allow your code to work on a variety of platforms beyond windows, but you lose some of the nitty-gritty access to lower-level windows features. -- Finlay McWalter • Talk 18:19, 1 September 2009 (UTC)


 * Glade + gtk is a relatively easy and portable way. --91.145.73.98 (talk) 19:05, 1 September 2009 (UTC)


 * Thanks... can you also suggest some books or online resources which provide good coverage of the above-stated APIs?


 * gtk tutorial. If you decide to go with gtk, you likely want to install devhelp with gtk documentation. --91.145.73.98 (talk) 19:17, 1 September 2009 (UTC)

Granular synthesis (music) was: Is there any program that randomizes a song??
Is there any program that randomizes a song?? An example: the program gets the sound, splits it into many parts of 0.001 second each, and then puts those parts together in a random order. —Preceding unsigned comment added by 189.0.212.236 (talk) 01:31, 1 October 2009 (UTC)
 * Such a program would be trivial to write (e.g. about five or ten lines of code) in GNU Octave. (Have a look at audio processing tools).  Because it's not a very commonly-needed utility, I doubt it has been written and widely distributed.  Nimur (talk) 05:45, 1 October 2009 (UTC)
 * If you split a song into 0.001 second chunks, and then join them in a random order, I'm certain that the result would bear no resemblance whatsoever to the original. I'm also pretty sure that what you would hear is a 1000 Hz tone with some background noise. If you want a result that would be even remotely pleasing to the ear, your chunks would have to be a lot longer, and your program would have to choose the splice points carefully. At the very least, the signal at the end of the first chunk needs to have about the same amplitude as the signal at the beginning of the chunk you are joining it to. Otherwise there will be a very audible "pop". (And it is the playing of 1000 pops per second that would create the 1000 Hz tone that I referred to.) Writing such a program is not trivial. --NorwegianBluetalk 10:01, 1 October 2009 (UTC)
 * It would be an example of Granular synthesis applied to a soundfile. With the grain size around a millisecond you would no longer be dealing with the original song so much as synthesizing wholly new sounds out of semi-random waveform bits and pieces. There are lots of granular synthesis tools, as listed on the page about it. Also, taking care of the "pops" is integral to granular synthesis. Pfly (talk) 07:37, 2 October 2009 (UTC)

Making a video
I have some sets of 24-bit bitmaps that I wish to concatenate into a video. Some sets are 800x600 pixels, others are 770x700 pixels. The sets range in size from 500 bitmaps to 7500. The bitmaps are labeled "frame####.bmp", with #### being an integer ranging continuously from 0001 to the end of the set of bitmaps. These labels are in the order I wish to concatenate them (i.e., I want frame0001.bmp to come first, then frame0002.bmp, etc.) The output video should retain the 24-bit colorspace and be in the .mov, .qt, .wmv, .mpg, or .mpeg format. Considering that I run Windows XP, are there any freely-available programs I can use to do this? --Lucas Brown 42 19:14, 28 September 2009 (UTC)
 * FFMpeg can do this:

ffmpeg -f image2 -i frame%04d.bmp my-movie.mpg
 * --Sean 19:51, 28 September 2009 (UTC)

Umm... Is it possible to download a binary file? I'm not that adept with this stuff yet. --72.197.202.36 (talk) 20:57, 28 September 2009 (UTC)
 * Googling "ffmpeg binary" suggests this unofficial builds page. « Aaron Rotenberg « Talk « 22:18, 28 September 2009 (UTC)


 * Try here for some unofficial builds (binaries). But yeah, it's a pain you can't just download pre-compiled binaries a little easier! Most of the free-software types have a pretty limited view of who will use their software... even if you do know how to compile things from scratch, it's a terrible pain to get everything working right that way. --Mr.98 (talk) 22:17, 28 September 2009 (UTC)


 * Haha, look at the diffs on our edits. Somehow, neither of us got an edit conflict. « Aaron Rotenberg « Talk « 22:23, 28 September 2009 (UTC)


 * It is not that the authors refuse to make binaries. They just don't have the need to do so.  There are a lot of open source projects that have a lot of Linux users and very few Windows users.  The Linux package managers handle the binaries, not the authors of the project.  Since Windows is not package-based, there is no package manager to make binaries for it.  That is why I've argued for many years that someone needs to make a package system (like APT or YUM) for Windows users.  Then, Windows users can open it, search for ffmpeg, locate the package, and click "install".  When the package is updated, they get a systray notice and easily upgrade the binary. --  k a i n a w ™ 03:23, 29 September 2009 (UTC)


 * You can also do this with the inbuilt Windows Movie Maker, albeit with restrictions about how long each image is displayed. --Phil Holmes (talk) 08:41, 29 September 2009 (UTC)

ref tag in html?
Is there anything in HTML similar to the ref tag in Wikipedia? I know that you can use an anchor to duplicate it, but when you add an anchor link in between two others, you have to change all the ones after it, and if you have, say, 100 references, that's a ton of work. So does anyone know of a way to do something like Wikipedia's ref tag in HTML or something else (cascading style sheets, etc.)? —Preceding unsigned comment added by Renassault (talk • contribs) 04:05, 18 October 2009 (UTC)


 * We have this - Template:Anchor. Nimur (talk) 04:40, 18 October 2009 (UTC)


 * sorry, I don't understand what that says (I think it talks about how to do it on a template). Is it saying you can't do it with ? (also, sorry, I don't know how to sign my name). —Preceding unsigned comment added by Renassault (talk • contribs) 04:53, 18 October 2009 (UTC)

Here you go. I just made one up for you:

Once the page is loaded, the "setRefs" JavaScript function runs. All of the generated links point to a section called 'references'. Feel free to rename either the function or the section name. You could also do it using the document.getElementsByName method, if you wanted to. The body above is just boilerplate text. You add a span tag wherever you want a citation. I don't know how Wikipedia does it, but I like to keep my code as simple as possible.--Drknkn (talk) 06:43, 18 October 2009 (UTC)

DOS printing
I have some DOS applications that insist on printing to a parallel printer port. However, my new printer only has USB. Where can I find a cable that can do this? (There are lots of cables that connect the computer's USB port to a parallel printer port, but I need the reverse.) Alternatively, are there any drivers that can fool the DOS applications into thinking that they are printing to a parallel port when they are actually printing to the computer's USB port? — Preceding unsigned comment added by 117.196.224.106 (talk) 12:19, 31 October 2009

You don't need a cable; DOS does not recognize USB. Assuming you have Windows XP or above, let it do the work:
 * Install the printer under Windows
 * In the printer properties, share the printer with a logical name
 * Capture LPT1 by opening a command prompt and typing:
 * NET USE LPT1: \\ \ /PERSISTENT:YES
—Gadget850 (Ed)  talk 22:18, 2 November 2009 (UTC)

Easiest way to decide if a point is within a polygon?
I am writing a computer program in BASIC (I'm neither a career programmer nor a mathematician). I have polygons represented as lines. Each line consists of two x,y coordinates. What would be the easiest practical method to determine whether a point x,y is within a polygon or not?

I have thought about extending a line vertically or horizontally from the point to see if it cuts two polygon-lines on either side. But this would give the wrong answer if the point was between two polygons rather than in one, or near a concave part of a polygon.

Sorry, referring me to a standard graphics library written in C or whatever would not help as I would not understand it and could not use it. Thanks 78.149.247.13 (talk) 11:09, 17 December 2009 (UTC)


 * The easiest way is to calculate the winding number of the polygon around the point, by calculating and then summing the angles subtended at the point by each edge. This works for a polygon of any complexity. --JohnBlackburne (talk) 11:50, 17 December 2009 (UTC)


 * I get the feeling, from your description of your representation, that you have several polygons, that are only represented as lines with an x and a y coordinate. What makes me suspect this is the statement that extending a vertical or horizontal line would not work if the point was between two polygons rather than in one. The computer will not automatically sort the lines into separate polygons. Therefore, whatever algorithm you chose, you'll need to somehow represent which polygon a given line belongs to, and to test a point against one polygon at a time. --NorwegianBluetalk 14:46, 17 December 2009 (UTC)
 * Correct, although you could have more easily deduced that from seeing that I wrote polygons rather than polygon. 78.147.9.91 (talk) 19:59, 17 December 2009 (UTC)
 * If you have multiple polygons then doing a test multiple times, once for each polygon, will work. You can early out when your condition is satisfied. If you're doing this a lot with the same polygons you might save some time if you consolidate your polygons, merging e.g. overlapping ones. But this will only be beneficial if the results are still polygons, i.e. there are no holes, and at least some overlap, and is probably a lot of extra work. --JohnBlackburne (talk) 20:21, 17 December 2009 (UTC)
 * WP:WHAAOE: point in polygon. My understanding is that the "cast a horizontal (or vertical) ray" approach is the simplest to program.  --Tardis (talk) 18:21, 17 December 2009 (UTC)
 * I didn't know about that page, which presents them both quite nicely. I'd still say the winding number is easier as it's just a simple for loop. It may not be the most efficient, but there are things you can do to deal with that: as well as what the article suggests, you can be as careless as you want working out the angle, i.e. use whatever approximations and fast but inaccurate routines you know; even errors of ± 25% are acceptable. --JohnBlackburne (talk) 18:52, 17 December 2009 (UTC)

Can I check please if I've got it right about calculating the winding number? Do I calculate the angle subtended from the point by each line, then add them all up? Then I imagine I divide that by 360 degrees, which gives me a number. What is the significance of that number? Won't it be a similar number whether the point is inside or outside the polygon? 78.147.9.91 (talk) 20:14, 17 December 2009 (UTC)
 * I've only used the first approach you suggested, which happened to be the first example in the article Tardis linked to (and which works fine, as long as you check for the special cases, such as your line intersecting with a vertex). Nevertheless, whichever approach you use, I think you need to decide how to deal with self-intersecting polygons. Are the holes created in such circumstances, such as the pentagon within the pentagram in the figure to the right, inside or outside the polygon? --NorwegianBluetalk 21:00, 17 December 2009 (UTC)


 * Yes. Sum the angles and it should be either 0°, 360° or -360°. It can be plus or minus as you don't need to do anything to ensure it's a clockwise or anticlockwise winding - it doesn't matter. 0° means the point is outside the polygon, ±360° means it is inside. Your maths only needs to be accurate enough to tell these cases apart. Divide by 360° to get the winding number.
 * As for self intersecting polygons they are a very different problem: the winding number can be anything, e.g. zero or two, even "inside" such a shape, which makes "inside" a much less well defined property.--JohnBlackburne (talk) 21:32, 17 December 2009 (UTC)
 * How can you get a total of 0 degrees when the point is outside the polygon? The angles will all be positive, so it will not add up to zero. What am I missing please? 89.242.147.247 (talk) 00:45, 18 December 2009 (UTC)
 * The angles are directed: you go around the polygon in one direction and count how far counterclockwise (say) the ray from the test point sweeps with each step. If it moves clockwise instead, you subtract.  --Tardis (talk) 05:23, 18 December 2009 (UTC)
 * If you can arrange that the pairs of points for each particular polygon are such that they describe a cycle round the polygon when considered as directed arrows, then the job can be done fairly easily by extending a line like you say. The extra trick is to check, for each line cut, which side of it the line from your point cuts it, i.e. to the left of the arrow or to the right. If the cuts all balance out you're outside any polygon. If there is more of one than the other you're inside a polygon. This is a variant of the cycle counting algorithm. You don't need to note which polygon each line belongs to. Dmcq (talk) 23:51, 17 December 2009 (UTC)


 * I am late on this discussion, but it is important to know if the polygons are convex or concave. Convex polygons are easy to work with.  Concave polygons are difficult. --  k a i n a w ™ 02:51, 18 December 2009 (UTC)
 * Both convex and concave polygons are handled perfectly well by these algorithms. If you know all your polygons are convex things are a little simpler, e.g. a raycast from the point in any direction will intersect the polygon exactly once if the point is inside it. But otherwise the approach is the same.--JohnBlackburne (talk) 09:26, 18 December 2009 (UTC)

The winding number W of the polygon (z0, z1, ..., zn−1, zn=z0) around the point z = x + iy may be computed by a simple formula involving complex numbers:
 * $$W={\frac 1 {2\pi i}}\sum_{k=0}^{n-1}\log \frac{z_{k+1}-z}{z_k-z}$$

The principal branch of the logarithm is chosen. If W = 0 then the point z is outside the polygon (z0, z1, ..., zn−1). Note that W is undefined when z is on the border of the polygon. Bo Jacoby (talk) 04:36, 18 December 2009 (UTC).
 * That looks simpler but is of limited use when programming as you generally can't use complex numbers --JohnBlackburne (talk) 09:26, 18 December 2009 (UTC)


 * If you don't have any self intersections like that picture, or count the centre as being outside, then just counting how many lines you cross to get out is enough: for instance, if you check a line going vertically, see how many intersections you have with y > that of your point. If it is odd, your point is inside. You don't even have to have a particular direction associated with the lines for this. Dmcq (talk) 10:48, 18 December 2009 (UTC)
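A sketch of this crossing count in C, keeping each edge as an independent segment as in the original poster's data layout (the segment representation and function name are assumptions for illustration):

```c
/* Cast a vertical ray upward from (px,py) and count how many of the
   polygon's edges it crosses; an odd count means the point is inside.
   Each edge is an independent segment {x1, y1, x2, y2}, so the
   segments need not be stored in any particular order. */
int crossings_odd(double (*seg)[4], int nseg, double px, double py)
{
    int crossings = 0;
    for (int i = 0; i < nseg; i++) {
        double x1 = seg[i][0], y1 = seg[i][1];
        double x2 = seg[i][2], y2 = seg[i][3];
        if ((x1 > px) == (x2 > px))   /* segment doesn't straddle x = px */
            continue;
        /* height of the segment where it crosses the vertical line x = px */
        double y = y1 + (px - x1) * (y2 - y1) / (x2 - x1);
        if (y > py)                   /* crossing is above the point */
            crossings++;
    }
    return crossings & 1;
}
```

The half-open comparison `(x1 > px) == (x2 > px)` is the usual trick for not double-counting a crossing that lands exactly on a shared vertex.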

John Blackburne: Several programming languages support complex numbers, but BASIC does not. The J (programming language) allows this short implementation of the winding number formula: W=.9&o.@(%&0j2p1)@(+/)@:^.@(% _1&|.). To test if (0,0) is within the square ((1,0),(0,1),(-1,0),(0,-1)), type W 1 0j1 _1 0j_1 and get the result 1, indicating YES. To test if (4,0) is within the square, type W 1 0j1 _1 0j_1 - 4 and get the result 0, indicating NO. This is perhaps the easiest practical method to determine if a point x,y is within a polygon or not. Bo Jacoby (talk) 01:47, 19 December 2009 (UTC).

Thanks. Can the winding number, or any other simple procedure, be used to check that a) a polygon does not overlap itself, or b) that two different polygons do not overlap? Edit: I suppose the answer is that at least two of the lines would cross. 84.13.56.95 (talk) 12:04, 19 December 2009 (UTC)


 * In general no. The winding number is limited to 0 and [±]1 in and around standard polygons, so if you get a winding number of 2 you know it's a self-intersecting polygon. But there are also self-intersecting polygons with winding numbers of only 0 and 1. As for two polygons, the winding number is of little use there too. If you find a vertex of one inside the other then they intersect, but they can intersect without this ever happening. If your polygons are always convex you can look for a separating line, i.e. a line not in either which they lie on either side of. But in general checking edges in one against edges in the other is probably best.--JohnBlackburne (talk) 23:45, 19 December 2009 (UTC)

Thanks, slightly off-topic question - how could two polygons intersect without a vertex (I assume that means corner) of one within the other, assuming they are made of straight lines? 89.242.211.123 (talk) 16:14, 20 December 2009 (UTC)
 * Think of a cross made of two long rectangles on top of each other. Dmcq (talk) 16:45, 20 December 2009 (UTC)

MediaWiki LocalSettings.php -- setting up Guest read-only account.
I'm setting up a wiki which requires being logged in to view any page except the login screen. In addition to the ordinary editor and admin accounts, I would like there to be an account called "Guest" that is allowed to view any page, including its source, but not to edit anything. How do I accomplish the last requirement? Here's what I've got so far:

$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['*']['read'] = false;
$wgGroupPermissions['*']['edit'] = false;
$wgWhitelistRead = array("Special:Userlogin", "-", "MediaWiki:Monobook.css");
 * 1) Prevent new user registration except by sysops.
 * 2) Require login to view anything

Thanks, --NorwegianBluetalk 15:29, 13 March 2010 (UTC)


 * Add this to LocalSettings.php (in addition to what you already have):
 * $wgGroupPermissions['user']['read'] = true;
 * $wgGroupPermissions['user']['edit'] = false;
 * $wgGroupPermissions['editor']['edit'] = true;
 * and then add all accounts you want to be able to edit to the "editor" group (the default WikiSysop account should have the rights to do that). You can create a Guest account and not add it to the editor group to get your desired functionality. You may also want to give some additional users the right to create accounts and make them editors - I think bureaucrats will be able to do that by default. mw:Manual:User rights management gives more information on this topic. --Tango (talk) 05:38, 14 March 2010 (UTC)
 * Thanks a lot! Worked perfectly. What I hadn't understood before reading your reply, was that I could add new groups in LocalSettings.php myself. --NorwegianBluetalk 11:48, 14 March 2010 (UTC)
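For reference, the whole arrangement in LocalSettings.php, combining the original settings with Tango's additions, comes out to something like this (the "editor" group name is whatever you choose):

```php
# Consolidated sketch of the LocalSettings.php settings discussed above
$wgGroupPermissions['*']['createaccount'] = false;  # no self-registration
$wgGroupPermissions['*']['read'] = false;           # anonymous users see nothing
$wgGroupPermissions['*']['edit'] = false;
$wgWhitelistRead = array("Special:Userlogin", "-", "MediaWiki:Monobook.css");
$wgGroupPermissions['user']['read'] = true;   # any logged-in account (incl. Guest) may read
$wgGroupPermissions['user']['edit'] = false;
$wgGroupPermissions['editor']['edit'] = true; # only the custom "editor" group may edit
```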

exif
where can i get a freeware program to manage/edit exif data, etc? —Preceding unsigned comment added by 86.144.124.51 (talk) 22:21, 29 March 2010 (UTC)


 * ImageMagick's identify program will display the EXIF data. To help you with editing, you'll have to tell us which OS you're using, and whether you're willing to download any of the mystery-meat freeware programs Google finds when you search for "exif editor" - personally I'd have great concerns about doing so. -- Finlay McWalter • Talk 22:26, 29 March 2010 (UTC)


 * I haven't used it, and don't vouch for it, but this looks promising. It apparently allows you to edit some, but not all, of the EXIF header info. -- Finlay McWalter • Talk 22:37, 29 March 2010 (UTC)

PHP text box - retrieving entered data
I'm trying to make a very simple php script where you can type something in a text box, press submit and it'll write the text to file. I suck at php and this is the best I could come up with but it doesn't work.

<?
$logfile = 'log.html';
$fp = fopen($logfile, "a");
fwrite($fp, $x);
fwrite($fp, " ");
fclose($fp);
?>

Could anyone give me advice on what I've done wrong? Thank you :) —Preceding unsigned comment added by 82.44.54.207 (talk) 10:29, 2 April 2010 (UTC)


 * The problem is that you don't have the control flow right. Imagine how this page loads up. First it opens the log file and writes to it. Then it gives you a text box for information. It will do it in exactly that order — top to bottom — so it will never get the text box data.
 * Here's a slight rewrite. What I've done here is make the script first check if we have POST data to write, and only write then. If the user has input data, it won't ask you for the text box again.

<?
if ($_POST["x"]) {
    $x = $_POST["x"];
    $logfile = 'log.html';
    $fp = fopen($logfile, "a");
    fwrite($fp, $x);
    fwrite($fp, " ");
    fclose($fp);
    echo "Thanks for the data!";
    exit;
}
?>
 * See if that works better. Note that we are having the same form POST data to itself, and we have just made it so that it can detect if it should be handling data or not. This is a quite common control structure in PHP. If you are having trouble writing to the file, make sure its permissions are set correctly. The other potential issue is whether you moved the POST data into a variable or not (that's my line with the $x=$_POST on it). In much older versions of PHP, POST variables could move directly into the regular variable space automatically (if the form POSTs "x", then $x was set automatically), but that has been disabled by default for a long time because it is a HUGE security problem to let potential users indiscriminately fill variables. So if you are looking at other code as a reference, that might be where it is slightly incompatible. Retrieving the variable from $_POST is easy and a better alternative. --Mr.98 (talk) 12:26, 2 April 2010 (UTC)
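The HTML side of the page was lost when this was archived; the handler above expects a form along these lines, posting back to the same script (the field name must match the "x" used in the PHP):

```html
<form method="post" action="">
  <input type="text" name="x">
  <input type="submit" value="Submit">
</form>
```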
 * Thank you so much! :D

Security of website
Say the permissions of a directory on a website are set to 777, in order to facilitate uploads from a php-based application. There are no direct links to the directory from any pages of the website. If the maintainer of the website gives the directory a long name, generated randomly, is there any way that an intruder can find the directory, except by brute force generation of names? --NorwegianBluetalk 06:44, 30 August 2009 (UTC)
 * http://xkcd.com/538/ F (talk) 12:03, 30 August 2009 (UTC)
 * LOL! Excluding physical access to the server, physical abuse of the maintainer of the website, and cracking of passwords to the server. --NorwegianBluetalk 12:32, 30 August 2009 (UTC)
 * If the attacker is intercepting messages (e.g. packet sniffing), and those HTTP requests are sent in plain text, (which they usually are, unless you are using HTTPS), the attacker will know that directory exists without brute-force attack. However, this assumes the attacker has such a surveillance capability - usually meaning he must be sharing a non-switched network with you.  Nimur (talk) 16:10, 30 August 2009 (UTC)
 * Thanks! --NorwegianBluetalk 17:33, 30 August 2009 (UTC)
 * You can list directory contents if you're allowing ftp, so it doesn't sound particularly hidden to me. If it can only be accessed over HTTP, don't put the directory inside one that lacks an index.html file, as your server may automatically construct a listing of the contents to send back; sticking the directory at the top level is probably best. Dmcq (talk) 17:39, 30 August 2009 (UTC)
 * I can access the server both through ssh and ftp, but password protected, of course. No anonymous ftp. And there is an index.html file in the parent directory. The php-based applications live in other subdirectories. Should that be ok, as long as the passwords are strong (except for the packet sniffing attack that Nimur referred to)? --NorwegianBluetalk 18:04, 30 August 2009 (UTC)

Googlebot indexing
Will a subdirectory tree on a web server, that has permissions 777, and which is used for uploads and as a file repository for a php-based application, located on a site which is itself indexed by googlebot, also be visible to googlebot and similar web crawlers?

The application in question is MediaWiki. Short urls (without the question mark) are not used. There are no ingoing links to the directory, its name appears only in LocalSettings.php. There is no anonymous ftp, and login is required to access the wiki. I'm also assuming that the directory has a long, randomly generated name.

I've asked a question similar to this one before, read Web_Crawler, as well as this xkcd comic, but am still not totally sure about the answer. Thanks, --NorwegianBluetalk 07:43, 21 April 2010 (UTC)
 * Put an XML site map on the site so the crawler will know where to find stuff. That works better than relying on the link graph. 66.127.53.162 (talk) 08:17, 21 April 2010 (UTC)
 * Sorry for not being clear enough, the intention is that the crawler should not be able to index these files. --NorwegianBluetalk 10:46, 21 April 2010 (UTC)
 * Had you considered using a robots.txt exclusion file? --Phil Holmes (talk) 15:08, 21 April 2010 (UTC)
 * Oh. Yes, use either a robots.txt or comparable meta tags (see the article Phil Holmes linked to).  Note that there are evil crawlers that ignore such directives.   66.127.53.162 (talk) 16:31, 21 April 2010 (UTC)
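For reference, a robots.txt that asks well-behaved crawlers to skip a directory is only two lines; the directory name here is illustrative, and remember that the file itself is publicly readable:

```
User-agent: *
Disallow: /uploads/
```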


 * If you want to prevent a web spider from indexing it, the best option is to restrict access to that directory, either by requiring a login or by denying delivery at the server level (for example, deny all except an IP whitelist; or require a login/password; and control this with an .htaccess file). Alternately, if you have absolutely no links to that subdirectory, ("security through obscurity"), the directory's presence should be totally unknown to all spiders.  Note that if you ever accidentally create a link (or if a spider somehow guesses the directory name), the "obscurity" scheme provides no measure to deny access to that robot.  (I know that in MediaWiki, especially if you post files on the wiki, there will be links to that upload directory - but you stated that these wiki pages already have some access control).
 * Ideally, you want identical access control for the .htaccess file and the wiki login - there may be some way to do this using session-sharing and a unified login. Ultimately it boils down to how strongly you seek to protect / deny access.  I strongly recommend that you use an authentication scheme - htaccess files are easy to understand and implement, and are a good way to augment a "security through obscurity" scheme.  If you are particularly worried about protecting your content, consider requiring additional measures, (in order of increasing complexity/security): secure HTTP, requiring a captcha, encrypting the directory contents, or replacing the simplistic "uploads" directory with a full-blown web-application and secure network-storage scheme that enables per-user authentication.  Eventually, your secure system must migrate to a non-public network if you want the maximum level of security.  Nimur (talk) 17:47, 21 April 2010 (UTC)
 * If you're trying to run a private site then you have to use login authentication and trust your users, yes. Keep in mind that if the site has any external links at all, those can disclose the site's existence through referer headers when users click on the links.  If you're just trying to keep a low profile from mass visibility (spelled Google), then robots.txt seems sufficient for most purposes.  66.127.53.162 (talk) 20:22, 21 April 2010 (UTC)

Although "robots.txt" is a perfectly valid answer to my question, it makes sense to consider not-so-friendly bots or other intruders, who might use "robots.txt" for the exact opposite of what it's intended for. This is indeed a case of "security through obscurity". So it seems .htaccess + .htpasswd is the way to go. I wasn't aware it protected all subdirectories (of which there are a lot in a MediaWiki file repository). I tried it, and my testing so far indicates that its presence doesn't confuse the MediaWiki software, files are uploaded and downloaded fine. However if I try to browse the "secret" directory, which used to be accessible to someone who knew its name, I'm now prompted for a username and password. Thanks everyone! --NorwegianBluetalk 16:55, 22 April 2010 (UTC)
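For reference, the .htaccess arrangement settled on here is only a few lines; the paths are illustrative:

```apache
# .htaccess in the directory to be protected (applies to subdirectories too)
AuthType Basic
AuthName "Restricted"
AuthUserFile /home/user/.htpasswd
Require valid-user
# the password file is created once with: htpasswd -c /home/user/.htpasswd username
```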

Replacing .htaccess files with <Directory> sections in apache2.conf
I'm moving a web site from a hosted server to one I run myself. On the hosted server, I used .htaccess files to restrict access to certain directories. After some fiddling I managed to get .htaccess working on the server that I run myself. However, the apache2 docs advise against using .htaccess files if you have complete control over the server, as I have in this case, and instead use <Directory> sections in apache2.conf. "Any configuration that you would consider putting in a .htaccess file, can just as effectively be made in a <Directory> section in your main server configuration file."

I'm unable to figure out how to follow the above advice.

My apache2.conf contains this section,

The directory that I want to protect contains this .htaccess file

The .htpasswd file in /some/directory/ was created using the htpasswd program. According to the docs, there's a performance cost in using .htaccess files, because the apache server needs to check a lot of directories for .htaccess files, hence the recommendation of using <Directory> sections instead. I'm running Apache/2.2.9 (Debian 5.0.4) PHP/5.2.6-1+lenny8 with Suhosin-Patch mod_python/3.3.1 Python/2.5.2 mod_perl/2.0.4 Perl/v5.10.0, in case the answer is version- or OS-dependent.

I'd be grateful if someone could show me how to protect a directory, say /var/www/mydir, such that only user nblue, whose password is 123, can access it, using <Directory> sections instead of .htaccess files. Thank you. --NorwegianBluetalk 12:19, 6 June 2010 (UTC)


 * I'm pretty sure it's actually ridiculously simple: just create another Directory section for /var/www/mydir, and put all the .htaccess stuff in it. You can put it right below the other Directory section, I think. Like so:
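The example itself was lost in archiving; a minimal reconstruction of such a section, using the path and username from the question (the .htpasswd location is an assumption), would be:

```apache
<Directory /var/www/mydir>
    AuthType Basic
    AuthName "Restricted"
    AuthUserFile /some/directory/.htpasswd
    Require user nblue
</Directory>
```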


 * Does that work? Indeterminate (talk) 04:21, 9 June 2010 (UTC)
 * Thanks, Indeterminate. I read your answer from my work PC earlier today, and was about to write that that was the very first thing I tried. But I decided to try it a second time, just to make sure, before responding. And whaddyaknow... it worked. I must have made some silly mistake, and spent too little time pursuing the obvious solution, before moving into more exotic solution-attempts (trying to include the contents of .htpasswd in apache2.conf). Thanks a million, for restoring my faith in docs, the universe and everything! --NorwegianBluetalk 21:21, 9 June 2010 (UTC)

Ubuntu 64-bit - Not recommended for daily desktop usage?
The Ubuntu download page advises against using the 64 bit version for daily desktop usage. My questions are (1) why?, and (2) which version would be the better choice for virtualizing Ubuntu with VirtualBox on a PC running 64 bit Windows 7? --NorwegianBluetalk 07:13, 17 July 2010 (UTC)
 * I'm running Ubuntu 9.04, 64-bit, on a laptop. Mostly works great.  But there's a certain amount of consumer software that hasn't been completely ported.  For example I have to run a beta version of Skype, and it doesn't seem to want to let me select the output sound device.  There's a drop-down box for it, but only one entry in the box.  So I can't use my USB headphones, which is very annoying. --Trovatore (talk) 08:06, 17 July 2010 (UTC)
 * Thanks! --80.32.202.187 (talk) 11:49, 18 July 2010 (UTC) (NorwegianBlue, not logged in)

Q: Best Way of Automating Getting Information from a Website? A: AutoIt
As stated, I'm trying to automate copying text from a website and then write it to a file. The text comes from JavaScript so it seems that I need to access the website through a browser (so I can't just download the source). I would like to be able to execute this process through something like a batch script so that it could be done every hour for example. I'm willing to try to write a program as I have some experience. The best idea I have now is try my hand at making some sort of Firefox add-on. I also have wondered if there is a program or language that will allow me to automate clicking patterns (ie click at coordinates on desktop which will open browser, click at other coordinates, drag, copy...) Any help is appreciated. 206.59.150.250 (talk) 04:35, 11 August 2010 (UTC)


 * Lynx (web browser) has javascript capability and if you run it with the "-dump" option, it will dump the rendered page to a text file. -- k a i n a w ™ 04:42, 11 August 2010 (UTC)
 * I'm striking the above comment because it appears that the easy-to-download Windows binary of Lynx does not support JavaScript at all. I also noticed that many Linux flavors, like RedHat, ship Lynx with JavaScript support completely missing. -- k a i n a w ™ 04:46, 11 August 2010 (UTC)

Undoubtedly, if you wanted to dig into the JavaScript you could figure out how it renders the page and write something that finds what you are looking for in it. What you really want is a JavaScript engine that works with a scripting language; that's more efficient than automating clicks on your computer, although setting up the clicks is probably easier. Although I don't think it'll deal with the JavaScript, you might check out cURL for the general idea. Shadowjams (talk) 06:37, 11 August 2010 (UTC)


 * If you want to automate clicking patterns, I have not used it myself, but my colleagues have found AutoIt to be pretty excellent for that purpose, assuming you are using Windows. It's basically Windows macros on steroids. It seems to have no trouble reading pages rendered by browsers automatically. --Mr.98 (talk) 12:57, 11 August 2010 (UTC)

Moving the contents of Windows PST files to an archive on a linux machine
I have two mail accounts that I access from Windows PCs, using Microsoft Outlook. I archive old emails in .PST files on the Windows PCs. I would like to periodically (and manually) transfer the contents of old .PST files to an archive on my Linux machine (which dual boots with XP), from which I would like to be able to open the emails and forward archived emails to my windows accounts. The archive must of course preserve attachments and have good searchability, and it will become quite large. I read recently that the Windows version of Thunderbird can read .PST files, so that might be one of the tools needed to achieve what I want. I do not have any experience in setting up a mail server on a linux machine. Thanks in advance for advice on how to proceed and pointers to relevant howtos. --NorwegianBluetalk 07:37, 24 September 2010 (UTC)
 * If the Linux box is not the one with the main Outlook install, then I'd recommend installing a light-weight IMAP server on it (in Linux). You can then drag'n'drop from Outlook to the IMAP server. The archive would be placed into an 'old-emails' folder on the IMAP server.


 * Are you manually sharing the .PST files around your MS-Windows boxes? This would eliminate the need for that as well; you'd just do a 'send-and-receive emails' on the other Outlook instances to sync them to the IMAP server. For laptops I'd set Outlook to keep a copy of the IMAP contents, so you can read your non-archived emails on the road.


 * There are several packages available in most Linux distro repositories for this. The IMAP server itself can't send or receive email (an SMTP server is needed to receive, and IMAP isn't used for sending), so there are few security concerns. I'll check how I did this and get back to you here. It wasn't difficult, but there were a few undocumented steps I had to take. CS Miller (talk) 13:01, 24 September 2010 (UTC)


 * It's been a couple of years since I did this (so I stand to be corrected with more up-to-date info) but Thunderbird can't read PSTs, and open-source tools and libraries to read them aren't very mature. There are several ways I know of to do what you want:
 * On a Windows machine, import the PST into Outlook (not Outlook express). Then, on the same machine, run Thunderbird and use its import from outlook option. This does MAPI calls (rather than reading the PST file) to get the email data. This should produce an mbox format file (down in the hidden gubbins of the Thunderbird profile in the Application Data area) which you can copy over to the equivalent place in the profile of a Linux Thunderbird install.
 * On your Linux machine, install Dovecot (it's in the standard package repositories for most distributions) and configure it as an IMAP server (its config options are fairly obvious). Now configure your Linux Thunderbird client to be an IMAP client of that Dovecot IMAPd.  On Windows, import the PST in Outlook, then configure Outlook to be an IMAP client of that Dovecot IMAPd on Linux. Then drag-and-drop the emails from the place Outlook imported them to over to the IMAP account, and they'll be instantly available to the Linux tbird. Once that's done you can decide to keep them inside the IMAP server or you can have the Linux Thunderbird copy them down to its own store (again by drag and drop).  Unfortunately in your position this requires the Linux and Windows machines to be running concurrently, and if you're dual booting this isn't possible.
 * Use Fookes Software's Aid4Mail (again on Windows, but it's fairly basic stuff, so it should work in Wine). That will read the PST file and will export it to either .eml files (that's one file per email) or an mbox that thunderbird will read. Again you'd copy those exported files over to Linux - you'd put the mbox into the Thunderbird profile, or with the .emls you'd just leave them in a folder somewhere and open them in Thunderbird by double clicking. Aid4Mail (I think you'd need the Professional version) isn't free.
 * I'll dig around and see if Evolution has PST support (as Evolution tries harder to be an Outlook replacement than Thunderbird does). -- Finlay McWalter ☻ Talk 13:15, 24 September 2010 (UTC)
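A Dovecot configuration of the kind described above can be very short; a sketch (directive names are for Dovecot 1.x-era configs, and the mbox location is an assumption to adapt to your system):

```
# /etc/dovecot/dovecot.conf -- minimal IMAP-only sketch
protocols = imap
mail_location = mbox:~/mail:INBOX=/var/mail/%u
```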


 * You've not said from where the Outlook machine is getting those emails. If they're coming from a Microsoft Exchange server (which is pretty common in corporate and institutional environments) then Evolution can access them on the Exchange server. If that's the case you might consider using Evolution on Linux rather than Thunderbird (it's pretty good, and has much better Exchange integration).  If you're set on Thunderbird, I think Evolution can export to mbox. -- Finlay McWalter ☻ Talk 13:19, 24 September 2010 (UTC)


 * This free-software program claims to be able to convert PST to mbox on Linux. I haven't tried it (and don't have the wherewithal to try it now) but you can give that a shot. -- Finlay McWalter ☻ Talk 13:28, 24 September 2010 (UTC)


 * Thanks a lot for your replies! The emails originated from two different exchange servers. The plan is to just copy the .PST files, I don't intend to connect to the exchange server from the linux machine. I'll be working on this in the weekend. I'll possibly be back with more questions, and will report the results here. --NorwegianBluetalk 15:58, 24 September 2010 (UTC)

Processors, i5 and i7
How much should I pay for a desktop workstation with at least 4 GB of RAM, at least 1 TB hard drive at a fast speed, and quad core i7 or i5, plus an appropriate video card for image editing, and windows 7? Thanks. 84.153.219.234 (talk) 11:48, 16 July 2010 (UTC)


 * There is a pretty huge jump in price between i5 and i7. If you buy an i7, go for the i7-9xx series; these have larger caches and use QPI, while the i7-8xx uses the older dual-channel memory bus. Your price is going to vary significantly with these kinds of choices (a difference of roughly 30-50% between an i5 (HP p6580) and an i7 (HP e9150t) workstation, based on some cursory scans at major vendors). Another way to rephrase your question: are you still uncertain which processor you need, and is price more relevant? If so, the i5 is a fine chip - but you will definitely see a performance boost with the i7 if you are doing CPU-heavy or memory-intensive tasks. You should pay in the neighborhood of US $500-700 for an i5 system, and in the neighborhood of US $700-1200 for an i7 system, depending on how much you deck it out with RAM and graphics, and where on the spectrum of specific processors you choose to land. Note that if you include the keyword "workstation" in your search for a desktop PC, you will end up in the "business purchases" section of the vendors' websites, and they will charge you more for essentially the same hardware. Nimur (talk) 15:21, 16 July 2010 (UTC)
 * Also the most expensive i7 costs the same as four i7-930s. 121.72.166.173 (talk) 06:24, 17 July 2010 (UTC)

Convert a set of jpegs to video
I have 200,000 jpg files from a webcam. How can I convert them into a video? They are all of the same dimensions and named sequentially 82.44.55.25 (talk) 19:43, 16 December 2010 (UTC)
 * FFmpeg is the ideal software for that.
 * If your jpg filenames are cam000001.jpg to cam200000.jpg and you want 24 frames per seconds, it's as simple as:
 * ffmpeg -r 24 -i cam%06d.jpg result.mpg
 * You'll find more detailed instructions on http://electron.mit.edu/~gsteele/ffmpeg/ --Dereckson (talk) 20:01, 16 December 2010 (UTC)

Video re-encoding
I have a video encoded in h264 at 5fps. I need the video to be 25fps, but I want it to remain the same speed and duration. What Windows programs (preferably free) can do this? 82.44.55.25 (talk) 14:51, 8 January 2011 (UTC)


 * It sounds like you want to do tweening to fill in the missing frames. I can tell you right now that this isn't going to look very good, with 80% of the frames being made to fill in the gaps, because there's just not enough "information" to start with.  I think 50% or fewer frames "tweened" in this way might look OK. StuRat (talk) 15:03, 8 January 2011 (UTC)
 * Yes, that's exactly what I want. I'm not too worried about quality 82.44.55.25 (talk) 16:15, 8 January 2011 (UTC)
 * FFmpeg can do this sort of video conversion. Graeme Bartlett (talk) 20:45, 8 January 2011 (UTC)
 * ffmpeg can adjust the frame rate of a video, but only by duplicating or dropping frames, not interpolating. In this case each frame would be followed by 4 exact copies of itself, so the result played at 25fps would look exactly like the original at 5fps, not smoother. ffmpeg is too simple for such a complex job. 67.162.90.113 (talk) 22:28, 8 January 2011 (UTC)

You could use FFmpeg to dump to JPEGs, then use ImageMagick (and a for loop) to make filler frames morphed from adjacent ones, then FFmpeg again to go from JPEG to <insert video format of choice>. ¦ Reisio (talk) 03:47, 9 January 2011 (UTC)
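The three-step pipeline just described can be sketched as a shell script. This version collects and prints the commands rather than executing them, so it is a dry run; all filenames, the frame count, and the renumbering step are illustrative and need adapting:

```shell
#!/bin/sh
# Dry-run sketch: dump the 5 fps video to JPEGs, use ImageMagick to morph
# 4 filler frames between each adjacent pair, then re-encode at 25 fps.
# Commands are collected and echoed, not run; feed them to a shell once
# they look right for your files.
plan=""
run() { plan="$plan$*;"; echo "$@"; }

run ffmpeg -i input.mp4 frames/f%05d.jpg        # step 1: dump frames

i=1
while [ "$i" -lt 3 ]; do                        # "3" stands in for the real frame count
    j=$((i + 1))
    run convert "frames/f$(printf %05d "$i").jpg" "frames/f$(printf %05d "$j").jpg" \
        -morph 4 "morphed/pair${i}_%d.jpg"      # step 2: 4 tweens per pair
    i=$j
done

# step 3: after renumbering the morphed frames into one sequence
run ffmpeg -r 25 -i final/f%05d.jpg result.mp4
```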

What is the efficiency of human sorting?
...or phrased slightly differently, what sorting algorithms do humans tend to use, and what are their efficiencies? I suspect the answer will depend a lot on the items that are to be sorted, so I'll present the example I'm particularly interested in: the sorting of sample tubes in a laboratory. The tubes are numbered (long numbers, say 9 digits). The numbers come in different series with the same three leading digits (six series, say), and may or may not arrive partially sorted. Sometimes they arrive in racks where each rack is partially sorted. I read the article Sorting algorithm, which states that a good algorithm has a complexity of $$\mathcal{O} \left( n \log n\right)$$ and a bad algorithm is $$\mathcal{O}\left( n^2 \right)$$.

How will a human perform, in the setting I described? Guesstimates are welcome, pointers to relevant empirical studies even better.

Thanks, --109.189.66.11 (talk) 19:59, 10 January 2011 (UTC)


 * When I've taught algorithms class, I task 3 to 4 students with sorting short, medium, and long lists of items (sometimes numbers, sometimes words, sometimes sorting on height or size). My experience is that humans use an insertion sort or bubble sort for short lists. When the list gets long, they tend to use a bucket sort to get it partially sorted and then an insertion sort on the shorter lists. There is a problem with measuring efficiency by computer standards: a human is capable of viewing more than one item at a time. So, insertion sort on small lists is O(n). For example, sort 4, 9, 2. You can see all three numbers at once and instantly note that 2 is the smallest. Then, you can see both 4 and 9 at the same time and note that 4 is the smallest. That leaves 9. A computer would have to look at each number, one at a time, to see that 2 is the smallest. Then, it needs to look at the 4 and 9 separately to note that 4 is the smallest. If a computer were capable of looking at more than one number at once and performing a comparison of all numbers against all others at the same time (the way humans do), sorting would be much faster in computers. -- k a i n a w ™ 20:23, 10 January 2011 (UTC)


 * (EC) Maybe it's obvious, but I'll point out that many of the linked sorting algorithms are useful for humans sorting physical objects too. For instance, I've used a truncated merge sort for sorting student assignments. I imagine it would work well for your tubes, provided you have plenty of cache space. SemanticMantis (talk) 20:30, 10 January 2011 (UTC)


 * Thanks! Bubble sort and insertion sort are $$\mathcal{O}\left( n^2 \right)$$, merge sort is $$\mathcal{O} \left( n \log n\right)$$. If the number of samples is doubled, is it safe to assume that the number of man-hours needed to do the job should be somewhat less than four times what it was before the number of samples was doubled? --109.189.66.11 (talk) 20:44, 10 January 2011 (UTC)
 * As Tardis said below, those big-O times are based on assumptions about the speed of the primitive operations that don't necessarily apply to sorting of physical objects. -- BenRG (talk) 22:48, 10 January 2011 (UTC)


 * If you know that there are only a small number of "series" (six, you suggested), you'll use bucket sort or one of its many variants (MSD radix sort or postal sort or so). If sorting something like a deck of cards where the total is known and every card's position in the result is known, you can use a trivial tally sort that barely counts as a sort at all.  These are not comparison sorts and so the usual $$\mathcal{O} \left( n \log n\right)$$ rule (namely, that all good sorts have that complexity) doesn't apply.
 * One trick that humans can often use is that shifting physical objects can be much more efficient than shifting data: you can shove a line of books down a shelf to make room for another in their midst, and if you're strong enough it doesn't take any longer no matter how many you're moving. This makes insertion sort pretty efficient in the physical world even though it's a "bad" algorithm in the standard sense.  Heuristics allow that to be applied even when there are multiple shelves each of a fixed size: "allocate" more shelves than will be needed in the end, and then almost all of them will retain some empty space throughout so that you can do the shove-insert trick.  Then you can pack the shelves (if desired) at the end in linear time.
 * Anecdotally, I recently wanted to sort about 200 CDs into racks that had discrete slots (so that I really had to treat them like a computer would); I very consciously used merge sort between various piles of CDs on the floor (with the last merge "writing" into the racks). Of course, since I know the standard sort algorithms, perhaps I'm not useful evidence for what "humans tend to use".
 * You might also be interested in physical computation (which is unfortunately a redlink): algorithms that rely on physical processes other than the usual read/write/address of the Turing machine. Quantum computing gets lots of attention, but there are surprisingly efficient ways to use even the classical world to do what you might call computation: spaghetti sort, for instance.  Unfortunately, I'm having trouble finding other good references on physical computation (variously called {physical|analog} {computation|algorithms}).  --Tardis (talk) 21:00, 10 January 2011 (UTC)
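The bucket-by-series approach described above maps directly onto the tube-sorting question; a short Python sketch (the tube numbers are invented):

```python
from collections import defaultdict

def sort_tubes(tubes):
    """MSD-radix/bucket step on the 3-digit series prefix, followed by
    an ordinary sort inside each (much smaller) bucket."""
    buckets = defaultdict(list)
    for t in tubes:
        buckets[t[:3]].append(t)        # bucket on the leading series digits
    result = []
    for series in sorted(buckets):      # visit series in ascending order
        result.extend(sorted(buckets[series]))
    return result

tubes = ["402000123", "101000007", "402000001", "101000100"]
print(sort_tubes(tubes))
# → ['101000007', '101000100', '402000001', '402000123']
```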


 * Another factor to consider is whether one human or more will be doing the sorting. The multiple-human case mimics parallel processing in computers. For example, one person could do a bucket sort, as Tardis said, dividing the tubes up into, say, 6 series. Others could then sort those buckets using an insertion sort, and wouldn't need to wait until the first person finished. If there's just one human doing the sorting, though, perhaps an insertion sort right away makes sense (although there may still be 6 lines ("buckets") of tubes laid out, he would put each tube in its proper position in each "bucket" immediately, and not just pile them up for later sub-sorting). StuRat (talk) 21:47, 10 January 2011 (UTC)


 * I remember reading somewhere that most humans did insertion sort methods on lists up to a certain size, and then switched to various divide and conquer style sorts. This book (ISBN 0596155891 page 208) says "Humans naturally resort to a divide-and-conqueror algorithm" but it's referring to a specific kind of problem. I'm having trouble finding papers on that, but I'm sure someone's researched that question. Shadowjams (talk) 00:27, 11 January 2011 (UTC)


 * Thanks, everyone! --109.189.66.11 (talk) 19:24, 11 January 2011 (UTC)

Re-encoding large video files
Hi, I use a program called "Fraps" to record and save videos of games that I play. I store the videos on my hard drive to later edit and upload to youtube. The only problem is that the video files end up way too large and they take up over 200 gigs on my hard drive. What are some programs that will help me scale down the size of the files without losing the high quality of the videos? What is the best program for this and what's the best free program for this? (I might be willing to pay some cash if the free programs prove to be too ineffective). I'd really appreciate some help and thank you for your time. :) (Ahalol123 (talk) 00:10, 13 January 2011 (UTC))


 * If you have money, Adobe Premiere will do this. Whether or not you have money, the free program VLC media player will do this, too.  Of course, compression comes at a cost.  You can tweak the video compression settings as you like to choose the tradeoff between quality and file size &mdash; the higher the quality, the higher the file size will be.  In your shoes, I would try several of VLC's settings, starting with the highest-quality compression settings possible, and I think you'll be pleased.  Comet Tuttle (talk) 00:17, 13 January 2011 (UTC)

Converting the video to XviD or h264 or something similar will drastically reduce the filesize. See Avidemux, Handbrake, MEncoder and Category:Free_video_conversion_software 82.44.55.25 (talk) 00:20, 13 January 2011 (UTC)

Thanks for all the help! I really appreciate it! My computer already has Adobe Premiere installed, so money isn't really a problem. I'll try out both and see how well they work, because I'm only really looking for the most efficient and quick program to use, as I have a lot of files. Thanks again for the support! (Ahalol123 (talk) 00:30, 13 January 2011 (UTC))


 * Depending on what version of Premiere you are running, you may want to be sure to choose "Adobe Video Encoder" (or a similar item under the File menu) rather than "Save As" or "Export" &mdash; the former gives you the flexibility and codec options you'd expect. Comet Tuttle (talk) 22:44, 13 January 2011 (UTC)


 * FFmpeg, and the user-friendly WinFF front-end to it, are free software. FFmpeg supports conversion to and from almost all major formats.  It is the core video engine used in almost all free video processors and players, and (in modified form) in many commercial ones; so using FFmpeg nearly guarantees compatibility with most video players.  Nimur (talk) 01:01, 13 January 2011 (UTC)

There's a lot of suggestions here, which one is the best? Thanks again for all the help. (Ahalol123 (talk) 02:15, 14 January 2011 (UTC))
 * I'm pretty sure every program listed above (excluding Adobe and maybe mencoder) uses FFMPEG "under the hood." So, you'll get the same results.  HandBrake, WinFF, and so forth do have some value-add, being a bit easier to use (ffmpeg is a command-line tool); so it's a matter of your preference.  In my opinion, the command-line interface is very easy to use; here are instructions for starters.  Nimur (talk) 04:44, 14 January 2011 (UTC)
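For anyone who would rather script the conversion than use a GUI front-end, a small Python wrapper around the ffmpeg command line might look like the sketch below. The libx264 codec and the CRF quality setting are illustrative choices of mine, not settings anyone above recommended:

```python
import subprocess

def h264_encode_command(src, dst, crf=20):
    """Build an ffmpeg argument list that re-encodes src to H.264.
    Lower CRF means higher quality and a larger file (roughly 18-28
    is the usual range); the audio track is copied through untouched."""
    return ["ffmpeg", "-i", src,
            "-c:v", "libx264", "-crf", str(crf),
            "-c:a", "copy",
            dst]

def h264_encode(src, dst, crf=20):
    # Actually runs ffmpeg; requires ffmpeg on the PATH.
    subprocess.run(h264_encode_command(src, dst, crf), check=True)
```

Splitting the command builder from the runner means the argument list can be inspected (or batch-generated for a directory of Fraps captures) before anything is executed.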

Duplicate file detection
Is there an easy way to detect and perhaps remove duplicate files from a directory? The files could be anything: documents, text, images, music, etc, and there could be hundreds of files to process. Any duplicates would have some things in common, but other things could differ: filenames, dates, other metadata. I thought about writing a program to do this, something using MD5 and moving duplicates to a different directory, but maybe there's already such a program available (preferably for free). Astronaut (talk) 10:01, 2 April 2011 (UTC)
 * There are lots, I like this one 82.43.90.38 (talk) 10:26, 2 April 2011 (UTC)

See fdupes. ¦ Reisio (talk) 20:40, 2 April 2011 (UTC)
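If you did want to roll your own, the MD5 idea from the question is only a few lines of Python. A minimal sketch (grouping by file size first, so most files are never read in full):

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(directory):
    """Return groups of paths under `directory` with identical contents.
    Files are grouped by size first; only files that share a size
    ever get read and MD5-hashed."""
    by_size = defaultdict(list)
    for root, _, names in os.walk(directory):
        for name in names:
            path = os.path.join(root, name)
            by_size[os.path.getsize(path)].append(path)
    by_hash = defaultdict(list)
    for paths in by_size.values():
        if len(paths) < 2:
            continue
        for path in paths:
            with open(path, "rb") as f:
                by_hash[hashlib.md5(f.read()).hexdigest()].append(path)
    return [group for group in by_hash.values() if len(group) > 1]
```

Moving the duplicates elsewhere is then a matter of iterating over each group and `shutil.move`-ing all but the first path.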

apt-get
What happens if I run apt-get (in Ubuntu) with certain listed packages and stop it using CTRL-C while it's downloading? Is there a command I should run to perform some clean-up tasks?

In particular, say I've run three apt-get install commands, each with one or more listed packages. How do I get back to how the system was before running them all? NOTE: Not all the commands may have been fully executed. For example, I stopped one while downloading, and in another I mistyped one of the packages, so I don't know if any of them installed. --178.208.209.155 (talk) 00:38, 7 September 2011 (UTC)
 * If you do it while it's downloading then no problems will be caused. Package files are stored for later continuation of the apt-get process in '/var/cache/apt'. If the download process does not complete, then no installation will occur. If the configuration/install part of apt-get began, then some weird things can happen, but running 'apt-get -f install' as the command tells you to in such a case should fix it. 'dpkg -r' or 'dpkg --purge' will let you remove individual packages. 'cat ~/.bash_history|grep apt-get' will show you what you tried to/did install, provided you put in the commands fairly recently. Nevard (talk) 05:22, 7 September 2011 (UTC)

Gaming programming languages
Am I right in guessing that professional game programmers use languages like C when programming games? What (other?) languages would they use? (I don't think all the hits on Google are for professional-grade languages.) Heck froze over (talk) 18:34, 15 November 2011 (UTC)


 * C and C++ for a great deal of development. C# for Microsoft XNA. Java for Android.  Objective C for iOS.  Lua is used pretty commonly as a scripting language inside games. -- Finlay McWalterჷTalk 18:45, 15 November 2011 (UTC)


 * ActionScript for Flash games and (bubbling under) javascript for HTML5 games. For the backend (server) part of server-based games like MMORGS, just about anything could be used, but I'd expect to see a lot of Java there, and some Python and PHP, and C++ for some parts. -- Finlay McWalterჷTalk 18:48, 15 November 2011 (UTC)


 * Finlay McWalter pretty much nailed it. If you can be more specific about what your target platform is, we can provide a better response. TheGrimme (talk) 19:31, 15 November 2011 (UTC)


 * Two important ones I forgot: a small number of people, on the fancier Direct3D/OpenGL games, will code the shaders themselves, in GLSL or HLSL; they're not far from C. -- Finlay McWalterჷTalk 21:40, 15 November 2011 (UTC)

Can SQL SELECT combine items from several rows?
I have a database with a structure like this:

 Table1UID  Identity  Date        Value
 00001      ABC       01.01.2011  43
 00002      DEF       02.01.2011  28
 00003      GHI       05.01.2011  37
 00004      ABC       05.01.2011  49
 00005      JKL       08.01.2011  28
 00006      GHI       09.01.2011  40
 00007      ABC       12.01.2011  42
 00008      MNO       23.01.2011  31

The Table1UID is unique, while the "Identity" field is non-unique. I want to make a selection that collects all (or at least several, see below) occurrences of the same "Identity" in the same row of the selection.

 Identity  Table1UID1  Date1       Value1  Table1UID2  Date2       Value2  Table1UID3  Date3       Value3
 ABC       00001       01.01.2011  43      00004       05.01.2011  49      00007       12.01.2011  42
 DEF       00002       02.01.2011  28      NA          NA          NA      NA          NA          NA
 GHI       00003       05.01.2011  37      00006       09.01.2011  40      NA          NA          NA
 JKL       00005       08.01.2011  28      NA          NA          NA      NA          NA          NA
 MNO       00008       23.01.2011  31      NA          NA          NA      NA          NA          NA

I suppose I may have to put a limit to how many items I want to combine; say I want to collect the values corresponding to the first, the last, and the "middle" Table1UID (whichever way "middle" is easiest implemented). Is it possible to do this with SQL SELECT, and if so, how? I'm using Microsoft Access (ancient version), but could switch to MySQL if that makes it easier. Thanks, -NorwegianBluetalk 19:41, 21 November 2011 (UTC)


 * I can imagine very kludgey ways to do this involving lots of embedded SELECT queries that abuse the FIRST and LIMIT parameters to try to get successive iterations of the same values and poke them into a row, but it strikes me that this would be a very clunky way to do it (and likely to fail when there is only a first and not a second), which indicates that there's probably a smarter approach. But I'm not an SQL guru, so I'll defer to others on that point. If you're using Access, though, my inclination would be to go about things differently, using VBA to construct this sort of thing, just because it'll be less apt to go belly-up when you hit all of those NAs... ---Mr.98 (talk) 21:27, 21 November 2011 (UTC)


 * You can almost do it with group_concat (MySQL; similar functions exist in other engines). You'd use: select Identity, group_concat(Table1UID, Date, Value) from YourTable group by Identity. The catch is that the group_concat comes out as one column with comma-separated values. Of course, it is rather easy to explode a comma-separated value in a wrapper query to turn them back into independent columns. -- k a i n a w &trade; 21:34, 21 November 2011 (UTC)
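The group_concat approach can be tried out without installing MySQL, since the SQLite engine bundled with Python's sqlite3 module provides the same aggregate. A minimal sketch using the table and a few of the rows from the question (concatenating the three columns with explicit commas):

```python
import sqlite3

# In-memory SQLite database mirroring (part of) the example table above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (Table1UID INTEGER, Identity TEXT, "
            "Date TEXT, Value INTEGER)")
con.executemany("INSERT INTO Table1 VALUES (?, ?, ?, ?)",
                [(1, "ABC", "2011-01-01", 43), (2, "DEF", "2011-01-02", 28),
                 (4, "ABC", "2011-01-05", 49), (7, "ABC", "2011-01-12", 42)])

# group_concat collapses each Identity's rows into one comma-separated column.
result = con.execute(
    "SELECT Identity, group_concat(Table1UID || ',' || Date || ',' || Value) "
    "FROM Table1 GROUP BY Identity ORDER BY Identity").fetchall()
```

As noted above, the catch is that everything for one Identity lands in a single text column, which a wrapper query (or the client program) then has to split apart again.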


 * If I were to do it as SQL, I imagine creating 3 temporary tables would be the way to go, where:


 * 1) The first table has columns Identity, Table1UID1, Date1, and Value1 and contains the first occurrence of each Identity when sorted by Table1UID.


 * 2) The second table has columns Identity, Table1UID2, Date2, and Value2 and contains the first occurrence of each Identity, when sorted by Table1UID, which is not in the first table.


 * 3) The third table has columns Identity, Table1UID3, Date3, and Value3, and contains the first occurrence of each Identity, when sorted by Table1UID in reverse order, which is not in the first or second table.


 * Then the only ugly thing remaining to deal with would be Identities in the first table which aren't in the second or third tables. You might want to update the second and third tables to add the "NA" for any Identity present in the first table, but absent in those tables.  After this prep, the final SELECT statement should be straightforward. StuRat (talk) 22:19, 21 November 2011 (UTC)


 * For a solution specific to MS SQL Server 2008 & later, you can use ROW_NUMBER() OVER(PARTITION BY ... ORDER BY ...) to group and sequence the rows. Those partial results can then be wrapped up as a Common Table Expression (CTE) using a WITH clause that precedes a final SELECT, which selects and joins the contents of each column group.  The number of column groups would be fixed, though.  Generating a result with a dynamic number of column groups would likely require some dynamic SQL.  For the three-column case, you would have something like:

 DECLARE @Table1 TABLE(Table1UID INT, [Identity] VARCHAR(100), [Date] DATE, Value INT);
 INSERT @Table1 VALUES
     (1, 'ABC', '2011/01/01', 43), (2, 'DEF', '2011/01/02', 28),
     (3, 'GHI', '2011/01/05', 37), (4, 'ABC', '2011/01/05', 49),
     (5, 'JKL', '2011/01/08', 28), (6, 'GHI', '2011/01/09', 40),
     (7, 'ABC', '2011/01/12', 42), (8, 'MNO', '2011/01/23', 31);
 WITH CTE1 AS (
     SELECT col = ROW_NUMBER() OVER(PARTITION BY [Identity] ORDER BY Table1UID), *
     FROM @Table1
 )
 SELECT C1.[Identity],
        Table1UID1 = C1.Table1UID, Date1 = C1.[Date], Value1 = C1.Value,
        Table1UID2 = C2.Table1UID, Date2 = C2.[Date], Value2 = C2.Value,
        Table1UID3 = C3.Table1UID, Date3 = C3.[Date], Value3 = C3.Value
 FROM CTE1 C1
     LEFT JOIN CTE1 C2 ON C2.[Identity] = C1.[Identity] AND C2.col = 2
     LEFT JOIN CTE1 C3 ON C3.[Identity] = C1.[Identity] AND C3.col = 3
 WHERE C1.col = 1
 ORDER BY C1.[Identity];


 * -- Tom N (tcncv) talk/contrib 01:24, 22 November 2011 (UTC)


 * Did you give it a run ? If so, I'd like to see your results. StuRat (talk) 03:58, 22 November 2011 (UTC)


 * Here are the results I get from the above. Except for a few formatting details and the limitation of being a fixed layout, this appears to match the OP's request.
{| class='wikitable'
! Identity !! Table1UID1 !! Date1 !! Value1 !! Table1UID2 !! Date2 !! Value2 !! Table1UID3 !! Date3 !! Value3
|-
| ABC || 1 || 2011-01-01 || 43 || 4 || 2011-01-05 || 49 || 7 || 2011-01-12 || 42
|-
| DEF || 2 || 2011-01-02 || 28 || NULL || NULL || NULL || NULL || NULL || NULL
|-
| GHI || 3 || 2011-01-05 || 37 || 6 || 2011-01-09 || 40 || NULL || NULL || NULL
|-
| JKL || 5 || 2011-01-08 || 28 || NULL || NULL || NULL || NULL || NULL || NULL
|-
| MNO || 8 || 2011-01-23 || 31 || NULL || NULL || NULL || NULL || NULL || NULL
|}
 * Although I'm using Microsoft SQL Server, the common table expression and window functions like ROW_NUMBER appear to be more widely implemented than I originally realized. --  Tom N (tcncv) talk/contrib 06:29, 22 November 2011 (UTC)
 * Thanks a lot! Unsurprisingly, the code failed miserably with my ancient (2000) Access version. I installed Microsoft SQL Server 2008 Express and Management Studio, and managed to run the SQL statement after having created a new empty database, by right-clicking on its icon and selecting "New query". I pasted the SQL statement into the edit window and pushed the execute button (exclamation mark). The table that you showed appeared below the SQL code. However, the table "Table1" did not appear to have been created, as I thought it would have been (does the code only create a temporary Table1?). I found the output of the query in the tree control, but not where I expected (it was under "Databases/System databases/master/tables/dbo.Query", not under the database that I had created). I'll need to experiment a bit more, but that will have to wait till tomorrow evening! Thanks again for getting me started! --NorwegianBluetalk 22:02, 22 November 2011 (UTC)


 * The DECLARE @Table1 TABLE(...) statement is used to define a "table variable" in SQL Server, and such tables are automatically dropped when execution of the current script completes. It can be thought of as a more convenient, but less powerful, alternative to a temporary table.  What you are looking for is a persistent table, which you can create using CREATE TABLE Table1 ( ... ).  Changing the first statement to create a persistent table and replacing all remaining references to "@Table1" with "Table1" will give you a running script.  Note that you need only create and populate the table once.  --  Tom N (tcncv) talk/contrib 00:46, 23 November 2011 (UTC)
 * Thanks a million! Works perfectly now! --NorwegianBluetalk 19:41, 23 November 2011 (UTC)
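For completeness, the same first/second-occurrence pivot can be reproduced outside SQL Server with Python's bundled sqlite3 module. This is a trimmed two-column sketch of my own, not Tom's exact script: it emulates ROW_NUMBER with a correlated COUNT so it also runs on engines without window functions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (Table1UID INTEGER, Identity TEXT, "
            "Date TEXT, Value INTEGER)")
con.executemany("INSERT INTO Table1 VALUES (?, ?, ?, ?)",
                [(1, "ABC", "2011-01-01", 43), (2, "DEF", "2011-01-02", 28),
                 (4, "ABC", "2011-01-05", 49), (7, "ABC", "2011-01-12", 42)])

# Number each Identity's rows by Table1UID (a portable stand-in for
# ROW_NUMBER), then self-join to put occurrences 1 and 2 on one row.
sql = """
WITH numbered AS (
    SELECT t1.*,
           (SELECT COUNT(*) FROM Table1 t2
            WHERE t2.Identity = t1.Identity
              AND t2.Table1UID <= t1.Table1UID) AS col
    FROM Table1 t1)
SELECT n1.Identity,
       n1.Table1UID, n1.Value,
       n2.Table1UID, n2.Value
FROM numbered n1
LEFT JOIN numbered n2 ON n2.Identity = n1.Identity AND n2.col = 2
WHERE n1.col = 1
ORDER BY n1.Identity
"""
rows = con.execute(sql).fetchall()
```

Identities with only one occurrence come back with NULLs in the second column group, just as in the wikitable above; the correlated COUNT is O(n²) per group, so on large tables the real ROW_NUMBER() is preferable.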

Difficult-to-remove folders with random names, Windows 7
After installations and Microsoft updates, folders with random names like "3286eb5992e84d4ec46ad5" are sometimes left behind on my largest drive. When I was using XP, I just deleted such folders. With Windows 7, this turns out to be more difficult. Here's what happens:
 * 1) I right-click on the folder, choose delete.
 * 2) Windows replies: "Are you sure?"
 * 3) I click "yes".
 * 4) Windows says (translated from Norwegian): "Folder access denied. You need administrator privileges to delete this folder", and displays three buttons, one with a blue-and-yellow shield "Continue".
 * 5) I click "Continue".
 * 6) Windows starts counting files, but then displays a message: "You need permission from SYSTEM to change this folder". Buttons: Try again and Cancel.
 * 7) Trying again just redisplays the dialog.
So the question is: how do I get SYSTEM privileges and remove the folder? Surely, this is junk? --NorwegianBluetalk 09:51, 19 December 2011 (UTC)


 * Those are used by Windows System Update, so if you've got pending updates waiting to be installed or otherwise dealt with, you should not mess with those folders.
 * If you're absolutely sure that Windows System Update is not doing anything, and has somehow left those folders behind accidentally, they can be deleted after you take ownership of them. APL (talk) 11:58, 19 December 2011 (UTC)


 * Thanks. Yes, I'm absolutely sure there are no Windows System Updates pending, or other updates for that matter, and I've deleted such folders without problems when using XP. I followed the link (which is for XP), but was able to take ownership only of the files in the directory, not of the sub-folders (of which there were many). I was unable to repeat the procedure sub-folder by sub-folder; the dialogs that appeared when right-clicking these were different from that of the parent folder. So I booted into Linux, deleted the folder, emptied the trash, and the folder was gone. --NorwegianBluetalk 08:28, 20 December 2011 (UTC)


 * You should be able to take ownership of the whole directory, including subcontainers and objects, give yourself permissions and the delete the entire directory in Windows 7 in advanced permission settings. I've done it before Nil Einne (talk) 12:16, 20 December 2011 (UTC)


 * Thanks. I anticipated that this might be a problem, and looked for checkboxes etc. to indicate that I wanted to take ownership of everything recursively. I don't remember the details, I think I checked off that that was what I wanted to do. However, in the process, warnings came up for the subdirectories (saying I needed SYSTEM privileges to take ownership of the subdirectory), and saying that the folder would be in an inconsistent state. So, as mentioned above, I deleted the whole thing after booting from Linux. If the problem appears again, I'll try again from Windows, and take careful notes and screenshots! --NorwegianBluetalk 13:14, 20 December 2011 (UTC)

Norwegian letter (ÆØÅ)
What is the HTML code for an o mixed with a /?............Kittybrewster  &#9742;  22:10, 5 January 2012 (UTC)


 * The Ø article has it. For stuff like this go to Norwegian alphabet and that links to the articles for all the relevant letters, and they in turn have that letter's representation in many formats. -- Finlay McWalterჷTalk 22:37, 5 January 2012 (UTC)


 * To answer your question directly, the answer is "&amp;Oslash;" for capital "&Oslash;", and "&amp;oslash;" for lower case "&oslash;". I often write web pages by hand, and type these characters as if they were part of the official spellings of the words involved. --NorwegianBluetalk 23:10, 5 January 2012 (UTC)
 * You can also simply use the characters Ø and ø. This will "just work" on any modern browser.  If you want to be very sure, you can specify that your HTML page should be interpreted as Unicode by declaring the character set in the page header (the official HTML specification's section on specifying the character encoding is very verbose).  There are several standard ways to do so:
 * Instruct your server to send the HTTP header Content-Type: text/html; charset=UTF-8. If you don't know how to do so, use one of the options below.
 * Include this in your HTML document head: <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
 * Use (and comply with the specifications for) XHTML, and include an encoding="UTF-8" attribute in the XML declaration at the top of your document
 * The old-fashioned "&amp;Oslash;" character entity references are still valid (in HTML 4.01), but you should prefer the more modern technique. More detail at Character encodings in HTML.  To put it succinctly, if a web browser or rendering engine can interpret the entity code, it can (almost certainly) also interpret the Unicode; but the converse is not always true.  Nimur (talk) 23:46, 5 January 2012 (UTC)
 * Thanks! Using æøåÆØÅ directly in (sloppily written) web pages has caused problems in the not-too-distant past. Advice much appreciated. --NorwegianBluetalk 09:49, 7 January 2012 (UTC)
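That the entity references and the literal characters denote the same code points is easy to check from Python's standard library html module:

```python
import html

# The entity references and the literal characters are the same code points.
assert html.unescape("&Oslash;") == "Ø"
assert html.unescape("&oslash;") == "ø"
assert html.unescape("&#216;") == "Ø"   # numeric character reference
```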

Android app paranoia
How do I determine whether an Android app is trustworthy or not? The app I'm considering installing is Android Lost.

How do I know:
 * who is writing the software?
 * what personal information could the app access from the phone and store on its own servers?
 * that the app won't start spamming me (or popping up commercial messages or whatever)?
 * what happens to my data if the company goes bankrupt; can the data be acquired by a less trustworthy company?
 * what guarantees do I have against something bad happening?
 * how ugly could it get?
 * am I being paranoid, or do my questions represent common-sense caution?
 * a final question: what is the status of open-source, quality Android apps?
From the webpage, I get the impression that Android Lost is written by just one person, whose name I didn't find on the page (maybe I didn't look hard enough, or was tired; it's late here). Installing the app means putting a lot of trust in the developer (whose name I don't know), who doubtless is working hard to produce an app that appears to be held in high regard, and who lives by donations. What if this good guy gets tired of his low-income situation, and turns into a bad guy who discovers new ways of making money from the resources he controls by having his app installed on many phones; ways of making money that are not beneficial to the phone owners (annoying popups, fraud, identity theft, or phone botnets, if such beasts exist)? Thanks, --NorwegianBluetalk 00:42, 1 March 2012 (UTC)
 * Often the app's permissions can help determine if the app is going to do something it shouldn't. Permissions are enforced by the system, so if the developer doesn't declare a permission they can't use it. But this particular one looks like it actually does need a lot of potentially dangerous permissions. Unfortunately, if the source code isn't available, it can be very difficult to determine what the app is actually doing. It seems to be trustworthy currently; if you're concerned that it may change in the future, disable automatic updates and then check the reviews before manually updating. If permissions change, a manual update will be required regardless of your preference, so use that opportunity to research the change. Reach Out to the Truth 02:18, 1 March 2012 (UTC)


 * Ok, let's try and get some answers.


 * 1) To discover the author, I searched for the app in the Android Market. It seems to be made by a guy called Theis Borg. A little further Googling leads me to believe that that's this guy.
 * 2) At the app's page on the Android Market you can click permissions and see all the features on your phone that this app will be able to access. Basically it's everything, but that's not too surprising since this is an app that allows you to control your phone remotely. It does mean, though, that the app could (as in, will have permission to) access information on your phone, including your IMEI number, your location, details of your SMS messages and more.
 * 3) Given the permissions you have to allow for this app, the answer to this is, 'you don't'.
 * 4) This might get a little close to 'legal advice', so I'll reserve the right not to answer this one right now...
 * 5) You might want to read the section about security on the developer's website - particularly the section saying "Trust: Basically all of the above is just text. You _will_ have to trust me that I am a nice guy and all that I say is true. If you do not trust me that is quite OK - then you should not install this app. No hard feelings from my part."
 * 6) Worst case scenario? Someone else can take control of your phone and all the information therein, and deny you access to it. Although you would have the phone in your hand, it would be as useful to you as if it was stolen. As I say, worst case.
 * 7) Paranoid? Yes, you are. However, that's not necessarily a bad thing. This app feels legit to me, and the developer has got his contact details, including home address, freely available on the net. That's generally not something that you do when you're out to rip people off. Also the reviews seem quite positive. On the minus side, I don't see any reviews of the app from 'big media' - I'm thinking Engadget, Gizmodo, Lifehacker, those sorts of things. That doesn't mean it's not good, just that it hasn't broken into the mainstream. Final point is that the app has over 100,000 downloads in the last month alone. Google is normally pretty quick at dealing with issues in the Marketplace, and if any of those 100,000+ people had reported something fishy with the app it wouldn't still be around for download. My advice would be to check out alternatives to this app (I like Prey personally) and have a think about the permissions, weigh up the consequences with the benefits of the software and then make a decision.
 * 8) Sorry, but I'm not quite sure what you meant by this question. And I'm not sure I'm the right person to ask.

Hope all this helps! - Cucumber Mike (talk) 11:49, 1 March 2012 (UTC)
 * Worst case scenario: You have your emails, passwords, bank info, phone numbers on your phone. They use your phone to call their own premium rate numbers, as a relay for their terrorist/drug activity, to WikiLeak your secrets, to plant incriminating information, to harass your family and friends, to access your bank accounts, to steal your identity (if anyone would want it after all the previous steps). Rich Farmbrough, 16:27, 1 March 2012 (UTC).


 * If it's open source, then you can read the code and compile it yourself... Rich Farmbrough, 16:28, 1 March 2012 (UTC).


 * Thanks everyone for your responses! Special thanks to Cucumber Mike for a very thorough and useful answer. Sorry about being unclear in the last question. I meant to ask about the availability of open source, high quality apps for Android, because my impression is that just about everything is closed source. But I think I'll research the question a bit more myself, and come back with a separate question about this if necessary. --NorwegianBluetalk 18:15, 1 March 2012 (UTC)
 * FDroid's repository includes only free and open source applications. That's what I use. I just found AOpensource.com as well, which I haven't taken a look at yet. You can probably also find others that aren't included in those resources by searching the Android Market for terms such as "free software", "open source", and "GPL". Reach Out to the Truth 18:56, 1 March 2012 (UTC)
 * Thanks! I just browsed both sites superficially now, and saw lots of stuff that looks interesting. Excellent! --NorwegianBluetalk 21:49, 1 March 2012 (UTC)

Video to GIF in Ubuntu
Can anyone suggest software which can convert video to GIF images? Software Centre software would be preferable! --<span style="white-space:nowrap;text-shadow:#ED791A 0em 0em 0.8em,#F55220 -0.8em -0.8em 0.9em,#1D6B00 0.7em 0.7em 0.8em;"> Tito Dutta  ✉  23:19, 6 August 2012 (UTC)
 * GIMP can do it using the GAP package, if it can handle the type of video you have. Looie496 (talk) 23:25, 6 August 2012 (UTC)


 * ffmpeg can do it thus: ffmpeg -i in.mpg -pix_fmt rgb24 out.gif -- Finlay McWalterჷTalk 23:28, 6 August 2012 (UTC)

Flicr image upload
(Barnstar on User:Medeis' talkpage) Thanks for finding a good photo for the Héctor Camacho article! INeverCry 20:01, 24 November 2012 (UTC)
 * I reviewed it on Commons, and added it to the other articles in interwiki. INeverCry  20:10, 24 November 2012 (UTC)

Thanks. I was concerned because I expected a special prompt for Flickr images, but didn't find one while uploading; I see you caught that, though. I am curious how you found Ms. Negron's last name? There didn't seem to be any obvious way to find it, or I'd have added her full name. (I know I have uploaded images before from Flickr without finding the author's full name. I haven't done too many, so you may want to check my previous uploads, if there is a way to do that.) μηδείς (talk) 20:59, 24 November 2012 (UTC)
 * I only did the Flickr review, which I rarely do. This user added the info you refer to. I don't know much about the subject, as I spend the majority of my time with deletions/restorations. INeverCry  21:12, 24 November 2012 (UTC)
 * Actually, I did find her full name when I went back to leave a notification at Flickr thanking her and letting her know we used the image. I am not too worried about the user name issue on the other images; no one has told me they were subject to deletion, and they were all in good faith of course. Thanks again. μηδείς (talk) 21:17, 24 November 2012 (UTC)
 * You might want to use this for uploading Flickr images in the future. INeverCry  21:26, 24 November 2012 (UTC)
 * Excellent! μηδείς (talk) 21:31, 24 November 2012 (UTC)

Running mobile apps on a Windows PC
To make discussion easier, I am dividing this question into four separate questions. Please answer in the sections below, and thanks for all answers. 180.245.210.151 (talk) 04:30, 29 January 2013 (UTC)
 * With great difficulty.


 * There are emulators that come with the developer kits of those platforms. (However, The iOS dev-kit is Mac-Only.) In theory you could use the emulators to run the apps on your desktop. In reality, emulators often don't work well and have many limitations. (For instance, as far as I know, the Android emulator has only the most rudimentary OpenGL support. It won't run games, or almost any other app that uses OpenGL for display.) Another stumbling block is actually getting the apps. You can't connect to app stores with an emulated device, only real ones. So you'd need to get a copy of the app from the developer (or pirate it, I suppose.)
 * In most cases it would be more trouble than it's worth. APL (talk) 09:14, 29 January 2013 (UTC)

How do I run Android apps on a Windows PC?
Use BlueStacks.—Ëzhiki (Igels Hérissonovich Ïzhakoff-Amursky) • (yo?); January 29, 2013; 15:42 (UTC)

How do I run iOS apps on a Windows PC?

 * Run Mac OS in a VM. ¦ Reisio (talk) 16:00, 29 January 2013 (UTC)

How do I run Windows Phone apps on a Windows PC?
You can download the development tools for Visual Studio and this will allow you to run and debug 'your' applications either through an emulator or with an attached windows phone (you need to developer unlock it first). Surely however, if you are serious about development, the small bit of googling needed to find this wasn't beyond you? nonsense ferret  16:58, 29 January 2013 (UTC)

Video codecs for future storage
Part 1 I'm getting several hours of super 8 film digitized. I want to keep it in a format that strikes a good balance between keeping the amount of storage space needed at a manageable level, and preserving as much as possible of the information in the video. I've had tests done at several companies who specialize in this, and the company that so far produces the best digitizations uses this hardware, and a codec that VideoLAN describes thus:
 Stream: 0
 Type: Video
 Codec: Packed YUV 4:2:2, U:Y:V:V (2vuy)
 Language: English
 Resolution: 1280x720
 Display resolution: 1280x720
 Frame rate: 24

edit: YUV 4:4:2 corrected to YUV 4:2:2

When I divide the file size by the total number of pixels encoded, I get about 2.7 bytes per pixel.
 * Do we have an article about this codec? I've read Chroma subsampling, but anything more specific?
 * Is this a codec that can be expected to be readable a long time into the future?
 * In this particular codec, are all frames represented equally faithfully to the original, or are there key frames that are represented more faithfully, with diffs in between that use some kind of lossy compression relative to the key frames? To clarify: the details of the encoding scheme, i.e. whether key frames and diffs are used or whether the information is stored as a sequence of entire images, do not matter to me. If the diffs are exact, it's fine.
 * I'm happy with a file size of up to about 4 bytes per pixel. Are there codecs that are widely used, can be expected to be readable a long time in the future, and which can be manipulated with ffmpeg (Windows version from http://ffmpeg.zeranoe.com/builds/), that would be a better choice than the one described above?

Part 2 Thanks, --NorwegianBluetalk 07:42, 6 March 2013 (UTC)
 * For some reason, the company mentioned above only delivers the files in .MOV containers. Can I convert to .AVI losslessly with ffmpeg, and safely delete the original?
 * Will ffmpeg be able to split longer files into shorter sequences losslessly?


 * 1a. I can't find a specific article but YUV has more information. 1b. Yes. 1c. It is uncompressed; you lose nothing except half the horizontal color resolution, which is normally not noticeable and which virtually all other codecs lose as well. Every frame is a key frame. 1d. 8-bit YUV 4:2:2 is two bytes per pixel. You could ask about 24-bit RGB, which would be 3 bytes per pixel, but it's possible that YUV 4:2:2 is the raw output format of their digitizer and converting to anything else would lose information. You could also ask about more than 8 bits per channel, but I think you would have more compatibility problems. 2a. I think ffmpeg -i input.mov -vcodec copy output.avi will work. 2b. Probably. At any rate there are tools that can. -- BenRG (talk) 18:47, 6 March 2013 (UTC)
 * Thanks a lot, BenRG. I wonder what the extra 0.7 bytes per pixel are? I just discovered Mediainfo, which reports the format like this:

General
Complete name    : MyFile.mov
Format           : MPEG-4
Format profile   : QuickTime
Codec ID         : qt
File size        : 8.41 GiB
Duration         : 3mn 23s
Overall bit rate : 355 Mbps
Encoded date     : UTC 2013-01-24 16:33:18
Tagged date      : UTC 2013-01-24 16:36:06
Writing library  : Apple QuickTime
©TIM             : 00:00:00:00
©TSC             : 24
©TSZ             : 1

Video
ID                   : 1
Format               : YUV
Codec ID             : 2vuy
Duration             : 3mn 23s
Bit rate mode        : Constant
Bit rate             : 354 Mbps
Width                : 1 280 pixels
Height               : 720 pixels
Display aspect ratio : 16:9
Frame rate mode      : Constant
Frame rate           : 24.000 fps
Color space          : YUV
Chroma subsampling   : 4:2:2
Compression mode     : Lossless
Bits/(Pixel*Frame)   : 16.000
Stream size          : 8.37 GiB (100%)
Title                : Apple aliasdatahåndterer
Language             : English
Encoded date         : UTC 2013-01-24 16:33:18
Tagged date          : UTC 2013-01-24 16:36:06

Audio
ID                          : 2
Format                      : PCM
Format settings, Endianness : Little
Format settings, Sign       : Signed
Codec ID                    : sowt
Duration                    : 3mn 23s
Bit rate mode               : Constant
Bit rate                    : 1 536 Kbps
Channel(s)                  : 2 channels
Channel positions           : Front: L R
Sampling rate               : 48.0 KHz
Bit depth                   : 16 bits
Stream size                 : 37.2 MiB (0%)
Title                       : Apple lydmediahåndterer / Apple aliasdatahåndterer
Language                    : English
Encoded date                : UTC 2013-01-24 16:33:18
Tagged date                 : UTC 2013-01-24 16:36:06

Other
ID                       : 3
Type                     : Time code
Format                   : QuickTime TC
Duration                 : 3mn 23s
Time code of first frame : 00:00:00:00
Time code settings       : Striped
Title                    : Tidskodemediehåndterer / Apple aliasdatahåndterer
Language                 : English
Encoded date             : UTC 2013-01-24 16:36:06
Tagged date              : UTC 2013-01-24 16:36:06


 * Could a silent audio stream consume so much space? I interpret the "Bits/(Pixel*Frame) : 16.000" output as a confirmation of your statement that the yuv compression should result in 2 bytes per pixel.
 * The conversion to .AVI that you suggested worked, sort of. The file played fine in VideoLan, and I compared a PNG snapshot with the corresponding frame in the original and found the files to be identical (binary diff). However, this particular content in an AVI container crashed MediaInfo (but not ffprobe). --NorwegianBluetalk 23:27, 6 March 2013 (UTC)
 * You don't need a silent audio track whether it consumes a lot of space or not, include  in your   command. ¦ Reisio (talk) 00:09, 7 March 2013 (UTC)
 * I know I don't need it, but I'm puzzled by the size of the files as delivered by the company who did the digitization. I don't want to mess with the original files until I'm 110% sure what I'm doing, but I'll try stripping it as you suggested on a copy of one of the files, to see if that's the explanation. Thanks! --NorwegianBluetalk 08:41, 7 March 2013 (UTC)
 * 1280×720×24×16 = 353894400, which is consistent with the 354 Mbps rate quoted above, and 203 s × 354 Mbps = 8.37 GB, so I think either you miscalculated or the file with the problem is a different one. Note that 1 Mbps = 10^6 bps, but 1 GB = 2^30 B. The audio track accounts for less than 1% of the file size, as shown by Mediainfo. It's possible they include it because some playback software wasn't designed to handle files without an audio track. You could at least downsample it to 8 KHz or something. -- BenRG (talk) 18:31, 7 March 2013 (UTC)
 * I indeed miscalculated, and was about to come here and say, slightly embarrassed, "mystery solved". I used the frame rate of the source material (18 fps), which was part of the file names, and completely ignored the blindingly obvious fact that they had inserted extra frames, to adjust to 24 fps instead of 18 fps. But it's a good thing you beat me to it, since in doing so, you also spelled out how other relevant calculations, which I was unsure about, are done. Thanks a lot! --NorwegianBluetalk 20:23, 7 March 2013 (UTC)
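For reference, the arithmetic above can be checked with a few lines of Python (the 203 s duration is the "3mn 23s" reported by Mediainfo, rounded to whole seconds):

```python
# Uncompressed 8-bit YUV 4:2:2 (2vuy): 16 bits per pixel per frame.
width, height, fps, bits_per_pixel = 1280, 720, 24, 16

bit_rate = width * height * fps * bits_per_pixel
print(bit_rate)              # 353894400 bps, i.e. ~354 Mbps (1 Mbps = 10**6 bps)

duration_s = 3 * 60 + 23     # "3mn 23s", rounded to whole seconds
size_gib = bit_rate * duration_s / 8 / 2**30   # 1 GiB = 2**30 bytes
print(round(size_gib, 2))    # ~8.36 GiB, matching (within rounding) the 8.37 GiB stream size
```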

Raw disk writing in Windows
I've tested writing to a USB2 external hard drive while connected to a USB2 port and a USB3 port. It is about 16% faster on a USB3 port. Why is that? Bubba73 You talkin' to me?


 * That's an interesting observation. What are you using for the benchmark?  A significant difference between the USB2 and USB3 protocols is that the latter has a lot more pipes and slots; they're briefly described in this document.  Obviously with a USB2 drive, that alone doesn't help, as the host controller has to fall back on the USB2 protocol. But it does mean that the driver can use more space (essentially more and/or longer queues) within the host controller itself; it may be that this is allowing it to use smart features like Native Command Queuing better, giving a higher effective throughput.  A second, perhaps more prosaic, possibility is simply that the driver for the USB2 path is older and rather conservative, and in writing the USB3 driver (where they had to rewrite that pipe and slot stuff anyway) they did a smarter, more aggressive, job of scheduling the IO, producing the more efficient use you're seeing. -- Finlay McWalterჷTalk 02:55, 16 March 2013 (UTC)


 * I wrote a program to test all of my drives. It writes 1MB blocks to a file repeatedly, for 15 seconds, then calculates the rate.  Just out of curiosity, I switched the USB2 external HD to a USB3 port, and it was faster.  I've switched back and forth and run it several times, and it is consistently about 16% faster when plugged into the USB3 port, compared to USB2.  Bubba73 You talkin' to me? 04:18, 16 March 2013 (UTC)


 * You may be better off with a longer time (so much more data); particularly if the drives are configured for write-behind then you risk the results of the benchmark being confused by different caching behaviour throughout the USB3 stack. Ideally you wouldn't be writing to files either (in case the file system too has some confounding effect), but to the physical disk surface instead. That's trivial on Linux and BSD (and I guess on OS-X), and possible (but increasingly difficult) on Windows (note that this will destroy partitions and data on that disk). -- Finlay McWalterჷTalk 04:33, 16 March 2013 (UTC)
 * It's not much different on Windows than on Linux. Paths like \\?\Device\HarddiskVolume1 are the equivalent of /dev/hda1, and you can pass them to dd for Windows. Obviously you should be really careful if you do this on any OS. The "increasingly difficult" article seems to be about a mechanism to prevent writing to a disk region that's also mounted as a filesystem, which seems like a good idea to me. You should unmount any filesystems before clobbering them (by removing the drive letter in Disk Manager). -- BenRG (talk) 05:27, 16 March 2013 (UTC)
 * What would the question mark in \\?\Device\HarddiskVolume1 typically need to be substituted with? --NorwegianBluetalk 09:51, 16 March 2013 (UTC)
 * Nothing, the ? is a literal in this syntax. See Path_%28computing%29 or .Kram (talk) 19:46, 16 March 2013 (UTC)
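A benchmark of the kind described above (1 MB blocks written for a fixed time, then the rate is computed) could be sketched in Python as follows; this assumes nothing about the original program beyond that description, and the fsync call is an addition so the OS cache does not absorb the whole write:

```python
import os
import time

def write_rate_mb_s(path, seconds=15.0, block_size=1024 * 1024):
    """Write block_size chunks to path for ~seconds; return the rate in MB/s."""
    block = b"\x00" * block_size
    written = 0
    start = time.monotonic()
    with open(path, "wb") as f:
        while time.monotonic() - start < seconds:
            f.write(block)
            written += block_size
        f.flush()
        os.fsync(f.fileno())  # push data past the OS cache before timing stops
    elapsed = time.monotonic() - start
    return written / elapsed / 1e6
```

As discussed in the thread, a longer run and raw-device access would reduce the influence of caching and the file system on the measured rate.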

Is there any good, easy way to do nominally GUI-oriented stuff in Windows?
I was surprised to find out that Ruby uses Tcl/Tk to do Windows GUI stuff. Why does a fancy language like this, which aspires to have everything predefined by convention, not offer its own simple set of commands to interface with the OS directly? More to the point, are there easy options in other languages (Lua, Perl) for basic navigation? For example:


 * I want to drag a file and drop it on some icon and have my script run, and have it receive as parameter(s) the name and location of the file dragged onto it.


 * I want to have an icon for Command that I can copy into that directory that opens in that directory, no CD required.


 * I want to have a function as simple to use as print for a Lua program that can take a string of text and uses it to display/conceal/resize windows or add buttons, content, etc. into them as simply as you'd add a newline with "\n".

Am I just clueless about options for stuff like this, or is there some huge cluster fuck in windows that is preventing people from making basic stuff like this, or has no programmer ever seen a need for any of these things? Wnt (talk) 13:03, 10 May 2013 (UTC)


 * I really don't understand your question at all. If you're asking "why does Ruby use a clunky old thing like Tk rather than Windows' own GUI framework", well I think that's because with Tk it is particularly easy to get something basic working with just a few commands. The real Windows GUI is available to all programs, and all programming languages, from DLL files.   It's certainly possible to write old-style win32 GUI programs in other languages (I've done this with win32gui in Python), but that's a tiresomely low-level API, so writing programs is a bit of a chore.  Programs can also use higher level (which can mean less flexible) frameworks like WinForms (I've got a python program somewhere that does that too).  These, and other APIs like DirectX, are available to everyone - as a practical matter, someone usually has to write a little translation shim that bridges the gap between that language's data model and the native API (which are usually exported by DLLs using Microsoft's own x86 calling conventions).   Other programs (e.g. Wireshark, Audacity) eschew the native API (which, after all, confines them to running only on Windows) and use cross-platform GUI toolkits like Swing (for Java), GTK, Qt, Tk, or WxWindows - these allow the same program to be compiled, with little change, on Windows, MacOS, Linux, etc. - at the expense of the resulting program looking slightly alien on every platform.   But if your question really is just "GUI programming seems unduly complicated, tiresome, and obscurant", then yes, it is.  GUIs are powerful, complex, concurrent, asynchronous, and graphical, so it's inevitable that programming one is going to be much harder than a simple interrogative textual Q&A. -- Finlay McWalterჷTalk 14:16, 10 May 2013 (UTC)
 * I'm getting the point that Tcl/Tk (and others) give a platform-independent GUI, but I still don't really understand why other languages wouldn't include these features directly, in an equally platform-independent way, and hopefully invent some new philosophies about how to make them better. Wnt (talk) 19:48, 10 May 2013 (UTC)


 * Tcl is beside the point - Ruby does not use Tcl. Ruby uses Tk, but can use other GUI frameworks too, and other languages can use Tk. You should not impute a particular preference in Ruby for Tk - its bundling is just a matter of convenience. Languages have to do lots of different things, and reimplementing their own way of doing everything, rather than using what's available on the system already, is often foolishness. GUIs are no different - language developers have enough work to do just developing their language and the core stuff that they can't get elsewhere. So they use existing GUI frameworks, either platform specific or multi-platform, to get stuff done.  Strong coupling between a language and a GUI framework is usually a way to make a weak language and a weak GUI. Sometimes, as with Java's AWT or Swing, language developers have reluctantly had to also develop a GUI in parallel (particularly back in the 1990s when AWT and Swing were born), often because there wasn't a suitable multiplatform GUI that met their requirements.  These days GTK and Qt in particular are very mature, and it is difficult to see why some language should reinvent those very fine wheels. Don't think that including stuff into a language "directly" is a good thing; it is usually not, and wise language designers try to keep their languages as small, and as task-directed, as possible. -- Finlay McWalterჷTalk 20:17, 10 May 2013 (UTC)


 * Possibly the easiest (and thus least flexible) way of doing very basic GUI tasks is Zenity which allows shell scripts to do some very basic dialog box ("do you want to wipe the disk?") type things.  The practical utility of this on Windows is somewhat questionable - it does allow cmd.exe scripts some semblance of a GUI, but working in that antediluvian horror is such a bother that GUIs are the least of someone's worries. Once someone migrates to Windows Powershell, which passes for a sane, modern shell, they get direct access to DLLs, and might as well just make Winforms calls like this. -- Finlay McWalterჷTalk 14:39, 10 May 2013 (UTC)

Changing the HD serial number (of the hardware, rather than of the volume). Possible?
HOOTmag (talk) 14:08, 17 September 2013 (UTC)
 * Changing the serial number is something you'll probably want to try asking the manufacturer about. Most drives will have a disk signature that is often used much like a serial number to uniquely identify a drive, and it is a filesystem feature that can be changed. Format routines are designed to preserve the signature, but in Windows you can use diskpart to change it:  K ati e R  (talk) 13:46, 18 September 2013 (UTC)


 * I suspect the hardware serial number is held on the drive's firmware (as well as probably being printed on the label stuck to the drive casing). Searching for HD firmware flash suggests tools exist, but usually only to update a particular manufacturer's firmware.  I haven't yet found if anything will let you change the serial number.  But why would you want to do that anyway?  Astronaut (talk) 16:03, 18 September 2013 (UTC)


 * (Re why one would want to do this:) There is a number (possibly more than one) uniquely identifying each disk on a Windows 7 machine. It's more than a year since I tried changing this, unsuccessfully, and eventually gave up. As far as I can remember, this is not the same thing as the UUID. The scenario where this would have made sense is the following: I bought a new machine, with an empty disk of the same size as the system disk. I reserved space on the system disk for installing Linux. I then cloned the first disk onto the second disk, installed Linux and Grub. At that point, each disk holds identical installations of Windows. With Xp, it was easy to configure grub to boot the Windows disk of my choice. This allowed me to maintain a "virgin", clean install of the OS as it originally was delivered, and to keep it updated with only the necessary security updates. The second disk was my main disk, where I installed the programs I use, etc (there were data disks as well, but that's beside the point). If something bad happened to my main OS disk, I could erase it, dd it back from the virgin disk, and reinstall my programs. With Win 7, this was no longer possible. My interpretation of symptoms and error messages was that the reason had something to do with a disk-identifying signature of some sort. It's a long time ago that I struggled with this, so I can't provide details. But this is a scenario in which an answer to the OP's question would have provided a solution to a problem. Anyway, I ended up installing grub on both disks, and using the BIOS boot device select options to achieve the functionality I wanted, in a rather more cumbersome way than I was used to. --NorwegianBluetalk 20:05, 19 September 2013 (UTC)


 * Windows 7 is using the disk signature that I mentioned in my first response - that's why I posted it, your sort of problem seemed like the most likely reason someone would be asking the question. The fact that disk formatting and imaging tools tend to preserve it rather than rewrite it unless told not to can lead to all sorts of annoying boot problems when switching disks. K ati e R  (talk) 12:51, 20 September 2013 (UTC)


 * SYNTAX: uniqueid disk id=Identifier
 * For MBR disks, 'Identifier' is a four-byte value in hexadecimal form.
 * For GPT disks, 'Identifier' is a GUID value.
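As an illustration of the syntax above, changing the signature of an MBR disk from an elevated command prompt might look like this (the disk number and the new four-byte identifier are placeholder values, not a recommendation):

```
diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> uniqueid disk id=1A2B3C4D
```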

Video editing software
I've seen some videos on YouTube where the song has been multi-track recorded by the same performer, and the video is composed of many windows/tiles showing the performer singing different voices or playing different instruments in sync with himself or herself. The window/tile layout is changed many times during the video, a tile may come floating in from above etc. My question is: which video editing programs are capable of creating such composite videos? Two examples are listed below. Thanks, --NorwegianBluetalk 23:16, 16 February 2014 (UTC)
 * Evolution of Miley Cyrus - Parody, Zoe Anne
 * Daft Punk ft. Pharrell - Get Lucky - A Cappella Cover - JB Craipeau


 * Final Cut Pro can do that. Even Blender can do it, if you're willing to!  In general, you need a nonlinear video editor software that supports layering and layer transforms.  Nimur (talk) 16:29, 17 February 2014 (UTC)


 * Thanks. Final Cut Pro appears to be Mac only. I've had a go at Blender once, but found the user interface too... different. Anything Windows or Linux with a more familiar user interface? Following the links from the article on Final Cut Pro takes me to Adobe Premiere Pro (Windows) and OpenShot Video Editor (Linux). Would these be up for the task? Is it correct that the Adobe program is licensing only? Other Windows alternatives? --NorwegianBluetalk 17:26, 17 February 2014 (UTC)


 * I have tested about ten Windows Video editors. Most that cost around 100 dollars would be able to do what you are after. Personally, I decided on CyberLink PowerDirector because I felt most comfortable with the way it handled multiple videos and sounds. I had no problems with 100 simultaneous films in one final video. DanielDemaret (talk) 22:44, 17 February 2014 (UTC) Several of these video editors have "trial versions", that is how I was able to test most of them before buying. DanielDemaret (talk) 22:46, 17 February 2014 (UTC)
 * As a direct answer to your question, Adobe Premiere Pro can do what you are after. However, I personally found it very slow compared to Powerdirector. DanielDemaret (talk) 22:59, 17 February 2014 (UTC)
 * Other editors that would work are, AVS and Sony Vegas. Each has its pros and cons. Adobe has more features (None that I felt I would use), AVS supports non-degraded videos (sometimes) and Sony has some interesting transitions. I tested these and more about two years ago. After Effects would also do the trick, but is a bit overkill for what you are asking for. However, After Effects does have some extremely good effects. I tested making a dice-cube rolling around with six videos, one on each side: It looked cool, but took ages to make. DanielDemaret (talk) 23:07, 17 February 2014 (UTC)
 * As a comparison, you could have a look at two videos I made way back just to try them out. They are not very good, but they do display some effects. http://www.youtube.com/watch?v=I7J8xGrz9g0, made with Cyberlink, has some simple multi-window transitions about 1 minute 25 seconds into the video. http://www.youtube.com/watch?v=mIUC2ST6-G4 ,made with After Effects, has some simple cubes. DanielDemaret (talk) 23:18, 17 February 2014 (UTC)
 * For a fully free and open source tool chain check out ffmpeg (transcoder), kdenlive (multi-track video editor), audacity (audio editor) and the GIMP (image editor). kdenlive is Linux only but runs quite happily in a virtual machine. It supports track overlay and track positioning. Skrrp (talk) 14:57, 18 February 2014 (UTC)


 * Thanks very much Daniel and Skrrp! I actually bought a copy of Powerdirector 11 some time ago, but it turned out it wasn't well suited for the work I was going to do (projects using digitized super 8 video). There was an issue concerning 24 fps and 25 fps recordings. The problem was that it didn't handle 24 fps well (giving a warning that the content would be degraded). I ended up eventually with Adobe Premiere Elements 11 in combination with ffmpeg. Thanks for the links to your videos Daniel, and thanks Skrrp for the suggested toolchain. I know ffmpeg, audacity and the GIMP, but had never heard of kdenlive, which looks very interesting. --NorwegianBluetalk 00:13, 19 February 2014 (UTC)

Visual C++ Simple Game Programming
So, from past questions, I'm learning C++ (and finding that I actually really love it as I get used to it). For my own interest, I'd like to make a simple game using it - nothing advanced at the moment, I just want to make a box that moves around in an area, correctly collides with the area's borders, and maybe has to avoid other boxes; just something to get a feel for the basics. At any rate, after a little research, it looks like there are all sorts of different packages and directions to start from, so I wanted to know if anyone has any suggestions. I'm not looking for the simplest route, but what would be most useful to work with long term/what would be worth knowing. If anyone has any experience or direction they can offer, I'd greatly appreciate it, thank you in advance:-)Phoenixia1177 (talk) 03:50, 3 September 2014 (UTC)
 * Simple DirectMedia Layer is a popular multimedia library that works on most platforms. You can use it to animate simple 2D and 3D graphics.  Nimur (talk) 04:06, 3 September 2014 (UTC)


 * What operating system are you targeting, and what development environment are you using?--Phil Holmes (talk) 11:17, 3 September 2014 (UTC)


 * For a very simple game, there are indeed an insane number of libraries that'll help you get it done - and for that purpose, it doesn't much matter which one you choose. I think your decision as to which one to invest time into depends on what you ULTIMATELY want to get done rather than what the simple example needs.  Most 'real' games rely on a bunch of different libraries - one for graphics, another for physics, another for sound and so forth. SteveBaker (talk) 14:51, 3 September 2014 (UTC)


 * I checked out SDL, it seems straightforward enough, I'm going to play with it this weekend. I'm using Windows 7 and Visual Studio Express 2013 - I also have Code::Blocks, but haven't really used it. As for targeting, at the moment, no one - at most, I'd like to one day make something simple yet entertaining, basic 3d stuff graphically, nothing commercial grade (I'm not delusional); realistically, this will probably be more of a learning experience and way to pass the time. I like game programming related stuff because it challenges various skills and ends up with a fun result (and I like designing, so that's cool). @Steve, I'm not looking to make anything along the lines of what passes for a modern commercial game; however, I would like to learn something that had the ability to make one. In short, I want to learn something that people would actually use, even if I'm never going to use it for that end. Thank you all again for your help, with this question and my previous ones; actual programming is not something I have a lot of background in, and while thrilling, I'm finding it to be a different sort of challenge. You all have been very helpful in getting me started with this:-)Phoenixia1177 (talk) 21:00, 3 September 2014 (UTC)

Silent data corruption of video files
I've had an incident affecting my collection of photographs and videos that worries me, and that I hope someone here is able to help me understand. I had five videos (.3gp format) recorded on an HTC phone (HTC Incredible S S710e) bought January 2012. The videos were recorded May 2012. I recently discovered they were corrupt. Jpegs in the same directory appeared to be ok. Two other videos recorded with the same phone a couple of months later, which were stored in a different directory, were ok. Below (collapsed) is a summary of the troubleshooting I've done so far.

05.05.2012 17:25        50 019 626 VIDEO0002.3gp
01.10.2014 14:54        50 019 849 VIDEO0002_corrupt.3gp
05.05.2012 17:42         1 686 133 IMAG0237.jpg
01.10.2014 14:54         1 696 534 IMAG0237_modified.jpg
 * In my most recent backups, the files were corrupt, but I retrieved the intact files from a backup from June 2013, along with the jpegs that I had kept in the same directory.
 * I've found that the sizes of the corrupt videos are larger than the originals:
 * Although there appears to be no problem with the jpegs, I've found that they too have increased in size:
 * Using exiftool, I've found some differences in the metadata, both of the jpegs and the videos:


 * Videos:
 * A tag called "Handler Type" in the original is missing (or rather, moved, see below) in the corrupt file. It has the value "Video track" in the original.
 * The tag called "Handler Description" immediately follows in the original, and is untouched in the corrupt file (value: "VideoHandle").
 * Later in the corrupt file, the following section appears:

Handler Type                   : Metadata
Encoding Time                  : 2012:05:05 18:52:54Z
Media Class Secondary ID       : Unknown Content
Media Class Primary ID         : Video
 * Finally, the movie data offset is different, 810040 vs 810263.
 * When the movie data offset is taken into account, the size of the binary video+audio data sections are exactly equal (49209586 bytes) in the original and corrupted files in the example. So I isolated the tails of the files, to check if the binary video+audio data were equal. They weren't. It is conceivable that they would have been identical if I'd had a way of assembling separate and continuous audio and video streams. But when compared as 49209586 byte binary files, they were totally different.
 * I restored the entire photo directory from the 2013 backup on a separate disk, and programmatically deleted all files that had identical md5 signatures to files in my current Photo/Video directory, and then examined the files that were left in the restored backup. I'm not through examining (it's a huge task), but I found one similar instance, in which .MOV files recorded on a Canon EOS 500 camera had been modified in the data section. The exiftool-readable stuff was identical. The modified .MOV files were playable, and I noticed no visible or audible differences between the original and modified file. See below regarding the accompanying *.THM files, which really are small JPEGs.
 * JPEGS
 * The tag "Interoperability Index: R98 - DCF basic file (sRGB)" present in the original is gone in the modified file.
 * Later, there is a section "Padding: (Binary data 2060 bytes, use -b option to extract)" in the modified file, that is not present in the original.
 * Thumbnail offset and length are increased in the modified file.
 * There is a section in the modified file:

About : uuid:faf5bdd5-ba3d-11da-ad31-d33d75182f1b
Date Acquired : 2013:01:26 00:12:31
 * That is not present in the original.
 * In the second observation mentioned above (modified but playable .MOV files from a Canon EOS 500), I had renamed the *.THM files to *.jpg. These had also been modified. The most conspicuous changes were that Exif byte order had been changed from Little-endian (Intel, II) to Big-endian (Motorola, MM), and that the block

About : uuid:faf5bdd5-ba3d-11da-ad31-d33d75182f1b
Date Acquired : 2013:01:21 23:46:36
 * had been introduced. Note that the uuid is identical to the one above, and that the date is different. It is puzzling that the dates are close in time, but nevertheless before the undamaged backup was taken.
 * A web search for similar problems came up with two hits that resemble the problems I have experienced:
 * http://feedback.photoshop.com/photoshop_family/topics/_3gp_corruption_error_in_lightroom_4_beta
 * http://answers.microsoft.com/en-us/windows/forum/windows_7-pictures/windows-live-photo-gallery-corrupting-3gp-files/62c48718-4957-4566-b4f3-f1e93d933e2d
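The md5-based pruning step described in the troubleshooting above (deleting restored backup files that are identical to current ones, so only modified files remain for inspection) could be sketched in Python like this; the directory arguments are placeholders:

```python
import hashlib
import os

def md5_of(path, chunk_size=1 << 20):
    """Return the hex md5 digest of a file, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

def files_identical_to_current(backup_dir, current_dir):
    """List files under backup_dir whose md5 matches some file under current_dir."""
    current_digests = set()
    for root, _, names in os.walk(current_dir):
        for name in names:
            current_digests.add(md5_of(os.path.join(root, name)))
    matches = []
    for root, _, names in os.walk(backup_dir):
        for name in names:
            path = os.path.join(root, name)
            if md5_of(path) in current_digests:
                matches.append(path)
    return matches
```

Files returned by files_identical_to_current are candidates for deletion; everything left behind in the backup has been modified in some way and is worth examining.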

Working hypothesis: The troubleshooting clearly shows that this is not a case of random data corruption caused by faulty disks or cosmic rays, and I find it very unlikely that it is caused intentionally by malware. I have probably, at some point in time, accessed the files with a program installed on my PC which silently modifies both their metadata and the way video and audio chunks are laid out in video files. My top suspect was XnView, which I use a lot. However, I've tried to reproduce the problem with XnView, and my currently installed version does not modify the files it accesses in its file explorer. Other candidates are Windows itself, with its image and video display functionality. I've tried accessing the files with the Windows 7 image viewer and the Windows Xp viewer (from a virtual machine), without being able to reproduce the problem. I suspect that the uuid referred to in the details of the troubleshooting, faf5bdd5-ba3d-11da-ad31-d33d75182f1b, identifies the culprit. A web search for the uuid leads to various images, but does not appear to be strongly associated with reports of data corruption. I have not been able to identify the program associated with the uuid; a search at https://mikolajapp.appspot.com/uuid/ returns no results for {faf5bdd5-ba3d-11da-ad31-d33d75182f1b}. I have used regedit to search for the uuid in my Win 7 installation, and in the Xp virtual machine, with no hits. This does not exclude that it is a previous version of a currently installed program.

And that's where I am right now. The incident certainly feeds my paranoia. If I knew the cause, I would be in a position to eliminate the problem.

Suggestions on how to proceed to reach an accurate diagnosis of the cause of the data corruption would be highly appreciated. I believe I have mentioned the possible suspects, but should add that I use Adobe Lightroom and Exiftool a lot, and previously used jhead and a Canon program for adjusting white balance etc of raw files.

In particular, it would be helpful if someone could find out what program the uuid faf5bdd5-ba3d-11da-ad31-d33d75182f1b is associated with.
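Pending an identification of the program, one way to at least take inventory of which files already carry the suspect uuid is to scan them for its 16 raw bytes (MP4-family `uuid` boxes store the identifier big-endian, which is what Python's `uuid.UUID(...).bytes` yields). A minimal sketch, with folder and file patterns as placeholders:

```python
# Sketch: flag video files that embed the suspect uuid's raw bytes.
# This only detects the marker; it says nothing about which program wrote it.
import uuid
from pathlib import Path

# The uuid from the troubleshooting notes, as 16 big-endian bytes.
SUSPECT = uuid.UUID("faf5bdd5-ba3d-11da-ad31-d33d75182f1b").bytes

def contains_suspect_uuid(path):
    """Return True if the file contains the suspect uuid's raw bytes."""
    data = Path(path).read_bytes()  # phone videos are small enough to slurp
    return SUSPECT in data

def scan(folder, pattern="*.3gp"):
    """List files under `folder` matching `pattern` that embed the uuid."""
    return [p for p in sorted(Path(folder).glob(pattern))
            if contains_suspect_uuid(p)]
```

Running `scan()` over the photo archive would separate the already-modified files from the pristine ones, which might help correlate the corruption dates with installed software.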

Thank you in advance!

--NorwegianBluetalk 11:43, 19 October 2014 (UTC)


 * I was able to reproduce this by opening the properties of a JPEG image in Explorer, going to the Details tab, and altering information there (for example, adding a tag or a star rating). This adds EXIF data to the image itself, and the UUID faf5bdd5-ba3d-11da-ad31-d33d75182f1b appears in the data that it adds. -- BenRG (talk) 01:39, 20 October 2014 (UTC)


 * Thanks! I'll test tonight whether the same procedure corrupts the videos. I often access the Details tab for checking things, but rarely modify the EXIF data from there. But your experiment shows that Windows itself is making the changes, probably via calls from other applications. This was very helpful! --NorwegianBluetalk 04:54, 20 October 2014 (UTC)


 * I can confirm that modifying the EXIF data of the video files from the Details tab indeed caused corruption of the .3gp video files, with changes very similar to those I described in the details of the troubleshooting above. I then used ffmpeg to move the data to an .mp4 container:

ffmpeg -i VIDEO.3gp -vcodec copy -acodec copy VIDEO.mp4
 * and again modified the EXIF data from the Details tab. This time, the file was still fully functional. All I need to do now is to move the small number of .3gp files that I have to .mp4 containers, and the problem is solved. Thanks, BenRG, for giving me peace of mind! --NorwegianBluetalk 21:45, 20 October 2014 (UTC)
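The remux step above could be batched over a folder of .3gp files with a small driver script. A sketch, assuming ffmpeg is on the PATH; the folder name is an example, and by default it only builds the command lines without running them:

```python
# Hypothetical batch version of the remux step: build one
# "ffmpeg -i X.3gp -vcodec copy -acodec copy X.mp4" command per file.
import subprocess
from pathlib import Path

def remux_cmd(src):
    """Build the stream-copy command line for one .3gp file."""
    src = Path(src)
    return ["ffmpeg", "-i", str(src),
            "-vcodec", "copy", "-acodec", "copy",
            str(src.with_suffix(".mp4"))]

def remux_all(folder, run=False):
    """Remux every .3gp under `folder`; dry-run by default."""
    cmds = [remux_cmd(p) for p in sorted(Path(folder).glob("*.3gp"))]
    if run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # raises if ffmpeg fails
    return cmds
```

Since `-vcodec copy -acodec copy` only rewrites the container, the conversion is lossless and fast.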

Looking for an automatic exposure blending tool
I've recently found a large lot of my family's vintage photos in a garage and I'm currently scanning them. Most of these are snapshots taken throughout the 1930s and 1940s, but it turns out that a number of these paper prints are actually exposure bracketing sets of even older photographs, where photographic plates taken c. 1900-1930 (an educated guess rather than a broad one, based not only upon wardrobes and hairdos, but especially on the specific ages of the family members portrayed, which make it clear that some of the photographs are definitely pre-WWI, going up until the late 1920s) were photographed again in order to transfer them from plates to paper prints. Because the images lost a lot of dynamic range in this crude optical copying process, the photographer took bracketing sets of two or three shots per original plate, each at a different f-stop, so that one exposure has good shadows, one good mids, and one good hi-lights, whereas the rest is lost in any single shot.

So now I'm looking for a free exposure blending tool which would combine the dynamic range for every exposure bracketing set (so that it would use the optimal exposure for every area) and export me a tone-mapped regular BMP, TIFF, or JPG on the other side. I've spent three very frustrating hours tonight trying out FDRtools and Enfuse  after seeing reviews with rather good example results from these two. But FDRtools always crashes on me with a runtime error when I'm trying to load the images, or, at the very latest, when I'm clicking 'Edit', and Enfuse should rather be called *CON*fuse because it's not really a program, but more of a weird programming language that is *WAY* beyond me. I can't even tell how to make this Enfuse thing operative or access my image files somehow. So, what else would be out there to do this kinda thing? --84.180.255.151 (talk) 00:38, 17 November 2014 (UTC)


 * Don't know about FDRtools but I found Enfuse pretty simple: I just stupidly followed the instructions. Are you using Windows or Linux? Think I got the know-how from a page which is currently down for maintenance, so I can't be sure. On (say) Ubuntu's OS (Linux) one just downloads it and it's good to go. Have another try. As the Buddhists recommend: start with a quiet and peaceful mind. More haste equals less speed. --Aspro (talk) 01:52, 17 November 2014 (UTC)


 * I'm on Windows XP here. Spent 90 minutes on Enfuse now after your encouragement and reading your linked page by means of Wayback. All I manage to do is make Enfuse.exe tell me on double-click that it's not a 32-bit application, and when I'm trying to use the droplets, it says it can't find Enfuse.exe anywhere, even though they're in the same folder. --84.180.255.151 (talk) 02:35, 17 November 2014 (UTC)


 * Uhm. It is well known that Microsoft doesn't like people like you to use free software! Microsoft would rather you spend £500 on a photo-shop application. So, create a live Ubuntu Linux memory stick and run Ubuntu Linux. That will not affect your XP installation at all. Oh, and the weird programming language you're referring to is probably BASH. Forget it. On Ubuntu, just make sure you have the Hugin suite downloaded and installed, then follow this tutorial: Creating HDR Images with Enfuse & Hugin. No programming skill required on Ubuntu. Just switch Windows XP off and then back on again if anything doesn't go to plan. --Aspro (talk) 02:58, 17 November 2014 (UTC)
 * Alternatively, you could use a Ubuntu LiveCD .--Aspro (talk) 03:06, 17 November 2014 (UTC)
 * But wouldn't I need 64-bit hardware to begin with in order to run a 64-bit application such as Enfuse? --84.180.255.151 (talk) 03:13, 17 November 2014 (UTC)


 * Enfuse and Hugin have both 32 and 64 bit versions. I currently use them on a 32 bit Dell desktop. --Aspro (talk) 03:23, 17 November 2014 (UTC)
 * YES!!! A thousand thank yous! I had to google a bit to find the 32-bit version (because it's not linked directly from the Enblend-Enfuse main site nor Sourceforge), then fiddle a bit to find out that (unlike, for instance, Mencoder) it only works if both Enfuse.exe and the photos are in the C:/Documents and settings/user directory that I'm prompted with in the DOS prompt... but now it *WORKS*! It's amazing to see what tonal range Enfuse can recover by combining three basically two-value hi-contrast exposures from 70 years ago of the same original plates that in turn were taken a hundred years ago! :D It's just that aligning several paper print scans is a bitch compared to aligning several digital shots taken with a tripod... --84.180.255.151 (talk) 04:16, 17 November 2014 (UTC)
 * Alternatively, you could use the Hugin GUI which does link to the 32 bit version on the Hugin sourceforge site and comes with Enblend and Enfuse. Nil Einne (talk) 04:53, 17 November 2014 (UTC)
 * BTW, you appear to be correct that the Enblend/Enfuse website doesn't seem to link to the 32-bit version in any real way. The simplest way when this is a problem on Sourceforge is IMO to look for it yourself. An easy way to find it is if you click on the download link, this will normally take you to a download page which will tell you the download is starting soon. Click on the project name (Enblend in this case). This will take you to the project repository, e.g. for 'Enblend'. You should see some links like 'summary', 'files', 'reviews' along a line somewhere in the middle or middle top of the page (below the non-project Sourceforge stuff). If you click on 'files', you will be taken to the repository. Sometimes the repository may be a bit confusing, but you can often tell by the date and name where to look. Don't be afraid to use the back button if you end up at the wrong place. Alternatively, if you hover over the link for the latest version, you can see where that is and guess where to look. In this case if you click on 'enblend-enfuse' and then 'enblend-enfuse-4.1' you will end up where you can find the 32-bit and 64-bit versions. I found the Hugin 2014.0 and 2013.0 links the same way before I noticed the 2013.0 was linked on the main Hugin page. It isn't unheard of for software to only provide the source tarball even if a precompiled binary exists, and sometimes you may be looking for older versions, or RC versions, or whatever, which often aren't well linked, so it's helpful to know how to navigate the Sourceforge repository in any case. Nil Einne (talk) 05:08, 17 November 2014 (UTC)
 * (EC) You can download a 32-bit Windows binary of Hugin here or here, depending on whether you want an RC version or the latest non-testing version. The latter link BTW is under the "Pre-compiled versions" section under Windows: Official 2013.0.0. I don't have a Windows XP install to test, but I'm fairly sure it will just work if you get the right version. Considering the details here, I would suggest the non-Python installer (i.e. RC4 or latest non-testing). BTW, the reason why it didn't work here doesn't seem to have anything to do with any Microsoft dislike for free software (unless you count the lack of a package manager or simple way to compile stuff from source, but I think even many non-technical *nix users find compiling stuff from source often isn't so simple, hence the proliferation of package managers), but all to do with the fact that SourceForge didn't provide the right version. From my own testing, I think it doesn't detect whether you have a 32-bit or 64-bit version of Windows, only that you have Windows, and so provides you the 64-bit version, I'm guessing based on the choices of the Hugin SourceForge maintainer as the default version for Windows. The reason may be that while there are ways to try to detect 64-bit vs 32-bit Windows via the useragent (whether the browser is 32-bit or 64-bit), these may not be entirely reliable. You can perhaps partially blame Microsoft here in that while they did things a certain way in IE and many followed, I'm not sure if they ever published this as a recommendation for others to follow. Also it seems there are some cases where even IE may provide no clue that the OS is 64-bit, although I'm not sure that people using Enterprise mode are likely to be a significant concern for providing the right version for software installs. And I'm not sure whether Apple would have followed Microsoft's recommendations in Safari for Windows even if they did exist.
Or for that matter, even if there were an entirely reliable way to detect Windows bitness from the useragent, whether SourceForge would use it. You can perhaps also fault Microsoft for not allowing universal binaries, but there are a number of reasons why they may have chosen not to do so, and it's unlikely free software considerations even came into them. (And I'm fairly sure software providers could simply use a 32-bit shim which chooses whether to install a 64-bit or 32-bit version.) Edit: oh, except for the links, this seems to mostly apply to Enblend/Enfuse as well. Nil Einne (talk) 04:51, 17 November 2014 (UTC)
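On the alignment headache mentioned above: the Hugin suite ships align_image_stack, which can pre-align a bracket set before enfuse blends it (align_image_stack writes numbered files like aligned_0000.tif, aligned_0001.tif, ...). A sketch of the two-step pipeline; file names are examples, and running it for real assumes both tools are on the PATH:

```python
# Sketch of the align-then-fuse workflow for one exposure-bracket set,
# using Hugin's align_image_stack and enfuse. Dry-run by default: the
# commands are returned as lists so they can be inspected before running.
import subprocess

def bracket_cmds(scans, out="fused.tif", prefix="aligned_"):
    """Build the two command lines for one bracket set of scans."""
    align = ["align_image_stack", "-a", prefix, *scans]
    # align_image_stack numbers its outputs prefix0000.tif, prefix0001.tif, ...
    fuse = ["enfuse", "-o", out] + [f"{prefix}{i:04d}.tif"
                                    for i in range(len(scans))]
    return align, fuse

def fuse_bracket(scans, out="fused.tif", run=False):
    """Return the commands; execute them when run=True."""
    cmds = bracket_cmds(scans, out)
    if run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

This mirrors the Enfuse & Hugin tutorial workflow mentioned earlier in the thread, and spares the manual alignment of hand-held paper-print scans.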