Wikipedia:Reference desk/Archives/Computing/2009 March 15

= March 15 =

Cocoa Finder in 10.6
The page on 10.6 makes a note that the Finder for this version will be done in "Cocoa" instead of "Carbon", but the pages on Cocoa and Carbon don't seem to really explain why this would be done or what good it would do. Could someone familiar with the APIs explain the reasons? Thanx, 76.117.247.55 (talk) 00:18, 15 March 2009 (UTC)
 * My best guess answer: 64-bit compatibility (as one of many contributing factors). Carbon is not fully compatible with 64-bit programs under Mac OS 10.5.  LobStoR (talk) 05:02, 15 March 2009 (UTC)


 * Lacking a single "right" answer (which I suspect this question doesn't have), that's perfect. Thanx, 76.117.247.55 (talk) 03:09, 16 March 2009 (UTC)

Graphics cards
Question: Are 2x GeForce GTX 260's in 2-way SLI faster than 3x GeForce 9800 GTX+ in 3-way SLI?

(assuming the games support SLI)

Thanks for your time! ZX81  talk  01:21, 15 March 2009 (UTC)
 * That's a lot of text... You may want to begin or end with your question, to make it easier to find (rather than bury it in the middle). That being said, I assume your question is: "Are 2x GTX 260's faster than 3x 9800GTX+?"
 * Here's an article at Tom's Hardware comparing 3x GTX260s and 2x GTX280s. The answer to your question depends on whether or not the programs you plan to use will benefit from that type of SLI setup... If you are using this setup to play games, note that not all games will receive a benefit that scales linearly to the number of graphics cards you have.
 * Another note... setups with NVIDIA SLI and multiple monitors may be more complicated than you think... you might not be able to simply plug in any monitor to any video card while SLI is enabled... check out this Google search for more information.
 * LobStoR (talk) 04:45, 15 March 2009 (UTC)
 * Yes, that was the question. Too often we see questions on the reference desks that don't give enough information, so more questions need answering before the original one can be; I wanted to try and give as much as possible, but I went a bit overboard - sorry! I've drastically shortened it now.


 * Thank you for the links, especially the SLI and multiple monitors which took me to NVIDIA's SLI Multi-Monitor Support page which shows what I'm trying to do with the 3 monitors isn't even possible whilst it's in SLI mode. The question about the graphics cards is still valid though as the newer drivers now support 2 monitors so I may just have to use that instead of 3. Thanks! ZX81  talk  13:16, 15 March 2009 (UTC)


 * This kind of question is VERY tough to answer - it's so dependent on the nature of the game and where its bottlenecks are. SLI improves pixel throughput - but it makes geometry performance worse - so adding a third card (if that's even possible) will improve pixel rates - but not geometry.  If you use your three screens to get the same field-of-view and just push up the resolution - you get a different answer than if you widen your field of view and keep the same resolution.  It's really not the kind of thing where a definitive answer is possible.  I'd stick with two cards - simply because that's what most game-writers develop with - and the most that routine play-testing happens with.  Hence you're less likely to run into odd bugs due to inadequate testing of games and drivers in three-way mode.

Thanks for the responses and because it turned out I can't run 3 monitors on an SLI config I've decided to stretch my budget a little further to buy a single GTX 295 to drive the main 24" monitor and an additional GTS 260 for the 2x 19" ones. Thanks again! ZX81  talk  16:33, 15 March 2009 (UTC)

DVD video creation
I'm looking to burn a movie to a DVD so it can be playable in a standard DVD player. However, I don't know what the steps are to do this. Are there any instructions for software that can do this on a Mac? --12.146.80.2 (talk) 01:30, 15 March 2009 (UTC)


 * Oh, sorry for not having iDVD installed!--12.146.80.2 (talk) 13:52, 15 March 2009 (UTC)


 * The easiest way (in the sense that most Macs have the program, not that the program is easy to use) is to use iDVD. Googling "burn DVD without iDVD" turned up this option that seemed like it might work as well for your purposes. --98.217.14.211 (talk) 15:18, 15 March 2009 (UTC)

You might like to look at Toast. They also sell one called Popcorn which is cheaper. Often when you buy a burner they come with DVD burning software, like these. I'm assuming that if you haven't got iDVD then maybe your Mac is a bit old and hasn't got a built-in burner? 91.111.85.208 (talk) 20:00, 15 March 2009 (UTC)

I'm spacing out on this Q...
I've noticed that the total size of the files on a standard commercial DVD isn't any less if the DVD contains a black and white movie (gray-scale, technically) than if it contains a color movie of the same length. Why is this? Do they include more detail to take up the space which would otherwise be occupied by color info? StuRat (talk) 03:43, 15 March 2009 (UTC)


 * My guess is that the recording speed (and reading) of a DVD is constant in time. This is just a guess - will be interested in answers from those "in the know".  --Scray (talk) 04:24, 15 March 2009 (UTC)


 * Short answer: DVDs are always encoded with color. For more information, I would recommend that you read about the MPEG-2 codec - which is used on DVDs.  Also, I recommend using a more useful section title than "I'm spacing out on this Q..."  LobStoR (talk) 04:51, 15 March 2009 (UTC)


 * So, to help visualize this, it's like they have RGB codes for every pixel, even when the 3 values are identical in every case? You'd think someone would have fixed this by having an instruction in the header that means "this is a gray-scale file, so only one value is given for each pixel, which represents the Red, Green, and Blue values for that pixel".  It seems like such a simple thing, and would allow 3 times as many black and white episodes to fit on a single DVD. StuRat (talk) 05:21, 15 March 2009 (UTC)


 * It is possible to reduce certain video formats to grayscale and thus save a lot of space, but I think that Scray may be partly right in that this is a spin issue. A DVD player might not be as effective at buffering a disk with data compressed three times more than usual, or it might just be a precaution, or even just a marketing decision. Perhaps the King of DVD is under the impression that episodes of the Brady Bunch would produce more revenue if sold in sets of five instead of thirty. 210.254.117.186 (talk) 14:38, 15 March 2009 (UTC)


 * (ec) Actually - it would make very little difference. Firstly, color information is stored at much lower resolution than intensity (they don't use RGB exactly) so color takes up maybe a quarter the space of the intensity data.  Secondly, the compression routines are optimized to preserve intensity information at the price of color - so the savings would be MUCH less than 3:1...maybe 10% at most.  But, you might argue, even 10% is worth having.  But that's not all there is to it.  The people who set these standards have to consider keeping software and hardware complexity to a minimum - over the course of the lifespan of the DVD standard, decoders for the format have to be implemented by many companies over many platforms and adding complexity where the benefit is small is a really bad idea. SteveBaker (talk) 14:39, 15 March 2009 (UTC)

32-bit memory limits and GPUs
I know that a 32-bit computer can only use 3.3-ish GB RAM, but does the same limit apply to graphics memory as well? 144.138.21.184 (talk) 05:16, 15 March 2009 (UTC)Will


 * The absolute limit to the address space for a 32 bit computer is 4Gb (that's 2^32). Into that 4Gb of space the system's memory manager must map various things - most obviously your RAM chips, but there's also the PCI address space and various other bits of hardware. That's where the roughly 3.3 Gb limit comes from (although that can, to an extent, be dodged with Physical Address Extension). Now when you say "graphics memory" we need to discuss what that really is.

First let's assume you have a nice video card like an Nvidia (not a cheap solution like onboard Intel, or a unified memory architecture like an Xbox). That card has its own RAM chips, which are visible to its own CPU (a video card really is a powerful independent computer, not a mere dumb peripheral). PCI devices can choose to reserve a chunk of the host system's address space for themselves - so if the host accesses that chunk of memory then the memory controller issues a memory-read or memory-write over the PCI bus (rather than over the DRAM bus) and that operation is serviced by the PCI peripheral that has claimed that chunk. PCI devices can choose to map all of their memory in this way (so it's just a 1:1 mapping) or they can choose to map only part of it. If they choose to map all of it, that all takes away from the 4Gb available for other things.

Mapping all the memory makes life easy (mostly) for the author of the device driver, as he can just manipulate the inside of the card's memory as if it was (slow) local DRAM. But graphics cards are advanced devices with minds of their own (which means they're doing things all the time), so manipulating their memory means you have to handle concurrency and synchronisation between the main CPU and the card's own. So often cards don't expose all their memory, just a chunk they want to use to communicate back and forward with the system CPU. 
Right now cheap consumer cards seem to have around 64Mb of onboard RAM, so they probably map most or all of that into the CPU space. But high-end cards are getting on for 1Gb, and if they map all of that then they're consuming a big chunk of host address space (space you can't use to access local DRAM). One option is simply to go to 64bit, where the address space is vast and there's acres of breathing room for everything to be mapped into. Alternatively the card can choose to map only a portion of its memory into host space, and have some registers that allow the host to change that mapping (which is called a "window") on request.  Accessing memory through a window is a pain for the device driver programmer, as he has to organise his operations accordingly (old graphics driver authors are used to this, as the same thing used to be necessary with older graphics interface standards).  So the short answer is "4Gb is the limit, including all mapped graphics memory, but different graphics cards map their graphics memory into that 4Gb in different ways". 87.115.143.223 (talk) 11:42, 15 March 2009 (UTC)


 * Out of interest I checked the memory map on my x86 Linux machine (but much the same happens on Windows and OS-X) - it has a 128M nVidia GeForce graphics card. Here's what lspci -v shows for it:

02:00.0 VGA compatible controller: nVidia Corporation NV44A [GeForce 6200] (rev a1)
        Flags: bus master, 66MHz, medium devsel, latency 248, IRQ 19
        Memory at e4000000 (32-bit, non-prefetchable) [size=16M]
        Memory at d0000000 (32-bit, prefetchable) [size=256M]
        Memory at e5000000 (32-bit, non-prefetchable) [size=16M]
        [virtual] Expansion ROM at e6000000 [disabled] [size=128K]


 * Which means it's taking four distinct chunks of the CPU's memory map. I'd guess the two 16M ones are for command queues and control registers, and that big 256M one is for bulk data. Notice that the 256M is actually double the size of the card's 128M of memory: a common trick hardware designers do is to duplicate a block of memory, one with pages marked as cacheable and one with pages marked as non-cacheable, so software can use either mode as appropriate. I'm sorry if this seems like a diversion, but my point is to show that there frequently isn't that 1:1 mapping of device<-->system memory. 87.115.143.223 (talk) 14:09, 15 March 2009 (UTC)

Alan turing
Who is alan turing —Preceding unsigned comment added by Shushank22 (talk • contribs) 06:38, 15 March 2009 (UTC)


 * Perhaps our article Alan Turing will answer your question. –  7 4   06:51, 15 March 2009 (UTC)

Ubuntu vs. Xubuntu
I have heard that Xubuntu is an extremely lightweight operating system, so when I get a new notebook, I was planning to install it. However, I just found a model that comes with Ubuntu. The notebook is very limited: 512MB RAM, and 4 GB hard drive space. Is Ubuntu also lightweight enough to work on that notebook? (I know that Windows or Ubuntu will work, but they'd just take up a lot of space). Thanks,  Genius  101 Guestbook  13:55, 15 March 2009 (UTC)


 * Ubuntu 6.10 ('edgy') requires a minimum of 400Mbytes of disk space - and a 'standard' installation needs 2Gbytes. I don't know about the latest version - but I doubt it's doubled in size - so it'll fit.  The question is: How much space do you need for your stuff?  A 4Gbyte drive is pretty pathetic by modern standards.  However, (presuming you have a USB port) you can easily get the space back by buying a cheap 'thumb-drive'.  If you Google "cheap thumb drive" - you'll see that they are currently selling for around $2 per gigabyte!  So IMHO, go ahead and install Ubuntu in all of its glory - and splurge $9 to get one of these 4Gbyte thumb drives to keep your 'user files' on...heck, go nuts, spend $20 and get an 8Gb drive - a big chunk of the cost is postage, so you might want to check out a nearby brick-and-mortar store instead.  SteveBaker (talk) 14:16, 15 March 2009 (UTC)


 * ...Or just spend $50 or so and upgrade the laptop harddrive to something more reasonable. I think I got a 300GB laptop drive for about $80 not long ago. Usually these sorts of things are best bought separately from the laptop itself (ditto RAM) as you can find them quite cheap elsewhere on the 'net (e.g. newegg.com), though you have to do the labor yourself... --98.217.14.211 (talk) 15:33, 15 March 2009 (UTC)


 * I hadn't thought of the thumb drive, but that sounds like a good idea. Thanks!  Genius  101 Guestbook  16:15, 15 March 2009 (UTC)


 * Sounds like the OP is buying something like an EEE PC, which has a 4GB internal solid-state disk and no place to put a hard drive. Rather than a thumb drive sticking out the side you could consider an SDHC card.  There are also replacement mini-PCI SSDs available up to 64GB or 128GB for most of those netbook models now.  Not all of them - some (including the EEE 701) have SSDs soldered onto the board.  As for Ubuntu, yeah, the base install fits on a CD-ROM, but it is missing a lot of stuff (they fill most of the install ISO with Office-like bloatware) and you will be in apt-get download hell for months after your initial installation.  I'd expect Xubuntu to be even worse.  75.62.6.87 (talk) 09:18, 16 March 2009 (UTC)

Mobile photo sharing
Hi, I'm looking for a photo sharing service accessible from a mobile (i.e. has mml) that has some sort of a private share method in which the viewer doesn't have to be a member, for example they just have to input a predefined password (that I have taught them) to see the images. Is there anything like that? All the ones I've found only have private sharing between members... I think. They can be pretty vague on how it works sometimes. Thanks! 210.254.117.186 (talk) 14:28, 15 March 2009 (UTC)
 * And to clarify, apparently programs like coppermine do this for incoming links to private albums, but I don't have any server space so I'm looking for an online simple gallery-type service. 210.254.117.186 (talk) 15:00, 15 March 2009 (UTC)
 * In general, I don't think you will find one unless it is provided by your cell phone manufacturer. I would try Facebook or Myspace, which might be built into your cell. Of course, there are all the photo sharing sites (e.g., Photobucket) which also require a username. Magog the Ogre (talk) 02:27, 16 March 2009 (UTC)
 * Photobucket has the function to password-protect albums like I need (I checked it yesterday), but its mobile support is lacking, and you're limited to tiny thumbnails and a really ugly interface. I'm pretty sure the feature is pretty common; the only problem is that I have to register and try out each service to find out what it's like, which isn't really cool. 210.254.117.186 (talk) 02:43, 16 March 2009 (UTC)

Using Opteron 2000 series in a 4 socket motherboard
I bought a TYAN S4985 Quad Socket motherboard recently in order to build a 'god' gaming box.

In order not to rob a bank I am trying to figure out if it's possible to run two 2200 or 2300 series Opterons in two sockets, leaving 2 sockets empty, until the prices on the 8 way capable Opterons drop to a level I will pay. Currently the board is just sitting there while I look to ebay for a good Opteron deal.

There's not much information to be found regarding this issue around the web.

Also is it possible to use three 8 way opterons and IBM’s CPU Pass Thru card on a motherboard like mine and then reap the benefits?

85.81.121.107 (talk) 16:44, 15 March 2009 (UTC) Mopteron

Me again :-)

I found this website for a company called ISI, selling this

http://www.isipkg.com/cost_reduction_modules.php

Wondering if this is what IBM is calling a cpu pass thru card? 85.81.121.107 (talk) 19:42, 15 March 2009 (UTC)

Me again....I found this link, indicating that the 2000 series do indeed work, sweeeeet. Especially now the 2376 is so damn cheap...

http://www.spec.org/cpu2006/results/res2007q1/cpu2006-20061224-00182.html

This just makes me wonder why AMD does not advertise that you can go from one 2000 series to two before you have to switch to an 8000 series for 4-way fun. Anyway I'll buy the 2376s new and a single 2200 for BIOS upgrading in case my S4985 needs a BIOS update to recognize the Shanghais.

If anyone can confirm that the 2300/2200 does work in a quad socket board, please come forth. I also checked up on the CPU pass-thru card; it apparently is a special CPU 'daughter' card for a specific IBM motherboard. It looks very much like the ones that were shown in the Istanbul YouTube videos.

Would be sexy though to have a drop-in pass-thru socket 'enhancer' for three-way play.

85.81.121.107 (talk) 02:55, 17 March 2009 (UTC) Mopteronto

annotating pdf files
Is there any free program out there that I can download that would let me annotate a pdf file? I mean this as in the ability to highlight certain areas, write things in the sidebars or on white space between lines, or even draw lines or circles around things, and then save it for reference later or print it. —Preceding unsigned comment added by Yakeyglee (talk • contribs) 16:44, 15 March 2009 (UTC)


 * If you only care about a single page, Inkscape can open almost all PDFs and then you can draw text and graphics on it to your heart's content (and then save back as PDF). Unfortunately Inkscape will only open a single given page of a PDF, so (while it is possible to chop and reassemble PDFs using free tools) I guess you might find this tiresome. 87.115.143.223 (talk) 17:01, 15 March 2009 (UTC)


 * Foxit Reader will allow you to do this. The free version will mark the PDF with an evaluation stamp in the top-right of every page, which may or may not matter to you. — Matt Eason (Talk • Contribs) 17:58, 15 March 2009 (UTC)

Am I allowed to be using a pirated version of windows?
I have a dell computer and when I bought it, it came with a cd but I lost it. Am I allowed to use a fake cd from my friend? I even have a sticker on the side of my computer that says certificate of authenticity.--75.187.113.105 (talk) 20:41, 15 March 2009 (UTC)


 * Windows licensing is very complex, and we're not allowed to give legal advice. Dell will, however, sell you a recovery CD at a pretty low price (something like $10). 87.115.143.223 (talk) 21:00, 15 March 2009 (UTC)


 * If you have an authentic Windows serial number then I doubt anyone cares -- it's not really the disk you're buying, but the license to the software. The sticker on the side doesn't mean anything in this case. --98.217.14.211 (talk) 22:51, 15 March 2009 (UTC)


 * The sticker on the side will be an OEM licence. For that licence to work with the disk it has to be an OEM disk (and most likely the same class of OEM, and for the largest manufacturers the same OEM). Windows licences, even for products that are (to the end user) the same, are varied and mutually incompatible. It might work, but for a Dell I doubt it. 87.115.143.223 (talk) 23:46, 15 March 2009 (UTC)

Yes you are, as long as it is the exact same version of Windows. I even had a friend who got a computer with the sticker at an auction: he called in to Microsoft who promptly sent him a Windows CD in the mail. Just make sure you use the same registration number as the sticker on the side of your computer. Magog the Ogre (talk) 02:23, 16 March 2009 (UTC)
 * Getting a replacement CD is not the same as using a pirated CD. Taemyr (talk) 03:36, 16 March 2009 (UTC)
 * Unless I am sorely mistaken, each Microsoft replacement CD has its own license. One may use an old CD for his/her system if s/he owns the license. Magog the Ogre (talk) 03:39, 16 March 2009 (UTC)

Can this fax?
When I bought my current system, I was still on dial-up, and the modem card also came with some faxing software, which I used only occasionally. Today I'm on DSL with a home network: Computer connected to the router, router connected to the DSL modem, DSL modem connected to the phone line -- but I am faxless. From the point of view of the telephone wire, it goes wall -> DSL modem -> phone, and not through the computer case.
 * Option A is: buy a phone line splitter, and run a short wire from the splitter to each modem. This would allow the fax modem to "see" a phone line and use it when needed. Or would it?
 * Option B might be: run the phone line from the wall, to the fax modem, to the DSL modem -- daisy chain them both onto the same run.
 * Q1: Which of those is better, or worse, than the other? (Is one of them a spectacularly bad configuration?)
 * Q2: In the first option, I could probably plug the phone into either modem, yes? But perhaps connecting the phone to the fax modem is a bad idea, as it might require one of those line filters that I have to have on every other phone in the house.  Or do I have to have that upstream of the fax modem anyway, even without a phone on it?

Thanks in advance! --DaHorsesMouth (talk) 23:04, 15 March 2009 (UTC)


 * So, to clarify, it seems you don't have a DSL filter (or, then, that the DSL filter is inside your DSL modem's casing)? If that's the case, are you confident that the phone connection is really an analog phone (and not a digital phone connection supplied by the internet company and carried over the DSL)? 87.115.143.223 (talk) 00:10, 16 March 2009 (UTC)


 * Yes to both. There is no DSL filter required on the wire that goes to the DSL modem itself, just on every other phone jack in the house; this according to Qwest.  Am I confident in the phone?  Yeah, I've been using it as a telephone for a couple of decades, and it worked fine as a phone and a fax when the wire went to the back of the computer case (same computer) and thence to the phone on the desk.  Same phone, same wire, same wall jack.  --DaHorsesMouth (talk) 01:58, 16 March 2009 (UTC)


 * Your best bet is probably to split the "phone" output from the DSL modem into both your phone and fax-modem. There's no guarantee this will work if, for instance, you have digital phone service, but I don't think it would cause any line problems either. As an alternative, you might want to consider using an internet fax service to send and receive faxes instead. –  7 4   00:57, 16 March 2009 (UTC)


 * Split it "downstream" from the DSL modem? Hadn't thought of that option.  Will try that after I buy my splitter...  Thanks! --DaHorsesMouth (talk) 01:58, 16 March 2009 (UTC)

Comparing variables in C++
I am writing up a program in C++ and at one point I need to check if ten variables (call them t1,t2,...,t10; they are all floats) are equal or not. If they are all equal, then I need to execute some commands; otherwise the program keeps going. My question is: is it better to do

if ((t1==t2) && (t2==t3) && ... && (t8==t9) && (t9==t10))

compared with

if (t1==t2) { if(t2==t3) { if(t3==t4) ... } }

as nested if statements? In the second case, I know that not all ten conditions will be checked unless all the variables have the same value. Does that also happen in the first case? Will all ten conditions always be checked in the first case, or will the checking stop as soon as the first false is found? The reason I ask is that this is all inside a loop which will be executed about 10^200 times; usually the difference in overhead would not matter, but in my case, if there is a difference, the difference in execution time will be huge. So obviously I want the faster one. I know that if statements can be costly, but which one is less costly?

I am calculating a value recursively and I want the loop to stop if the value doesn't change for ten consecutive iterations. If anybody has any better suggestions (smarter/faster) on how to test ten variables for equality with each other, or how to make the loop stop, let me know. BTW, I am only an intermediate C++ user and I am doing this for a math project. I am using a do while loop working on a dual processor Linux machine using the built in g++ compiler. Thanks!-Looking for Wisdom and Insight! (talk) 23:25, 15 March 2009 (UTC)


 * Evaluation of && and || is left->right lazy, so f&&a where f==0 will never evaluate a, and t||b where t==1 will never evaluate b. There isn't an obvious speedup, and the two approaches you describe will probably be equivalent (as will "clever" stuff like using arithmetic in place of some booleans, or using ?:) - compilers are good enough now that cleverness just fools them into producing worse code. There might be a slight speedup with your first method, if the compiler has an optimisation targeted at such a (fairly common) case. The only speedup I can suggest is to consider the relative probabilities of each comparison being false; if they're all the same, just do them as you've done in logical order.  But if some comparisons are more likely to be false than others then do those before the rest - that way you'll reject the failing cases sooner (on average). 87.115.143.223 (talk) 00:05, 16 March 2009 (UTC)


 * Why exactly do you have ten variables to be tested in a loop? If this is simply your test for "ten consecutive non-changes" consider rewriting it to keep a count of non-changes instead. That is, you calculate value x, then calculate value x+1 and compare them. If equal, increment non-change-count, else set non-change-count to 0; then loop. Exit the loop when non-change-count reaches 9. That loop structure uses a total of 2 comparisons per iteration instead of 10. –  7 4   00:50, 16 March 2009 (UTC)


 * I think you have one other "interesting" problem to solve using your primary algorithm, that being, "On any given iteration through the loop, how do you know which t to set?"
 * Further, suppose you've found three in a row that match, so that t1, t2 and t3 are equal, but the next time through something jiggles and it doesn't match. Which t will you set -- 4 or start over at 1?  That's a bookkeeping problem in its own right.
 * --DaHorsesMouth (talk) 02:11, 16 March 2009 (UTC)

Ignore all of the above: your answer lies in the fact that your first statement is evaluated via short-circuit evaluation (if you wanted to turn off short-circuit evaluation, you would use only a single and (&) or or (|) symbol). The first statement compiles to assembly language identical to the second. Magog the Ogre (talk) 02:19, 16 March 2009 (UTC)


 * Also note that you shouldn't use == with float (or double) values. Once you get some imprecision going, you can set a=1.0 and b=1.0 and then find that a==b returns false. --  k a i n a w ™ 02:49, 16 March 2009 (UTC)

So I am short-circuiting, right? Because I am using the double ampersand sign &&. So this means that the expressions are evaluated in order and as soon as one is found false, the entire expression is false and the rest are not evaluated, right?-Looking for Wisdom and Insight! (talk) 05:09, 16 March 2009 (UTC)


 * Yes, && short circuits. But doing that comparison with ten variables is a horrendous code smell.  Use an array, or a linked list wound through the call stack (shallow binding) or anything sane, just not ten variables.  75.62.6.87 (talk) 09:21, 16 March 2009 (UTC)


 * As stated above, a loop with 10 exit-condition comparisons is generally bad practice regardless of whether the values are stored in variables, arrays, or some other data structure. You should try to keep your loop as simple as possible, but no simpler. –   7 4   16:49, 16 March 2009 (UTC)


 * Woahhhh! Wooooaaaah!!!! You said "The reason I ask is because this is all inside a loop which will be executed about 10^200 times".  Um...oops!  Let's back up a bit here.  Are you saying your loop runs 10^200 times?!?!  Let's suppose you have a state-of-the-art 4GHz CPU - that's 4x10^9 clocks per second.  If your entire loop could execute in one clock tick (certainly it can't) then it's going to take 10^199 seconds to finish...there are about 3x10^7 seconds in a year - so even with the most optimized possible program on the fastest CPU available - it's going to take 10^192 years...that's a LOT longer than the life of the universe so far - certainly, the Sun will burn out and the earth will be a crispy cinder before your program is even a millionth of a millionth of one percent of the way to completion!  This is not a matter of optimization.  This is a matter of a total rethink of the entire possibility of doing the calculation! SteveBaker (talk) 04:50, 17 March 2009 (UTC)


 * Um… 10^200 / 10^9 ≠ 10^199. The correct result is still ridiculously large, just slightly more scientifically so. –  7 4   21:06, 18 March 2009 (UTC)
 * 10200 sounds like an indefinite and fictitious number to me. If it means 102 hundred = 10,200 then the tests will take a fraction of a second whichever way they are optimised.  The simple fact is that there are nine potential comparisons per loop, and all you can do to help is check the least likely condition first.  In fact you may need 18 checks: (t1>=t2-e && t1<=t2+e && t2>=t3-e ...) for a constant e>0 small enough for your application's precision. Then it starts to matter whether you compare t3 against t1 or t2, which is a whole new problem. Certes (talk) 21:49, 21 March 2009 (UTC)