Wikipedia:Reference desk/Archives/Computing/2011 December 31

= December 31 =

"find" in firefox browsers
When I try to "find" a string, the scroll-bar ends up so positioned that the string I'm searching for is on the lowest line in the window. If a human did that, I'd consider it rude at best. Is there a way to make it appear on the middle line? Michael Hardy (talk) 00:11, 31 December 2011 (UTC)


 * I don't like that either. If you do a "Previous" find, it puts the line at the top, which is a bit better. StuRat (talk) 00:17, 31 December 2011 (UTC)

First result for a search of 'firefox addon find middle': https://addons.mozilla.org/en-US/firefox/addon/find-to-center/ ¦ Reisio (talk) 01:20, 31 December 2011 (UTC)

Is this fast enough to run Battlefield 3?
I am purchasing a new Dell XPS 15 laptop with the following customized specs:


 * 2.2 GHz Intel Core i7-2670QM
 * 8 GB 1333 MHz RAM
 * NVIDIA GeForce GT 540M with 2 GB
 * Windows 7, 64-bit

This should be fast enough to run BF3 right? What kind of fps can I expect? Acceptable (talk) 01:17, 31 December 2011 (UTC)


 * Yes, & significantly more FPS than the human eye is physically able to distinguish. ¦ Reisio (talk) 01:19, 31 December 2011 (UTC)
 * Well, that's better than me, welp. Thank god I'm getting a new video card in a couple of days. Res Mar 02:33, 31 December 2011 (UTC)
 * I'm not so sure. I think you're skirting the minimum system requirements (PROCESSOR: 2 GHZ DUAL CORE (CORE 2 DUO 2.4 GHZ OR ATHLON X2 2.7 GHZ)...WITH ATI RADEON 3870 OR HIGHER PERFORMANCE)... Your 520M benchmarks just below the 8800 GT and above the ATI 3870... so I don't think you'll be at the highest end of the curve. That's a beast machine for a laptop, but BF3 has some high requirements. Shadowjams (talk) 22:14, 31 December 2011 (UTC)


 * I don't know much about mobile GPUs, but the OP said they had a 540M, not a 520M. A quick search suggests this probably has double the number of shaders of the 520M and probably double the memory bandwidth, so the performance difference is likely to be quite big, likely over double. If you really meant the 520M then I would guess it's significantly faster than the 8800GT, although it must still be below the recommended specs (but the recommended specs sound a bit insane). Nil Einne (talk) 11:09, 1 January 2012 (UTC)

Helping MS SQL 2000 understand how to do this
Hi,

I've got two huge tables, Sales and Events, containing something like the things clients bought (this is not really what happens, but the example matches the real situation). Events looks like:

eventid, clientid, datetime

Sales contains:

eventid, productid, price

So a Sale is one of the events that can happen to a client. To get all sales for client 12345 we're using a query like this:

select * from sales where eventid in (select eventid from event where clientid=12345)

Obviously a JOIN would be a much better solution, but translating our internal query language to SQL is much easier this way, and MS SQL seems to understand that it's actually supposed to JOIN when I look at the Estimated Execution Plan. This query is done in 30 milliseconds.

Now I want to know: at what prices did client 12345 buy product 678? What SQL seems to think is that it's much faster to find all Sales for product 678 and after that do the difficult join to find out which records apply to client 12345. Unfortunately, this is a very slow (3+ seconds) strategy, as nearly every client bought 678, but a normal client only bought a couple of items. It would be much faster to find the 20 or so sales for client 12345 and then filter for the product.

Is there any way I can help SQL understand this? I've been looking into "hints", but I can't find a hint like "filter on this first". All possible indexes are there, and all statistics have been updated.

Txs! Joepnl (talk) 02:38, 31 December 2011 (UTC)


 * Just to be clear, the slow SELECT is like this, right ?

select price from sales where productid = 678 and eventid in (select eventid from event where clientid = 12345)


 * If so, you might try reversing the order:

select price from sales where eventid in (select eventid from event where clientid = 12345) and productid = 678


 * If that doesn't help, then an actual JOIN may be the only cure, or perhaps you could do the query without the productid WHERE clause, put the results into a temporary table, then apply that clause to that table. StuRat (talk) 02:47, 31 December 2011 (UTC)


 * Here are a few additional suggestions. I assume that there are already indexes defined for eventid and productid and that the query optimizer is selecting the productid index.
 * Try creating a single index on the sales table for both productid and eventid (create index ix_sales_eventid_productid ON sales(eventid, productid)). You might already have separate indexes for each column, but the SQL optimizer can only select one and may be choosing the productid index. Having an index with both columns may yield the most efficient access.
 * If you can't alter the database schema, you can force selection of an existing index using the with-index hint like "...from sales with(index(ix_sales_eventid))...".
 * If you would prefer to avoid using index hints, you can make the productid lookup less index-friendly by altering the expression to something like productid+0 = 678 or abs(productid) = 678. That will force the query optimizer to ignore the productid index and favor the eventid index.
 * Hope this helps. --  Tom N (tcncv) talk/contrib 18:42, 31 December 2011 (UTC)
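Since the replies above mention rewriting the IN-subquery as an explicit join, here is roughly what that would look like, using the table and column names from this thread. This is a sketch untested against the real schema:

```sql
-- Prices at which client 12345 bought product 678,
-- expressed as an explicit join instead of an IN subquery.
SELECT s.price
FROM   sales AS s
       INNER JOIN event AS e
               ON e.eventid = s.eventid
WHERE  e.clientid  = 12345
  AND  s.productid = 678
```

With both predicates visible in one statement the optimizer is still free to drive from either table, so the suggestions above (a composite index on sales(eventid, productid), or an index hint) remain relevant either way.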

Flash drive
I got my first flash drive ... and promptly screwed up. The second time I copied some files to it, I yanked it out without selecting eject. All of the files and folders are gone (no permanent loss). My question is, have I damaged the drive, or can I go ahead and transfer the files again? Clarityfiend (talk) 02:00, 31 December 2011 (UTC)


 * You probably didn't damage it. Give it a try. StuRat (talk) 02:55, 31 December 2011 (UTC)


 * I'd have thought that if there is nothing recoverable on it, and it may possibly be corrupt, reformatting it might be a good idea? What do others think? AndyTheGrump (talk) 03:10, 31 December 2011 (UTC)


 * (e/c) Agree (with both), but I'd be a little wary. That 'eject' function is a bit of a gimmick in my experience, at least on Windows computers - it's like a safety mechanism to check that the drive isn't actually in use, but provided you have nothing open in an application, copying, saving, etc, then just yanking it out like that shouldn't do anything; it certainly shouldn't wipe everything on the drive. I'd test it out with some unimportant files for a while to make sure the drive itself isn't faulty. Also, I don't mean to sound presumptuous, but are you sure you ever really had the files/folders on there in the first place? It's just that I've been using these things for many years and have seen them yanked out on hundreds of occasions without that happening. --jjron (talk) 03:14, 31 December 2011 (UTC)


 * Yes, I'm sure. It happened the second time I transferred files over. The stuff from the first time was there. Clarityfiend (talk) 04:15, 31 December 2011 (UTC)


 * Uh oh. I formatted it (it went extremely fast). Then, when I tried to add a new folder, it said "Could not find this item. This is no longer located in <%NULL:Optext>. Verify the item's location and try again. Lexar" Clarityfiend (talk) 04:21, 31 December 2011 (UTC)


 * Try formatting again, but make sure you untick "Quick format" in the format dialogue box to get a full format. After that I'd personally restart the computer, and try another flash drive in it if you have access to one, just to make sure it's not just the computer. Then try this drive on a different computer (again checking if it's the drive or the computer). Then take the damn thing back to the shop and swap it over, as I personally suspect the drive is faulty (and FWIW I wouldn't get Lexar, I tend to either buy Sandisk or (previously) Kingston, but I don't mean to sound like a consumer advice service or something so I won't actually recommend any particular brand) . --jjron (talk) 05:05, 31 December 2011 (UTC)
 * Hmmm ... I rebooted, stuck it in a different slot, and got my stuff back. Clarityfiend (talk) 05:21, 31 December 2011 (UTC)


 * "Eject" is there because modern operating systems (for the last 20 years or so) cache disk writes. If you yank a drive out without stopping it then any unwritten cached data has nowhere to go. Actually, Windows disabled disk caching on USB devices by default some time ago because of people's tendency to yank them out without stopping them, so that probably isn't your problem. But in the future you could make your USB devices faster by enabling disk writes write caching (it's in the device properties), as long as you remember to eject them. -- BenRG (talk) 06:13, 31 December 2011 (UTC)


 * Enabling disk writes ? You must mean delayed writes.  I don't care for that option.  My goal is to make things quick and simple for me, not the computer, and having to pick eject makes it worse. StuRat (talk) 16:33, 31 December 2011 (UTC)


 * Well, the point is to make things faster overall (and maybe also extend the life of the flash drive a bit, if it is one, by reducing the number of redundant writes). -- BenRG (talk) 08:14, 1 January 2012 (UTC)


 * As far as faster overall, I'm a bit skeptical, once the time to "eject" is included, and perhaps some allowance for time wasted when you forget to eject properly. StuRat (talk) 00:24, 2 January 2012 (UTC)


 * I don't need that much speed, but thanks all. Clarityfiend (talk) 08:11, 31 December 2011 (UTC)
 * Please note that there are fake USB flash drives that claim to have a multiple of their actual capacity, writing more data than what they can really store will cause failures, garbage data, etc. You might want to check if the drive is genuine. Look for a tool called h2testw from Heise/c't (a renowned German computer magazine). The tool is available in German and English, so no need to learn German first. ;-) -- 188.105.112.97 (talk) 15:25, 2 January 2012 (UTC)

Running on the battery
I bought an Asus laptop last July which came with Win7. A couple of months ago I installed Ubuntu to dual boot. I've discovered that if, while in Windows, I unplug the power supply and use battery power it says there is 98%. However, in Ubuntu when I unplug it I get a warning that there is insufficient power in the battery and the computer will hibernate. If I plug the power back in I can see that the battery has 98% power left before it shuts down. Any suggestions? CambridgeBayWeather (talk) 08:08, 31 December 2011 (UTC)


 * looks like a bug. try ubuntuforums.org for the specialists in this. good luck! Staticd (talk) 13:53, 31 December 2011 (UTC)


 * Thanks. CambridgeBayWeather (talk) 16:11, 31 December 2011 (UTC)

Ad-ID
Friends- In your reporting re. Ad-ID you refer to the idea that Ad-ID is a descendant of the "International Standardized Commercial Identifier" or something close to that.

Since I (a) created ISCI in 1969, (b) proposed it to the AAAAs in early 1970, (c) prepared its first publication for the AAAA's announcement of ISCI's "birth" in Spring 1970, (d) supervised its use by the industry from ISCI's debut, 01JL1970, (e) was its sole "operator" through 31JA1974 when I retired after 23 years with Leo Burnett Advertising (VP-Broadcast Business) and (f) continued to single-handedly operate ISCI until the end of 1992 when I sold the majority (96%) of ISCI coding to the AAAA's and the National Association of Advertisers - perhaps you will allow me to correct your name (above) to "Industry Standard Commercial Identification".

Although I see that the AAAA's and Ad/ID (which was built upon the ISCI system) announced several years ago that ISCI was being retired, they were possibly in breach of contract unless they made it clear that they were only retiring the 96% of the ISCI codes which they owned. I did not see the announcement. I'm now 97 years of age and at that time was caring for both my bride of 69 years and my 100+ year-old-sister. I hope you will correct the name.

FYI: Incidentally, I will shortly provide Wikipedia with the details regarding "Dole Dating" - a copyrighted dating system I created in 1998 and have urged many to adopt with no fear of copyright infringement. It is the briefest possible foolproof method of expressing dates.

Alors,

David W. Dole — Preceding unsigned comment added by 206.72.27.52 (talk) 09:11, 31 December 2011 (UTC)


 * Hi David - This is the Wikipedia Reference Desk, a section of Wikipedia where people can come to ask questions, and random strangers on the Internet answer them. You can go and actually edit the Ad-ID and International Standardized Commercial Identifier articles yourself, but note that personal recollections are not a great idea to add to Wikipedia articles; they will probably be challenged as "Original Research" and removed.  Wikipedia articles' statements are pretty much supposed to have references to published "Reliable Sources", top to bottom.  Leave me a note on my talk page if you need help.  Comet Tuttle (talk) 19:51, 1 January 2012 (UTC)

mymax wireless router not turning on.
I was going to use my computer today and saw that the wireless network was not working. I tried to see what the problem was and saw that my mymax wireless router's power LED is off. I tried unplugging it and plugging it back in without success. What can be the problem? 201.78.191.84 (talk) 12:35, 31 December 2011 (UTC)


 * Start with the obvious: try plugging something else into the same power socket. Does that work? If not you need to fix the power supply. If so, you need to fix or replace the router.--Shantavira|feed me 13:32, 31 December 2011 (UTC)

C won't compile
NOTE: Sorry about the formatting of the code; if someone can fix this, please do so. So I've written this program to check whether a number is prime, but it won't compile, and it has something to do with the sqrt. I get a message saying "...: undefined reference to 'sqrt'" "collect2: ld returned 1 exit status" --178.208.197.58 (talk) 18:39, 31 December 2011 (UTC)


 * You're not linking with the math library. On unix/linux with gcc, you'd say gcc foo.c -o foo -lm
 * The same (with slightly different compiler options) will be true for other platforms and compilers like Windows, but I don't have the specifics to hand. -- Finlay McWalterჷTalk 18:45, 31 December 2011 (UTC)


 * Another option is to rewrite it without the square root function. In some languages a square root can be rewritten as X**(0.5) or X^(0.5). However, in this case, you might be better off not using a square root at all, since it's CPU-intensive. Instead of "i <= sqrt((double)x)", how about "i*i <= x"? Those statements should be identical for positive numbers (other than different overflow/underflow issues), but the second is far quicker. StuRat (talk) 18:56, 31 December 2011 (UTC)
 * StuRat's idea is excellent for yet another reason: it avoids comparing an integer to a floating point number. When comparing integers, "<=" is guaranteed to do what you expect it to do, which is not necessarily the case when one or both of the, whataretheycalled?, comparands is floating point. For instance, if x=25, then sqrt((double) 25) may be represented by a number slightly smaller than 5, in which case the loop would stop at i=4, and the program would give a wrong answer. Not sure whether this is an issue in practice with these particular numbers, but it's good to be aware of the pitfalls of floating-point arithmetic. --Wrongfilter (talk) 02:42, 1 January 2012 (UTC)
 * In terms of optimization, StuRat's idea does have the disadvantage of requiring the computation of i*i for every iteration, whereas sqrt(x) could be computed only once. Are any C compilers smart enough to know that sqrt always yields the same result for a given argument (that it is deterministic solely based on its argument) and thus optimize the code, or would the code always have to be rewritten to take this advantage?  Also, one way to avoid the floating-point arithmetic problem is to use an integer square root function. -- ToE 04:30, 1 January 2012 (UTC)


 * If the goal is efficiency then many compilers will realize that sqrt((double)x) is constant during the loop and only compute it once, while i*i would have to be computed each time. If you want to be certain of this speedup then introduce a new integer variable n=sqrt((double)x) before the loop, and say i <= n in the loop. Most floating point computations will probably follow the IEEE 754 standard where I think sqrt(i*i) == i for non-negative integers is certain if i*i can be precisely represented, but you can also make certain with n=sqrt((double)x+0.5).
 * An efficient program could of course make other speedups, for example not testing even factors above 2. For large numbers there are far more efficient algorithms than trial division, but fast primality testing algorithms can be complicated. PrimeHunter (talk) 04:32, 1 January 2012 (UTC)
 * A compiler computing sqrt((double)x) before the loop is an example of loop-invariant code motion. PrimeHunter (talk) 04:42, 1 January 2012 (UTC)


 * C does not have a "to the power of" operator. It has a "to the power of" function, pow(), but using it won't help at all in this case, as it also requires the math library, and suffers from the same problem with comparing integers to floating-point numbers as User:Wrongfilter mentioned. Using i*i <= x is the best way to go, in my opinion. JIP | Talk 08:37, 1 January 2012 (UTC)


 * Although really we've (hopefully) fixed the OP's problem, we've gone off on an optimisation tip. In that vein, I've benchmarked the two optimisations proposed (for the same naive prime-test algorithm). The code is here.  The results aren't quite what I'd expected, and I'd appreciate it if others, with different platforms, would test it to see what effect that has.  There are three versions:
 * first.c - based on OP's code
 * second.c - use i*i rather than a sqrt
 * third.c - one sqrt, to save a multiply per iteration
 * I'm testing 12764787846358441471, which I found online; I've had to resort to working in 64 bits, as 32 bit primes are tested so quickly (even on my old machine) that it's hard to benchmark properly. Performance is:
 * first: 197 seconds
 * second: 150 seconds
 * third: 134 seconds
 * This shows us some interesting things:
 * The sqrt isn't removed by loop invariance (which was enabled) - otherwise first and third would take the same time. This must be because the compiler doesn't know that sqrt is a pure function, presumably because the library doesn't mark it with the GCC pure function attribute.  I'd be interested to know if the same code, compiled on other platforms, does get this optimisation from the compiler.
 * I'm surprised the saving of second over first isn't much greater. I don't know if that's because sqrt is rather efficient, but I suspect that the results are being impaired because I'm having to use uint64_t on a 32 bit machine (with all the concomitant heavy lifting that entails).  I'd be interested to know if the ratios hold when compiled and run on a real 64 bit platform.
 * -- Finlay McWalterჷTalk 14:19, 1 January 2012 (UTC)


 * Incidentally that was gcc 4.5.2, gnu libc 2.13, on 32 bit Intel Linux. -- Finlay McWalterჷTalk 14:36, 1 January 2012 (UTC)
 * My results are:
 * first: 28 seconds
 * second: 26 seconds
 * third: 26 seconds
 * That's gcc 4.1.2, x86_64-redhat-linux, gnu libc 2.5 if I'm not mistaken (newer computer, older system, apparently). --Wrongfilter (talk) 14:44, 1 January 2012 (UTC)


 * If the goal is benchmarking the SQRT function versus alternatives, I suggest removing all code not needed for the test, to maximize the relative effect of the SQRT. StuRat (talk) 00:19, 2 January 2012 (UTC)


 * I vaguely recall from having met this situation before that optimizing the sqrt call is slightly harder for the compiler than it seems at first glance. The error reporting behavior depends on mode flags that can be changed at runtime, and the compiler doesn't know what flags will be in effect when the code runs. It can't move the sqrt out of the loop unless it knows there will never be an error (i.e. negative argument), or that the error handling will never have any side effects that matter to the next iteration of the loop. 68.60.252.82 (talk) 04:28, 2 January 2012 (UTC)


 * Our modulo operation article states that where n is a power of two, compilers will typically rewrite x % n as x & (n - 1), but that otherwise many compilers will compute the integer quotient x / n in the process of computing x % n.  Are any optimizing compilers smart enough to recognize when both x / i and x % i are used within a loop and compute the latter only once, thus doing the division for free?  If so, would that encourage writing a trial division primality test as follows?


 * -- ToE 04:23, 2 January 2012 (UTC)


 * This depends completely on the assembly used. I would expect that, when in assembly, this would use a conditional jump for the do loop. Then, you put i in a register. Add 2 to it. Move the result to the i register. Put x in a register. Perform x/i, getting x/i and x%i in two registers. Do a conditional return. Perform x/i, getting x/i and x%i again. Do a conditional jump. Because the second x/i doesn't change a variable, it would be rather easy for an optimizer to move the second x/i before the conditional return. Then, the optimizer would easily see two x/i's in a row. Of course, having an optimizer realize that it can move the x/i before the conditional return is the big trick. I am certain that the gnu compiler will recognize that it is possible because it does shuffle things that don't change variable values rather easily. -- kainaw™ 20:25, 2 January 2012 (UTC)


 * Note that x % 2 is defined to return -1 when x is a negative odd integer. This means that it will usually generate less efficient code than x & 1, since the compiler has to check for that case. On the other hand, x % 2 == 0 is the same as (x & 1) == 0.


 * I did some tests with Microsoft C++ version 16.00.40219.01 for x86 with the  flag. For   it produces more or less the following:


 * For  it produces:


 * (Multiplication is a lot faster than division.) It doesn't convert x % 2 to x & 1. However, gcc 3.4.4 and 4.5.3 for Cygwin with optimization enabled do make that optimization. All three compilers do just one division per iteration in ToE's code, meaning that it is probably the most efficient implementation of trial division. Funny that I never thought of it before. -- BenRG (talk) 21:21, 2 January 2012 (UTC)