Wikipedia:Reference desk/Archives/Computing/2017 December 23

= December 23 =

Optimising compilers
I read (I think here) that neither GCC nor Visual C++ was a particularly "optimising" compiler. Intel's ICC is (at least in my experience); I compile tiny programs that solve "toy" maths problems (I am not a software developer by trade), and have found ICC's output to be as much as eight times faster than GCC's. My question: why does anyone bother with non-optimising compilers when the difference in performance of the output seems to be so large?--Leon (talk) 11:24, 23 December 2017 (UTC)
 * A compiler can be asked to optimize for speed, or for size. Loop unrolling, which duplicates the code for each iteration, will make the code longer but faster. This can only be done fully for code that loops a fixed, small number of times, preferably over a reasonably small code block. LongHairedFop (talk) 14:32, 23 December 2017 (UTC)


 * There's a comparison of six compilers at Colfax Research, which concludes that GCC is good but ICC is usually better. It doesn't include Visual C++ because the test environment is Linux. I haven't been able to find a recent comparison which includes Visual C++.


 * Any professional C++ compiler team is going to put a lot of effort into producing well optimised code, so I would expect Visual C++ to produce respectable results.


 * For any specific application, the most optimising compiler may be different from the best in the benchmarks. A programmer might also consider compilation speed, support for current standards, compiler documentation, debugging and other tool support (such as tools which detect programming errors), support for multiple platforms, and cost of the compiler.- gadfium 22:21, 23 December 2017 (UTC)


 * Can you point us to the code where it's allegedly eight times faster? Visual C is certainly an optimizing compiler, so I suspect something is up.  Maybe you don't have flags enabled in the way they're supposed to be for the type of optimization that's ... optimal.  In addition to gadfium's list, the #1 reason people might choose a compiler that generates less performant code:  It is part of their current tool chain that they've spent years working in.  They have shipped their code dozens of times and changing compilers would invalidate all the previous testing that has been done.  Tarcil (talk) 22:25, 23 December 2017 (UTC)


 * According to the article, ICC does not optimize for AMD processors, so it would only be faster if your target audience used only Intel processors. Also, GCC can optimize if you tell it to. The default is no optimization, but if you give it the -O2, -O3 or other flags it will optimize more.  RudolfRed (talk) 22:35, 23 December 2017 (UTC)
 * GCC uses -O2 by default. Noted, some toolchains pass a bunch of options to the compiler... Ignore this; I started typing it, checked, found I was wrong, but apparently messed up my edit and didn't delete it. --47.157.122.192 (talk) 19:31, 24 December 2017 (UTC)
 * As mentioned, ICC only generates optimized code for IA-32 and Intel 64. A lot of people write software for architectures other than those. Ever heard of those smartphone thingies? Or the IBM mainframes that handle all your money? Also, some software uses compiler extensions that ICC may not have. --47.157.122.192 (talk) 07:07, 24 December 2017 (UTC)


 * In my experience, Intel's icc will outperform other compilers if:
 * Your code is compute-bound on Intel processor targets
 * You write code that can make use of Intel Performance Primitives and related libraries
 * A great myth is that a compiler can optimize any code to make it run faster or use less memory. This is inaccurate.
 * For example: icc with IPP and AVX will speed up zlib; but if your decompression is bottlenecked by file I/O, this computational speed-up does not manifest as any user-noticeable speed difference.
 * Nimur (talk) 17:48, 27 December 2017 (UTC)

AMD or Intel: does it matter?
For private users, does it matter if a laptop has AMD or Intel chips? — Preceding unsigned comment added by 37.35.145.135 (talk) 14:04, 23 December 2017 (UTC)
 * Nope. I happily use both. The performance is important, but the brand is not. (((The Quixotic Potato))) (talk) 18:04, 23 December 2017 (UTC)
 * It depends on your priorities. Whether you care about smooth gaming, saving electricity, longevity, price, ... you would choose one or the other. Also, keep in mind that both AMD and Intel have models that prioritize different things. — Preceding unsigned comment added by WouterPeters (talk • contribs) 17:48, 29 December 2017 (UTC)

Lots of files slow down saving from GUI on Linux
You may remember me asking earlier why saving files from GUI applications, such as Mozilla Firefox, on Linux was freezing the application's interface for a few seconds.

I found out that this was because the directory I was saving the files to had over 50 thousand files. I moved them all to a different directory and the problem went away.

My question is, why was this happening? Saving files from the command line worked perfectly OK all the time. J I P | Talk 22:22, 23 December 2017 (UTC)
 * Some applications build a list of the files in the directory and sort them alphabetically when you Save As. They may also open each file to get more information on how to draw an icon for it. On top of that, they can give you a subset listing of file names that match the partially typed title. All of these slow down when there are many files. Graeme Bartlett (talk) 01:35, 24 December 2017 (UTC)
 * There are a lot of opportunities for lazy programming to cause misbehavior when dealing with large numbers of files. --jpgordon𝄢𝄆 𝄐𝄇 21:32, 24 December 2017 (UTC)