Wikipedia:Reference desk/Archives/Computing/2013 March 26

= March 26 =

Google on Bing?
Can you Google on Bing? Do you still call it googling or do you call it binging? Bubba73 You talkin' to me? 04:48, 26 March 2013 (UTC)
 * Just as much as you can hoover with a goblin, or Tannoy an announcement with a Bose. -- Q Chris (talk) 14:15, 26 March 2013 (UTC)
 * Interesting comparison above. Generally, you'd just "search" Bing, as opposed to Binging Bing. --  Zanimum (talk) 13:18, 27 March 2013 (UTC)


 * Microsoft would have you "bing it", or even "bing it on". But I don't see it happening. Also note the recent kerfuffle over "ogooglebar". Google would like to remind you that their name is their brand and trademark, and they have taken, and will take, steps to prevent genericisation. SemanticMantis (talk) 16:57, 27 March 2013 (UTC)

1950s IBM printer
Can anyone identify this 1950s IBM printer? - Jmabel | Talk 06:01, 26 March 2013 (UTC)
 * That's not a printer. It's an accounting machine closely resembling the IBM 407, which was used as a component of a calculator system called an IBM CPC.  You can see a picture of the whole system here -- note that on the right you can see a card punch that also appears in your picture. Looie496 (talk) 06:59, 26 March 2013 (UTC)
 * More specifically, it could be an IBM 402, 403, 412, or 418 -- they were all essentially identical in outward appearance, as shown on this page. Looie496 (talk) 06:59, 26 March 2013 (UTC)
 * Thanks! - Jmabel | Talk 16:09, 26 March 2013 (UTC)

Value of null
In C, is it safe to assume that null will always compare as less than a valid pointer? I know it isn't safe to assume null==0. 173.52.95.244 (talk) 11:53, 26 March 2013 (UTC)
 * It is generally considered unsafe to compare pointers that aren't from the same allocation. The C specification doesn't seem to cover relational operators with null pointers, so you should consider the result of the comparison to be undefined and unsafe. It is safe to compare a pointer to 0 (as in if(p) or if(p==0)) to determine if it is null. This is because a 0 constant assigned to a pointer will be converted to a null pointer at compile time, regardless of the system's representation of a null pointer. See the comp.lang.c FAQ's discussion of null pointers; question 5.3 is probably the most applicable. 38.111.64.107 (talk) 13:18, 26 March 2013 (UTC)
 * Yep, sounds like mostly what I was going to say, but that last edit beat me to it. No, the assumption is wrong. And comparing a pointer to zero is a valid test for null even if the actual value stored in a null pointer is not zero (though on practically all machines nowadays it actually is zero, and thankfully C++ now has a nullptr to avoid the issue anyway). Most anything beyond that is machine dependent. See intptr_t for the integer form of a pointer and ptrdiff_t for the difference between two pointers into the same array; they are mainly for getting the size of the integer form right so a bit of low-level work can be done on them. Otherwise one should normally avoid messing around with the contents of pointers. Dmcq (talk) 13:30, 26 March 2013 (UTC)
 * I have good reasons for messing with the contents of pointers! I think what Dmcq intends to say is that application logic rarely ought to directly modify object representation or allocation in memory; application logic should delegate that work to an abstraction of the memory system.  Even if applications are implemented in a language that allows access to the memory representation (like C and C++), the application should let the system library or the program runtime environment handle the details.
 * On checking my copy of K&R, Section 5.4, I interpret the text to mean "NULL" is a convenience macro defined in stdio.h - and not a strict requirement of the language implementation. I believe K&R differs from ISO C in this definition, but I don't have the ISO C reference so readily available (in cached brain memory). Many computers - particularly small microcontrollers with simplistic memory hardware - can perfectly well store regular data at address 0; on those systems, programmers need to be extra paranoid about algorithm behavior. I cannot find a reference that indicates such behavior is a strict violation of any standard C requirement. Nimur (talk) 13:50, 26 March 2013 (UTC)
 * Ok, my memory read through, and was correct: this behavior differs between C standardization efforts. NULL "may" be zero in K&R, and some of the K&R-esque GNU C; but ISO 9899 lays down the law for C11, and NULL shall be exactly the integer constant 0 as documented in ISO/IEC 9899:2011 §6.3.2.3. So to answer the original question: "is it safe...?" Check which standard your C compiler is enforcing, or pass a standard on the compiler's command line. Alternately, enforce the condition with a preprocessor directive. Then it is safe to assume NULL explicitly equals zero. Nimur (talk) 14:02, 26 March 2013 (UTC)
 * I just had a look there and there doesn't seem to be any real change. According to the standard it is still okay for null pointers to be, for instance, -1 when stored in memory. For most systems one can assume that zeroing some store with memset will set any pointers in it to null, but the standard does not say that. It just says that when one converts between a pointer and an integer, 0 means a null pointer. Some old systems, for instance, had word addresses, and the character pointer used the top two bits or a separate word. When converting to integer the two bits would be moved to the bottom, and when converting back to pointer they would be moved to the top. Even nowadays in C++, pointers to member functions, for instance, can do strange things. Dmcq (talk) 14:29, 26 March 2013 (UTC)


 * Even if one is assured that NULL==0, that's not sufficient to answer the original question. The OP's comparator safely holds iff pointer comparison is unsigned. Pointers themselves cannot be signed or unsigned (one cannot say unsigned void * p). I see nothing in C99 (6.3.2.3) about ordering of pointers, and all it says about comparison is to do with equality and inequality. One can cast pointers to integers (part 754) but the cast is implementation defined (part 755). It would seem overwhelmingly sensible for pointer comparison to be done unsigned, but that's not the same as that being mandatory (and there's always a weird architecture which does odd things for curious reasons). -- Finlay McWalterჷTalk 14:39, 26 March 2013 (UTC)
 * Two remarks. First, the C standard guarantees that the literal '0' in the source code, if used in a pointer context, will result in the NULL pointer. It does not, e.g., guarantee that assigning an integer variable whose run-time value happens to be 0 will result in the pointer's value being the NULL pointer, unless they changed either the standard or my memory. Secondly, there is, however, a guarantee that if you cast a pointer to an integer type that is large enough, and then cast it back, you will get back the original pointer. Thus, there is an injective function from the range of pointers into the range of integers. You should be able to use that to induce a total ordering on pointers. --Stephan Schulz (talk) 18:23, 26 March 2013 (UTC)
 * If the implementation provides intptr_t or uintptr_t (which is not required), it is guaranteed that casting a void pointer to one of those types and back yields a pointer that compares equal to the original pointer. You could ensure that the null pointer compares less than everything else by using uintptr_t and subtracting the null pointer's integer image from both sides. But I don't see any guarantee that p < q implies (uintptr_t)p < (uintptr_t)q or vice versa even when the pointer comparison is valid (i.e., when p and q point into the same array). -- BenRG (talk) 22:40, 26 March 2013 (UTC)
 * I violently agree. But if you need a total ordering on pointers where the null pointer is minimal, you can construct it. As you say, this total ordering does not necessarily extend the partial ordering on pointers defined in the standard. --Stephan Schulz (talk) 23:04, 26 March 2013 (UTC)
 * I don't have access to the published standard, but the last public draft (N1570) only says that NULL must be defined to be a null pointer constant, which in turn is defined as "an integer constant expression with the value 0, or such an expression cast to type void *". Even if NULL was required to be defined as the single token 0, it wouldn't follow that it would have to compare less than a non-null pointer, or that casting a null pointer to an integral type would necessarily yield the value 0, or that null pointers ever have an all-zero representation at run time. The only connection between the number 0 and null pointers is at compile time, when integer constant expressions with the value 0 are converted to null pointers where required by the type system. -- BenRG (talk) 22:40, 26 March 2013 (UTC)
 * Yes, you are right about the constant expression, not just literal 0. But the example I gave is not a constant expression, so it's still valid. --Stephan Schulz (talk) 23:04, 26 March 2013 (UTC)
 * Raymond Chen just posted a great example of a real-world system where assuming a null pointer is zero (and therefore less than all pointers) would fail. In a Win32s program, NULL is represented by 4194304 internally. Pointers in the high end of address space would wrap around to zero and grow from there, and would therefore compare as less than NULL, assuming the comparison treated them as unsigned ints. 38.111.64.107 (talk) 14:57, 28 March 2013 (UTC)
 * I just came back to point out I was a bit off in how I first interpreted that. The pointer would still be represented by the numeric value zero, but when dereferenced would look up address 4194304. In this case, I suppose (assuming the compiler treats the pointers as unsigned ints) that the comparison would work. But the example is still good for pointing out that pointers are not just a direct index to a specific location in memory, which is why you can't count on null being zero. 38.111.64.107 (talk) 18:48, 28 March 2013 (UTC)
 * The comp.lang.c FAQ has examples. I think newer platforms had to use all zeros for null pointers once enough C code existed that assumed it (e.g. zeroing memory with memset to clear pointers, or assuming calloc'd storage holds null pointers, or passing 0 to a variadic function without a cast). -- BenRG (talk) 17:12, 31 March 2013 (UTC)

Samsung devices, Android versions, predictive text
I'm using a Galaxy S III Mini, with Android 4.1.2, with the default Samsung Keyboard, and the predictive text turned on. This works great. I also have the original Samsung Galaxy Tab, with Android 2.2. The predictive text is XT9, which I don't much like. Is there any way to get the current Samsung Keyboard as it works on my phone onto the Tab, or is it an issue of the later version of Android, and I won't be able to get the newer Keyboard and functions working on the older machine? Thanks if you can advise. — Preceding unsigned comment added by 193.173.50.210 (talk) 16:12, 26 March 2013 (UTC)

Sound emulation in E-UAE
The old fogies among you might remember I posted a question about sound emulation not working quite right in E-UAE on Fedora Linux some time in 2011 or 2012. Well, now that I have updated from Fedora 14 to Fedora 17, I tried it again. To my surprise, the sound in nearly every game worked right. The only exception so far was Ork, which had the same problems as before. But then I went to the "CPU" tab, and changed the "Speed" setting from "Maximum" to "Adjusted", setting the speed slider to as fast as it could go. To my surprise, the sound worked perfectly OK. When I set the setting back to "Maximum", the problem resumed. Having the "Speed" setting at anything other than "Maximum" slows the emulation of AmigaOS down terribly, so I prefer to keep it there in all cases except when I encounter sound problems. Does anyone have any idea what could be causing this? JIP | Talk 18:59, 26 March 2013 (UTC)

Furthermore, is it somehow possible to capture the sound output of E-UAE as a .wav or .ogg or whatever file? JIP | Talk 19:25, 26 March 2013 (UTC)


 * Personally I use the rec program that comes with SoX, but you can use any program that records audio, including Audacity, the Gnome Sound Recorder, or whatever. The trick of it is to configure your sound system to record the audio loopback. If your system uses PulseAudio (which some Google searching suggests is the default on Fedora), install the program pavucontrol. Set E-UAE running (so it's generating sound) and the recording program running (just to a disposable file). Then the recording program will appear in pavucontrol's "recording" tab. There you can configure the source that feeds it, and the record level. On my machine it's set for "monitor of built-in analog stereo" - you can tell it's working because the VU meter for that entry moves with the sound. The nice thing is that PulseAudio remembers the mapping in future, so subsequent runs of the recording program will automatically get the "monitor" settings without another reconfiguration (so you won't actually need to run pavucontrol again). Things should be much the same if, instead of PulseAudio, you use the JACK Audio Connection Kit as the audio system: you'd use the program jack_connect instead of pavucontrol. -- Finlay McWalterჷTalk 09:49, 27 March 2013 (UTC)
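The routing described above can also be driven from the command line. This is a hedged sketch of the two approaches: the monitor-source name shown (alsa_output...monitor) is a placeholder; substitute whatever `pactl list short sources` prints on your own machine.

```shell
# List capture sources; the loopback one ends in ".monitor" (name varies).
pactl list short sources

# Option 1: start SoX's recorder and route it to the monitor source
# in pavucontrol's "recording" tab, as described above.
rec capture.ogg

# Option 2: capture the monitor source directly with parec (which emits
# raw signed 16-bit 44.1 kHz stereo by default) and encode it with sox.
# The device name below is a placeholder for your own monitor source.
parec -d alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \
  | sox -t raw -r 44100 -e signed -b 16 -c 2 - capture.ogg
```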
 * This seems to have worked otherwise, but running both Audacity and E-UAE at the same time strains my system so much that while the sound doesn't miss beats any more, sub-second gaps of silence are inserted every couple of seconds into the sound. This of course shows up in the recorded file. So I guess I do need a faster computer after all. JIP | Talk 16:54, 27 March 2013 (UTC)
 * rec's overhead will be much less, because it has no GUI and only does negligible character animation as it's recording. On my system Audacity takes 41m of resident memory and rec 4m; Audacity uses 11% of (one core of) the CPU; rec uses about 1%. -- Finlay McWalterჷTalk 17:00, 27 March 2013 (UTC)
 * I tried, but the same thing happened, only to a marginally lesser degree. When running UAE without audio recording in the background, the sound is nearly flawless, but of course that is useless for recording purposes. So I think I need a faster CPU or more memory or something. JIP | Talk 18:15, 27 March 2013 (UTC)

ipad simulator poor sound quality
Hi all, I'm using Xcode with an ipad simulator, with a sound recorder, but the sound quality occasionally stuffs up. Basically it just comes out blurry, like I'm talking through a fan or something. This happens regardless of where I'm standing, but only occasionally. Does anyone know what's going on? I'm not registered with the ipad developer program yet (will be soon), so I can't test on the ipad itself. Is it a known problem, and does it apply only to the simulator? IBE (talk) 20:50, 26 March 2013 (UTC)
 * According to the Testing and Debugging in iOS Simulator page in Apple's iOS Developer Library, microphone input is not officially supported in the iOS Simulator, although it sounds like it is attempting to use the built-in microphone on the Mac. And it may be that the actual cooling fan in the computer is causing distortion or noisy input. --Canley (talk) 02:19, 27 March 2013 (UTC)


 * Wow, thanks - useful link and very imaginative suggestion, although I find it slightly unlikely that it's the fan. Mic input would have to be compatible with the Mac itself, if not the simulator, so I doubt they would mess that up - still seems possible, however. IBE (talk) 07:54, 27 March 2013 (UTC)

UPS zip attachment fraud
I received one of the currently rife spam mails that says "The courier company was not able to deliver your parcel by your address. Cause: Error in shipping address." and has you double click on the attachment, which appears to be a ZIP file. Don't worry, I didn't do that! I'm just wondering how it is possible that such attachments can still be so dangerous? For every little thing I'm asked three times to confirm that I really want to do it, and my computer updates itself at least every week, but security is apparently still so low that such an executable can still impersonate me and perform tasks that have no place in a ZIP file, and should clearly require admin permission. Editor030813 (talk) 22:00, 26 March 2013 (UTC)
 * Some people will click "OK" to let it run. You say you had to give approval three times for it to run, and some people will do that.  That's why these things still work.  RudolfRed (talk) 22:07, 26 March 2013 (UTC)
 * I don't know how often one has to click for this one; as I wrote, I didn't try it out myself. The number three, which referred to other situations, was a bit of hyperbole; I assumed everyone knows these annoying confirmation messages, which sometimes are doubled, even in trivial cases. Editor030813 (talk) 02:15, 27 March 2013 (UTC)


 * The Snopes page says that the attachment is a zip file containing an executable file, so I assume opening the zip file is not a problem unless you then double-click the executable inside it. As for why all major operating systems assume that you want to grant any executable full read and write access to all of your personal files as well as the ability to connect to any host on the Internet, it's because all major operating systems are awful. -- BenRG (talk) 22:50, 26 March 2013 (UTC)
 * Thanks for your reply. So, is there a better OS out there? Editor030813 (talk) 02:15, 27 March 2013 (UTC)
 * Ben is probably referring to better security on Unix and Linux systems, due to the permissions paradigm set up for all users (and programs). I believe it is generally agreed that such systems have the potential to be far more secure than Windows systems. If you want to start an argument fast, we can discuss whether OS X is more secure than Windows as-shipped :) SemanticMantis (talk) 12:50, 27 March 2013 (UTC)
 * I thought of that, but then thought that can't be what he meant, since Unix/Linux clearly is/are a major operating system. Anyway, the reason why I asked here was because I find it hard to believe that such a basic situation still hasn't been solved; it feels as absurd as if people were bemoaning that burglars keep coming into their house while it has a big opening facing the street. But it seems the consensus here so far is that yes, the house has a big gap that nobody cared to close. My intention is not to trigger an argument about which is the best OS, but rather to understand why this hasn't been fixed yet. Editor030813 (talk) 19:08, 27 March 2013 (UTC)
 * It depends on what you mean by "major", but according to desktop usage share, *NIXes don't count. See Usage_share_of_operating_systems and this external page. Non-Windows, non-OSX OS's make up 0.05% to 2% of desktop usage share. So, unless we're talking about servers or supercomputing clusters, there are basically two modern (families of) OS's: Windows and OSX. There are hundreds of esoteric OS's out there, but almost nobody uses them. SemanticMantis (talk) 19:38, 27 March 2013 (UTC)
 * Well, what counts as "major" is indeed debatable; on the other end of the scale of possible arguments, one might count Android among the *nixes and discount OSX and iOS for only running on specialized hardware. (They're not an alternative unless you're willing to discard your existing hardware.) But be that as it may; I'd rather get back to my original question, which I may rephrase as:
 * Why is this still a problem? The answer "because Microsoft sucks" is not really satisfying. Even if that were the reason, there are plenty of inventive people around who have often provided third-party solutions for things MS failed to address. Editor030813 (talk) 20:13, 27 March 2013 (UTC)
 * I was talking about the user-oriented model that's used in NT and Unices, where new processes normally get all the permissions of the process that created them, even though the process that created them is likely to be a standard shell and the new process is likely to be some random program you downloaded from the Internet. That makes sense when a machine has many users and you only care about protecting them from each other, not from themselves, but that hasn't been the right model on desktop or server machines for decades. Web browsers and smartphone OSes implement a more sensible security model as an extra layer on top of an OS with the standard near-useless model. We also have "hypervisors" that run multiple well-isolated VMs on the same physical hardware—which is what an OS is supposed to do—and provide useful features like process migration with hot backups and failover, which, again, the OS should do. We're stuck with this setup for the usual reason—backward compatibility—and also because once a design has been standard for long enough, people stop noticing its limitations. -- BenRG (talk) 21:02, 27 March 2013 (UTC)