Wikipedia:Reference desk/Archives/Computing/2014 November 2

= November 2 =

Smartphone alarms (most major vendors) and Daylight Saving Time
If I had set my alarm on my iPhone 3GS or my wife's Samsung Galaxy S5 for 1:30AM this morning, would it have gone off twice? 75.75.42.89 (talk) 13:45, 2 November 2014 (UTC)
 * How's about you tell us next year? --jpgordon:==( o ) 16:18, 2 November 2014 (UTC)
 * Sounds like in the past, Apple devices' alarms just failed to work when DST events occurred. So I guess zero times wrongly (in the past, at least) has been more common than twice wrongly. 75.75.42.89 (talk) 17:12, 2 November 2014 (UTC)

How can you separate logic and data in Excel?
It seems to me like really bad practice to have something like a formula in the middle of a column, referring to a location (that might change). If you start working in a team, with lots of copy-and-pasting among users, can a catastrophe still be avoided? — Preceding unsigned comment added by Senteni (talk • contribs) 17:58, 2 November 2014 (UTC)


 * It is possible to assign names to cells (and cell ranges), which clearly reduces the problem. One can also separate the logic from the data entirely by placing them on separate worksheets. AndyTheGrump (talk) 19:37, 2 November 2014 (UTC)
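To illustrate the difference (the range and sheet names here are hypothetical, and "Define Name" refers to the command on the Formulas tab in recent Excel versions):

```
Fragile:  =SUM(B2:B13)         breaks silently if rows are inserted or the data is moved
Robust:   =SUM(MonthlySales)   "MonthlySales" is a named range (Formulas > Define Name)
Robust:   =SUM(Data!B:B)       the logic sheet refers to a column on a separate "Data" sheet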


 * Or, if you are writing a program, use a programming language! Excel really is like the world's worst pile of spaghetti. --Stephan Schulz (talk) 14:27, 3 November 2014 (UTC)

KompoZer problem
I use the KompoZer WYSIWYG HTML editor, ver. 0.7.10, and it has a peculiar bug: when saving a file, it intermittently transforms relative references into full local paths on my computer. This is part of a JavaScript dynamic animation section (which I know nothing about). See the code below. Of course, the local references don't work on the web. I have to remove them all by hand.

The bolded section becomes

I never make any changes in this section of the code. What is going on, and what can I do about it? Hope someone can help. Thanks. --Halcatalyst (talk) 18:25, 2 November 2014 (UTC)

History of the impossibility of computing
What things did they say were impossible, until these things became possible? Chess? Speech recognition? Image processing? Home computers? What else?--Senteni (talk) 18:44, 2 November 2014 (UTC)
 * I'm not a big tech guy, but I know people. Naysayers have likely been around as long as yaysayers. In the case of things that hadn't become possible yet, "they" were right. If you show them a convincing theory, they'll try to poke holes in it. If you make a prototype, they'll say there's no market for it. If you start selling it, they'll say it's a fad. If you revolutionize the world, they'll be back at square one, doubting something newer.


 * Of course, there will always be the other "they", absolutely convinced all the clues to perpetual motion, eternal life and X-ray specs that really work are right in front of us, just waiting to be arranged. They'll be right eventually, but for now, they're batshit crazy. InedibleHulk (talk) 00:10, 3 November 2014 (UTC)


 * I can think of plenty of individuals who made bad predictions about the future of computers. In 1946, Sir Charles Darwin (grandson of the famous naturalist), head of Britain's NPL (National Physical Laboratory) said: "It is very possible that ... one machine would suffice to solve all the problems that are demanded of it from the whole country."...not exactly "impossible" - and although it seems like he was wildly wrong, it's conceivable that some incredible quantum computer really could do that...so who knows?


 * It's always possible to find one person who says something wildly stupid like that - but it's hard to find a widely accepted view of complete impossibility. I actually kinda doubt you'll find one.  Mathematically, we know that a Turing machine can in principle simulate any system whose laws can be written down, and the Church-Turing thesis holds that anything effectively computable at all can be computed by a Turing machine (given sufficient time and memory) - so a computer with enough time and memory should be capable of doing anything whatsoever that's physically possible.   A computer won't ever be able to travel faster than the speed of light - but then neither can a rock or a human or anything else imaginable.  It follows that anyone who placed arbitrary limits on what computers can do would either have to have done so before the Church-Turing thesis was formulated - or would have to be mistaken - or (perhaps) is talking more of a practical limitation than a theoretical one.


 * One common one is "A computer will never be able to love" or "A computer will never be able to enjoy humor"...I doubt those will be proven true...but if they are true, it'll only be because building a computer to simulate a human brain might be too costly.


 * SteveBaker (talk) 05:08, 3 November 2014 (UTC)


 * Concerning what Darwin said, it is commonly stated that someone at IBM said that there would be a need for only about 15 computers in the world. It seems he was talking about one particular computer at the projected cost for it.  IBM calculated that at the price they needed to charge, it would be pretty unlikely that it would be economically viable for many people.  They lowered the price by a third and sold 50-60 of them.  Bubba73 You talkin' to me? 05:47, 3 November 2014 (UTC)


 * The claim that the IBM CEO, Thomas J. Watson, said that the world would only ever need 15 computers has never been adequately substantiated (see Thomas_J._Watson) - so the question of whether (if he had said it) he meant 15 actual computers or 15 kinds of computer - or whether he thought it because of the cost or because of the need - is impossible to determine and meaningless to speculate about, since he almost certainly never said it (I mean, why would he? He was the boss of IBM, for chrissakes!)   But Darwin said it in print, and clearly explained that one computer would have sufficient computational horsepower (so he definitely wasn't saying "one kind of computer") - so I quoted him instead.   Darwin's position as head of Britain's National Physical Laboratory meant that he was thinking of the needs of scientists to perform the kinds of calculations that a computer of that era could manage but humans could not.  Since many of those calculations would be one-time things, it's actually rather plausible that a single computer would have sufficed back then.  I imagine that he'd have revised his assessment when computers started to get into other fields such as business accounting and payroll - and as their ability to do calculations rapidly outstripped what an army of people with mechanical calculators could possibly achieve. SteveBaker (talk) 15:10, 3 November 2014 (UTC)


 * Part of me wants to throw my laptop as hard as I can and yell "Who's crazy now, Baker?!?" But you're probably right about the speed of light. For now.


 * Isn't it safe to say that physically impossible things are widely considered totally impossible, despite not knowing whether another universe exists where physics is fundamentally different, and where we might visit someday? InedibleHulk (talk) 05:51, 3 November 2014 (UTC)


 * I think we generally divide our knowledge into two piles - mathematical knowledge and everything else.  We believe that 2+2=4 in every universe, and that Turing machines and the Church-Turing thesis apply in all universes, no matter their laws of physics.   Mathematics is constructed in a manner that doesn't rely on the properties of our universe...so it should be true everywhere.  So if we end up in some universe where the physical laws are different, so long as we can deduce what those laws are and reduce them to mathematics, I see no reason why a Turing machine shouldn't simulate it, nor any reason to think such a machine could circumvent the fundamental physics of that alternate universe.   So I don't think my answer to this question changes if we allow alternate universes to exist and give us some meaningful way to "visit" them.  SteveBaker (talk) 15:36, 3 November 2014 (UTC)


 * Being asked on the computing section, I want to point out that in Computer Science, problems are categorized as difficult or time-consuming, not impossible. Give me a big enough computer and plenty of time and I can solve every possible move in Chess. It just takes storage and time - a lot of both. That is why students are forced, usually against their will, to study big-O theory. It is a way to categorize problems. If I tell you that a problem requires O(n³) runtime, you know that it is a time-consuming problem. If I tell you that a problem requires O(log n) runtime, you know that it is a quickly solvable problem. Similarly, I could tell you that a solution, while fast, requires O(n⁴) storage space. You better get a very large hard drive. So, nothing is impossible as long as there is a solution - any solution. For example, can a computer understand humor? If I had the storage and time to have a computer categorize every joke ever made in the history of humanity, then yes, it could understand humor enough to say, "Ha ha. That was a joke." 209.149.115.7 (talk) 15:13, 3 November 2014 (UTC)
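As a rough illustration of those growth rates (a toy sketch of my own, not anything from the thread - the function names are made up), counting the basic operations of a cubic triple loop against a logarithmic halving loop makes the gap concrete:

```python
def cubic_ops(n):
    """Count iterations of a triple nested loop: Theta(n^3) work."""
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1
    return count

def log_ops(n):
    """Count iterations of a halving loop: Theta(log2 n) work."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

# Growth for a few sizes: the cubic count explodes, the log count barely moves.
for n in (4, 16, 64):
    print(n, cubic_ops(n), log_ops(n))
```

At n = 64 the cubic loop already performs 262,144 operations while the halving loop performs 6, which is the practical content of a claim like "this problem needs O(n³) time."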
 * Yes, but those who criticize the limits of computation would claim that while such a machine could be presented with an input string and asked to compute "bool IsAJoke ( const char *s )" with 100% success, it still wouldn't understand what makes it funny.  I have no doubt that a machine with the ability to make that decision is possible - and perhaps it's even possible today.  But (the critics will argue) the computer still doesn't understand that it's funny.   Personally, I don't doubt that a computer with enough storage and compute power to simulate a human brain at the neuron-by-neuron level in realtime can be constructed, with accurate simulation of enough body chemistry to allow the effects of various hormones to be included.  That should happen in roughly 30 years if Moore's law continues to hold for computation, cost and high-speed storage - so I'm fairly confident it'll happen in my lifetime.   I can only imagine that such a machine would pass the Turing test and be able to explain how it understands a joke or why it's suddenly fallen in love...at least to the degree that we humans are able to do that (which is to say "not very well"!).   Sadly, this will not silence the critics in the least.  SteveBaker (talk) 15:28, 3 November 2014 (UTC)
 * Of note, humans cannot answer the question, "Why is it funny?" There are many books and documentaries about the basis of humor and the only constant that any of them have found is that farts are universally funny. Why? Nobody knows. 75.139.70.50 (talk) 13:39, 4 November 2014 (UTC)

whatspad and whatsapp 2.11.12
Hi,

Does whatspad work with the newest version of whatsapp? — Preceding unsigned comment added by 77.127.25.110 (talk) 19:58, 2 November 2014 (UTC)

Google Earth: bidirectional network links?
Google Earth has this great feature where you can hook it up to an arbitrary http service and, as long as the service emits valid kml (which is basically pretty easy), you can get external data cleanly displayed in Google Earth.

My question is: is there any way to push kml back out to an external service? That is, I'm wondering if there's a way to (1) have my http service emit one or more placemarks, (2) have Google Earth display them, (3) have the user (me) edit them in Google Earth, and (4) get the edited placemarks back out to the external service? (Steps 1-3 are straightforward; it's (4) that's the kicker.)

I know I can get edited placemarks out of Google Earth other ways, e.g. via the clipboard or the filesystem, but in some circumstances it might be very convenient to do it semi (or completely) automatically via the same network link that supplied the placemarks. —Steve Summit (talk) 20:15, 2 November 2014 (UTC)
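For reference, step (1) of the pattern described above can be as small as a stdlib HTTP server handing back a KML placemark for the network link to fetch; everything specific here (the placemark name, coordinates, and port) is invented for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A static KML document with a single placemark; a real service would
# generate this dynamically.
KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example placemark</name>
    <Point><coordinates>-71.06,42.36,0</coordinates></Point>
  </Placemark>
</kml>"""

class KmlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the KML with its registered MIME type so Google Earth
        # recognizes it when the network link refreshes.
        body = KML.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type",
                         "application/vnd.google-earth.kml+xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("localhost", 8000), KmlHandler).serve_forever()
```

The missing piece, as noted, is step (4): Google Earth's network links only fetch, so nothing in the client will POST edited placemarks back to a handler like this on its own.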
 * Does this help? The documentation for the API says that it "can also return KML representations of features, whether those features were imported as KML or created with the API." - Cucumber Mike (talk) 08:07, 3 November 2014 (UTC)


 * That's a tantalizing clue, and I thank you for it, but no, I'm afraid it doesn't help.
 * The usage pattern I have in mind is that I have Google Earth configured with a network link fetching data from my custom service. I'd like to push modified kml from Google Earth back across the link to the custom service.
 * The page you've found is part of the documentation for what appears to be a completely different setup. It's a plugin for a webserver.  A user is using an ordinary web browser to view webpages served by that webserver.  Using the plugin, the webserver can embed Google-Earth-like maps in the pages it serves.  Just like GE, the plugin can use network links to fetch kml from a custom service.  What the documentation is saying is that the plugin can also modify the kml before it's rendered onto the maps as it's embedding them into the pages being served to the end user.  But I still don't see a way to push modified kml back to the custom service. —Steve Summit (talk) 13:36, 3 November 2014 (UTC)
 * Steve, it sounds like you want to write a plug-in that would execute inside of Google Earth; monitor user behavior or user-input, and then generate a new URL and/or a new HTTP POST. I do not believe Google Earth allows you to write plug-ins or run scripts hosted within their executable (i.e., the Google Earth stand-alone application).
 * On the other hand, if you use the Google Earth Javascript API, you can embed Google Earth UI and feature-sets into your own web-page. You could use your own JavaScript to interact with the user's in-browser version of the geographical rendering, and to query their user-input within the context of the Google Earth GUI.  Nimur (talk) 18:25, 3 November 2014 (UTC)