Wikipedia:Reference desk/Archives/Computing/2013 October 17

= October 17 =

"Drivers" for Android devices?
Hi desk, my problem is that I have Snapchat (yes, I know, don't mock me) and it plays videos fine from most people, but two people specifically send videos that result in an error message saying "Sorry, this video cannot be played". I've deduced that it's probably the format of the video they are sending that my Android phone (Kyocera Rise) is unable to play, which is why the problem only happens with these two people. Is it possible to install the Android equivalent of a driver, or some new default video player, that would allow them to play? My understanding is that it's loosely like if a friend sent me an .flv file and my default player was Windows Media Player or something, and then I installed VLC and was able to play it. But that might be totally incorrect. Googling the error message doesn't provide much insight. Thanks everyone! NIRVANA2764 (talk) 04:33, 17 October 2013 (UTC)
 * Most common types of video files actually consist of a standardized container (a "wrapper") around audio and video streams compressed with a codec. What is probably happening is that the videos that give you problems use a codec that Snapchat's video player can't handle.  (That's not the only possibility, but I think it is the most likely.)  The first step in solving this would be to figure out what codec or codecs the problematic videos use; then the question would be whether you can find handlers for those codecs for Snapchat.

Natural Language Processing and Intermediate Languages
Has anyone attempted to go the "other direction" in researching NLP, that is, by constructing languages that a computer could "understand" and communicate in, with the goal that they are as like as possible to natural languages? In other words, not trying to solve NLP, but to make a working approximation. It seems like this would be both informative and possible; the trick would seem to be balancing freedom in sentence form vs. rules that limit ambiguous formations. Thank you for any help:-)Phoenixia1177 (talk) 06:33, 17 October 2013 (UTC)


 * Lojban is certainly designed to be easy for computers to parse, and unambiguous. MChesterMC (talk) 13:12, 17 October 2013 (UTC)


 * I think eventually all computer languages will be like this. In fact, writing a program might be more like a question and answer session, where the computer asks you all the requirements, in great detail, including what to do with all sorts of errors that might happen, and then constructs a program based on those requirements. StuRat (talk) 14:46, 17 October 2013 (UTC)


 * That sounds familiar. (I wish I could find some of the old adverts for this product, which was marketed as "the last programming language you'll ever need", or some such nonsense.) AndrewWTaylor (talk) 15:43, 17 October 2013 (UTC)


 * Well, just because there was one failure doesn't mean the whole approach is faulty. For example, robotics and speech recognition have been much slower to become practical than we might have hoped, but they finally seem to be making good progress now. StuRat (talk) 03:19, 18 October 2013 (UTC)


 * Thank you:-) Lojban looks interesting:-)Phoenixia1177 (talk) 05:39, 18 October 2013 (UTC)


 * "As like as possible to natural languages" is unfortunately not a well-defined goal, but our article on formal language gives a very brief overview of work in this domain. Looie496 (talk) 16:11, 18 October 2013 (UTC)


 * There have been various attempts at natural language programming. COBOL was an early, flawed example. HyperTalk and its successor, AppleScript, were designed to look like English. Inform, especially Inform 7, is a specialized language for writing interactive fiction that is very English-like; it is designed to appeal to story writers more than programmers. See also natural language programming, which treats English as a kind of programming language for systems like Wolfram Alpha. And for the more discerning programmers, there is the Shakespeare Programming Language :-) --Mark viking (talk) 16:29, 18 October 2013 (UTC)

Thank you for the responses:-) I'm not so much looking for programming languages as for intermediate languages that computers can converse in. Limited, more structured artificial languages should be easier for a machine to understand, and at the same time should also be intelligible to a person. In such a context, various NLP-related problems might become solvable (again, in a limited setting); I think those cases could shed light on the more general problem.Phoenixia1177 (talk) 06:59, 19 October 2013 (UTC)

Faster to check out of bounds or add bounds on 2D tile array
Given an M x N array, each entry 0 or 1, representing which tiles in a square tile map can be stepped on: when checking which adjacent squares can be moved to (have a 1), is it faster to:
 * 1) Check each adjacent entry and return 0 if it is 0 or out of bounds (4 checks of the array and 16 inequalities total), or
 * 2) Add a border of 0's to the array and just do a single check?
In the second case, as long as you were on a viable map tile, you'd never get the okay to move out of bounds, so there's no issue there. Also, I realize that in general the difference will be slight, but multiple entities do these checks regularly as part of an already intensive process, so I want to remove all the overhead I can; the small hit in memory isn't an issue by itself. Finally, you can assume that M and N will never exceed 128. (In general, how does enlarging an array increase the time it takes to fetch an entry out of it?)Phoenixia1177 (talk) 07:17, 17 October 2013 (UTC)
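 * The two options in the question can be sketched side by side; this is an illustrative Python sketch (the grid representation and function names are ours, not from the discussion), not code from any poster:

```python
# Method 1: explicit bounds checking on an M x N grid of 0/1 tiles.
def walkable_checked(grid, x, y):
    m, n = len(grid), len(grid[0])
    if 0 <= x < m and 0 <= y < n:
        return grid[x][y] == 1
    return False

# Method 2: embed the grid in a border of 0's (sentinels); then a single
# lookup suffices, because out-of-bounds neighbours land on the border.
def add_border(grid):
    n = len(grid[0])
    bordered = [[0] * (n + 2)]
    for row in grid:
        bordered.append([0] + row + [0])
    bordered.append([0] * (n + 2))
    return bordered

def walkable_sentinel(bordered, x, y):
    # (x, y) are coordinates in the original grid; shift by 1 for the border.
    return bordered[x + 1][y + 1] == 1
```

Both functions give the same answers for any tile on or adjacent to the grid; the sentinel version trades O(M+N) extra memory for the four comparisons.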


 * As to your last question, the access time for an (in cache) array is O(1), because computing the address to fetch from is done arithmetically. For a 1-dimensional array, ar[x] has the address ar + sizeof(ar's elements)*x - so it's an add and a multiply, regardless of the value of x. This will get slower depending on cache and (if it's giant) on virtual memory, but that's more a function of your access pattern than the array's size. I agree that the difference between your two ideas will probably be slight, and I think you'll have to benchmark to really know if it matters at all.  If you're doing something that involves lots of checks, like implementing Conway's Life, you'll probably see much better gains by running checks for multiple tiles together rather than looping through the whole array for each - as this will be a smarter way to keep your cache hit ratio up. As with almost every algorithm on a modern computer, cache is likely to have a far more dominant effect on the performance of your algorithm than a few multiplies and comparisons. -- Finlay McWalterჷTalk 08:40, 17 October 2013 (UTC)
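 * The address arithmetic described above can be made concrete with a minimal Python sketch (the flat buffer and function name are ours, for illustration only):

```python
# Row-major 2D indexing over a flat buffer: element (row, col) of an
# M x N array lives at offset row * N + col. The cost is one multiply
# and one add, regardless of M, N, or the values of row and col.
def flat_index(row, col, n_cols):
    return row * n_cols + col

M, N = 128, 128
buffer = [0] * (M * N)
buffer[flat_index(5, 7, N)] = 1  # write to "row 5, column 7"
```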
 * Bounds checking is not normally done by either of the above suggestions. Consider an array of dimensions M x N, 1-based. An array reference a[x, y] is checked for validity not by attempting the retrieval, but by a simple test on the subscripts x and y:
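 * (The colon above suggests a snippet was lost from the archive; a plausible reconstruction of the subscript test for a 1-based M x N array, with names of our choosing, is:)

```python
# Valid subscripts for a 1-based M x N array satisfy
# 1 <= x <= M and 1 <= y <= N: four comparisons, no memory access.
def in_bounds(x, y, m, n):
    return 1 <= x <= m and 1 <= y <= n
```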
 * Fetching one item from a 2x2 array should be no different in execution time than for a 128x128 array, assuming that the whole array is in RAM and does not need to be retrieved from a swapfile. -- Red rose64 (talk) 08:42, 17 October 2013 (UTC)


 * I've used your 2nd method myself. Besides being more efficient for large arrays, it's also simpler to code and therefore less likely to fail and so requires less testing. StuRat (talk) 14:42, 17 October 2013 (UTC)
 * I've used this method too. It is simple to implement, and it works great as long as you only ever move one square at a time. However, the other way can be more flexible in the end. Say you come up with a jump or teleport ability that lets you move 2 squares - the version with bounds checking will work with no modification. The other one will require more checks, or more likely, a shift to the bounds checking system. K ati e R  (talk) 16:46, 17 October 2013 (UTC)


 * The second method is likely to be faster because it avoids conditional branches, which are surprisingly expensive on modern CPUs. Memory access is also slow, but in this case the extra memory used is small (O(M+N), while the bulk of the array is O(MN)). If you really care about performance, though, you should implement both methods and benchmark them. -- BenRG (talk) 20:40, 17 October 2013 (UTC)


 * I agree that the second one is likely to be quite a bit faster. Things like this are done on a chessboard.  Suppose you are generating all of the moves of a knight on a particular square.  You could have two extra rows and columns with a value that indicates that the knight can't move there (a sentinel value).  That should be faster than checking the X and Y values.  Bubba73 You talkin' to me? 03:27, 18 October 2013 (UTC)
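 * The chessboard sentinel idea can be sketched as follows; this is an illustrative Python version (board layout and names are ours), using a 12x12 array so the 2-square border absorbs every possible knight move:

```python
OFF = -1  # sentinel: squares a knight may never land on

# 12x12 board: an 8x8 playing area (rows/cols 2..9) inside a 2-square
# sentinel border, wide enough that no knight move can leave the array.
board = [[OFF] * 12 for _ in range(12)]
for r in range(2, 10):
    for c in range(2, 10):
        board[r][c] = 0  # 0 = empty playable square

KNIGHT_DELTAS = [(-2, -1), (-2, 1), (-1, -2), (-1, 2),
                 (1, -2), (1, 2), (2, -1), (2, 1)]

def knight_moves(r, c):
    # One array read per target square replaces four coordinate comparisons.
    return [(r + dr, c + dc) for dr, dc in KNIGHT_DELTAS
            if board[r + dr][c + dc] != OFF]
```

A corner square such as (2, 2) yields two moves and a central square yields eight, without any explicit range checks.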


 * Thank you for the answers:-) I'm playing around with a method of precalculating paths that bots can take on a 2D map, then compressing the result down using various aspects of the map. I'm able to get fairly decent file sizes for most maps; the tradeoff is that a number of adjacent-tile checks need to happen each time something wants to move. Just doing the inequality checking isn't dragging the system down or anything, but every extra bit of speed makes it more viable to actually use. (It's far, far faster than A* and more accurate than faster approximate methods; the need to carry around a file isn't the greatest, but I'm steadily getting it more manageable.) Thanks again:-)Phoenixia1177 (talk) 05:46, 18 October 2013 (UTC)


 * I've written a program to do pretty much the same thing, so it would be interesting to compare notes. How does your program work ?  (We can continue this on our talk pages, if you want, so it won't disappear into the archives.) StuRat (talk) 12:42, 18 October 2013 (UTC)


 * I'd love to discuss it more- I'll post something on my talk page tonight if you want to take a look (it'll take me a few hours before I get a chance to). Fair warning: the only language I know well is Ruby.Phoenixia1177 (talk) 03:52, 19 October 2013 (UTC)


 * I put up an explanation of my methods if you're interested.Phoenixia1177 (talk) 05:58, 19 October 2013 (UTC)


 * OK, have continued convo there. StuRat (talk) 13:02, 21 October 2013 (UTC)

Configuring modem router
Yesterday I bought a switch; the internet stopped working, so I called my ISP and they told me how to configure it, but it still didn't work, and they said the problem was on my side.

I tested with the PC connected to the modem without the switch and it worked, so I bought a new switch. I tested the new switch and it didn't work; I sent it to the store, where they tested it and it worked there. They said it could be my modem settings. But I configured it according to many tutorials on the net and followed their steps, yet it didn't work.

Am I missing something? Here is what I did:

1-Modem is TP-Link TD-8816 ADSL2+

2-Connection type is PPPoE/PPPoA

3-Username and password are set according to my ISP's instructions

4-Same for the VPI and VCI

5-Encapsulation is PPPoE LLC

6-DHCP is enabled

201.78.190.108 (talk) 12:14, 17 October 2013 (UTC) The problem was with the modem, thanks.201.78.223.139 (talk) 11:30, 18 October 2013 (UTC)

How many Wikipedia articles are linked to each other?
Could we link them all? All geographical articles could be connected to a country or ocean, and all authors could be connected within a field somehow. But could we link all of them (without meaningless linking, of course), if they are not already all linked? OsmanRF34 (talk) 15:25, 17 October 2013 (UTC)
 * They sort of do: Getting to Philosophy Mingmingla (talk) 16:03, 17 October 2013 (UTC)


 * Oh yeah that's a great game to play! I've had good runs with 1708 in piracy and 4chan.
 * There's a great tool you can use to make the process a lot faster; see here. -- .Yellow1996. (ЬMИED¡) 23:11, 17 October 2013 (UTC)

Evading ISP bandwidth shaping
I'm using CrashPlan (free version) to back my computer up to a drive in my mum's (mom's) computer. If I pause the process, wait a minute and start it again, my connection speed will be 40-60 Mb/s. Over the course of the next minute or so, it will go steadily down to about 800 Kb/s, where it stabilises. Could this be my ISP (TalkTalk) deliberately throttling my connection? I know that some ISPs might do this to certain types of traffic (like illegal p2p filesharing (I'm aware that it's not necessarily illegal, so no need to point that out)). Could my ISP be mistaking my traffic for filesharing and then throttling my connection? Any way to get around it? I've got large volumes of data to transfer, and my mum/mom doesn't like having her computer on when she's not using it. I can't sit and pause/unpause it every couple of minutes! --2.97.26.56 (talk) 20:39, 17 October 2013 (UTC)


 * You probably have TalkTalk ADSL2+ rather than fiber. If so, you weren't getting 40-60 Mb/s; that's just an artefact of the program you were using making rash estimates based on too little data. 0.8 Mb/s upstream is about what you'd expect (bottom row of the table in that article). -- Finlay McWalterჷTalk 22:03, 17 October 2013 (UTC)

iPad
How does one force a refresh given that there appears to be no Ctrl key? Kittybrewster  &#9742;  21:37, 17 October 2013 (UTC)


 * "Tap the ↻ (reload) icon next to the address in the search field to update the page," directly quoting from the iPad user guide for iOS 7. Nimur (talk) 22:24, 17 October 2013 (UTC)

Windows 8.1 search everywhere in apps view
When I set Windows 8.1 to search everywhere, and not just apps, from the apps view, the search bar in the start screen's apps view disappears. Why does this happen?Clover345 (talk) 23:29, 17 October 2013 (UTC)