Wikipedia:Reference desk/Archives/Computing/2015 July 7

= July 7 =

"Hashtag" usage
Sometimes e.g. on the BBC they say to contact them using a hashtag, e.g. "#BBCSport". I don't understand. Is this just to do with Twitter? I never use Twitter, and I don't understand much about it, but I thought "hashtags" were for flagging content. How can you "contact" someone using that mechanism? I thought Twitter contact addresses started with "@", no? 109.153.245.89 (talk) 01:01, 7 July 2015 (UTC)


 * The hashtag was brought to many people's attention via Twitter, yes. Though it is possible to use hashtags on Facebook too.  And, though I don't have accounts on other social media, I assume sites like Instagram have the same functionality.
 * You can use both '@' and '#' on Twitter, but there is a (sometimes subtle) difference between the two. In order to use '@', there must be an account with the name that follows the @.  So, while there is an account called BBC which is maintained by the BBC, and you can use the @ symbol to talk to them, there is no account called BBCGreatFridays.  (I made that up as an example of a hypothetical promotion they might run to talk about interesting things to do on the weekend.)  They don't want to make up thousands of accounts, one for every promotion they have going, so they might ask people to tag interesting weekend doings with '#BBCGreatFridays'.
 * Then when someone searches for the hashtag, they find everyone's postings about that subject.
 * By tagging things with @, the owner of the account gets a notification that someone said something to them. No notification is sent to anyone simply because someone used a #.  So, to (finally) answer your question, you aren't really "contacting" the BBC by using a hashtag; rather, everyone agrees to use that tag, knowing that someone at the BBC will be looking for it.
 * And finally, anyone can make up and start using any hashtag they want. So, if I wanted to start a hashtag for my local library's book drive, I could and there wouldn't be any need to create anything other than my own account to start using it.  Then I could tell other people in my neighborhood to start using the hashtag.  And people, again, could search for it.
 * That clearer? Dismas |(talk) 02:06, 7 July 2015 (UTC)
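The difference Dismas describes can be sketched in a few lines of Python. This is purely illustrative: the posts, the account handle, and the naive substring matching are all made up, and real Twitter search is far more sophisticated.

```python
# A hypothetical sketch of hashtag search vs. @-mentions.
posts = [
    {"author": "alice", "text": "Lovely walk today #BBCGreatFridays"},
    {"author": "bob",   "text": "@BBC your coverage was great"},
    {"author": "carol", "text": "Nothing to see here"},
]

def search_hashtag(posts, tag):
    """Anyone can search a hashtag; no account with that name needs to exist."""
    return [p for p in posts if tag.lower() in p["text"].lower()]

def mentions(posts, handle):
    """An @-mention refers to a real account, which would get a notification."""
    return [p for p in posts if handle.lower() in p["text"].lower()]

print([p["author"] for p in search_hashtag(posts, "#BBCGreatFridays")])
print([p["author"] for p in mentions(posts, "@BBC")])
```

The key asymmetry: `search_hashtag` is a pull (the BBC has to go looking for the tag), while a real @-mention is a push (the platform notifies the account owner).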


 * How do you ensure that a hashtag is unique ? In your example, couldn't TGI Fridays have also said "If you think our Barbeque Chicken is great, use hashtag BBCGreatFridays !" StuRat (talk) 02:44, 7 July 2015 (UTC)


 * Short answer is that you can't. Just this past week I heard of some movement in the Middle East using some hashtag that fit what they were doing but I would have been absolutely gobsmacked if it hadn't been used by someone else before.  I can't recall what it was right now though.  Dismas |(talk) 02:56, 7 July 2015 (UTC)
 * Also, there have been a number of times where a company or person has tried to promote themselves with a hashtag and people have used it for... alternate... purposes. link. Dismas |(talk) 03:04, 7 July 2015 (UTC)
 * Thanks Dismas for the very helpful answer. 109.153.245.89 (talk) 03:11, 7 July 2015 (UTC)


 * All the cool kids call the symbol octothorpe, as its creators intended ;) SemanticMantis (talk) 15:03, 7 July 2015 (UTC)
 * There's an argument for deference to Bell Labs in the context of American telephones, but they didn't invent the character and the Interwebs are not telephones. —Tamfang (talk) 08:54, 10 July 2015 (UTC)

Is ALGOL dead?
The article about ALGOL is not quite explicit about this. Is it dead? The last implementation of it appears to be S-algol (1979), but the article says "ALGOL is" not "ALGOL was". Is C a kind of evolution of ALGOL? Or is another language a modern ALGOL? --Yppieyei (talk) 01:22, 7 July 2015 (UTC)


 * Simula is a somewhat more modern ALGOL. One could argue that BETA (programming language) is the successor to Simula.


 * Lots of interesting info here: --Guy Macon (talk) 10:29, 7 July 2015 (UTC)


 * Old programming languages never die, they just fade away. There is always some legacy code out there that needs to be maintained. Looie496 (talk) 12:05, 7 July 2015 (UTC)


 * I do not dispute that there must be some ALGOL program being maintained by a geek somewhere. Apparently Burroughs large systems are an example of it; I am not sure anyone still uses them. One day it becomes easier to implement a new system than to maintain an old one. I suppose no one uses BASIC as an entry language anymore, so it is somehow dead now, unless you consider that it lives on through its descendant Visual Basic .NET. Other big languages that flourished at the same time - LISP, COBOL, FORTRAN - seem to be more alive than ALGOL or BASIC. --Yppieyei (talk) 17:53, 8 July 2015 (UTC)


 * ALGOL is effectively dead, yes. As far as "modern ALGOL" goes, ALGOL is one of the most influential programming languages ever; pretty much every younger language takes at least some influence from it. As our article describes, ALGOL pioneered things such as the idea of separating code into blocks, and lexical scoping, as well as the Backus–Naur Form for formal descriptions of language grammars. In some sense you could consider any language incorporating any of those concepts to be a descendant of ALGOL. C is even closer to ALGOL than many other languages, being a member of the imperative language family, which is almost entirely patterned after ALGOL in terms of syntax and control flow; indeed, these languages are sometimes called "ALGOL-like", mostly in the '70s and '80s when use of ALGOL itself was still fairly common.
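Lexical scoping, one of the ALGOL innovations mentioned above, survives in nearly every modern language. A small illustration in Python (the example itself is mine, not from ALGOL):

```python
def make_counter():
    # 'count' is lexically scoped: it belongs to this enclosing function,
    # and the inner function below can see and update it.
    count = 0

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

counter = make_counter()
print(counter(), counter(), counter())  # 1 2 3
```

Each call to `make_counter` creates a fresh `count`, invisible to the rest of the program; this block-local binding of names is exactly the discipline ALGOL introduced.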


 * Classifying a language as "dead" is always going to be a matter of some debate. As you note, there's still a good amount of legacy ALGOL code being used, mostly on mainframes. COBOL is an even bigger example of a language that's widely considered "dead", but with lots of code still in widespread use. The world financial system runs on COBOL to this day. There was even a new COBOL standard released just last year! The consensus definition of a "dead" language seems to be one that is no longer being used for new large-scale software systems. There are plenty of people still programming in COBOL, and there's always some people interested in retrocomputing, but no one today is seriously considering building complex software from scratch in the language. Comparison with natural languages is useful; Latin and Classical Chinese are universally considered dead languages, yet people today still learn and even use them. The difference is that no one learns them as a native language.


 * Lisp and BASIC, two of the languages you mentioned, are kind of different, since today they are properly language families rather than single languages. No one is using the original Lisp 1 that John McCarthy created, or the original Dartmouth BASIC; they're using descendant languages like Visual Basic or Common Lisp and Scheme. To illustrate the contrast, C has its own descendant languages, most notably C++ and Objective C, but C itself is still enormously popular. --108.38.204.15 (talk) 22:20, 8 July 2015 (UTC)

Blurred bands on sides of photos or video
I'll often see banding on the side of images or video when they are posted to various news sites. I'm sure other sites do it as well but I see it most often on news sites. It is often used when someone has taken a photo with a cell phone in portrait mode and the space the site has for the photo is wider than the photo. So they will blur the sides where the empty space is. You can see an example here. What is the name for that? Thanks, Dismas |(talk) 03:50, 7 July 2015 (UTC)


 * The term "blurred pillarbox" seems to attract the most relevant Google hits, but I have not found anything that indicates that this is an industry standard term. -- Tom N  talk/contrib 05:50, 7 July 2015 (UTC)

Computer science and Axiom of infinity.
Our article Axiom of infinity asserts: "In...the branches of...computer science that use it, the axiom of infinity is..." etc.

I wonder, how and where - the axiom of infinity must be used by Computer science. Isn't Computer science consistent with the finitistic philosophy? HOOTmag (talk) 08:35, 7 July 2015 (UTC)
 * No. Computer science has nothing to do with finitism, which is a branch of mathematical philosophy. You can program computers to do all sorts of math -- no fractions and a finite number of bits are very common variations. Most PCs support IEEE floating point math, which does have the concept of infinity. --Guy Macon (talk) 10:12, 7 July 2015 (UTC)
 * Ok, I see I was not clear enough in the last question of my previous post, so I've just changed it.
 * However, I still wonder - how and where - the axiom of infinity must be used by Computer science.
 * Regarding IEEE floating point (which I think has nothing to do with infiniteness): AFAIK, every program using floating point has an upper bound for the number of digits of any integer (and of any real number), so that no program has to presuppose the infiniteness of natural numbers, hence doesn't need the axiom of infinity, does it? HOOTmag (talk) 10:27, 7 July 2015 (UTC)
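Guy Macon's point can be seen directly in Python, whose `float` is an IEEE 754 double. Note that this supports HOOTmag's observation too: IEEE "infinity" is a single sentinel value with saturating arithmetic, not a claim about infinite sets.

```python
import math

inf = float("inf")

# Infinity compares greater than any finite double (the largest is ~1.8e308).
print(inf > 1.7e308)

# Arithmetic with infinity saturates rather than raising an error.
print(inf + 1 == inf)

# Overflow in ordinary arithmetic also produces infinity.
print(1e308 * 10 == inf)

print(math.isinf(inf))
```

All four lines print `True`; the "infinity" here is just a reserved bit pattern defined by the IEEE 754 standard.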


 * I can't see how the axiom could come into play at a practical level. At a theoretical level it does, though.  For example Alan Turing proved that there is no general solution to the Halting problem, but the proof clearly depends on the existence of an infinite number of natural numbers. Looie496 (talk) 12:01, 7 July 2015 (UTC)
 * The trivial fact (presupposed by Turing) that no maximal finite number exists, doesn't imply that there does exist - an infinite number - or an infinite set of finite numbers (as claimed by the axiom of infinity). HOOTmag (talk) 12:27, 7 July 2015 (UTC)
 * I agree with Looie - My understanding is that it doesn't come up much at all in software development or other applied computer techniques - or at least it doesn't really matter if there is or isn't an infinite set. For another example - Computability_theory is almost exclusively interested in functions on the natural numbers, and hence deals with an infinite set. There is also much more to the halting problem than the lack of a largest finite number. From the article "The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs" - hence the very framing of the problem depends on infinite sets existing, not to mention the proof techniques. SemanticMantis (talk) 14:56, 7 July 2015 (UTC)
 * Regarding Computability theory: yes, but I suspect it's a branch of mathematics rather than of Computer science, although the concept of Turing machine plays a significant role in this mathematical branch.
 * Regarding the sentence you've quoted from the article "halting problem": It only assumes that no finite set contains all programs (or all inputs), but this doesn't imply that there does exist an infinite set containing all programs (or all inputs). HOOTmag (talk) 16:33, 7 July 2015 (UTC)
 * To the last bit - No? How do you figure? If no finite set contains all inputs, then all inputs must form an infinite set. Or are you thinking about category theory and thinking such a collection may fail to be a set? Or maybe you're getting philosophical, and just want to deny that a statement like "all inputs" has any rigorous meaning? Anyway, the distinction between math, applied math, mathematical logic and theoretical computer science is a bit blurry, and often has more to do with convention and history than any real difference in methods. You may find computability theory occasionally covered in a "topics in X" type course in a math program, but computer science is where computability is typically covered. My WP:OR experience is that computer science as a research topic in academia almost always implicitly allows for the conceptual existence of infinite sets, just like almost all areas of math do. But for fairly obvious reasons, it's hard to find a WP:RS that says "The theory of computer science generally allows for the conceptual existence of infinite sets" - even relatively few mathematicians dwell that much on the axioms unless they are doing a type of research that is focused on certain axioms. For examples of the ubiquity of AOI in CS, here's some course notes where they introduce induction for the purpose of proving things in computer science . And you certainly cannot use induction as a system for valid inference if you aren't allowing for the natural numbers to exist. Really, anything that ever acknowledges existence of the natural numbers must then also allow for infinite sets. Additionally, here are some computer science textbooks that use and explain AOI for computer scientists  . Generally, AOI "must" be used in CS in exactly the same places where it "must" be used in math or logic. While the day-to-day practice of writing software doesn't often need to invoke AOI, that's not what we usually mean by "computer science". 
Computer science uses infinite sets frequently. Though I can't cite a ref for that claim, hopefully the examples above are fairly persuasive evidence in support. SemanticMantis (talk) 17:28, 7 July 2015 (UTC)
 * "If no finite set contains all inputs, then all inputs must form an infinite set". Your assertion presupposes AOI, which cannot be logically proved. Notice that if - AOI had been wrong - so that no infinite object (set / collection / class / whatever) had existed, then the correct fact indicated by you - that "no finite set contains all inputs (integers / objects / whatever)" - wouldn't have implied your pseudo-conclusion that "all inputs (integers / objects / whatever) must form an infinite set (collection / class / object / whatever)".
 * "Or are you thinking about category theory and thinking such a collection may fail to be a set?". No. I'm thinking from an absolutely logical point of view. For this discussion, it doesn't matter whether you name it a "set" or a "collection" or a "class" or whatever; The point is, that without assuming AOI in advance, you cannot conclude that there exist infinite objects (sets / collections / classes / whatever), even when you do know in advance - that for every finite number (and every finite input and likewise) there exists a larger finite number (and a larger finite input and likewise).
 * "I can't cite a ref for that claim, hopefully the examples above are fairly persuasive evidence in support". Thank you for the sources, but I still wonder whether CS must assume AOI. Take Turing machine as a simple example: its classic definition assumes, that every Turing machine has - an infinite storage tape - being an infinite object (of course). However, I wonder whether - this assumption contained in that classic definition - is really needed for reaching all the theorems we have already reached about Turing machines; Just try to think about that: What would have happened to those theorems, if we had only presupposed a more modest assumption which had assumed, that "for every Turing machine which uses a given finite storage tape - one can build another Turing machine which uses a longer finite storage tape"? Such a modest assumption, only assumes that no finite set (collection / class / object / whatever) contains - all Turing machines - or all finite lengths of storage tapes used by Turing machines, but this doesn't imply that there does exist an infinite set (collection / class / object / whatever) which contains - all Turing machines - or all finite lengths of storage tapes used by Turing machines; Yet, all the theorems about a Turing machine would have (probably?) been reached - even under the modest assumption mentioned above, wouldn't they?
 * HOOTmag (talk) 18:54, 7 July 2015 (UTC)
 * The last paragraph makes the most sense to me. Points 1/2 are confusing. Can you find examples of serious scholarly research that deals with arbitrarily large natural numbers but yet does not allow for the existence of natural numbers as an infinite set? I don't know how that's supposed to work, but maybe I'm just ignorant of some of the alternative finitism constructions. Even finitism says that classical finitism allows countable infinities, and only rejects uncountable sets. That is a much more common distinction, but that's a whole different question. You seem to be talking about what the article calls "strict finitism."
 * Of course there are many results in CS that don't depend on AOI. I don't have time to do more research today, but I imagine there is research in CS much like the people in math who work with the axiom of countable choice (or various weaker versions), or use no axiom of choice at all. Much like you can do a lot of math without uncountable choice, you can probably prove a lot of CS results without AOI, and only allow arbitrarily large but finite sets. But since we don't actually compute on infinite sets, anyway, most CS researchers are happy to use infinite sets as a convenience for proving things. I think maybe a few of us got confused by the original wording - to me your point 3 is much clearer and would be a better way to start future discussion. I'd suggest trawling google scholar with searches like /[(strict) finitism], computability, ultrafinitism/ and see what you can find. This paper  and refs therein might be a good starting point. SemanticMantis (talk) 19:15, 7 July 2015 (UTC)
 * The original aim of AOI is to reject strict finitism (this is a well known fact in the philosophy of mathematics), whereas my original question has been about whether the current computer science can be consistent with strict finitism, i.e. without AOI.
 * As for the new source you've provided: thank you, I will have a look at it after I have my dinner... :) HOOTmag (talk) 19:25, 7 July 2015 (UTC)
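Turing's diagonal argument, mentioned by Looie496 above, can be sketched in runnable form. Everything here is hypothetical scaffolding: `always_yes` stands in for any claimed halting decider, and the "loop forever" branch returns a marker string so the demonstration actually terminates.

```python
def make_contrarian(halts):
    """Given a claimed halting decider, build a program it must misjudge."""
    def contrarian():
        if halts(contrarian):
            # A real contrarian would loop forever here;
            # we return a marker string instead so the sketch terminates.
            return "looping forever"
        return "halting"
    return contrarian

# Any candidate decider is wrong on the program built from it.
# Here the candidate simply predicts "halts" for every program:
always_yes = lambda program: True
c = make_contrarian(always_yes)
print(c())  # the decider predicted it halts, yet it takes the non-halting branch
```

The construction works for any decider you plug in: whatever `halts` predicts about `contrarian`, the program does the opposite, which is the contradiction at the heart of the proof.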


 * This thread mentions that Peano arithmetic and (ZF − Inf) + ¬Inf are bi-interpretable. I know nothing about model theory but I think that means that anything provable in PA can't really be said to depend on Inf. -- BenRG (talk) 19:31, 7 July 2015 (UTC)
 * Correct. Axiom of infinity, is a huge step (of Set theory) - that goes far beyond Peano arithmetic (which only assumes that for every finite object there exists a larger finite object, without presupposing the existence of infinite objects). HOOTmag (talk) 19:36, 7 July 2015 (UTC)
 * Practical computer scientists don't spend a lot of time worrying about the precise axiom set used to prove a given result. They make free use of mathematics, and mathematics includes infinity.  For any particular result, maybe you can reformulate it and rework the proof in such a way as not to rely on infinity.  But so what?  You don't gain anything; you just make life harder on yourself for no benefit. --Trovatore (talk) 04:27, 8 July 2015 (UTC)
 * Please have a look at the first sentence of this thread, just under the title. This is my point ! Whether the sentence I've quoted from our article is correct. HOOTmag (talk) 06:54, 8 July 2015 (UTC)
 * Well, exactly. The "branches ... that use it".  Not that must use it.  Just use it.  You are unjustifiably conflating the two things, and this is your fundamental mistake. --Trovatore (talk) 07:41, 8 July 2015 (UTC)
 * I think you'd better interpret "use it" as "must use it". Had one interpreted "use it" as "can use it without it being necessary", one could have added - to our article - lots of other branches that could have used it, e.g. Yoga exercises, tips for journeys to Antarctica, and the like. HOOTmag (talk) 08:36, 8 July 2015 (UTC)
 * To satisfy the Gricean maxim of relevance, it doesn't have to be necessary, but ideally it ought to help. I don't see any obvious way that knowing about the existence of completed infinite totalities helps you do yoga, though I certainly wouldn't exclude it.  But having a mathematical framework that includes completed infinities is helpful in all sorts of situations even if they don't, at the end of the day, refer to completed infinities as their direct objects of discourse. --Trovatore (talk) 16:46, 8 July 2015 (UTC)
 * I removed computer science from the article, not on the grounds that it doesn't use Inf but just because once you start listing specific branches of mathematics (or fields that use mathematics) there's no end to it.
 * The original question was how much of computer science you can construct without depending on Inf. I know you (Trovatore) feel that these kinds of questions aren't interesting and people should just use the axioms that are true and not worry about subsets, but they are interesting to a lot of people including me and the OP. -- BenRG (talk) 22:45, 8 July 2015 (UTC)
 * Hmm? I never said they weren't interesting.  I think they are interesting, but they're not really computer science.  They're more like reverse mathematics. --Trovatore (talk) 23:02, 8 July 2015 (UTC)

command-line tools for matrix arithmetic?
Is anybody aware of a nice little collection of shell tools for manipulating matrices represented as text files? I'm thinking addition, multiplication, inverse, determinant, identity matrix creation, scaling and rotation if the matrix is a Rotation matrix, etc. I'll write my own if I have to, but it seems like the sort of things someone's probably written already. —Steve Summit (talk) 14:20, 7 July 2015 (UTC)


 * GNU Octave has an interactive command-line prompt. It can execute interactively, in a REPL-style environment; and it can run pre-written scripts or execute single commands and then terminate.  It is an excellent tool for matrix math and linear algebra.
 * Here is Simple File I/O from the Octave documentation. You can load and save matrix data using plain-text files, or binary files.  You may wish to use the "csvread" or "textread" style functions.
 * Nimur (talk) 14:25, 7 July 2015 (UTC)


 * You can use NumPy; here's a trivial example. If in.arr is

1 3 5
2 5 3
0 2 -3
 * then this program will read that in as a matrix, transpose it, multiply the original by the transpose, and save the output to out.arr


 * -- Finlay McWalterᚠTalk 14:38, 7 July 2015 (UTC)


 * These are both great solutions, but I read the question a little differently. I thought Steve meant to rule out loading up things like Octave or Python/NumPy... which aren't usually considered "shell tools" like grep et al. are. That said, I don't know of any sort of "native" Bash shell tools to do matrix manipulation, and I would use one of the options above. SemanticMantis (talk) 14:47, 7 July 2015 (UTC)


 * On the one hand, SemanticMantis is exactly right. But on the other hand, I was already thinking of using Mathematica or MathCAD (which I don't have a copy of), meaning that I should have been thinking about Octave or something. :-) —Steve Summit (talk) 15:01, 7 July 2015 (UTC)


 * A great accompaniment to NumPy/SciPy/MatPlotLib is IPython (and particularly ipython-notebook) which gives an interactive Mathematica-like workbook environment - a (static) example is here; one need only run the ipython-notebook service on one's local machine and one can alter the various in [X] sections and re-run each individual calculation. -- Finlay McWalterᚠTalk 15:11, 7 July 2015 (UTC)

C# Command line options in Visual Studio 2010?
Hello everyone. How would I get the command line options for the csc compiler that Visual Studio uses to compile a C# project? I can get the options for a C++ project by simply accessing the project's properties, clicking the Linker tab, and selecting the Command Line tab. This shows me the command line options VS uses to compile the C++ project. What I want to know is how do I do this in a C# project? If this isn't possible, then how would I use the C# compiler (csc.exe) to compile a VS 2010 project? I think the input file would be Program.cs, right? —SGA314 I am not available on weekends (talk) 19:31, 7 July 2015 (UTC)
 * This is the relevant page from the Microsoft website. Tevildo (talk) 22:29, 7 July 2015 (UTC)
 * I can't go to external links. My ISP has most links blocked. —SGA314 I am not available on weekends (talk) 14:27, 8 July 2015 (UTC)
 * You can't get to microsoft.com? Okay....  In Visual Studio, Help > Index > command-line building. Tevildo (talk) 23:13, 8 July 2015 (UTC)
 * Um, where would the VS help file (chm or hlp) be stored? For some reason, VS won't open up the help file. I go to the Help menu and click View Help but nothing happens. I think VS is supposed to start up a server that lets my browser browse the help pages. So where are the help pages? —SGA314 I am not available on weekends (talk) 13:39, 9 July 2015 (UTC)
 * The actual help files, in Microsoft Help Viewer format, should be in C:\ProgramData\Microsoft\HelpLibrary2. Tevildo (talk) 19:20, 9 July 2015 (UTC)
 * I don't have the C:\ProgramData\Microsoft\HelpLibrary2 but I do have the C:\ProgramData\Microsoft\HelpLibrary. The directory contains various folders that have *.mshi, *.mshc, and *.metadata files. I believe VS starts a local web server that interprets these files into HTML webpages. Where would the server executable be located? —SGA314 I am not available on weekends (talk) 16:06, 10 July 2015 (UTC)

Can I capture item details in a Windows Explorer list as text items?
If I have a Windows Explorer (Windows 7) folder with twelve items in it, and I want to capture the names or details of those items as text, can I do this by some method like cut and paste so that I can simply send the text describing the files, but not the files themselves? For example, I want to email a friend to let her know I am sending her 12 songs. How can I copy the names of the songs from the WE folder and paste that into gmail without actually dragging the files themselves into the email as attachments? Thanks. μηδείς (talk) 21:27, 7 July 2015 (UTC)


 * This may help . SemanticMantis (talk) 21:56, 7 July 2015 (UTC)
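For anyone who prefers a script to clicking around Explorer, the same idea can be sketched in a few lines of Python; the folder path in the commented-out line is hypothetical.

```python
import os

def file_names(folder):
    """Return just the names of the files in a folder,
    ready to paste into an email body."""
    return sorted(
        name for name in os.listdir(folder)
        if os.path.isfile(os.path.join(folder, name))
    )

# print("\n".join(file_names(r"C:\Users\Me\Music\ToSend")))  # hypothetical path
```

Running it prints one filename per line, which can then be pasted into Gmail without attaching anything.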


 * Thanks, SM, that was unintuitive of Windows, but very helpful on your part. μηδείς (talk) 22:10, 7 July 2015 (UTC)