Wikipedia:Reference desk/Archives/Computing/2015 August 1

= August 1 =

Email recursion
Can a mailing list be subscribed to itself? If yes, what would happen when a new message is posted? -- Ricordi  samoa  07:34, 1 August 2015 (UTC)
 * Hopefully the mailing list software would stop that, otherwise you may get a mail loop, with amplification each time around the loop so that everyone on the list gets thousands of copies of the same message. Graeme Bartlett (talk) 08:14, 1 August 2015 (UTC)


 * For example, GNU Mailman's documentation for how it prevents such "mail loops" is here. Modern listservs add a range of extra headers to emails as they forward them (per RFC 2369) and use them (and some clever filtering, which anticipates some of the more common stupid things that can happen) to handle loops, replies, and the dreaded "I am out of the office" reply. -- Finlay McWalterᚠTalk 17:56, 1 August 2015 (UTC)
 * BTDTGTTS: in the early 1990s I wrote a web-based bulletin board cum mailing list system that had the occasional mail storm, for example when an out-of-office message was malformed, which was pretty scary if it wasn't caught in good time, though I think it was at least sensible enough not to send emails to itself. AndrewWTaylor (talk) 20:22, 1 August 2015 (UTC)
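 * The loop prevention described above can be sketched in a few lines. The following is an illustrative Python fragment, not Mailman's actual code: `X-BeenThere` is a real Mailman header stamped on every outgoing list message, and `Auto-Submitted` comes from RFC 3834's recommendation for marking auto-replies; the `LIST_ID` value and the function itself are hypothetical.

```python
from email.message import EmailMessage

LIST_ID = "example-list.lists.example.org"  # hypothetical list identifier

def should_redistribute(msg: EmailMessage) -> bool:
    """Decide whether a list should forward a message to subscribers,
    breaking the mail loops discussed above."""
    # Mailman stamps outgoing mail with X-BeenThere; finding our own
    # stamp means the message has already been through this list.
    if LIST_ID in msg.get_all("X-BeenThere", []):
        return False
    # RFC 3834: auto-replies ("out of office") carry an Auto-Submitted
    # header with a value other than "no"; don't redistribute those.
    if msg.get("Auto-Submitted", "no").lower() != "no":
        return False
    return True
```

A message that has looped back, or an autoresponder reply, is dropped rather than amplified; a list subscribed to itself would thus forward a new posting at most once.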

What is inking?
In the custom install of Windows 10, in a couple of places it mentions "inking". What does that mean (I can't find it)? Bubba73 You talkin' to me? 14:42, 1 August 2015 (UTC)


 * "Inking" seems to be their word for the act of writing on a touchscreen with a finger or stylus. Presumably they say that and not "writing" to make it clear that they're not talking about writing with a keyboard. They could have said "stroking". -- Finlay McWalterᚠTalk 15:09, 1 August 2015 (UTC)


 * Even though there is no ink involved. Thanks. Bubba73 You talkin' to me? 15:35, 1 August 2015 (UTC)


 * I would have favoured "rubbing" as a nod to Isaac Asimov's portrayal of touch-screen technology (or possibly touchpads) in Foundation's Edge (1982) where the native Hamish peasants disparage the academic Second Foundationeers as effete 'pewter rubbers. {The poster formerly known as 87.81.230.195} 212.95.237.92 (talk) 13:46, 3 August 2015 (UTC)

How do they design the italic fonts in word processing?
This question concerns italic fonts in word processing (for example, Microsoft Word). I had always assumed that the italicized letters were completely new and different letters, created for each letter (A, B, C, etc.) in the particular font. In other words, the creator of the font "designs" (for lack of a better word) the letter "A", the letter "B", and so forth; then, completely independently, he designs an italic "A", an italic "B", and so on. However, it just dawned on me today that perhaps that is not the case. My new theory is that they simply take the regular font and merely rotate it a few degrees clockwise, rather than designing new letters at all. So, does anyone know anything about this? Do they design 52 completely new letters (26 each, upper- and lower-case) for the italicized alphabet, separate and distinct from the 52 non-italicized letters? Or do they simply take the 52 non-italicized letters and rotate them a few degrees clockwise? I hope this question makes sense. Thanks. Joseph A. Spadaro (talk) 20:05, 1 August 2015 (UTC)
 * Italic type has examples of how the italic letter tends to vary in its basic design from the roman equivalent. I think these differences tend to be less marked in sans-serif typefaces such as Helvetica, but in general the italic letters are designed separately, though with many of the typographical characteristics and "feel" of the roman. A merely slanted (sheared, rather than rotated) version is called Oblique type. AndrewWTaylor (talk) 20:17, 1 August 2015 (UTC)
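 * The "slanting" that produces an oblique can be stated precisely: it is a horizontal shear of the glyph outline, not a rotation (a rotation would tilt the baseline too). A minimal sketch in Python, assuming glyph outlines are given as (x, y) points with the origin on the baseline; the function name and the 12° default slant are illustrative, though slants around 10-12° are typical:

```python
import math

def obliquify(points, slant_degrees=12.0):
    """Synthesize an oblique by shearing upright outline points:
    x' = x + y * tan(slant), y' = y.
    Points on the baseline (y = 0) stay put; higher points shift right,
    which is why the baseline stays level, unlike a true rotation."""
    k = math.tan(math.radians(slant_degrees))
    return [(x + y * k, y) for x, y in points]
```

This is roughly what a renderer does when it fakes an italic for a font that ships without one; a true italic, as noted above, is drawn separately.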


 * Thanks. A somewhat unrelated question: who would go to all the trouble and bother of designing a font? They don't receive any money (or payment) anywhere in the process, do they? I mean, fonts are simply "out there" and available free to everybody, no? Or is there some piece that I am missing? Joseph A. Spadaro (talk) 20:47, 1 August 2015 (UTC)
 * High-quality fonts certainly are sold for significant amounts of money. Microsoft commissioned the design of Arial to get around licensing Helvetica. Of course, Donald Knuth's Metafont and the Computer Modern fonts opened up the field, and there are now quite a few good-quality free fonts around, at least partially in the spirit of Free Software. --Stephan Schulz (talk) 21:09, 1 August 2015 (UTC)


 * Thanks. But, I am still missing the part where money/payment enters the picture.  I have been using computers for 25-30 years.  I always have hundreds of fonts in my Microsoft Word program.  I have never once paid a dime for any font.  When I purchase Microsoft Word, is part of that money paying for the fonts that are included in Word?  I was told that Word itself does not have any fonts.  But, rather, the fonts are stored in my computer (maybe in the operating system?) and that Word simply accesses those.  When I purchase Windows, is part of that money paying for the fonts that are included in Windows?  Joseph A. Spadaro (talk) 04:21, 2 August 2015 (UTC)
 * It depends. Some word processors did come with extra fonts as part of the deal. But yes, in modern OSes a lot of system fonts are included and paid for with the OS (or even with the computer, e.g. in the case of Apple). Modern OSes have central repositories for fonts, and may offer facilities to automatically replace fonts you don't have with ones you do have (as in the case of Arial/Helvetica). But take a look at e.g. http://www.webtype.com for what you can buy. --Stephan Schulz (talk) 07:38, 2 August 2015 (UTC)


 * So, just for clarification. When I pay $100 (or whatever) to purchase Windows, a part of that money is to pay for the fonts that Windows gives me.  Correct?  And the Microsoft people are paying somebody out there (whoever) for the rights to include their special font in the Windows package.  Right?   So, out of the (hypothetical) $100 that I pay Microsoft to purchase Windows, Microsoft pays $2 (hypothetical price) of that price to the creator of the "Century Gothic" font?   Joseph A. Spadaro (talk) 20:31, 2 August 2015 (UTC)
 * Roughly. Some of the fonts will be free, and Microsoft created Arial itself rather than licensing it, so they don't pay per sale for that font. I don't know what their arrangement with other font designers is - lump sum or volume pricing. --Stephan Schulz (talk)


 * To be more specific, Microsoft owns some of the fonts (Arial) and licenses others (including Century Gothic) from Monotype Imaging. How much do they pay Monotype Imaging? I haven't seen an exact number, but I've seen estimates of about $250k. Now, to make things interesting, Monotype Imaging is a software company at heart. They use computer programs to make their product, so they themselves use a lot of software that they need to license, and they use Microsoft products almost exclusively. How much do they pay Microsoft? The exact numbers are not easy to find, but I've seen estimates of about $250k. Combining the two, I wouldn't be surprised if it is just a handshake, both sides sliding numbers around, with no money changing hands. According to Monotype Imaging's stock reports, they depend more on printer licensing than on Microsoft. 209.149.113.45 (talk) 17:17, 3 August 2015 (UTC)


 * Thanks. What does "printer licensing" mean? Joseph A. Spadaro (talk) 17:23, 3 August 2015 (UTC)


 * Printer companies license the fonts from Monotype Imaging. 209.149.113.45 (talk) 17:49, 3 August 2015 (UTC)

Thanks, all. Joseph A. Spadaro (talk) 03:49, 4 August 2015 (UTC)

Expressive power of computer languages: it's all about the syntax/logic?
When analyzing the theoretical expressive power of programming languages (not the verbosity of the languages or how concise programs are), are there further criteria besides the class of formal grammar in the Chomsky hierarchy and the type of logic chosen (for example first-, second-, or higher-order logic, type theory, or whatever other logical formalism)?

Is there an analysis of the expressive power of data structures? Are there other constraints that limit the expressiveness of a computer language?

It seems that all these formalisms deal only with how we combine ideas, but not with how well we can express an idea using a programming language. For example, they don't enter into the question of the limitations of an imaginary programming language with only binary and integer primitive types. — Preceding unsigned comment added by Denidi (talk • contribs) 21:01, 1 August 2015 (UTC)


 * But it's all about verbosity/conciseness/convenience/elegance. You can write almost any program in almost any language (aside from specially limited languages). Almost every language is Turing-complete, and any two Turing-complete languages can simulate each other, so they have equivalent computational power (cf. the Church–Turing thesis).
 * So the answer is that they are all precisely equal...if you ignore all the things you're telling us to ignore. SteveBaker (talk) 21:08, 1 August 2015 (UTC)


 * That means that Java and C, for example, have the same expressive power, although it might be easier to use one or the other depending on the program at hand. However, are there still things that can be expressed in neither of these two languages nor in any Turing-complete language?


 * Is there a formal language more expressive than all the languages that we know? Or have we already reached the top, even if we could still develop programming languages that are more convenient or elegant for us humans to use?


 * Is the expressive power of a Turing-complete language even comparable to a natural human language? --Denidi (talk) 22:40, 1 August 2015 (UTC)


 * We could define a language with non-computable functions but then it is believed it would be theoretically impossible (and not merely practically impossible) to evaluate programs in the language. If we want it to be possible to actually evaluate our programs then all languages can do the same, except if they have constraints which make them unable to evaluate some of the computable functions. There are very simple languages like Turing machines (which are basically a computer language from before that became a term) which can evaluate all computable functions. Note: one of the conditions for being able to evaluate all computable functions is being able to use arbitrarily much memory. When it comes to practical implementations of languages, like compilers and interpreters, they may fail on that detail. But who cares whether your program could theoretically handle a googolplex bytes if the universe had just been large enough to fit the memory chips? PrimeHunter (talk) 23:26, 1 August 2015 (UTC)


 * Anything a real 2015 computer can compute (using any computer programming language), a Turing machine can also compute.
 * Most computer programming languages, if we ignore memory resource limitations, are Turing-complete -- anything a Turing machine can compute, you can write a program that (when run on a computer with "enough" memory) can compute the same result.


 * There are many things (such as the halting problem and certain functions written in super-Turing programming languages) that cannot be computed by any Turing machine. Those things therefore cannot be computed by any program in any Turing-complete language on any real computer built before 2015.
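 * The halting problem's non-computability can be made concrete with the classic diagonal argument, which even fits in runnable code. A sketch, assuming a claimed halting oracle `halts(f)` that returns True iff calling `f()` terminates; the names `make_diagonal` and `g` are illustrative:

```python
def make_diagonal(halts):
    """Given a claimed halting oracle, build a function g on which the
    oracle must be wrong: g does the opposite of what halts predicts."""
    def g():
        if halts(g):
            while True:   # halts said g halts, so g loops forever
                pass
        # halts said g never halts, so g returns immediately
    return g
```

Feeding in any actual oracle exposes the contradiction: an oracle that answers False for `g` is refuted the moment `g()` returns, and one that answers True would make `g` loop forever. Hence no total, correct `halts` can exist.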


 * Some people speculate that we will someday be able to build machines that solve at least some of those things -- super-Turing computation.


 * Other people speculate that anything that can be computed in our physical universe, can be computed by a Turing machine -- the Church–Turing thesis, Church–Turing–Deutsch principle, digital physics, etc. If that is true, this would imply that anything that human brains or anything else using natural languages (or both) can do, or anything that any computational machine we build can do, a Turing machine can (eventually) also do.
 * While we may be able to talk about things like oracle machines, and talk about machines that run super-Turing programming languages, if "digital physics" is true, such devices cannot actually be built in our universe.


 * It is still an open problem which of these two speculations will turn out to be true; one of a list of unsolved problems in physics. --DavidCary (talk) 04:11, 2 August 2015 (UTC)


 * While, I suppose, it's conceivable that a computer with more power than a Turing machine could be made (I rather doubt it), it has not yet been made. Even the simplest machine codes (e.g. SUBLEQ) are Turing-complete, and even the most complex programming languages can be run on standard computers, which means that they too are no more powerful than a Turing machine. So it doesn't matter what might theoretically happen with some astounding future invention - right now, all programming languages are equal in power.
 * That said, there are huge differences. It's a heck of a lot easier to write almost anything in Java, C++, Python, PHP, JavaScript than it is to write in SUBLEQ machine code...but it's only a matter of convenience.  SteveBaker (talk) 05:05, 2 August 2015 (UTC)
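 * To make the SUBLEQ point concrete, here is a minimal interpreter sketch in Python. Assumptions are mine: memory is a flat list of integers, each instruction is three operands a, b, c meaning "mem[b] -= mem[a]; jump to c if the result is ≤ 0", and a negative jump target halts (conventions vary between SUBLEQ dialects).

```python
def run_subleq(mem, pc=0, max_steps=10_000):
    """Interpret a SUBLEQ program and return the final memory image."""
    mem = list(mem)                      # work on a copy
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        if pc < 0:                       # negative target = halt
            return mem
    raise RuntimeError("step limit exceeded")

# Demo program: B += A.  Data cells: A at 9 (=7), B at 10 (=5), scratch T at 11 (=0).
ADD_DEMO = [9, 11, 3,    # T -= A        (T becomes -7, <= 0, continue at 3)
            11, 10, 6,   # B -= T        (B becomes 5 + 7 = 12)
            11, 11, -1,  # T -= T, halt  (0 <= 0, jump to -1)
            7, 5, 0]     # A, B, T
```

That a dozen lines of Python suffice to host this one-instruction language - and that, conversely, SUBLEQ can in principle host a Python interpreter - is the mutual-simulation argument for their equal power.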
 * Cue INTERCAL. The standard way to show that language (or formalism) X is Turing-complete is to show that it can simulate a language or formalism Y that is already Turing-complete. Since everything we do on existing computers in the end boils down to machine code (which a TM, or, more conveniently, a C simulator, can simulate), nothing we can develop on such a computer can be more expressive than the Church–Turing class. --Stephan Schulz (talk) 07:45, 2 August 2015 (UTC)