Wikipedia:Reference desk/Archives/Computing/2015 October 23

= October 23 =

What are the current impediments in creating one font containing every character in Unicode?
OP here: merging Unifont with the Code200x series, got curious. Mahfuzur rahman shourov (talk) 14:33, 23 October 2015 (UTC)


 * Unicode is optional. So, as a user, I have the option to include or exclude characters as I see fit. If I don't want CJK codes, I don't have to store them on my hard drive and I don't have to load them in memory. Stepping back to the developer of the font... Understanding that users only require the codes they want to use, should the developer create a font and/or glyph for every possible character that may ever be included in Unicode? They can. It just takes time. If a developer makes such a font, will it be used? I certainly wouldn't use it as I have no interest in about 90% of Unicode. If I see Korean words as a bunch of boxes or as actual Korean characters, I can't read it either way. So, why waste resources on it? It becomes a functional limitation, not a resource limitation. As a function of developing and displaying fonts, having a font file with every possible Unicode character is not very useful. 199.15.144.250 (talk) 14:51, 23 October 2015 (UTC)


 * Here's a font that covers a rather large portion of Unicode. They say "Quivira will never provide every character defined in the Unicode standard. This would be technically impossible, because a font is limited to 65,536 characters, while Unicode already defines more than 100,000." They also point to Code2000, which is more complete, but it uses three fonts designed around a common theme. SemanticMantis (talk) 14:59, 23 October 2015 (UTC)


 * (Who says a font is limited to 65,536 characters? They're obviously thinking of a limitation of some specific OS or rendering system.)
 * I suspect that "complete" Unicode fonts are uncommon because, quite apart from the significant technical and artistic challenges involved in creating one, it's not clear they'd even be that useful. If you were writing a document in your favorite word processor, and at some point you did a "Select All" and changed the whole document from Times Roman to Bodoni, would you even want it to change the occasional embedded Hanzi or Devanagari words?  (What would it even mean to change some Hanzi or Devanagari from Times to Bodoni?)
 * One can imagine some kind of "Unified" font where every letter and symbol is meticulously crafted to look good in combination, and one can imagine doing a Select All and changing to a second Unified font and thrilling as the occasional embedded Hanzi or Devanagari words change to match, but for most purposes, if I'm using a font I know but which isn't complete, I don't mind too much if my word processor is playing games to assemble my document out of a collection of different fonts based on the various interspersed multilingual and other special characters I'm using. —Steve Summit (talk) 12:28, 24 October 2015 (UTC)
 * Maybe these will help. http://unifoundry.com/ http://directory.fsf.org/wiki/Intlfonts  GangofOne (talk) 08:29, 25 October 2015 (UTC)

How commonly do programs need Reflection (computer programming)?
How often do real programs need to use reflection (Reflection (computer programming))? When do programmers run into problems if they are working in a language without reflection? --Scicurious (talk) 18:25, 23 October 2015 (UTC)
 * Reflection is not necessary. It enables different idioms and more ad-hoc polymorphism, but in my opinion its main value is better support for debugging and inspection during development, especially for languages like Smalltalk or Python which support incremental compilation or are interpreted. If you need reflection, you can always add explicit features to support it - though that may be painful. --Stephan Schulz (talk) 19:24, 23 October 2015 (UTC)


 * You don't "need" to "use" anything. You could write any program in machine language; all Turing-equivalent languages have equivalent expressive power. Things like reflection that exist in high-level languages are abstractions that exist to simplify various aspects of programming. Programming is hard and time-consuming, so we want to make the computer do things instead of us. --71.119.131.184 (talk) 21:49, 23 October 2015 (UTC)


 * Where I'd use it (if I used it, which I confess I don't) is when reading/writing records from/to a tagged, free-form database, or from/to a similarly-structured communications stream, or things like that. It's common to want to have the external records be one-to-one with an in-memory data structure.  So it's common to see, for example, code like this when reading/unpacking a record:
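(The code example did not survive in this archive. A minimal Python sketch of the kind of field-by-field unpacking described, with hypothetical record and field names:)

```python
# A hypothetical in-memory structure mirroring the external record,
# and the tedious hand-written unpacking: one branch per field,
# which must be kept in sync with the record format by hand.

class Person:
    def __init__(self):
        self.name = ""
        self.age = 0
        self.height = 0.0

def unpack(record):
    """Unpack a dict of tag -> string into a Person, one field at a time."""
    p = Person()
    for tag, value in record.items():
        if tag == "name":
            p.name = value
        elif tag == "age":
            p.age = int(value)
        elif tag == "height":
            p.height = float(value)
        # ...and another branch for every new field added later,
        # plus the mirror-image branch in the record-writing routine...
    return p

p = unpack({"name": "Ada", "age": "36", "height": "1.65"})
print(p.name, p.age, p.height)  # prints: Ada 36 1.65
```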

That's obviously tedious, and if you ever add a new field you have to remember to update this code (and the inverse code over in the record-writing routine), and if there's one thing good programmers hate, it's a program where, if they make one change, they have to make it in two or three places, in synchrony. But this is obviously the kind of problem reflection is made for. —Steve Summit (talk) 11:44, 24 October 2015 (UTC)
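(For contrast, a short sketch of the reflective approach in Python, using its built-in getattr/setattr; the field table and names are hypothetical. One table drives both unpacking and packing, so adding a field means editing one place.)

```python
# One table of (field name, converter) pairs replaces the per-field branches.
FIELDS = {"name": str, "age": int, "height": float}

class Person:
    pass

def unpack(record):
    """Build a Person from a dict of tag -> string, driven by FIELDS."""
    p = Person()
    for tag, conv in FIELDS.items():
        if tag in record:
            setattr(p, tag, conv(record[tag]))  # reflective field assignment
    return p

def pack(p):
    """The inverse routine reuses the same table via getattr."""
    return {tag: str(getattr(p, tag)) for tag in FIELDS if hasattr(p, tag)}

p = unpack({"name": "Ada", "age": "36", "height": "1.65"})
print(pack(p))
```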

Is it possible to make a Raspberry Pi function as a USB device rather than a host?
Is it possible, with simple/inexpensive hardware addition/modification if necessary, to make a Raspberry Pi function as a USB device rather than a host? --134.242.92.2 (talk) 21:28, 23 October 2015 (UTC)


 * Comments on the forums suggest that it might be possible for the model A, but would require some modification of the model B, and that it is not worth the trouble.    D b f i r s   08:46, 24 October 2015 (UTC)