Wikipedia:Reference desk/Archives/Computing/2014 May 18

= May 18 =

Wikipedia template-like embedding outside Wikipedia using HTML
The way we use templates here, a template embeds only the page's essential content (not the sidebar, header, or footer). I want to use similar embedding on my website. How can I do it using HTML? Tito ☸ Dutta 07:20, 18 May 2014 (UTC)


 * MediaWiki's template system is an example of a Web template system, of which there are many different implementations. These build complex web pages by substituting data (from some datastore) into templates (often, as on Wikipedia, nested in quite a complex way). But a template system (which you might also see called a "template engine") isn't enough by itself; for it to work there needs to be some software which takes data from the datastore and injects it into templates - and which does this for all the pages on your site. You can see a simple example of that in the Jinja (template engine) article. Unless you're a programmer, that's work you probably don't want to do yourself. It's work that's done for you by MediaWiki, and in this regard one might consider MediaWiki to be a species of content management system (CMS). These manage the pages, store the data (often in a database) and run "pages" through the template engine to generate the final content (that's all stuff MediaWiki does for Wikipedia). There are tons of CMSes too, some very large and complex. WordPress and Drupal are popular, but learning either (properly) isn't trivial; there are various attempts at simpler CMSes, like Pico - but all impose on you a burden of stuff to configure and stuff to learn. -- Finlay McWalterᚠTalk 15:35, 18 May 2014 (UTC)
 * I have my site at Blogger. Tito ☸ Dutta 11:51, 19 May 2014 (UTC)
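For illustration, the substitution step a template engine performs can be sketched in a few lines of Python. This uses the standard library's string.Template rather than MediaWiki's or Jinja's actual syntax, and the page structure and variable names are invented for the example:

```python
from string import Template

# A minimal page "template": placeholders are filled in from a datastore.
page_template = Template(
    "<html><body>\n"
    "<h1>$title</h1>\n"
    "<p>$body</p>\n"
    "</body></html>"
)

# In a real CMS this data would come from a database; here it is hard-coded.
data = {"title": "Hydrogen", "body": "Hydrogen is a chemical element."}

html = page_template.substitute(data)
print(html)
```

A CMS does essentially this for every page on the site, with the templates themselves often nesting other templates.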

Language-independent link on a Wikipedia page
Hi,

I want to put a link to an article on my website. For example: http://en.wikipedia.org/wiki/Hydrogen

Is it possible to create a link that redirects the user to the same article but in the language of the browser?

For example if my computer/browser is in French and I click on http://wikipedia.org/wiki/Hydrogen I get redirected to http://fr.wikipedia.org/wiki/Hydrog%C3%A8ne

Regards

79.220.197.223 (talk) 17:34, 18 May 2014 (UTC)


 * To do that, you'd want to detect the language the browser is set to, and alter the link to the appropriate language Wikipedia. But how to detect that? This StackOverflow question discusses this. Those folks think the most reliable way is for a server-side program to check the Accept-Language header that the browser sends; this is supposedly better than checking various navigator properties in JavaScript (but still not very reliable). If none of this makes sense to you, the short answer is "no", I'm afraid. -- Finlay McWalterᚠTalk 17:59, 18 May 2014 (UTC)
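To make the server-side approach concrete, here is a rough sketch of parsing an Accept-Language header in Python (standard library only). Real web frameworks do this for you, and the header grammar has more corner cases than shown, so treat this as an illustration rather than a complete parser:

```python
def parse_accept_language(header):
    """Parse an Accept-Language header into (language, quality) pairs,
    sorted by descending quality, e.g. 'fr-FR,fr;q=0.9,en;q=0.8'."""
    langs = []
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        if ";q=" in part:
            lang, q = part.split(";q=", 1)
            try:
                quality = float(q)
            except ValueError:
                quality = 1.0
        else:
            lang, quality = part, 1.0  # no q-value means quality 1.0
        langs.append((lang.strip(), quality))
    return sorted(langs, key=lambda lq: lq[1], reverse=True)

print(parse_accept_language("fr-FR,fr;q=0.9,en;q=0.8"))
```

A server could then take the highest-quality entry and redirect to the matching language subdomain (e.g. fr.wikipedia.org), falling back down the list as CiaPan describes below when no matching article exists.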


 * Determining the target language is only the first part of the problem. When you solve that, you would have to find out the name of a corresponding page in the target language (fr:Hydrogène, sv:Väte, ilo:Hidróheno, not to mention Russian, Hindi or Japanese) and verify whether the corresponding page exists at all (and do that iteratively, if the user chose several languages in the browser's 'accept' setting). --CiaPan (talk) 19:55, 18 May 2014 (UTC)


 * The problem CiaPan raises could probably be solved using the Wikidata API, which can output a list of all the available language versions of an article. —Noiratsi (talk) 20:41, 18 May 2014 (UTC)
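A sketch of how that lookup might work: the Wikidata API's wbgetentities action can return an entity's sitelinks, i.e. the article titles in each language edition. Here the response is hard-coded (abridged, and the exact shape should be checked against the live API) rather than fetched over the network:

```python
import json

# Abridged example of what the Wikidata API can return for a query like
# https://www.wikidata.org/w/api.php?action=wbgetentities&ids=Q556&props=sitelinks&format=json
sample_response = json.loads("""
{"entities": {"Q556": {"sitelinks": {
    "enwiki": {"site": "enwiki", "title": "Hydrogen"},
    "frwiki": {"site": "frwiki", "title": "Hydrog\\u00e8ne"},
    "svwiki": {"site": "svwiki", "title": "V\\u00e4te"}
}}}}
""")

def sitelink_for(response, entity_id, lang):
    """Return the article title on the given language's Wikipedia, or None."""
    sitelinks = response["entities"][entity_id]["sitelinks"]
    link = sitelinks.get(lang + "wiki")
    return link["title"] if link else None

print(sitelink_for(sample_response, "Q556", "fr"))  # Hydrogène
```

Combined with the parsed Accept-Language preferences, a server could walk the user's language list and redirect to the first language for which a sitelink exists.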


 * However, different language versions of an article may not cover the same material. For example, some may break a topic up into several articles, while other languages lump it all into one. StuRat (talk) 21:21, 18 May 2014 (UTC)
 * The problem StuRat brings up is a very challenging issue. Wikidata was one attempt to resolve that problem by representing individual data elements (rather, individual "Q42848" entities) in a machine-readable format, independent of the human languages that normally express such data.  Each data entry could also contain links to Wikipedia articles, Commons imagery and multimedia, and other external references.  The intent was that bots and software algorithms could intelligently traverse a hyperlinked database of human knowledge.  You can gauge for yourself whether the project has met with much success.
 * For example, Hydrogen is Q556. Its data entry contains wikilinks to encyclopedia articles written in many human languages.  The data entry also contains several assertions about Hydrogen.  This is sort of like a listing of facts ... but not exactly; each assertion is tagged with a "property" code, for machine-readability; it is tracked as a separate database entry, with its own history and contributors.  What we've learned from this experiment is that it's actually really trivial to make data machine-readable.  We can define a structured data format that is both generic and scalable.  The real challenge - making a bot that can determine which data is worth reading - is still completely unsolved.
 * The good news is, interwiki language links exist for many Wikipedia articles, and there are typically enough human editors watching those interwiki links to ensure that each links to the single most relevant article in the alternative language. That is a task the bots still aren't very good at.
 * You might also want to read Help:Interlanguage links. Nimur (talk) 04:35, 19 May 2014 (UTC)


 * Thank you for all your answers. So I suppose what you are saying is that it is not possible, at least not in a way where I only need one link. I will have to extract the links myself. 79.220.233.150 (talk) 14:54, 20 May 2014 (UTC)


 * Well, it depends on the topic. For hydrogen, there is really only one main meaning, and I doubt if any language would break it up into subtopics.  But for mercury, we have the element, the planet, and the god, so the default might be different in different languages, and some might toss them all into a single article, while others break them up. StuRat (talk) 15:42, 20 May 2014 (UTC)

Print Error - Screen Scan Reproduction Difficulty
When I look up TCP/IP on Wikipedia, the first page does not print all the details. In particular, I get no bordered area in the upper right of the first page which lists each abstraction layer and its most central protocols. With the aid of Microsoft Office, I can get this diagram with the list for each layer and related protocols, but this content appears on separate pages in the print preview (not as shown on the web page). I can reduce it down to one page using Office, but am left wondering why this should be necessary.

So my question is this: Is this bordered layer diagram a product of XHTML rather than SVG? According to the Wikipedia article on SVG, XHTML focuses on communicating CONTENT aspects, while SVG focuses on the PRESENTATION aspects. The division between content and presentation is also mentioned with CSS. I suspect two languages were used to produce the diagram image and that this division in language capabilities is responsible for the printing quirk. As I said, I can obtain the CONTENT in print preview using Microsoft Office, but the PRESENTATION is chopped up across four or five pages, not presented as one nice, neat, bordered diagram. The Wikipedia article on SVG mentions a print-specialized subset of SVG, SVG Print, which is currently in the draft stage before the W3C.

This bordered diagram appears directly across from the opening paragraph in the TCP/IP article. All of the other graphics, including "Network Topology" and "Layer Names and Number of Layers in the Literature", print just fine. Is it possible that the printing difficulty results from a compression problem, since the bordered diagram is nested as a cluster of hyperlinks (which serve to reference the specific protocols listed for each layer)? Exactly what language produced this bordered diagram? What I really want to know is why and how this quirk has occurred. — Preceding unsigned comment added by 71.239.238.205 (talk) 23:37, 18 May 2014 (UTC)