Wikipedia:Reference desk/Archives/Computing/2015 October 19

= October 19 =

x86 Real Mode huge pointer normalization code
Hi all, the Intel Memory Model article has the following:

---

Huge pointers are essentially far pointers, but are (mostly) normalized every time they are modified so that they have the highest possible segment for that address. This is very slow but allows the pointer to point to multiple segments, and allows for accurate pointer comparisons, as if the platform were a flat memory model: It forbids the aliasing of memory as described above, so two huge pointers that reference the same memory location are always equal.

LES BX,dword ptr [reg]
MOV AX,word ptr ES:[BX]
ADD BX,2
TEST BX,0FFF0h
JZ lbl
SUB BX,10h
MOV DX,ES
INC DX
MOV ES,DX
lbl: MOV DX,word ptr ES:[BX]

---

Can someone explain this code?

This is how I'm stepping through the code; I've added comments with my questions (why?):

LES BX,dword ptr [reg]        ; load BX general register with contents of ES segment register to see where it is pointing to
MOV AX,word ptr ES:[BX]       ; move the 16-bit value at address found in ES:[BX] into general register AX - why?
ADD BX,2                      ; add 2 to BX - why?
TEST BX,0FFF0h                ; do an AND on BX to see if it contains 0FFF0h - why do this? what does adding 2 above do?
JZ lbl                        ; so if the above TEST gives a zero, then it jumps to lbl...
SUB BX,10h                    ; if not, then subtract 16 from BX - why?
MOV DX,ES                     ; move ES into DX to allow for increment
INC DX                        ; increment - why?
MOV ES,DX                     ; move DX back into ES - complete increment of ES segment register
lbl: MOV DX,word ptr ES:[BX]  ; move the contents of ES:[BX] into DX - why?

I realize this is quite historical, but I'd like to update the article with a bit more of an explanation about the normalisation process. - Letsbefiends (talk) 01:17, 19 October 2015 (UTC)


 * This entire operation is designed to prevent pointer aliasing. To understand this, you've got to spend some time to really grok the virtual memory segmentation model in canonical i386-mode.
 * A few take-away concepts:
 * The memory segmentation model is intentionally designed to allow overlaps - multiple ways to point at the same physical address.
 * Whenever ES is used, you're assuming a convention (e.g. one that is usually enforced by your compiler).
 * The purpose of this code pattern is to ensure a "normalized" address - in other words, to put the address in a standard form.
 * Any two addresses that are placed in this "standard form" are guaranteed not to overlap.
 * The code above is re-calculating a (possibly different, guaranteed-unique) pointer. As long as all subsequent uses of the pointer (and the segment registers) follow normal conventions, this ensures that no two pointers can overlap (e.g. no aliasing).
 * All the bit-wise math (the addition and the bitwise-AND) is just a glorified alignment check.
 * (Also: I think your comment misrepresents what the LES instruction does - you might want to review the i386 instruction set. LES loads ES and BX with a pointer from memory).
 * Nimur (talk) 01:57, 19 October 2015 (UTC)


 * This code does the same thing as the previous two code samples in the article: it loads DX:AX from the address pointed to by reg. That's the reason for "MOV AX,word ptr ES:[BX]" and "MOV DX,word ptr ES:[BX]". Presumably later code would use those loaded values. "ADD BX,2" adds 2 to the pointer because that's where DX is ultimately going to be loaded from. The subsequent instructions (except the last one) re-normalize the huge pointer. When you load from seg:[ofs], the computed address is 16×seg+ofs. By definition, huge pointers have ofs < 16 (or, equivalently, (ofs & 0xFFF0) == 0). If the offset is between 16 and 31, subtracting 16 (=10h) from the offset and adding 1 to the segment gives you the same effective address but now with an offset that falls in the correct range.
 * In reality, I doubt that any compiler would generate this code. There is no need to normalize this pointer because it is never stored anywhere or compared with anything. The realistic code would be the same as for the far pointer case, so this is a bad example. -- BenRG (talk) 02:36, 19 October 2015 (UTC)
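The arithmetic BenRG describes (effective address = 16×seg+ofs, normalization pushes as much of the address as possible into the segment so the offset lands in 0..15) can be sketched in Python. This is my own illustration, not code from the article:

```python
# Sketch of real-mode huge-pointer normalization. A far pointer is a
# (segment, offset) pair; its effective address is 16*segment + offset,
# so many different pairs alias the same byte of memory.

def linear(seg, off):
    """Effective address of seg:off in real mode."""
    return 16 * seg + off

def normalize(seg, off):
    """Canonical form: highest possible segment, offset in 0..15."""
    addr = linear(seg, off)
    return addr // 16, addr % 16

a = (0x1234, 0x0022)   # 0x1234:0x0022
b = (0x1236, 0x0002)   # same byte, different encoding
assert linear(*a) == linear(*b)        # the pairs alias...
assert a != b                          # ...but compare unequal as-is
assert normalize(*a) == normalize(*b)  # normalized forms compare equal
```

This is why normalized huge pointers can be compared as if the machine had a flat address space: aliasing encodings all collapse to the same canonical pair.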


 * My x86 assembly language is a bit rusty, but I can explain most of the code. It seems to be performing an auto-increment after retrieving the value referenced by the pointer, something like the C fragment val = *p++. The pointer is presumed to already be normalized, so the code only needs to handle the adjustment after the increment.

LES BX,dword ptr [reg]        ; Load the pointer (segment & offset) into ES & BX
MOV AX,word ptr ES:[BX]       ; Get the 16-bit value referenced by that pointer (presumably for some later use)
ADD BX,2                      ; Increment the offset part of the pointer (by the size of the referenced value, in this case 2 bytes)
TEST BX,0FFF0h                ; Is the offset >= 16? (15 is the max normalized value for the offset)
JZ lbl                        ; If not, skip the re-normalization
SUB BX,10h                    ; Reduce offset by 16
MOV DX,ES                     ; (move ES into DX to allow for increment)
INC DX                        ; Increase segment value by 1 (a 16-byte shift in the memory it references)
MOV ES,DX                     ; (move DX back into ES - complete increment of ES segment register)
lbl: MOV DX,word ptr ES:[BX]  ; ??? Not sure about this. ???
 * Logically, I would expect that last line to save the updated pointer (ES & BX) back to its original location, but this doesn't look right to me. Perhaps someone else can explain. -- Tom N  talk/contrib 04:32, 19 October 2015 (UTC)
 * As I said above, this code does the same thing as the previous two code samples in Intel Memory Model: it loads DX:AX from the address pointed to by reg (without changing reg). It could actually be replaced by the previous code sample (LES BX,[reg]; MOV AX,ES:[BX]; MOV DX,ES:[BX+2]) with no change in behavior, if reg contains a normalized huge pointer. -- BenRG (talk) 16:52, 19 October 2015 (UTC)
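To make the re-normalization step concrete, here is a line-for-line Python rendering of what the assembly does (my own sketch, with memory modeled as a flat bytearray; the comments map each statement back to the instruction it mirrors):

```python
# Load AX from the huge pointer, bump the offset by 2, renormalize if
# the offset left the 0..15 range, then load DX from the new address.

def load_word(mem, seg, off):
    addr = 16 * seg + off
    return mem[addr] | (mem[addr + 1] << 8)   # little-endian 16-bit load

def load_dx_ax(mem, seg, off):
    ax = load_word(mem, seg, off)     # MOV AX,word ptr ES:[BX]
    off += 2                          # ADD BX,2
    if off & 0xFFF0:                  # TEST BX,0FFF0h / JZ lbl
        off -= 0x10                   # SUB BX,10h
        seg += 1                      # MOV DX,ES / INC DX / MOV ES,DX
    dx = load_word(mem, seg, off)     # lbl: MOV DX,word ptr ES:[BX]
    return dx, ax

mem = bytearray(0x20000)
mem[0x1234E:0x12352] = b'\x11\x22\x33\x44'  # two words at linear 0x1234E
print(load_dx_ax(mem, 0x1234, 0x000E))      # offset 14 + 2 = 16, so it renormalizes
```

Note the single SUB BX,10h suffices only because the pointer is assumed normalized on entry: the offset starts at 15 or less, so after ADD BX,2 it can exceed 15 by at most one segment step.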

Significance of polarizer in LCD displays
I would like to know why we use polarizers in LCDs. I have googled to find out the answer, but most of the answers just say "it (the LCD) uses polarizers to make the screen dark and bright". Why couldn't we make the screen dark and bright without using polarizers? And even if they use polarizers, why do LCDs have two polarizers instead of one? Also, I still can't get what really happens in polarization. Could anyone help me? JUSTIN JOHNS (talk) 05:34, 19 October 2015 (UTC)
 * How do you plan to block the light (make the screen dark) without the polarisers? As our article explains, the first polariser is one direction (perhaps vertical) and the second polariser is generally perpendicular (so horizontal if the other is vertical). Therefore, under normal circumstances, very little light will make it through the second polariser, as the vertically (or whatever) polarised light coming from the first filter is blocked by the second filter. However, the liquid crystals in the LCD can be adjusted to rotate the polarisation of the light. Therefore the vertically polarised light coming from the first filter can be rotated to become horizontally polarised, and will pass through the second filter, which now matches the light's polarisation, instead of being blocked by it. As per my first sentence, I'm not sure how you're proposing to block the light without the polarisers, since the whole point is that the liquid crystals can be used to rotate the polarisation of the light. You also need two filters, since the light needs to be polarised initially, otherwise it won't be properly blocked by the "second" filter; instead, you'd just be polarising the light. Well, unless your source is capable of generating polarised light without a filter, in which case you won't need the first one. P.S. Some understanding of what a polariser is and does is probably essential to understanding this; I haven't read it, but I'm sure our article provides sufficient info. P.P.S. I just noticed you said you don't really get what happens in polarisation, but that seems fairly unspecific. What part of our Polarization (waves) article is confusing you? Nil Einne (talk) 06:20, 19 October 2015 (UTC)
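The crossed-polarizer arrangement described above can be put in numbers with Malus's law (transmitted intensity ∝ cos² of the angle between the light's polarisation and the filter axis). This is my own numeric sketch, not from the article:

```python
import math

def transmitted(rotation_deg, i0=1.0):
    """Fraction of unpolarized light of intensity i0 leaving an LCD cell
    with crossed polarizers, where the liquid crystal rotates the
    polarization by rotation_deg (a Malus's-law sketch)."""
    after_first = i0 / 2                     # first polarizer halves unpolarized light
    angle = math.radians(90 - rotation_deg)  # angle between light and second (crossed) axis
    return after_first * math.cos(angle) ** 2

print(transmitted(0))    # no rotation: crossed filters block essentially all light
print(transmitted(90))   # full 90-degree twist: maximum brightness, half of i0
```

Intermediate rotations give the grey levels in between, which is exactly how each LCD subpixel modulates brightness.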

That's a good explanation. Do you mean that polarization tells about the ways in which light's components travel? Or does it only tell about the electric field components, as stated in the definition of polarization? To be honest, can we make the screen dark without passing any light? Is there use of both the magnetic and electric components of light? Is one better than the other? JUSTIN JOHNS (talk) 08:33, 19 October 2015 (UTC)
 * The electric field and magnetic field are at right angles, so if you turn one, you also turn the other. And if you block an electrically polarised wave, then the magnetism at right angles will also be blocked. If you could make the light polarised to start with, then the first filter would not be needed. With laser diodes this would be possible. But if you use a grid of LEDs you can just turn them on and off anyway, and no LCD is needed. Graeme Bartlett (talk) 09:32, 19 October 2015 (UTC)

Do you mean to say that while sending a light wave through a polarizer, both of its components are blocked? Could you tell me what happens in polarized sunglasses? If you have said that "if you block an electrically polarised wave, then the magnetism at right angles will also be blocked", then which component of the light wave are we seeing through polarized sunglasses? Or did you just mention an electrically polarized wave, but not light? JUSTIN JOHNS (talk) 05:25, 20 October 2015 (UTC)


 * You can't have one without the other, that's why we call it an electromagnetic wave. You can't really speak about the electric or magnetic component as if they're independent, they always exist together. Maybe a bad comparison, but an object has height and width, and if an opening is too narrow for its width, the whole object can't pass through, it's not like the height goes through and the width stays behind.  Ssscienccce  (talk) 23:07, 20 October 2015 (UTC)

Okay, that's a very good answer. If both (the electric and magnetic components) pass through a polarizer, could you tell me what happens when a light wave is passed through one? Do you mean to say that only its intensity starts to weaken when it passes through a polarizer? JUSTIN JOHNS (talk) 08:22, 26 October 2015 (UTC)

Is there any Source-to-source compiler that compiles Javascript to Java?
731Butai (talk) 07:21, 19 October 2015 (UTC)
 * Probably no more than when you asked last month. Rojomoke (talk) 08:46, 19 October 2015 (UTC)
 * I was hoping I could reach more eyeballs by re-asking. There's no rule against re-asking questions. 731Butai (talk) 09:43, 19 October 2015 (UTC)


 * I feel that the problem here is the unfounded belief that JavaScript and Java are somehow related. JavaScript and Java have nothing to do with one another. The syntax is radically different between the two. Once you get past the trivial assignment of simple variables, the entire concept of how classes and methods are defined would need to be radically rewritten. So, there isn't a simple "Rewrite my program for me in a completely different programming language" tool. 199.15.144.250 (talk) 11:32, 19 October 2015 (UTC)


 * I just did a little feasibility check on this... JavaScript syntax has a lot more functionality than Java syntax - let alone the issues with variations in syntax from web browser to web browser. So, there is a lot of syntax in JavaScript that simply doesn't exist in Java. The best you could do is get a Java JavaScript interpreter (a JavaScript interpreter written in Java) and feed the JavaScript into it. 199.15.144.250 (talk) 15:32, 19 October 2015 (UTC)


 * Unlikely for any general use. Java is a strongly typed language; every object has all its possible methods known at compile time. JavaScript is dynamically typed, and you can even add methods and fields to "types" (if there are such things in JS) during runtime. Variables have little or no type - a variable can happily hold a string at one point, and a float or integer at another. The differences in type systems alone make any "source to source compiler" just an interpreter in disguise. 91.155.193.199 (talk) 19:15, 19 October 2015 (UTC)
 * There are compilers from many high level languages to C. They can do far more than just feed the source code into an interpreter: consider Stalin (Scheme implementation) for example. There is at least one compiler from JavaScript to Java VM .class files (Rhino). There's no reason a compiler from JavaScript to Java source code couldn't exist. -- BenRG (talk) 19:28, 19 October 2015 (UTC)
 * There are two kinds of conceptually very different language-to-language translators. One kind makes language X programs run under language Y's runtime.  The other kind makes X programs look like they were written in language Y.  The OP appears to want the second kind - not just to run X under Y, but actually transform X into sensible Y code.  The two are quite different requirements. He explicitly does not want JavaScript running under Java, he wants JavaScript source code turned into Java source code. Not just mechanical execution of one code under the other's environment, but source code level translation.  Which is unlikely to happen.  91.155.193.199 (talk) 22:45, 19 October 2015 (UTC)
 * No, you're wrong. I want the first kind. 731Butai (talk) 01:57, 20 October 2015 (UTC)
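The runtime mutability raised in this thread, which is what makes a clean JavaScript-to-Java source translation hard, can be illustrated with any dynamically typed language. This sketch of mine uses Python as a stand-in; JavaScript allows the analogous things, and statically typed Java does not:

```python
# A variable's type can change at runtime...
x = "42"           # a string
x = int(x) + 1     # ...now an integer; no declared type constrains this
assert x == 43

# ...and objects can grow new fields after creation, like JS objects.
class Thing:
    pass

t = Thing()
t.label = "added at runtime"   # no such field was declared anywhere
print(x, t.label)
```

A translator targeting Java source has to invent declared types and class shapes for code that never committed to any, which is why the output tends toward an interpreter-like encoding rather than idiomatic Java.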


 * If you want a source-to-object compiler, why did you ask for a source-to-source compiler? This is computer science. You get what you ask for, not what you think you might mean. 199.15.144.250 (talk) 13:14, 20 October 2015 (UTC)
 * To me, "source-to-source compiler" means any compiler whose output is source code (i.e. the input language of another compiler or an interpreter). The Wikipedia article seems to agree. -- BenRG (talk) 02:52, 21 October 2015 (UTC)
 * I think the confusion is even deeper. Do you understand that "source code" is not compiled? It is the human-readable text file that programmers write. You stated, just above, that you want a compiler that converts JavaScript into something that will run in the Java Virtual Machine. Java source code (the .java file) will NOT run in the JVM. It has to be compiled. The object or class code (the .class file) will run in the JVM. So, when you said that you wanted "the first one", you said that you wanted to compile to the object code, not the source code. Then, directly after that, you say that you want it to output source code - which doesn't run until it is compiled. So, which do you want? They are two completely different things and one is far more difficult than the other. Let me put it this way... You want to take JavaScript, which has nothing to do with Java, and turn it into Java source code. Right now, there are programs that take Java class files and decompile them back into Java source code. The result is ugly and very difficult to use. But, they only work because there is a very close relationship between the Java class files and Java source code. Imagine how ugly and unusable they would be if the starting point wasn't Java, but was some other language. I really don't think that you are open to the idea that JavaScript and Java have no relationship with one another. I also think that you are using "source code" to mean whatever feels good to you at the time. The combination of those two things makes it very difficult to explain that there are programming concepts in JavaScript that simply do not exist in Java. For example, I cannot declare a string in Java and then instamagically use it as a number. In JavaScript, I can. So, you can't just "translate the code". You need to comprehend the purpose of the JavaScript code and then think of a completely different way to write it in Java. Computers are very very dumb. They can't think. 
199.15.144.250 (talk) 13:24, 21 October 2015 (UTC)


 * To try to avoid possible X-Y problems, I will ask: why are you interested in a JavaScript-to-Java compiler? Is there a task you're trying to accomplish, and if so, what is it? --71.119.131.184 (talk) 19:52, 19 October 2015 (UTC)
 * Thanks for linking to the X-Y Problem wiki. There is a lot of wisdom scattered across the rest of GreyCat's Bash Wiki web page.  The wise reader would spend some time perusing it, especially if the reader plans to write some fancy new scripts to do a thing.  Nimur (talk) 21:23, 19 October 2015 (UTC)
 * I'm trying to port a small JavaScript library into Java. Roughly half of the code, the performance-critical half, will be hand-ported. Machine-translating the non-critical half would save me some time. The resultant Java code doesn't have to look nice or be human-readable. Hell, it doesn't even have to compile, since I'll be manually rewriting it anyways. I'm just trying to minimize as much of the boring syntax translation as possible so I can focus on the actual performance-enhancing part. 731Butai (talk) 02:03, 20 October 2015 (UTC)
 * If it doesn't have to look nice, be human readable, or compile, and you're rewriting it anyways, what's the purpose of translating it in the first place? Why not just copy-paste the JavaScript into your Java source file and follow it for program structure? If that won't work, perhaps you should run your code through Rhino and then through a Java decompiler? You'll get awful, horrible, unreadable code (primarily due to the decompiler), but you should at least get code. Alternatively, if, as you say above, you just want to run your JavaScript on the JVM, Rhino alone should be sufficient for your needs. 97.90.151.30 (talk) 19:57, 20 October 2015 (UTC)
 * My advice is along the same lines. Granted, I don't know the constraints you're working under; maybe the code absolutely has to all be one big bundle of Java. But in general, if you have code that already works (like the library in question), it's best to just use it as is, rather than performing major surgery on it. You can run pretty much any language under the JVM, which makes it easy for Java code to interface with it. Alternately, since the library you want to use is in JavaScript, have you considered just writing your program in JavaScript? JavaScript doesn't only run in Web browsers. Our article on JavaScript might help you get started. --71.119.131.184 (talk) 21:47, 21 October 2015 (UTC)

Problems with mp3 USB player
Hello. I'm having problems with my mp3 player (USB stick). I can upload 70-100 songs and it plays them in chronological order, but when I add some more, they appear somewhere else, among the old tracks. I always just copied some of the songs which were already on the player, so their names changed to »nameofthefile – copy«. After the songs started to play chronologically (by time created), I just added new ones and erased the ones that had »copy« in their name… I usually had to make 20-50 copies, but now there is not enough memory left to do this again. Is there any other way I could fix this problem? Thank you in advance. Atacamadesert12 (talk) 20:16, 19 October 2015 (UTC)

Password-based access
In our earliest days, we didn't have a system of user rights: administrative actions could be taken by any user who possessed a specific password, and apparently you'd be prompted to enter the password if you wanted to perform such an action. The current system is an example of what's termed role-based access control. Is there a comparable term for the original system? Password-based access control isn't an article. Nyttend (talk) 22:23, 19 October 2015 (UTC)
 * In our earliest days, there wasn't such a thing as Privilege. But the Access control list and its precursors were probably first developed during the days of the earliest time-sharing multi-user computers, at least as early as the 60s. Delving that deeply into computer history, things start getting very "hazy": a lot of computing concepts, particularly to do with networking and programming, were not figured out yet and might not have formal "titles", but computers were already quite sophisticated by the very early 70s with the development of Unix. So, I "think" your faulty assumption is that some "password based" access control method existed before what we recognize today as "the current user based permissions", which is, just by the way, NOT synonymous with RBAC. RBAC is a newer concept: it's a framework for how to delegate permissions within a service. A core concept of RBAC is that a single user cannot be given privilege to a resource; only a group can be given privilege to a resource, and the user has to be a member of the group (known as a "role group"). That a user logs onto a computer and has or does not have access to perform some task or access some resource predates RBAC by decades. Possibly you might find File system permissions interesting also. Vespine (talk) 03:17, 20 October 2015 (UTC)
 * Note that even in that article, they just call it "traditional unix permissions", because it does not have a more "specific" name. I guess that comes down to the fact that when unix was being developed, how you implemented permissions wasn't really a "THING" that needed a separate fancy name. It was just a part of the operating system that needed to be developed. Vespine (talk) 03:23, 20 October 2015 (UTC)
 * There definitely was such a system; see the oldest revision of nost:Wiki Administrators from May 2001, just four months after the website was created. Registered users could perform actions not available to people not using accounts, and people with the administrative password could do more; just look for "There are actually two separate levels of special access" on that page. See WP:DEAL, the current version of which is the spot from which my question arose; that's where I discovered the "Role-based access control" phrase. Nyttend (talk) 04:49, 20 October 2015 (UTC)
 * Hi Nyttend, I think you've misunderstood at least some of my reply. I was not meaning to imply that such a system "never existed". I've only just realized you are talking about wikipedia when you said "in our earliest days", I thought you meant computing in general. I think most of my reply still stands, I don't see any reason to assume that such a "system" had a formal "term" or documented specification. It was probably just the fact that no one had implemented a more sophisticated access control system yet. Vespine (talk) 01:15, 21 October 2015 (UTC)
 * Having just a quick read of the article you linked, it seems like what they are describing is no different to giving out the root access password for wikipedia. Vespine (talk) 04:53, 21 October 2015 (UTC)
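The distinction being discussed, a single shared admin password versus rights attached to role groups, can be sketched in a few lines. This is my own illustration of the two models, not how MediaWiki actually implemented either:

```python
ADMIN_PASSWORD = "correct horse"   # the early shared-secret scheme

def can_delete_password_style(supplied_password):
    # Anyone who knows the secret is an admin; identity is irrelevant.
    return supplied_password == ADMIN_PASSWORD

# Role-based style: rights attach to groups, and users belong to groups.
ROLE_RIGHTS = {"sysop": {"delete", "block"}, "user": {"edit"}}
USER_ROLES = {"alice": {"sysop", "user"}, "bob": {"user"}}

def can_rbac(user, right):
    return any(right in ROLE_RIGHTS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(can_delete_password_style("correct horse"))  # anyone with the secret may delete
print(can_rbac("alice", "delete"))                 # alice is in the sysop role group
print(can_rbac("bob", "delete"))                   # bob is not, so no delete right
```

The shared-password check grants or denies by knowledge of a secret; the role-based check grants or denies by group membership, which is the point Vespine makes about RBAC tying privileges to role groups rather than individuals.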

Computer printer comms
Why can't my old Dell 2400 running XP talk to my new HP ENVY 5530 printer? I've tried installing it, uninstalling it, reinstalling it, using the disk the printer came with, and getting the computer to recognize the printer with PnP. The printer appears in Control Panel and it seems to be set up OK. Still, can I get it to print anything? NO!!! Not a damn thing. I've been through the troubleshooter about 10 times now. Any suggestions?--213.205.252.131 (talk) 23:00, 19 October 2015 (UTC)


 * The drivers support XP so it's not anything obvious... Does PnP pick the printer up correctly? Have you tried sending a "test print" from the printer settings? Have you tried unplugging all your other USB devices? Have you tried network print? Do you have a wifi network you could connect the printer to? Have you tried to use the printer with any other device? Vespine (talk) 23:52, 19 October 2015 (UTC)


 * The most annoying way printers fail is no error, but nothing ever prints either. I usually try the printer on another PC and/or another printer on that PC, when that happens. StuRat (talk) 00:02, 20 October 2015 (UTC)