Wikipedia:Reference desk/Archives/Computing/2016 January 19

= January 19 =

IPv6
It is well-known that the 32-bit IPv4 ran out of addresses. But IPv6 jumps to 128 bits. Wouldn't 64 bits be enough, at least for an extremely long time? That would give each person on Earth more than 2 billion IP addresses. Bubba73 You talkin' to me? 03:29, 19 January 2016 (UTC)


 * IPv6 is not designed to be "exhausted", as in: someone gets address 1, followed by address 2, followed by address 3.... etc.... Having a much larger address space allows addressing strategies to be implemented which are not possible with IPv4; this is discussed in the IPv6 article. Vespine (talk) 03:42, 19 January 2016 (UTC)


 * Yes, but my point is that seems like overkill. Wouldn't 64 bits be enough for a very long time?  Bubba73 You talkin' to me? 03:44, 19 January 2016 (UTC)


 * It depends on how it's used. Even if every light switch in your house gets its own IP address, that won't come anywhere near using them all up, but they might use a system that isn't all that efficient at assigning every address.  For example, the first of the 8 parts might be for the nation, the 2nd part for the state or province, the 3rd for the county, the 4th for the IP provider, the 5th for the institution or business, the 6th for the individual location or homeowner, the 7th for a particular local area network at that location, and the 8th for a device on that network.  (I have no idea if this is how they are actually allocated; this is just an example.)  So, while this isn't very efficient as far as the percentage of IPs used, it is highly efficient at being able to quickly find info, like the IP provider, from the IP address directly.  A similar example is a car's VIN, which is longer than it would need to be if it were a simple serial number.  But then, if it were, you couldn't find out much of anything without looking that serial number up in a database. StuRat (talk) 09:25, 19 January 2016 (UTC)
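StuRat's hypothetical hierarchy maps neatly onto the eight 16-bit groups of an IPv6 address. A small sketch in Python (the field names and values are invented purely for illustration; real allocation does not work this way):

```python
import ipaddress

# Hypothetical hierarchy: eight 16-bit fields, one per colon-separated group
# of an IPv6 address (these field names are invented for illustration).
fields = [
    ("nation",   0x2001), ("state",    0x0db8), ("county",  0x0001),
    ("provider", 0x0002), ("business", 0x0003), ("owner",   0x0004),
    ("lan",      0x0005), ("device",   0x0006),
]

# Pack the fields into one 128-bit integer, most significant field first.
value = 0
for _name, part in fields:
    value = (value << 16) | part

addr = ipaddress.IPv6Address(value)
print(addr)  # 2001:db8:1:2:3:4:5:6 - each field is readable straight from the address
```

The point of such a scheme is exactly as described: any field, such as the provider, can be read directly off the address without a database lookup, at the cost of leaving most of the address space unused.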


 * Wait, so this means the final sad end to power-cycling my router to get around paywall article limits? :-) 94.12.81.251 (talk) 11:29, 19 January 2016 (UTC)


 * Not sure why there's all this theoretical talk. It's not like we're still in 1997. As our article says, the standard IPv6 subnet is /64. That means it doesn't matter whether you want an IP for every electrical and electronic device in your house, or just for your phone and computer: you should still get a /64 subnet at minimum to play around with. Most commonly, the way IPv6 is assigned, even if only your router (or some other single device) is going to get its own IPv6 address, you'll still end up with a /64 subnet. In fact, anyone who may ever want it, e.g. an office or simply a sophisticated home user, will probably be assigned multiple subnets (perhaps a /48) to make things easier. We still get 18,446,744,073,709,551,616 subnets (less the various reserved ranges etc.), so it probably isn't an issue. However, if IPv6 were 64-bit and we used the same scheme, we're not really that much better off than we are with IPv4 (we'd have 4,294,967,296 subnets less reserved etc., rather than that many addresses). I'm not sure if a 48/16 scheme would be that much worse than 64/64, but I'm not sure it's really that much better either. I presume this sort of thing is at least partly what Vespine was referring to. There's some more discussion elsewhere. Nil Einne (talk) 13:22, 19 January 2016 (UTC)
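The subnet arithmetic above is easy to check with Python's standard ipaddress module (2001:db8:: is the reserved documentation prefix, used here purely as an example):

```python
import ipaddress

# One standard /64 subnet: 64 bits of routing prefix, 64 bits of host space.
subnet = ipaddress.ip_network("2001:db8::/64")
print(subnet.num_addresses)            # 18446744073709551616, i.e. 2**64 hosts

# The whole 128-bit space therefore contains 2**64 distinct /64 subnets.
print(2**128 // subnet.num_addresses)  # also 18446744073709551616

# A hypothetical 64-bit scheme split the same way (32-bit prefix, 32-bit host)
# would leave only 2**32 subnets - exactly IPv4's total address count.
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)  # 4294967296
```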


 * Actually, on second thoughts, 48/16 would likely be a bit limiting. While 65,536 hosts would be enough for most use cases, there are surely some cases where it's too few. However, if you used 48/16 under a similar scheme to the way IPv6 works, you'd need those hosts to be spread across more than one subnet. Plus it makes address autoconfiguration more difficult, due to the significantly higher risk of collisions (and the inability to simply use something like the MAC address). I guess you could use 96 bits and 48/48 or perhaps 64/32, but I wonder even more how much advantage there is over 64/64. Nil Einne (talk) 15:28, 19 January 2016 (UTC)


 * I should probably add that many people question the wisdom of giving out only a single /64 subnet even to ordinary home customers, and suggest /56 or /48 be the default for all customers (and I think this is also what RIR assignment policies and RFCs generally suggest or assume). I think the takeaway message from all this is that IPv6 is intended to move away from the idea of IPs being a scarce resource that needs to be conserved (to be fair, IPv4 didn't really start off like that either, even if it was like that by the time IPv6 was being worked on, let alone now), towards the mentality that if there's any reasonable possibility addresses may be needed, they should be assigned, to ensure routing etc. works properly and, in particular, to prevent incorrect usage such as effectively further subnetting a /64. Nil Einne (talk) 17:22, 19 January 2016 (UTC)
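How much room a /48 or /56 delegation leaves can again be sketched with the standard ipaddress module (the documentation prefix 2001:db8:: stands in for a real allocation):

```python
import ipaddress

# A /48 site delegation contains 2**(64 - 48) == 65,536 possible /64 subnets.
site = ipaddress.ip_network("2001:db8::/48")
print(sum(1 for _ in site.subnets(new_prefix=64)))  # 65536

# A /56, the other commonly suggested default, still leaves 256 /64s per customer.
home = ipaddress.ip_network("2001:db8:0:100::/56")
print(sum(1 for _ in home.subnets(new_prefix=64)))  # 256
```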


 * 128 bits "overkill"? 64 bits "enough"??
 * My memory is that as IPv6 was being finalized, a bunch of us were upset that it was going to use a fixed size at all; we were hoping it would be variable-length and more or less arbitrarily extensible.
 * If there's one thing we've learned, it's that arbitrary, fixed limits always become confining. No matter how generous they are at first, no matter how ironclad the arguments of the form "this allocation is so plentiful that every light switch in the solar system could have both a primary and a backup address and we'd still have a factor of three left over" seem to be, sooner or later somebody is going to have a brainstorm which lets us do something hitherto unimaginable, the only cost being that it allocates something incredibly wastefully, but "that's okay because we've still got more than enough".  And soon enough, the hitherto unimaginable becomes the absolutely necessary, and "still more than enough" becomes "just barely enough".
 * See also Parkinson's Law (and its generalization), of course.
 * It may take ten years or more, but I'd guess we'll be seeing localized "shortages" of IPv6 addresses within our lifetimes. —Steve Summit (talk) 15:52, 19 January 2016 (UTC)


 * Thanks for saying it, Steve. I was, and still am, a huge fan of variable-length addresses.  We can always pretend, by building layers and layers and layers of subnetworks inside of deeply nested NATs...
 * A perfect example of why 128 bits isn't good enough: have a look at some of the recent history in high-performance computing research. There was a serious effort, some time ago, to make individual CPU cores network-routable, and in fact to use Ethernet as a processor bus (...and why not!  Ethernet was as fast as, or faster than, existing bus architectures!)  One could envision a day when every memory word on every single machine could be individually and globally addressable - if the protocol provided for an inexhaustible address space.
 * I sort of remember hearing this kind of theory being kicked around for the SiCortex and for the Niagara, and in some transactional memory-over-internet-protocol research papers, and so forth; I'll try to dig some of that up. This was serious pervasive massive parallelism at its best.
 * Nimur (talk) 17:56, 19 January 2016 (UTC)


 * I'd add that getting IPv6 implemented and widely accepted was a gargantuan struggle...it marked a huge change for the underlying mechanisms of the Internet. Given how little the additional bits added to the average packet size, it was worth making a change that would be finally, unhesitatingly, "enough" - so we never have to go through this again.  While I agree that it seems unlikely that every human on earth will need a billion IP addresses - a similar train of thought got us where we are today.  In the era where a PDP-11 computer cost $10,000, it was reasonable to say "there will never be as many computers as people on earth", and hence a 32-bit address was considered more than adequate.  In my home, I have 4 smart TVs, 4 Roku boxes, 3 WiFi routers, 4 laptops, 2 desktops, 2 game consoles, 4 cellphones, a printer, two laser cutters and a dozen IoT devices.  Maybe 40 addresses for me, personally, at home.  So you can see that we made a horribly bad assumption when suggesting that 32 bits would be a "forever good" number.  We simply don't know whether there will ever be a need for more than 2^64 addresses.  Suppose we wind up with self-replicating nanotechnological machines?  We could very easily wind up with more than 2^64 of them and want them all to be individually addressable on the Internet.  That might not happen - but do you really want to have to go through another round of IPvN updates if it does?
 * With 128 bit addressing, we could give a unique IP address to every molecule making up planet Earth - hopefully that's "enough"...but I'm with Steve Summit here - I'd have used a "high-bit-set-means-more-bytes-to-come" approach and thereby allow the address field to be infinitely extensible. Sadly, that worried people who have to be concerned about some idiot sending a trillion byte address and causing every computer on the planet to run out of memory...or making life too complicated for IoT devices.  So, yeah - I guess the 'prudent' thing was to pick an ungodly large number.  Just don't blame me if/when we need to give a unique address to every quark and photon in the visible universe!
 * SteveBaker (talk) 18:09, 19 January 2016 (UTC)
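The "high-bit-set-means-more-bytes-to-come" scheme Steve describes is essentially the variable-length integer encoding used by formats such as LEB128 and Protocol Buffers. A minimal sketch of how an infinitely extensible address field could work (illustrative only, not part of any real protocol):

```python
def encode_address(n: int) -> bytes:
    """Encode a non-negative address as 7-bit groups, least significant first;
    the high bit of each byte means "more bytes to come"."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set: more bytes follow
        else:
            out.append(byte)         # final byte: high bit clear
            return bytes(out)

def decode_address(data: bytes) -> int:
    n = 0
    for shift, byte in enumerate(data):
        n |= (byte & 0x7F) << (7 * shift)
        if not byte & 0x80:          # high bit clear: this was the last byte
            break
    return n

# Small addresses stay short; huge ones simply grow, with no fixed ceiling.
print(len(encode_address(300)))       # 2 bytes
print(len(encode_address(2 ** 128)))  # 19 bytes
```

The worry SteveBaker mentions is visible here too: a decoder must cap how many continuation bytes it will accept, or a malicious sender can make the address arbitrarily large.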


 * 640k addresses should be enough for anyone! (Yes, I know Bill Gates didn't actually say the original "quote".) --71.119.131.184 (talk) 18:44, 19 January 2016 (UTC)


 * An addressing scheme doesn't necessarily need to work forever, just long enough for it to become obsolete for other reasons. For example, if the VIN system for identifying personal vehicles outlives personal vehicles without "filling up", then it served its purpose. StuRat (talk) 19:32, 19 January 2016 (UTC)

OK, thank you for your answers. Bubba73 You talkin' to me? 21:32, 19 January 2016 (UTC)

Intensive but short programming bootcamps
If I have one month's living expenses as savings, could I learn anything useful at a programming bootcamp in that time? I mean something pretty immersive, where I'd be coding full time instead of working a job. I know some Java already but I'm open to other languages. Location is Edinburgh, Scotland. 94.12.81.251 (talk) 11:34, 19 January 2016 (UTC)


 * I have been teaching programming in many (MANY) different environments since 1989. My experience is that people learn to program when they need to program. The best thing to do is have something you want to do and then do it. For example, you might want to learn Ruby or PHP. Both are popular web development languages. So, come up with a project and develop it in the language you want to learn. Since you already know Java, you know how to do what you want to do in Java. You just Google for how to do it in Ruby/PHP and keep looking at the code examples until you understand what is happening (such as "Why does PHP have all those $ symbols throughout the code!?"). That is how I have become proficient in so many programming languages. I am thrown jobs that people don't want to do, such as adding a feature to a flight simulator written in Ada. It doesn't matter that I've never used Ada before. I just have to look at some references and translate how I'd do it in C into how I should do it in Ada. 199.15.144.250 (talk) 16:29, 19 January 2016 (UTC)


 * Once you have learned the basics (loops, functions, if statements, arithmetic, I/O), I don't think that programming classes/bootcamps will get you very far. You need to practice.  You need to write MOUNTAINS of code.  You also need a reason to do that.  I always suggest learning enough JavaScript to write a simple web-based game - Pong or Breakout or something like that.  Most people would like to make a simple game - and that makes it a better example to work on than something that doesn't motivate you as much.  Ideally, you'd also want some kind of a mentor who could gently nudge you in the right direction when you get completely stuck. SteveBaker (talk) 17:24, 19 January 2016 (UTC)


 * If your plan is to brush up your programming skills in just one month and then work as a programmer right away because you need the income, I am afraid that your deadlines are too tight.
 * On a brighter note, there is something positive about your case. In general, I think some people never learn to program. That sounds pretty harsh, but yes. No matter how much effort they invest into it, their programming sucks. Since you already learned Java and are willing to keep learning, you seem to belong to the other group, the one that learns to program. However, programming at a professional level requires more time.
 * There are other things you could try in the same field, though. Having a logical mind, you could try other IT jobs: webmaster or tech support, for example. Glasgow, not far from you, appears to be a rising tech hotspot, with many jobs available. Scicurious (talk) 13:47, 20 January 2016 (UTC)
 * Thanks for the support :-) I actually have closer to 2 months' savings, but I was leaving myself time to find another job afterwards. I already work in tech support, but it's call centre shift work and I'd like to move away from that if possible. Probably to desktop support in the short term. So I'll keep plugging away at the job applications, and mucking about with code in my spare time. 94.12.81.251 (talk) 18:05, 20 January 2016 (UTC)

Ripping DVD movies to ISO on one computer, converting them to MP4 on another
I have a stack of movie DVDs, an old slow Windows laptop with a DVD drive, and a much faster Mac without one. I want to get the movies off the discs so I can watch them more easily while travelling. Backing them up all the way to MP4 on the Windows machine would take weeks. Is there a combination of software that will copy encrypted DVDs to ISO on Windows, and convert ISOs to unencrypted MP4 movies on Mac? Something free if possible, naturally :-) 94.12.81.251 (talk) 13:46, 19 January 2016 (UTC)


 * ImgBurn will turn your DVDs into ISOs. http://www.imgburn.com/
 * VLC will turn ISOs into MP4s. http://www.videolan.org/vlc/index.html


 * https://wiki.videolan.org/Rip_a_DVD/ explains how to rip DVDs in VLC, but I have never tried it because ImgBurn works so well. --Guy Macon (talk) 14:31, 19 January 2016 (UTC)


 * I've been using HandBrake (which I just learned works on Windows too) to convert my old DVDs to MP4s on my wife's Mac. Just another idea for you: get a cheap USB disc drive and plug it into the Mac, which lets you skip the separate ISO step and do everything on one machine. Dismas |(talk) 18:25, 19 January 2016 (UTC)
 * (OP) Thanks! I settled on the 21-day free trial of AnyDVD on the Windows laptop, to decrypt the discs on-the-fly so ImgBurn (free) can read them and create ISO images. It takes 20-30 minutes per disc even on this pretty old machine, so that works. 21 days is long enough to finish all the discs I've got waiting, but they have a 20% sale on until 24th January if anyone feels like buying. On the Mac I'm using HandBrake (free) to encode high-quality MKV files from the ISOs. It's fast enough to finish a movie at "veryslow" quality in about 90 minutes, and I can leave a queue running all night. Once I figured out I have to choose all my settings THEN create a custom preset to save them, I was fine :-) 94.12.81.251 (talk) 17:59, 20 January 2016 (UTC)
 * USB optical drives are fairly cheap now-a-days. LongHairedFop (talk) 19:36, 20 January 2016 (UTC)
 * Yeah I know, but I didn't feel like going out to buy one or waiting for a delivery. 94.12.81.251 (talk) 20:05, 20 January 2016 (UTC)

In a microchip, what are the physical equivalents of a head, state register, tape or finite table
In a logical description of an abstract machine (able to process information), there is an infinite tape and a head reading/writing and moving the tape left/right. There are also a state register and a finite table. What are the physical equivalents in a real microchip implementation? I suppose an approximation of the infinite tape would be RAM or HDD. And I also suppose the finite table is the instruction set. Is that right? What about the other two? --Llaanngg (talk) 15:54, 19 January 2016 (UTC)


 * It sounds like you're thinking of a Turing machine, but the vast majority of real processors use architectures nothing like a Turing machine, so I'd say looking for the head is futile. It might be kinda sorta similar to the program counter, but not really. —Steve Summit (talk) 16:04, 19 January 2016 (UTC)


 * In a classical Turing machine, the values on the tape are somewhat like registers in a CPU. The registers in the CPU don't move around, so you don't need a head to read/write them. Further, the Turing machine stores more than values on the tape. It can store instructions as well. In a modern computer, the instructions are in a process control block, which is stored in memory, not the CPU. Technically, they tend to be stored in logical memory, where part is in a backing store and part is in physical memory, but it appears to be all real memory to the CPU. So, that would be like a separate Turing machine altogether that sends information to the CPU. Trying to simplify a CPU down to a Turing machine requires you to ignore the complexities. However, it is good to comprehend how a Turing machine works, because that is how linear, step-by-step programming works - which is how most people write programs. 199.15.144.250 (talk) 16:26, 19 January 2016 (UTC)


 * What they said. The Turing machine is an abstract model of computation, used for reasoning about computation. Real-world computer architectures are mostly practical versions of register machines, although there are some stack machines in use. --71.119.131.184 (talk) 18:37, 19 January 2016 (UTC)


 * A typical computer is only 'equivalent' to a Turing machine in what it can do and what it can't. There doesn't have to be (nor often is there) a direct correspondence between the inner functioning of one versus the other.  Any computer (with sufficient time and memory) can emulate a Turing machine - and a Turing machine can emulate any computer.  That's a demonstrable mathematical relationship - but it doesn't depend on their internal architectures.  XKCD 505 has a great example of how one can imagine architectures for computation that look nothing like either a Turing machine or a modern computer.  THIS, on the other hand, is an actual Turing machine (albeit with only a finite tape) built out of Lego.  But a system that's equivalent to a Turing machine can be made from all sorts of elements.  HERE is one built from model train tracks! SteveBaker (talk) 17:15, 19 January 2016 (UTC)
 * Modern computers are more von Neumann machines than Turing machines. Ruslik_ Zero 20:53, 19 January 2016 (UTC)
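To make the four components in the original question concrete, here is a minimal Turing machine simulator in Python (an illustrative sketch, not how any real CPU works): the dict plays the infinite tape, `state` is the state register, `table` is the finite table, and `pos` is the head.

```python
from collections import defaultdict

def run_turing_machine(table, tape, state="start", pos=0, max_steps=10_000):
    """table maps (state, symbol) -> (new_state, new_symbol, move); move is -1, 0 or +1."""
    # "_" is the blank symbol; a defaultdict fakes the infinite tape.
    cells = defaultdict(lambda: "_", enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[pos], move = table[(state, cells[pos])]
        pos += move  # the "head" moves left or right along the tape
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip("_")

# The finite table for a machine that inverts every bit, halting at the first blank.
table = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(table, "1011"))  # 0100
```

As the replies above note, none of these parts maps one-to-one onto hardware: the tape is loosely like RAM plus backing store, the state register and table are loosely like the CPU's internal state and control logic, and the head has at best a faint analogue in the program counter.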

How does a webmaster program a forum?
What skills are necessary for a webmaster to program a forum? That is, create a working system that allows the end user to type something into a form and automatically see the result printed on the page? The webmaster acts as administrator and can moderate the postings, like a normal bulletin board/online forum. A webmaster may want to create a customized web forum, because the intent of the website may be different from the other types already on the market, or the webmaster may want to have full control over the look and function. 140.254.77.184 (talk) 20:14, 19 January 2016 (UTC)
 * Usually they would install software that has already been written, and then customise that with names, style sheets, set up users, groups etc. Drupal can do the job, but there are many others, see Comparison of Internet forum software. Otherwise the programmer will have to know about forms, POST method, user security, and databases to store the information on the server. From the comparison page you can see that the most popular language is PHP, and database MySQL. So the webmaster should learn these. Is this a homework question? Graeme Bartlett (talk) 21:13, 19 January 2016 (UTC)
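The form → POST → store → display loop Graeme describes can be sketched in a few lines. This toy uses Python's standard WSGI interface rather than the PHP/MySQL stack the comparison page favours, and the handler, field names and markup are all invented for illustration:

```python
import io
import html
from urllib.parse import parse_qs

posts = []  # in-memory store; a real forum would keep these in a database such as MySQL

def forum_app(environ, start_response):
    """A toy WSGI forum: a POST adds a message, every request renders the page."""
    if environ["REQUEST_METHOD"] == "POST":
        size = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(size).decode()
        message = parse_qs(body).get("message", [""])[0]
        if message:
            posts.append(message)
    page = '<form method="post"><input name="message"><input type="submit"></form>'
    page += "".join(f"<p>{html.escape(p)}</p>" for p in posts)  # escaping is the bare minimum of safety
    start_response("200 OK", [("Content-Type", "text/html")])
    return [page.encode()]

# Simulate one POST without running a real server:
request = {"REQUEST_METHOD": "POST", "CONTENT_LENGTH": "13",
           "wsgi.input": io.BytesIO(b"message=hello")}
print(forum_app(request, lambda status, headers: None)[0].decode())
```

Everything a production forum adds on top — user accounts, sessions, moderation tools, persistent storage — is why installing existing software is usually the better answer.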


 * Do you actually mean "program" as in writing a piece of software, or do you simply want to install existing software? There are tons of Web forum software packages that you can use "off-the-shelf". If you (or whoever) do actually want to write software, well, you need to have general programming knowledge first. Beyond that maybe Web programming will point you in the right direction. --71.119.131.184 (talk) 21:32, 19 January 2016 (UTC)


 * Turning over your question in my head a little more, it seems like what you might want to do is tweak an existing software package. A lot of forum software, CMSes, and the like allow you to extensively customize your installation, including the "look and function". It's unlikely you would need to write new software from scratch unless you really want to do something that's difficult to do with existing software. --71.119.131.184 (talk) 01:49, 20 January 2016 (UTC)

SQL max function question
Today at work, I spent over an hour puzzling over why my code didn't work. In the end, it turned out that I had written an SQL query in the form of, which I thought was supposed to either return null or simply not return anything if there were no rows satisfying the condition. But then I realised it returned 0. So I changed the query to abandon the max function and instead simply select  from , in descending order, and changed the code to stop at the first result. Is there a way, in plain SQL, to make the query do what I thought it would? JIP | Talk 21:36, 19 January 2016 (UTC)


 * How about:

SELECT COUNT(*), MAX(W.NUMBER) FROM WHATEVER W WHERE W.THIS = W.THAT;


 * Then use a count greater than zero to indicate a match was found. StuRat (talk) 22:04, 19 January 2016 (UTC)


 * What database engine are you using? In Oracle, I get a single record with the value null. In MySQL/MariaDB, I get a null record. In MS-SQL, I get a single record with a null value. Perhaps you are using an interface like Ruby, Java, or PHP that is translating the value "null" into zero. 209.149.115.240 (talk) 13:16, 20 January 2016 (UTC)
 * There is a ternary operator in SQL. This should work, but I haven't tested if it's even legal SQL.
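For what it's worth, the behaviour is easy to reproduce with SQLite from Python: MAX over zero rows still yields one row whose value is NULL (None), and either the count trick or a CASE expression (SQL's ternary) can distinguish "no rows" from a real value. A sketch, with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (number INTEGER, this TEXT, that TEXT)")

# No matching rows: MAX() still returns one row, whose value is NULL (None).
print(conn.execute("SELECT MAX(number) FROM t WHERE this = that").fetchone())
# (None,)

# The count trick: COUNT(*) = 0 signals "no match" unambiguously.
print(conn.execute("SELECT COUNT(*), MAX(number) FROM t WHERE this = that").fetchone())
# (0, None)

# CASE WHEN turns the empty result into a sentinel of your choice.
print(conn.execute(
    "SELECT CASE WHEN COUNT(*) = 0 THEN -1 ELSE MAX(number) END "
    "FROM t WHERE this = that").fetchone())
# (-1,)
```

So if JIP's query returned 0 rather than NULL, the likeliest culprit is the calling interface mapping NULL to zero, as 209.149.115.240 suggests.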


 * MS SQL Server also has the IIF function, if that's the server you are using. LongHairedFop (talk) 19:24, 20 January 2016 (UTC)