Wikipedia:Reference desk/Archives/Computing/2020 August 17

= August 17 =

DVD ghosting
Apologies in advance, as it's not really a computing problem, but I recently added content to a film article (Robbery (1985 film)) and would like to comment on what I think is an authoring problem with the DVD in my possession. It's a region-free cheapie from a company called Flashback Entertainment, 4:3 of a made-for-TV crime drama, not obvious to me if the original medium was film or (presumably PAL, as it's Australian) videotape. I wouldn't bother except that it's an enjoyable film, or would be if it weren't for a single distinct "ghost" that follows any moving object, as though a fainter copy of the previous frame were superimposed. Stationary images are of adequate quality. All I really want to know is the proper term for this fault; any other info would be a bonus. Doug butler (talk) 05:37, 17 August 2020 (UTC)


 * See Ghosting (television).--Shantavira|feed me 08:37, 17 August 2020 (UTC)
 * Perhaps I shouldn't have used the word "ghost". Having grown up in the age of B/W analog TV I understand multipath and mismatch echoes, where the whole picture is repeated microseconds later. This would have to be a frame problem; a repetition of many tens of milliseconds, as non-moving parts of the image are superimposed and thus inconspicuous. More like motion blur but as a fault in the DVD transfer process. Doug butler (talk) 13:47, 17 August 2020 (UTC)
 * Can the version on the DVD have been grabbed from a broadcast? --Lambiam 19:56, 17 August 2020 (UTC)
 * No, I was taken in by that kind of rip-off once. This is better resolution than domestic VCR (and no line artifacts) and way better than analog TV. Super-8 or 720p quality maybe, though I'm no expert. Doug butler (talk) 20:47, 17 August 2020 (UTC)
 * DVD-ROMs are all standard definition, hence 480p or 576p. Don't fall for "remastered in blah blah" scams. It sounds to me like a case of mangled up deinterlacing but I don't really know much about that. 93.136.174.224 (talk) 21:56, 17 August 2020 (UTC)
 * I'll buy that. Thanks. And an interesting article. Doug butler (talk) 10:53, 18 August 2020 (UTC)
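The deinterlacing guess above can be illustrated: a "blend" deinterlacer averages the two fields of an interlaced frame (captured 20 ms apart on PAL), so any moving edge appears twice at half intensity, i.e. exactly the faint trailing copy described, while stationary areas are unaffected. A minimal pure-Python sketch, using a 1-D "scanline" of brightness values instead of a full image:

```python
# One-dimensional stand-in for a video line: a bright object (value 9)
# on a dark background (value 0), moving right between the two fields.
field_a = [0, 0, 9, 9, 0, 0, 0, 0]  # captured first (even lines)
field_b = [0, 0, 0, 0, 9, 9, 0, 0]  # captured 20 ms later (odd lines)

# "Blend" deinterlacing: average the two fields into one progressive line.
blended = [(a + b) / 2 for a, b in zip(field_a, field_b)]
print(blended)  # -> [0.0, 0.0, 4.5, 4.5, 4.5, 4.5, 0.0, 0.0]
```

The object now occupies both its old and new positions at half brightness (a "ghost"), while the stationary background stays clean, matching the symptom that only moving objects trail.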

spam, what do
I got an automated reply from some company's customer support's ticket system. The company, a web shop, is legit. I never contacted them before. Apparently (what I gathered from the part of the original message they quoted), it's a piece of spam. What should I do, write and tell them "it wasn't me"? Or not bother? Does it mean someone had my password (I changed it in the meantime) or can spammers spoof From: addresses without having the password? Aecho6Ee (talk) 08:01, 17 August 2020 (UTC)
 * The From: address (and the Reply-To: address) in emails are about as secure and accurate as the "return address if undelivered" on snail-mail, i.e. completely forgeable. Unless you know how to parse the full email headers, ignore the email. LongHairedFop (talk) 08:40, 17 August 2020 (UTC)
 * (e/c): Yes, it's easy to spoof the From address. It's never a good idea to respond to spam; you risk being sent more. Just report the spam to your email provider; in Gmail, for example, there is a "report spam" button at the top of the screen.--Shantavira|feed me 08:43, 17 August 2020 (UTC)


 * If I read the OP correctly, they have not received an email with forged headers, but they received an email in reply to something purporting to come from them, which changes the answer a tad.
 * "From" addresses in email are as reliable as in physical mail, that is, not at all (I could write "Elizabeth II, Buckingham Palace" as the return address on physical mail I send). (Well, at least in the base standard (SMTP); additional stuff was tacked on afterwards, but similar to seals in physical correspondence there is no silver bullet.) Our article email authentication is surprisingly well-written for such a technical topic.
 * While email authentication (i.e. ensuring the "from" address is correct) depends on how your email server and the recipient email server are set up, faking headers is easier than breaching passwords, so I would say it is more likely that the headers were faked. In particular, if your email had been breached by a low-quality high-quantity spammer, all your email contacts would have received some Viagra ads by now (one of them would probably have told you about it), and if it had been breached by a low-quantity high-quality scammer (to use for social engineering (security)), they certainly would not risk burning an account by messaging random websites.
 * Regardless of header faking, the "legit web shop" is apparently putting addresses into their ticket system without validating them beforehand. (Validation: that annoying step where, to create an account on site X, they ask for your email and send you a link you must click before the account is fully active.) See for why this is a bad thing. Whether you warn them of this or not is up to you (though personally I would opt for "don't bother").  Tigraan Click here to contact me 09:06, 17 August 2020 (UTC)
 * Yes, that's what I meant. Someone opened a ticket in their system, with my email as the sender, and the contents are spam (of a nasty kind, too; not super-nasty, but still quite nasty). Aecho6Ee (talk) 11:47, 17 August 2020 (UTC)
 * This is known as a "Joe-job" where a spammer subverts the good name of an innocent bystander to generate spam. You might land on some less-vigilant blacklists because of the incident. However, most blacklists use IP addresses and domain names, rather than individual email addresses, to list spammers. So you probably won't be bothered. I would just let sleeping dogs lie, in this case. Elizium23 (talk) 05:51, 18 August 2020 (UTC)
 * Thank you everyone for your help! Aecho6Ee (talk) 11:53, 19 August 2020 (UTC)
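To make the "forgeable From" point above concrete: in the message format itself (RFC 5322), From: is just a text header the sender writes in; nothing in the base protocol checks it against the account actually sending. A small illustration using Python's standard email module (all addresses here are made up):

```python
from email.message import EmailMessage
from email.parser import Parser

# Build a message claiming to be from someone else entirely.
msg = EmailMessage()
msg["From"] = "victim@example.com"      # invented address; nothing verifies it
msg["To"] = "support@example-shop.com"  # also invented
msg["Subject"] = "spam goes here"
msg.set_content("Buy now!")

# A receiving ticket system that trusts the header sees the forged sender.
parsed = Parser().parsestr(msg.as_string())
print(parsed["From"])  # -> victim@example.com
```

Layers like SPF/DKIM/DMARC (see email authentication) were added precisely because the message format itself offers no check here.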

Downloading from Google Drive
I've noticed that downloading files from my own Google Drive to my own laptop involves some crappy process. If you want to download multiple files or a folder with files, then, after zipping is complete, several dialogue boxes pop up and presumably you have to click OK on all of them. Downloading proceeds in batches named something like drive-download-20008Z-001.zip, -002.zip, etc. In one instance, when I wanted to download 24 files, I ended up with three zip batches instead of one (initially I naturally OK-ed one box and ended up with a batch that contained only some of the files). Then you have to unzip all batches to get all of the desired files. Why is that? Windows 10, just in case. Brandmeistertalk  18:45, 17 August 2020 (UTC)
 * I think it depends on the file size. Drive is trying to prevent you from having a ginormous 8GB download, so breaks it up. Perhaps you can confirm that these are large files as well? I would be surprised if it split smaller files this way. Elizium23 (talk) 19:21, 17 August 2020 (UTC)
 * Yep, seems like it happens only with large filesizes. I've downloaded smaller folders smoothly, without split. Brandmeistertalk  21:01, 17 August 2020 (UTC)
 * There's nothing exceptional about a 8GB download, even over HTTPS. This page lists some workarounds. Why? Consider that the free (as in bar tab) service you're using has no monetary incentive to let you store large files. They take up space a less active user could use and produce comparatively less metadata for ad targeting per MB of disk space used. 93.136.174.224 (talk) 21:52, 17 August 2020 (UTC)
 * I find it quite doubtful Google has no incentive to ensure people are able to store large files in the 8 GB range without problems. They already have a 15 GB limit on how much each free user stores. And this is the same company which doesn't seem to give a damn when people with legitimate education/academic accounts store terabytes, or when people with a single (paid, sure, but $12/month isn't much for storing 100 TB) G Suite account store well over the 1 TB limit for a single account, or when people with free accounts upload many, many hours (TB worth) of their gaming videos to YouTube and keep them private, i.e. for their own use (okay, they made that TOS change which everyone went nuts about, but AFAIK there's been no mass crackdown yet). They do crack down on resold academic accounts and similar, but someone storing 8 GB is peanuts to them. Remember that while it's true 8 GB HTTPS downloads are no big deal for some, there are still plenty of people for whom they are a problem. And in-browser resuming, especially for services like Google where the URL may stop working, doesn't always work. (And frankly, I've found Google Drive's HTTPS has randomly died at times. And not surprisingly, Firefox sometimes has more issues with it than Chrome.) If anything, the fact that they split these up is an indication they do want the maximum number of people to be able to download them without problems. The simple fact is the minor cost of storage is nothing compared to what they earn by convincing people to use their service, which means they do try to ensure such things work smoothly and definitely don't do dumb shit just to annoy them. (A key point people often miss, IMO.) I mean, I'm fairly sure the feature does not go away when you are paying them for extra storage on your personal account. For that matter, I suspect it's the same for G Suite. 
Yes, it gets annoying when you have a 1-gigabit connection, but you could always use Backup & Sync or one of the many options that just use the API, e.g. rclone. Nil Einne (talk) 13:57, 19 August 2020 (UTC)
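For what it's worth, reassembling the -001.zip/-002.zip batches by hand can be scripted. As described above, each batch is an ordinary, independent zip file holding a subset of the requested files (not a spanned archive), so extracting every part into the same folder restores the full set. A rough sketch (the function name and paths are examples, not anything Google provides):

```python
import glob
import zipfile
from pathlib import Path

def extract_batches(pattern: str, dest: str) -> list[str]:
    """Extract every zip batch matching `pattern` into `dest`,
    returning the names of all files extracted across the parts."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    extracted = []
    for part in sorted(glob.glob(pattern)):
        with zipfile.ZipFile(part) as zf:   # each part is a normal zip
            zf.extractall(dest)
            extracted.extend(zf.namelist())
    return extracted

# e.g. extract_batches("drive-download-*-0*.zip", "restored")
```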

Memory allocation question
This came to my mind recently. On AmigaOS, if a process allocates memory and terminates without freeing it, the memory is still left allocated, with no way to free it other than restart the whole computer.

But as I understand it, Linux and Windows work smarter, freeing the memory the process allocated. How does this work? Is there some kind of mapping in the OS kernel connecting a process ID to the memory it allocated, and once the kernel finds out the process has terminated, it goes through all the memory mapped to it and frees it? JIP | Talk 21:00, 17 August 2020 (UTC)
 * Modern operating systems (well, UNIX did it since the 70s, Windows got it much later) use a virtual memory architecture. Each process sees the full address space as its own (potential) memory. The OS uses the memory management unit to map pages of real memory to this virtual address space if addresses are actually accessed, with the MMU doing the translation from virtual to physical addresses. This is managed via page tables, which associate the virtual pages of any process with physical pages of real memory. When the process is terminated, the page tables are purged, and all physical memory assigned to the process is available again. Note that this architecture also stops one process from accessing (or disturbing ;-) the memory of another process - unless there is an explicit request for a shared memory segment, no two processes will see the same memory. This is part of memory protection. --Stephan Schulz (talk) 21:51, 17 August 2020 (UTC)
 * Linux today allocates process memory in two complementary ways: mmap(2) and sbrk(2).
 * The latter is more ancient, being the way to extend the process "break" line beyond its initial starting point and add more memory to the heap. The memory in the heap is contiguous with text and data of that process. When the process ends, it is trivial for the OS to reclaim all its contiguous memory when its reference count drops to zero.
 * mmap(2) on the other hand, presents an alternative method for memory allocation. Originally this system call was designed to map regular files into memory for fast cached operations. But it can be used to allocate memory as easily as sbrk(2) does, but instead you map anonymous pages. These can be sprinkled out in the virtual process address space pretty much anywhere. Once again, the OS keeps a reference count, and mapping in the page tables, and when the reference count drops to zero, the memory is reclaimed. Elizium23 (talk) 22:21, 17 August 2020 (UTC)