Wikipedia:Reference desk/Archives/Computing/2020 December 17

= December 17 =

Storage Usage Chart for External Drive
I have an external disk drive with approximately 1 terabyte of storage on my Dell desktop computer as drive F:, which I am using for multiple backups of my data from my C: drive. I am running Windows 10. Approximately 822 GB (decimal) are in use, and approximately 177 GB (decimal) are free. Since the C: drive has 143 GB (decimal) in use, I can see that there are a lot of duplicated backups of the C: drive files, and I would like to delete some of them before I fill the external drive. What I would like to see is how much space each of the main subdirectories of the F: drive is using. I can select one and then click on Properties, but that takes up to a minute while it scans through the folders and files before it displays the actual usage. What I would like to know is whether there is a tool or application that will display either a pie chart or a bar chart showing the storage utilization of each directory. Does such a tool or utility exist? Robert McClenon (talk) 03:13, 17 December 2020 (UTC)


 * Not a pie or bar graph but try "Beyond Compare". The GUI is very much like Explorer but with lots of extra folder information - including displaying sub-folders and their sizes plus many other cool features, like finding duplicate files on different HDDs. Very generous trial period with no "locked until you pay" features. 41.165.67.114 (talk) 06:53, 17 December 2020 (UTC)


 * WinDirStat and similar programs scan your drive(s) and show usage by file extension and directory. They won't help you find duplicate files, though. LongHairedFop (talk) 09:46, 17 December 2020 (UTC)
 * Thank you, User:LongHairedFop. I am not looking for an identification of duplicate files.  I know that many of the subdirectories are copies of directories from the C: drive, many of which are archival in nature, so that a copy of outgoing correspondence in 2019 includes all of the outgoing correspondence that is also in a copy that was made in 2018.  I can do that.  If the product is free and open-source, I will try it.  Robert McClenon (talk) 16:00, 17 December 2020 (UTC)
 * Thank you, User:LongHairedFop - How do I download the free open-source product from the repository? It isn't obvious on going to the web site.  Robert McClenon (talk) 05:03, 18 December 2020 (UTC)
 * Are you referring to this https://osdn.net/projects/windirstat/ ? If so, I see 5 files on that page under "Recent updates". This may be what you want [//osdn.net/projects/windirstat/storage/historical/windirstat/1.1.2/windirstat1_1_2_setup.exe/] or [//osdn.net/projects/windirstat/storage/historical/windirstat/1.1.2/windirstat_unicode.zip/] if you don't want to install. I think the repository is a little confusing since, per our article, there have been no binary (or, I think, any) releases since 2007, even if there were supposedly some code updates, and it looks like these have all been marked historical. Nil Einne (talk) 02:40, 20 December 2020 (UTC)
 * I use TreeSize Free. By the way, the Explorer Properties dialog in Win10 routinely returns completely wrong figures. TreeSize shows rather quickly the total size of all folders and the size of all single files. 2003:F5:6F19:4300:40E1:C2AF:BF93:B804 (talk) 13:30, 20 December 2020 (UTC) Marco PB

Append Query in Microsoft Access
I am trying to use an append query in Microsoft Access (Office 365) to copy records from a table in one Access database on my Windows 10 machine to an existing table in another Access database in another directory on the machine. I have set up the append query in Design View, and what I see in SQL View looks correct. I then click the red exclamation-mark Run button in the upper left. Nothing happens. Are there any obvious pitfalls that I should be considering to get it to work? The Help does mention that append queries (and presumably other action queries) can be disabled, that they sometimes display something in the message bar and sometimes don't, and that one needs to enable the message bar before enabling the query. Is there anything that may be obvious after the fact that I should consider? Robert McClenon (talk) 03:13, 17 December 2020 (UTC)
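For reference, a cross-database append in Access SQL has this general shape; the table name, field, and file path below are made-up placeholders, not taken from the question:

```sql
-- Append this year's rows from the local Contacts table
-- to the Contacts table in another .accdb file.
INSERT INTO Contacts IN 'C:\Data\Archive.accdb'
SELECT *
FROM Contacts
WHERE Year(Created) = 2020;
```

If the query silently does nothing, one common culprit (consistent with the message-bar behaviour the Help describes) is that Access has disabled action queries because the database is not trusted; clicking "Enable Content" on the security bar, or adding the folder as a Trusted Location in the Trust Center, re-enables them.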

Flip-flops made from vacuum tubes?
One of the more serious problems during early computing was memory, and there were a number of creative solutions, from mercury delay lines to magnetic core memory. But if I understand things correctly, it should have been possible to build ordinary flip-flops from any kind of electric switch, and in particular from vacuum tubes. Is there a theoretical problem I don't see? Or would this simply be too big and bulky to be practical at all? --Stephan Schulz (talk) 12:46, 17 December 2020 (UTC)
 * [Image: Eccles-Jordan trigger circuit flip-flop drawings] Quite possible, and indeed it was done in limited circumstances. You wouldn't want to build memory this way, though; even using double triodes, each bit would occupy about 2 cubic inches. In the attached schematic (lower drawing) a pulse on P will turn the left-hand valve on or off depending upon the pulse direction. If the left-hand valve is conducting then A1 is low, so G2 is low and the right-hand valve is off, so that A2 is high and G1 is held up. If the pulse went the other way, G1 goes low, A1 is high, G2 is high, so A2 is low, which maintains the state. Oops, sorry. Forgot the signature Martin of Sheffield (talk) 15:40, 17 December 2020 (UTC)
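 * The switching behaviour described above can be sketched at the logic level; this is a toy model (each valve treated as an ideal inverter, with anodes cross-coupled to the opposite grids), not an electrical simulation of the circuit:

```python
def settle(g1, g2):
    """Cross-coupled inverter pair: each 'valve' inverts its grid level
    to its anode, and each anode drives the other valve's grid.
    Returns the stable anode levels (a1, a2), True meaning high."""
    a1, a2 = (not g1), (not g2)   # a conducting valve pulls its anode low
    for _ in range(4):            # a few passes are enough to settle
        g1, g2 = a2, a1           # anodes feed the opposite grids
        a1, a2 = (not g1), (not g2)
    return a1, a2

# Drive the left grid high ("set"): left valve conducts, A1 low, A2 high.
print(settle(True, False))   # (False, True)
# Drive it low ("reset"): left valve off, A1 high, A2 low.
print(settle(False, True))   # (True, False)
```

Once the pulse is removed, the cross-coupling holds whichever state was last set, which is exactly why the circuit works as one bit of storage.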


 * Flip-flop gates from tubes were all over the place in the arithmetic and control logic. Short-term working memory on a modest scale could be (and was) realized this way, such as in combination with mercury delay lines. For use on a larger scale, not only would this indeed have been bulky, it would also have been very energy-consuming, producing much heat. Additionally, any power outage would have meant irretrievable memory loss. When solid-state logic became common in the 60s, non-volatile magnetic core memory remained the technology of choice for main memory, surviving now only in terminology like "in core" and "core dump". --Lambiam 13:20, 17 December 2020 (UTC)


 * Power outages would also take out mercury delay lines. Storage scopes were another option; see Oscilloscope_types or their development, the Williams tube. Martin of Sheffield (talk) 14:08, 17 December 2020 (UTC)


 * I've seen the Manchester Baby at the Science and Industry Museum, and they use a CRT screen as the memory. Very cool, but also very volatile ;-). --Stephan Schulz (talk) 16:03, 17 December 2020 (UTC)
 * Umm, yes. The "CRT screen" you saw was a Williams tube, a form of storage scope. It wasn't quite as volatile as you might think: assuming no power interruptions, decay time was measured in hours. Martin of Sheffield (talk) 16:37, 17 December 2020 (UTC)


 * On the ENIAC, each "accumulator" or register consisted of 101 flip-flops made using vacuum tubes. It stored 10 decimal digits, but rather than representing them in binary, each possible value of each digit had its own flip-flop, so the digit 7 would look like 0000000100, 8 would be 0000000010, and so on. The 101st flip-flop was for the sign. There were 20 accumulators, and each flip-flop needed 2 tubes, so that's 4,040 tubes right there. The whole machine contained over 17,000 tubes. --174.95.161.129 (talk) 21:51, 17 December 2020 (UTC)
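 * The one-hot digit scheme described above is easy to sketch; this illustrates the encoding only, not ENIAC's actual ring-counter circuitry:

```python
def eniac_digit(d):
    """Return the 10-flip-flop pattern for one decimal digit: ten
    positions, exactly one of them set, matching the examples in the
    text (7 -> 0000000100, 8 -> 0000000010)."""
    assert 0 <= d <= 9
    flops = ['0'] * 10
    flops[d] = '1'          # the d-th flip-flop (from the left) is set
    return ''.join(flops)

print(eniac_digit(7))   # 0000000100
print(eniac_digit(8))   # 0000000010

# Tube count for the accumulators, as in the text:
# 20 accumulators x 101 flip-flops x 2 tubes each.
print(20 * 101 * 2)     # 4040
```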

Bash XML processing tool
Dear Wikipedians:

I am currently working on processing Google Blogger comments for a blog that I am following. The comments download neatly into an XML file, which is a standard "one-liner" XML file where the entire file is squished onto one line without an ending line break (so the file registers as "0" when I do a line count on it).

The overall structure of the XML file is fairly simple: a couple of header tags, followed by the main body of the file, which consists of a pair of enclosing tags (equivalent to the corresponding tag for HTML files), inside of which are a number of records. Each record denotes one comment in the comments section.

Now each XML file contains 500 comments, and I am interested in extracting the first 300 of them. I am working in the Bash environment of my Ubuntu Bandwagon box. I am therefore wondering if there are any shell-based tools that would allow me to accomplish this XML record-extraction task. Either extracting the first 300 comments or removing the last 200 comments would be fine for me. (And obviously "300" is a number I've given for illustration purposes; the number of comments I am interested in extracting varies from file to file.)

I know about, but apparently it only helps with transforming XML files into HTML files.

Thanks,

172.97.139.62 (talk) 15:33, 17 December 2020 (UTC)


 * I usually use Python's xml.dom.minidom for stuff like that. For very large XML files you need stream-like parsers, but 500 records isn't much. 2602:24A:DE47:BB20:50DE:F402:42A6:A17D (talk) 06:29, 18 December 2020 (UTC)
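 * A minimal sketch along those lines, keeping only the first N comment records. The tag name "entry" (as in an Atom feed) is an assumption here, since the actual tag names were lost from the question; adjust it to match the real file:

```python
from xml.dom import minidom

def truncate_feed(xml_text, keep, tag="entry"):
    """Parse a (possibly one-line) XML string and delete every <tag>
    record after the first 'keep' of them, leaving the rest intact."""
    doc = minidom.parseString(xml_text)
    for node in doc.getElementsByTagName(tag)[keep:]:
        node.parentNode.removeChild(node)
    return doc.documentElement.toxml()

# Tiny stand-in for the real 500-comment file:
sample = ('<feed><title>demo</title>'
          '<entry><content>one</content></entry>'
          '<entry><content>two</content></entry>'
          '<entry><content>three</content></entry></feed>')
print(truncate_feed(sample, 2))
```

Run against the real file (read in with open(...).read()) and keep=300, this drops the trailing 200 records while leaving the header tags untouched.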


 * Thanks. Wow I noticed you're IPv6. Which ISP are you using? 172.97.139.62 (talk) 16:34, 18 December 2020 (UTC)


 * Try this:
 * This needs to be put in a text file; let's say with the name, after which you can use this on the command line:
 * The resulting file will be 300 lines long, each containing one comment. --Lambiam 23:27, 18 December 2020 (UTC)


 * Oh yesssss! That's exaaaaaaactly what I was looking for! Using the Bash's innate abilities to accomplish this task. Thank you so much. Now this issue has been resolved. 172.97.139.62 (talk) 23:35, 18 December 2020 (UTC)