Wikipedia:Reference desk/Archives/Computing/2014 July 18

= July 18 =

Where are Google Earth "pins" stored?
I've recently changed from an XP PC to a Win7 PC. I'd like to copy my Google Earth pins across. Where would they be on the XP machine, and where should they go on the Win7 one? -- SGBailey (talk) 11:45, 18 July 2014 (UTC)


 * I just found this with Google:

Your Placemarks are stored in a file named myplaces.kml:
 * XP: C:\Documents and Settings\YOUR user name\Application Data\Google\GoogleEarth\myplaces.kml
 * Win7: C:\Users\YOUR user name\AppData\LocalLow\Google\GoogleEarth\myplaces.kml
-- SGBailey (talk) 22:26, 18 July 2014 (UTC)

Reminders
I would like to be able to remind myself of actions to take at some future time (e.g., pay for a holiday six weeks before travelling in six months' time).

Is there anything on Windows 7 that can readily do this? I had thought of using a Perl script (which I run at boot-up anyway) to add a sticky note to the Sticky Notes file, but I can't work out how to add a note to that file.

Any suggestions? -- SGBailey (talk) 12:20, 18 July 2014 (UTC)


 * Google Calendar does this. You can set any reminder for any time you like and it will e-mail you when you want it to.  Mingmingla (talk) 16:01, 18 July 2014 (UTC)


 * I'm giving that a try. Thanks -- SGBailey (talk) 17:19, 18 July 2014 (UTC)


 * I use my cell phone for this type of thing. Most cell phones have a calendar with this capability.  Then there's also a paper calendar, which is less likely to lose that info if you reinstall the O/S.  Of course, you need to get in the habit of checking your paper calendar every day, as it can't set an alarm.  For really important things, like say your wedding anniversary, it's best to use multiple methods, if you want there to be another anniversary next year.  StuRat (talk) 16:11, 18 July 2014 (UTC)


 * The thing I like about Google Calendar (and I presume Apple's iCal works the same way) is that it syncs across your devices and web services, presuming you have a Google account. So my phone, my tablet, my desktop and my e-mail will all notify me one way or another with an audible alarm, a notification, or the aforementioned e-mail. This means an O/S reinstallation, an accidental deletion, or the loss of a single device is no longer an issue. It's "in the cloud". Mingmingla (talk) 22:12, 18 July 2014 (UTC)


 * That's pretty good, but my paper calendar will still work after an EMP. :-) StuRat (talk) 23:00, 18 July 2014 (UTC)

(1) How can I add the music from Handel's "Messiah" to my PowerPoint for class; (2) how do I document the music on a "Works Cited" page?
Hi Wiki,

Have questions:

(1) How can I add the music from Handel's "Messiah" to my PowerPoint for class;

and

(2) how do I document the music on a "Works Cited" page?

Grateful,

Didi Abel — Preceding unsigned comment added by Didiabel (talk • contribs) 17:05, 18 July 2014 (UTC)


 * 1) Do an Internet search to find the piece in a supported PP format, then download it and import it into the PP presentation. Note that some formats sound better than others, and some recordings will be better than others, so avoid any version performed on the kazoo. :-) StuRat (talk) 17:40, 18 July 2014 (UTC)

Dataflow driven programming framework
I'm developing a high-performance DSP system in C++. I've designed and simulated the algorithm but now it's time to start working out a structure for the actual code. It can be visualized as a directed graph of data streams, being transformed by MIMO blocks. Most streams will be fixed rate, but there may be some special cases where samples come in at arbitrary intervals. All samples will be timestamped, or a timestamp can be inferred based on the sample rate.

At several stages of the flow there are points I would like to be able to log and analyze data.

The output is being used for real-time control of a low-latency system, so I would like to be able to specify that the output thread must run at real-time priority, and the framework should be able to infer what other components of the system will also need to run at that priority. Other outputs that are being used for diagnostic displays or end-of-run reports would not need to run in real time, so their processing could be done in lower priority threads, or if needed be suspended entirely unless buffering their input starts causing memory pressure.

The basic concept I have is some sort of directed graph model with lazy evaluation. If the output thread is running at real-time priority, then it could work backwards up its dependency chain running any needed calculations at its priority level. Lower priority threads would work the same way, and could take advantage of the work done by the high-priority sections once they reach a point where they are dependent on them.

I have some ideas of how to go about implementing this, but I was wondering if there are any existing frameworks that would be good for building this sort of system. The concept isn't too hard, but there's no point in me reinventing the wheel, especially with all the possible gotchas in implementing some of the details. K ati e R (talk) 18:02, 18 July 2014 (UTC)


 * Is there a daily maintenance period ? If so, then all the low priority jobs could be run then, if once a day is sufficient.


 * Something else to think about is if you have too many high priority jobs running at once, do you prefer to slow them all down equally, or suspend some until later ? StuRat (talk) 18:21, 18 July 2014 (UTC)


 * By low-priority I mean that the output period can be slowed down to hundreds of milliseconds between updates instead of tens of microseconds, or in some cases it can wait till the end of a test that runs for a few hours. I've designed the system so there will be enough processing overhead to handle everything it needs without violating any timing constraints - the most intense real-time processing comes in bursts and there is enough memory for the lower priority stuff to queue up data instead of process it, then catch up when the real-time code isn't working as hard. The problem now is just finding a framework that lets me implement this sort of system without having to dive into the nitty-gritty details of managing the scheduling of each component. The dataflow is simple enough that I feel like a framework designed for this sort of problem should be able to work out those details itself, now that I've proven that there will be enough overhead for it to work. It would also make things more flexible when it comes to changing the design down the road because I would just have to work at the higher level. K ati e R  (talk) 18:35, 18 July 2014 (UTC)


 * Still, even though you think you have enough resources to get the job done, you should have a backup plan in place for when resources are more limited than you expect. I've seen systems resort to page swapping when handling too much at once, and consequently grind to a crawl, where just cutting back on the running processes would have made everything run much more smoothly. StuRat (talk) 19:14, 18 July 2014 (UTC)


 * I know precisely what resources I have, what processes are running on the system and at what priority, the purpose and frequency of each interrupt on the system and the delays involved in handling it. Non-essential interrupts are preemptible, and I can place precise bounds on the delays in the critical data processing interrupts. I monitor the running processes for unexpected amounts of memory usage and the operating system scheduler puts precise limits on the amount of CPU time each process is allowed to use. My process's code segment is pinned in physical memory, and before running a test all data pages are pre-faulted and pinned. I have mathematically proven the correctness and performance of my algorithm, confirmed those results through simulation, and have a test plan in place to empirically verify the performance on actual hardware once it is implemented. System monitoring code watches for anything to go outside of acceptable bounds and immediately attempts a smooth shutdown and termination of the run, because if something goes outside of those bounds then something is happening that I do not yet expect or understand, so it should stop until we analyze the situation.


 * This isn't a question about best practices for handling high CPU loads and many jobs, it's about frameworks for high performance dataflow-based DSP algorithms on multiprocessor systems. K ati e R  (talk) 19:50, 18 July 2014 (UTC)


 * In the functional community, the sort of lazy evaluation you are talking about is sometimes called reactive programming. There are a number of C++ libraries that implement some sort of dataflow structure:


 * DSPatch
 * Boost::Dataflow
 * dc-lib
 * Route11
 * If you want to get closer to the metal, SystemC is more of a software version of an HDL. --Mark viking (talk) 19:28, 18 July 2014 (UTC)


 * Thanks for the links - I'll start looking into them. I should have also added that this is proprietary code, so the GPL and other licenses that would require it to be open-sourced aren't acceptable. I'm hoping to stay at a high level here; I've done enough low-level stuff writing the DAC/ADC drivers and managing all the fiddly bits of the system that can lead to delays, and have a pretty clean abstraction of it all by the time I get to my C++ code. K ati e R  (talk) 19:50, 18 July 2014 (UTC)


 * Oooh and DSPatch looks very shiny! And LGPL so I can use it. :-) K ati e R  (talk) 19:54, 18 July 2014 (UTC)