Wikipedia:Reference desk/Archives/Computing/2012 January 7

From Wikipedia, the free encyclopedia
Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 7[edit]

Is redstone Turing complete?[edit]

I heard someone claim that redstone circuitry in Minecraft is actually Turing complete. Unfortunately, I don't know enough about Turing completeness (or, indeed, redstone) to actually verify this. Could someone more knowledgeable than I confirm/deny this? Thanks! 24.247.162.139 (talk) 01:21, 7 January 2012 (UTC)[reply]

I have seen some pretty clever uses of redstone circuitry in Minecraft — adders, RAM, GPUs, etc. I haven't seen anything that would make me think you couldn't make a Turing complete computer using just redstone circuitry, except for physical limitations — there are limits, if I remember correctly, as to how far the circuitry can travel (e.g. one chunk or something like that) and still remain in Minecraft's awareness (which puts an upper limit on processor size, memory possibilities, etc.). If you can't theoretically expand it infinitely, then it's not technically Turing complete, I don't believe. But I think that's the only real limitation. --Mr.98 (talk) 01:39, 7 January 2012 (UTC)[reply]
If you can make a NAND gate (or a NOR gate, or NOT and AND or OR, or...), you can make anything. Since you can propagate signals indefinitely in Minecraft with redstone repeaters, Redstone is Turing complete. In fact (if I remember correctly, which is dodgy, since it's not my specialty), in the real world, pretty much everything is made of NAND gates, because they're the natural thing to make with transistors (with the exception of DRAM, which uses capacitors). Paul (Stansifer) 02:05, 7 January 2012 (UTC)[reply]
Yes (at least at a logical level) that's true - one can build a whole CPU from False and NAND (NAND logic), False and NOR (NOR logic) or just implies by itself (implies logic). This great article shows a bunch of logic gates built from CMOS transistors, with the non-elementary gates shown as composites of elementary ones. In practice I'm not sure designers (tricksy fellows that they are) don't find clever ways to build more complex functions from transistors that don't decompose so obviously into NAND or NOR logic; Wikipedia's own coverage of the implementation of logic gates and higher digital functions is patchy at best. -- Finlay McWalterTalk 12:38, 7 January 2012 (UTC)[reply]
Start reading here. Von Restorff (talk) 02:29, 7 January 2012 (UTC)[reply]
Redstone (with a few other kinds of Minecraft blocks, particularly switches) can be used to construct the full suite of logic gates (see redstone circuits-logic gates on the Minecraft wiki) - as that includes a NAND and NOR you can build either NAND logic or NOR logic (and as there's an implies gate, implies-logic, my fave) to build anything you want. This guy built an 8-bit CPU (I think following the programme of TECS). So, to answer your question: redstone with switches is Turing complete; I don't think redstone by itself can do anything interesting. -- Finlay McWalterTalk 12:28, 7 January 2012 (UTC)[reply]

Forcing internet access to require a user name and password (Win XP and DSL modem)[edit]

Resolved

I have a relatively modern DSL modem (provided by my ISP at a rather insulting rental fee) connected to a Windows XP computer using an ethernet cable. When I first set up the internet connection this modem enables in Windows, I selected Connect using a broadband connection that requires a user name and password as opposed to Connect using a broadband connection that is always on (the latter is described as not requiring the user to sign in and is the configuration I've generally encountered on computers at work and at other people's houses). This created a connection that, when opened, prompts for a user name and password as expected and only connects to the internet once those pieces of information have been provided.

However, I recently noticed that any software programmed to connect to the net (e.g., web browsers, any program that can check for updates, etc.) is able to do so even when the connection I created isn't active. A local area connection is present under Network Connections in addition to the internet connection I manually created; it appears to be active as soon as Windows starts and its record of bytes transferred clearly reflects internet activity.

Given this configuration, is there a clean way I can set up my network situation such that the computer is only connected to the internet when the user has signed into the manually-created internet connection? Simply disabling the local area connection predictably prevents the internet connection I created from working and googling this particular problem has proven fruitless (google seems quite keen on providing me with results pertaining to complete internet dysfunction, which is pretty much the exact opposite of this situation). Additionally, the modem's configuration page doesn't appear to contain any settings relevant to this.

Any suggestions would be appreciated!

Hiram J. Hackenbacker (talk) 03:11, 7 January 2012 (UTC)[reply]

What are you trying to achieve by doing this? Control over who can access the internet (e.g. kids)? When they use it? Where they can visit? Von Restorff (talk) 03:45, 7 January 2012 (UTC)[reply]
It's not a matter of who can access the internet (the computer's mine and I've saved the password in the connection settings anyway, so a cat could connect to the internet if it knew how to work a mouse), but rather what can access the internet. I prefer to know when it's possible for programs installed on my computer to check for updates online, as many (Flash, Java, Adobe Reader, SolidWorks, etc.) tend to do so unprompted. Besides the issue of security, this also makes it easier to ensure I'm under my ISP's bandwidth cap each month.
I used the desired configuration I've described above in the past with an older DSL modem that connected using a USB port, so I don't see any reason why it shouldn't be possible to duplicate the arrangement with a modem that connects via ethernet; I merely haven't been able to get it to work thus far. Hiram J. Hackenbacker (talk) 15:13, 7 January 2012 (UTC)[reply]
One way to be absolutely sure your computer can't go online is to pull the Internet cable out (Wifi may require pulling out the Wifi card). Of course, when you do want to go online, this may allow other nasties on your computer to go online, too. Beware that the little plastic tab on the cable may soon break off if you disconnect and reconnect it frequently. I suggest a short cable you don't mind often replacing. StuRat (talk) 23:32, 7 January 2012 (UTC)[reply]
Local Area Connections can be disabled manually under Network Connections, which I think is rather more convenient than repeatedly unplugging the modem (granted, I don't doubt that there's malicious software that can enable a disabled LAC, but I take every reasonable precaution against viruses and malware so I'm not particularly concerned about that possibility). It's unfortunate that there doesn't appear to be a way to separate a LAC connection to the modem from internet connectivity when using one of these more modern modems, but I suppose I'll just take the route of disabling the LAC completely when not using the internet or network. I've marked this as resolved. Hiram J. Hackenbacker (talk) 17:30, 8 January 2012 (UTC)[reply]

Win 7 flaky on system resume[edit]

I have a new Asus laptop running 64-bit Win 7. The whole thing is less than two weeks old and it's a relatively good unit, above the cheap end of the spectrum, with a Core i7, 8GB RAM, etc. A problem I have noticed is that when I resume from sleep or hibernation the computer has a funny tendency to crash within about three minutes of the resume (not immediately, mind you). And I mean a major crash, in that I have to do a hard boot, with the system becoming entirely unresponsive; even Ctrl-Alt-Del etc. elicits no response.

OK, so analysing this I'm noticing two trends. One is that if I manage to have the whole thing set up exactly the same as it was when I hibernated, it doesn't seem to crash - and I mean things like power, mouse, wireless USB broadband connection, and external drives either plugged in or not; if you change any of them either way while the thing's hibernating, it's almost as if it gets confused on the resume. The other thing is that it always crashes while using Firefox, but that could just be because that tends to be the first thing I access on a resume.

Any ideas? As yet it has never crashed in any other circumstances. Is it just flakiness of Win 7 (I haven't used it extensively before, but I'm not entirely impressed with it)? --124.184.191.230 (talk) 03:45, 7 January 2012 (UTC)[reply]

Which model is it? Did you run Windows Update and did you update all the drivers? Do you see a BSOD once in a while or is it just unresponsiveness? Von Restorff (talk) 03:51, 7 January 2012 (UTC)[reply]
Thanks. Model appears to be N53SV-SX788V (interestingly not the model I ordered, but all the specs were right and it's actually a slightly dearer unit). Have run all the required Windows updates, but haven't updated the drivers (I generally don't do so unless there are problems), so might give that a try. Nah, no BSOD, just totally freezes - oh, but the freeze happens in stages: first it won't reload a webpage even though I can switch from tab to tab, then it won't switch tabs, then the browser freezes totally, not highlighting links etc., then I can't switch to a different application, then the mouse itself freezes, all over the course of about two minutes. Was just wondering if anyone else had seen this with Win 7 in general. --124.184.191.230 (talk) 05:06, 7 January 2012 (UTC)[reply]

Strange "WUDFHost.exe" process - Please help[edit]

In investigating a problem with unexpected high disk I/O activities, I discovered that a WUDFHost.exe process is running on my machine under the LOCAL SERVICE user. The strange thing is, from Windows Task Manager, I cannot view its properties, nor does "Open File Location" open the folder that contains it (assuming that the image is C:\Windows\System32\WUDFHost.exe, not something malicious pretending to be it.) I also tried Process Explorer. It could not report the file path of the image either. What gives?

If you have a Vista or Windows 7 machine, could you check & see if you get the same result? Thanks in advance. — Preceding unsigned comment added by 173.49.11.117 (talk) 08:32, 7 January 2012 (UTC)[reply]

Do not worry, it is the Windows User Mode Driver Framework. It's just a container for drivers to live in and get access to the things they need to get to. Normally its path is C:\Windows\System32\WUDFHost.exe. Von Restorff (talk) 08:51, 7 January 2012 (UTC)[reply]
I know that, but why can't Windows Task Manager and Process Explorer report the file location of its image? I tried other services running under the LOCAL SERVICE user; both tools could report the file locations of their images. What worries me is the possibility that the WUDFHost.exe process was not started like the process normally would be but was some malicious software made to look like WUDFHost.exe. --173.49.11.117 (talk) 14:34, 7 January 2012 (UTC)[reply]
My prediction is that it is not infected, but if you are worried here is a simple way to check if I am right or not. Browse to the folder where WUDFHost.exe is located. Make a copy of that file on your desktop. Scan the copy using http://virusscan.jotti.org/ or http://www.virustotal.com/ Von Restorff (talk) 14:58, 7 January 2012 (UTC) p.s. I assume you are using a proper virusscanner and firewall (e.g. ESET Smart Security). If you combine ESET with malwarebytes and hijackthis and adblock plus you are pretty safe. Do you want more safety? Try something like Sandboxie and noscript to make browsing more secure. But if you really care about safety and security the only option is to stop using Windows.[reply]
Thanks, but where the image of the WUDFHost.exe process actually comes from is what I can't figure out. It should be C:\Windows\System32\WUDFHost.exe, but how do I confirm that the process is really running that binary? --173.49.11.117 (talk) 15:54, 7 January 2012 (UTC)[reply]
Hmmm, I am not sure. Do you have a file called WUDFHost.exe in any folder other than C:\Windows\System32\ ??? If not, stop the process and restart it. Process Monitor v2.96 is always nice to have. BTW earlier I forgot to mention that you can probably use the System File Checker to check if your version of WUDFHost.exe is legit. Von Restorff (talk) 16:05, 7 January 2012 (UTC)[reply]
In Task Manager - View - Select Columns - Tick 'Image path name'. Nanonic (talk) 16:36, 7 January 2012 (UTC)[reply]
That works, and it shows the path name as the expected one. Still not sure why right-clicking on it and selecting Properties won't show anything. Thanks. --173.49.11.117 (talk) 17:09, 7 January 2012 (UTC)[reply]

SearchIndexer.exe & SearchProtocolHost.exe keeping computer busy[edit]

This is related to the problem that prompted my question about WUDFHost.exe. It seems that at times SearchIndexer.exe & SearchProtocolHost.exe are keeping my computer's hard drive busy. (Not 100% sure that they are the culprits, but I came to that conclusion from what I saw in Windows' Reliability and Performance Monitor.) When that happens, it takes quite some time for disk activity to die down.

Is that normal? How do you deal with it? (I tried lowering the priorities of the processes but was denied.) I think the problem started recently, although I could be mistaken (I didn't observe/notice it in the past). The only recent changes to the computer are the installation of some Microsoft updates and software that came with a printer.

Thanks in advance. --173.49.11.117 (talk) 15:54, 7 January 2012 (UTC)[reply]

Check your update log. Is there anything in there related to the Windows Search indexer? TBH I personally prefer to disable Windows built-in search because I never need it and it slows my computer down; but I am weird. Von Restorff (talk) 16:00, 7 January 2012 (UTC)[reply]

Oversized display area for hidden systray icons[edit]

For a few days, my computer (running Windows 7) has repeatedly displayed the hidden icons of the systray, when I clicked to have them displayed, in an oversized display area that almost filled half the screen; it's just a white area, at the upper left of which the 3 or so icons are displayed. A few times I've managed to re-shrink that area by clicking into its midst--with each click, the area gets a little smaller, as if one line of text were deleted, then the next, etc. At the end, it's back to its normal size close to the systray. Very odd.

Anyone knows what's going on and/or how to avoid it? Thanks a lot: Thanks for answering (talk) 12:16, 7 January 2012 (UTC)[reply]

Windows sometimes picks up bizarre behaviour. Have you tried shutting down and restarting? That often fixes minor niggles like this. --jjron (talk) 13:09, 7 January 2012 (UTC)[reply]

Really good virtual reality[edit]

Something like that, but in a video game

How good would a picture have to be to create a complete illusion of reality? Is it possible to achieve such quality that no one could see the difference between the 3D image and a real window onto the street?

How big would the monitor resolution need to be? 10000×10000? More?

How many colors should it have? Is true color not enough?

How many polygons would the scene need? -Ewigekrieg (talk) 12:42, 7 January 2012 (UTC)[reply]

Extremely good. No, that is not yet possible. Outside the Fovea centralis our vision sucks. The human eye's acuity is measured as angular resolution rather than in pixels. However, it is possible to loosely translate that resolution into pixels. As an example, through a series of calculations based on field of view variables, Roger Clark of Clark Vision estimates the human eye to have the resolution equivalent of 576 megapixels, or 576 million pixels (1 megapixel equals 1 million pixels). True color is probably enough (at least 16,777,216 color variations). The human eye is popularly believed to be capable of discriminating among as many as ten million colors. I think the question about polygons is impossible to answer. XKCD Von Restorff (talk) 13:17, 7 January 2012 (UTC)[reply]
I once saw a question in Amiga Format (I believe) asking if there ever was going to be computer graphics with more than 24 bits for colour. The reply was that since the human eye cannot distinguish between more than 16 million colours, there isn't exactly much point to having more than 24 bits for colour. JIP | Talk 22:28, 7 January 2012 (UTC)[reply]
Color representations with more than 24 bits are used by professionals for better dynamic range and to avoid posterization during image processing. -- BenRG (talk) 01:55, 8 January 2012 (UTC)[reply]
Polygons are a measure of complexity of a scene. You can get a pretty realistic representation of the Bonneville Salt Flats with just one polygon (okay, texture resolution would be a problem, and it's also not perfectly flat). Other aspects of graphics systems have effects on how many polygons you need to produce a realistic result, such as normal mapping. And it's possible to do computer graphics without polygons at all; raytracers will happily render spheres for you that don't have edges, no matter how close you examine them. They're slow, though.
There are some other problems to overcome. I don't think any monitor exists that can match the gamut of the human eye. Monitors have a far smaller range of brightness than the eye can see, and even if you can fool most forms of depth perception, I don't think you'll ever be able to fool the sense of accommodation without physically placing objects different distances from the eye. Paul (Stansifer) 14:41, 7 January 2012 (UTC)[reply]

I've seen plenty of renders that were virtually indistinguishable from reality, but they no doubt took a great deal of time and effort to create. ¦ Reisio (talk) 20:21, 7 January 2012 (UTC)[reply]

I've asked something like this before. I was photographing the Helsinki Market Square and noticed that my eyes did a far better job at seeing the image than my camera did. I deduced that it wasn't really because of my eyes (they worked similarly to my camera), but my brain, which was constantly analysing what my eyes saw and constructing an image best suited for viewing on the fly. However, when looking at the image taken by my camera, my brain couldn't do this, but instead had to work on what was "really there". JIP | Talk 22:28, 7 January 2012 (UTC)[reply]


According to the naked eye article, the angular resolution of the fovea is about 0.02°. The eye doesn't have a linear resolution (like dots per inch) that you could directly compare to a monitor. Any monitor can match the resolution of the eye if it's far enough away. At a distance D, the monitor's resolution needs to be about 1 / (D tan 0.02°) ≈ 3000 / D. The Galaxy Nexus has a 316-pixel-per-inch screen, which ought to match the resolution of the fovea at a typical reading distance. I have good eyesight and I've seen this phone and I can confirm that it looks pretty much as sharp as print.

If you wanted a cube map that would match the eye's resolution in every direction, each edge of the cube would have to be 2 / tan 0.02° ≈ 6000 pixels, for a total of 6×6000×6000 ≈ 200,000,000 pixels. Of course, the eye doesn't have nearly this many cones. A display that "drew" directly onto the retina with a laser could get away with a much lower resolution.

There are other things besides pixel resolution that contribute to the realism of a scene, such as:

  • Stereoscopic vision, obviously.
  • Dynamic range: can your display reproduce everything from the sun in the sky to a dark corner of a room? They may both be visible at once.
  • Focus: objects outside the viewer's dynamic focal plane are blurred. But if you blur them on a computer display then they remain blurred even when the viewer tries to focus on them.
  • The eye can see colors outside the gamut of any three-color display, though those colors aren't very common in real-world scenes.
  • And, of course, the many difficulties of realistic 3D rendering. But the problems above apply even to recorded images of the real world.

-- BenRG (talk) 01:55, 8 January 2012 (UTC)[reply]

The Wikipedia search facility is seriously broken[edit]

Whatz up? --Epipelagic (talk) 12:53, 7 January 2012 (UTC)[reply]

It's working fine for me. Can you add something more specific about your problem? --jjron (talk) 13:10, 7 January 2012 (UTC)[reply]
Oh it's a false alarm. It's a software "upgrade". For example, if you type something that doesn't exist as the title of an article, such as "76e373dfs", then previously you got a search screen, which you will no doubt remember, where you could set the particular Wikipedia spaces you wanted to search in. Now you get a gray bar which presents this possibility as one among several. --Epipelagic (talk) 15:12, 7 January 2012 (UTC)[reply]
Oh yeah. I had seen that earlier on, but hadn't actually noticed it, if you get what I mean. --jjron (talk) 16:37, 7 January 2012 (UTC)[reply]

Synthesizer voice[edit]

What is a synthesizer "voice", more specifically? Electron9 (talk) 15:46, 7 January 2012 (UTC)[reply]

Our MIDI article says: "Early analog synthesizers could only play one note or "voice" at a time". Or do you mean a speech synthesizer? Von Restorff (talk) 15:50, 7 January 2012 (UTC)[reply]
Nah, I mean the ones used in the music synthesizer context. Like Ensoniq chips that have 32 "voices", etc. Electron9 (talk) 16:29, 7 January 2012 (UTC)[reply]
They can play 32 tones at the same time. A synthesizer with eight voices can play eight tones at the same time. For more info read this. Von Restorff (talk) 16:33, 7 January 2012 (UTC)[reply]
A synth with only one voice is called monophonic; if it has more than one voice it is polyphonic. 32-voice polyphony is how you would describe a synth with 32 voices. The Polyphony article describes the broader concept as it applies to music in general, rather than to instruments. Vespine (talk) 23:02, 8 January 2012 (UTC)[reply]

Google search[edit]

When I perform a Google search, I expect to get a couple of (irrelevant) sponsored links at the top, with a yellow background. Fine. A friend of mine has a new Windows 7 laptop, and Firefox, and whenever she performs a Google search, whatever the subject, she has to scroll through a whole page of sponsored ads before she gets to the real results. And because there is a whole page of them, I mean like ten or twelve ads, she not surprisingly thinks she is getting irrelevant results. I've never come across this before, Google help seems to have nothing on this, and I cannot find a relevant setting among the search options. Does anyone know what is going on? Thanks.--Shantavira|feed me 17:56, 7 January 2012 (UTC)[reply]

Sounds like malware. Start by running Malwarebytes. If possible scan the computer with a decent virus scanner (like ESET Smart Security; the trial version is free!). Install AdBlock Plus. Von Restorff (talk) 18:09, 7 January 2012 (UTC)[reply]
Also, are you sure she really is doing a Google search directly ? There are other "search engines" that do a Google search and add their own ads on top. The AOL search did this, for example. StuRat (talk) 23:24, 7 January 2012 (UTC)[reply]
Bing does this too. Really embarrassing when Google figured it out. Von Restorff (talk) 01:34, 8 January 2012 (UTC)[reply]

Question about destructors in C++[edit]

Hi everyone!

I'm learning how to program in C++ and I had a question about how exactly to properly write destructors.

Lets say I have a very simple class that looks something like this:

class SomeClass {
    private:
        vector<int> some_list;
};

That is, the class has a vector in it because it needs to do stuff with lists. Now, if I don't write a destructor for this class and instead just use the implicit or default destructor (or whatever it's called), it's going to automatically destroy that vector, right? Like, I don't have to worry about it?

But in the case I'm facing right now, I have to write my own explicit destructor, because it needs to do some other clean-up in addition to the class-variables. But in that case, how do I destroy some_list? It wasn't dynamically allocated, so I can't type "delete some_list" (right?). Is it automatically destroyed, even if I write my own destructor?

Also, another quick question while we're on the subject of vectors: if I delete a vector (either by typing "delete some_vector" or it being destroyed in an implicit destructor or whatever), will all the objects in that vector be destroyed also? Or do I have to destroy those manually?

As I said, I'm very, very new at this so I appreciate any help I can get, especially when it comes to memory management. Thanks! 80.216.1.161 (talk) 20:06, 7 January 2012 (UTC)[reply]

Even if you write your own destructor ~SomeClass(), the vector will still be destroyed correctly by the compiler when it goes out of scope (i.e. when the SomeClass object is destroyed) - you don't need to destroy it manually.
The vector's destructor will automatically call the destructor of each of its elements when the vector is destroyed, so again you don't have to do anything. I suggest that you prove this to yourself by writing some simple test code to demonstrate. Instead of vector<int> use vector<SomeOtherClass> where SomeOtherClass has a destructor that does something you can see, eg cout << "In ~SomeOtherClass\n". Execute the code and when the SomeClass object goes out of scope you should see the ~SomeOtherClass output occur for each element of the vector, without your having to explicitly destroy anything.
Note that there is one time where you might have to explicitly delete the SomeClass object, and that is if you created it with new, rather than as an automatic/stack variable. E.g. auto/stack variable:
class SomeClass {
    private:
        vector<int> some_list;
};

void fn()
{
    SomeClass sc;

    // do stuff

} // the compiler will call sc.~SomeClass() here, which in turn will destroy some_list and its contents

void fn2()
{
    SomeClass *psc = new SomeClass;

    // do stuff

    delete psc; // the compiler will call psc->~SomeClass() here, which in turn will destroy some_list and its contents
}
Mitch Ames (talk) 23:40, 7 January 2012 (UTC)[reply]

How to raise an IOI kid?[edit]

Dear Math Wikipedians:

I myself am a failed IOI contender: I was only able to achieve a bronze medal at the national olympiad level and was therefore unable to make it onto my country's IOI team. Now I am in university and am no longer eligible to participate in olympiads. While preparing for these competitions I heard a lot about how people like Reid Barton did math and coded when they were still wearing diapers. I am wondering what I can do, as a future parent, to make sure that my kid grows up to be just like Reid Barton, doing math and programming in diapers and winning the IMO and IOI at 8 or 9. I have looked up the corresponding Wikipedia pages, but other than providing information about the contests themselves, a list of past contests, and some notable winners and achievements, they did not have a section teaching people how to raise children capable of achieving those feats.

I hope that my future children will be able to walk further than I have in these endeavours.

Thanks for all your help and suggestions.

70.29.24.167 (talk) 20:18, 7 January 2012 (UTC)[reply]

While I'd consider it fairly noble to endeavour to know how to make your child excel at math, raising one to win at some competition that you didn't quite win completely at seems rather absurd. If he doesn't win will he have to deal with your disappointment and continued obsession? Did even your father put you through that? Have you considered how much you enjoyed your time as a child when you weren't training to win some math competition? 2¢ ¦ Reisio (talk) 20:28, 7 January 2012 (UTC)[reply]
Choosing the direction for your kid to excel in is very rarely successful. Let them be generalists for as long as they want, then do everything they want from you (no more) to help them succeed in a field they choose to love. HiLo48 (talk) 22:41, 7 January 2012 (UTC)[reply]
Talk to a mental health professional about your problem. This is your problem, it has nothing to do with your kid. You described yourself as a "failed IOI contender" (does that define who you are?) and you think a bronze medal means defeat. Sorry for being honest; but living your life through your kid, making them succeed where you failed, is child abuse. DON'T DO IT. If you do it will backfire in an ironic way; especially during puberty. The sad part is that you failed at something that is more important and easier to achieve than winning that shitty competition no one gives a f### about; you failed to become a person who is reasonably content with his/her life. As George Carlin said: "I think every day all children should have three hours of day-dreaming. Just day-dreaming. You can use a little of it yourself. just sit at the window and stare at the clouds, it's good for you. If you wanna know how you can help your children: Leave them the fuck alone!". Von Restorff (talk) 02:07, 8 January 2012 (UTC) p.s. The question you should be asking is: how do I give my child the best chance at being happy and healthy?[reply]
Check the Abecedarian Early Intervention Project. However, the important thing is to spend TIME with your child, be NICE and happy, and PLAY the games the child wants. Give a few stimulating toys that require the child to think, like Lego, Meccano or Capsela, and avoid static things like model cars and dolls. Absolutely don't give everything they point at. Things like knowing your own value and respecting others (kids) are also important. Electron9 (talk) 04:43, 8 January 2012 (UTC)[reply]
There's László Polgár, who intended for all of his daughters to become chess champions, all of whom did. 69.243.220.115 (talk) 13:59, 8 January 2012 (UTC)[reply]
And there's Joe Jackson, who intended for all of his children to become kings and queens of pop, some of them did, but they all had miserable childhoods and were severely traumatized by the experience. Von Restorff (talk) 14:05, 8 January 2012 (UTC) p.s. Nota bene: László was not trying to live his life through his children; he is an expert on chess theory himself and he taught his kids a couple of tricks so that they could easily defeat other kids their age.[reply]
I see your NB, and would agree that one's motivation would probably have an impact on the even-handedness of one's application of the principles of concerted cultivation, which themselves do not inherently lead to traumatic childhoods. 69.243.220.115 (talk) 14:23, 8 January 2012 (UTC)[reply]
Indeed. If you have smart kids you should not force your children to get intellectual stimulation (that is unnecessary and counterproductive); but you should provide it. My parents put all the books for children on the bottom shelves and bought lots of LEGO; this is a good strategy. At some point they even told my sister she was not allowed to read books for a while because it was more important to go play outside and have a normal childhood with human friends instead of characters in a book. Von Restorff (talk) 14:29, 8 January 2012 (UTC)[reply]
"put all the books for children on the bottom shelves and bought lots of LEGO" ;-) I agree with the point of not forcing agendas upon children. However the exceptions would be treating others right, getting sleep on time, eating vegetables besides sweets etc. Children are people, not a personal accessory. Electron9 (talk) 16:08, 8 January 2012 (UTC)[reply]
I do not remember my parents ever forcing any agenda upon me, not even about things like how I should treat others; they encouraged me to think about how I would like to be treated and explained that other people would probably want the same thing. They encouraged me to debate the rules they made for me (e.g. about bedtime), because that made me understand their POV, which made forcing me to follow the rule unnecessary; I understood it, agreed to it and it became my own rule for myself. If you explain your POV to your child and lead by example, you can allow your child to form his/her own opinion and you do not have to force them to adopt your views. To make a long story short: I never went to Disneyland, I never got a game console on my birthday and I ate a lot less sugar than most kids. But they did give me my own computer at a very young age, I was allowed to join lots of clubs and do all kinds of sports and activities, I started my own book collection at the age of 6, and I had a lot more freedom than other kids my age (e.g. my own bike + house key + permission to play with friends after school but before dinner). I didn't mind not visiting Disneyland because our holidays were much better than that, my parents bought expensive sweets for us at an alternative store that tasted much better, and my 286 (and later my 386 and 486) had far more capabilities than the game consoles of those days. Let's be honest, learning BASIC is more fun than playing with a Game Boy. I discovered that myself. Von Restorff (talk) 23:03, 8 January 2012 (UTC) p.s. In the mid-1950s, Joe Jackson started a music career with his brother Luther, playing guitar in a band called The Ford Falcons. The group split up a couple of years later after failing to get a recording deal. Joseph returned full time to his job at U.S. Steel.[reply]


Hi everyone, I am the OP. Thank you all for your input; I think I know what I should do in the future now. Thanks again. 128.100.113.126 (talk) 18:15, 9 January 2012 (UTC)[reply]
Resolved

problem with networking[edit]

I have a problem with a wired/wireless network. The desktop computers are wired. Yesterday one of them locked up and, to make a long story short, once it got back to normal the other computers could no longer access it on the network, although it could still access them. It shows up as \\name (where name is its name) on the network, but it can't be accessed. I checked all of the sharing settings that I know about and they are correct. All computers and the router have been rebooted several times.

I'm thinking about putting in a wireless network adapter to fix the problem, but it might have the same problem. Any ideas on how to fix this? Bubba73 You talkin' to me? 20:31, 7 January 2012 (UTC)[reply]
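One low-level check worth doing before buying hardware (not from the thread above; just a hedged sketch) is whether the problem machine still accepts TCP connections on the SMB file-sharing port at all. If it doesn't, the likely culprit is a firewall rule or a stopped Server service rather than the sharing settings. Run this from one of the *other* machines; the host name is a placeholder:

```python
import socket

def smb_reachable(host, port=445, timeout=2.0):
    """Attempt a plain TCP connection to the SMB port.

    True means the port accepts connections (sharing could still be
    misconfigured at a higher level); False suggests a firewall or a
    stopped file-sharing service is blocking access entirely."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace "name" with the problem desktop's network name or IP:
# smb_reachable("name")
```

If this returns False while an ordinary ping succeeds, a firewall setting (possibly reset when the machine locked up) is a reasonable first suspect; a wireless adapter would not change that.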

Identifying people on Twitter[edit]

Hi, I don't ever use Twitter ... not really interested in it ... but nowadays one often sees the "tweets" of well-known people quoted in other media. Hence I was wondering how, on Twitter, you know that someone is actually well-known person X, rather than random person Y who either happens to really have the same name, or deliberately intends to impersonate someone. For example, if I tried to set up an account called "RooneyManUtd", would anyone stop me, or actually check who I was? 109.151.39.98 (talk) 21:42, 7 January 2012 (UTC)[reply]

Twitter has a "verified" badge (which it doesn't offer to the public; presumably celebrities' agents know how to get Twitter to add it). Without that, there's no way to know, and every reason to doubt. -- Bonnie Prince Charlie 21:57, 7 January 2012 (UTC)[reply]

They often post links to their profiles on their official pages elsewhere. I've also seen them take pictures of themselves identifying their accounts in the picture, thus proving they are that person. As for impersonators, usually they aren't challenged; however, I know of at least two cases where an impersonator account was usurped by the actual person, presumably with the help of the admins or whoever is in charge of running the site. 82.45.62.107 (talk) 01:05, 8 January 2012 (UTC)[reply]

........and who cares about the so-called celebs anyway? Only their mums, probably. I certainly don't, and would not touch Twitter with a barge pole. — Preceding unsigned comment added by 85.211.148.143 (talk) 07:23, 8 January 2012 (UTC)[reply]

The other way to know is if one celeb (especially a verified celeb) is either a follower of, or tweets directly at, another celeb. Also, often when a new celeb in a related field joins Twitter, their real-life celeb friends will tweet their followers to follow so-and-so, with a link to the account, so you know it's genuine. Another thing is that celebs will often only follow, say, twenty or thirty other people on their official account, even though they may have a million followers, so you can be pretty sure the ones they're following are other celebs they actually know. Re setting up a fake account: no, there's nothing stopping it. However, you are encouraged either to make the username directly say it's fake, e.g. FakeWayneRooney, or to state in the little bio that it is a fake account. Alternatively you can just make the tweets obviously not be something real. If you don't do these things and, say, usurp a real name pretending to be someone else, Twitter can (and does) block the account, or reallocate it to the real person if they request it. --jjron (talk) 10:58, 8 January 2012 (UTC)[reply]
And in fact, making (obviously) fake twitter accounts of famous people is a relatively common thing to do. For a while, a fake Rahm Emanuel would make obviously fake, profanity-laden tweets in the mayor's name [1]. At one point, the fake had three times the number of followers as the real deal. Buddy431 (talk) 21:29, 9 January 2012 (UTC)[reply]

Do Android developers dream of owning lots of physical Android phones?[edit]

How many physical phones do they need? Zero (using an emulator on a PC)? One (risking that the app only runs on that one phone)? A handful of popular phones (just to be sure)? 88.9.214.197 (talk) 21:54, 7 January 2012 (UTC)[reply]

I'd assume they dream of developing Electric Sheep. AndyTheGrump (talk) 21:58, 7 January 2012 (UTC)[reply]
Foo bar lambs! fredgandt 04:34, 8 January 2012 (UTC)[reply]
Depending on the kind of application, testing with emulators may be sufficient.
However, the emulator has very poor OpenGL support (no OpenGL support at all on the Mac), so if you're making games you're going to need real devices, especially if you're making a game that really pushes the limits of the phone or tablet's graphics hardware.
People talk about "Android fragmentation", but unless you're doing something really fancy, what works on one phone will work OK on another phone, and you can release patches easily. But if you have to be absolutely sure, you can use a service like Device Anywhere or Perfecto Mobile which offer "cloud-based" testing of a large number of different phones. APL (talk) 00:37, 8 January 2012 (UTC)[reply]
Another consideration is Accelerated stress testing. In the trivial case where a software Heisenbug occurs "1 time out of 1,000,000 trials," the obvious solution is to run your software on one million test-devices. (Real software bugs aren't so uniformly random - but nasty issues like race conditions are sometimes better analyzed using statistical approaches). Prioritization of bug-fixes is also better accomplished if you have statistics for failure types and frequencies. It's impractical for a small Android developer to have a test rack of thousands of devices, but it isn't impossible. Even having two devices is a good idea. If the test regimen is well designed, running two devices "doubles" the code coverage / fault coverage. In practice, this isn't exactly a linear scale; for a lot of reasons, doubling the test-units doesn't double the test coverage; but it should be better than running on one test device.
I would find it difficult to develop application software of any type if my only test machine were forced to double as my development environment. Nimur (talk) 19:25, 9 January 2012 (UTC)[reply]
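The statistical point above can be made concrete. Under the idealised assumption that every test run fails independently with the same fixed probability (which, as noted, real Heisenbugs don't quite obey), the chance of observing the bug at least once follows directly, and it shows exactly why doubling the devices does not double the effective coverage. This is just a sketch of the arithmetic, nothing Android-specific:

```python
def p_hit(p_fail, devices, runs_per_device):
    """Probability of observing a bug at least once, assuming each run
    fails independently with probability p_fail."""
    return 1.0 - (1.0 - p_fail) ** (devices * runs_per_device)

# A 1-in-1,000,000 Heisenbug, one million runs per device:
one_device = p_hit(1e-6, 1, 10**6)   # roughly 0.63
two_devices = p_hit(1e-6, 2, 10**6)  # roughly 0.86, not 2 x 0.63
```

So a second device raises the detection probability from about 63% to about 86%: a real gain, but a sub-linear one, matching the "doesn't double the test coverage" caveat above.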

Typing password twice in Windows 7[edit]

Just wondering if anyones experienced any problems with Windows 7 when locking the screen where you have to type the password in twice to unlock it. --212.120.242.42 (talk) 23:47, 7 January 2012 (UTC)[reply]

I intermittently encounter the same problem when my laptop is resuming from sleep mode. It concerned me at first, but multiple virus scans came up clean. I treat it as a nuisance. A search turned up this page where Microsoft acknowledges the problem. 24.254.222.77 (talk) 00:21, 8 January 2012 (UTC)[reply]
That bug description is over a year old, and apparently applies only to people who installed one hotfix but not another. I'd be somewhat surprised if it is the original poster's problem. As you implied, one possible reason for this behavior is that one of the two screens is a fake login screen designed to capture your credentials. If you're ever worried about this, you can check by pressing Ctrl+Alt+Delete: if it's a real login screen this will do nothing, but if it's a fake login screen it will bring up a different screen showing which user is logged in. In Windows XP (and Vista?) this may not work unless you disable fast user switching. -- BenRG (talk) 00:40, 8 January 2012 (UTC)[reply]