Wikipedia:Reference desk/Archives/Computing/2009 May 12

= May 12 =

Music Transcriber software
Is there any software out there that will allow me to manually enter in notes for a sheet music and transcribe the entered notes into any key or any octave I want? Acceptable (talk) 02:58, 12 May 2009 (UTC)
 * Sibelius certainly does this, and no doubt other score-writing packages do so as well. (How do I do a wikilink to a category?) AndrewWTaylor (talk) 11:11, 12 May 2009 (UTC)
 * Put a colon after the second square bracket: Category:Scorewriters — Matt Eason (Talk • Contribs) 12:42, 12 May 2009 (UTC)
 * Easy when you know how - thanks. AndrewWTaylor (talk) 14:17, 12 May 2009 (UTC)

Ah thanks a bunch. Acceptable (talk) 03:46, 13 May 2009 (UTC)

Dotcom Bubble mk II
How do Twitter, Facebook, and their ilk make money? I overheard someone saying something like "Twitter must be worth a fortune"? srsly?? They don't charge for their service and are barely providing anything of actual value to society. Is this Phase II of the dot-com bubble? Incidentally, Twitter has zero information about their revenue or business model, to the extent it's possible to have a business model based on free blogging. —Preceding unsigned comment added by 98.28.115.135 (talk) 07:21, 12 May 2009 (UTC)
 * Adverts —Preceding unsigned comment added by 82.44.54.169 (talk) 07:35, 12 May 2009 (UTC)


 * First off, both of you should try to sign your posts with four ~'s. And now for the actual answer: As far as I know, they don't. I recall reading that neither Facebook nor Twitter have ever turned a profit. Thanks,  gENIUS 101  21:21, 12 May 2009 (UTC)


 * So it is overvalued then? Even if they make some money from advertising, I don't see how that would even come close to paying for their costs. 98.28.115.135 (talk) 02:31, 13 May 2009 (UTC)
 * It can't be overvalued if it's not for sale. Twitter mostly runs on private funding, from what I understand, though they're looking at deals to bring in revenue from corporations. They've been fairly mum on what that entails. — The Hand That Feeds You :Bite 13:27, 13 May 2009 (UTC)


 * It comes from venture capital too. Also, the $200 million that Microsoft paid Facebook for a small percentage could have helped, don't you think? (heavy sarcasm here) Sandman30s (talk) 07:05, 13 May 2009 (UTC)


 * You guys are overlooking the obvious answers: Facebook is funded entirely by the NSA (google it) and Twitter is run out of the back of some guy's van (seriously, how hard is it to ship 160 byte messages around?)  On a more serious note, both of these companies demonstrate a post-dotcom agenda of "It's the visibility, stupid".  If Twitter or Facebook went out tomorrow and said "hey microsoft, yahoo, google, what will you give me for my domain and all my customers?" the price would be astoundingly high considering the estimated ROI would be "never".  Instead, it's the fact that those services simply appear in front of millions of eyeballs a day that makes them incredibly valuable.  Just look at the sale of Myspace (except nevermind that it's been relegated to the social networking dustbin).  --66.195.232.121 (talk) 17:32, 13 May 2009 (UTC)

Sorry for continuing with this, but I still don't understand the economics of the situation. The "venture capital" and "money from Microsoft" answers above would seem to be begging the question -- why do those companies invest? They must perceive Facebook etc to be very valuable. What is the value? Is it really the eyeballs? This "mind-share" idea was a major part of the dot-com bubble, if I recall. Why would Microsoft, Google, or whoever shell out millions for one of these services and all their subscribers (not "customers", since the users don't apparently pay anything)? Is it for the right to mine the data or send "special offers" to their email accounts? In the dot-com bubble, everyone's business model seemed to be based on this kind of "potential revenue" concept. "Build mind-share now; we'll figure out how to make money with it later." Is that what's going on here? Please help me understand.

By the way, if I owned Facebook, I would start charging a yearly subscription fee, or implement a micropayment scheme. But I know very little about finance and marketing (obviously, since I'm asking all these questions). Surely there's a good reason that the people who control Facebook and other social networking sites don't do this. What is it? —Preceding unsigned comment added by 98.28.115.135 (talk) 06:05, 14 May 2009 (UTC)
 * Yes, after the Microsoft purchase, Facebook was valued at over $10 billion and the owner said that he thought it was worth much more. It is worth that much simply because of the number of people that use it and the potential in advertising revenue. Any marketing company would tell you that the more people that see the product, the more effective the marketing. The world's most powerful revenue streams are from advertising - look at the billions of dollars of television rights being sold. Imagine your product being shown on the front page of Facebook - what that would be worth to you, and how much you would have to pay to advertise there!? Sandman30s (talk) 14:55, 14 May 2009 (UTC)
 * But the advertising business is speculative, and has all the makings of a bubble. Money is poured into advertisements on the presupposition that there is a return on that investment.  Now that the internet medium has become commonly used for advertisements, metrics and data-collection are available which may indicate that the advertisements are not as effective as originally believed.  Look at click-through rates.  They are disturbingly low, and I have a hard time believing that the costs are justifiable - even on a high-volume, high-traffic site like Facebook.  Nimur (talk) 14:59, 14 May 2009 (UTC)


 * I recently signed up to Twitter and was surprised at the lack of adverts so I did some rummaging through their "about" pages to find out where their money was coming from and the answer was "it isn't". They don't make any money and spend large amounts of it. I think their plan is to build up a massive customer base (so far so good!) and then find a way to make money out of it. I'm not sure how they intend to do that. Simple adverts probably won't cover much (a lot of twitter users access it through 3rd party clients, so they won't see any adverts put on twitter.com), they'll have to come up with something more inventive, probably unique to their service. I'm not convinced they'll manage it... --Tango (talk) 15:12, 14 May 2009 (UTC)
 * I don't know, some websites aren't looking to make a profit, just to keep existing and provide a service. I think I'm on one right now. —Preceding unsigned comment added by 82.44.54.169 (talk) 16:09, 14 May 2009 (UTC)
 * Wikipedia is run by a charity that is funded by donations. Twitter isn't funded by donations, it is funded by investors. People only invest if they expect to get a return on that investment, i.e. the company will turn a profit. --Tango (talk) 16:26, 14 May 2009 (UTC)
 * Just giving Wikipedia as an example. I can think of loads of websites that seem to exist only to provide a service to their userbase with no real profit goal in mind, yet they survive and thrive.
 * Hosting a website costs money, that money has to come from somewhere. Either it comes from donations, or it comes from investors that expect a return. I can't see people donating to keep Twitter running, it's just not that kind of site. --Tango (talk) 21:20, 14 May 2009 (UTC)

''It is worth that much simply because of the number of people that use it and the potential in advertising revenue. Any marketing company would tell you that the more people that see the product, the more effective the marketing. The world's most powerful revenue streams are from advertising - look at the billions of dollars of television rights being sold.''

I guess this is the concept I'm struggling with. Business people (who presumably know more than me about this stuff) seem to regard this "potential revenue" very favorably, despite no provable way to turn it into real revenue, at least as far as I can see. The analogy with television isn't very good, because television viewers are passively focused on TV over very long periods of time, and very subtle forms of advertisement (such as product placement, or themed shows) can be effective. Moreover, TV commercials have become an institution in their own right. Many web users block ads, and the remainder probably just ignore them since they are actively mousing around to things they are interested in on the page. Is ownership of Facebook, Twitter, and the like maybe just a "branding" thing (as in, "everyone uses Facebook...Company X owns Facebook and has their logo on it...therefore Company X has huge brand awareness")? Or maybe I'm reading too much into this, and websites really are worth whatever people think they're worth? (Is there an economic term for this sort of "virtual value"?) And didn't this idea crash and burn in the late 90s? 98.28.115.135 (talk) 18:11, 14 May 2009 (UTC)
 * OK let me answer this based on the dot-com bubble link you provided. This link speaks about several factors in its third paragraph: "A combination of rapidly increasing stock prices, individual speculation in stocks, and widely available venture capital created an exuberant environment in which many of these businesses dismissed standard business models, focusing on increasing market share at the expense of the bottom line."


 * Based on this, they state that there was widely available venture capital at the time. Stock markets were bullish and America was in an economic boom during the 90's - and there was this phenomenon called the internet that was growing exponentially. Investors poured loads of "other" money into dot-coms. This created an irrational over-exuberance, because most of the dot-coms had shaky business models without foundations (bottom lines). This created speculation. An example of a dot-com that survived was Amazon, because it had a tangible business model of selling physical stock (books) as opposed to virtual stock (stock value on the stock markets). Many other dot-coms (50%) went bust (to the tune of $5 trillion) because when the stock market crashed, they lost huge amounts of stock value, and lots of them had already burned their venture capital. Nobody wants to invest more in a crash, with the exception of purchasing cash-strapped companies with tangible models and solid contracts. And so that was the end of the dot-com phenomenon.


 * The new decade ushered in a new breed of internet companies - those based on physical sales with an underlying structure that could survive another crash. As the stock markets entered another bull run from about 2004 onwards, along came the social networking heavyweights. Once again, people had venture capital to invest in Myspace, Facebook, etc. Even with the recent financial crisis and stock market crashes, these sites have managed to survive. Whether it's because of an excess of venture capital or because of the existence of powerful advertising streams (Google makes the world go round), I don't know. Whether social networking sites are here to stay or not is speculation. The fact that Microsoft spent so much on Facebook, and wanted to buy Yahoo for gazillions of dollars, and that Oracle is buying Sun (and in turn Java), really says something: the big boys of the tech world want to have their presence felt online, and that is a trend that these companies have paid big money to actuaries and business analysts to OK. Sandman30s (talk) 19:24, 14 May 2009 (UTC)

Simplify bitwise expressions?
Is there a way to simplify bitwise expressions? For example, in CRC32 there is an expression of the form h = (h >> 1) ^ ((h & 1) ? POLYNOMIAL : 0). Most compilers don't implement this very efficiently, but with some bitwise operator tricks you can implement it in 4 x86 assembly instructions. Is there some systematic way of doing things like this? --wj32 t/c 11:26, 12 May 2009 (UTC)


 * Yes - you need to learn about Karnaugh maps (or "K-maps" for short). The problem is that in your example here, 'h' is not a boolean but an unsigned integer or something...which makes it harder.  However, understanding what the expression is doing and writing that down in English will usually help you figure out another way to handle it.  In this case, we're downshifting 'h' and XOR'ing it with either POLYNOMIAL or zero.  Since XOR'ing with zero does nothing, we're saying "downshift h ; if the bit that fell off the bottom end is a 1 then XOR h with POLYNOMIAL" - what you can do to optimize this depends on the machine-code of whatever processor you are using.  On some CPU architectures, there is a down-shift operator that puts the bit that falls off the end into the carry flag or something - in which case you can downshift and do a conditional branch around the XOR instruction...but what is optimal depends on the architecture of the CPU you are using - and also on whether you are optimizing for minimal code size or maximum performance - and whether your input variables are already in registers...lots of things!  SteveBaker (talk) 12:48, 12 May 2009 (UTC)


 * I assume the four instructions you're talking about are

shr eax, 1
sbb ebx, ebx
and ebx, POLYNOMIAL
xor eax, ebx
 * I think the major x86 optimizing compilers know the SBB/AND trick, but they're not very good at making use of implicitly set flags (the carry flag in this case). They generate carry-setting instructions followed by SBB and AND, but only as part of specific three-instruction idioms. Probably that's because the optimization phase that could make use of the carry flag runs before the code-generation phase that could notice that the carry flag is available. But this is clearly a soluble problem because they solve it for / and %. X86 has a single instruction that produces both a quotient and a remainder and every optimizing compiler I've used is smart enough to generate a single division instruction for code that uses both. I don't know how they do that, but probably it's by turning x = b / c; y = b % c; into (x,dummy1) = divmod(b,c); (dummy2,y) = divmod(b,c); and letting common subexpression elimination handle the rest. Whatever trick they use, it would surely also work for >>1 and &1. So why don't they do that? My best guess is because it would seriously slow compilation to generate those compound assignments for every instruction that sets flag registers. Every addition and subtraction would become a compound assignment to five registers, which would cause major intermediate-language code bloat. So, yes, I think standard compiler tricks could systematically produce better bit-twiddling code, but the compiler writers aren't willing to pay the price in compilation speed.


 * Or maybe they just don't care. Look at the output of Microsoft C 15.0:

mov ebx, eax
and al, 1
movzx eax, al
neg eax
sbb eax, eax
and eax, POLYNOMIAL
shr ebx, 1
xor eax, ebx
 * So it converts the value to a byte before doing the &, then converts it back to a word, then does the NEG-SBB-AND idiom even though a simple NEG-AND would have worked. GCC 3.4.4 emits a conditional jump that will be mispredicted 50% of the time (though it can't be expected to know that). At least it's smart enough to change that to a CMOV when I specify -march=pentiumpro. GCC 4.3.2 was the only compiler that produced the kind of code I expected them all to produce, namely

mov ebx, eax
and eax, 1
neg eax
and eax, POLYNOMIAL
shr ebx, 1
xor eax, ebx
 * I'm curious to see what Intel's compiler does with this code, but I don't have it installed and I'm not too keen to download a 586 MB tar.gz file just to try a trivial two-second example.


 * If you were wondering about searching for provably optimal code sequences, I'm afraid I have no idea. -- BenRG (talk) 14:28, 12 May 2009 (UTC)


 * See Superoptimization for optimal code for things like this. It has to be fairly important to start running that sort of optimization though. Optimizing a really critical section is one reason many compilers allow you to put a little bit of assembly inline. Dmcq (talk) 15:41, 12 May 2009 (UTC)


 * Thanks for all your responses. I had come up with these five instructions originally (similar to what GCC produces):

; eax = edi = h
and	eax, 0xfffffffe	; if h & 1 then eax = h - 1, otherwise eax = h
sub	eax, edi	; if h & 1 then eax = h - 1 - h = -1, otherwise eax = h - h = 0
and	eax, POLYNOMIAL	; if h & 1 then eax = POLYNOMIAL, otherwise eax = 0
shr	edi, 1
xor	edi, eax
 * --wj32 t/c 05:57, 13 May 2009 (UTC)


 * Definitely check out the wonderful book Hacker's Delight for lots of this kind of thing. --Sean 17:41, 14 May 2009 (UTC)

Tree structure
In a tree structure, the disadvantages lie in what is obscured or left out: relations that are neither hierarchical nor transitive, or that overlap. However, are there structures that specialize in every single kind of relation - overlapping, circular, etc.? What are they?--Mr.K. (talk) 17:06, 12 May 2009 (UTC)


 * A graph is a generalization of a tree that removes hierarchy and the parent-child relationship. There are many ways to implement graphs, such as using linked lists with multiple links per node. -- k a i n a w ™ 19:42, 12 May 2009 (UTC)

Collision detection in games
What kind of things they do for that in modern games? I've had a look on BSP, BHV and BIH but they don't answer my question. --194.197.235.70 (talk) 20:42, 12 May 2009 (UTC)


 * Collision of spheres (or circles in 2D) is a very simple calculation, based only on the centers of the two spheres and their radii. Therefore, games tend to simplify things to spheres.  When it is acceptable, things may become cylinders.  In this case, the collision is based on the centers and radii as before, but with height included: effectively a 2D view from overhead.  When things collide in the 2D view, height is checked to see if one object is above the other.  Because games need to make many calculations quickly, it is rare for collision detection to go further.  That is why it is not uncommon to see objects partially intersect other objects before the collision detection kicks in. -- k a i n a w ™ 20:49, 12 May 2009 (UTC)


 * This "Advanced Collision Detection Techniques" article on Gamasutra from 2000 may be informative. Tempshill (talk) 02:45, 13 May 2009 (UTC)


 * It's an expensive calculation (in general) and the idea is to use the simplest possible shapes - consistent with the needs of the game mechanics. The critical thing here is that you very rarely need an exact test.  Spheres, boxes, axis-aligned boxes, frustums and 'capsules' (cylinders with hemispherical end-caps) are very popular - but some things just have to be done with detailed polygon meshes - which is a pain to program correctly and hideously expensive at runtime.


 * In most cases, we use a simpler shape (typically a sphere) to do a rough check - if you penetrate the sphere, then we go on to do the full calculation with a more elaborate shape...but in cases where collisions are very likely (e.g. the wheels of a car being tested against the road in a driving sim), that might just add more time penalty. When there are a vast number of objects and even comparing sphere-to-sphere is too costly, we'll use a 'broadphase' check where the objects are dumped into a quadtree or octree structure first so that we only have to check things that are in the same or adjacent cells...when they are close enough, you dump potential collision candidate pairs into a queue and perform 'narrow-phase' checks on more detailed geometry.


 * Very often, this odious problem gets handled by the physics software middleware - so you just let Havok or whatever handle it...and they are welcome to that because collision detection is a tedious and generally thankless programming task! You might just have your artists model the collision volumes on an object-by-object basis rather than attempting a one-size-fits-all solution.  There are tricks using BSP trees that can be used where a full mesh collision is needed - but BSPs are a pain to maintain when objects are moving or changing shape - so you have to be pretty desperate to resort to using them.


 * However, no two games are the same - and collision detection is one of those places where "domain-specific-knowledge" can get you huge wins.


 * SteveBaker (talk) 03:35, 13 May 2009 (UTC)
 * Aah gone are the days of 8-bit sprites and simple XOR algorithms... oh wherefore art thou Commodore 64? Sandman30s (talk) 07:02, 13 May 2009 (UTC)
 * They aren't entirely gone - iPhone and flash games still do that kind of stuff. SteveBaker (talk) 20:28, 13 May 2009 (UTC)

Thanks for the replies. --194.197.235.70 (talk) 19:24, 14 May 2009 (UTC)

Problem transferring files to CD from digital camera card
Hello! When I transfer a photo (.jpg) from my digital camera to a CD to archive it and clear the camera card, a small amount of the images strangely get corrupted. I've uploaded some of the files from the CD-corrupted copy (the file originally looked okay on the camera card) to Commons as examples:

(I've cropped two of them to preserve the anonymity of the subjects.)



Why does this happen? Can it be fixed with some kind of photo-editing software (I no longer have the original file on the camera card to just re-copy it)? What steps can I take to prevent this in the future when I move photo files from a camera card to a CD? Thank you very much.--el Aprel (facta-facienda) 21:39, 12 May 2009 (UTC)


 * This looks like a software bug in the implementation of the file system, or the JPEG compressor, on the digital camera (maybe there was a CPU brownout during file-transfer due to low battery on the camera). In some of the pictures it looks like you dropped a color channel (or something); in others, there is irrecoverable data loss.  It's not likely that these photos can be recovered.  Nimur (talk) 21:57, 12 May 2009 (UTC)
 * He said the images look fine before they are transferred to the CD, so it's not the camera corrupting the JPEGs. I've seen something similar to this when downloading pictures from the internet and the download is interrupted halfway through; perhaps your software isn't burning the images to the disk properly.
 * PS: I moved the images over to the right of the page as they were interfering with the text formatting.


 * I would check each part of the process. Somewhere along the line, it looks like data is being dropped.  Maybe the camera isn't recording it properly on the card.  Maybe the card has some dead spots (unreadable or unwritable) on it.  Maybe the USB cable is loose.  Maybe your camera or PC has some fault handling JPEGs.  Maybe the CD burning software is suffering from buffer underruns.  Try different cameras. Try different cards in the camera. Try different PCs.  Try different burning software.  And so on.  If everything really is fine until you burn to CD, then check the CD burning software first (try burning at a slower speed, especially if the blank CDs are extra cheap).

C programming
Does C have a built-in command/function/etc. that returns the number of elements of a list? If so, what is it? If not, how can one determine the number of elements of a list of unknown length? Lucas Brown 42 (talk) 23:06, 12 May 2009 (UTC)


 * C does not have a built-in list data type. You can have a static-sized array, but you must know its length ahead of time.  Or, you can have a null-terminated array (or linked list), and you must traverse it, counting items as you go, to determine its length.  Nimur (talk) 23:25, 12 May 2009 (UTC)


 * Well - under very special circumstances. If you declare your "list" as an array:

int myArray [ 100 ] ;


 * ...then you can say:

x = sizeof(myArray)/sizeof(myArray[0]) ;


 * ...and x will be set to 100. But that can only work when the compiler can 'see' the array declaration.  If you do something like this:

int myArray [ 100 ] ;
int *myPointer ;

myPointer = myArray ;
x = sizeof(myPointer)/sizeof(myPointer[0]) ;


 * ...then x will be the size of a pointer divided by the size of an integer - not the size of myArray! But in general - no. SteveBaker (talk) 02:56, 13 May 2009 (UTC)

Alright... what would happen if I tried to access the n+1th item of an n-element array? --Lucas Brown 42 (talk) 16:28, 13 May 2009 (UTC)


 * That's called a buffer overflow, and is a common cause of bugs and program crashes. -- 128.104.112.117 (talk) 17:53, 13 May 2009 (UTC)


 * ...and because arrays in most modern programming languages start at index zero - accessing the n'th item of an n-element array is also likely to crash your program! What actually happens when you read or write past the end of the array depends on the programming language.  Some languages do 'array bounds checking', and making this kind of error will likely trigger an error condition.  Other languages (of which C and C++ are good examples) don't do this check (because it takes time that some applications can't afford to waste).  In that case, what happens is that some other location in memory is accessed instead.  That could be the variable declared either just before or just after your array - or it might be something else entirely.  This might cause your program to crash - but it might instead just start behaving really strangely - with variables changing their values for no apparent reason!  While you are still learning to program well (and even afterwards!) it's often wise to do something like this:

#define MY_ARRAY_SIZE 100
int myArray [ MY_ARRAY_SIZE ] ;

assert ( y >= 0 && y < MY_ARRAY_SIZE ) ;
x = myArray [ y ] ;


 * The 'assert' command crashes your program in a way that's easy to diagnose if the expression within the round brackets does not evaluate to 'true'. In most systems, these 'assert' commands can easily be 'turned off' at compile time once your program works and you don't need them any more.  SteveBaker (talk) 20:26, 13 May 2009 (UTC)