Wikipedia:Reference desk/Archives/Computing/2013 November 1

= November 1 =

ESSIV encryption mode and LRW/XEX/XTS
I understand ESSIV and LRW, and I vaguely understand the variations in XEX and XTS, but I'm curious if someone could help me pin down, in more basic terms, the advantage of the tweakable approach over ESSIV. As part of that, are there any public attacks against ESSIV based on the hash algorithm itself? Is using an ESSIV-encrypted block as a brute-force test faster than using a crypto-based one (I realize that's an intricate question, but I want to know if anyone's published/written on it)? Anything kind of close to these topics would be appreciated. Shadowjams (talk) 04:13, 1 November 2013 (UTC)
 * The advantage (where there is one) is just in performance. ESSIV-CBC uses the underlying block cipher with two different keys while LRW and XEX use only one, and CBC encryption can't be parallelized while LRW and XEX can. Both of these can cause problems for hardware implementations, but probably not for software. Also, ESSIV and XEX have to encrypt one extra block per disk sector, which is a significant extra expense; the flip side is that LRW's whitening values in different sectors are related in a simple way known to the attacker, which apparently reduces its security somewhat.
 * I think there are no known attacks on ESSIV-CBC because the Wikipedia article doesn't mention any and I didn't find anything on Google Scholar.
 * I'm not sure I understand your last question, but to frustrate brute force attacks you just need a large key (128+ bits) and an expensive key derivation function. You don't need to make sector encryption expensive. -- BenRG (talk) 08:02, 1 November 2013 (UTC)
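The points above can be sketched concretely. The block below is a toy model, not a real disk-encryption implementation: HMAC-SHA256 stands in for the block-cipher encryption that real ESSIV performs (the Python standard library has no AES), and the function names are invented. It shows the ESSIV data flow, where the per-sector IV is the sector number encrypted under a hash of the volume key, plus an expensive key derivation of the kind that actually frustrates brute force:

```python
import hashlib
import hmac

def essiv_iv(volume_key: bytes, sector_number: int) -> bytes:
    # Real ESSIV computes IV = E_{hash(K)}(sector_number) with the block
    # cipher itself; HMAC-SHA256 models that encryption here purely to
    # show the data flow.
    salt = hashlib.sha256(volume_key).digest()       # the derived "second key"
    sector = sector_number.to_bytes(16, "little")    # sector index as a 128-bit block
    return hmac.new(salt, sector, hashlib.sha256).digest()[:16]

def slow_kdf(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # An intentionally expensive derivation (PBKDF2-HMAC-SHA256, stdlib):
    # each brute-force guess costs `iterations` hash invocations.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               iterations, dklen=32)

# Different sectors get unrelated-looking IVs under the same key:
assert essiv_iv(b"volume key", 0) != essiv_iv(b"volume key", 1)
```

The expensive step belongs in the passphrase-to-key derivation, done once at mount time, which is why per-sector encryption can stay cheap.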

YouTube not working with Chrome
A little while ago I was informed of an update to Google Chrome, and it installed without my permission, but that's fine because I just set the settings back to the way they were. After I did that, YouTube would not work any more: it would play about 33% of the video, then the whole video would freeze, and this happens every time. I can watch YouTube videos outside of the YouTube website, like an embedded video on another site. How can I fix this so I can watch videos on YouTube again? — Preceding unsigned comment added by 216.124.224.51 (talk) 14:42, 1 November 2013 (UTC)

MP3 to WAV and WAV to MP3
I have been told that once a song loses quality, either by lowering its bit rate or by changing it from WAV to MP3, the audio quality can't be regained. Well, a few days ago I downloaded a song with an .mp3 extension at the end of its name, and then I decided to change the file extension to .wav just by renaming the song. I noticed that it actually sounded better than the original MP3. Then I changed it back to .mp3 and it sounded like what an MP3 would sound like. How can just renaming a song's file extension actually improve the quality of the song if the data and quality that were lost can't be regained? At least that was my experience. I didn't use software to convert the song from one format to another; I just right-clicked on the song, clicked the Rename option, and renamed the file extension. Willminator (talk) 16:17, 1 November 2013 (UTC)
 * If you're using Windows, it's possible that you caused the file to be played by a different program -- the way that a file is handled is determined by its extension. Otherwise this doesn't make any sense as far as I can tell.  Do you know what program is playing the file in each case? Looie496 (talk) 16:36, 1 November 2013 (UTC)
 * I agree that this doesn't make sense unless you have the default player set differently for the two extensions. Media players often ignore the extension (especially if it is wrong) and just recognise the coding, using their inbuilt or added codecs to decode the file.  Changing the extension doesn't actually change the file at all, or how it is handled by any one given media player.    D b f i r s   16:43, 1 November 2013 (UTC)
 * [Edit conflict] I know it doesn't make sense. That's why I ask. I right clicked on the song [That's what I did to change the name of the file extension. I made an edit to the description of my question to say right click, which is what I meant to say, not left click.] and clicked the Open With option and then clicked Windows. Willminator (talk) 16:48, 1 November 2013 (UTC)
 * When you renamed whatever.mp3 to whatever.wav, you still had the same MP3 file, but with a misleading name. It still worked because whatever software you're using ignores the name and looks at the file contents to figure out the format. The difference in sound was almost certainly in your imagination; see the green marker story for a similar case. If it wasn't in your imagination, it had nothing to do with the WAVE file format. It could have been cosmic rays or differences in air currents or some subtle background noise from another source. You'll probably never know, unless you repeat this experiment and get consistent results, in which case it's your imagination. :-) -- BenRG (talk) 16:55, 1 November 2013 (UTC)
 * It is possible that changing the extension caused a different program to play the file (that is, different players were registered for the .wav and .mp3 extensions). Some players, particularly those packaged with consumer-grade Windows PCs, by default "improve" the sound by adding processing like EQ, loudness, and stereo separation. Forcing the two files to play in the same player (by opening them from within the player, rather than double-clicking the file) should, naturally, yield identical results, regardless of the extension. -- Finlay McWalter | Talk 18:04, 1 November 2013 (UTC)
 * [Edit conflict] Here is an update about an experiment I just did to check whether you were right about this being my imagination. I have music-editing software called Sony Acid Pro. I dragged the .mp3 file onto one track and the .wav file onto the second track below it, not to edit them of course, but to look at their waveforms to determine whether one had higher quality than the other. I zoomed in very closely on different sections of the song, and it turns out that their waveforms match exactly in detail and size, meaning that Sony Acid Pro interpreted both files as having the same quality. That would support your [BenRG's] "imagination theory," as I call it: that I'm imagining a difference when there really isn't one. To my ears, though, the WAV file clearly seems to have higher quality. Then I decided to burn the renamed .wav song to a CD and rip it as an MP3, to see whether it would sound the same as the original MP3 and whether the waveforms would differ according to Sony Acid Pro. I put the original MP3 and the ripped MP3 into Sony Acid Pro to compare their waveforms. The waveform of the ripped MP3 had less detail than the original MP3 (the one I downloaded, in other words), and the original MP3 sounded a little better than the ripped MP3 (hopefully that is not due to my imagination either, :) but this time the waveforms show that it's not). Again, that would support your [BenRG's] "imagination theory." However, even though the original MP3 and the renamed WAV have the same waveform detail, it still seems that the renamed WAV sounds better. Given that my experiments seem to show you might be right about it being my imagination, this makes even less sense. Would you say it's my imagination, or could there be another explanation? Willminator (talk) 18:09, 1 November 2013 (UTC)
 * P.S.: I don't know if this will help with my question, but for MP3 songs, Windows Media Player is the default program, while for .wav files, iTunes is the default program. Willminator (talk) 22:41, 1 November 2013 (UTC)
 * There are different levels of MP3 compression, so you will certainly get an inferior sound if you rip at a lower quality (higher compression). Try opening both the original (renamed, not re-ripped) files (.wav and .mp3) in Windows Media Player.  They should sound identical, because they are identical files (even though you've changed the filename).  Opening them both in iTunes should also produce identical sounds, but they could be different from the sounds produced by Media Player.  This is rather like playing the same CD track on the same CD player and same speakers, but using different amplifiers.  They start with the same waveform, but distort it in different ways as they amplify.    D b f i r s   09:17, 2 November 2013 (UTC)
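The two claims in this thread, that renaming changes nothing about the bytes and that players identify the format from the content, can be checked with a short script. The magic-byte signatures below are the standard ones for the two containers; the function names are just illustrative.

```python
import hashlib

def sniff_audio_format(data: bytes) -> str:
    """Guess the format from leading bytes, ignoring the file name,
    as media players do when the extension is misleading."""
    if data[:4] == b"RIFF" and data[8:12] == b"WAVE":
        return "wav"        # RIFF container holding a WAVE chunk
    if data[:3] == b"ID3":
        return "mp3"        # ID3v2 tag that usually prefixes MP3s
    if len(data) >= 2 and data[0] == 0xFF and (data[1] & 0xE0) == 0xE0:
        return "mp3"        # bare MPEG audio frame-sync pattern
    return "unknown"

def file_digest(path: str) -> str:
    """SHA-256 of the file contents; renaming the file cannot change it."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Renaming song.mp3 to song.wav leaves file_digest() unchanged, and sniff_audio_format() still reports "mp3": the renamed file is the same MP3 either way.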

How do copyright and patent holders know their code is being stolen?
If someone compiles stolen code into his program, includes code that is supposed to be non-commercial or open-source in his commercial product, or infringes a software patent: how can the legitimate copyright holders know their code was compiled into another product? The most trivial cases, like offering the original program with a crack through eMule or infringing an obvious patent (which shouldn't be protected anyway), are clear. But how is it technically possible to analyze someone else's code and guess what's inside? OsmanRF34 (talk) 18:38, 1 November 2013 (UTC)


 * Sometimes the easiest way is not technical at all: it is possible to subpoena a series of expert witnesses from the alleged infringer, who must disclose whether they have infringed to a court, under penalty of perjury. More commonly, witnesses provide factual statements about undisputed actions they have taken; and those statements of fact allow a judge to decide whether any of those actions constituted infringement.  Nimur (talk) 19:33, 1 November 2013 (UTC)
 * Disclosure and the usual rules of evidence are the typical means. Nimur's slightly misusing the term subpoena, but for practical purposes the distinction is a minor one: you have to prove someone wronged you in any system that cares about actuality. Shadowjams (talk) 10:14, 2 November 2013 (UTC)

Test-driven development question
How come test-driven development is possible? Isn't it just as hard (or harder) to write test code as it is to make the functionality itself work?

What I mean is: why not take the same effort and just make the code 'actually work', rather than starting by writing and debugging test code? — Preceding unsigned comment added by 212.96.61.236 (talk) 19:45, 1 November 2013 (UTC)


 * Of course writing tests takes time, but you have to do something to check whether the code you have written actually works, and there is nothing you can do that doesn't take time. Have you looked at our article on test driven development, by the way? Looie496 (talk) 23:11, 1 November 2013 (UTC)


 * But test driven development is about writing tests ahead of writing code. So, "there is nothing you can do that doesn't take time" - yes, you can simply NOT write a test and just start writing the code.  When you're done writing the code you can see if it works (manually, by trying to compile and run it).  On the other hand, automating testing before you have something to test means that you're solving a problem that doesn't have to exist.  Also, it's impossible to automate something that you can do in a split second: take a look to see if the interface looks "OK" or is horrendous, for example.  212.96.61.236 (talk) 01:09, 2 November 2013 (UTC)


 * That's the method I've used. It has several advantages:


 * 1) It quickly allows you to demo something. Management is much happier if you can demo some portion of the code than if you say that nothing works yet.  (A risk is that they think you are making too much progress and push up the deadline.)


 * 2) Works better if you have to hand off development to somebody else.


 * 3) Works better with incomplete specs. You can demo how the part in question works now, and get the customer to approve it or say what needs to be changed.


 * 4) Just from a morale POV, seeing something work makes you feel a lot happier, and seeing a bit more work each day is encouraging.


 * 5) If there's a portion which may not work at all, you can try that first, before committing to the full project.


 * 6) I find it quicker to start with some similar program, then change it a bit at a time, until it meets the new specs, rather than start from scratch. This approach works well with test-driven development.


 * 7) If your program needs to interface with somebody else's, you can write the interface first, so they can work on their part while you finish yours.


 * 8) Makes debugging easier, since you only change a bit each run, so you know what caused it to break. There are exceptions though, when you add an intermittent bug, for example.  StuRat (talk) 23:25, 1 November 2013 (UTC)
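Point 7 above can be sketched as an interface stub. Everything below is invented for illustration (the class and method names are hypothetical): the abstract base class fixes the signature both sides agree on, and a canned stub lets the other team code and test against it before the real backend exists.

```python
from abc import ABC, abstractmethod

class PersonDirectory(ABC):
    """Agreed-upon interface: the other team codes against this today."""

    @abstractmethod
    def find(self, name: str) -> list[dict]:
        """Return matching person records as dicts with 'name' and 'dob' keys."""

class StubDirectory(PersonDirectory):
    """Temporary stand-in returning canned data until the real backend exists."""

    def find(self, name: str) -> list[dict]:
        return [{"name": name, "dob": "unknown"}]   # canned placeholder record

# Callers written against PersonDirectory work unchanged when a real
# database-backed implementation later replaces StubDirectory.
```

Because the abstract class can't be instantiated, anyone who forgets to implement `find` fails immediately rather than at some later integration step.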


 * Stu, I have no idea what you're talking about! What can you "quickly demo" -- all these tests you've written failing?  Because if you are talking about functionality, how does writing tests first in any way give you a 'quick demo' faster than just writing some code first?  Likewise I have no idea what any of your sentences have to do with testing... maybe my understanding of TDD is only partial... could you be a lot more explicit? 212.96.61.236 (talk) 01:12, 2 November 2013 (UTC)


 * OK, let's say I am to write a "people search" program where you enter the person's vital info and it searches multiple databases and returns the matches it finds. I could start by getting it working so it searches one of the databases and returns that info.  I would then demo to management, so they could comment on the info it requires, how long it takes, how it presents the results, etc., even though I haven't written the rest of the code yet. StuRat (talk) 01:21, 2 November 2013 (UTC)


 * I'm with you so far - and it's exactly how I would do it. Were you meaning to get around to mentioning TDD at all?  So far the process you've described exactly matches what I would have done - and does not involve writing test code either before coding functionality or afterward.  You don't even mention the word test.  212.96.61.236 (talk) 01:50, 2 November 2013 (UTC)


 * At each point I would do a test. First I would have a test to see that the target's name is entered correctly, then I'd add a test to verify that their birth data was entered correctly, then I'd add a test to see if it is able to get an exact match from one of the databases using that data, then I'd add a test to see if I'm able to retrieve the record correctly for that match, then I'd add a test to see that I'm able to display that data correctly.  Next I might experiment with inexact matches like "J Brown" finding "James Brown". Once I had that working, I'd feel good. StuRat (talk) 01:56, 2 November 2013 (UTC)
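Incremental checks like these become test-driven development when each check is written as a unit test before the code that satisfies it. Below is a minimal sketch using Python's unittest; the name matcher and its matching rule are invented for the "J Brown" example, not taken from any real people-search product.

```python
import unittest

def name_matches(query: str, full_name: str) -> bool:
    """Hypothetical matcher: each query part must be a prefix of the
    corresponding name part, so the initial "J" matches "James"."""
    q_parts = query.lower().split()
    n_parts = full_name.lower().split()
    if len(q_parts) != len(n_parts):
        return False
    return all(n.startswith(q) for q, n in zip(q_parts, n_parts))

class TestNameMatching(unittest.TestCase):
    # In TDD these tests are written first and fail ("red") until
    # name_matches() is implemented to satisfy them ("green").
    def test_exact_match(self):
        self.assertTrue(name_matches("James Brown", "James Brown"))

    def test_initial_matches_full_name(self):
        self.assertTrue(name_matches("J Brown", "James Brown"))

    def test_different_surname_rejected(self):
        self.assertFalse(name_matches("J Brown", "James Black"))

# Run with: python -m unittest <this module>
```

Each new capability (birth-date matching, database lookup, display) would get its own failing test first, then just enough code to pass it.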


 * Stu, everything you've described could apply to all styles of iterative development, but this question was specifically about writing the unit-tests that make TDD unique. APL (talk) 01:33, 3 November 2013 (UTC)


 * How can you do iterative development without testing at each iteration ? StuRat (talk) 04:50, 3 November 2013 (UTC)
 * Well, there's "I tested it and it looked good", and then there's the more formal unit testing associated with TDD.
 * These can be miles apart. APL (talk) 20:12, 3 November 2013 (UTC)


 * Like a lot of software engineering buzz-words, there's a good deal of fluff surrounding "test driven development." But at its core, the phrase "test-driven development" is simply summarizing the very good organizational strategy of being careful to test important features: old features and new features.  As anyone with product experience can tell you, feature lists can be enormous; and they can be totally dissimilar between different types of engineered products; so if we dive into specific strategies for testing and quality-assurance, there are lots of ideas for dealing with each type of complexity.  In software, for example, we use the concept of unit-testing and encapsulation.
 * There is also an element of social engineering at play. In certain (dysfunctional) organizations, "QA" is a bad word, denoting the least-qualified engineers who didn't make the cut for developing new projects.  This stigma infests many organizations: it stems from an organizational failure to recognize the immense value of quality-assurance and testing.  This rotten idea infects management-strategies for hiring and compensation.  It skews estimates about budget and time scheduling in project management.  Contrast this with properly operating engineering organizations, which have procedures in place to make sure that testing gets the attention, and the resources, it needs, enabling a better product.  The buzz-word, "test-driven development," can serve as a de-stigmatizing reminder to managers, engineers, and testers: it is very important to test the product.  The functionality of the product is the only relevant factor, irrespective of any other project benchmarks.  Nimur (talk) 17:12, 2 November 2013 (UTC)


 * But isn't the essential difference in TDD that you test each feature as it is added, as opposed to waiting until the code is completed ? StuRat (talk) 19:07, 6 November 2013 (UTC)