Talk:Deniable encryption/Archive 1

Can we please craft a 'scenario' where encryption is not used for illicit activities?
I find it somewhat objectionable that the "Scenario" entails someone committing adultery and using deniable encryption to support her activities. Deniable encryption has all sorts of uses for the good. Can we please craft a scenario in which the character is using deniable encryption for a good cause? Thanks.

152.3.68.83 (talk) 13:38, 10 October 2013 (UTC)

Bruce Schneier's 'Deniable File System' link
A link to "Deniable File System" was recently added to the 'see also' section, but I believe it's rather loosely related to the topic and not very informative, so I have reverted it for now. In any case, if anyone wants to re-add it, make sure you add it under the 'External links' section. -- intgr 13:10, 9 June 2006 (UTC)

Deniability not feasible
"In practice, deniable encryption is very difficult to execute. ... It is nearly impossible to construct keys and ciphertexts for modern block ciphers such that one ciphertext will decrypt to two comprehensible plaintexts."


 * That's not true. The idea is not to take an existing encryption scheme and make it deniable, but to use new algorithms that incorporate deniability. In this sense deniable encryption is decidedly possible and practically feasible. Arvindn 02:34, 11 Oct 2004 (UTC)


 * OK, I'm removing that last paragraph until I get a chance to rework it. Decrypt3 15:29, Oct 11, 2004 (UTC)


 * Given that non-deniable encryption systems are the norm, the only reason one would use a deniable encryption system is to be able to deny it, which makes it completely useless (at least in its original formulation). Steganography, on the other hand, would work better (i.e. an image is an image; it may contain a hidden message, or not). NabilStendardo (talk) 20:28, 20 December 2011 (UTC)

CDMA as deniable encryption?
CDMA mobile phones essentially combine several different messages (from the tower to the handsets or vice versa) in the air, and the decoding process allows each phone to use its unique code to pick out a message intended for it, or to detect that it isn't being addressed, from a shared bitstream. Could this be considered a form of deniable encryption (since the same bitstream can be decoded to produce different messages), and if so, perhaps it should be referenced in this article? Mr2001 3 July 2005 06:44 (UTC)
 * Doesn't sound like it's a particularly good example, IMHO... H8gaR 17:34, 26 July 2007 (UTC)
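Whether or not CDMA qualifies as deniable encryption (the reply above is skeptical), the mechanism the question describes can be sketched concretely. The following is my own illustrative toy, not drawn from any CDMA standard: two mutually orthogonal Walsh codes are summed "in the air", and correlating the shared signal with each code recovers only that code's bit.

```python
# Toy sketch of CDMA-style code division: one shared signal,
# different messages recovered depending on the spreading code used.

def spread(bit, code):
    # Map bit {0,1} to {-1,+1} and multiply by the spreading code chips.
    symbol = 1 if bit else -1
    return [symbol * chip for chip in code]

def despread(signal, code):
    # Correlate the shared signal with one code; orthogonality makes
    # the other code's contribution cancel, leaving this code's bit.
    corr = sum(s * c for s, c in zip(signal, code))
    return 1 if corr > 0 else 0

# Two orthogonal Walsh codes of length 4.
code_a = [1, 1, -1, -1]
code_b = [1, -1, 1, -1]

# Each "phone" transmits one bit; the air sums the chips.
combined = [a + b for a, b in zip(spread(1, code_a), spread(0, code_b))]

print(despread(combined, code_a))  # phone A recovers 1
print(despread(combined, code_b))  # phone B recovers 0
```

Note the asymmetry with deniable encryption proper: here every receiver's message really is present in the signal, whereas deniability is about plausibly claiming a second message when only one exists.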

Example
Do we have to use negative examples of why someone would want deniable encryption? Why can't we use positive examples like free-speech advocates in China instead? —The preceding unsigned comment was added by 192.43.65.245 (talk • contribs) 19:18, 18 May 2006 (UTC)
 * I don't know about a better example, but it would be problematic, from the NPOV, to use a specific political example (And I do abhor China's free-speech problems, by the way!) — Matt Crypto  21:02, 18 May 2006 (UTC)


 * I'm not a wikipedian, but from a layman's perspective, the current example seems VERY point-of-view. Probably a military example would be the best NPOV, without identifying the military organization. -Bluenail


 * Utterly inappropriate example for an encyclopedia. "Being bold" and commenting it out in the hopes someone will fix the content. I'm not a fan of leaving garbage in the wikipedia until someone decides to fix it. Have an objection? Then fix it, don't merely reinstate it. -kid in study hall —Preceding unsigned comment added by 69.74.19.226 (talk) 14:44, 16 October 2008 (UTC)

Misattribution
The notion of "deniable encryption" was introduced by Julian Assange & Ralf Weinmann in the Rubberhose filesystem[1] and explored in detail in a paper by Ran Canetti, Cynthia Dwork, Moni Naor, and Rafail Ostrovsky[2] in 1996.

The Rubberhose stuff seems to be from 2000, about 4 years later than the CDNO paper. The Rubberhose reference doesn't point to a precise document, so the claim that deniable encryption originates with them is a bit hard to verify. Stevo2001 (talk) 20:12, 9 February 2009 (UTC)

questions
Hello, could someone add some other simple example (besides the one-time pad) of the construction of such a cipher? Also, I don't understand: can this be used as protection against brute-force cryptanalysis, since the attacker doesn't know which message is the right one? Samohyl Jan 07:35, 28 Feb 2005 (UTC)

"Significance" and "essence" sections
The recently added "significance" and "essence" sections, in my opinion, read too much like an essay. Besides that, they also widen the already problematic gap between traditional "one ciphertext decrypts into multiple plaintexts" kind of deniability, and deniability offered by upper layers (as in TrueCrypt, PhoneBookFS or StegFS; or OTR where it has an entirely different meaning). Notice how earlier sections seem to imply that OTP-kind of deniability is the only kind of deniability.

Another problem I have with this is the apparent promotion of YouDeny.com and a new patent, calling it a "novel technique." I am not implying that this edit was made in bad faith, but I believe it's not appropriate for Wikipedia without significant media coverage (WP:V, WP:RS).

I am therefore reverting these edits for now. Does anyone else have a second opinion on this? -- intgr [talk] 15:56, 27 May 2008 (UTC)

--- Deleting the contribution eliminates the clarity it added to a description that otherwise reads as very thick and mysterious. It must be stressed that the one-time pad provides total deniability, owing to its key size, and therefore any ciphersystem that allows keys as large as desired may provide classical deniability. It's not intellectually honest to remove this point from the discussion. For reference see the quoted patent as well as:

http://eprint.iacr.org/2008/222 "Encryption on Demand"

Samid, G. 2001 "Re-Dividing Complexity Between Algorithms and Keys (Key Scripts)" The Second International Conference on Cryptology in India, Indian Institute of Technology, Madras, Chennai, India. December 2001.

Samid, G. 2002 "At-Will Intractability Up to Plaintext Equivocation Achieved via a Cryptographic Key Made As Small, or As Large As Desired - Without Computational Penalty" 2002 International Workshop on Cryptology and Network Security, San Francisco, California, USA, September 26-28, 2002.

Samid, G. 2001 "Anonymity Management: A Blue Print For Newfound Privacy" The Second International Workshop on Information Security Applications (WISA 2001), Seoul, Korea, September 13-14, 2001 (Best Paper Award).

Samid, G. 2005 "The Myth of Invincible Encryption" Digital Transactions May-June 2005 —Preceding unsigned comment added by Ignorexxia (talk • contribs) 20:26, 27 May 2008 (UTC)


 * No, the added contribution adds to the confusion because it suggests that the "long key encryption" approach is the only way to achieve deniability; yet some practical end-user applications that offer a level of deniability (listed above) do not use this method. There is a very good reason for this: if your data is encrypted with a long secret key, then managing the key itself becomes a problem, as it cannot be memorized. What do you do with a long secret key -- do you encrypt it?
 * I do agree that the article is unclear, but I believe this addition is a step backwards. Not only because it misrepresents the practical approaches to deniable encryption; phrases like "Freedom without privacy is not. Privacy without deniability is not." might belong in an essay or a sales brochure, but they're not something an encyclopedia would say.
 * I'm not particularly impressed by your excessive citing of Gideon Samid, either, because it is naturally in his interests to promote his product and patent. It doesn't add to his integrity that his papers never cite other authors' research, and that reputable cryptography researchers like Bruce Schneier and Peter Gutmann  have publicly criticized the claims of "AGS Encryptions Ltd.", which is the company that sells Gideon Samid's product. -- intgr [talk] 13:55, 28 May 2008 (UTC)

This topic, like so many others, is clouded with needless complications. It boils down to simple statistics: if the key is much shorter than the message, then it's statistically impossible to find a second plausible plaintext for a given ciphertext. And this holds true regardless of the algorithm that is being used. That is why all the mainstay ciphersystems don't offer deniability: they all use very small keys. By contrast, any ciphersystem that can handle an as-large-as-desired key (without choking on the computational load) will be able to use a sufficiently large key to enable deniability. The above article by Gideon Samid, and the corresponding patent (US Patent 6,823,068), do allow a key of any size, and hence this ciphersystem offers deniability. Any other ciphersystem that can handle large keys will also offer deniability. Small-key systems -- not. Now that is solid mathematics. You can't intimidate this truth by citing crypto-theologians. And since nobody today remembers keys (like yesterday's spies had to), it's all in memory anyway, there is no real reason not to use large keys. And yes, deniability is the only weapon against coercive invasion of privacy. —Preceding unsigned comment added by Ignorexxia (talk • contribs) 22:13, 21 September 2008 (UTC)
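The one-time-pad side of the statistical argument above can be shown in a few lines. This is my own minimal sketch (the messages and names are invented): when the key is as long as the message, a "fake" key decrypting the same ciphertext to any chosen equal-length plaintext always exists and can be computed directly, which is exactly what the short-key case statistically rules out.

```python
# One-time-pad deniability: for a full-length key, any equal-length
# plaintext can be "proven" by exhibiting a suitable fake key.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

real_msg = b"meet at dawn"
key      = os.urandom(len(real_msg))   # pad as long as the message
ct       = xor(real_msg, key)

# Under coercion, hand over a key that decrypts to an innocuous message.
fake_msg = b"buy the milk"             # must be the same length
fake_key = xor(ct, fake_msg)           # fake key computed from ct directly

print(xor(ct, key))       # b'meet at dawn'
print(xor(ct, fake_key))  # b'buy the milk'
```

With a 128-bit key and a megabyte message, by contrast, a second key yielding a coherent alternative plaintext almost surely does not exist, which is the point the comment makes about mainstay ciphers.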


 * I understand what you're getting at, and I believe I already addressed that above. This article isn't just about deniable ciphers, but also applications that provide deniability using classical (non-deniable) cryptography primitives; take for example the TrueCrypt hidden container technique that only uses a symmetric cipher. It doesn't encode two messages in the same ciphertext, but instead hides the hidden volume in the free space of the outer volume. I don't think there can be any doubt about which approach is more popular: it's AGS Encryptions alone versus TrueCrypt, BestCrypt, FreeOTFE, Rubberhose FS, PhonebookFS, StegFS, Magikfs etc etc. Whereas you're trying to claim that using a large key cipher is the only way to achieve deniability — it's not.


 * I'm not even convinced that long key deniability is practical; in a classical cryptosystem, you normally pick one or more memorable passphrases, which are the only secrets in the system. It works because this secret can be kept in a relatively secure place — your mind. In the long key approach however, you're stuck with the cryptosystem telling you what the key is. Even worse, if you want to deniably encrypt significant amounts of data, the key can be megabytes or gigabytes long. You can't memorize it, so you most likely store it on a computer.
 * Q1: How do you deny the existence of a key that's accessible for the attacker?
 * Q2: If you do have a good place to hide the key so the attacker doesn't find it — why not store your data there in the first place? If the attacker never finds your data, then there is no need to deny it.


 * Third, the entire world of cryptography depends on peer review, and I can't find any evidence of thorough reviews of his approach. Having Wikipedia suggest that this is secure at all is, I think, optimistic. -- intgr [talk] 03:40, 22 September 2008 (UTC)
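The hidden-container idea discussed in this thread can be modeled in miniature. This is my own illustrative toy, not TrueCrypt's actual on-disk format, and the "cipher" here is a deliberately insecure hash-based keystream: the point is only that each password unlocks a different region, and the region not unlocked is indistinguishable from random free space.

```python
# Toy model of a hidden container: the outer volume's tail doubles as
# "free space" that actually holds a second, hidden volume.
# NOT real crypto -- the keystream below is a stand-in for a real cipher.
import hashlib

def keystream(password: str, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

outer_data  = b"grocery lists and holiday photos"   # 32 bytes, decoy
hidden_data = b"the actual secret".ljust(32)        # padded to 32 bytes

# Outer volume occupies the front; the hidden volume sits in the tail,
# where free space would otherwise hold random-looking filler anyway.
container = xor(outer_data,  keystream("decoy-pass", 32)) + \
            xor(hidden_data, keystream("real-pass", 32))

# Revealing only "decoy-pass" exposes the outer volume; the tail still
# looks like uninitialized free space.
print(xor(container[:32], keystream("decoy-pass", 32)))
print(xor(container[32:], keystream("real-pass",  32)))
```

Note how this sidesteps the long-key objections raised above: both secrets are ordinary memorable passphrases, and deniability comes from the layout, not from key length.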

Phrase is insulting to many readers (or redundant)
In the section titled "Modern forms of deniable encryption" the reader is addressed: "Needless to say, insecure block ciphers or pseudorandom number generators can..."

Some will be less familiar with encryption (or the technical details of its implementation) than others, and do in fact need the explanation.

Alternately: if we can somehow assume it's truly "needless to say," then why say it? drone5 (talk) 06:59, 4 October 2010 (UTC)


 * Wikipedia is a wiki. You're welcome to change it. -- intgr [talk] 12:53, 4 October 2010 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Deniable encryption. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20100915130330/http://iq.org/~proff/rubberhose.org/ to http://iq.org/~proff/rubberhose.org/

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.— InternetArchiveBot  (Report bug) 23:45, 10 December 2016 (UTC)

True crypt deniability
Following up a minor change I put in: TrueCrypt only supports a single hidden container per host container, while the other systems mentioned allow numerous hidden containers. This means "the guy with the rubber hose" only needs to beat the person with the container file until they spill the details of that single hidden container (or a dummy hidden container), at which point the person with the container file can prove there are no more hidden volumes and the beating will stop — so a beating acts as an active incentive to give up any hidden volume. OTOH, anyone with a FreeOTFE or BestCrypt container knows there's no point in giving up the details of any hidden volume, as the beating won't stop anyway. This means the hidden data they store is more likely to remain secure (there's no incentive to give up the hidden volume's details). This isn't an anti-TrueCrypt comment, but when protecting data against someone with no respect for human rights, this is a big concern. Cralar (talk) 17:00, 19 June 2010 (UTC)


 * Then don't beat around the bush. Claiming that TrueCrypt is worse than everything else without any substantiation is clearly a bad edit. Just state the facts in the article and cite your sources. -- intgr [talk] 20:28, 19 June 2010 (UTC)
 * I can see your point; that wasn't what I was looking to do. I'll elaborate more to make it clearer. Cralar (talk) 21:03, 19 June 2010 (UTC)

Can you quote the exact part of your provided sources that suggests that either FreeOTFE or BestCrypt support multiple hidden containers per volume? I skimmed through all of those sources, but did not find supporting evidence. As far as I can see, this is also problematic given the design of the hidden containers approach, which must prevent different hidden containers from overwriting each other without leaking their neighbors' existence.

Also, you cannot claim that TrueCrypt provides "less" deniability because of it, because that is clearly not present in the sources; it's synthesis. -- intgr [talk] 14:25, 22 June 2010 (UTC)


 * BestCrypt allows multiple hidden volumes; each "normal" volume has some number of keys associated with it which may or may not be in use, and may or may not be a key to a hidden volume. It's in the API documentation - I'll try and pull out a better reference (the one I put up previously maybe wasn't too clear about this). The FreeOTFE WWW site certainly states that it allows arbitrary numbers of hidden volumes. OTOH, TrueCrypt certainly can't support more than one hidden volume - it's inherent in its design, so that reference is OK. Whether TrueCrypt is "less deniable" depends on your security model, but the limitation to a single hidden volume per normal volume can certainly have security implications that don't apply to the other two.
 * I'll have a stab at phrasing it better - it really needs making clearer (I'm certainly not trying to claim "Truecrypt is worse than everything else!" - a lot of disk encryption systems don't have any support for hidden volumes). Please could you help me out trying to phrase this to get it across? Cralar (talk) 20:46, 22 June 2010 (UTC)


 * You're right, the FreeOTFE documentation does say it and I missed it on my first read. How about phrasing it like this? -- intgr [talk] 23:01, 22 June 2010 (UTC)
 * Yup - thanks for rephrasing, that's a bit better. I've also found a better reference to what I was trying to get at above; the Rubberhose project has details on why limiting the number of hidden volumes can be a serious problem; see: http://iq.org/~proff/rubberhose.org/current/src/doc/beatings.txt - limiting a system to just one or two hidden containers increases the risk of the data secured being revealed Cralar (talk) 18:34, 23 June 2010 (UTC)
 * There are always limits, even if they're just soft limits on human memory and the usability of the system you created. An attacker might drug you and beat you virtually indefinitely. Regardless of all the rationalisations, I think even a moderately skilled torturer could draw all passwords out of anyone in a relatively short time-frame. — Preceding unsigned comment added by 84.243.199.239 (talk) 11:38, 6 March 2015 (UTC)