Talk:/dev/random

FreeBSD claims are wrong
It seems some claims about the FreeBSD implementation in the article are incorrect. Since no sources are given, I'd like to change them to something better; I have no idea where the statement that the FreeBSD /dev/random device never blocks originally came from.

FreeBSD documentation (http://www.freebsd.org/cgi/man.cgi?query=random&apropos=0&sektion=4&manpath=FreeBSD+10.0-stable&arch=default&format=html) clearly states that /dev/{u,}random will block if there is not enough entropy and has done so since at least 5.0-RELEASE.

https://wiki.freebsd.org/201308DevSummit/Security/DevRandom has a lot of discussion going on about improving /dev/random on FreeBSD. Most importantly: ''Both Yarrow and Fortuna need to accumulate a certain amount of entropy before they can start generating random numbers. Until that happens, reads from /dev/random will block.'' dhke (talk) 09:22, 28 April 2014 (UTC)

Should this all really be in the FreeBSD section?
In 2004, Landon Curt Noll tested the FreeBSD 5.2.1 version of /dev/random and found that it was not a cryptographically strong random number generator because its output had multiple uniformity flaws according to the Billion bit test. Similar flaws were found in the Linux 2.4.21-20, Solaris 8 patch 108528-18, and Mac OS X 10.3.5 implementations of /dev/random.


 * I'm removing that part altogether per WP:NOR: the only reference is a link to the web page of the person who added that paragraph. I can't see any reliable-looking source (e.g. academic paper) referenced on that web page. Lamellae (talk) 22:06, 13 August 2013 (UTC)

Typical?
The article says:


 * It is typically used for providing a character stream for encryption, incompressible data, or securely overwriting files.

No. These are not the typical uses but rather just some of the typical uses. But it is no more secure to overwrite a file with random data than it is to do so with nulls, so that is a poor and inefficient way of overwriting files!

Paul Beardsell 23:27, 27 Sep 2004 (UTC)

Actually, I (and apparently the U.S. gov't) disagree.

The DoD regulation for deleting files securely states that overwriting with random material (maybe up to 5 times) will ensure that any orderly data is gone. If you only use 0s and 1s, the possibility is there that a clever recovery could analyze the HD and determine if parts are not totally zeroed out. Theoretically, this would permit an attacker/recovery expert to locate segments of files. Random overwrite makes this impossible: the underlying structure is now totally random. nneonneo 18:48, 30 January 2006 (UTC)


 * This is wrong. First, the DoD would rather you melt or grind the media. Second, it's silly to go to extreme measures defeating an exotic threat without also addressing the problem of bad block replacement relocating your data. Your data may be all over the drive, never overwritten, because the hard drive decided that a sector was about to fail. It is better to be fast, just in case you get interrupted by the enemy, and so that you are not tempted to skip the operation. 24.110.60.225 03:08, 5 July 2006 (UTC)


 * Using a journaling file system carries a similar risk. IMHO, the most reliable way to ensure that deleted data is unreadable is to never write unencrypted data to the platter in the first place. Failing that, overwriting with several passes of random data is best, precisely because it decreases the signal-to-noise ratio for any attacker trying to read the old bits. See Peter Gutmann's 1996 paper Secure Deletion of Data from Magnetic and Solid-State Memory for a thorough discussion of the issues.


 * That said, /dev/random is unsuitable for wiping disks simply because it's slow, even if you use urandom. I use it to seed a fast CSPRNG and use that for the overwriting. Unless you expect a major corporation or intelligence agency to try to read your disk, and have data they would find interesting, it works well enough. Tualha (Talk) 17:46, 24 November 2006 (UTC)
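A sketch of the approach Tualha describes above — read a small seed from the kernel, then let a fast generator produce the overwrite data. (Python 3; the SHA-256 counter construction and the function names are illustrative, not taken from any actual wiping tool.)

```python
import hashlib
import os

def keystream(seed: bytes, nbytes: int) -> bytes:
    # Toy fast CSPRNG: SHA-256 over seed||counter. Illustrative only;
    # a real tool would use a vetted stream cipher such as ChaCha20.
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

def wipe(path: str) -> None:
    # Single-pass overwrite with pseudorandom data; the kernel pool is
    # consulted only once, for the 32-byte seed.
    size = os.path.getsize(path)
    seed = os.urandom(32)
    with open(path, "r+b") as f:
        f.write(keystream(seed, size))
        f.flush()
        os.fsync(f.fileno())
```

Note this does nothing about the bad-block relocation and journaling problems mentioned earlier in the thread; it only addresses the speed objection.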

How do we know...
How do we know that stuff that comes from /dev/random isn't actually XOR'd with what goes into /dev/null? That /dev/null info has to go back into the machine somehow, or else it might collect inside the CPU fan and hinder that source of information.

This /dev/random seems like it could be hazardous if overused--it could overflow into /dev/zero and junk up the whole system. There should be a /dev/monitor that puts information back into the entropy pool and junks the pool when it gets close to overflowing. Though where this junk goes to could be a problem. There should probably be some type of information bottle that the user could change when necessary.

This bottle of potentially valuable information could then be sold to an InfoDump, which would distill the sticky black stuff into clear, pure inert information and sell the excess junk to some bigger InfoDump... --Ihope127 23:58, 11 July 2005 (UTC)


 * Sounds to me as if it's already overflowed into your brain. Wiseguy ;) Tualha (Talk) 17:46, 24 November 2006 (UTC)

Confusion
I am confused. urandom takes the output of random, and applies additional transformations? In which case, why is it called a "pseudo-random" number generator? I associate pseudo-random with purely numerical algorithms that produce predictable, but hopefully acceptable, output given a seed. Sdedeo 22:24, 2 September 2005 (UTC)


 * /dev/urandom is the non-blocking form of /dev/random. It outputs exactly what /dev/random does up until the entropy pool runs out, at which point it uses a different PRNG to generate values. Thus, it is non-blocking, but output isn't guaranteed to be of high quality. nneonneo 18:35, 30 January 2006 (UTC)


 * This is probably not true. /dev/urandom continues to give out pseudo-random numbers with the same algorithm, but without adding new entropy to its pool (and possibly missing some other transformation because of that). --LPfi (talk) 11:29, 11 May 2009 (UTC)


 * Not "probably"; it's just wrong.
 * /dev/random and /dev/urandom share a pool of entropy, which is fed with supposedly-random bits from external events. Each time data is added to that pool, it is "stirred", i.e. its contents are scrambled. That pool is used to feed two further pools, one for urandom and one for random. These are also stirred when bits are added to them. So the two generators should never generate the same sequence; they do not share a seed.
 * Neither random nor urandom output bits from their respective pool; the contents of all the pools are supposed to be seriously secret. Rather the data in the respective pool is passed through a CSPRNG to generate random bits securely (and without exposing the contents of the pool). Each pool is thus used as a fancy sort of seed for the respective CSPRNG. Provided the pool is never exposed, the two CSPRNGs can go on generating unpredictable random bits until the heat-death of the Universe.
 * The "quality" of the numbers generated by these two processes is exactly the same; but /dev/random will block when it thinks it has "used up" all its entropy, while /dev/urandom will carry on outputting bits. It doesn't switch to a new, lower-quality CSPRNG; it just carries on the same as before, using the same seed, as if nothing has changed (and in reality, nothing significant has changed).
 * So the reason for adding new random bits to these pools is to mitigate the risk of the pool being exposed (or known in advance). It is falsely claimed in the documentation for urandom that its use "depletes the entropy pool". It is theoretically possible to reverse-engineer the pool contents given a sufficiently-long stream of bits from the output of [x]random; that is, given enough output, and a method for reversing a secure hash given its output, one could determine the contents of the pool. In practice, for pools the size of those used in these generators, that would require ridiculous amounts of time, given even very conservative assumptions (of the order of billions of billions of times the age of the Universe, using theoretically-optimal computing equipment, and consuming gigantic amounts of power, like the entire output of a star). Seriously, it can't be done.
 * So the threat is not "depletion of entropy"; it's that someone gains root on your computer, and inspects the contents of your pool. Or more seriously, that your computer is at an early stage of booting, and (a) no external random events have yet been detected, and (b) it has not been possible to initialise your pool using data from a previous session (perhaps you have no persistent storage; perhaps this is the first time it has booted). These are both real and serious risks. But they are not caused by depletion of entropy.
 * Urandom does not use the same pool as random, except indirectly; they both refill their respective pool from the shared pool. Neither will ever output the same bits that one could have got from the other, had one read the other instead.
 * Some people believe that, after drawing some number of bits from urandom equal to the size of its pool, the output of that RNG can no longer be relied on as "random", because all of the true randomness has sort-of been drained away. That is nonsense, and exposes a serious misunderstanding of how a CSPRNG works and what it does.
 * I'm sorry if I seem grumpy; I have been trying all day to find the source of the belief that a CSPRNG that is seeded with random data can have its entropy depleted. I have found no plausible arguments that this is the case. I have seen it asserted quite a lot; and I have read a small number of articles that convincingly debunk it. The idea seems to come from the man page for /dev/urandom in Linux (and some people claim that this belief arose in Linux circles). I have the greatest respect for the person I believe authored that page, and I don't believe he writes nonsense about this kind of thing; and that is why I have been looking for another source.
 * Note that the term "entropy", in this context, means something different from the entropy of James Maxwell, or the entropy of quantum mechanics. It means, roughly, "unpredictable data". That is, whether some data is entropy or not, depends on the context. If your adversary can't predict it, you can consider it entropy and use it to seed random functions (that's right - whether it counts as entropy or not also depends on who your adversary is). The arrival-time of hardware interrupts might be entropy, for these purposes; but if your adversary can, using a super-accurate timer, measure the arrival-times of your keystrokes, then they stop being entropy; they become just data that is useless for seeding RNGs.


 * I would like to "weave" this information into the article; but I don't know how to proceed. Most information on the web about this subject is misleading or wrong (including the man page of /dev/urandom). There's very little stuff about this that isn't web articles; we're talking about software that wasn't invented until the late 90s. And even places where experts discuss this kind of thing are full of misleading nonsense. I can collect some sources, but none of them is going to be a newspaper of record, or a book published by a respectable publisher, or anything like that; at best they'll be blogs, or less satisfyingly comments to threads on Hacker News or the Linux Kernel mailing list.
 * Incidentally, it's not that hard to find people who would like to see /dev/random abolished, because it serves no purpose, and only causes confusion and pain. The reason seems to be backward compatibility, and because this code is part of the Linux kernel, and because there is a religious belief that kernel changes must never "break userland".
 * So I've put it here. If you can find really good sources (good enough to blow away the nonsense), please add them here, or in the article. MrDemeanour (talk) 19:40, 3 October 2019 (UTC)
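For what it's worth, the pool-plus-CSPRNG design described above can be modelled in a few lines (Python 3; a deliberately toy construction, not the kernel's actual code — the point is only that reading output never "drains" the pool, while stirring in new events changes future output):

```python
import hashlib
import os

class ToyDRBG:
    # Toy model: a secret pool seeds a hash-based generator; output is
    # hash(pool || counter), so the pool is never exposed or consumed.
    def __init__(self, seed: bytes):
        self.pool = hashlib.sha256(seed).digest()
        self.counter = 0

    def stir(self, event: bytes) -> None:
        # Mixing in new bits changes future output; it is not fuel.
        self.pool = hashlib.sha256(self.pool + event).digest()

    def read(self, n: int) -> bytes:
        out = bytearray()
        while len(out) < n:
            block = self.pool + self.counter.to_bytes(8, "big")
            out += hashlib.sha256(block).digest()
            self.counter += 1
        return bytes(out[:n])

g = ToyDRBG(os.urandom(32))   # seed once from "real" entropy
```

Two instances with the same seed produce identical streams forever; stirring one with an unpredictable event makes them diverge. Nothing about the output quality changes as more bytes are read.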

Python Code
 f = open('/dev/random', 'r')
 print repr(f.read(10))
 '\xb6.\x06\xd4\x93\x8d\r\x86\x19W'

I am not sure how to parse this. Looks like there are escaped bytes (prefixed with \x) and printable ASCII characters mixed together...

To get random letters from /dev/random, you would likely try code like this:
 f = open('/dev/random', 'rb')
 charset = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890'
 print ''.join([charset[ord(i) % len(charset)] for i in f.read(100)])
 f.close()
This generates 100 random characters in [a-zA-Z0-9]. There is a slight bias that must be noted: because 256 is not a multiple of the character-set size, the letters appearing earlier in the charset string will appear slightly more frequently than those appearing later. nneonneo 19:07, 30 January 2006 (UTC)
 * /dev/[u]random generates bytes in the entire possible range (0x00 - 0xFF). Thus, there isn't any particular pattern; the bytes are meant to be used in a crypto key, for example.
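The modulo bias mentioned above can be removed with rejection sampling: discard bytes that fall into the incomplete final block of the 0-255 range. A sketch (Python 3, reading /dev/urandom directly; reading one byte at a time is for clarity, not efficiency):

```python
# /dev/urandom exists on Linux and the BSDs; function name is illustrative.
CHARSET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890"

def random_chars(n: int) -> str:
    # 256 % 62 != 0, so bytes >= 248 would over-represent the first few
    # charset entries; reject them instead of folding them back in.
    limit = 256 - (256 % len(CHARSET))   # 248
    out = []
    with open("/dev/urandom", "rb") as f:
        while len(out) < n:
            b = f.read(1)[0]
            if b < limit:
                out.append(CHARSET[b % len(CHARSET)])
    return "".join(out)
```

Each accepted byte now maps to exactly four charset positions, so all 62 characters are equally likely.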

Others uses of /dev/random and /dev/urandom

 * /dev/urandom can be used to test a sound card, headphones, etc.:

cat /dev/urandom > /dev/dsp

Other operating systems
Maybe there are others, but for certain there's NoiseSYS - a device driver which runs under DOS and provides RANDOM and URANDOM (RANDOM$ and URANDOM$ in later versions) which do the same. As DOS allows device names to be preceded by /DEV, /DEV/RANDOM is fully possible under DOS as well. Maybe that should be mentioned?

UNIX systems?
Is /dev/random a typical device of a UNIX system? I thought that this was introduced with the Linux kernel and adopted by OpenBSD/FreeBSD a bit later and by Solaris a lot later. Anybody has knowledge on that? -Anonymous
 * Sure. Theodore Ts'o <tytso@mit.edu> (pronounced "Cho", with a long O sound) wrote it for Linux. FreeBSD cloned it, badly, failing to do the entropy accounting that is so critical to preventing undiscovered hash/cipher weaknesses from affecting the randomness quality. Solaris is indeed recent. MacOS X has a /dev/random too. 24.110.60.225 03:17, 5 July 2006 (UTC)
 * MacOS X uses the FreeBSD Yarrow implementation -Anonymous
 * Wait, I thought it was part of AT&T UNIX? 71.81.37.151 05:47, 21 November 2006 (UTC)
 * If MacOS X uses the Yarrow thing, I think the article should say so (as it does for FreeBSD). Karadoc** (talk) 06:50, 28 January 2008 (UTC)
 * +1 (I agree) for the MAIN article to include a new section *before* Linux called "Unix," or "*nix." Currently (top to bottom), the WIKI sections are Linux, then FreeBSD, then OpenBSD, then "macOS and iOS," and lastly, "Other operating systems." I believe *nix supersedes all other listings for /dev/[u]random - no? By the way, I am having the craziest hardest time finding a good source on the web detailing the history, evolution, invention process for /dev/random - can anyone kindly help me out please? Thanks in advance! Vid2vid (talk) 04:46, 23 May 2019 (UTC)
 * Not unless some other UN*X had it before Linux. Otherwise, well, no.
 * As of System V Release 4.2, there's no sign of a /dev/random in any UNIX from AT&T, so it doesn't appear to be "part of AT&T UNIX". There's no sign of it in the most recent version of the Single UNIX Standard, so it's not part of "UNIX" in that sense, either. Guy Harris (talk) 07:27, 23 May 2019 (UTC)

Did you say "true random number generator" ?
How could an algorithm be "truly" random without any external source such as the decay of an isotope? The quality of /dev/random output can't be truly random; indistinguishable from random maybe, but truly random it's not. —Preceding unsigned comment added by 83.115.108.91 (talk) 08:14, 28 March 2008 (UTC)


 * I know it's been a long time since the posting of this question, but I hate to leave an unanswered question on the page. Hardware random number generator states that the noise is generated from quantum phenomena, such as thermal noise or the photoelectric effect, which are truly random. This noise is gathered into the entropy pool that /dev/random uses. So why could it NOT be truly random, if the sources it retrieves from are? I can agree that /dev/urandom (the non-blocking source) could potentially be less truly random, but /dev/random is surely good enough for high-security purposes. PseudoOne (talk) 21:02, 21 January 2009 (UTC)

The Billion bit test has found multiple uniformity flaws in a number of /dev/random implementations. In fact, I have never found a /dev/random implementation that did not suffer from at least one uniformity flaw, except maybe those /dev/random implementations that bypassed the standard /dev/random driver code and directly accessed a cryptographically sound hardware source. ( BTW: The cited page lists observed flaws in the FreeBSD 5.2.1, Solaris 8 patch 108528-18, Linux 2.4.21-20, OS X 10.3.5 implementations of /dev/random. Not listed are a number of more recent tests under RedHat Linux, OS X, FreeBSD, etc.)

Based on my testing experience, I would hesitate to call /dev/random a Cryptographically secure pseudorandom number generator. The output appears to be statistically distinguishable from a true random source. chongo (talk) 03:48, 3 July 2009 (UTC)

I don't think the reference you cite (for your claim that no /dev/random implementation is cryptographically strong) is credible. I'm a cryptographer, and I think that page has misunderstood randomness testing. In my view, that page is not from a credible source, it is not verifiable, it has not been replicated. It is making an extraordinary claim, and extraordinary claims require stronger substantiation than that. The claims that /dev/random is flawed should be deleted from the Wikipedia article: they are not credible. (21:14, 29 July 2010 (UTC)) —Preceding unsigned comment added by 128.32.153.194 (talk)

/dev/urandom not used for cryptographically secure random number generation
As of Feb, 03, 2011 the article has a sentence: "This means that the call will not block, but the output may contain less entropy than the corresponding read from /dev/random. Thus, it may not be used as a cryptographically secure pseudorandom number generator, but it may be suitable for less secure applications."

The first sentence is true. A 1000-digit number taken from /dev/random will contain more entropy than a 1000-digit number taken from /dev/urandom. However, this does not imply that /dev/urandom is not appropriate for use in cryptography. So long as the internal state maintained by /dev/urandom contains more bits than the keys, nonces, etc. generated, it is perfectly suitable for use as a cryptographically secure random number generator.

Consider that Java's SecureRandom number generator is implemented to seed itself from /dev/urandom: http://www.java2s.com/Open-Source/Java-Document/6.0-JDK-Platform/solaris/sun/security/provider/NativePRNG.java.htm Then read the documentation for the SecureRandom class: http://download.oracle.com/javase/1.4.2/docs/api/java/security/SecureRandom.html which clearly states: "This class provides a cryptographically strong pseudo-random number generator (PRNG)."

If /dev/urandom were not a valid cryptographically secure pseudorandom number generator, then it would not be used as the basis for the SecureRandom implementation. Many applications use /dev/urandom to obtain seed material, keys, nonces, etc. 67.129.215.3 (talk) 22:54, 3 February 2011 (UTC)


 * No, if it does not contain sufficient entropy there is a chance of a distinguishing attack. That it may still provide the basis for a cryptographically secure pseudorandom generator such as SecureRandom does not mean that it is cryptographically secure "as is". What is required though is a suitable randomness extractor to extract the entropy into a uniformly distributed value. The statement that /dev/urandom may not be suitable for cryptographic purposes is directly from the man page. Nageh (talk) 23:00, 3 February 2011 (UTC)


 * I don't see that sentence anywhere in the man page, in fact it says: "If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys."  The man page is plainly advocating using /dev/urandom for all other cryptographic purposes. 67.129.215.3 (talk) 16:36, 4 February 2011 (UTC)


 * Huh? Did you click on the link I provided to you? I quote:
 * "A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead."
 * Nageh (talk) 16:41, 4 February 2011 (UTC)


 * The man page is very confusing here, and honestly, something just is not right there. I don't know how /dev/urandom is implemented internally but I have changed the article accordingly. The statement that /dev/urandom is suitable as a CSPRG is worrisome as there may be efficient distinguishing attacks when the pool's entropy is very low. In a way, the man page is contradicting itself. Also see the analysis by Noll, who found flaws even in the uniform distribution of /dev/random, which means that neither /dev/urandom nor /dev/random are CSPRGs by the actual definition. Nageh (talk) 17:12, 4 February 2011 (UTC)


 * Note that nowhere in that sentence does it say /dev/urandom is not cryptographically secure. What it says is that if there are breaks in the algorithm underlying the pseudorandom generation (which may be AES, SHA-256, etc.) then it may be feasible to distinguish the output of /dev/urandom from a random source. Currently this is in theory possible but requires exponential time to do so. Look at the definition for Cryptographically_secure_pseudorandom_number_generator. Is this the same definition used in this article? To me it seems to differ from the overly strict perfect randomness which this article seems to suggest is meant by cryptographically secure. See: http://www.lavarnd.org/faq/crypto_strong.html A source doesn't have to be perfectly random to be suitable for cryptographic purposes. Note that even the best and most robust RNGs fail some tests some of the time: http://www.lavarnd.org/what/nist-test.html 67.129.215.3 (talk) 20:43, 4 February 2011 (UTC)
 * Let me clarify. First, it is obvious that if some cryptographic primitive used is breakable then so is the PRG built on it. Second, any primitive is breakable in exponential time because that refers to a brute-force complexity. Third, there is a precise definition for what it means for a PRG to be CS. It says that for a certain input key size (the seed) the CSPRG is indistinguishable from a truly random sequence in polynomial complexity assuming that the key has been selected uniformly at random. How can you create a seed of uniform distribution from an entropy source that is biased? Right, you attempt to measure the entropy in given data and extract it into a seed that is close to uniform distribution. So assuming the internal CSPRG is secure and the randomness extractor is secure the overall /dev/random is secure as well (i.e., a CSPRG). Now what happens when you don't get a uniform distribution for the seed because of insufficient entropy? In this case you have one more assumption in stating that /dev/urandom is a CSPRG. But an algorithm can only be a CSPRG as long as no polynomial-time distinguishing attack exists. And that was my initial concern since the man page warns against the use of /dev/urandom for some applications. Sorry, but if you don't have the highest trust in your design you cannot call it a CSPRG. That is best practice, btw: see the Handbook of Applied Cryptography, which explicitly discards certain hash-based PRG designs as being CS. Anyway, I think the current wording in the article is quite spot on (as you seem to agree below). Nageh (talk) 21:53, 4 February 2011 (UTC)
 * I think your changes look good 67.129.215.3 (talk) 21:11, 4 February 2011 (UTC)

Probably you should check http://www.2uo.de/myths-about-urandom/ — Preceding unsigned comment added by 138.4.128.32 (talk) 16:12, 29 March 2016 (UTC)
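For reference, the usage pattern discussed in this thread — drawing seed and key material directly from /dev/urandom, as NativePRNG-style implementations do — looks roughly like this (Python 3; the function name and key sizes are illustrative only):

```python
def read_from_urandom(nbytes: int) -> bytes:
    # Applications typically read key/seed material straight from the
    # device, as the NativePRNG source linked above does.
    with open("/dev/urandom", "rb") as f:
        return f.read(nbytes)

aes_key = read_from_urandom(32)   # e.g. a 256-bit symmetric key
nonce = read_from_urandom(12)     # sizes here are just examples
```

(Python's own os.urandom() wraps the same kernel interface, so most code never opens the device file directly.)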

Writing to /dev/random
(In Linux), who says that you can add entropy by writing to /dev/random? I don't see this in any of the Linux man pages. — Preceding unsigned comment added by 17.209.4.116 (talk) 19:54, 13 July 2011 (UTC)

Citation for usage of device driver noise
In response to the opening paragraph needing a citation:

"It allows access to environmental noise collected from device drivers and other sources.[citation needed] "

It can be found here: http://www.kernel.org/doc/man-pages/online/pages/man4/random.4.html#DESCRIPTION — Preceding unsigned comment added by 66.76.35.131 (talk) 19:41, 31 July 2012 (UTC)

Size of the pool
Is there a standard pool size? If not, what does it depend on? --Martvefun (talk) 09:00, 10 August 2012 (UTC)


 * This page isn't for questions and answers, but see http://www.kernel.org/doc/man-pages/online/pages/man4/random.4.html


 * That's for Linux kernels and indicates that, for Kernel 2.6, the entropy pool size is 4096 bits. For other Unix-like systems this may be different. For instance on OpenBSD the entropy pool is 2048 words (16384 bits). Needless to say increasing the size of the entropy pool doesn't automatically create more entropy if none is available. --TS 18:48, 10 August 2012 (UTC)
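On Linux the pool size and the current entropy estimate can be read from procfs (a sketch; these paths are the standard Linux random-driver sysctls, the reported values differ by kernel version, and other systems expose nothing comparable):

```python
def read_random_sysctl(name: str) -> int:
    # Linux-only: files under /proc/sys/kernel/random/ expose the
    # random driver's tunables and counters as plain integers.
    with open("/proc/sys/kernel/random/" + name) as f:
        return int(f.read().strip())

poolsize = read_random_sysctl("poolsize")      # 4096 bits on 2.6 kernels
entropy = read_random_sysctl("entropy_avail")  # current estimate, in bits
```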

Cross-article contradiction
The /dev/random article asserts that /dev/random is suitable for generating one-time pads or keys. But the one-time pad article asserts the opposite. The /dev/random article has a citation while the one-time pad article does not. Also, the citation link given is broken, so I cannot establish the credibility of these statements one way or the other. I have come across the link: http://www.kernel.org/doc/man-pages/online/pages/man4/random.4.html#DESCRIPTION, but it only claims that "/dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation". The article in question does not provide any references to assert this claim. How should this be approached? Lyle Stephan (talk) 18:41, 5 November 2012 (UTC)
 * Be careful. There is a loose and a strict interpretation of a "one-time pad". In the strict interpretation, you need a truly random stream to encipher your data. In the loose interpretation, your stream may be pseudo-random, something what a stream cipher implements. In explanation, only a truly random key stream can provide you information-theoretic (perfect) secrecy, whereas a pseudo-random stream (such as provided by /dev/random) provides you computational security. The common principle is that you may use the key stream only once. Nageh (talk) 18:54, 5 November 2012 (UTC)
 * I need to slightly correct myself. /dev/random is meant to provide close-to-truly random output data. The extent to which it is random (in an information-theoretic sense) is debatable, but it does not provide "true" randomness. Nageh (talk) 20:37, 5 November 2012 (UTC)
 * I agree with you. I will try and find a source that will allow us to include the caveat that you mentioned. That should resolve it Lyle Stephan (talk) 15:50, 6 November 2012 (UTC)
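To make the distinction above concrete: the strict (information-theoretic) construction is just an XOR with a key exactly as long as the message, used once. A Python 3 sketch, with os.urandom standing in for reading /dev/random (so, per the discussion above, this is really the "loose" computational variant):

```python
import os

def otp_encrypt(plaintext: bytes) -> tuple:
    # One-time pad: key as long as the message, never reused. Perfect
    # secrecy holds only if the key stream is truly random.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so decryption repeats the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```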

Update needed
" They indicated that none of these /dev/random implementations were cryptographically secure because their outputs had uniformity flaws." The reference is dated from 8 years ago.I wonder if the flow still exists(They may be fixed now) in particular for linux Thanks.Ytrezq (talk) 19:59, 28 February 2013 (UTC)

Citation needed for OpenBSD arc4 section
Citation needed for OpenBSD arc4 section. IMHO, much of this article seems outdated or inaccurate. For example, I searched the OpenBSD mailing list, and I couldn't find any cited reference to the reason for the rc4 to arc4 rename. — Preceding unsigned comment added by Zacts (talk • contribs) 20:34, 13 May 2013 (UTC)
 * References adjusted. If somebody wants to update the whole section (e.g. correct use of present and past tense), here are the conference slides to look at: "arc4random - 1997 to present" --2001:638:401:102:250:56FF:FEB2:39C6 (talk) 07:43, 7 August 2015 (UTC)

Removed Weaknesses section
I removed the "Weaknesses" section. It was added by User:Landon Curt Noll, who also owns the website cited as a source. Seems to be WP:OR and cites a non-WP:RELY source, besides looking suspiciously like an advertisement (and probably being wrong). Manish Earth Talk • Stalk 22:19, 13 August 2013 (UTC)

Add /dev/urandom article
http://www.2uo.de/myths-about-urandom/ seems to be relevant, maybe add it? — Preceding unsigned comment added by 217.237.185.168 (talk) 07:28, 10 March 2014 (UTC)

Kroah-Hartmann and Torvalds seem to agree: https://plus.google.com/111049168280159033135/posts/AJkqHReK9x9 — Preceding unsigned comment added by 176.198.227.115 (talk) 21:17, 10 March 2014 (UTC)

FreeBSD switching from Yarrow to Fortuna
See https://svnweb.freebsd.org/base?view=revision&revision=273872; the relevant chapter of "Cryptography Engineering" can be downloaded for free from the author's site (https://www.schneier.com/fortuna.html) — Preceding unsigned comment added by 79.6.154.127 (talk) 15:01, 5 November 2014 (UTC)

Introduction/Definition
In Unix-like operating systems, /dev/random is a special file that serves as a blocking pseudorandom number generator. It allows access to environmental noise collected from device drivers and other sources. Not all operating systems implement the same semantics for /dev/random.

I think this introduction and definition are flawed for multiple reasons:
 * It defines "/dev/random" as "blocking" on "Unix-like OS"
 * The citation is flawed for multiple reasons, too.

Ad (1): The article lists how /dev/random is handled on multiple (mainly Unix-like) operating systems. It seems that in the majority of those, /dev/random does not in fact block. A blocking /dev/random seems to be mainly a Linux characteristic -- which is okay, as Linux (namely Ted Ts'o) "invented" it. But that should be stated explicitly, or the "blocking" characteristic should be removed.

Ad (2): The reference is Torvalds, Linus (2005-04-16). "Linux Kernel drivers/char/random.c comment documentation @ 1da177e4". On the one hand, the contents of the introduction of Linux random.c does not refer to "Unix-like Operating Systems", but to Linux specifically. On the other hand, the cited commit is not the one that created the text, but the one with which Linus /imported/ the file (among many others) from the previous repository into the "new" GIT one. Commit message: Initial git repository build. I'm not bothering with the full history, even though we have it. [..]" So Linus himself is probably /not/ the author. --2001:638:401:102:250:56FF:FEB2:39C6 (talk) 08:32, 30 July 2015 (UTC)

/dev/urandom is what you want to use for long-term cryptographic keys
"While it is still intended as a pseudorandom number generator suitable for most cryptographic purposes, it is not recommended for the generation of long-term cryptographic keys."

This sentence needs to be expunged. It's misleading and based on complete bullshit.

http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

http://blog.cr.yp.to/20140205-entropy.html

Kobra (talk) 14:27, 29 September 2015 (UTC)


 * The good news is that Linux also agrees and decided to change; the article is also fixed. The bad news is that the "complete bullshit" in the manual page has not been fixed. It's partly my fault: I sent a patch but got bored by the small code review requests. My manpage-writing computer holding a clone of the linux-man-pages git repo is also in a cold room; turning on the heat is a lot of bother. Artoria2e5 🌉 05:30, 6 February 2024 (UTC)
 * I think something was changed about entropy_avail too -- there used to be something that tracks "spent entropy", which of course is silly. It's not in my proposed revision of the manual, so some more source-reading is required to hunt that down. --Artoria2e5 🌉 05:47, 6 February 2024 (UTC)

macOS No Longer Uses Yarrow
I am not an expert in these matters, but I read that macOS no longer uses Yarrow. According to this post from /r/crypto, Craig Federighi said they now use CoreCrypto and, "The NDRNG feeds entropy from the pool into the DRBG on demand. The NDRNG provides 256-bits of entropy." He also provides a link to the official Apple documentation.

I don't know enough about this topic to properly edit the macOS section of this page. I'm providing this info so someone who knows more than me can update it.

Auctoris (talk) 21:16, 23 November 2019 (UTC)

Update needed
Linux kernel 5.4.53 mixes the CPU hardware random number generator with its entropy pool using exclusive-or. I don't know when this change was introduced into

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/char/random.c

Must have been after this article arose (2017?). /drivers/char/hw_random contains hardware random number drivers for Intel, AMD, Atmel, Broadcom, Niagara2 (SPARC), VIA, OMAP, Octeon, VirtIO, Freescale, among others.

Assumptions based upon software deterministic random number generators must be updated. /dev/random is no longer correctly described in this article. XOR-ing non-deterministic ring-oscillator noise sources with timer-based entropy obsoletes this article's software-based random number focus.

Hpfeil (talk) 19:28, 23 July 2020 (UTC)
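The mixing step described above is a plain XOR of the two byte streams. Modelled in Python 3 (os.urandom stands in for both sources, since the point is only the combining step; the function name is illustrative):

```python
import os

def xor_mix(a: bytes, b: bytes) -> bytes:
    # XOR-combining two streams: the result is at least as hard to
    # predict as the harder-to-predict of the two inputs.
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-ins for the ring-oscillator (hardware RNG) output and the
# timer-based pool output discussed in this thread.
mixed = xor_mix(os.urandom(32), os.urandom(32))
```

This is why mixing is safe even if one source is distrusted: XOR with a known stream is the identity in terms of predictability of the other stream.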


Confusing Lead
A Wikipedia lead section should be accessible to as broad an audience as possible. As a user of random number generators, I was intrigued to find out more about how Linux handles them, but as I am not an expert in computer science, I was very confused by the terminology used in the lead. This is the first time I've come across 'blocking' and 'entropy' used in such a context. Would it be possible to come up with a more accessible lead and leave such terminology for the main text?

Also, I suggest not starting the main text with an example. That would make more sense further down the page. AstroMark (talk) 09:37, 7 February 2022 (UTC)