I want to design a lottery winning mechanism using a random number generator. I know that for a computer there is no true randomness, only "pseudorandomness". If the system gets hacked and the random seed is seen, people will know the sequence of random numbers. In fact, there have been news reports of people doing exactly this and winning several lotteries. I am thinking about two ways of designing my system:
1. Use the random number generator as a global variable. There is only one random seed; the sequence is generated when the system starts.
Cons:
a. Once the random seed is seen, hackers can easily reproduce the whole sequence.
b. If the system crashes and restarts, the sequence will repeat itself.
2. Create a new random number generator each time a number is needed, using the timestamp as the seed.
Cons:
a. Obviously the timestamp cannot be used directly; it has to be transformed somehow each time, for example by adding or subtracting some value. What algorithm can I use for this kind of modification of the timestamp?
b. Does this method even take advantage of the random number generator? It seems I am just creating a random number by myself...
As we can see, neither of the methods above is secure enough. Which one is slightly better? Or is there a better way?
The notion that computers are incapable of truly random numbers hasn't been true for decades. All modern desktop and laptop computers have true hardware-based random number generators. Even most small embedded systems do as well.
That said, it may be the case that your programming language hasn't caught up to the recent hardware, or that even if it has, it's easy to make a mistake with RNGs and get a bad result from a good generator. So it's probably a good idea to use something like random.org unless you know what you're doing.
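For what it's worth, C++ exposes the platform's generator through std::random_device, which on mainstream desktops is usually backed by the OS entropy source or the CPU's hardware RNG; the standard does permit a deterministic fallback, so treat this as a sketch rather than a guarantee:

```cpp
#include <iostream>
#include <random>

int main() {
    // std::random_device is typically backed by the OS entropy source
    // (e.g. /dev/urandom or a hardware RNG); the standard allows a
    // deterministic fallback, so this is illustrative only.
    std::random_device rd;

    // Draw a few raw 32-bit values directly from the device.
    for (int i = 0; i < 4; ++i) {
        std::cout << rd() << '\n';
    }

    // entropy() is only an estimate; some implementations simply return 0.
    std::cout << "claimed entropy: " << rd.entropy() << " bits\n";
    return 0;
}
```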
What are typical means by which a random number can be generated in an embedded system? Can you offer advantages and disadvantages for each method, and/or some factors that might make you choose one method over another?
First, you have to ask a fundamental question: do you need unpredictable random numbers?
For example, cryptography requires unpredictable random numbers. That is, nobody must be able to guess what the next random number will be. This precludes any method that seeds a random number generator from common parameters such as the time: you need a proper source of entropy.
Some applications can live with a non-cryptographic-quality random number generator. For example, if you need to communicate over Ethernet, you need a random number generator for the exponential back-off; statistical randomness is enough for this¹.
Unpredictable RNG
You need an unpredictable RNG whenever an adversary might try to guess your random numbers and do something bad based on that guess. For example, if you're going to generate a cryptographic key, or use many other kinds of cryptographic algorithms, you need an unpredictable RNG.
An unpredictable RNG is made of two parts: an entropy source, and a pseudo-random number generator.
Entropy sources
An entropy source kickstarts the unpredictability. Entropy needs to come from an unpredictable source or a blend of unpredictable sources. The sources don't need to be fully unpredictable, they need to not be fully predictable. Entropy quantifies the amount of unpredictability. Estimating entropy is difficult; look for research papers or evaluations from security professionals.
There are three approaches to generating entropy.
Your device may include some non-deterministic hardware. Some devices include a dedicated hardware RNG based on physical phenomena such as unstable oscillators, thermal noise, etc. Some devices have sensors which capture somewhat unpredictable values, such as the low-order bits of light or sound sensors.
Beware that hardware RNGs often have precise usage conditions. Most require some time after power-up before their output is truly random. Often environmental factors such as extreme temperatures can affect the randomness. Read the RNG's usage notes very carefully. For cryptographic applications, it is generally recommended to run statistical tests on the HRNG's output and refuse to operate if these tests fail.
Never use a hardware RNG directly. The output is rarely fully unpredictable (e.g. each bit may have a 60% probability of being 1, or the probability of two consecutive bits being equal may be only 48%). Use the hardware RNG to seed a PRNG as explained below; a minimal debiasing sketch follows these three approaches.
You can preload a random seed during manufacturing and use that afterwards. Entropy doesn't wear off when you use it²: if you have enough entropy to begin with, you'll have enough entropy during the lifetime of your device. The danger with keeping entropy around is that it must remain confidential: if the entropy pool accidentally leaks, it's toast.
If your device has a connection to a trusted third party (e.g. a server of yours, or a master node in a sensor network), it can download entropy from that (over a secure channel).
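As a rough illustration of why raw hardware output needs conditioning, here is a minimal von Neumann debiasing sketch. read_raw_bit() is a hypothetical stand-in for the device's noise source, and debiasing alone is not sufficient (it removes constant bias, not correlation), so the result should still be fed into a vetted DRBG:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for reading one raw bit from a hardware noise source.
// On a real device this would read a peripheral register.
extern bool read_raw_bit();

// Von Neumann extractor: consume raw bits in pairs; emit 1 for the pair (1,0),
// 0 for (0,1), and discard (0,0) and (1,1). This removes constant bias from
// independent bits but does NOT fix correlation between bits, so the output
// should still go through a vetted DRBG rather than be used directly.
std::vector<bool> debias(std::size_t wanted_bits) {
    std::vector<bool> out;
    out.reserve(wanted_bits);
    while (out.size() < wanted_bits) {
        bool a = read_raw_bit();
        bool b = read_raw_bit();
        if (a != b) {
            out.push_back(a);  // (1,0) -> 1, (0,1) -> 0
        }
        // equal pairs are discarded
    }
    return out;
}
```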
Pseudo-random number generator
A PRNG, also called deterministic random bit generator (DRBG), is a deterministic algorithm that generates a sequence of random numbers by transforming an internal state. The state must be seeded with sufficient entropy, after which the PRNG can run practically forever. Cryptographic-quality PRNG algorithms are based on cryptographic primitives; always use a vetted algorithm (preferably some well-audited third-party code if available).
The PRNG needs to be seeded with entropy. You can choose to inject entropy once during manufacturing, or at each boot, or periodically, or any combination.
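As an illustration only, assuming OpenSSL is available as that vetted third-party code: RAND_add() stirs caller-supplied bytes into the library's DRBG together with an entropy estimate, and RAND_bytes() then draws output. The buffer contents below are placeholders for whatever entropy your device actually gathers:

```cpp
#include <openssl/rand.h>
#include <cstdio>

int main() {
    // Placeholder entropy collected elsewhere (e.g. debiased hardware RNG
    // output, a factory-injected seed, a clock). Values are illustrative.
    unsigned char conditioned[32] = { /* ...gathered entropy... */ };
    double estimated_entropy_bytes = 16.0;  // a conservative estimate

    // Stir the gathered material into OpenSSL's DRBG state.
    RAND_add(conditioned, sizeof(conditioned), estimated_entropy_bytes);

    // Once seeded, the DRBG can serve effectively unlimited output.
    unsigned char key[16];
    if (RAND_bytes(key, sizeof(key)) != 1) {
        std::fprintf(stderr, "DRBG not sufficiently seeded\n");
        return 1;
    }
    return 0;
}
```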
Entropy after a reboot
You need to take care that the device doesn't boot twice in the same RNG state: otherwise an observer can repeat the same sequence of RNG calls after a reset and will know the RNG output the second time round. This is an issue for factory-injected entropy (which by definition is always the same) as well as for entropy derived from sensors (which takes time to accumulate).
If possible, save the RNG state to persistent storage. When the device boots, read the RNG state, apply some transformation to it (e.g. by generating one random word), and save the modified state. After this is done, you can start returning random numbers to applications and system services. That way, the device will boot with a different RNG state each time.
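A minimal sketch of this save-and-stir approach, again assuming OpenSSL; the seed-file path and the 32-byte size are arbitrary choices for illustration:

```cpp
#include <openssl/rand.h>
#include <fstream>

// Illustrative seed-file handling for the "save the RNG state" approach.
static const char *kSeedFile = "/var/lib/mydevice/rng-seed";

bool restore_and_refresh_seed() {
    unsigned char buf[32];

    // 1. Mix the previously saved seed (if any) into the DRBG. The entropy
    //    estimate assumes the saved seed came from a healthy state.
    std::ifstream in(kSeedFile, std::ios::binary);
    if (in.read(reinterpret_cast<char *>(buf), sizeof(buf))) {
        RAND_add(buf, sizeof(buf), static_cast<double>(sizeof(buf)));
    }

    // 2. Derive a fresh seed from the updated state and persist it BEFORE
    //    handing out any random numbers, so that a crash or reset cannot
    //    replay the same state twice.
    if (RAND_bytes(buf, sizeof(buf)) != 1)
        return false;
    std::ofstream out(kSeedFile, std::ios::binary | std::ios::trunc);
    out.write(reinterpret_cast<const char *>(buf), sizeof(buf));
    return out.good();
}
```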
If this is not possible, you need to be very careful. If your device has factory-injected entropy plus a reliable clock, you can mix the clock value into the RNG state to achieve uniqueness; however, beware that if your device loses power and the clock restarts from some fixed origin (blinking twelve), you'll be in a repeatable state.
Predictable RNG state after a reset or at the first boot is a common problem with embedded devices (and with servers). For example, a study of RSA public keys showed that many had been generated with insufficient entropy, resulting in many devices generating the same key³.
Statistical RNG
If you can't achieve cryptographic quality, you can fall back to a lower-quality RNG. You need to be aware that some applications (including a lot of cryptography) will be impossible.
Any RNG relies on a two-part structure: a unique seed (i.e. an entropy source) and a deterministic algorithm based on that seed.
If you can't gather enough entropy, at least gather as much as possible. In particular, make sure that no two devices start from the same state (this can usually be achieved by mixing the serial number into the RNG seed). If at all possible, arrange for the seed not to repeat after a reset.
The only excuse not to use a cryptographic DRBG is if your device doesn't have enough computing power. In that case, you can fall back to faster algorithms that allow observers to guess some numbers based on the RNG's past or future output. The Mersenne twister is a popular choice, but there have been improvements since its invention.
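A hedged sketch of such a statistical fallback in C++, mixing device-specific values into the seed; read_serial_number() and read_boot_counter() are hypothetical accessors, and the result is for statistical use only:

```cpp
#include <chrono>
#include <cstdint>
#include <random>

// Hypothetical accessors for device-specific values.
extern std::uint32_t read_serial_number();   // unique per device
extern std::uint32_t read_boot_counter();    // incremented in NVM at each boot

std::mt19937 make_statistical_rng() {
    // Mix several "never the same twice" values so that no two devices,
    // and no two boots of the same device, start from the same state.
    std::uint32_t now = static_cast<std::uint32_t>(
        std::chrono::steady_clock::now().time_since_epoch().count());
    std::seed_seq seq{read_serial_number(), read_boot_counter(), now};
    return std::mt19937(seq);  // statistical quality only, NOT for crypto
}
```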
¹ Even this is debatable: with non-crypto-quality random backoff, another device could cause a denial of service by aligning its retransmission time with yours. But there are other ways to cause a DoS, by transmitting more often.
² Technically, it does, but only at an astronomical scale.
³ Or at least with one factor in common, which is just as bad.
One way to do it would be to create a Pseudo Random Bit Sequence, just a train of zeros and ones, and read the bottom bits as a number.
PRBS can be generated by tapping bits off a shift register, doing some logic on them, and using that logic to produce the next bit shifted in. Seed the shift register with any non-zero number. There is some math that tells you which bits you need to tap to generate a maximum-length sequence (i.e., 2^N-1 numbers for an N-bit shift register). There are tables out there for 2-tap, 3-tap, and 4-tap implementations. You can find them if you search on "maximal length shift register sequences" or "linear feedback shift register".
from: http://www.markharvey.info/fpga/lfsr/
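For illustration, a minimal 16-bit Fibonacci LFSR in C++ using the well-known maximal-length taps 16, 14, 13, 11; this is pseudorandom only, exactly as described above:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // 16-bit Fibonacci LFSR. Taps 16, 14, 13, 11 (a known maximal-length
    // polynomial) give a period of 2^16 - 1 before the sequence repeats.
    // Seed with any non-zero value; an all-zero state is a fixed point.
    std::uint16_t lfsr = 0xACE1u;  // arbitrary non-zero seed

    for (int i = 0; i < 8; ++i) {
        // XOR the tap bits together to form the new input bit.
        std::uint16_t bit = ((lfsr >> 0) ^ (lfsr >> 2) ^
                             (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
        lfsr = static_cast<std::uint16_t>((lfsr >> 1) | (bit << 15));

        // Read the bottom 8 bits as a small pseudo-random number.
        std::printf("%u\n", lfsr & 0xFFu);
    }
    return 0;
}
```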
Horowitz and Hill devote a good part of a chapter to this. Most of the math surrounds the nature of the PRBS and not the number you generate with the bit sequence. There are some papers out there on the best ways to get a number out of the bit sequence and on improving correlation by playing around with masking the bits you use to generate the random number, e.g., Horan and Guinee, "Correlation Analysis of Random Number Sequences based on Pseudo Random Binary Sequence Generation", in Proc. IEEE ISOC ITW2005 on Coding and Complexity (ed. M.J. Dinneen; co-chairs U. Speidel and D. Taylor), pages 82-85.
An advantage would be that this can be achieved simply by bitshifting and simple bit logic operations. A one-liner would do it. Another advantage is that the math is pretty well understood. A disadvantage is that this is only pseudorandom, not random. Also, I don't know much about random numbers, and there might be better ways to do this that I simply don't know about.
How much energy you expend on this depends on how random you need the number to be. If I were running a gambling site and needed random numbers to generate deals, I wouldn't depend on pseudo-random bit sequences. In those cases, I would probably look into analog noise techniques, maybe Johnson noise across a big honking resistor or some junction noise on a PN junction, amplify that and sample it. The advantage is that if you get it right, you have a pretty good random number. The disadvantages are that sometimes you want a pseudorandom number where you can exactly reproduce a sequence by storing a seed. Also, this uses hardware, which someone must pay for, instead of a line or two of code, which is cheap. It also uses A/D conversion, which is yet another peripheral to use. Lastly, if you do it wrong -- say, 60 Hz ends up overwhelming your white noise -- you can get a pretty lousy random number.
What are typical means by which a random number can be generated in an embedded system?
Gilles indirectly stated this: it depends on the use.
If you are using the generator to drive a simulation, then all you need is a uniform distribution and a linear congruential generator (LCG) will work fine.
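For example, here is a minimal LCG sketch using the widely cited Numerical Recipes constants (a = 1664525, c = 1013904223, modulus 2^32); fine for driving a simulation, useless for anything security-related:

```cpp
#include <cstdint>

// Minimal linear congruential generator: state' = a*state + c (mod 2^32).
// Constants are the widely cited Numerical Recipes values; adequate for
// driving simulations, completely unsuitable for security purposes.
class Lcg {
public:
    explicit Lcg(std::uint32_t seed) : state_(seed) {}

    std::uint32_t next() {
        state_ = 1664525u * state_ + 1013904223u;  // wraps mod 2^32
        return state_;
    }

    // Map to [0, 1) for simulation use; the low bits of an LCG are weak,
    // so prefer the high bits when reducing the range.
    double next_double() {
        return next() / 4294967296.0;  // divide by 2^32
    }

private:
    std::uint32_t state_;
};
```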
If you need a secure generator, then it's a trickier problem. I'm side-stepping what it means to be secure, but from 10,000 feet think "wrap it in a cryptographic transformation", like SHA-1/HMAC or SHA-512/HMAC. There are other ways, like sampling random events, but they may not be viable.
When you need secure random numbers, some low resource devices are notoriously difficult to work with. See, for example, Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices and Traffic sensor flaw that could allow driver tracking fixed. And a caveat for Linux 3.0 kernel users: the kernel removed a couple of entropy sources, so entropy depletion and starvation might have gotten worse. See Appropriate sources of entropy on LWN.
If you have a secure generator, then your problem becomes getting your hands on a good seed (or seeds over time). One of the better methods I have seen for environments that are constrained is Hedging. Hedging was proposed for Virtual Machines where a program could produce the same sequence after a VM reset.
The idea of hedging is to extract the randomness provided by your peer and use it to keep your secure generator well seeded. For example, in the case of TLS, there is a client_random and a server_random. If the device is a server, then it would stir in the client_random. If the device is a client, then it would stir in the server_random.
You can find the two papers of interest that address hedging at:
When Good Randomness Goes Bad: Virtual Machine Reset Vulnerabilities and Hedging Deployed Cryptography
When Virtual is Harder than Real: Resource Allocation Challenges in Virtual Machine Based IT Environments
Using client_random and server_random is consistent with Peter Gutmann's view on the subject: "mix every entropy source you can get your hands on into your PRNG, including less-than-perfect ones". Gutmann is the author of Engineering Security.
Hedging only solves part of the problem. You will still need to solve other problems, like how to bootstrap the entropy pool, how to regenerate system key pairs when the pool is in a bad state, and how to persist entropy across reboots when there's no filesystem.
Although it may not be the most complex or sound method, it can be fun to use external stimuli as your seed for random number generation. Consider using analogue input from a photodiode, or a thermistor. Even random noise from a floating pin could potentially yield some interesting results.
My application would like to get a random number, preferably with entropy if available, but it does not need cryptographic quality, and I would like to ensure that the call does not block if the system entropy pool is depleted (e.g. on a server in a farm).
I am aware of CryptGenRandom, but its behaviour with respect to blocking under adverse entropy conditions is not specified.
On Unix, /dev/urandom supports this use case. Is there equivalent functionality available on Windows? I would prefer to avoid using a non-system RNG simply to get non-blocking semantics.
For a toy application, you could use the standard library function rand(), but the implementation on Windows is of notoriously poor quality. For cryptographically secure random numbers, you can use rand_s(), a Microsoft extension to the C runtime.
A better bet is simply to include a suitable pseudo-random number generator in your program. The Mersenne Twister is a good choice IMO, particularly as there are plenty of available implementations (including in the C++11 standard library and in Boost).
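A typical pattern, for illustration: seed std::mt19937 once from std::random_device and then draw values through a distribution. Calls never block, but the output is not of cryptographic quality:

```cpp
#include <iostream>
#include <random>

int main() {
    // Seed the Mersenne Twister once from the OS-backed random_device,
    // then reuse it; calls never block and are not cryptographic quality.
    std::random_device rd;
    std::mt19937 gen(rd());

    std::uniform_int_distribution<int> dist(1, 6);  // e.g. a die roll
    for (int i = 0; i < 5; ++i) {
        std::cout << dist(gen) << ' ';
    }
    std::cout << '\n';
    return 0;
}
```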
If I need non-blocking behaviour for random numbers, I generally pre-generate n numbers and store them in memory. For example, if I know I will need 30 random numbers per second and it takes 3 seconds to compute them (including any blocking), then I pre-generate 300 while the main code is loading and store them in an array or vector. As I consume them, a separate thread generates a replacement for each number I use, overwriting the used slot and moving on to the next one in the list. That way, when I hit the limit (in this case 300) I can simply start again at the beginning of the array, and all the numbers there are fresh and available without blocking (because they were pre-generated).
This means you can use any random number generator you like and not worry about blocking behaviour. It has the expense of using more RAM, though that is negligible for the sort of code I need random numbers for.
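A rough sketch of that pre-generation idea, using a mutex-protected pool topped up by a background thread; std::random_device here merely stands in for whatever slow or blocking source you actually use:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <random>
#include <thread>

// Sketch of "pre-generate and top up in the background".
class RandomPool {
public:
    explicit RandomPool(std::size_t target) : target_(target) {
        refill_ = std::thread([this] { refill_loop(); });
    }

    ~RandomPool() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_all();
        refill_.join();
    }

    // Consumers take a pre-generated value; this only waits if the pool
    // has been drained faster than the background thread can refill it.
    unsigned int get() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !pool_.empty(); });
        unsigned int v = pool_.front();
        pool_.pop_front();
        cv_.notify_all();  // wake the refill thread
        return v;
    }

private:
    void refill_loop() {
        std::random_device slow_source;  // stand-in for a blocking source
        std::unique_lock<std::mutex> lk(m_);
        while (!stop_) {
            while (!stop_ && pool_.size() < target_) {
                lk.unlock();
                unsigned int v = slow_source();  // generate outside the lock
                lk.lock();
                pool_.push_back(v);
                cv_.notify_all();  // wake any waiting consumers
            }
            cv_.wait(lk, [this] { return stop_ || pool_.size() < target_; });
        }
    }

    std::size_t target_;
    std::deque<unsigned int> pool_;
    std::mutex m_;
    std::condition_variable cv_;
    std::thread refill_;
    bool stop_ = false;
};
```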
Hope this helps, as I couldn't fit this all into a comment:)
You could wait for one good seed full of entropy and follow GMasucci's advice to pre-generate a long list of random numbers.
Unless your system is already compromised, it seems that a good seed is good enough to generate a series of unrelated numbers, as discussed in http://www.2uo.de/myths-about-urandom/
From that discussion I gather that a continuous feed of ("true"/"fresh") random numbers is only needed if your system state is compromised at some point, i.e. your sources of entropy are known and the attacker knows their current state. After feeding your block cipher more randomness, the predictability of its output goes back down.
Where to get seeds? From two or more pieces of trusted software that are less likely to be already compromised. I try to blur the predictability of time-seeded functions by combining them: a local rand_function() + some variable delay + MySQL's RAND().
From there, a list of pseudo-random numbers generated by some good library.
My program connects to a server whose public key is already known. The program then encrypts an AES key together with an initialization vector and sends it to the server. The server decrypts the message, and from then on AES is used to encrypt the conversation.
My question is about how to generate the IV. If I go the naive way and seed a pseudo-random generator with the current time, an attacker could probably make a few very good guesses about the IV, which is of course not what I want.
As hardware random generators are not only slow, but also not available everywhere, I'd like to go for a different approach. When the client program is first started, I let the user make a few random mouse moves, just like TrueCrypt does. I now save those "random bits" created by the mouse movement and when I need a generator, I'll use them as a seed. Of course, the random bits have to get updated every time I use them as seed. And this is my question: I thought about just saving the first few random bits generated as the new "random bits". (So they get used to initialize the random engine next time the software starts.) Now I'm not sure if this would be random enough or if pseudo random generators would show guessable patterns here. (I'd probably use std::mt19937 http://en.cppreference.com/w/cpp/numeric/random)
Edit: The chaining mode changes, so I want it to work for the mode with the "highest" requirements. Which would be CBC if I remember correctly.
Please note: The software I'm writing is purely experimental.
Use a cryptographic PRNG, just like you do for the key.
On Windows use CryptGenRandom/RtlGenRandom, and on Linux/Unix use /dev/urandom. Those get seeded by the OS, so you don't need to take care of it.
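For the Unix case, a minimal sketch of pulling a 16-byte IV from /dev/urandom; on Windows you would call CryptGenRandom (or BCryptGenRandom) instead, which is not shown here:

```cpp
#include <array>
#include <fstream>
#include <stdexcept>

// Read a 16-byte AES IV from the OS CSPRNG on Unix-like systems.
// On Windows the equivalent would be CryptGenRandom/BCryptGenRandom.
std::array<unsigned char, 16> generate_iv() {
    std::array<unsigned char, 16> iv{};
    std::ifstream urandom("/dev/urandom", std::ios::binary);
    if (!urandom.read(reinterpret_cast<char *>(iv.data()), iv.size())) {
        throw std::runtime_error("failed to read /dev/urandom");
    }
    return iv;
}
```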
If you really want to create your own PRNG, look into Fortuna. Don't use a Mersenne twister.
You should clarify which chaining mode you plan to use. The security requirements for the initialization vector strongly depend on that.
For instance, in CBC mode the IV must be unpredictable and unique. For CTR mode, it must only be unique, not necessarily unpredictable.
Pseudo-random generators are nice for things where you don't want users to be able to predict the outcome (such as dice rolls in games), but worthless for cases where you don't want a computer to be able to compute it. For cryptography, don't use pseudo-randomness at all.
If you want randomness, you need actual random data. As you write, mouse movements are a good source for that. Given that you don't talk about /dev/random, I take it you're running on Windows, which unfortunately doesn't gather randomness while running. So you will have to do this yourself. Depending on the use case, you can run a randomness daemon at startup which keeps gathering random data and allows your program to retrieve it when needed, or you can ask the user to make some mouse movements when your program starts.
Or you can decide that if Windows doesn't want you to have real random data, you don't want to use Windows, but I suppose that's not an option. ;-)
Is there a difference between generating multiple numbers using a single random number generator (RNG) versus generating one number per generator and discarding it? Do both implementations generate numbers which are equally random? Is there a difference between the normal RNGs and the secure RNGs for this?
I have a web application that is supposed to generate a list of random numbers on behalf of clients. That is, the numbers should appear to be random from each client's point of view. Does this mean I need to retain a separate RNG per client session? Or can I share a single RNG across all sessions? Or can I create and discard an RNG on a per-request basis?
UPDATE: This question is related to Is a subset of a random sequence also random?
A random number generator has a state -- that's actually a necessary feature. The next "random" number is a function of the previous number and the seed/state. The purists call them pseudo-random number generators. The numbers will pass statistical tests for randomness, but aren't -- actually -- random.
The sequence of random values is finite and does repeat.
Think of a random number generator as shuffling a collection of numbers and then dealing them out in a random order. The seed is used to "shuffle" the numbers. Once the seed is set, the sequence of numbers is fixed and very hard to predict. Some seeds will repeat sooner than others.
Most generators have a period that is long enough that no one will notice it repeating. A 48-bit random number generator will produce several hundred billion random numbers before it repeats -- with (AFAIK) any 32-bit seed value.
A generator will only generate random-like values when you give it a single seed and let it spew values. If you change seeds, then numbers generated with the new seed value may not appear random when compared with values generated by the previous seed -- all bets are off when you change seeds. So don't.
A sound approach is to have one generator and "deal" the numbers around to your various clients. Don't mess with creating and discarding generators. Don't mess with changing seeds.
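A minimal sketch of that "one generator, dealt out to clients" idea, using the library's Mersenne Twister behind a mutex (the mutex only matters if sessions are served from multiple threads):

```cpp
#include <mutex>
#include <random>

// One generator for the whole process, seeded once, "dealing" values out
// to all client sessions.
class SharedRng {
public:
    SharedRng() : gen_(std::random_device{}()) {}

    int next_in_range(int lo, int hi) {
        std::lock_guard<std::mutex> lk(m_);
        return std::uniform_int_distribution<int>(lo, hi)(gen_);
    }

private:
    std::mutex m_;
    std::mt19937 gen_;
};

// Usage: a single instance shared by every client session.
// SharedRng rng;
// int roll = rng.next_in_range(1, 6);
```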
Above all, never try to write your own random number generator. The built-in generators in most language libraries are really good. Especially modern ones that use more than 32 bits.
Some Linux distros have a /dev/random and /dev/urandom device. You can read these once to seed your application's random number generator. These have more-or-less random values, but they work by "gathering noise" from random system events. Use them sparingly so there are lots of random events between uses.
I would recommend using a single generator multiple times. As far as I know, all the generators have a state. When you seed a generator, you set its state to something based on the seed. If you keep spawning new ones, it's likely that the seeds you pick will not be as random as the numbers generated by using just one generator.
This is especially true with most generators I've used, which use the current time in milliseconds as a seed.
Hardware-based, true [1], random number generators are possible, but non-trivial and often have low mean output rates. Availability can also be an issue [2]. Googling for "shot noise" or "radioactive decay" in combination with "random number generator" should return some hits.
These systems do not need to maintain state. Probably not what you were looking for.
As noted by others, software systems are only pseudo-random, and must maintain state.
A compromise is to use a hardware-based RNG to provide an entropy pool (stored state) which is made available to seed a PRNG. This is done quite explicitly in the Linux implementation of /dev/random [3] and /dev/urandom [4].
There is some argument about just how random the default inputs to the /dev/random entropy pool really are.
Footnotes:
[1] modulo any problems with our understanding of physics
[2] because you're waiting for a random process
[3] /dev/random features direct access to the entropy pool, seeded from various sources believed to be really or nearly random, and it blocks when the entropy is exhausted
[4] /dev/urandom is like /dev/random, but when the entropy is exhausted a cryptographic hash is employed, which makes the entropy pool effectively a stateful PRNG
If you create a RNG and generate a single random number from it then discard the RNG, the number generated is only as random as the seed used to start the RNG.
It would be much better to create a single RNG and draw many numbers from it.
As people have already said, it's much better to seed the PRNG once and reuse it. A secure PRNG is simply one which is suitable for cryptographic applications. The only way re-seeding each time will give reasonably random results is when the seed comes from a genuinely random "real world" source, i.e. specialised hardware. Even then, it's possible that the source is biased, and it will still be theoretically better to keep using the same PRNG.
Normally, seeding a new state takes quite a while for a serious PRNG, and making new ones each time won't really help much.
The only case I can think of where you might want more than one PRNG is for different subsystems: say, in a casino game you have one generator for shuffling cards and a separate one to generate comments made by the computer-controlled characters, so that REALLY dedicated users can't guess outcomes based on character behaviour.
A nice source for seeding is Random.org; they supply random numbers generated from atmospheric noise for free. It could be a better source for seeding than using the time.
Edit: In your case, I would definitely use one PRNG per client, if for no other reason than good programming practice. Anyway, if you share one PRNG among clients, you will still be providing pseudo-random values to each, of a quality equal to your PRNG's quality. So that's a viable option, but it seems like bad programming policy.
It's worth mentioning that Haskell is a language which attempts to entirely eliminate mutable state. In order to reconcile this goal with hard-requirements like IO (which requires some form of mutability), monads must be used to thread state from one calculation to the next. In this way, Haskell implements its pseudo-random number generator. Strictly speaking, generating random numbers is an inherently stateful operation, but Haskell is able to hide this fact by moving the state "mutation" into the bind (>>=) operation.
This probably sounds a little abstract, and it doesn't really answer your question completely, but I think it is still applicable. From a theoretical standpoint, it is impossible to work with a RNG without involving state. Regardless, there are techniques which can be used to mitigate this interaction and make it appear as if the entire operation is of a stateless nature.
It's generally better to create a single PRNG and pull multiple values from it. Creating multiple instances means you need to ensure that the seeds for the instances are guaranteed unique, which will require incorporating instance-specific information.
As an aside, there are better "true" Random Number Generators, but they usually require specialized hardware which does things like derive random data from electrical signal variance inside the computer. Unless you're really worried about it, I'd say the Pseudo Random Number Generators built into the language libraries and/or OS are probably sufficient, as long as your seed value is not easily predictable.
Whether you need a secure PRNG depends on your application. What are the random numbers used for?
If they're something of real value (e.g. anything cryptographically related), you wouldn't want to use anything less.
Secure PRNGs are much slower, and may require libraries for arbitrary-precision arithmetic, primality testing, and so on.
Well, as long as they are seeded differently each time they're created, then no, I don't think there'd be any difference; however, if it depended on something like the time, then they'd probably be non-uniform, due to the biased seed.